How can working from home affect your data privacy?

On March 11, the World Health Organization declared the coronavirus (COVID-19) outbreak a global pandemic, sending the world into a frenzy. Since that declaration, countries around the world have shut borders, closed schools, asked citizens to stay indoors, and sent workers home. 

While the world may appear to be at a standstill, some jobs still need to get done. Companies, like us at CryptoNumerics, have sent their workers home with the tools they need to complete their regularly scheduled tasks from the comfort of their own homes. 

However, with a new influx of people working from home, insecure networks, websites, or AI tools can leave company information vulnerable. In this article, we’ll go over where your privacy may be at risk during this work-from-home season.

Zoom’s influx of new users raises privacy concerns.

Zoom is a video-conferencing company whose platform is used to host meetings, online chats, and online collaboration. With people across the world required to work or attend school online, Zoom has seen a substantial increase in users. In February, Zoom shares rose 40%, and in three months its monthly active users doubled compared to the whole of 2019 (Source). 

While this influx and global exposure are significant for any company, this unprecedented level of usage can expose holes in its privacy protection efforts, a concern that many are starting to raise.

Zoom’s growing demand makes it a big target for third parties, such as hackers, looking to gain access to sensitive or personal data. Zoom is being used by companies large and small, as well as by students across university campuses. This means a vast amount of important, sensitive data could very well be vulnerable. 

Some university professors have decided against Zoom telecommuting, saying that Zoom’s privacy policy, which states that the company may collect information about recorded meetings that take place in video conferences, raises too many personal privacy concerns. 

On a personal privacy level, Zoom gives the administrator of a conference call the ability to see when a caller has moved to another webpage for over 30 seconds. Many are calling this attention-tracking option a violation of employee privacy. 

Internet-rights advocates have urged Zoom to begin publishing transparency reports detailing how it manages data privacy and data security.  

Is your Alexa listening to your work conversations?

Both Google Home and Amazon’s Alexa have previously made headlines for listening in on homes without being called upon and for saving conversation logs.  

Last April, Bloomberg released a report highlighting that Amazon workers were listening to and transcribing conversations heard through Alexa devices in people’s homes. Bloomberg reported that most voice-assistant technologies rely on human help to improve the product. Not only were Amazon employees listening to Alexa devices that had not been called on by users, but they were also sharing the things they heard with their co-workers. 

Amazon claims the recordings sent to its “Alexa reviewers” are tagged only with an account number, not an address or full name that could identify a user. However, the notion of strangers hearing full, personal conversations is still uncomfortable.
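
To picture what this kind of pseudonymization looks like in practice, here is a minimal sketch, our own illustration and not Amazon’s actual pipeline, of replacing an account number with a keyed hash before recordings reach human reviewers:

    import hmac
    import hashlib

    # Hypothetical sketch: tag audio clips with a keyed hash of the account
    # number instead of the raw identifier. The key must be kept away from the
    # review team, or the mapping is trivially reversible.
    SECRET_KEY = b"stored-in-a-separate-key-vault"  # assumption: managed out of band

    def pseudonymize(account_id: str) -> str:
        """Return a stable pseudonym for an account ID using HMAC-SHA256."""
        return hmac.new(SECRET_KEY, account_id.encode(), hashlib.sha256).hexdigest()

    clip = {"file": "audio_0001.wav", "account": pseudonymize("AC-12345678")}
    print(clip)  # reviewers see the hash, never the account number

Note that the pseudonym is stable, so every clip from the same account remains linkable, which is exactly why hearing full, personal conversations stays uncomfortable even without a name attached.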

As the world is sent to work from home, and with over 100 million Alexa devices in American homes, there should be some concern over the degree to which these speaker systems are listening in on your work conversations.   

Our advice during this work-from-home long haul? Review your online applications’ privacy settings, and be cautious of what devices may be listening when you have important meetings or calls. 

Breaching Data Privacy for a Social Cause

Data partnerships are increasingly justified as a social good, but in a climate where companies are losing consumer trust through data breaches, privacy concerns begin to outweigh the social benefits of data sharing. 

 

This week, Apple is gaining consumer trust with its revamped Privacy Page. Facebook follows Apple’s lead as they become more wary about sharing a petabyte of data with Social Science One researchers due to increasing data privacy concerns. Also, law enforcement may be changing the genetic privacy game as they gain unprecedented access to millions of DNA records to solve homicide cases and identify victims.

Apple is setting the standard for taking consumer privacy seriously—Privacy as a Social Good

Apple is setting the stage for consumer privacy with its redesigned privacy page. Apple CEO Tim Cook announced, “At Apple, privacy is built into everything we make. You decide what you share, how you share it, and who you share it with. Here’s how we protect your data.” (Source)

There is no doubt that Apple is leveraging data privacy as a differentiator. On Apple’s new privacy landing page, bold letters emphasize that privacy is a fundamental part of the company, essentially one of its core values (Source). 

Apple’s privacy page explains how the company has designed its devices with consumers’ privacy in mind. It also showcases how this methodology applies to eight Apple apps: the Safari browser, Apple Maps, Apple Photos, iMessage, the Siri virtual assistant, Apple News, Wallet and Apple Pay, and Apple Health.

A privacy feature fundamental to many of Apple’s apps is that data on an Apple device is stored locally and is never sent to Apple’s servers unless the user consents to share it or personally shares it with others. Personalized features, such as smart suggestions, are based on random identifiers rather than on the user’s identity (a sketch of this idea follows the list below).

  • Safari Browser blocks the data that websites collect about site visitors with an Intelligent Tracking Prevention feature and makes it harder for individuals to be identified by providing a simplified system profile for users. 
  • Apple Maps does not require users to sign in with their Apple ID, eliminating the risk of linking user location and search history to their identity. Navigation is based on random identifiers as opposed to individual identifiers.  
  • Apple Photos are processed locally on the device and are not shared unless stored in the cloud or shared by the user.
  • iMessages aren’t shared with Apple and are protected by end-to-end encryption.
  • Siri, Apple’s voice-activated virtual assistant can process information without the information being sent to Apple’s servers. Data that is sent back to Apple is not associated with the user and is only used to update Siri.
  • Apple News curates personalized news and reading content based on random identifiers that are not associated with the user’s identity. 
  • Apple Wallet and Pay creates a device account number anytime a new card is added. Transactional data is only shared between the bank and the individual.
  • Apple Health is designed to empower the user to share their personal health information with whom they choose. The data is encrypted and can only be accessed by the user via passcodes. 
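
The “random identifiers” behind Maps, News, and smart suggestions are easiest to understand with a toy example. Here is a minimal sketch, entirely our own illustration rather than Apple’s actual design, of tagging requests with a short-lived random ID instead of a stable account ID:

    import time
    import uuid

    ROTATION_SECONDS = 15 * 60  # assumed rotation window, purely illustrative

    class RotatingIdentifier:
        """Issue a random ID that is replaced on a schedule, so individual
        sessions cannot be stitched together into a long-term profile."""

        def __init__(self) -> None:
            self._value = None
            self._issued_at = 0.0

        def current(self) -> str:
            if self._value is None or time.time() - self._issued_at > ROTATION_SECONDS:
                self._value = uuid.uuid4().hex  # fresh ID, no link to the user
                self._issued_at = time.time()
            return self._value

    ident = RotatingIdentifier()
    print({"request": "maps/route", "id": ident.current()})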

 

Facebook realizes the ethical, legal, and technical concerns in sharing 1,000,000 gigabytes of data with social science researchers

Facebook has been on the wrong side of data privacy ever since the Cambridge Analytica scandal in 2018, in which users’ data was obtained, without their consent, for political advertising. Now that Facebook is approaching privacy with users’ best interests in mind, tension is emerging between the worlds of technology and social science. 

Earlier this year, Facebook and Social Science One launched a new model of industry-academic partnership intended, in Facebook’s words, to “help people better understand the broader impact of social media on democracy—as well as improve our work to protect the integrity of elections.” (Source) 

Facebook agreed to share 1,000,000 gigabytes of data with Social Science One for research and analysis but has failed to meet its promises. 

According to Facebook, it was almost impossible to apply anonymization techniques such as differential privacy to the necessary data without stripping it completely of its analytical value.   
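
That tension is the core trade-off of differential privacy: the stronger the privacy guarantee (a smaller epsilon), the more noise is added, and the less useful the numbers become. A minimal sketch of the standard Laplace mechanism for a counting query shows why fine-grained slices of data lose their analytical value first:

    import numpy as np

    # Laplace mechanism for a counting query. A count has sensitivity 1 (one
    # person changes it by at most 1), so noise is drawn from Laplace(1/epsilon).
    def noisy_count(true_count: int, epsilon: float) -> float:
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    shares = 10_000  # illustrative count, e.g. users who shared a given link
    for epsilon in (10.0, 1.0, 0.1, 0.01):
        print(f"epsilon={epsilon}: {noisy_count(shares, epsilon):,.0f}")

At epsilon = 0.01, the noise is on the order of hundreds of counts, so a large aggregate survives but any small subgroup, a single demographic in a single region, say, drowns in it.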

Facebook half-heartedly released some data as deadlines and pressure mounted, but what it released fell far short of what it promised. Facebook’s failure to share the data it agreed to undercuts the proposed social benefit of using the data to study the impact of disinformation campaigns. 

Facebook is torn between contributing to a socially good cause and not breaching the privacy of its users. 

This exemplifies how Facebook may not have been fully prepared to shift its business model from one that involved data monetization to a CSR-driven (corporate social responsibility) model where data sharing is used for research while keeping privacy in mind. 

Will Facebook eventually fulfill their promises?

 

Socially beneficial DNA data: Should warrants be given to access genealogy website databases?

At a police convention last week, Florida detective Michael Fields revealed how he had obtained a warrant granting law enforcement access to GEDmatch.com data (Source).

GEDmatch is a genealogy website that contains over a million users’ records. But does the social benefit accrued outweigh the privacy violation to users whose data was exposed without their consent?

Last year, GEDmatch faced a mix of scrutiny and praise when it helped police identify the Golden State Killer by granting them access to its database (Source). After privacy concerns surfaced, GEDmatch updated its privacy terms so that law enforcement could access only the data of users who had opted in to sharing it. Additionally, police authorities are limited to searches for the purposes of “murder, nonnegligent manslaughter, aggravated rape, robbery or aggravated assault” cases (Source).

The recent warrant granted to detective Fields overrode GEDmatch’s privacy terms by allowing him to access the data of all users, even those who did not consent. It was the first time a judge had agreed to a warrant of this kind, changing the tone of genetic privacy and potentially setting a precedent for who has access to genetic data. 

 



Is this a mirror world? Uber defends privacy and science centre exposes 174,000 names

Typically, we expect Uber to be on the wrong side of a privacy debacle. But this week, the company claims to be defending the privacy of its users from the LA Department of Transportation. Meanwhile, the Ontario Science Centre experienced a data breach that exposed the personal information of 174,000 individuals. Are the upcoming state-level privacy laws the answer to consumers’ privacy concerns?

Uber claims LA’s data-tracking tool is a violation of state privacy laws.

The LA Department of Transportation (LADOT) wants to use Uber’s dockless scooters and bikes to collect real-time trip data, but Uber has repeatedly refused due to privacy concerns. The fight came to a head on Monday, when Uber threatened to file a lawsuit and seek a temporary restraining order (Source).

Last year, Reynolds, the general manager of LADOT, began developing a system that would improve mobility in the city by enabling communication between the city and every form of transportation. To do so, LADOT implemented a mobility data specification (MDS) software program called Provider in November, which mandated that all dockless scooters and bikes operating in LA send their trip data to city headquarters.

Then, a second piece of software was developed, Agency, which reported and alerted companies about their micro-mobility devices. For example, it would send alerts about an improperly parked scooter or imminent street closure (Source).

This would mean the city has access to every single trip consumers take. Yet, according to Reynolds, the data being gathered is essential to managing the effects of micro-mobility on the streets: “At LADOT, our job is to move people and goods as quickly and safely as possible, but we can only do that if we have a complete picture of what’s on our streets and where.” (Source)

Other cities across the country were thrilled by the results and look to implement similar MDS solutions. 

In reality, the protocols have Big Brother-like implications, and many privacy stakeholders side with Uber, determining that LADOT’s actions would, in fact, “constitute surveillance” (Source). This includes the EFF, which stated that “LADOT must start taking seriously the privacy of Los Angeles residents.” What’s more, in a letter to LA, it wrote that “the MDS appears to violate the California Electronic Communications Privacy Act (CalECPA), which prohibits any government entity from compelling the production of electronic device information, including raw trip data generated by electronic bikes or scooters, from anyone other than the authorized possessor of the device without proper legal process.” (Source)

While Uber’s concerns seem valid, there is fear that LADOT will revoke its permit to operate because of its refusal to comply (Source). As of Tuesday, the company’s permit was suspended. But with the lawsuit looming, the public can expect the courts to decide the legality of the situation (Source).

Ontario Science Centre data breach exposes 174,000 names

This week, the Ontario Science Centre revealed that on August 16, 2019, it was made aware of a data breach affecting 174,000 people. The breach was discovered by Campaigner, the third-party company that handles mailings, newsletters, and invitations for the OSC. 

Between July 23 and August 7, “someone made a copy of the science centre’s subscriber emails and names without authorization.” (Source)

Upon further investigation, it was learned that the perpetrator used a former Campaigner employee’s login credentials to access the data. While no other personal information was stolen, the sheer number of consumers affected highlights the potentially negative consequences of relying on trusted third parties.

Everyone whose data was compromised in this incident was alerted by the science centre and encouraged to reach out with any further questions. In addition, the Ontario Information and Privacy Commissioner, Beamish, was alerted about the breach one day after the notices began going out to the public. 

Moving forward, the Ontario Science Centre is “reviewing data security and retention policies” and working alongside Beamish to investigate the incident in full and ensure it is not repeated (Source).

Will more states adopt privacy laws in 2020?

January 1, 2020, marks the implementation of the California Consumer Privacy Act (CCPA). The upcoming law has dominated media coverage, but soon more state-level privacy laws are expected, and they will reshape the privacy landscape in America. With a focus on consumer privacy and an increased risk of litigation, businesses are on the edge of their seats anticipating the states’ actions.

Bills in New York, New Jersey, Massachusetts, Minnesota, and Pennsylvania will be debated in the next few months. However, due to the challenge of mediating among all the stakeholders involved, several of the laws that were expected to pass this year are caught up in negotiations. Some have even fallen flat, like those in Arizona, Florida, Kentucky, Mississippi, and Montana. On the other hand, a few states are launching studies that will evaluate current privacy laws and determine where they should be updated or expanded by digging into data breaches and Internet privacy (Source).

Meanwhile, big tech is lobbying for a federal privacy law in an attempt to supersede state-level architecture (To learn more about this read our blog).

Any way you look at it, more regulations are coming, and the shift in privacy values will create mass changes in the United States and across the globe. This is more necessary than ever in a mirror world where Uber claims to be on a mission to protect user privacy and the science centre comes clean about a massive data breach. The question remains: are privacy laws the answer to the data-driven world? Perhaps 2020 will be the year that makes businesses more privacy-conscious.



What do Trump, Google, and Facebook Have in Common?

This year, the Trump Administration declared the need for a national privacy law to supersede a patchwork of state laws. But as the year comes to a close, and amidst the impeachment inquiry, time is running out. Meanwhile, Google plans to roll out encrypted web addresses, and Facebook stalls research into social media’s effect on democracy. Do these three seek privacy or power?

The Trump Administration, Google, and Facebook all claim that privacy is a priority, and… well… we’re still waiting for the proof. Over the last year, the news has been awash with privacy scandals and data breaches. Every day we hear promises that privacy is a priority and that a national privacy law is coming, but so far, the evidence of action is lacking. This begs the question: are politicians and businesses using the guise of “privacy” to manipulate people? Let’s take a closer look.

Congress and the Trump Administration: National Privacy Law

Earlier this year, Congress and the Trump Administration agreed they wanted a new federal privacy law to protect individuals online. This rare occurrence was even supported and campaigned for by major tech firms (read our blog “What is your data worth” to learn more). However, despite months of talks, “a national privacy law is nowhere in sight [and] [t]he window to pass a law this year is now quickly closing.” (Source)

Disagreement over enforcement and state-level power is said to be holding back progress. Thus, while senators, including Roger Wicker, who chairs the Senate Commerce Committee, insist they are working hard, there are no public results; and with the impeachment inquiry, it is possible we will not see any for some time (Source). This means the White House will likely miss its self-appointed deadline of January 2020, when the CCPA goes into effect.

Originally, the plan was designed to avoid a patchwork of state-level legislation that can make compliance challenging for businesses and weaken privacy protections. It is not a simple process, and since “Congress has never set an overarching national standard for how most companies gather and use data,” much work is needed to develop a framework to govern privacy on a national level (Source). However, Europe’s GDPR is evidence that a large governing structure can successfully hold organizations accountable to privacy standards. But how much longer will US residents need to wait?

Google Encryption: Privacy or Power

Google has been trying to get an edge above the competition for years by leveraging the mass troves of user data it acquires. Undoubtedly, their work has led to innovation that has redefined the way our world works, but our privacy has paid the price. Like never before, our data has become the new global currency, and Google has had a central part to play in the matter. 

Google has famously made privacy a priority and is currently working to enhance user privacy and security with encrypted web addresses.

Unencrypted web addresses are a major security risk: they make it simple for malicious actors to intercept web traffic and use fake sites to gather data. However, in denying hackers this ability, power is handed to companies like Google, which will be able to collect more user data than ever before. The risk is “that control of encrypted systems sits with Google and its competitors.” (Source)

This is because the encryption in question, DNS over HTTPS (DoH), cuts out the middle layer of ISPs and changes the mechanisms through which we access specific web pages. It could enable Google to become the centralized encrypted-DNS provider (Source).
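
Concretely, DoH turns a DNS lookup into an ordinary HTTPS request to a resolver the browser chooses. The minimal sketch below queries Cloudflare’s public DoH JSON endpoint, picked here only as an example of such a resolver: the ISP sees just an encrypted connection, while the resolver’s operator sees every lookup.

    import requests

    # DNS-over-HTTPS (DoH) lookup via Cloudflare's public JSON API. The ISP
    # sees only an HTTPS connection to the resolver, not the query itself;
    # the resolver operator sees everything.
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": "example.com", "type": "A"},
        headers={"accept": "application/dns-json"},
        timeout=5,
    )
    resp.raise_for_status()
    for answer in resp.json().get("Answer", []):
        print(answer["name"], answer["data"])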

Thus, while DoH is certainly a privacy and security upgrade over the current DNS system, shifting resolution from local middle layers to major browser enterprises centralizes user data, raising anti-competitive and child-protection concerns. Further, it diminishes law enforcement’s ability to blacklist dangerous sites and monitor those who visit them, and it opens new opportunities for hackers by reducing defenders’ ability to gather cybersecurity intelligence from malware activity, an integral part of fulfilling government-mandated regulation (Source).

Nonetheless, the feature will roll out in a few weeks as the new default, despite calls from those with DoH concerns to wait until more is known about the potential fallout.

Facebook and the Disinformation Fact Checkers

Over the last few years, Facebook has developed a terrible reputation as one of the least privacy-centric companies in the world. But is that accurate? After the Cambridge Analytica scandal, followed by an endless string of data privacy debacles, Facebook is now stalling its “disinformation fact-checkers” on the grounds of privacy problems.

In April of 2018, Mark Zuckerberg announced that the company would develop machine learning to detect and manage misinformation on Facebook (Source). It then promised to share this information with non-profit researchers, who would flag disinformation campaigns as part of an academic study on how social media influences democracies (Source). 

To ensure that the data being shared could not be traced back to individuals, Facebook applied differential privacy techniques.

However, once the data was sent, researchers complained that it did not include enough information about the disinformation campaigns to allow them to derive meaningful results. Some even insisted that Facebook was going against the original agreement (Source). As a result, some of the people funding the initiative are considering backing out.

Initially, Facebook was given a deadline of September 30 to provide the full data sets, or the entire research grants program would be shut down. While they have begun offering more data in response, the full data sets have not been provided.

A spokesperson from Facebook says, “This is one of the largest sets of links ever to be created for academic research on this topic. We are working hard to deliver on additional demographic fields while safeguarding individual people’s privacy.” (Source). 

While Facebook may be limiting academic research on democracies, perhaps it is finally prioritizing privacy. And at the end of the day, with an ethical framework to move forward through technological advancement and academic research, the impact of social media on democracy can still be measured without compromising privacy.

In the end, it is clear that privacy promises hold the potential to manipulate people into action. While the US government may not have a national privacy law anywhere in sight, the motives behind Google’s encrypted links may be questionable, and Facebook’s sudden prioritization of privacy may cut out democratic research, at least privacy is becoming a hot topic, and that holds promise for a privacy-centric future for the public.



Your health records are online, and Amazon wants you to wear Alexa on your face

This week’s news was flooded with a wealth of sensitive medical information landing on the internet, and perhaps, in the wrong hands. Sixteen million patient scans were exposed online, the European Court of Justice ruled Google does not need to remove links to sensitive information, and Amazon released new Alexa products for you to wear everywhere you go.

Over five million patients have had their privacy breached and their private health information exposed online. These documents contain highly sensitive data, like names, birthdays, and in some cases, social security numbers. Worse, the list of compromised medical record systems is rapidly increasing, and the data can all be accessed with a traditional web browser. In fact, Jackie Singh, a cybersecurity researcher and chief executive of the consulting firm Spyglass Security, reports “[i]t’s not even hacking,” because the data is so easily accessible to the average person (Source).

One of these systems belongs to MobilexUSA, whose records, showing patients’ names, dates of birth, doctors, and lists of procedures, were found online (Source).

Experts report that this could be a direct violation of HIPAA, and many warn that the potential consequences of this leak are devastating: medical data is so sensitive that, in the wrong hands, it could be used maliciously (Source).

According to Oleg Pianykh, the director of medical analytics at Massachusetts General Hospital’s radiology department, “[m]edical-data security has never been soundly built into the clinical data or devices, and is still largely theoretical and does not exist in practice.” (Source)

Such a statement signals a privacy crisis in the healthcare industry that desperately needs fixing. According to Pianykh, the problem is not a lack of regulatory standards, but rather that “medical device makers don’t follow them.” (Source) If that is the case, should we expect HIPAA to crack down the same way GDPR has?

With patients’ privacy up in the air in the US, citizens’ “Right to be Forgotten” in the EU is also being questioned. 

The “Right to be Forgotten” states that “personal data must be erased immediately where the data are no longer needed for their original processing purpose, or the data subject has withdrawn [their] consent” (Source). This means that upon request, a data “controller” must erase any personal data in whatever means necessary, whether that is physical destruction or permanently over-writing data with “special software.” (Source)
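
For file-based records, the “over-writing” step is often approximated along the lines sketched below. This is an illustration only, not a compliance recipe: SSD wear-levelling, copy-on-write filesystems, and backups can all retain copies that an in-place overwrite never touches.

    import os

    def overwrite_and_delete(path: str, passes: int = 3) -> None:
        """Overwrite a file with random bytes before unlinking it."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))  # fine for a sketch; chunk for large files
                f.flush()
                os.fsync(f.fileno())  # push the overwrite to physical storage
        os.remove(path)

    # overwrite_and_delete("subject_data.csv")  # hypothetical file path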

When this law was codified in the General Data Protection Regulation (GDPR), it was implemented to govern Europe. Yet France’s CNIL fined Google, an American company, $110,000 in 2016 for refusing to remove private data from search results. Google argued the changes should not need to apply to the google.com domain or other non-European sites (Source). 

On Tuesday, The European Court of Justice agreed and ruled that Google is under no obligation to extend EU rules beyond European borders by removing links to sensitive personal data (Source). However, the court made a distinct point that Google “must impose new measures to discourage internet users from going outside the EU to find that information.” (Source) This decision sets a precedent for the application of a nation’s laws outside its borders when it comes to digital data. 

While the EU has a firm stance on the right to be forgotten, Amazon makes clear that you can “automatically delete [your] voice data”… every three to eighteen months (Source). The lack of immediate erasure is potentially troublesome for those concerned with their privacy, especially alongside the new product launch, which will move Alexa out of your home and onto your body.

On Wednesday, Amazon launched Alexa earbuds (Echo Buds), glasses (Echo Frames), and rings (Echo Loop). The earbuds are available on the marketplace, but the latter two are experimental and available by invitation only for the time being (Source). 

With these products, you will be able to access Alexa support wherever you are and, in the case of the Echo Buds, harness the noise-reduction technology of Bose for only USD $130 (Source). However, while these products promise to make your life more convenient, using them will let Amazon monitor your daily routines, behaviour, quirks, and more. 

Amazon has specified that its goal is to make Alexa “ubiquitous” and “ambient” by spreading it everywhere, including our homes, appliances, cars, and now our bodies. Yet, even as it opens up about this strategy for lifestyle dominance, Amazon claims to prioritize privacy, being the first tech giant to allow users to opt out of having their voice data transcribed and listened to by employees. Despite this, it is clear that “Alexa’s ambition and a truly privacy-centric customer experience do not go hand in hand.” (Source) 

With Amazon spreading into wearables, Google winning the “Right to be Forgotten” case, and patient records being exposed online, this week is wrapping up to be a black mark on user privacy. Stay tuned for our next weekly news blog to learn about how things shape up. 



CryptoNumerics Partners with TrustArc on Privacy Insight Webinar

We’re excited to partner with TrustArc on their Privacy Insight Series on Thursday, September 26th at 12pm ET to talk about “Leveraging the Power of Automated Intelligence for Privacy Management”! 

With the increasing prevalence of privacy technology, how can the privacy industry leverage the benefits of artificial intelligence and machine learning to drive efficiencies in privacy program management? Many papers have been written on managing the potential privacy issues of automated decision-making, but far fewer on how the profession can utilize the benefits of technology to automate and simplify privacy program management.

Privacy tools are starting to leverage technology to incorporate powerful algorithms to automate repetitive, time-consuming tasks. Automation can generate significant cost and time savings, increase quality, and free up the privacy office’s limited resources to focus on more substantive and strategic work. This session will bring together expert panelists who can share examples of leveraging intelligence within a wide variety of privacy management functions.

 

Key takeaways from this webinar:
  • Understand the difference between artificial intelligence, machine learning, intelligent systems, and algorithms
  • Hear examples of the benefits of using intelligence to manage privacy compliance
  • Understand how to incorporate intelligence into your internal program and/or client programs to improve efficiencies

Register Now!

Can’t make it? Register anyway – TrustArc will automatically send you an email with both the slides and recording after the webinar.

To read more privacy articles, click here.

This content was originally posted on TrustArc’s website. Click here to view the original post.
