How can working from home affect your data privacy?


On March 11, the World Health Organization declared the Coronavirus (COVID-19) a global pandemic, sending the world into a mass frenzy. Since that declaration, countries around the world have shut borders, closed schools, asked citizens to stay indoors, and sent workers home.

While the world may appear to be at a standstill, some jobs still need to get done. Companies, including us at CryptoNumerics, have sent their workers home with the tools they need to complete their regularly scheduled tasks from the comfort of their own homes.

However, with a new influx of people working from home, insecure networks, websites, or AI tools can leave company information vulnerable. In this article, we’ll go over where your privacy may be at risk during this work-from-home season.

Zoom’s influx of new users raises privacy concerns.

Zoom is a video-conferencing company whose platform is used to host meetings, online chats, and online collaboration. With people across the world required to work or attend school online, Zoom has seen a substantial increase in users. In February, Zoom shares rose 40%, and in three months the platform added more monthly active users than it did in all of 2019 (Source).

While this influx and global exposure are significant for any company, this unprecedented level of usage can expose holes in Zoom’s privacy protection efforts, a concern that many are starting to raise.

Zoom’s growing demand makes it a big target for third parties, such as hackers, looking to gain access to sensitive or personal data. Zoom is used by companies large and small, as well as by students across university campuses. This means a vast amount of important, sensitive data could very well be vulnerable.

Some university professors have decided against Zoom telecommuting, saying that Zoom’s privacy policy, which states that the company may collect information about recorded meetings that take place in video conferences, raises too many personal privacy concerns.

On a personal privacy level, Zoom gives the administrator of a conference call the ability to see when a participant has had another webpage in focus for over 30 seconds. Many are calling this attention-tracking option a violation of employee privacy.

Internet-rights advocates have urged Zoom to publish transparency reports detailing how it manages data privacy and data security.

Is your Alexa listening to your work conversations?

Both Google Home and Amazon’s Alexa have previously made headlines for listening in on homes without being called upon and for saving conversation logs.

Last April, Bloomberg released a report revealing that Amazon workers were listening to and transcribing conversations captured by Alexa devices in people’s homes. Bloomberg reported that most voice-assistant technologies rely on human reviewers to help improve the product. Not only were Amazon employees listening to recordings made without the devices being called on by users, but they were also sharing what they heard with their co-workers.

Amazon claims the recordings sent to its “Alexa reviewers” are tagged only with an account number, not an address or full name that could identify a user. Even so, the notion of strangers hearing full, personal conversations is uncomfortable.

With the world sent to work from home, and over 100 million Alexa devices in American homes, there should be some concern over the degree to which these speaker systems are listening in on your work conversations.

Our advice during this work-from-home long haul? Review the privacy settings of your online applications, and be cautious of what devices may be listening when you have important meetings or calls.

Privacy: The Most Talked About Gadget of CES 2020


This week, Las Vegas once again hosted the Consumer Electronics Show (CES), accompanied by a range of flashy new gadgets. Most significant among the mix: privacy.

Technology front runners such as Facebook, Amazon, and Google took the main stage in unveiling data privacy changes in their products, as well as headlining discussions surrounding the importance of consumer privacy. However, through each reveal, attendees noticed gaps and missteps in these companies’ attempts at privacy.

Facebook: A New Leader in Data Privacy? 

This year, Facebook attempted to portray itself as a changed company when it comes to privacy. Complete with comfortable seating and flowers, Facebook’s CES booth presented a company dedicated to customer privacy, pushing the idea that Facebook does not sell customer data.

Facebook relaunched a new-and-improved version of its “Privacy Checkup”, originally created in 2014, complete with easy-to-manage data-sharing settings. Facebook took the opportunity at this year’s CES to display added features such as the ability to turn off facial recognition, manage who can see a user’s account or posts, and remove or add ad preferences based on personal browsing history.

While these changes to privacy settings are a step in the right direction towards protecting user data, attendees could not help but notice that Facebook side-stepped significant data privacy issues. Most notable is the lack of user control over how advertisers use personal information.

Ring’s New Control Center: Fix or Flop?

Ring has been a hot commodity in household security since its purchase by Amazon in 2018. However, recently, the company has come under fire for its law enforcement partnerships. 

In light of mounting hacking concerns, the home security company used CES to announce a new dashboard for both Apple and Android users called the “Control Center”. It lets users manage connected Ring devices and third-party devices, and gives them options for handling law enforcement requests for access to Ring videos.

Ring has, however, missed its customers’ initial requests for additions such as suspicious-activity detection or notifications of new account logins. Instead, Ring has continued to ship software that places the onus on users to protect themselves. Customers are viewing this so-called privacy update as nothing more than a “cosmetic redesign”. The device continues to provide no significant hacker protection, and therefore no notable privacy protection for its customers.

Google Assistant: New Front-Runner in Privacy Adjustments

Each year, Google is celebrated for taking full advantage of CES to immerse visitors in the company’s technology. This year, Google’s efforts focused on Google Assistant.

After last year’s confirmation that third-party workers were listening to Google Assistant recordings, Google’s efforts to address data privacy concerns were at the forefront of this year’s CES. On January 7, 2020, Google announced new features for its Assistant, reaffirming its dedication to privacy protection. Users are now able to ask their Assistant questions and give commands such as:

  • “Are you saving my audio data?”
  • “Hey Google, delete everything I said to you this week”
  • “Hey Google, that wasn’t for you”
  • “How are you keeping my information private?”


Of these new user commands, the most significant is “Are you saving my audio data?” It allows users to check whether their Assistant is set to let Google save and review their audio recordings.

However, some Google Assistant users are accusing Google of placing the onus on the user instead of creating a product that protects its users. As with the Ring controversy, there is frustration that Google is missing the mark in understanding the privacy demands of its users. That said, Google is one of the few companies taking steps that meaningfully change how user information is stored.

It is clear that this year’s CES, while still delivering new and exciting ‘gadgets of the future’, saw a shift towards privacy as the most significant topic in technology. While most leading tech companies made that clear, many continue to miss the mark in understanding the privacy their users want.

Facebook, Ring, and Google each brought forward privacy changes of topical interest while continuing to misunderstand what it means to keep their users’ information private. Thus, the question we must ask ourselves as consumers of these products continues to be: are these minimal changes enough for us to keep feeding our information into them?

Join our newsletter

Healthcare must prioritize data privacy.


Healthcare is a system reliant on trust. This is true not only for front-line providers but across the industry, and perhaps most significantly for researchers. Yet in recent years, the news has reported story after story about a lack of patient privacy and insufficient security measures. Just a few days ago, LifeLabs disclosed a breach that leaked the personal information of approximately 15 million Canadians. Healthcare cannot afford to have its methods questioned, doubted, or refused, and one misstep could dismantle carefully built trust.

Record releases, deception, and litigation: current threats to healthcare

On December 17, LifeLabs, one of Canada’s largest medical services companies, disclosed that it had suffered a massive cybersecurity breach, in which hackers gained access to the highly confidential information of up to 15 million customers, largely BC and Ontario residents. The database included health card numbers, names, email addresses, logins, passwords, and dates of birth. Worse yet, the hackers obtained the test results of 85,000 Ontarians.

“I’m sorry this happened and we’ll do everything we can to win back the confidence of our customers,” LifeLabs chief executive Charles Brown said in an interview. “[Private companies, government, and hospitals have] got to do more to make sure all our customers feel secure.”

At the time of the attack, LifeLabs paid a ransom (amount undisclosed) in an attempt to secure the information. Experts condemned this move: it implies a reliance on the information and an inability to secure it in other ways, and it offers no guarantee that the files will be returned. Some have even suggested that paying a ransom increases the likelihood that LifeLabs will be the target of another attack.

Now that the hackers have seen the files, there are two main concerns: (1) that they will release the test records, and (2) that they will use the identifiable information to perform nefarious acts for financial benefit, like obtaining a loan or getting a credit card.

This risk is why identifiable information is so valuable, and it is why organizations, especially in the healthcare field, have a duty to protect it. This means investing in both cybersecurity controls and privacy solutions.

In relation to the LifeLabs scandal, the Ontario Privacy Commissioner, Brian Beamish, said: “Public institutions and health-care organizations are ultimately responsible for ensuring that any personal information in their custody and control is secure and protected at all times.”

LifeLabs may be at risk of civil litigation from victims seeking compensation. After all, there is precedent in the matter: two class-action lawsuits were brought to the Quebec Superior Court over a similar incident, the Desjardins Group breach earlier this year.

Similar concerns over the safety and security of patient data exist not only among health caregivers but also among the organizations performing research with that data. This is seen in the uproar surrounding the contract between the NHS and Amazon, through which the virtual assistant, Alexa, gained access to health information. (Read more about the NHS-Alexa deal in our blog post: “You are the product: People are feeling defeatist in the surveillance age.”)

Privacy will be foundational to healthcare innovation: predictions from experts

Eleonora Harwich, director of research and head of innovation at Reform, said that “The key issue of 2020 will be establishing what fair commercial relationships look like between the private sector, the public sector and patients when data are used to create digital healthcare products or services. People are increasingly unhappy with the status quo in which they have little knowledge or agency over what is done with information about them.”

Her comments are just one of many that echo frustration over privacy and security concerns in healthcare. As the year comes to a close, healthcare experts have begun making predictions for the year ahead. Comments range from the significance of AI to ideas about telemedicine. However, all reiterate the paramount importance of data privacy moving forward.

The intersection of innovation and healthcare has reached a never-before-seen magnitude, and this requires a shift in focus. Organizational and technical controls must be implemented to prevent the exposure of sensitive information.


A 2019 Review of GDPR Fines


As the year comes to a close, we must reflect on the most historic events in the world of privacy and data science, so that we can learn from the challenges, and improve moving forward.

In the past year, General Data Protection Regulation (GDPR) has had the most significant impact on data-driven businesses. The privacy law has transformed data analytics capacities and inspired a series of sweeping legislation worldwide: CCPA in the United States, LGPD in Brazil, and PDPB in India. Not only has this regulation moved the needle on privacy management and prioritization, but it has knocked major companies to the ground with harsh fines. 

Since its implementation in 2018, €405,871,210 in fines have been levied against violators, signalling that the EU’s data protection authorities (DPAs) have no mercy in their pursuit of unethical and illegal business practices. This is only the beginning: the more established the law becomes, the stricter regulatory authorities will get. With the next wave of privacy laws taking effect on January 1, 2020, businesses can expect to feel pressure from all directions, not just the European Union.


The two most breached GDPR requirements are Article 5 and Article 32.

These articles place importance on maintaining data for only as long as is necessary and seek to ensure that businesses implement advanced measures to secure data. They also signal the business value of anonymization and pseudonymization. After all, once data has been anonymized (de-identified), it is no longer considered personal, and GDPR no longer applies.

Article 5 affirms that data shall be “kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed.”

Article 32 references the importance of “the pseudonymization and encryption of personal data.”

The frequency of a failure to comply with these articles signals the need for risk-aware anonymization to ensure compliance. Businesses urgently need to implement a data anonymization solution that optimizes privacy risk reduction and data value preservation. This will allow businesses to measure the risk of their datasets, apply advanced anonymization techniques, and minimize the analytical value lost throughout the process.
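To make “risk-aware anonymization” concrete, one simple criterion datasets are often tested against is k-anonymity: every combination of quasi-identifiers (fields such as an age band or postal prefix that could re-identify someone in combination) must be shared by at least k records. The sketch below is a toy illustration with invented field names and data, not CryptoNumerics’ actual method:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every quasi-identifier combination appears in at least k records."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in combos.values())

# Hypothetical, already-generalized records (age bands, truncated postal codes).
records = [
    {"age_band": "30-39", "postal_prefix": "M5V", "diagnosis": "flu"},
    {"age_band": "30-39", "postal_prefix": "M5V", "diagnosis": "cold"},
    {"age_band": "40-49", "postal_prefix": "K1A", "diagnosis": "flu"},
]

# The lone "40-49"/"K1A" record makes the dataset fail 2-anonymity.
print(is_k_anonymous(records, ["age_band", "postal_prefix"], 2))  # False
```

A risk-aware tool would generalize or suppress values until checks like this pass, while measuring how much analytical value each change destroys.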

If this is implemented, data collection on EU citizens will remain possible in the GDPR era, and businesses can continue to obtain business insights without risking their reputation and revenue. However, these actions can now be done in a way that respects privacy.

Sadly, not everyone has gotten the message: nearly 130 fines have been levied so far.

The top five regulatory fines

GDPR carries a weighty fine: 4% of a business’s annual global turnover or €20M, whichever is greater. A fine of this size could significantly derail a business, and paired with brand and reputational damage, it is evident that GDPR penalties should push businesses to rethink the way they handle data.
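The “whichever is greater” rule is easy to make concrete. A quick sketch (the turnover figures are invented for illustration):

```python
def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound of a GDPR fine: 4% of annual global turnover
    or EUR 20M, whichever is greater."""
    return max(0.04 * annual_global_turnover_eur, 20_000_000.0)

# For a firm with EUR 50M turnover, the EUR 20M floor dominates.
print(max_gdpr_fine(50_000_000))      # 20000000.0
# For a firm with EUR 10B turnover, the 4% share dominates.
print(max_gdpr_fine(10_000_000_000))  # 400000000.0
```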

1. €204.6M: British Airways

Article 32: Insufficient technical and organizational measures to ensure information security

User traffic was directed to a fraudulent site because of improper security measures, compromising 500,000 customers’ personal data. 

 2. €110.3M: Marriott International

Article 32: Insufficient technical and organizational measures to ensure information security

The guest records of 339 million guests were exposed in a data breach due to insufficient due diligence and a lack of adequate security measures.

3. €50M: Google

Article 13, 14, 6, 5: Insufficient legal basis for data processing

Google was found to have breached articles 13, 14, 6, and 5 because it created user accounts during the configuration stage of Android phones without obtaining meaningful consent. They then processed this information without a legal basis while lacking transparency and providing insufficient information.

4. €18M: Austrian Post

Article 5, 6: Insufficient legal basis for data processing

Austrian Post created profiles on more than three million Austrians and resold their personal information to third parties, such as political parties. The data included home addresses, personal preferences, habits, and party affinity.

5. €14.5M: Deutsche Wohnen SE

Article 5, 25: Non-compliance with general data processing principles

Deutsche Wohnen stored tenant data in an archive system that was not equipped to delete information that was no longer necessary. This made it possible to have unauthorized access to years-old sensitive information, like tax records and health insurance, for purposes beyond those described at the original point of collection.

Privacy laws like GDPR seek to restrict data controllers from gaining access to personally identifiable information without consent and prevent data from being handled in manners that a subject is unaware of. If these fines teach us anything, it is that investing in technical and organizational measures is a must today. Many of these fines could have been avoided had businesses implemented Privacy by Design. Privacy must be considered throughout the business cycle, from conception to consumer use. 

Businesses cannot risk violations for the sake of it. With risk-aware privacy software, they can continue to analyze data while protecting privacy, backed by the guarantee of a privacy risk score.

Resolution idea for next year: Avoid ending up on this list in 2020 by adopting risk-aware anonymization.


You are the product: People are feeling defeatist in the surveillance age


As the year wraps up, a new study shows that Canadians are feeling defeatist over their lack of privacy. This signals an opportunity for privacy-prioritizing businesses to achieve a competitive advantage, especially in the thick of Amazon’s latest plan to acquire patient data and Toys “R” Us’ return as a surveillance showroom.

Amazon’s latest partnership endorses the collection of patient data

“Dr. Alexa, I have a headache.”

“Don’t worry, that’s just a symptom of Amazon’s latest data harvesting plan.”

In a recent UK government contract, Amazon was given access to healthcare information collected by the National Health Service (NHS). While the material shared does not include patient data, Amazon will now be able to acquire that information straight from patients’ mouths.

Now, if you ask Alexa about a health symptom, it will search through the NHS information on symptoms, causes, and definitions of conditions, and provide you with a diagnosis. It will even offer to call 999.

In principle, this will reduce the workload on GPs by answering questions at home for the 14% of households that own an Amazon device. However, Amazon could use this data to track individuals and improve its ad targeting, create new products, or share data with third parties. An NHS spokesperson has assured the public that “appropriate safeguards” have been put in place, but it remains unclear what those safeguards are.

The worry is that “the NHS has not only handed over expertise and allowed the e-commerce giant to directly profit from it – it has also endorsed Amazon to gather patient data straight from our own mouths, in a way that identifies us, with little regard for the consequences.” (Wired)


Toys to Surveillance: How Toys “R” Us is following in Amazon’s footsteps

Taking a nod from the competitor that nearly put them out of business, Toys “R” Us has been turned into a “private equity surveillance project.” (Vice)

Last week, Toys “R” Us opened a second store, and customers quickly took notice. The stores were smaller, and they lacked stock; some even compared them to glorified showrooms. In reality, the new locations were designed not for shopping but for surveillance. Throughout the store, ceiling sensors, cameras, and additional tech were deployed to capture your experience.

This will allow the business to measure shoppers’ paths and behaviour. It has been reported that some of the cameras will blur faces to prevent identification and that children shorter than four feet won’t be recorded. Yet the company has boasted that traffic data is anonymous. Haphazard anonymization is not anonymization.

People have very harsh words to say about Toys “R” Us’ reincarnation. For example, Karl Bode wrote in a Vice article that, “Toys “R” Us and mascots like Geoffrey the Giraffe could have just died a quiet death, secure in the annals of retail history. Instead the brand has been lobotomized and re-animated into something decidedly different—a private equity-backed playground where everything from your location to your LEGO play is collected, stored, and monetized, whether you like it or not.”


People are feeling defeatist about private data security 

73% of Canadians don’t know who holds their data, but they do know that they don’t trust what businesses are doing with it. A recent IBM Canada survey found that Canadians don’t feel they have privacy, despite 83% finding it important, and that they believe businesses should do more to protect them from cybersecurity threats.

It concluded that Canadians are “paying more attention to which companies make the security of their information a priority, and it’s starting to impact their buying decisions. In this digital economy, Canadians need to trust their privacy is protected.”

With 73% saying that they would not purchase again from a company that shared their information, Canadians are growing increasingly adamant. They want privacy.

This is not to suggest that the digital economy is cancelled, but rather that businesses that make a point of prioritizing privacy through disclosure, anonymization, and communication will be rewarded.

Don’t fall into the surveillance trap blazed by Amazon, and now Toys “R” Us. Consumers are willing to turn away from businesses that do not respect their privacy. As such, we predict the future of the digital economy is anonymization — and not the ad-hoc form deployed by Toys “R” Us.


Your health records are online, and Amazon wants you to wear Alexa on your face


This week’s news was flooded with a wealth of sensitive medical information landing on the internet, and perhaps, in the wrong hands. Sixteen million patient scans were exposed online, the European Court of Justice ruled Google does not need to remove links to sensitive information, and Amazon released new Alexa products for you to wear everywhere you go.

Over five million patients have had their privacy breached and their private health information exposed online. These documents contain highly sensitive data, like names, birthdays, and in some cases, social security numbers. Worse, the list of compromised medical record systems is rapidly growing, and the data can all be accessed with an ordinary web browser. In fact, Jackie Singh, a cybersecurity researcher and chief executive of the consulting firm Spyglass Security, says “[i]t’s not even hacking,” because the data is so easily accessible to the average person (Source).

One of these systems belongs to MobilexUSA, whose records, showing patients’ names, dates of birth, doctors, and lists of procedures, were found online (Source).

Experts report that this could be a direct violation of HIPAA and many warn that the potential consequences of this leak are devastating, as medical data is so sensitive, and if in the wrong hands, could be used maliciously (Source).

According to Oleg Pianykh, the director of medical analytics at Massachusetts General Hospital’s radiology department, “[m]edical-data security has never been soundly built into the clinical data or devices, and is still largely theoretical and does not exist in practice.” (Source)

Such a statement signals a privacy crisis in the healthcare industry that desperately needs a fix. According to Pianykh, the problem is not a lack of regulatory standards, but rather that “medical device makers don’t follow them.” (Source) If that is the case, should we expect HIPAA to crack down the same way GDPR has?

With patients’ privacy up in the air in the US, citizens’ “Right to be Forgotten” in the EU is also being questioned.

The “Right to be Forgotten” states that “personal data must be erased immediately where the data are no longer needed for their original processing purpose, or the data subject has withdrawn [their] consent” (Source). This means that upon request, a data “controller” must erase any personal data in whatever means necessary, whether that is physical destruction or permanently over-writing data with “special software.” (Source)
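In software terms, honouring an erasure request can be sketched as overwriting a subject’s personal fields and then deleting the row, so no recoverable copy remains in the live store. This is a toy illustration (the table and field names are invented); real erasure must also reach backups, logs, and replicas, and may require the “special software” mentioned above for physical media:

```python
import sqlite3

def erase_subject(conn, subject_id):
    """Overwrite a data subject's personal fields in place, then delete the row."""
    conn.execute("UPDATE users SET name = NULL, email = NULL WHERE id = ?",
                 (subject_id,))
    conn.execute("DELETE FROM users WHERE id = ?", (subject_id,))
    conn.commit()

# Demo against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Alice', 'alice@example.com')")
erase_subject(conn, 1)
print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # 0
```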

When this law was codified in the General Data Protection Regulation (GDPR), it was implemented to govern Europe. Yet France’s CNIL fined Google, an American company, $110,000 in 2016 for refusing to remove private data from search results. Google argued the changes should not need to be applied to its non-European domains or sites (Source).

On Tuesday, The European Court of Justice agreed and ruled that Google is under no obligation to extend EU rules beyond European borders by removing links to sensitive personal data (Source). However, the court made a distinct point that Google “must impose new measures to discourage internet users from going outside the EU to find that information.” (Source) This decision sets a precedent for the application of a nation’s laws outside its borders when it comes to digital data. 

While the EU has a firm stance on the right to be forgotten, Amazon makes clear that you can “automatically delete [your] voice data”… every three to eighteen months (Source). The lack of immediate erasure is potentially troublesome for those concerned with their privacy, especially alongside the new product launch that will move Alexa out of your home and onto your body.

On Wednesday, Amazon launched Alexa earbuds (Echo Buds), glasses (Echo Frames), and a ring (Echo Loop). The earbuds are available on the marketplace, but the latter two are experimental and available by invitation only for the time being (Source).

With these products, you will be able to access Alexa wherever you are, and in the case of the Echo Buds, harness the noise-reduction technology of Bose for only USD $130 (Source). However, while these products promise to make your life more convenient, using them lets Amazon monitor your daily routines, behaviour, quirks, and more.

Amazon has specified that its goal is to make Alexa “ubiquitous” and “ambient” by spreading it everywhere: our homes, appliances, cars, and now our bodies. Yet at the same time as it opens up about this strategy for lifestyle dominance, Amazon claims to prioritize privacy, being the first tech giant to let users opt out of having their voice data transcribed and listened to by employees. Despite this, it is clear that “Alexa’s ambition and a truly privacy-centric customer experience do not go hand in hand.” (Source)

With Amazon spreading into wearables, Google winning the “Right to be Forgotten” case, and patient records being exposed online, this week is shaping up to be a black mark on user privacy. Stay tuned for our next weekly news blog to see how things develop.
