Data sharing is an issue across industries


Privacy, as many of our previous blogs have reinforced, is essential not only to the business-customer relationship but also on a moral level. The recent Fitbit acquisition by Google has created big waves in the privacy sphere, as customers' health data is at risk given Google's past dealings with personal information. Staying on the topic of healthcare data, the recent Coronavirus panic has thrown patient privacy out the window as fear of the spreading virus rises. Finally, data sharing continues to raise eyebrows as the popular social media app TikTok scrambles to protect its privacy reputation.

Fitbit acquisition causing major privacy concerns

From its in-home voice assistant to the world's most used search engine, Google has worked its way into most aspects of daily life. There are seemingly no corners left untouched.

In 2014, Google released Wear OS, a smartwatch platform for health monitoring and smartphone integration. While wearable technology has soared to the top of the technology charts as a popular way to track and manage health and lifestyle, Google's Wear OS has never gained the popularity needed to make it a strong competitor.

In November of last year, Google announced its acquisition of Fitbit for $2.1 billion. Fitbit has sold over 100 million devices and is worn by over 28 million people, 24 hours a day, 7 days a week. Many are calling this Google's attempt to rescue its struggling wearables project.

But there is more to this acquisition than staying on top of the market: personal data.

Google's poor privacy reputation is now falling onto Fitbit, as fears grow that the personal information Fitbit holds, like sleep patterns and heart rate, will end up in the hands of third parties and advertisers.

Healthcare is a large market, one that Google has been quietly buying into for years. Accessing personal health information gives Google an edge in the healthcare partnerships it has been pursuing.

Fitbit has come under immense scrutiny since its partnership with Google was announced, seeing sales drop 5% in 2019. Many are urging Fitbit owners to ditch their devices amidst the acquisition.

However, Fitbit maintains that users will remain in full control of their data and that the company will not sell personal information to Google.

The partnership will be watched closely going forward, as government authorities such as the Australian Competition and Consumer Commission open inquiries into the companies' intentions.

TikTok scrambling to fix privacy reputation

TikTok is a social media app that has taken over video streaming. With over 37 million users in the U.S. last year, TikTok has been downloaded over 1 billion times, and that number is expected to rise 22% this year.

While the app reports these dramatic download numbers, it has been repeatedly reprimanded for its poor privacy policy and its failure to protect users' information. After the app was banned by organizations across the U.S., Republican Senator Josh Hawley introduced legislation to prohibit federal workers from using it. This follows several security flaws reported in January involving user location data and access to user information.

The CEO of Reddit recently criticized TikTok, saying he tells people, “don’t install that spyware on your phone.”

These privacy concerns stem from the app's connection with the Chinese government. In 2017, the viral app Musical.ly was acquired by Beijing-based company ByteDance for $1 billion and merged into TikTok. Chinese law requires companies to comply with government intelligence operations if asked, meaning apps like TikTok would have no authority to decline government access to their data.

In response to the privacy backlash, the company stated last year that all of its data centers are located entirely outside of China. However, its privacy policy does state that it shares a variety of user data with third parties.

In its latest attempt to address these privacy concerns, the company has hired Roland Cloutier, formerly of ADP, as Chief Information Security Officer to oversee information privacy issues within the popular app.

With Cloutier's long history in cybersecurity, there is hope that one of the world's most popular apps will soon earn a better privacy reputation.

Coronavirus raising concerns over personal information

The Coronavirus is a deadly, fast-spreading respiratory illness that has moved quickly throughout China and has now been reported in 33 countries across the world.

Because of this, China has been thrown into an understandable panic and has gone to great lengths to combat the virus. However, in working to contain its spread, many say that patient privacy is being thrown out the window.

Last month China rolled out a 'close contact' app that tests whether users have been around people who have contracted, or been exposed to, the virus. The app assigns a colour code to each user: green for safe, yellow for a required 7-day quarantine, and red for a 14-day quarantine.

Not only is the app required to enter public places like subways or malls, but the data is also shared with police. 

The New York Times reported that the app sends a person's location, city name, and an identifying code number to the authorities. China's already extensive surveillance has reached new extremes: the Times reports that surveillance cameras placed around neighborhoods are being closely monitored to watch residents who present yellow or red codes.

South Korea has also thrown patient privacy to the wind, as text messages are sent out, highlighting every movement of individuals who contracted the virus. One individual’s extra-marital affair was exposed through the string of messages, revealing his every move before contracting the virus, according to the Guardian.

The question on everyone’s mind now is, what happens to privacy when the greater good is at risk?


Join our newsletter

CCPA is here. Are you compliant?


On January 1, 2020, the California Consumer Privacy Act (CCPA) came into effect, and it has already altered the ways companies can make use of user data.

Before the CCPA, Big Data companies could harvest user data and use it for data science, analytics, AI, and ML projects. Through this process, consumer data was monetized without protections for privacy. With the official introduction of the CCPA, companies now have no choice but to comply or pay the price. This begs the question: is your company compliant?

CCPA Is Proving That Privacy Is Not a Commodity: It's a Right

This legislation protects consumers from companies selling their data for secondary purposes. Without explicit permission, companies cannot make use of that data.

User data is highly valuable for companies' analytics and monetization initiatives, so risking user opt-outs can be detrimental to a company's ongoing success. By de-identifying consumer data, companies can follow CCPA guidelines while maintaining high data quality.

The CCPA comes with a highly standardized ruleset for companies to satisfy de-identification, complete with specific definitions and detailed explanations of how to achieve its ideals. Despite these guidelines, and although the legislation has only just come into effect, studies have found that only 8% of US businesses are CCPA compliant.

For companies that are not CCPA compliant as of yet, the time to act is now. By thoroughly understanding the regulations put out by the CCPA, companies can protect their users while still benefiting from their data. 

To do so, companies must understand the significance of maintaining analytical value and the importance of adequately de-identified data. By not complying with CCPA, an organization is vulnerable to fines up to $7500 per incident, per violation, as well as individual consumer damages up to $750 per occurrence.

For perspective, since coming into effect in 2018, GDPR fines have impacted companies at an average of 4% of their annual revenue.

To ensure a CCPA fine is not coming your way, assess your current data privacy protection efforts to ensure that consumers:

  • are asked for direct consent to use their data
  • can opt out of, or remove their data from, analytical purposes
  • cannot be re-identified from the data

In essence, CCPA is not impeding a company’s ability to use, analyze, or monetize data. CCPA is enforcing that data is de-identified or aggregated, and done so to the standards that its legislation requires.

Our research found that 60% of datasets companies believed to be de-identified carried a high re-identification risk. There are three methods to reduce the possibility of re-identification:

  • Use state-of-the-art de-identification methods
  • Assess for the likelihood of re-identification
  • Implement controls, so data required for secondary purposes is CCPA compliant

Read more about these effective privacy automation methods in our blog, The Business Incentives to Automate Privacy Compliance under CCPA.

Manual Methods of De-Identification Are Tools of The Past

A standard of compliance within CCPA legislation involves identifying which methods of de-identification leave consumer data susceptible to re-identification. The manual approach, which is extremely common, can leave room for re-identification, making companies vulnerable to CCPA penalties.

Protecting data to the best of a company's abilities is achievable through techniques such as k-anonymity and differential privacy. However, applying these methods manually is impractical, both for meeting the 30-day grace period the CCPA provides and for achieving high-quality data protection.
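To make k-anonymity concrete, here is a minimal sketch of the idea; the records, quasi-identifier columns, and generalization rules are hypothetical, and a production tool handles far larger schemas while optimizing how much generalization is applied:

```python
from collections import Counter

# Hypothetical records: (age, zip) are the quasi-identifiers.
records = [
    {"age": 34, "zip": "94107"}, {"age": 36, "zip": "94107"},
    {"age": 35, "zip": "94109"}, {"age": 52, "zip": "94107"},
    {"age": 58, "zip": "94109"}, {"age": 55, "zip": "94107"},
]

def generalize(rec):
    """Generalize quasi-identifiers: 10-year age bands, 3-digit ZIP prefix."""
    low = rec["age"] // 10 * 10
    return (f"{low}-{low + 9}", rec["zip"][:3])

def satisfies_k_anonymity(recs, k):
    """Every combination of generalized quasi-identifiers must appear >= k times."""
    counts = Counter(generalize(r) for r in recs)
    return all(c >= k for c in counts.values())

print(satisfies_k_anonymity(records, 3))  # True: each class holds 3 records
```

With these rules the six records collapse into two equivalence classes of three, so no individual stands out within their class.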

Understanding the CCPA ensures that data is adequately de-identified and that risk has been removed, all while meeting every legal specification.

Meeting CCPA regulations means ditching first-generation approaches to de-identification; adopting privacy automation reduces the possibility of re-identification. Using privacy automation to protect and utilize consumers' data is necessary for successfully maneuvering the new CCPA era.

The solution of privacy automation ensures not only that user data is correctly de-identified, but that it maintains a high data quality. 

CryptoNumerics as the Privacy Automation Solution

Despite CCPA’s strict guidelines, the benefits of using analytics for data science and monetization are incredibly high. Therefore, reducing efforts to utilize data is a disservice to a company’s success.

Complying with CCPA legislation means determining which methods of de-identification leave consumer data susceptible to re-identification. Manual de-identification methods, including masking and tokenization, leave room for improper anonymization.

Here, Privacy Automation becomes necessary for an organization’s analytical tactics. 

Privacy automation abides by the CCPA while preserving the tools of data science and analytics. If a user's data is de-identified to the CCPA's standards, conducting data analysis remains possible.

Privacy automation revolves around the assessment, quantification, and assurance of data. A privacy automation tool simultaneously measures the risk of re-identification, applies data privacy protection techniques, and provides audit reports.

A study by PossibleNow indicated that 45% of companies were in the process of preparing but did not expect to be compliant by the CCPA's implementation date. Putting a privacy automation tool in place to better process data and prepare for the new legislation is critical to a company's success under the CCPA. Privacy automation products such as CN-Protect allow companies to succeed in data protection while still benefiting from the data's analytics. (Learn more about CN-Protect)


How data partnerships unlock valuable second-party data


Sharing data is fundamental to advancing business strategy, especially in marketing. Over the last five years, analytics has become an essential part of the marketer's role, and marketers have grown used to purchasing datasets on the open market to create a more holistic understanding of customers.

However, amidst the new wave of privacy regulations and the demand for transparency, achieving the same level of understanding has become a challenge. The risk of using third-party data has also increased, because businesses cannot trust that outside sources have met compliance regulations or provided accurate data. Consequently, more companies are turning to second-party data sources.

Second-party data is essentially someone else’s first-party data. It is data purchased directly from another company, assuring trust and high quality. In essence, these strategic partnerships enable businesses to build their customer databases and gain insights quickly.

Purchasing another business's data will increase the breadth of your data lake, but it also opens the organization up to regulatory fines and reputational damage, since harnessing the data as-is requires you to trust a partner's processes. Conversely, relying on your own (first-party) data can guarantee privacy, but it lacks the breadth associated with other types of data.

Second-party data is an important addition for businesses looking to piece together the puzzle of each individual’s data records. However, compliance, security, and a loss of control are problems that must be addressed. There are two options: anonymization and privacy-protected data collaboration.


Anonymize and share: a data partnership plan

Under regulations like GDPR, data cannot be used for secondary purposes without first obtaining the consent of the data subject. This means that if you are looking to share data and establish a data partnership, you must first obtain meaningful consent from every data subject – an onerous process!

To avoid this expense while still respecting the principle behind the law, businesses can rely on anonymization to securely protect consumers and regain control over the information. Once data has been anonymized, it is no longer considered personal. This means businesses can perform data exchanges and achieve desired marketing efforts.

However, in sharing data with another business, you lose control over what happens to it. A better solution is privacy-protected data collaboration.

Privacy-protected data collaboration builds data partnerships securely

By leveraging an advanced tool, like CN-Insight – that uses secure multi-party computation (SMC) and private set intersection (PSI) – businesses can acquire insights without sharing the data itself.

While this may seem odd, the reality is, you don’t want the data, you want the insights – and you can still get those without exposing your data.

That, in essence, is what CN-Insight enables, thanks to sophisticated cryptographic techniques researched and developed at reputable institutions like Bristol University and Aarhus University. To learn more about how it works, check out our product page.
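For intuition only, the toy sketch below illustrates the idea behind private set intersection: two parties compare blinded identifiers and learn only the size of the overlap. This is not CN-Insight's protocol and is not cryptographically secure on its own (real PSI relies on techniques such as oblivious transfer or Diffie-Hellman-style blinding):

```python
import hashlib

def blind(ids, salt):
    # Both parties hash their identifiers with a shared salt, so raw
    # values are never exchanged. (A real PSI protocol uses stronger
    # cryptography; a salted hash alone is vulnerable to guessing.)
    return {hashlib.sha256((salt + i).encode()).hexdigest(): i for i in ids}

def intersect_count(my_ids, their_hashes, salt):
    # Party A learns only how many customers the two datasets share,
    # without ever seeing party B's raw customer list.
    mine = blind(my_ids, salt)
    return sum(1 for h in mine if h in their_hashes)

salt = "shared-session-salt"                            # agreed out of band
party_b_hashes = set(blind(["u2", "u3", "u4"], salt))   # B sends only hashes
print(intersect_count(["u1", "u2", "u3"], party_b_hashes, salt))  # 2
```

The point of the exercise: each side keeps its data, yet both obtain the insight (here, the overlap count) they actually wanted.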

Through privacy-protected data collaboration, your data is never exposed, but you receive the insights you need. This is the best solution for marketers looking to regain the holistic understanding they had of customers before regulatory authorities began imposing high fines and strict laws. Not only will businesses avoid the expensive and time-consuming contract process associated with traditional second-party data sharing, but they can trust that their data was never shared and they are not at risk of regulatory penalties. 

Data partnerships through virtual data collaboration are the solution to unlock second-party data in a privacy-preserving way.


De-identify your data, or be in violation of CCPA


On January 1, 2020, California implemented a landmark law that is reshaping data analytics and data science worldwide. This is the day the CCPA became effective, and the day businesses' consumer data became a significant legal and financial risk.

While the tech industry has tried to restrict the legislation since its inception, its lobbying efforts have fallen short. In one month, business as usual will result in class-action lawsuits. At that point, Californians will enjoy a new set of privacy rights and regain ownership over their own information.


CCPA transforms data from a commodity to a privilege that can be revoked

Under the CCPA, Californians will be able to demand access to the data that companies collect on them and to how it has been used. This puts the onus on businesses not only to manage verifiable consumer requests, but also to ensure that all collection and use of data is in the best interest of the people concerned.

However, the CCPA is much more extensive than that. Businesses not only have to give customers access to the data held on them, but must also inform them, and provide the opportunity to opt out or request deletion, whenever they want to leverage that data.

Through de-identification, businesses unlock consumer insights

Under the CCPA, if you want to use data beyond the original purpose for which it was collected, you have two choices:

  1. Inform consumers of every data use and risk deletion requests, or
  2. De-identify the data.

The first option is impractical. The second is possible, but not through traditional methods of privacy protection. Our research demonstrates that at least 60% of datasets thought to be de-identified actually are not. The result is that most organizations will be unknowingly violating the CCPA.

Every time data is used in a way that violates the CCPA, businesses risk $7500 in civil penalties and $750 in statutory damages per consumer.

To avoid this, businesses need a guarantee that their data has been de-identified. This is something only an automated risk assessment tool can provide. Yet, “according to Ethyca, more than 70% of companies have not built any sort of engineering solution for policy compliance.” (Source)


Traditional anonymization strategies will not satisfy CCPA

Traditional approaches to anonymization are unreliable, ineffective, and often wipe out the analytical value of the data. Legacy approaches, like masking, were never intended to ensure privacy. Rather, these were cybersecurity techniques that evolved at a time when organizations did not rely on the insights derived from consumer data.

Manual approaches, where risk and compliance teams restrict access to data lakes and warehouses, impede business goals. Worse, they are cumbersome, involving significant and impractical overheads. The volume and velocity at which data is accumulated in data lakes make traditional methods of anonymization impractical. 

It is only possible to truly anonymize data to a CCPA-compliant level and retain the analytical value of the data by using a solution that optimizes for a reduced privacy risk score and minimal information loss. 

Consequently, to continue deriving insights in the CCPA-era, enterprises need to invest in optimized anonymization now. Combining advanced privacy-preserving techniques with privacy risk scoring will allow for a balance between privacy compliance and business insight goals.

By handling indirect or quasi-identifier information carefully – and using advanced privacy-protecting techniques like k-anonymity and differential privacy – enterprises can have the best of both worlds: compliance and data science success.
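As an illustration of the second of these techniques, here is a minimal sketch of differential privacy's core mechanism: Laplace noise calibrated to a query's sensitivity. The data and the epsilon value are purely illustrative:

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sample from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon):
    # A count query has sensitivity 1 (adding or removing one person
    # changes it by at most 1), so Laplace noise with scale 1/epsilon
    # gives epsilon-differential privacy for this query.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [34, 36, 35, 52, 58, 55]
noisy = private_count(ages, lambda a: a < 40, epsilon=0.5)
# The noisy answer clusters around the true count of 3, but no single
# individual's presence shifts the output distribution by much.
```

Smaller epsilon means more noise and stronger privacy; the analyst trades a little accuracy on each aggregate for a formal guarantee about individuals.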

However, this privacy stance cannot be achieved manually. It requires a dedicated, automated, specialist privacy platform. 


Avoiding the de-identification illusion

To ensure this de-identification process is defensible, businesses must understand, to a high degree of accuracy, the proportion of records in a given dataset that an attacker could correctly identify. This is what is known as a privacy risk score, and it is based on the principle of Marketer Risk: the methodology takes the perspective of someone who wishes to re-identify as many records as possible in a disclosed dataset.

From this point of view, businesses are able to gain an accurate understanding of how privacy actions affect their dataset, and can continue to adjust their techniques until an acceptable risk threshold is met.

If businesses invest in privacy risk scoring and advanced protection solutions, they can ensure privacy compliance is automatically enforced throughout their data pipeline. Effective anonymization leaves data monetizable and provides a necessary degree of certainty for leadership that analytics will not harm your business. Anonymization is the only viable solution for data-driven companies to meet CCPA-regulations without harming their business model.
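A simplified version of such a score, assuming the attacker matches records against equivalence classes within the disclosed dataset itself, could be sketched as follows (a production tool would also model external population data):

```python
from collections import Counter

def marketer_risk(records, quasi_identifiers):
    # Group records into equivalence classes by their quasi-identifier values.
    # An attacker matching at random within a class of size f re-identifies
    # each of its records with probability 1/f, so the expected number of
    # correct matches is one per class: the score is (#classes / #records).
    classes = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return len(classes) / len(records)

records = [
    {"age_band": "30-39", "zip3": "941"},
    {"age_band": "30-39", "zip3": "941"},
    {"age_band": "30-39", "zip3": "941"},
    {"age_band": "50-59", "zip3": "902"},  # a class of one: always re-identified
]
print(f"privacy risk score: {marketer_risk(records, ['age_band', 'zip3']):.0%}")
```

Generalizing or suppressing values shrinks the number of classes, and the score drops accordingly, which is exactly the feedback loop described above.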


The top five things we learned about privacy in 2019


2019 has been a trailblazing year for data privacy, one that left us with a few clear messages about the future. We've collected our top lessons to help inform your privacy governance strategy moving forward.

1. Privacy is a multi-dimensional position: legal, ethical, and economic

Since the implementation of GDPR in May 2018, people have been quick to consider privacy from a legal perspective – as a risk that must be mitigated to avoid lawsuits and regulatory fines. In doing so, many have missed the other important factors: the people behind the data, and the data utility advantage.

When your business collects consumer information, it is important to remember that this is personal data, and there is an intrinsic duty and trust linked to its collection. You have an ethical responsibility to do right by your customers: to use their data only for purposes they are aware of and have consented to, and not to share it with others. Responsible data management is fundamental to your relationship with customers, and practicing it will give your business a significant advantage.

Economically speaking, positioning your business as a privacy leader is the best strategy, and not only from a brand perspective. If you anonymize personal information, your analysts will have increased access to a valuable resource that can help improve strategy, products, and services.

2. Privacy is not one-size-fits-all

Consumer data carries an inherent privacy risk, even after it has been de-identified. That is why a privacy risk score is essential to understanding the effects of privacy protection methods: even if you mask the data, you don't know how successful the process was until you assess the re-identification risk. That is why we believe a privacy risk score is so fundamental to the anonymization process.

However, we’ve learned that a score also enables businesses to customize their personal risk thresholds based on activities.

This is important because businesses do not use all of their data for the same activities, nor do they all manage the same level of sensitive information. As a consequence, privacy preservation is not a uniform process. In general, we suggest the following guidelines when assessing your privacy risk score:

  • Greater than 33% implies that your data is identifiable.
  • 33% is an acceptable level if you are releasing to a highly trusted source.
  • 20% is the most commonly accepted level of privacy risk.
  • 11% is used for highly sensitive data.
  • 5% is used for releasing to an untrusted source.
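Expressed as code, these guidelines amount to a simple threshold lookup (the category labels are our paraphrase of the list above):

```python
def risk_guideline(score):
    """Map a privacy risk score (fraction between 0 and 1) to release guidance."""
    if score > 0.33:
        return "identifiable"
    if score > 0.20:
        return "release only to a highly trusted source"
    if score > 0.11:
        return "commonly accepted level of privacy risk"
    if score > 0.05:
        return "acceptable for highly sensitive data"
    return "acceptable for release to an untrusted source"

print(risk_guideline(0.25))  # release only to a highly trusted source
print(risk_guideline(0.04))  # acceptable for release to an untrusted source
```

The right threshold for a given release is a business decision; the point of scoring is that the decision becomes explicit rather than implicit.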

3. Automation is central to protecting data assets

Old privacy solutions are no match for modern threats to data privacy. Legacy approaches, like masking, were never intended to ensure privacy. Rather, these were cybersecurity techniques that evolved at a time when organizations did not rely on the insights derived from consumer data.

Even worse, many businesses still rely on manual approaches to anonymize their data. Given the volume involved and the precision required, this is an impossible undertaking doomed to non-compliance.

What businesses require to effectively protect the privacy of their data today is privacy automation: a solution that combines AI and advanced privacy protection to assess, anonymize, and preserve datasets at scale.

4. Partnerships across your business teams are essential

Privacy cannot be the role of one individual. Across an organization, stakeholders operate in isolation, pursuing their own objectives with individualized processes and tools. This has led to fragmentation between legal, risk and compliance, IT security, data science, and business teams. As a consequence, mismatched values have created dysfunction between privacy protection and analytics priorities.

In reality, privacy has an impact on all of these figures, and their values should not be pitted against each other. In today’s regulation era, one is reliant on the other. Teams must establish a unified goal to protect privacy in order to unlock data. 

The solution is to implement an enterprise-wide privacy control system that generates quantifiable assessments of the re-identification risk and information loss. This enables businesses to set predetermined risk thresholds and optimize their compliance strategies for minimal information loss. By allowing companies to measure the balance of risk and loss, privacy stakeholder silos can be broken, and a balance can be found that ensures data lakes are privacy-compliant and valuable.

5. Privacy is a competitive advantage

If you want to take cues from Apple, the most significant is that positioning privacy as central to your business is a competitive advantage. 

Businesses should address privacy as a component of their customer engagement strategy. Not only does compliance avoid regulatory penalties and reputational damage, but embedding privacy into your operations is also a way to gain trust and attention, and to build a reputation for accountability.

A Pew Research Center study investigated the way Americans feel about the state of privacy, and the findings radiate concern.

  • 60% believe it is not possible to go through daily life without companies and the government collecting their personal data.
  • 79% are concerned about the way companies are using their data.
  • 72% say they gain nothing or very little from company data collected about them.
  • 81% say that the risks of data collection by companies outweigh the benefits.

Evidently, people feel they have no control over their data and do not believe businesses have their best interests at heart. Break the mould by prioritizing privacy. There is room for your business to stand out, and people are waiting for you to do so.

Privacy had a resurgence this year that has reshaped law and consumer expectations. Businesses must make protecting sensitive information a business priority across their teams by investing in an automated de-identification solution that fits their needs. Doing so will improve the customer experience, unlock data, and serve as a differential advantage with target markets. 

Privacy is not only the future. Privacy is the present. Businesses must act today.


A 2019 Review of GDPR Fines


As the year comes to a close, we must reflect on the most historic events in the world of privacy and data science, so that we can learn from the challenges, and improve moving forward.

In the past year, General Data Protection Regulation (GDPR) has had the most significant impact on data-driven businesses. The privacy law has transformed data analytics capacities and inspired a series of sweeping legislation worldwide: CCPA in the United States, LGPD in Brazil, and PDPB in India. Not only has this regulation moved the needle on privacy management and prioritization, but it has knocked major companies to the ground with harsh fines. 

Since its implementation in 2018, €405,871,210 in fines have been actioned against violators, signalling that DPA supervisory authorities have no mercy in their fervent search for unethical and illegal business practices. This is only the beginning: the deeper we get into the data privacy era, the stricter regulatory authorities will become. With the next wave of laws hitting the world on January 1, 2020, businesses can expect to feel pressure from all directions, not just the European Union.


The two most breached GDPR requirements are Article 5 and Article 32

These articles place importance on maintaining data for only as long as is necessary and seek to ensure that businesses implement advanced measures to secure data. They also signal the business value of anonymization and pseudonymization. After all, once data has been anonymized (de-identified), it is no longer considered personal, and GDPR no longer applies.

Article 5 affirms that data shall be “kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed.”

Article 32 references the importance of “the pseudonymization and encryption of personal data.”

The frequency of failures to comply with these articles signals the need for risk-aware anonymization. Businesses urgently need to implement a data anonymization solution that optimizes privacy risk reduction and data value preservation, allowing them to measure the risk of their datasets, apply advanced anonymization techniques, and minimize the analytical value lost throughout the process.
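As one concrete, minimal example of the pseudonymization Article 32 references, direct identifiers can be replaced with a keyed hash. The key name and record fields below are hypothetical, and a real deployment also needs key management and careful treatment of quasi-identifiers:

```python
import hashlib
import hmac

# Hypothetical key: in production this would live in a key-management system
# and be rotated. Whoever holds the key can rebuild the mapping, which is
# why GDPR treats the result as pseudonymous, not anonymous, data.
SECRET_KEY = b"example-key-stored-in-a-vault"

def pseudonymize(identifier: str) -> str:
    # A keyed hash (HMAC) beats a plain hash: without the key, an attacker
    # cannot recover values by hashing guessed inputs (e.g. known emails).
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"email": "alice@example.com", "last_purchase": "2019-12-01"}
record["email"] = pseudonymize(record["email"])  # same input -> same stable token
```

Because the token is stable, analysts can still join and count records per customer; they just can no longer see who the customer is.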

If this is implemented, data collection on EU citizens will remain possible in the GDPR era, and businesses can continue to obtain insights without risking their reputation and revenue – now in a way that respects privacy.

Sadly, not everyone has gotten the message, as nearly 130 fines have been actioned so far.

The top five regulatory fines

GDPR carries a weighty fine: 4% of a business's annual global turnover, or €20M, whichever is greater. A fine of this size could significantly derail a business, and when paired with brand and reputational damage, it is evident that GDPR penalties should encourage businesses to rethink the way they handle data.

1. €204.6M: British Airways

Article 32: Insufficient technical and organizational measures to ensure information security

User traffic was directed to a fraudulent site because of improper security measures, compromising 500,000 customers’ personal data. 

2. €110.3M: Marriott International

Article 32: Insufficient technical and organizational measures to ensure information security

The records of 339 million guests were exposed in a data breach due to insufficient due diligence and a lack of adequate security measures.

3. €50M: Google

Articles 13, 14, 6, 5: Insufficient legal basis for data processing

Google was found to have breached articles 13, 14, 6, and 5 because it created user accounts during the configuration stage of Android phones without obtaining meaningful consent. They then processed this information without a legal basis while lacking transparency and providing insufficient information.

4. €18M: Austrian Post

Articles 5, 6: Insufficient legal basis for data processing

Austrian Post created more than three million profiles of Austrians and resold their personal information to third parties, like political parties. The data included home addresses, personal preferences, habits, and party affinity.

5. €14.5M: Deutsche Wohnen SE

Articles 5, 25: Non-compliance with general data processing principles

Deutsche Wohnen stored tenant data in an archive system that was not equipped to delete information that was no longer necessary. This made it possible to have unauthorized access to years-old sensitive information, like tax records and health insurance, for purposes beyond those described at the original point of collection.

Privacy laws like GDPR seek to restrict data controllers from gaining access to personally identifiable information without consent and prevent data from being handled in manners that a subject is unaware of. If these fines teach us anything, it is that investing in technical and organizational measures is a must today. Many of these fines could have been avoided had businesses implemented Privacy by Design. Privacy must be considered throughout the business cycle, from conception to consumer use. 

Businesses cannot afford violations for their own sake. With risk-aware privacy software, they can continue to analyze data while protecting privacy, backed by the guarantee of a privacy risk score.

Resolution idea for next year: Avoid ending up on this list in 2020 by adopting risk-aware anonymization.
