Privacy Risk Scoring: The new standard for privacy compliance

Datasets contain an inherent privacy risk. By holding customer data, you create the potential for exposing your organization to legal action and a loss of consumer trust. To manage this, businesses have begun to de-identify their data. However, without privacy risk scoring, enterprises cannot ensure that privacy-protection actions have actually de-identified the data.

In recent years, new privacy regulations have emerged that restrict the use of data to produce valuable insights. This has led to an increase in businesses utilizing privacy-preservation techniques to anonymize their data and take it out of the scope of overhead-heavy legislation like GDPR and CCPA.

However, businesses today are unable to measure the effectiveness of their de-identification strategies because they do not evaluate their data with a privacy risk score.

Under the latest privacy regulations, using personal data for most forms of analytics is unlawful without consent or another legal basis – unless the data has been de-identified. This means that wrongly assuming that data is anonymized could cost your business as much as 4% of your annual revenue.

Fines of this nature could rock the bottom line of any business. Fortunately, they are entirely avoidable thanks to privacy risk scoring. 

 

Privacy risk scoring quantifies the risk of analytics to your business.

When a dataset undergoes an automated risk assessment, the privacy risk is measured based on metadata classification. A quantifiable score is then produced that assesses the likelihood of re-identification of individuals in the dataset.

Through this process, direct and indirect identifiers are used to assess the privacy risk of the data that a company holds. This is essential, because de-identification is much more complex than merely masking the direct identifiers like name and social insurance number. Yet this is the point at which most organizations believe they have properly de-identified data. 

This means that the approach your business is taking is likely ineffective, and you don’t even know it. Taking this risk is unnecessary and naive; it is like locking your door but never checking your windows.

A privacy risk score of 100% means your data still contains direct identifiers. If the score is less than 100%, it corresponds to the probability of re-identifying an average record using just its quasi-identifiers – that is, one over the average size of the equivalence classes.

For example, suppose you have a dataset with 2 features and 2 values each: sex (M, F) and political affiliation (R, D). 

 

              Republican    Democrat
Male          M+R           M+D
Female        F+R           F+D

 

This could create 4 possible groups, also known as equivalence classes: M+R, M+D, F+R, and F+D. 

      • Suppose the input database has 40 people with an even spread across each equivalence class (10 people each). Risk is then calculated as 1 over the average number of people in the equivalence classes, in this case, 1 over 10, or 10%.  
      • If all 40 people were in the same equivalence class, say M+D, the risk would be 1 over 40 or 2.5%.  
      • If each person fell into a different equivalence class (one person per class, which would require more quasi-identifier combinations than this two-feature example offers), the risk would be 1 over 1, or 100%. (A code sketch of this calculation follows below.)
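Concretely, the following minimal Python sketch groups records by their quasi-identifiers and computes the score as one over the average equivalence-class size. The dataset and column names are hypothetical, and real scoring tools weigh additional factors, so treat this as an illustration of the formula rather than a production risk assessment.

    from collections import Counter

    def privacy_risk_score(records, quasi_identifiers):
        """Re-identification risk as 1 / (average equivalence-class size)."""
        # Group records into equivalence classes by their quasi-identifier values.
        classes = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
        average_class_size = sum(classes.values()) / len(classes)
        return 1 / average_class_size

    # Toy dataset matching the example above: 40 people spread evenly across
    # the four sex x political-affiliation classes (10 people per class).
    records = [{"sex": s, "party": p} for s in ("M", "F") for p in ("R", "D")] * 10
    print(privacy_risk_score(records, ["sex", "party"]))  # 0.1, i.e. a 10% risk score

On the toy data above, the script prints 0.1, matching the 10% figure in the first bullet.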

Automating the risk assessment process is the only way to manage the volume of data.

Businesses use data to inform their decision-making every day. But when it comes to privacy, they often rely on traditional methods to apply privacy protection and manage risk. Why would you use AI to clean your floor, but rely on manual checks to decide whether a dataset is de-identified?

Data lakes contain an exorbitant quantity of data that expands at a rapid rate every single day. It is impractical to quantify the re-identification risk associated with each dataset accurately by hand. This means it is impossible to determine that all datasets being used for analytics are genuinely de-identified.

Privacy risk scoring is an automated process that can run throughout the privacy-protection cycle so that businesses can quantify their risk and make informed decisions. A system of this nature removes the guesswork that accompanies traditional methods of anonymization and empowers enterprises to define acceptable risk thresholds.

 

Businesses must customize risk thresholds based on their data use case.

Businesses do not use all of their data for the same activities, nor do they all manage the same level of sensitive information. As a consequence, privacy preservation is not a uniform process. In general, we suggest following these guidelines when assessing your privacy risk score (a simple threshold check is sketched after the list):

      • Greater than 33% implies that your data is identifiable.
      • 33% is an acceptable level if you are releasing to a highly trusted source.
      • 20% is the most commonly accepted level of privacy risk.
      • 11% is used for highly sensitive data.
      • 5% is used for releasing to an untrusted source.
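As a rough illustration of how a business might apply these guidelines once a score has been computed, the sketch below maps a score to the bands listed above. The cut-offs simply restate the list and are assumptions about how an organization might encode its own policy, not a legal determination.

    def classify_risk(score):
        """Map a privacy risk score (0.0 to 1.0) to the guideline bands above."""
        if score > 0.33:
            return "identifiable - apply further de-identification"
        if score > 0.20:
            return "release only to a highly trusted recipient"
        if score > 0.11:
            return "within the most commonly accepted level of privacy risk"
        if score > 0.05:
            return "acceptable for highly sensitive data"
        return "acceptable for release to an untrusted recipient"

    print(classify_risk(0.10))  # acceptable for highly sensitive data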

With a privacy risk score, businesses can continue to adjust their privacy-protection techniques until an acceptable score is returned. Businesses can act with certainty that their data has been properly anonymized and is safe to perform analytics on. This also gives consumers and regulatory authorities peace of mind that your business has incorporated privacy values into its analytics process. Privacy risk scoring is the new standard for privacy compliance.



The Top 5 Things We Learned at FIMA

Last week, the CryptoNumerics team attended FIMA Europe, Europe’s leading financial data management conference. The event was overflowing with banking executives, data managers, and analysts, and was an outstanding opportunity to network with attendees and learn about the privacy governance challenges experienced by the finance industry. Here’s what we learned:

 

1. Privacy silos exist across organizations of all sizes.

Who owns privacy governance? At FIMA, this was the question everyone was asking. Sadly, we don’t have a definitive answer; truthfully, it depends on how your organization operates. What is clear, however, is that privacy has many stakeholders, including legal, risk, compliance, IT security, data science, and business teams.

These stakeholders often operate in isolation, pursuing their own objectives with individualized processes and tools. As a consequence, fragmented priorities create friction between privacy protection and analytics.

Without an organization-wide system to manage privacy, and commitment from the board and executive team, privacy compliance and business insights will always be perceived as competing goals.

Businesses should implement an enterprise-wide privacy control system that generates quantifiable assessments of the re-identification risk and information loss. This enables businesses to set predetermined risk thresholds and optimize their compliance strategies for minimal information loss. By allowing companies to measure the balance of risk and loss, privacy stakeholder silos can be broken, and a balance can be found that ensures data lakes are privacy-compliant and valuable.

 

2. GDPR consent management is a challenge.

Up until May 2018, businesses were able to repurpose consumer data for analytics and data science purposes at will. Through GDPR, regulators sought to ensure that people’s data was only being used for purposes that they were aware of and had consented to. As a result, GDPR outlined six legal bases on which businesses can process consumer data: consent, legitimate interest, contract, legal obligation, vital interests, and public tasks.

Businesses are consistently using data for purposes beyond those that were specified at the point of collection. As a result, they are often required to get in contact with consumers and request consent for every single additional use case. This process not only risks an increase in data deletion requests, but it is incredibly expensive and time-consuming to manage.

Luckily, businesses have another alternative. Only personal information is governed by GDPR, so if data has been anonymized, the regulation no longer applies to it. As a result, if businesses anonymize their data, they can process data at will, without needing to manage consent.

 

3. Businesses believe removing the direct identifiers from a dataset renders it anonymous.

Redacting the names and social security numbers does not anonymize the data. Not even close. However, in our conversations at FIMA, we learned that most organizations believe it does. They also didn’t realize that other types of information, such as quasi-identifiers (gender, ZIP code, and age), can re-identify individuals or expose sensitive information when combined. (Research at Carnegie Mellon in 2000 using the US Census demonstrated that removing the direct identifiers, while leaving the quasi-identifiers, left the dataset 89% re-identifiable.)

This misunderstanding of anonymization could cost a business millions. Under GDPR, data that has been anonymized is no longer considered personal, and thus is not restricted from use. The trouble is, most businesses are operating as if their data were anonymous, thus mistakenly violating the terms of GDPR.

Businesses can avoid the de-identification illusion by implementing an advanced privacy-protection solution that assesses their datasets and privacy-protects the data to an acceptable risk threshold while maximizing data value retention. Solutions like CN-Protect also provide a privacy risk score, giving businesses peace of mind that the risk of re-identification is minimal.

 

4. Businesses are using synthesized data as a solution to GDPR management.

Some businesses are leaning on synthetic data as a way to avoid costly GDPR overhead. However, while this technique has advanced significantly in recent years, it does not preserve the analytical value in the same way as differential privacy or k-anonymity.

Creating high-quality synthetic data is challenging: if the data does not faithfully mirror real-world consumer information, the decision-making process will be compromised. Moreover, since the model attempts to replicate trends, important outlier behaviour can be missed. As a result, rather than generate synthetic data, businesses should anonymize actual data and have certainty that they are analyzing reality.
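For a sense of what anonymizing actual data can look like in practice, here is a minimal sketch that generalizes two hypothetical quasi-identifiers (ages into ten-year bands, ZIP codes into prefixes) in the spirit of k-anonymity. The data, column names, and generalization rules are illustrative assumptions; a real workflow would also measure the resulting k, the re-identification risk, and the information loss.

    import pandas as pd

    # Hypothetical consumer records; values are illustrative only.
    df = pd.DataFrame({
        "age":   [23, 27, 34, 36, 41, 45],
        "zip":   ["10115", "10117", "10179", "10178", "20095", "20097"],
        "spend": [120, 80, 200, 150, 90, 60],
    })

    # Generalize the quasi-identifiers: bucket ages into 10-year bands and
    # truncate ZIP codes to a 3-digit prefix. The analytical column stays intact.
    df["age"] = (df["age"] // 10 * 10).astype(str) + "s"
    df["zip"] = df["zip"].str[:3] + "**"

    # The smallest equivalence class is the "k" in k-anonymity.
    k = df.groupby(["age", "zip"]).size().min()
    print(df)
    print(f"smallest equivalence class: k = {k}")  # k = 2 for this toy data

Because the generalized records are real observations, trends and outliers remain grounded in reality rather than in a generative model’s approximation of it.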

 

5. European businesses are not preparing for the CCPA.

At FIMA, we noticed that businesses were not prepared for or concerned about the CCPA. While we expected GDPR to be a greater focus, given the location of the event, we were alarmed by the limited awareness of the upcoming law.

In just over a month, the CCPA will come into effect. Despite being a state-level law, its impact will be global: it regulates not just California-based businesses, but any business that holds information on Californians.

While GDPR is more proactive and largely stricter, being GDPR-compliant does not ensure CCPA-compliance. For example, two key differences are that:

      1. The CCPA affords consumers the right not to be discriminated against because of an exercise of their rights. This means you cannot limit your service offerings to a consumer who chooses not to share or allow you to use personal information.

         

      2. Consumers can pursue statutory penalties without proving injury. With the minimum damages per individual being so high (USD 100), attorneys are encouraged to bring class-action lawsuits, even in smaller incidents. Comparatively, GDPR does not provide figures for potential damages.

More importantly, the CCPA is part of a growing global trend toward putting power back in the hands of people. Businesses, especially those operating outside of Europe, must continue to monitor global regulatory changes.

Attending FIMA was an excellent learning opportunity for our team, and we thoroughly enjoyed speaking with the other attendees. Some of our takeaways were scary, but breaking down top misconceptions is an essential part of building a more privacy-conscious future. Financial data management is a challenging ordeal, but we believe that by prioritizing the privacy of consumers, businesses can generate trust and insights simultaneously.



Consumer purchasing decisions rely on product privacy

79% of Americans are concerned about the way companies are using their data. Now, they are acting by avoiding products, like Fitbit after the Google acquisition. *Privacy Not Included, a shopping guide from Mozilla, signals that these privacy concerns will impact what (and from whom) consumers shop for over the holidays.

Consumers are concerned about the ways businesses are using their data

A Pew Research Center study investigated how Americans feel about the state of privacy, and the findings make their concerns clear.

    • 60% believe it is not possible to go through daily life without companies and the government collecting their personal data.
    • 79% are concerned about the way companies are using their data.
    • 72% say they gain nothing or very little from company data collected about them.
    • 81% say that the risks of data collection by companies outweigh the benefits.

This study determined that most people feel they have no control over the data that is collected on them and how it is used.

Evidently, consumers lack trust in companies and do not believe that most have their best interests at heart. In the past, this was not such a big deal, but today, businesses will live and die by their privacy reputation. This shift is reflected in the wave of privacy regulations emerging across the world, such as GDPR, CCPA, and LGPD.

However, the legal minimum outlined in privacy regulations is not enough for many consumers, suggesting that meeting the basic requirements without embedding privacy into your business model is insufficient.

Such is seen with Fitbit, where many users are pledging to toss their devices in light of the Google acquisition. Google’s reputation has been tarnished in recent months by a €50 million GDPR fine and by the backlash over its secret harvesting of health records in the Ascension partnership.

Google’s acquisition of Fitbit highlights the risks of a failure to prioritize privacy

On November 1, Google acquired Fitbit for $2.1 billion in an effort, we presume, to breach the final frontier of data: health information. Fitbit users are now pushing back against the fact that Google will have access not just to their search data, location, and behaviour, but now, their every heartbeat.

In consequence, thousands of people have threatened to discard their Fitbits out of fear and started their search for alternatives, like the Apple Watch. This validates the Pew study and confirms that prioritizing privacy is a competitive advantage.

Despite assurances that personal information and health data will not be sold, Fitbit users are doubtful. One user said, “I’m not only afraid of what they can do with the data currently, but what they can do with it once their AI advances in 10 or 20 years”. Others voiced similar concerns on Twitter.

 

This fear hinges on the general concern over how big tech uses consumer data, but it is heightened by Google’s historical failure to prioritize privacy. After all, why would Google invest $2.1 billion if it could not profit from the asset? It can only be assumed that Google intends to use this data to break into the healthcare space. This notion is validated by its partnership with Ascension, through which it has started secretly harvesting the personal information of 50 million Americans, and by the fact that it has started hiring healthcare executives.

Privacy groups are pushing regulators to block the acquisition that was originally planned to close in 2020.

Without Privacy by Design, sales will drop

On November 20, the third annual *Privacy Not Included report was launched by Mozilla, which determines if connected gadgets and toys on the market are trustworthy. This “shopping guide” looks to “arm shoppers with the information they need to choose gifts that protect the privacy of their friends and family. And, spur the tech industry to do more to safeguard customers.” (Source)

This year, 76 products across six categories of gifts (Toys & Games; Smart Home; Entertainment; Wearables; Health & Exercise; and Pets) were evaluated based on their privacy policies, product specifications, and encryption/bug bounty programs.

To receive a badge, products must:

    • Use encryption
    • Have automatic security updates
    • Feature strong password mechanics
    • Manage security vulnerabilities
    • Offer accessible privacy policies

62 of those products met the Minimum Security Standards, but Ashley Boyd, Mozilla’s Vice President of Advocacy, warns that this is not enough: “Even though devices are secure, we found they are collecting more and more personal information on users, who often don’t have a whole lot of control over that data.”

Eight products, on the other hand, failed to meet the Minimum Security Standards:

    • Ring Video Doorbell
    • Ring Indoor Cam
    • Ring Security Cams
    • Wemo Wifi Smart Dimmer
    • Artie 3000 Coding Robot
    • Little Robot 3 Connect
    • OurPets SmartScoop Intelligent Litter Box
    • Petsafe Smart Pet Feeder

These products fail to protect consumer privacy and to adequately convey the risks associated with using them. They are consumers’ worst nightmare, and the very reason 79% are concerned about the way companies are using their data.

The study revealed an evident lack of privacy prioritization across businesses, especially smaller ones, despite positive security measures. Those that did prioritize privacy tended to make customers pay for it. This signals that the market is looking for more privacy-focused products, and that there is room to move in.

Businesses should embed privacy into the framework of their products and make the strictest privacy settings the default. In effect, privacy operations management must be a guiding principle from stage one, across IT systems, business practices, and data systems. This is what is known as Privacy by Design and Privacy by Default. These principles address the increasing awareness of data privacy and ensure that businesses consider consumer values throughout the product lifecycle. To learn more, read this: https://cryptonumerics.com/privacy-compliance/.

Customers vote with their money. Coupling the Pew study results with the Fitbit case, it is clear that customers are privacy-conscious and willing to boycott not only products but also companies that do not share their values. This week serves as a lesson that businesses must act quickly to bring their products in line with privacy values, move beyond basic regulatory requirements, and meet the demands of customers.



GDPR, data cemeteries, and million-dollar fines: Deutsche Wohnen SE Case Study

On October 30, 2019, Germany dealt out its largest GDPR fine to date: €14.5 million (EUR). The business receiving this fine was Deutsche Wohnen SE, a major property company. This case study will analyze Deutsche Wohnen SE’s legal infractions and the decision-making process of the Berlin Data Protection Authority (DPA), and explain what Deutsche Wohnen SE should have done differently. Through this, we hope to help your business avoid making the same mistake.

Deutsche Wohnen SE was found to have stored the personal data of tenants in an archive system whose architecture was not designed to delete data deemed no longer necessary. This meant that data that was years old could be utilised for purposes other than those specified at the point of collection. This clearly violates GDPR, as the company had no legal grounds to store information that is not relevant to the original business purpose.

A fine of this magnitude signifies that particular importance has been attributed to data graveyards, given the unnecessary risks associated with cyber breaches in this repository. Storing data for excessive periods of time is covered comprehensively by the GDPR, and the damages demonstrate that these articles will be enforced extensively. Regulatory bodies expect businesses to embed privacy into their software design, to minimize data as much as possible, and to implement changes to the way they store and process data.

 

How Deutsche Wohnen SE violated Articles 5 and 25 of GDPR

Examinations by the Berlin DPA in June 2017 and March 2019 determined that the tenant data stored in Deutsche Wohnen SE’s archive system was not essential to business operations and thus could not legally be stored longer than the necessary period of time. However, there was no system implemented to erase unnecessary data. This violates Article 5 and Article 25 of the GDPR.

Article 5 (e): Personal data shall be “kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed; personal data may be stored for longer periods insofar as the personal data will be processed solely for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes in accordance with Article 89(1) subject to implementation of the appropriate technical and organisational measures required by this Regulation in order to safeguard the rights and freedoms of the data subject (‘storage limitation’)”

Deutsche Wohnen SE’s actions infringed upon the processing principles outlined in Article 5, which determines that data should only be kept for as long as is necessary to complete the original purpose for which it was collected, to benefit the general public, or for scientific/historical research. This means that under the law, tenant data should have been deleted as soon as the tenant ended their connection with the company.

Article 25 (1): “Taking into account the state of the art, the cost of implementation and the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for rights and freedoms of natural persons posed by the processing, the controller shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organisational measures, such as pseudonymisation, which are designed to implement data-protection principles, such as data minimisation, in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of this Regulation and protect the rights of data subjects.”

Article 25 outlines that privacy should be baked into the framework of data storage systems in an effort to offer data subjects the highest possible level of data protection. This is known as Privacy by Design. Deutsche Wohnen SE failed to meet this criterion because they had no system in place to erase unnecessary data. Since it was determined by the DPA that the tenant information was not vital to operations, a systematic process should have been in place to erase the data as soon as it was no longer pertinent.
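As an illustration of what such a systematic erasure process might look like, here is a minimal sketch that purges archived records once a retention window has lapsed after the end of a tenancy. The table layout, column names, and one-year window are hypothetical assumptions, not a statement of what the law or the Berlin DPA requires.

    from datetime import timedelta
    import pandas as pd

    RETENTION = timedelta(days=365)  # hypothetical: keep records one year after the tenancy ends

    # Hypothetical archive of tenant records; a missing end date marks an active tenancy.
    archive = pd.DataFrame({
        "tenant_id":    [101, 102, 103],
        "tenancy_end":  pd.to_datetime(["2016-05-31", "2019-01-31", None]),
        "salary_proof": ["...", "...", "..."],
    })

    now = pd.Timestamp.now()
    expired = archive["tenancy_end"].notna() & (archive["tenancy_end"] + RETENTION < now)

    # Erase records whose retention period has lapsed. A real system would run
    # this as a scheduled job and keep an auditable log of what was deleted.
    archive = archive[~expired]
    print(archive)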

 

Why the Berlin DPA fined Deutsche Wohnen SE 14.5 million euros

Inspectors from Berlin’s DPA first flagged the archive system in an audit in June 2017. Then, in March 2019, more than 1.5 years after the initial examination and nine months after the implementation of GDPR, another audit was performed that demonstrated the system had still not been brought into compliance. 

Consequently, it was determined that Deutsche Wohnen SE had knowingly maintained, for over a year, an archive system that violated consumer privacy and the law.

The company did initiate a project to attempt to remedy the potential non-compliance, but the measures were determined to be inadequate. Though ineffective, by taking an initial step to remedy the illegal data management structures and by cooperating with the DPA, Deutsche Wohnen SE was able to limit the magnitude of the fine, which could have amounted to as much as 4% of their annual revenue of nearly 1.5 billion euros.

In a press release, the Berlin Commissioner for Data Protection and Freedom of Information, Maja Smoltczyk, said:

Unfortunately, in supervisory practice, we often encounter data cemeteries such as those found at Deutsche Wohnen SE. The explosive nature of such misconduct is unfortunately only made aware to us when it has come to improper access to the mass hoarded data, for example, in cases of cyber-attacks. But even without such serious consequences, we are dealing with a blatant infringement of the principles of data protection, which are intended to protect the data subjects from precisely such risks.

The DPA’s ruling reflects that being unable to prove that data had been disclosed to third parties or accessed unlawfully is irrelevant to the case. If the architecture of data storage was not designed with privacy in mind, it violates GDPR.

This signifies the risk of storing old data in the GDPR era. After all, data cemeteries are just waiting to be mishandled and exposed in data breaches.

GDPR makes provisions for the risks of data breaches and seeks to limit them by enforcing proactive privacy regulations. These are the objectives the commissioner looked to uphold when she determined the monetary penalty for the German property company. In consequence, Deutsche Wohnen SE was fined 14.5 million euros, the highest German GDPR fine to date, for failing to implement Privacy by Design. Additional fines (between EUR 6,000 and 17,000) were also imposed for “the inadmissible storage of personal data of tenants in 15 specific individual cases.” (Source)

 

What Deutsche Wohnen SE should have done, and how you can avoid the same fate

In that same press release, Maja Smoltczyk remarked that it is gratifying to be able to impose sanctions on structural deficiencies under GDPR before data breaches occur. She also gave a warning: “I recommend all organizations processing personal data review their data archiving for compliance with the GDPR.”

The definitive recommendation and high fine signify that the Berlin DPA will treat data cemetery cases with a heavy hand. This sets a precedent that the commissioner intends to impose penalties on companies before massive breaches occur, as a means of being proactive. The threat of proactive penalties should concern all data-driven organizations, because audits and findings of GDPR non-compliance will undoubtedly disrupt operations and cost money.

However, there is another salient lesson to be learned here: customer information that has been anonymized is no longer considered personal and thus is not regulated by the GDPR. This means that had Deutsche Wohnen SE anonymized their data cemeteries, they would have avoided the €14.5 million regulatory penalties and protected their tenants’ data.

In light of this penalty, it is clear that businesses should build anonymization strategies into the design of their data repositories. This can be done through privacy automation solutions, like CN-Protect, which assess, quantify, and assure privacy compliance through a four-step process (a conceptual sketch of such a pipeline follows the list):

      1. Metadata classification: identifies the direct, indirect, and sensitive data in an automated manner, to help businesses understand what kind of data they have. 
      2. Protect data: applies advanced privacy techniques, such as k-anonymization and differential privacy, to tables, text, images, video, and audio.
      3. Quantify risk: calculates the risk of re-identification of individuals and provides a privacy risk score.
      4. Automate privacy protection: implements policies to determine how data is treated in your pipelines.
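To show how these four steps fit together, here is a conceptual, end-to-end sketch applied to a toy dataset. Every function name, classification rule, and threshold below is a hypothetical placeholder illustrating the workflow, not CN-Protect’s actual API.

    from collections import Counter

    def classify_metadata(record):
        """Step 1: tag fields as direct identifiers, quasi-identifiers, or other."""
        direct, quasi = {"name"}, {"age", "zip"}
        return {f: ("direct" if f in direct else "quasi" if f in quasi else "other")
                for f in record}

    def protect(records, classification):
        """Step 2: redact direct identifiers and generalize quasi-identifiers."""
        protected = []
        for r in records:
            row = {}
            for field, kind in classification.items():
                if kind == "direct":
                    continue                            # drop outright
                elif kind == "quasi" and field == "age":
                    row[field] = r[field] // 10 * 10    # 10-year bands
                elif kind == "quasi" and field == "zip":
                    row[field] = r[field][:3]           # ZIP prefix only
                else:
                    row[field] = r[field]
            protected.append(row)
        return protected

    def quantify_risk(records, classification):
        """Step 3: risk = 1 / (average equivalence-class size over quasi-identifiers)."""
        quasi = [f for f, kind in classification.items() if kind == "quasi"]
        classes = Counter(tuple(r[f] for f in quasi) for r in records)
        return len(classes) / len(records)

    def release(records, risk, threshold=0.20):
        """Step 4: enforce the policy - only release data under the risk threshold."""
        if risk > threshold:
            raise ValueError(f"risk {risk:.0%} exceeds the {threshold:.0%} policy threshold")
        return records

    raw = [{"name": "A", "age": 34, "zip": "10115", "rent": 900},
           {"name": "B", "age": 36, "zip": "10117", "rent": 950},
           {"name": "C", "age": 41, "zip": "10115", "rent": 700},
           {"name": "D", "age": 45, "zip": "10119", "rent": 800}]

    classification = classify_metadata(raw[0])
    protected = protect(raw, classification)
    risk = quantify_risk(protected, classification)
    print(release(protected, risk, threshold=0.60), f"risk = {risk:.0%}")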

Businesses should use this four-step process to confirm that their datasets have truly been anonymized and to gain certainty that they won’t be next on the GDPR chopping block. In turn, privacy automation will smooth out the compliance process and empower businesses to mitigate the risks posed by data cemeteries.

Taking the step to anonymize data minimizes the risk of identification. With de-personalized data, control belongs to you, and GDPR risks are eliminated. Through this process, privacy protection and data analysis can occur simultaneously. You can be sure that Deutsche Wohnen SE is now wishing it had anonymized its archives, as doing so would have saved millions of euros. Don’t get caught in the same position.

 



Most organizations are violating CCPA’s de-identification regulations and don’t realize it

Our research clearly shows that 60% of data sets believed to be deidentified are not deidentified at all. If this remains the case in the CCPA era, businesses will put themselves at risk for class action lawsuits and brand and reputational damage. 

Businesses need to address reality: first-generation privacy protection techniques are insufficient, and without a quantifiable privacy risk assessment, they have no way to assure defensible deidentification. The consequences for a lack of understanding are too great. Businesses need to invest in state-of-the-art privacy automation solutions – now.

 

What does the CCPA say about deidentification?

The CCPA transforms data from a commodity to a privilege, forever altering the way businesses approach consumer privacy. What’s more, its overheads – like verifiable consumer requests and data breach notifications – will prove restrictive in data science and analytics environments. However, the law does not mean doom for data-driven businesses. There are ways to take data out of scope for the CCPA.

The CCPA provides exemptions for data that has been defensibly deidentified. In fact, such data is no longer covered under the CCPA at all. This will enable businesses who deidentify any consumer data to use that data for lucrative secondary purposes without having to notify customers or offer thousands of data deletion opportunities. This makes the business incentive to deidentify data higher than ever. 

However, the CCPA carries a high standard for data to be considered deidentified: 

CCPA Clause 1798.140 (h): “Deidentified” means information that cannot reasonably identify, relate to, describe, be capable of being associated with, or be linked, directly or indirectly, to a particular consumer, provided that a business that uses deidentified information:

(1) Has implemented technical safeguards that prohibit reidentification of the consumer to whom the information may pertain.

(2) Has implemented business processes that specifically prohibit reidentification of the information.

(3) Has implemented business processes to prevent inadvertent release of deidentified information.

(4) Makes no attempt to reidentify the information.

It is only when advanced privacy techniques are applied correctly, and a reidentification score is quantified, that deidentification can be proved to meet the legal requirement.

If data is not defensibly deidentified, it remains subject to the CCPA and therefore at risk of class-action lawsuits, fines, and loss of consumer trust. This is critical to understand: according to our research, 60% of data sets that are believed to be deidentified are not. While this may be primarily due to a lack of understanding or honest oversight, the CCPA does not accept belief as a measure of deidentification.

 



Deidentification: the illusion and the solution

Protecting consumer privacy is much more complex than removing personally identifiable information (PII). Other types of information, such as quasi-identifiers, can reidentify individuals or expose sensitive information when combined with other markers. The ability to link additional information and reidentify an individual through inference attacks and the mosaic effect is now well documented. 

In fact, research at Carnegie Mellon in 2000 using the US Census demonstrated that removing the direct identifiers left the data set 89% reidentifiable.

As a whole, first-generation data security methods of deidentification and manual approaches to assessing reidentification risk won’t cut it. Businesses must adopt an automated and defensible deidentification strategy to limit and prevent the reidentification of individuals in their data sets. This solution must include a reidentification risk score, because even if a business applies privacy-protection techniques, it remains uncertain whether a data set has been effectively deidentified unless the risk of reidentification is quantified.
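One simple way to see why quantification matters is to measure how many records are unique on their quasi-identifiers alone, since every unique record is trivially reidentifiable by anyone able to link on those fields. The sketch below does this for a hypothetical data set whose direct identifiers have already been removed; the columns and values are assumptions for illustration.

    import pandas as pd

    # Hypothetical data set with names and SSNs already stripped out.
    df = pd.DataFrame({
        "gender":     ["F", "F", "M", "M", "F", "M"],
        "zip":        ["94043", "94043", "60601", "60601", "10001", "30301"],
        "birth_year": [1988, 1988, 1990, 1991, 1975, 1983],
    })
    quasi = ["gender", "zip", "birth_year"]

    class_sizes = df.groupby(quasi).size()          # people per equivalence class
    unique_records = (class_sizes == 1).sum()       # classes containing a single person
    unique_rate = unique_records / len(df)

    print(f"{unique_rate:.0%} of records are unique on their quasi-identifiers alone")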

As a result, when companies say that they have deidentified their data sets, the first question they need to answer is: how do you know the data cannot be reidentified? Being able to answer it could be what saves your brand and your bottom line.

The scale and the legal significance of proving privacy compliance under the CCPA are too great to take a “best-attempt” approach. In the CCPA era, how businesses handle personal information will define their risk exposure to legal actions and brand and reputational damage. 
