CCPA is here. Are you compliant?

On January 1, 2020, the California Consumer Privacy Act (CCPA) came into effect, and it has already altered the ways companies can make use of user data.

Before the CCPA, Big Data companies had free rein to harvest user data and use it for data science, analytics, AI, and ML projects, monetizing consumer data without protecting privacy. With the CCPA now officially in force, companies have no choice but to comply or pay the price. This begs the question: is your company compliant?

CCPA Is Proving That Privacy Is Not a Commodity – It’s a Right

This legislation protects consumers from companies selling their data for secondary purposes. Without explicit permission, companies cannot use that data.

User data is highly valuable for companies’ analytics and monetization initiatives, so risking user opt-outs can be detrimental to a company’s continued success. By de-identifying consumer data, companies can follow CCPA guidelines while maintaining high data quality.

The CCPA sets a demanding, standardized bar that companies must meet for de-identification, complete with specific definitions and detailed explanations of how to achieve its ideals. Despite these guidelines, and with the legislation only just in effect, studies have found that only 8% of US businesses are CCPA compliant.

For companies that are not yet CCPA compliant, the time to act is now. By thoroughly understanding the regulations set out by the CCPA, companies can protect their users while still benefiting from their data.

To do so, companies must understand the significance of maintaining analytical value and the importance of adequately de-identified data. An organization that does not comply with the CCPA is vulnerable to fines of up to $7,500 per violation, as well as individual consumer damages of up to $750 per incident.

For perspective, GDPR, which came into effect in 2018, allows fines of up to 4% of a company’s annual revenue.

To ensure a CCPA fine is not coming your way, assess your current data privacy protection efforts to ensure that consumers:

  • are asked for direct consent before their data is used
  • can opt out of, or remove their data from, analytical use
  • cannot be re-identified from the data you retain

In essence, the CCPA does not impede a company’s ability to use, analyze, or monetize data. It enforces that data is de-identified or aggregated to the standards its legislation requires.

Our research found that 60% of datasets companies believed to be de-identified in fact carried a high re-identification risk. There are three ways to reduce the possibility of re-identification:

  • Use state-of-the-art de-identification methods
  • Assess for the likelihood of re-identification
  • Implement controls so that data required for secondary purposes remains CCPA compliant

Read more about these effective privacy automation methods in our blog post, The Business Incentives to Automate Privacy Compliance under CCPA.

Manual Methods of De-Identification Are Tools of The Past

A standard of compliance within the CCPA involves identifying which methods of de-identification leave consumer data susceptible to re-identification. Manual de-identification, which is extremely common, can leave room for re-identification, making companies vulnerable to CCPA penalties.

Protecting data to the best of a company’s abilities is achievable through techniques such as k-anonymity and differential privacy. However, applying these methods manually is impractical, both for meeting the 30-day grace period the CCPA provides and for achieving high-quality data protection.
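
To make k-anonymity concrete, here is a minimal sketch in Python, using fabricated records: a dataset is k-anonymous when every record shares its quasi-identifier values (here, a generalized age band and a truncated ZIP code) with at least k-1 others.

```python
from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """Return k: the size of the smallest group of records sharing the
    same quasi-identifier values. Every record is then indistinguishable
    from at least k-1 others on those fields."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return min(groups.values())

# Fabricated records: ages generalized into bands, ZIP codes truncated.
records = [
    {"age": "30-39", "zip": "902**", "diagnosis": "flu"},
    {"age": "30-39", "zip": "902**", "diagnosis": "asthma"},
    {"age": "40-49", "zip": "941**", "diagnosis": "flu"},
    {"age": "40-49", "zip": "941**", "diagnosis": "diabetes"},
]

print(k_anonymity(records, ["age", "zip"]))  # 2 -> the table is 2-anonymous
```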

Understanding the CCPA ensures that data is adequately de-identified and its re-identification risk removed, all while meeting every legal specification.

Meeting CCPA regulations means ditching first-generation approaches to de-identification; adopting privacy automation reduces the possibility of re-identification. Using privacy automation to protect and utilize consumers’ data is necessary for successfully navigating the new CCPA era.

Privacy automation ensures not only that user data is correctly de-identified, but also that the data maintains high quality.

CryptoNumerics as the Privacy Automation Solution

Despite CCPA’s strict guidelines, the benefits of using data for data science and monetization remain incredibly high. Scaling back efforts to utilize data is therefore a disservice to a company’s success.

Complying with CCPA legislation means determining which methods of de-identification leave consumer data susceptible to re-identification. Manual de-identification methods, including masking and tokenization, leave room for improper anonymization.

Here, Privacy Automation becomes necessary for an organization’s analytical tactics. 

Privacy automation abides by the CCPA while supporting the tools of data science and analytics. If a user’s data is de-identified to the CCPA’s standards, conducting data analysis remains possible.

Privacy automation revolves around the assessment, quantification, and assurance of data. A privacy automation tool measures the risk of re-identification, applies data privacy protection techniques, and provides audit reports.
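
As an illustration only – the function names below are our own, not any product’s API – that assess-protect-report loop can be sketched in a few lines of Python, using 1/k over the smallest quasi-identifier group as a stand-in risk measure:

```python
from collections import Counter

def assess_risk(rows, quasi_ids):
    """Re-identification risk as 1/k, where k is the size of the
    smallest group of records sharing the same quasi-identifier values."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return 1 / min(groups.values())

def generalize(rows):
    """One illustrative privacy action: blank out one more ZIP digit."""
    return [{**r, "zip": r["zip"][:-1] + "*"} for r in rows]

def protect(rows, quasi_ids, threshold=0.2, max_rounds=5):
    """Apply privacy actions until risk falls to the threshold,
    keeping an audit trail of every step."""
    audit = [("initial", assess_risk(rows, quasi_ids))]
    for _ in range(max_rounds):
        if audit[-1][1] <= threshold:
            break
        rows = generalize(rows)
        audit.append(("generalize_zip", assess_risk(rows, quasi_ids)))
    return rows, audit

# Five hypothetical records, each currently unique (risk = 1.0).
rows = [{"zip": f"9021{i}", "age": "30-39"} for i in range(5)]
safe, audit = protect(rows, ["zip", "age"])
print(audit)  # [('initial', 1.0), ('generalize_zip', 0.2)]
```

The audit trail is the point: it records which action was taken and how the measured risk changed, which is the kind of evidence a compliance report needs.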

A study by PossibleNow indicated that 45% of companies were in the process of preparing but did not expect to be compliant by the CCPA’s implementation date. Adopting a privacy automation tool to better process data and prepare for the new legislation is critical to a company’s success under the CCPA. Privacy automation products such as CN-Protect allow companies to succeed in data protection while still benefiting from their data’s analytics. (Learn more about CN-Protect.)

Big data privacy regulations can only be met with privacy automation

GDPR demands that businesses obtain explicit consent from data subjects before collecting or using data. CCPA affords consumers the right to request that their data is deleted if they don’t like how a business is using it. PIPEDA requires consumers to provide meaningful consent before their information is collected, used, and disclosed. New privacy laws are coming to India (PDPB), Brazil (LGPD), and over 100 other countries. In the US alone, over 25 state privacy laws have been proposed, with a national one in the works. Big data privacy laws are expansive, restrictive, and they are emerging worldwide faster than you can say, “what about analytics?”.

This has made it challenging for businesses to (1) keep up, (2) get compliant, and (3) continue performing analytics. Not only are these regulations inhibitive, but failure to meet their standards can result in astronomical fines, like British Airways’ 204.6 million euro penalty. As such, much distress and confusion has ensued in the big data community.


Businesses are struggling to adapt to the rapid increase in privacy regulations

Stakeholders cannot agree whose responsibility it is to ensure compliance, they are struggling with consent management, and many are under the impression that removing direct identifiers renders data anonymous.

Major misconceptions can cost businesses hundreds of millions. So let’s break them down.

1. “Consent management is the only way to keep performing analytics.”

While consent is essential at the point of collection, the odds are that, down the road, businesses will want to repurpose data. Obtaining permission in these cases, given the sheer volume of data repositories, is an unwieldy and unmanageable process. A better approach is to anonymize the data: once anonymized, data is no longer personal, and it goes from consumer information to business IP.

2. “I removed the direct identifiers, so my data is anonymized.”

If this were the case, anonymization would be an easy process. Sadly, it is not so. In fact, it has been widely acknowledged that simply redacting directly identifying information, like names, is nowhere near sufficient. In almost all cases, this leaves most of the dataset re-identifiable.
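
A toy example with fabricated records shows why. Once names are gone, the remaining quasi-identifiers (ZIP code, birth year, gender) can still single out one individual:

```python
# Fabricated "de-identified" records: names removed, everything else kept.
deidentified = [
    {"zip": "90210", "birth_year": 1985, "gender": "F", "condition": "flu"},
    {"zip": "90210", "birth_year": 1990, "gender": "M", "condition": "asthma"},
    {"zip": "94107", "birth_year": 1985, "gender": "F", "condition": "diabetes"},
]

def matches(rows, **known):
    """Records consistent with everything an attacker already knows."""
    return [r for r in rows if all(r[k] == v for k, v in known.items())]

# Knowing only a neighbour's ZIP, birth year, and gender is enough to
# pin down a single "anonymous" record and learn their condition.
hits = matches(deidentified, zip="90210", birth_year=1985, gender="F")
print(len(hits), hits[0]["condition"])  # 1 flu
```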

3. “Synthetic data is the best way to manage emerging regulations.”

False! Synthetic data is a great alternative for testing, but when it comes to deriving insights, it is not the way to go. Because the process attempts to replicate trends, important outlier information can be missed. As a result, the data is unlikely to mirror real-world consumer information, compromising the decision-making process.

What’s evident from our conversations with data-driven organizations is that businesses need a better solution. Consent management is slowing them down, legacy approaches to anonymization are ineffective, and current workarounds skew insights or wipe data value.


Privacy automation: A better approach to big data privacy laws

The only manageable and effective solution to big data privacy regulations is privacy automation. This process measures the risk of re-identification, applies privacy-protection techniques, and provides audit reports throughout the anonymization process. It is embedded in an organization’s data pipeline, spreading the solution enterprise-wide and harmonizing the needs of stakeholders by optimizing for anonymization and preservation of data value.

This solution will simplify the compliance process by enabling privacy rules definition, risk assessments, application of privacy actions, and compliance reporting to happen within a single application. In turn, privacy automation allows companies to unlock data in a manner that protects and adds value to consumers.

Privacy automation is the best method for businesses to handle emerging laws and regain the mission-critical insights they have come to rely on. Through this approach, privacy unlocks insights.

How data partnerships unlock valuable second-party data

Sharing data is fundamental to advancing business strategy, especially in marketing. Over the last five years, analytics has become an essential part of the marketer’s role, and marketers have grown used to purchasing datasets on the open market to create a more holistic understanding of customers.

However, amid the new wave of privacy regulations and demands for transparency, achieving the same level of understanding has become a challenge. The risk of using third-party data has also increased, because businesses cannot trust that outside sources have met compliance regulations or provided accurate data. Consequently, more companies are turning to second-party data sources.

Second-party data is essentially someone else’s first-party data. It is data purchased directly from another company, assuring trust and high quality. In essence, these strategic partnerships enable businesses to build their customer databases and gain insights quickly.

Purchasing another business’s data will increase the breadth of your data lake, but it also opens organizations up to regulatory fines and reputational damage: harnessing the data as-is requires you to trust your partner’s compliance processes. By comparison, relying on your own (first-party) data can guarantee privacy, but it lacks the breadth of other data types.

Second-party data is an important addition for businesses looking to piece together the puzzle of each individual’s data records. However, compliance, security, and a loss of control are problems that must be addressed. There are two options: anonymization and privacy-protected data collaboration.


Anonymize and share: a data partnership plan

Under regulations like GDPR, data cannot be used for secondary purposes without first obtaining the consent of the data subject. This means that if you are looking to share data and establish a data partnership, you must first obtain meaningful consent from every data subject – an onerous process!

To avoid this expense while still respecting the principle behind the law, businesses can rely on anonymization to securely protect consumers and regain control over the information. Once data has been anonymized, it is no longer considered personal. This means businesses can perform data exchanges and achieve desired marketing efforts.

However, in sharing data with another business, you lose control over what happens to it. A better solution is privacy-protected data collaboration.

Privacy-protected data collaboration builds data partnerships securely

By leveraging an advanced tool like CN-Insight, which uses secure multi-party computation (SMC) and private set intersection (PSI), businesses can acquire insights without sharing the data itself.

While this may seem odd, the reality is, you don’t want the data, you want the insights – and you can still get those without exposing your data.
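
To give a flavour of the idea, here is a deliberately naive, salted-hash sketch of a private set intersection in Python. Real PSI protocols, including those behind products like CN-Insight, use far stronger cryptography; this toy version is vulnerable to brute-force guessing of low-entropy identifiers and is shown only to convey how two parties can learn their overlap without exchanging raw data.

```python
import hashlib
import secrets

def blind(ids, salt):
    """Hash each identifier with a shared salt so raw values never
    leave the premises. (Toy scheme: low-entropy identifiers could
    still be brute-forced by the other party.)"""
    return {hashlib.sha256(salt + i.encode()).hexdigest() for i in ids}

# Both parties agree on a random salt out of band.
salt = secrets.token_bytes(16)

party_a = {"alice@example.com", "bob@example.com", "carol@example.com"}
party_b = {"bob@example.com", "dave@example.com"}

# Only blinded values are exchanged; each side can recognise which of
# its OWN identifiers fall in the overlap, but learns nothing else.
overlap = blind(party_a, salt) & blind(party_b, salt)
print(len(overlap))  # 1 -> one customer appears in both datasets
```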

That, in essence, is what CN-Insight enables, thanks to sophisticated cryptographic techniques researched and developed at reputable institutions like Bristol University and Aarhus University. To learn more about how it works, check out our product page.

Through privacy-protected data collaboration, your data is never exposed, but you receive the insights you need. This is the best solution for marketers looking to regain the holistic understanding they had of customers before regulatory authorities began imposing high fines and strict laws. Not only will businesses avoid the expensive and time-consuming contract process associated with traditional second-party data sharing, but they can trust that their data was never shared and they are not at risk of regulatory penalties. 

Data partnerships through virtual data collaboration are the solution to unlock second-party data in a privacy-preserving way.

The top five things we learned about privacy in 2019

2019 has been a trailblazing year for data privacy, one that left us with a few clear messages about the future. We’ve collected our top lessons to help inform your privacy governance strategy moving forward.

1. Privacy is a multi-dimensional position: legal, ethical, and economic

Since the implementation of GDPR in May 2018, people have been quick to consider privacy from a legal perspective – as a risk that must be mitigated to avoid lawsuits and regulatory fines. In doing so, many have missed the other important factors: the people behind the data and the data utility advantage.

When your business collects consumer information, it is important to remember that this is personal data, and an intrinsic duty and trust are linked to its collection. There is an ethical responsibility to do right by your customers: to use their data only for purposes they are aware of and have consented to, and to not share it with others. Responsible data management is fundamental to your relationship with customers, and practicing it gives your business a significant advantage.

Economically speaking, positioning your business as a privacy leader is the best strategy, and not only from a brand perspective. If you anonymize personal information, your analysts gain increased access to a valuable resource that can help improve strategy and your product or service.

2. Privacy is not one-size-fits-all

Consumer data contains an inherent privacy risk, even after it has been de-identified. Even if you mask the data, you don’t know how successful your process was until you assess the re-identification risk. That is why we believe a privacy risk score is so fundamental to the anonymization process.

However, we’ve learned that a score also enables businesses to customize their risk thresholds based on their activities.

This matters because businesses do not use all of their data for the same activities, nor do they all manage the same level of sensitive information. As a consequence, privacy preservation is not a uniform process. In general, we suggest following these guidelines when assessing your privacy risk score:

  • Greater than 33% implies that your data is identifiable.
  • 33% is an acceptable level if you are releasing to a highly trusted source.
  • 20% is the most commonly accepted level of privacy risk.
  • 11% is used for highly sensitive data.
  • 5% is used for releasing to an untrusted source.
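
One illustrative way to compute and apply such a score – our own reading of the thresholds above, not a formal methodology – is to take 100/k over the smallest group of records sharing the same quasi-identifier values:

```python
from collections import Counter

def risk_score(rows, quasi_ids):
    """Privacy risk as a percentage: 100/k for the smallest group of
    records sharing the same quasi-identifier values."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return 100 / min(groups.values())

def classify(score):
    """Map a percentage score onto the guideline bands above."""
    if score > 33:
        return "identifiable"
    if score > 20:
        return "release only to a highly trusted source"
    if score > 11:
        return "commonly accepted risk level"
    if score > 5:
        return "acceptable for highly sensitive data"
    return "acceptable for an untrusted source"

# Ten fabricated records that all share the same generalized values.
rows = [{"age": "30-39", "zip": "902**"}] * 10
print(classify(risk_score(rows, ["age", "zip"])))  # 10% risk
```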

3. Automation is central to protecting data assets

Old privacy solutions are no match for modern threats to data privacy. Legacy approaches, like masking, were never intended to ensure privacy. Rather, they are cybersecurity techniques developed at a time when organizations did not rely on the insights derived from consumer data.

Even worse, many businesses still rely on manual approaches to anonymize data. Given the volume and the precision required, this is an impossible undertaking doomed to non-compliance.

What businesses require to effectively privacy protect their data today is privacy automation: a solution that combines AI and advanced privacy protection to assess, anonymize, and preserve datasets at scale.

4. Partnerships across your business teams are essential

Privacy cannot be the role of one individual. Across an organization, stakeholders operate in isolation, pursuing their own objectives with individualized processes and tools. This has led to fragmentation between legal, risk and compliance, IT security, data science, and business teams. In consequence, a mismatch between values has led to dysfunction between privacy protection and analytics priorities. 

In reality, privacy affects all of these stakeholders, and their values should not be pitted against each other. In today’s era of regulation, each is reliant on the others. Teams must establish a unified goal: protect privacy in order to unlock data.

The solution is to implement an enterprise-wide privacy control system that generates quantifiable assessments of the re-identification risk and information loss. This enables businesses to set predetermined risk thresholds and optimize their compliance strategies for minimal information loss. By allowing companies to measure the balance of risk and loss, privacy stakeholder silos can be broken, and a balance can be found that ensures data lakes are privacy-compliant and valuable.
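
A minimal sketch of that optimization, with fabricated risk and information-loss numbers, shows the idea: among the candidate anonymization strategies that meet the risk threshold, keep the one that loses the least information.

```python
# Fabricated candidate strategies with measured re-identification risk
# and information loss (both as fractions; real tools would compute these).
candidates = [
    {"name": "raw data",           "risk": 0.80, "loss": 0.00},
    {"name": "mask direct IDs",    "risk": 0.45, "loss": 0.05},
    {"name": "generalize ZIP",     "risk": 0.18, "loss": 0.20},
    {"name": "generalize ZIP+age", "risk": 0.09, "loss": 0.40},
]

def best_strategy(candidates, risk_threshold):
    """Among strategies meeting the risk threshold, keep the one that
    preserves the most data value (minimal information loss)."""
    compliant = [c for c in candidates if c["risk"] <= risk_threshold]
    return min(compliant, key=lambda c: c["loss"]) if compliant else None

print(best_strategy(candidates, 0.20)["name"])  # generalize ZIP
```

Tightening the threshold naturally forces a costlier strategy, which is exactly the risk-versus-value balance the text describes.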

5. Privacy is a competitive advantage

If you want to take cues from Apple, the most significant is that positioning privacy as central to your business is a competitive advantage. 

Businesses should address privacy as a component of their customer engagement strategy. Not only does compliance avoid regulatory penalties and reputational damage, but embedding privacy into your operations is also a method to gain trust, attention, and build a reputation for accountability. 

A Pew Research Center study investigated how Americans feel about the state of privacy, and concern radiates from the findings.

  • 60% believe it is not possible to go through daily life without companies and the government collecting their personal data.
  • 79% are concerned about the way companies are using their data.
  • 72% say they gain nothing or very little from company data collected about them.
  • 81% say that the risks of data collection by companies outweigh the benefits.

Evidently, people feel they have no control over their data and do not believe businesses have their best interests at heart. Break the mould by prioritizing privacy. There is room for your business to stand out, and people are waiting for you to do so.

Privacy had a resurgence this year that has reshaped law and consumer expectations. Businesses must make protecting sensitive information a priority across their teams by investing in an automated de-identification solution that fits their needs. Doing so will improve the customer experience, unlock data, and serve as a differentiating advantage with target markets.

Privacy is not only the future. Privacy is the present. Businesses must act today.

Healthcare must prioritize data privacy.

Healthcare is a system reliant on trust. This is true not only for front-line providers but across the industry – perhaps most significantly for researchers. Yet in recent years, the news has carried story after story about lapses in patient privacy and insufficient security measures. Just a few days ago, LifeLabs suffered a breach that leaked the personal information of approximately 15 million Canadians. Healthcare cannot afford to have its methods questioned, doubted, or refused, and one misstep could dismantle the trust the industry has carefully built.

Record releases, deception, and litigation: current threats to healthcare

On December 17, LifeLabs, one of Canada’s largest medical services companies, disclosed that it had suffered a massive cybersecurity breach in which hackers gained access to the highly confidential information of up to 15 million customers – largely BC and Ontario residents. The database included health card numbers, names, email addresses, logins, passwords, and dates of birth. Worse yet, the hackers obtained the test results of 85,000 Ontarians.

“I’m sorry this happened and we’ll do everything we can to win back the confidence of our customers,” LifeLabs chief executive Charles Brown said in an interview. “[Private companies, government, and hospitals have] got to do more to make sure all our customers feel secure.”

At the time of the attack, LifeLabs paid a ransom (amount undisclosed) in an attempt to secure the information. Experts condemned the move: it implies a reliance on the information and an inability to secure it by other means, and it offers no guarantee that the files will be returned. Some have even suggested that paying a ransom increases the likelihood that LifeLabs will be the target of another attack.

Now that the hackers have seen the files, there are two main concerns: (1) that they will release the test records, and (2) that they will use the identifiable information for nefarious financial ends, like obtaining a loan or a credit card.

This risk is why identifiable information is so valuable, and it is why organizations, especially in the healthcare field, have a duty to protect it. This means investing in both cybersecurity controls and privacy solutions.

In relation to the LifeLabs scandal, the Ontario Privacy Commissioner, Brian Beamish, said: “Public institutions and health-care organizations are ultimately responsible for ensuring that any personal information in their custody and control is secure and protected at all times.”

LifeLabs may also be at risk of civil litigation from victims seeking compensation. After all, there is precedent: two class-action lawsuits were brought to the Quebec Superior Court over a similar incident, the Desjardins Group breach earlier this year.

Similar concerns over the safety and security of patient data exist not only among care providers but also among the organizations performing research with that data. Such is seen in the uproar surrounding the contract between the NHS and Amazon, by which the virtual assistant, Alexa, gained access to health information. (Read more about the NHS-Alexa deal in our blog post: “You are the product: People are feeling defeatist in the surveillance age.”)

Privacy will be foundational to healthcare innovation: predictions from experts

Eleonora Harwich, director of research and head of innovation at Reform, said that “The key issue of 2020 will be establishing what fair commercial relationships look like between the private sector, the public sector and patients when data are used to create digital healthcare products or services. People are increasingly unhappy with the status quo in which they have little knowledge or agency over what is done with information about them.”

Her comments are one of many that echo frustration over privacy and security in healthcare. As the year comes to a close, healthcare experts have begun making predictions for the year ahead. These range from the significance of AI to the promise of telemedicine. However, all reiterate the paramount importance of data privacy moving forward.

The intersection of innovation and healthcare has reached an unprecedented magnitude, and that requires a shift in focus: organizational and technical controls must be implemented to prevent the exposure of sensitive information.
