
Key terms to know to navigate data privacy

As the data privacy discourse continues to grow, it’s crucial that the terms used to explain data science, data privacy, and data protection are accessible to everyone. That’s why we at CryptoNumerics have compiled a continuously growing Privacy Glossary to help people learn and better understand what’s happening to their data.

Below are 25 terms covering privacy legislation, personal data, and other privacy and data science terminology to help you better understand what our company does, what other privacy companies do, and what is being done with your data.

Privacy regulations

    • General Data Protection Regulation (GDPR) is a privacy regulation implemented in May 2018 that has inspired more regulations worldwide. The law requires data controllers to establish a specific legal basis for every purpose for which personal data is used. If a business intends to use customer data for an additional purpose, it must first obtain explicit consent from the individual. As a result, data in data lakes can only be made available for use after processes have been implemented to notify and request permission from every subject for every use case.
    • California Consumer Privacy Act (CCPA) is a sweeping piece of legislation aimed at protecting the personal information of California residents. It gives consumers the right to learn about the personal information that businesses collect, sell, or disclose about them, and to prevent the sale or disclosure of their personal information. It includes the Right to Know, Right of Access, Right to Portability, Right to Deletion, Right to be Informed, Right to Opt-Out, and Non-Discrimination Based on Exercise of Rights. This means that if consumers do not like the way businesses are using their data, they can request that it be deleted, which poses a risk to business insights.
    • Health Insurance Portability and Accountability Act (HIPAA) is a health privacy regulation passed in 1996 and signed by President Clinton. The act gives patients the right to privacy and covers 18 personal identifiers that must be de-identified. It applies not only in hospitals but also in workplaces, schools, and other settings that handle health information.

Legislative Definitions of Personal Information

  • Personal Data (GDPR): “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person” (source)
  • Personal Information (PI) (CCPA): “information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.” (source)
  • Personal Health Information (PHI) (HIPAA): any identifiable health information that is used, maintained, stored, or transmitted by a HIPAA-covered entity (a healthcare provider, health plan or health insurer, or a healthcare clearinghouse) or a business associate of a HIPAA-covered entity, in relation to the provision of healthcare or payment for healthcare services. PHI is made up of 18 identifiers, including names, social security numbers, and medical record numbers (source)

Privacy terms

 

  • Anonymization is a process in which personally identifiable information (whether direct or indirect) is removed from datasets or manipulated to prevent re-identification. This process must be irreversible.
  • Data controller is a person, an authority or a body that determines the purposes for which and the means by which personal data is collected.
  • Data lake is a centralized repository where a business stores the data it collects, often in raw form.
  • Data processor is a person, an authority or a body that processes personal data on behalf of the controller. 
  • De-identified data is the result of removing or manipulating direct and indirect identifiers to break any links so that individuals cannot be re-identified.
  • Differential privacy is a privacy framework that characterizes a data analysis or transformation algorithm rather than a dataset. It specifies a property that the algorithm must satisfy to protect the privacy of its inputs, whereby the outputs of the algorithm are statistically indistinguishable when any one particular record is removed in the input dataset.
  • Direct identifiers are pieces of data that identify an individual without the need for any additional data, e.g. name, SSN.
  • Homomorphic encryption is a method of performing a calculation on encrypted information (ciphertext) without decrypting it (to plaintext) first.
  • Identifier: Unique information that identifies a specific individual in a dataset, such as a name, social security number, or bank account number. More generally, any field that is unique for each row.
  • Indirect identifiers are pieces of data that can be used to identify an individual indirectly, or in combination with other pieces of information, e.g. date of birth, gender.
  • Insensitive: Information that is not identifying or quasi-identifying and that you do not want to be transformed.
  • k-anonymity is a property of a dataset whereby the quasi-identifying attributes of any record are indistinguishable from those of at least k-1 other records (see the first sketch after this list).
  • Perturbation: Data can be perturbed by using additive noise, multiplicative noise, data swapping (changing the order of the data to prevent linkage) or generating synthetic data.
  • Pseudonymization is the processing of personal data in such a way that it can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organizational measures ensuring the data is not attributed to an identified or identifiable person.
  • Quasi-identifiers (also known as indirect identifiers) are pieces of information that on their own are not sufficient to identify a specific individual but that, when combined with other quasi-identifiers, can make it possible to re-identify an individual. Examples of quasi-identifiers are zip code, age, nationality, and gender.
  • Re-identification, or de-anonymization, is when anonymized (de-identified) data is matched with publicly available information, or auxiliary data, to discover the individual to whom the data belongs.
  • Secure multi-party computation (SMC), or Multi-Party Computation (MPC), is an approach to jointly compute a function over inputs held by multiple parties while keeping those inputs private. MPC is used across a network of computers while ensuring that no data leaks during computation. Each computer in the network only sees fragments of secret shares, never anything meaningful (see the second sketch after this list).
  • Sensitive: Information that is more general among the population, making it difficult to identify an individual with it. However, when combined with quasi-identifiers, sensitive information can be used for attribute disclosure. Examples of sensitive information are salary and medical data. Let’s say we have a set of quasi-identifiers that form a group of women aged 40-50, a sensitive attribute could be “diagnosed with breast cancer.” Without the quasi-identifiers, the probability of identifying who has breast cancer is low, but once combined with the quasi-identifiers, the probability is high.
  • Siloed data is data stored away in silos with limited access, to protect it against the risk of exposing private information. While these silos protect the data to a certain extent, they also lock the value of the data.
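To make terms like k-anonymity, quasi-identifiers, and sensitive attributes concrete, here is a minimal sketch of how k-anonymity can be checked on a toy table. It assumes Python with pandas, and the column names and values are purely hypothetical:

```python
import pandas as pd

# Hypothetical table: zip, age_band, and gender are quasi-identifiers;
# diagnosis is the sensitive attribute.
df = pd.DataFrame({
    "zip":       ["90210", "90210", "90210", "10001", "10001", "10001"],
    "age_band":  ["40-50", "40-50", "40-50", "20-30", "20-30", "20-30"],
    "gender":    ["F", "F", "F", "M", "M", "M"],
    "diagnosis": ["breast cancer", "healthy", "healthy", "flu", "healthy", "flu"],
})

quasi_identifiers = ["zip", "age_band", "gender"]

def k_anonymity(frame, qi_cols):
    # A table is k-anonymous if every combination of quasi-identifier values
    # appears in at least k rows; k is the size of the smallest such group.
    return int(frame.groupby(qi_cols).size().min())

print(k_anonymity(df, quasi_identifiers))  # 3 -> this toy table is 3-anonymous
```

Note that even a k-anonymous table can leak sensitive attributes: knowing someone is a woman aged 40-50 in zip 90210 narrows the diagnosis column to three rows, which is exactly the attribute-disclosure risk described under “Sensitive” above.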
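And as a rough illustration of the “secret shares” idea behind secure multi-party computation, the second sketch below uses simple additive secret sharing over a large prime. The parties and numbers are hypothetical, and a real MPC protocol involves considerably more machinery:

```python
import secrets

PRIME = 2**61 - 1  # a large prime; all arithmetic is done modulo this value

def share(value, n_parties=3):
    # Split a private value into n additive secret shares that sum to it mod PRIME.
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Two hypothetical hospitals each hold a private patient count.
hospital_a, hospital_b = 120, 75

shares_a = share(hospital_a)
shares_b = share(hospital_b)

# Each of the three compute parties adds the shares it received; no single
# party ever sees 120 or 75, only random-looking numbers.
partial_sums = [(a + b) % PRIME for a, b in zip(shares_a, shares_b)]

# Recombining the partial results reveals only the joint total.
print(sum(partial_sums) % PRIME)  # 195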

Banking and fraud detection; what is the solution?

As the year comes to a close, we must reflect on the most significant events in the world of privacy and data science so that we can learn from the challenges and improve moving forward.

In the past year, General Data Protection Regulation (GDPR) has had the most significant impact on data-driven businesses. The privacy law has transformed data analytics capacities and inspired a series of sweeping legislation worldwide: CCPA in the United States, LGPD in Brazil, and PDPB in India. Not only has this regulation moved the needle on privacy management and prioritization, but it has knocked major companies to the ground with harsh fines. 

Since its implementation in 2018, €405,871,210 in fines have been levied against violators, signalling that data protection supervisory authorities have no mercy in their pursuit of unethical and illegal business practices. This is only the beginning: the longer the law is in force, the stricter regulatory authorities will become. With the next wave of laws taking effect on January 1, 2020, businesses can expect to feel pressure from all directions, not just the European Union.

 

The two most breached GDPR requirements are Article 5 and Article 32.

These articles place importance on maintaining data for only as long as is necessary and seek to ensure that businesses implement advanced measures to secure data. They also signal the business value of anonymization and pseudonymization. After all, once data has been anonymized (de-identified), it is no longer considered personal, and GDPR no longer applies.

Article 5 affirms that data shall be “kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed.”

Article 32 references the importance of “the pseudonymization and encryption of personal data.”
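To give a sense of what pseudonymization can look like in practice, here is a minimal sketch in which a direct identifier is replaced with a keyed hash while the key is stored separately. The field names and key are hypothetical, and this is an illustration rather than a compliance recipe:

```python
import hmac
import hashlib

# Hypothetical secret key; to count as pseudonymization under GDPR, this
# "additional information" must be stored separately from the dataset and
# protected by technical and organizational measures.
PSEUDONYMIZATION_KEY = b"store-this-key-somewhere-else"

def pseudonymize(identifier: str) -> str:
    # Replace a direct identifier (e.g. an email address) with a keyed hash.
    return hmac.new(PSEUDONYMIZATION_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "jane.doe@example.com", "purchase_total": 42.50}
record["email"] = pseudonymize(record["email"])
print(record)  # the purchase value is intact, but the email is no longer readable
```

Because the key makes re-identification possible, pseudonymized data is still personal data under GDPR, which is why Article 32 pairs pseudonymization with encryption and other safeguards rather than treating it as full anonymization.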

The frequency with which these articles are breached signals the need for risk-aware anonymization to ensure compliance. Businesses urgently need to implement a data anonymization solution that optimizes both privacy risk reduction and data value preservation. This will allow them to measure the risk of their datasets, apply advanced anonymization techniques, and minimize the analytical value lost in the process.

If this is implemented, data collection on EU citizens remains possible in the GDPR era, and businesses can continue to obtain insights without risking their reputation and revenue, in a way that respects privacy.

Sadly, not everyone has gotten the message, as nearly 130 fines have been issued so far.

The top five regulatory fines

GDPR carries a weighty fine: 4% of a business’s annual global turnover or €20M, whichever is greater. A fine of this size could significantly derail a business, and when paired with brand and reputational damage, it is evident that GDPR penalties should encourage businesses to rethink the way they handle data.

1. €204.6M: British Airways

Article 32: Insufficient technical and organizational measures to ensure information security

User traffic was directed to a fraudulent site because of improper security measures, compromising 500,000 customers’ personal data. 

 2. €110.3M: Marriott International

Article 32: Insufficient technical and organizational measures to ensure information security

The records of 339 million guests were exposed in a data breach resulting from insufficient due diligence and a lack of adequate security measures.

3. €50M: Google

Article 13, 14, 6, 5: Insufficient legal basis for data processing

Google was found to have breached articles 13, 14, 6, and 5 because it created user accounts during the configuration stage of Android phones without obtaining meaningful consent. They then processed this information without a legal basis while lacking transparency and providing insufficient information.

4. €18M: Austrian Post

Article 5, 6: Insufficient legal basis for data processing

Austrian Post created more than three million profiles of Austrians and resold their personal information to third parties, such as political parties. The data included home addresses, personal preferences, habits, and party affinity.

5. €14.5M: Deutsche Wohnen SE

Article 5, 25: Non-compliance with general data processing principles

Deutsche Wohnen stored tenant data in an archive system that was not equipped to delete information that was no longer necessary. This made it possible to have unauthorized access to years-old sensitive information, like tax records and health insurance, for purposes beyond those described at the original point of collection.

Privacy laws like GDPR seek to restrict data controllers from gaining access to personally identifiable information without consent and to prevent data from being handled in ways the subject is unaware of. If these fines teach us anything, it is that investing in technical and organizational measures is a must today. Many of these fines could have been avoided had businesses implemented Privacy by Design: privacy must be considered throughout the business cycle, from conception to consumer use.

Businesses cannot afford to risk violations. With risk-aware privacy software, they can continue to analyze data while protecting privacy, backed by a quantified privacy risk score.

Resolution idea for next year: Avoid ending up on this list in 2020 by adopting risk-aware anonymization.


Differential Privacy in the Decennial U.S. Census

On April 1st, 2020, people across the United States will receive the decennial census to complete. There are minimal changes to the census itself, but large-scale changes in how each person’s privacy is protected and managed.

Since the United States’ first census in 1790, public attitudes towards privacy have changed drastically. As the world shifts further into a technological future, determining how to protect 327 million individuals’ data is the U.S. Census Bureau’s most important decision.

What is the census?

The U.S. Census is a decennial survey sent to every U.S. resident. Its primary purpose is to determine the number of congressional seats assigned to each state. The census also helps determine the proper distribution of federal funds, as well as disaster preparation, housing development, job markets, and community needs. Residents are asked questions such as the number of people in their household, their ages, and their gender.

Census data has many use cases outside Congress. The information it provides helps determine the introduction of specific protocols within a city, town, or state. This includes deciding how to prepare for disasters based on population density and what type of care an area’s demographic needs (e.g. an influx of new mothers may call for more daycares, while an ageing population would require more senior living centers). This type of information is critical to how these areas function.

How has privacy been dealt with previously? 

Privacy has been an essential part of the census’s history since 1920. In 1952, the U.S. Census Bureau instituted an agreement that personally identifying information is to be kept confidential for 72 years. From 1970 to 1990, the Bureau implemented full data table suppression to protect access to the data.

Since 2000, the Bureau has applied a privacy technique called ‘data swapping,’ which swaps quasi-identifiers, such as a person’s race, between records. It is unknown how many profiles are masked using data swapping in these datasets.

There is no public evidence of individuals being re-identified from the U.S. Census, or of any other successful privacy attacks; however, the possibility remains. Using its 2010 census data, the Bureau performed a reconstruction attack that was able to re-identify 46% of the U.S. population.

Previously, the Bureau typically released aggregate-level data and implemented various disclosure-avoidance techniques, including collapsing data or suppressing variables.

The U.S. Census Bureau has released an infographic highlighting its privacy history.

What is Differential Privacy and how will it be used?

In 2018, the Bureau released its plans to use differential privacy as its privacy-protection method.

Differential privacy is a privacy model that provides a mathematical guarantee: the published results are statistically almost identical whether or not any one individual is included in the dataset, so no individual can be singled out. In practice, this is achieved through noise injection or the creation of synthetic data. The Bureau will apply differential privacy in a way that balances privacy loss against data accuracy.

The Bureau has said that differential privacy will not change the total population count per state. However, smaller towns or counties will have noise injected, which may alter their populations in the released dataset. Other figures that will not change include the number of people above and below voting age, the number of vacant houses, and the number of householders.
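To make the noise-injection idea concrete, here is a simplified sketch of adding Laplace noise to hypothetical county counts while keeping the state total fixed. It illustrates the general technique only, not the Bureau’s actual TopDown algorithm, and the counts and epsilon value are made up:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical county populations within one state.
county_counts = np.array([1200.0, 85000.0, 430.0, 23000.0])

epsilon = 0.5      # privacy-loss parameter: smaller epsilon means more noise
sensitivity = 1.0  # adding or removing one person changes a count by at most 1

# Laplace mechanism: perturb each county count with noise scaled to sensitivity/epsilon.
noisy_counts = county_counts + rng.laplace(scale=sensitivity / epsilon,
                                           size=county_counts.shape)

# Crude stand-in for the Bureau's invariants: rescale so the state total is
# unchanged even though individual county figures are perturbed.
noisy_counts *= county_counts.sum() / noisy_counts.sum()

print(np.round(noisy_counts))  # small counties shift noticeably, the total does not
```

The small counties move by a few people while the large county barely changes in relative terms, which mirrors the Bureau’s point that state totals stay exact while small-area figures absorb the noise.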

What are the concerns?

Many concerns have been expressed by citizens and professionals alike. Most stem from the worry that the data will be altered enough that information used in critical situations, such as disaster relief, is considerably changed, affecting how citizens can be reached when it matters most.

The U.S. Census Bureau released a paper highlighting the main concerns with deploying differential privacy on the dataset. These concerns include:

  • Obtaining qualified personnel and a suitable computing environment
  • The difficulty of supporting all existing uses of the confidential data
  • Lack of release mechanisms that align with data user needs 
  • Expectations on the part of data users that they will have access to microdata
  • Difficulty in setting the privacy-loss parameter (epsilon)
  • Lack of tools and trained individuals to verify the correctness of differential privacy implementations

The Bureau is continuing to work through the issues listed above. Many people remain concerned that the data is being altered. However, as one website puts it, “there’s been inaccuracies in the data forever. Differential privacy just lets the Bureau be transparent about how much it’s fiddled with it.”

Despite the many circulating concerns about differential privacy, the Bureau has stated that this census is the easiest for it to make differentially private.





Facial recognition, data marketplaces and AI changing the future of data privacy

With the emerging artificial intelligence (AI) market comes the ever-present privacy discourse. Data regulations are being introduced left and right, but while effective, they do not yet account for growing technologies like facial recognition or data marketplaces.

Companies like Clearview AI are once again making headlines after receiving cease-and-desist letters from Big Tech, despite there being no current facial recognition laws they are violating. Meanwhile, Nature released an article calling for an international code of conduct for genomic research aggregation. And at the intersection of AI and healthcare, Microsoft has announced a $40-million AI for Health initiative.

Facial recognition company hit with cease-and-desist  

A few weeks ago, we released a blog introducing the facial recognition start-up, Clearview AI, as a threat to privacy.

Since then, Clearview AI has continued to make headlines and, most recently, has received cease-and-desist letters from Big Tech companies like Google, Facebook, and Twitter.

To recap, Clearview AI is a facial recognition company that has created a database of over 3 billion searchable faces scraped from different social media platforms. The company has introduced its software in more than 600 police departments across Canada and the US.

The company’s CEO, Hoan Ton-That, has repeatedly defended his company, telling CBS:

“Google can pull in information from all different websites, so if it’s public, you know, and it’s out there, it could be inside Google search engine it can be inside ours as well.”

Google responded that this comparison was ‘inaccurate,’ noting that it is a public search engine that gives sites choices about what they put out and opportunities to withdraw images, options Clearview does not provide. Clearview goes as far as keeping images in its database after they have been deleted from the source.

While Google and Facebook have both sent Clearview a cease-and-desist, Clearview maintains that it is within its First Amendment rights to use the information. One privacy attorney told CNET, “I don’t really buy it. It’s really frightening if we get into a world where someone can say, ‘The First Amendment allows me to violate everyone’s privacy.’”

While cities like San Francisco have started banning facial recognition, there are currently no federal laws addressing it as an issue, thus allowing more leeway for companies like Clearview AI to create potentially dangerous software.  

Opening up genomic data for researchers across the world

With these new healthcare initiatives, privacy becomes more relevant than ever. Healthcare data contains some of an individual’s most sensitive information, so the idea of Big Tech buying and selling such personal data is unsettling.

Last week, Nature, an international journal of science, reported that over 800 terabytes of genomic data are now available to investigators all over the world. The eight authors worked explicitly to protect the privacy of the thousands of patients and volunteers who consented to have their data used in this research.

The article reports that the six-year collection of 2,658 cancer genomes across 468 institutions in 34 countries is creating an open market for genomic data. The project, called the Pan-Cancer Analysis of Whole Genomes (PCAWG), was the first attempt to aggregate a variety of subprojects and release a dataset globally.

A significant emphasis of this article was on the lack of clarity within the healthcare research community on how to protect data in compliance with the ongoing changes to privacy legislation.

Some of the challenges for these genomic marketplaces lie not only in complying with the variety of privacy legislation but also in ensuring that no individual can be re-identified from the information. Protecting patient data is not just a legislative issue but a moral one.

Most of the uncertainty concerned what vetting should occur before access to the information is granted, and what checks should be made before data is shared internationally.

As the article says, “Genomic researchers urgently need clear data-sharing rules that are harmonized across jurisdictions.” The report calls for an international code of conduct to overcome the hurdles posed by the different emerging privacy regulations.

The article also notes that the Biobanking and BioMolecular Resources Research Infrastructure (BBMRI-ERIC) announced back in 2017 that it would develop an EU Code of Conduct on Health-Related Data, which has yet to be completed and approved.

Microsoft to add another installment to AI for Good

The ability to collect patient data and share it in an open market for researchers and doctors is helping diagnose and cure patients faster than ever before. In addition, AI is seen as another vital tool for the growing healthcare industry.

Last week, Microsoft announced the fifth installment of its ‘AI for Good’ project: ‘AI for Health.’ Like its predecessors, this program will support healthcare initiatives by providing access to cash grants, AI tools, cloud computing, and Microsoft researchers.

The project will focus on three AI strategies:

  • Accelerating medical research
  • Increasing the understanding of mortality to guard against global health crises
  • Reducing health injustices 

The program will emphasize support for individual non-profits and under-served communities. In a video, Microsoft also highlighted its focus on addressing Sudden Infant Death Syndrome and eliminating leprosy and diabetic retinopathy-driven blindness, in partnership with different not-for-profits.

AI is becoming essential to healthcare, and the industry generates a great deal of data that companies like Microsoft are utilizing. With this, privacy has to remain at the forefront.

As with the genomic data described by Nature, protecting user information is both critical and complicated when organizations want to use the data’s analytical value while complying with privacy regulations. Microsoft has announced that it will use differential privacy as its privacy solution.

Like Microsoft, we at CryptoNumerics use differential privacy as a method of anonymization that preserves data value. Learn more about differential privacy and CryptoNumeric solutions.

 




2 ways in which organizations can open their data in a privacy protected way

Opening data means making it accessible for any person to view, use, and share. While seemingly daunting, opening data helps researchers, scientists, governments, and companies better provide for the greater good of society. 

The McKinsey Global Institute predicted that open data could unlock USD 5 trillion per year in economic value globally. However, opening data can’t happen without privacy protection.

Data marketplaces have begun popping up to make data more accessible and, in some cases, to monetize it. These marketplaces are collections of data, with a degree of privacy protection, organized for buying and selling between companies for analytics purposes.

After ensuring your company’s user data is properly privacy-protected, marketplaces are an excellent way to generate additional revenue, open up partnerships, and expand the value of your data.

Healthcare data marketplaces

Healthcare is an industry that has a lot to benefit from opening data. Making more data available can help find cures for rare diseases, monetizing data can provide much needed financial resources to hospitals, and letting external experts analyze data can lead to discoveries.

To take advantage of this opportunity, Mayo Clinic, a not-for-profit, academic medical center committed to clinical practice, education, and research, announced the launch of a healthcare data platform. The medical center is looking to digitize 25 million pathology slides in the next two years, creating the largest source of labeled medical data in the world that is easily accessible for doctors and other researchers to find the information necessary to diagnose or educate. 

Mayo Clinic is emphasizing the importance of eliminating the risk of re-identification of Personal Health Information (PHI) while maintaining the value of the data. 

Marketplaces in other industries

Data marketplaces have appeared in other industries, from finance to marketing, and government. The available information is beneficial to everyone involved.

Governments have been leading the open data movement; for example, the United States Census Bureau has created the leading source of statistical information about US citizens. Through its website, researchers can find data on employment, population, and other statistics relevant to the American people. This data is collected and de-identified so that no American can be re-identified, while the information remains valuable.

On the marketing side, there are a couple of examples: Salesforce launched Data Studio, and Oracle created the Oracle Data Marketplace. These projects allow companies to buy and sell data to better understand their customers and marketing activities.

How is CryptoNumerics contributing to open data?

Recently, we partnered with Clearsense, a healthcare data analytics company that is reimagining how healthcare organizations manage and utilize their data.

Clearsense helps healthcare organizations unlock the power of their disparate data sources to lower cost, increase revenue, and improve outcomes. 

Through the use of our products, CN-Protect and CN-Insight, Clearsense helps its healthcare partners in two ways:

  • Anonymize their data so that it can be shared. Following HIPAA standards and using state-of-the-art privacy-protection techniques, the original datasets are transformed, resulting in privacy-protected datasets that preserve their analytical value.
  • Perform privacy-protected analytics. There are cases in which various datasets need to be combined; however, due to regulatory restrictions, these datasets cannot be moved, limiting their usefulness. With the help of CN-Insight, Clearsense can overcome this challenge and perform analytics on datasets as if they were combined, without relocating them.

Clearsense can now offer its customers a way to open up their data that is both compliant with regulations and cost-effective.

By protecting data privacy while maintaining its value, opening up collected information helps move us toward a privacy-safe future that benefits from the enormous amounts of data generated every second.

To learn more about our partnership with Clearsense, watch our webinar Facilitating Multi-Site Research: Privacy, HIPAA and Data Residency.

To learn more about the Mayo Clinic project, read our blog Two Paths of Data Monetization: Exploitation or Protection.

 




CCPA 1 month in review

The California Consumer Privacy Act (CCPA) is privacy legislation that regulates companies that collect and process the data of California residents, even if the company is based elsewhere. The law requires that consumers be given the option to opt out of data collection and selling, and/or to have their data completely removed from those datasets.

As well, any data that is collected still has to be protected. Not only does this protect consumers, but it makes it easier for companies to comply with data deletion requests. 

While CCPA came into effect on January 1st, it has yet to create the waves in privacy that many were hoping for. 

What is happening to my data privacy? 

As of right now, not too much. Many large companies, such as Facebook, have made changes to their privacy policies in order to be compliant, but many others have been slow to do so. The rules of compliance remain a work in progress, generating confusion and a slow start for some companies in fulfilling the changing law.

Mary Stone Ross, associate director of the Electronic Privacy Information Center, says that enforcement of CCPA will likely not start for months and that the program will be underfunded. On top of that, it appears that prosecution of CCPA cases may be limited to just three cases per year.

Because of this, CCPA enforcement will not begin until July, even though the law has already taken effect.

Part of the legislation includes the opportunity to request my data. Is this something companies have started abiding by? 

While many companies are complying with CCPA and returning user data, others are making the interaction more complicated than necessary. Some companies are redirecting their customers to multiple outside organizations while others are offering to send data and then never following through. 

One writer at the Guardian requested her data from Instagram, and while she received 3.92GB of data, there was plenty of information that the photo-sharing giant left out of her report.

Despite the 8,000 photos, direct messages, and search history, there was not much that couldn’t already be found in the app. The company failed to send the metadata that its data policy states it stores, which could include information such as where photos were taken.

Instagram is not the only application to send incomplete information when requested. Spotify, a leading music streaming platform, complies with CCPA in sharing data. However, after denying one user’s original request, the platform responded with a light 4.7-megabyte file, despite the person having a nine-year-old account.

Another social media platform, Twitter, sent users their files in JavaScript, making it impossible for users without coding knowledge to understand the contents of their Twitter history.

Such companies are getting away with complying at the bare minimum, and they are allowed to do so. Companies like Instagram can send snippets of data when requested, and users cannot prove that they did not receive all of it.

Because CCPA is not yet fully enforced, companies are pushing users into thinking they are abiding by the law without adequately protecting their data.

Is my data still being sold? 

CCPA requires that companies provide users with the opportunity to opt out of data sharing and selling. However, in many cases, this information is buried in small print and hard for a user to find.

Data aggregators have partnered with companies participating in data sharing and are the go-to when users want to opt out of it.

Acxiom is an example of a company easing the burden for consumers who want their data back. After a user submits their information on the Acxiom site, the authorized agent scours sites, requesting the deletion or viewing of their data.

The issue with sites such as Acxiom is that the majority of internet users are unfamiliar with these types of applications. Thus, finding ways to view and delete your data becomes exhausting. 

The average Internet user spends over six hours online per day. With attention spans decreasing, the number of websites one person visits per day could easily exceed 50. Users visiting a webpage for a single article, or for only a few minutes, will most likely not spend extra time searching for a ‘Do Not Sell’ link.

Because of this, companies remain inclined to hide the option for users to take control of their data. And while CCPA should benefit the average user, its actual impact remains unclear.
