Approach CCPA Amendments as a Competitive Advantage, Not a Compliance Overhead


Following the CCPA amendments, here is a practical guide to understanding the business advantages of de-identified data and to turning privacy risk into an advantage for data-driven organizations.

“Separate your ‘front end’ and ‘back end’ into two separate streams of CCPA compliance work.”
“Take data lakes and warehouses out of scope for CCPA.”
“Approach CCPA as a competitive advantage, rather than a compliance overhead, for your back-end compliance requirements.”

This blog will summarise the amendments and clarifications relating to ‘de-identified data’, and will then focus on the business advantages of implementing more automated, ‘state-of-the-art’ tools as part of the organisational and technical controls required to meet the CCPA’s legal specifications for de-identified data.

The verdict is in: Only five CCPA amendments made it through the California legislature.  These amendments are limited in scope. They make only incremental changes to the CCPA – and, in some cases, expand the private right of action for consumers. They do not fundamentally change the fact that the CCPA will impose substantial new compliance obligations on companies. As expected, a largely intact CCPA will come into effect on January 1, 2020. 

Organizations that will be affected by the CCPA can no longer justify delaying or adopting a wait-and-see policy toward potential further amendments. It was tempting for enterprises to use potential further clarifications as an excuse to put off real work toward becoming CCPA compliant. But time’s up. They need to initiate CCPA compliance programs, and start implementing the necessary organisational and technical controls, today.

With this being the case, organizations understandably see the CCPA as a compliance overhead and a business restriction: something that brings additional costs and prevents them from doing business in the way they’re used to.

But here’s the good news: Viewed the right way, CCPA can be not only a compliance overhead, but also a competitive advantage.

 

How can enterprises turn CCPA amendments into an advantage?


All sensible companies should be ensuring they can meet the new CCPA obligations, particularly obligations that may require more significant lead time. They should be implementing the organisational and technical controls required to meet the finer points of compliance: the right to know, the right to delete, the right to opt out of sale, and so on.

But they should also be seeking to gain the advantages that CCPA will bring. 

Let’s break this down. The key here is back-end uses of consumer information. CCPA places restrictions on how and why a company can use consumer data beyond the primary purpose for which it was originally collected. Most modern organisations are heavily data-driven, and they leverage data science and data analytics tools and environments. If they aren’t careful, they will find that their data science and data analytics projects are heavily impacted by CCPA.

However, if they approach CCPA compliance correctly, organizations can continue to reap the benefits of their data science and data analytics projects in which they have already invested heavily. They can do this by properly de-identifying their data so it falls out of scope for CCPA. Now, CCPA compliance becomes a business advantage, not a compliance overhead.

Remember CCPA’s Disclosure Requirements: At or before the point of collection, businesses must inform consumers of the categories and specific pieces of personal information they are collecting; the sources from which that information is collected; the purpose of collecting or selling such personal information; the categories of personal information sold; and the categories of third parties with whom the personal information is shared. 

In light of the recent CCPA amendments, here are three areas where organisations can comply with the CCPA Disclosure Requirements, and thus gain an advantage, by ensuring that the data in their data science and data analytics environments is de-identified:

  • Definitions of “personal information” and “publicly available information.”
  • Exemption for business customers and clarification on de-identified information.
  • Data breach notification requirements and scope.

 

Definitions of “personal information” and “publicly available information” – AB874


AB874 includes several helpful clarifications with respect to the scope of “personal information” regulated under CCPA. Previously, “personal information” was defined as including all information that “identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.”  

The amended definition of “personal information” clarifies that information must be “reasonably capable of being associated with” a particular consumer or household. Separately, the bill clarifies that “publicly available information” means information that is lawfully made available from federal, state, or local records, regardless of whether the data is used for a purpose that is compatible with the purpose for which the data was made publicly available. Further, the bill revises the definition of “personal information” to clarify that it does not include de-identified or aggregate information.

 

Exemption for business customers and clarification on de-identified information – AB1335


AB1335 clarifies that the CCPA’s private right of action does not apply if personal information is either encrypted or redacted. It also makes certain technical corrections, including revising the exemption for activities involving consumer reports that are regulated under the Fair Credit Reporting Act, and clarifying that de-identified or aggregate consumer information is excluded from the definition of “personal information.”

 

Data breach notification requirements – AB1130 


AB1130 clarifies the definition of “personal information” under California’s data breach notification law as including biometric data (such as “a fingerprint, retina, or iris image”), tax identification numbers, passport numbers, military identification numbers, and unique identification numbers issued on a government document. 

Additionally, there is a significant gem hidden in the detail, clarifying CCPA Section 1798.150: Class-action lawsuits may not be brought for data breaches when the breached personal information is either encrypted or redacted (it need not be both), and de-identified and aggregate information are exempt from the statute.

 

Making the CCPA amendments work for you


These amendments clarify a broader truth about CCPA: It is imperative that organizations establish controls to prove that personal information can be transformed to meet the CCPA legal specifications for de-identified data. This is the only way for the business advantages of data science and data analytics to continue to accrue.

Under the CCPA, information is only de-identified if it “cannot reasonably identify, relate to, describe, be capable of being associated with, or be linked, directly or indirectly, to a particular consumer.” In addition, the business using the data must adopt technical and procedural safeguards to prevent its re-identification, have business processes to prohibit re-identification, and must not make any attempt to re-identify the data. Many businesses today treat information as “de-identified” even when it relates to a specific but unidentified individual; that interpretation will not meet this standard.

The clock is ticking. All businesses that want to continue to maximize their data science and data analytics need to start moving toward meeting this de-identification standard. Here are some immediate questions that every organization should be asking itself.

  1. Do you know what information your company holds on its consumers in your data lakes and data warehouses?  
  2. Do you understand each and every purpose for which you are holding and processing consumer data?
  3. Are you profiling or aggregating consumer data in your data science and analytics projects for an additional purpose beyond what the data was first collected for?
  4. Are you combining or linking consumer data with other available information that increases the risks of identification of the consumer?
  5. Do you have a need to re-identify consumer data that was de-identified or aggregated?
  6. Are you sharing or selling your customer data?
  7. Are your data protection, encryption and redaction methods sufficient to prevent the risks of re-identification, and to meet the CCPA legal specification of de-identified data?

 

Practical state-of-the-art automated approaches for the back-end of CCPA Compliance


Data discovery and classification projects can do a lot for front-end CCPA compliance. But newer, state-of-the-art, automated solutions are the best answer to CCPA back-end compliance. These approaches leverage ML techniques that can effectively perform automated and instant metadata classification on your structured data; help you instantly understand aspects of the data relating to CCPA compliance requirements; and identify the risks of re-identification in your data science and data analytics environments. 

These automated and instant metadata processes, combined with a systems-based understanding of the risk of re-identification, enable you to transform your data to meet the legal specifications of ‘de-identified’ information. This happens by applying modern, integrated privacy protection actions such as generalisation, hierarchies, redaction, and differential privacy, ensuring the data remains de-identified while still retaining its utility and analytical value for your data science and data analytics projects.
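
To make this concrete, here is a minimal sketch, in Python with pandas, of the kinds of privacy protection actions described above: generalisation, hierarchies, and redaction. The column names, bin edges, and data are illustrative assumptions, not a description of any particular product.

```python
# A minimal sketch of generalisation, hierarchies, and redaction on
# structured data. Column names and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "age":       [34, 36, 41, 45, 52],
    "zip":       ["94107", "94110", "94612", "94618", "95014"],
    "diagnosis": ["flu", "flu", "cold", "flu", "cold"],
})

# Generalisation: replace exact ages with 10-year bands.
df["age_band"] = pd.cut(df["age"], bins=range(30, 71, 10), right=False)

# Hierarchy: truncate 5-digit ZIP codes to a coarser region level.
df["zip_region"] = df["zip"].str[:3] + "**"

# Redaction: drop the precise values once the generalised forms exist.
deidentified = df[["age_band", "zip_region", "diagnosis"]]

# The generalised data still supports aggregate analytics, e.g. cases by region.
print(deidentified.groupby("zip_region")["diagnosis"].value_counts())
```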




How Third Parties who Act as Brokers of Data will Struggle as the Future of Data Collaboration Changes


Today, everyone understands that, as The Economist put it, “data is the new oil.”

And few understand this better than data aggregators. Data aggregators can loosely be defined as third parties who act as brokers of data to other businesses. Verisk Analytics is perhaps the largest and best-known example, but many other companies exist as well: Yodlee, Plaid, MX and many more.

These data aggregators understand the importance of data, and how the right data can be leveraged to create value through data science for consumers and companies alike. But the future of data collaboration is starting to look very different. Their businesses may well start to struggle.

Why data aggregators face a tricky future

As the power of data has become more widely recognized, so too has the importance of privacy. In 2018, the European Union implemented GDPR (General Data Protection Regulation), the most comprehensive data privacy regulation of its kind, with broad-sweeping jurisdiction. GDPR did its work right away, as a succession of privacy leaks across multiple industries drew highly negative media coverage. Facebook, for its part, suffered a $5 billion fine from the US FTC.

Where once many were skeptical, today few people deny the importance of data privacy. Privacy itself has become a separate dimension, distinct from security. The data science community has come to understand that datasets must not only be secured against hackers, but also de-identified, to ensure no individual’s information is exposed as the data is shared.

In the new era of privacy controls, third party data aggregators will face two problems: 

  1. Privacy Protection Requirements
    Using a third party to perform data collaboration is a flawed approach. No matter what regulations or protections you enforce, you are still moving your data out of your data centers and exposing your raw information (which contains both PII and IP-sensitive items) to someone else. Ultimately, third-party consortiums do not maintain a “privacy-by-design” framework, which is the standard required for GDPR compliance.

  2. Consumers Don’t Consent to Have their Data Used
    The GDPR requires that collectors of data also collect the consent of their consumers for its use. If I have information that I’ve collected, I can only use it for the specific purpose the consumer has allowed for. I cannot just share it with anyone, or use it however I like.

These challenges are serious obstacles to data collaboration, and will affect data aggregators the most because of their unique value proposition. Many see data aggregators as uniquely flawed in their dealings with these issues, and that has generated negative attention. A recent Nevada state law, for example, required qualifying data brokers to sign up for a public registry.

There is a need for these aggregators to come out ahead of this, in order to overcome challenges to their business model, and to avoid negative media attention.

How CryptoNumerics can help

At CryptoNumerics, we recognize the genuine ethical need for privacy. But we also recognize the vast good that data science can provide. In our opinion, no one should have to choose one over the other. Hence we have developed new technology that enables both.

CN-Insight uses a concept we refer to as Virtual Data Collaboration. Using technologies like secure multi-party computation and secret-sharing cryptography, CN-Insight enables companies to perform machine learning and data science across distributed datasets. Instead of succumbing to the deficits of the third-party consortium model, we enable companies to keep their datasets on-prem, without the need for co-location or movement of any kind, and without exposing any raw information. The datasets are matched using feature engineering, and our technology enables enterprises to build models as if the datasets were combined.
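
To give a feel for the kind of primitive behind this, here is a minimal sketch of additive secret sharing, one of the building blocks of secure multi-party computation. It is a conceptual illustration under simplified assumptions, not CN-Insight’s actual protocol.

```python
# Additive secret sharing: a value is split into random shares that reveal
# nothing individually but sum to the original modulo a large prime.
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo this prime

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n_parties random shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % PRIME

# Two companies can jointly compute a sum without revealing their inputs:
# each splits its value into shares, the parties add their shares locally,
# and only the recombined total is ever revealed.
a_shares = share(1200, 3)
b_shares = share(3400, 3)
combined = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
print(reconstruct(combined))  # 4600, with no party ever seeing 1200 or 3400
```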

Data aggregators must give these challenges serious thought, and make use of these new technology innovations in order to stay ahead of a new inflection point in their industry. Privacy is here to stay, and as the data brokers that lead the industry, they have an opportunity to play a powerful role in leading the way forward, and improving their business future.



What do Trump, Google, and Facebook Have in Common?


This year, the Trump Administration declared the need for a national privacy law to supersede a patchwork of state laws. But, as the year comes to a close, and amidst the impeachment inquiry, time is running out. Meanwhile, Google plans to roll out encrypted web addresses, and Facebook stalls research into social media’s effect on democracy. Do these three seek privacy or power?

The Trump Administration, Google, and Facebook claim that privacy is a priority, and… well… we’re still waiting for the proof. Over the last year, the news has been awash with privacy scandals and data breaches. Every day we hear promises that privacy is a priority and that a national privacy law is coming, but so far, the evidence of action is lacking. This raises the question: are politicians and businesses using the guise of “privacy” to manipulate people? Let’s take a closer look.

Congress and the Trump Administration: National Privacy Law

Earlier this year, Congress and the Trump Administration agreed they wanted a new federal privacy law to protect individuals online. This rare occurrence was even supported and campaigned for by major tech firms (read our blog “What is your data worth” to learn more). However, despite months of talks, “a national privacy law is nowhere in sight [and] [t]he window to pass a law this year is now quickly closing.” (Source)

Disagreement over enforcement and state-level power are said to be holding back progress. Thus, while senators, including Roger Wicker, who chairs the Senate Commerce Committee, insist they are working hard, there are no public results; and with the impeachment inquiry, it is possible we will not see any for some time (Source). This means that the White House will likely miss their self-appointed deadline of January 2020, when the CCPA goes into effect.

Originally, this plan was designed to avoid a patchwork of state-level legislation that can make compliance challenging for businesses and weaken privacy protections. It is not a simple process, and since “Congress has never set an overarching national standard for how most companies gather and use data,” much work is needed to develop a framework to govern privacy on a national level (Source). However, there is evidence in Europe, with GDPR, that a large governing structure can successfully hold organizations accountable to privacy standards. But how much longer will US residents need to wait?

Google Encryption: Privacy or Power

Google has been trying to get an edge over the competition for years by leveraging the mass troves of user data it acquires. Undoubtedly, its work has led to innovation that has redefined the way our world works, but our privacy has paid the price. Like never before, our data has become the new global currency, and Google has had a central part to play in the matter.

Google has famously made privacy a priority and is currently working to enhance user privacy and security with encrypted web addresses.

Unencrypted web addresses are a major security risk, as they make it simple for malicious actors to intercept web traffic and use fake sites to gather data. However, in denying hackers this ability, power is given to companies like Google, which will be able to collect more user data than ever before. The risk is that “control of encrypted systems sits with Google and its competitors.” (Source)

This is because encryption cuts out the middle layer of ISPs and can change the mechanisms through which we access specific web pages. This could enable Google to become the centralized provider of encrypted DNS (Source).
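
For the curious, here is a small sketch of what a DNS-over-HTTPS (DoH) lookup looks like, using Google’s public JSON resolver endpoint. Because the query travels inside ordinary HTTPS, local intermediaries such as ISP resolvers never see which name was requested; the resolver operator, of course, does.

```python
# Resolve a hostname over HTTPS instead of plain-text DNS.
import requests

resp = requests.get(
    "https://dns.google/resolve",          # Google Public DNS JSON API
    params={"name": "example.com", "type": "A"},
)
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])  # resolved records
```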

Thus, while DoH is certainly a privacy and security upgrade over the current DNS system, shifting resolution from local intermediaries to major browser vendors centralizes user data, raising anti-competitive and child-protection concerns. Further, it diminishes law enforcement’s ability to blacklist dangerous sites and monitor those who visit them. It also opens new opportunities for hackers by reducing defenders’ ability to gather cybersecurity intelligence from malware activity, an integral part of fulfilling government-mandated regulation (Source).

Nonetheless, this feature will roll out in a few weeks as the new default, despite the desire of those with DoH concerns to wait until more is known about the potential fallout.

Facebook and the Disinformation Fact Checkers

Over the last few years, Facebook has developed a terrible reputation as one of the least privacy-centric companies in the world. But is that reputation accurate? After the Cambridge Analytica scandal, followed by an endless series of data privacy debacles, Facebook is now stalling its “disinformation fact-checkers” on the grounds of privacy problems.

In April of 2018, Mark Zuckerberg announced that the company would develop machine learning to detect and manage misinformation on Facebook (Source). It then promised to share this information with non-profit researchers, who would flag disinformation campaigns as part of an academic study on how social media is influencing democracies (Source).

To ensure that the data being shared could not be traced back to individuals, Facebook applied differential privacy techniques.
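
As a rough illustration of what differential privacy means in practice, here is a minimal sketch of the Laplace mechanism, the textbook building block: calibrated random noise is added to a query result so that no single individual’s presence in the data can be inferred. Facebook’s production system is certainly more sophisticated; the count and epsilon below are hypothetical.

```python
# The Laplace mechanism for a counting query.
import numpy as np

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (one person changes it by at most 1),
    so noise is drawn from Laplace(0, 1/epsilon).
    """
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# e.g. a hypothetical number of users who shared a given link
print(dp_count(10_482, epsilon=0.5))
```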

However, upon receiving this information, researchers complained that the data did not include enough detail about the disinformation campaigns to allow them to derive meaningful results. Some even insisted that Facebook was going against the original agreement (Source). As a result, some of the people funding this initiative are considering backing out.

Initially, Facebook was given a deadline of September 30 to provide the full data sets, or the entire research grants program would be shut down. While they have begun offering more data in response, the full data sets have not been provided.

A spokesperson from Facebook says, “This is one of the largest sets of links ever to be created for academic research on this topic. We are working hard to deliver on additional demographic fields while safeguarding individual people’s privacy.” (Source). 

While Facebook may be limiting academic research on democracies, perhaps it is finally prioritizing privacy. And at the end of the day, with an ethical framework to move forward, the impact of social media on democracy can still be measured through technological advancement and academic research without compromising privacy.

In the end, it is clear that privacy promises hold the potential to manipulate people into action. While the US government may not have a national privacy law anywhere in sight, the motives behind Google’s encrypted links may be questionable, and Facebook’s sudden prioritization of privacy may cut out democratic research, at least privacy is becoming a hot topic, and that holds promise for a privacy-centric future for the public.

The Business Incentives to Automate Privacy Compliance Under CCPA


On January 1, 2020, the California Consumer Privacy Act (CCPA) comes into effect. The CCPA is a sweeping piece of legislation, aimed at protecting the personal information of California residents. It is going to force businesses to make major changes to how they handle their data. 

Many of the CCPA’s regulations are built upon the bill’s conception of “personal information.” The CCPA defines personal information as “any information that identifies, relates to, describes, is capable of being associated with, or could be reasonably linked, directly or indirectly, with a particular consumer or household.” In the CCPA era, how businesses handle this personal information will define whether or not they stay compliant and successful.

The Wild West days of monetizing secondary data are over

CCPA compliance is multifaceted. But here is one of the biggest challenges it creates for businesses: the need to implement organizational and technical controls to demonstrate that data used for secondary purposes – such as data science, analytics and monetization – is truly de-personalized. 

Why? Because the CCPA says that data classified as personal information can’t be used for secondary purposes unless each user gives explicit consent for every distinct instance of use – an impossible undertaking for any business.

In effect, the real challenge here is that the cat is already out of the bag, and businesses need a way to catch it, and put it back in.

This is because until now, many data science, analytics, AI and ML projects have had open access to consumer data. The understandable appetite for greater business and customer access has led to a Wild West approach, where monetization teams have enjoyed limitless access to unprotected customer data. This is exactly the area of focus the CCPA wants to bring under control. The bill will force organizations to establish controls that protect consumer privacy rights. 

These controls will require minimizing or removing personal information (PI) to prevent the risk of re-identification. This is the only way to put the cat back into the bag, and the only way not to violate the CCPA.

But: organizations are desperate for their data to stay useful and monetizable. How can this balance be achieved?

How to de-identify data but retain its analytical value?

Governance processes and practices need to show CCPA compliance. But various forms of analytics, data science and data monetization are incredibly valuable to enterprises, powering precious business and customer insights. 

If you apply data security encryption-based approaches and techniques like hashing, you will reduce or remove the analytical value of the data for data science. No good; you lose the value. 

You want to de-identify the data, remove the risk of re-identification, and meet the legal specifications of de-identified data under the CCPA. But you want to do this while preserving analytical value.
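
A small sketch shows the difference. Hash digests preserve equality comparisons and nothing else, while generalisation keeps distribution-level analytics intact; the values are illustrative.

```python
# Hashing vs. generalisation on a numeric column.
import hashlib
import pandas as pd

ages = pd.Series([34, 36, 41, 45, 52])

# Hashing: opaque digests; means, ranges and distributions are all lost.
hashed = ages.astype(str).map(lambda v: hashlib.sha256(v.encode()).hexdigest())
print(hashed.head(2))

# Generalisation: age bands still support distribution-level analytics.
banded = pd.cut(ages, bins=[30, 40, 50, 60])
print(banded.value_counts())
```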

Both sides of the coin carry a financial imperative. The motivation for organizations to ensure their customer data is de-identified is critical, as the overheads and restrictions the CCPA places on the secondary use of personal data are prohibitive. Get it wrong, and what you believe to be de-identified data will in fact be in violation of the CCPA.

This means you will be on the wrong side of the law. And under CCPA, intentional violations can bring civil penalties of up to $7,500 per violation, and consumer lawsuits can result in statutory damages of up to $750 per consumer per incident.

However, keeping your secondary data usable is financially essential, as correctly de-identified data meeting the CCPA specification will be considered outside the scope of the CCPA. This means you can continue to use it for valuable analytics, data science and data monetization.

But you can only do this if the de-identification techniques you have used haven’t rendered the data unusable.

This is the central challenge CCPA creates: How can I de-identify my data and meet the legal specifications outlined under CCPA, but still leverage my data for important organizational initiatives? 

The CCPA creates the need for Privacy Automation

To be clear: the CCPA does not restrict an organization’s ability to collect, use, retain, sell, or disclose a consumer’s information that is de-identified or aggregated. However, the CCPA establishes a very high bar for claiming data is de-identified or aggregated. 

In practice, the only way to meet this bar will be through Privacy Automation: state-of-the-art tools for assessing the risk of re-identification; advanced privacy protection actions that retain the analytical value of datasets; and audit reporting. These and other techniques make up what is being termed ‘Privacy by Design’ and ‘Privacy by Default.’

In the CCPA era, a manual, ‘two eyes’ approach to assessing the risk of re-identification won’t cut it. The scale and the legal significance of proving privacy compliance under the CCPA is too great. 

Effective de-identification can be broken into three focus areas. You must:

  • Use a ‘state-of-the-art’ de-identification method. You need a process whereby consumer and personal data (as defined under CCPA) is transformed so that this data becomes de-personalized. This practice is at the heart of meeting, demonstrating and defending CCPA privacy compliance. This has to include cutting-edge privacy protection tools that retain the analytical value of the data for data science, rather than data encryption tools that break the analytical value of the data.
  • Assess the likelihood of re-identification: Research in 2000 showed that 87% of the U.S. population can be re-identified on the basis of their gender, ZIP code and date of birth. De-identifying direct identifiers alone still leaves an individual at risk of being identified from other information, whether inside or outside the dataset. Demonstrating the risk of re-identification using automated ‘state-of-the-art’ tools must be prioritized, as organizations can no longer depend on manual processes (a minimal sketch of such a check follows this list).
  • Implement Segregation of Duties: Companies need to ensure that customer data is only shared with departments and individuals who have a legitimate purpose in receiving the consumer personal information. They need to implement appropriate controls so that segregation of duties exists, and so that data required for secondary purposes is truly de-personalized and thus CCPA-compliant. 
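
As promised above, here is a minimal sketch of an automated re-identification risk check: compute the k-anonymity group size of every record across a set of quasi-identifiers. The column names and data are illustrative assumptions; a record whose group size is 1 is unique on those attributes, exactly the exposure the 87% finding describes.

```python
# Measure re-identification risk as k-anonymity over quasi-identifiers.
import pandas as pd

QUASI_IDENTIFIERS = ["gender", "zip", "birth_date"]

def group_sizes(df: pd.DataFrame) -> pd.Series:
    """Return k (the quasi-identifier group size) for each record."""
    return df.groupby(QUASI_IDENTIFIERS)[QUASI_IDENTIFIERS[0]].transform("size")

df = pd.DataFrame({
    "gender":     ["F", "F", "M", "M"],
    "zip":        ["94107", "94107", "94107", "94110"],
    "birth_date": ["1985-02-11", "1985-02-11", "1990-07-30", "1990-07-30"],
})

k = group_sizes(df)
print((k == 1).mean())  # fraction of records unique on the quasi-identifiers
```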

To get there, organizations must look to invest in automation and leverage new tools that instantly assess the risk of re-identification. These tools constitute an automated system that makes compliance watertight. They also offer the starting point for transforming data in a way that retains the much-needed analytical value for data science, and they form a key component of a privacy governance framework that can both demonstrate and easily defend privacy compliance, especially through automated, systems-based approaches to the risks of re-identification.

Post-CCPA, almost all privacy programs will require updating and modifying to accommodate the new requirements and to leverage the availability of new automated, state-of-the-art tools and systems.

These tools and systems will instantly assess the risk of re-identification, making compliance watertight while also enabling data science by preserving the insight value of datasets.

CryptoNumerics Privacy Automation helps resolve this CCPA dilemma by:

  • Promoting a better understanding of how it is possible to de-identify a dataset while still preserving its analytical value for data science.
  • Leveraging systems-based technology to assess the risks of re-identification of individuals in datasets.
  • Applying modern anonymization protection, using privacy protection actions such as generalisation, hierarchies, and differential privacy techniques, to demonstrate that datasets are fully anonymized.
  • Building Privacy Automation into the heart of your data compliance plans and strategy.
  • Making data protection by default and by design the heart of your compliance strategy and plan, and defending compliance with PIAs and audit reporting.

For more information, read our blog on the Power of AI and Big Data.




CryptoNumerics Partners with TrustArc on Privacy Insight Webinar


We’re excited to partner up with TrustArc on their Privacy Insight Series on Thursday, September 26th at 12pm ET to talk about “Leveraging the Power of Automated Intelligence for Privacy Management”! 

With the increasing prevalence of privacy technology, how can the privacy industry leverage the benefits of artificial intelligence and machine learning to drive efficiencies in privacy program management? Many papers have been written on managing the potential privacy issues of automated decision-making, but far fewer on how the profession can utilize the benefits of technology to automate and simplify privacy program management.

Privacy tools are starting to leverage technology to incorporate powerful algorithms to automate repetitive, time-consuming tasks. Automation can generate significant cost and time savings, increase quality, and free up the privacy office’s limited resources to focus on more substantive and strategic work. This session will bring together expert panelists who can share examples of leveraging intelligence within a wide variety of privacy management functions.

 

Key takeaways from this webinar:
  • Understand the difference between artificial intelligence, machine learning, intelligent systems and algorithms
  • Hear examples of the benefits of using intelligence to manage privacy compliance
  • Understand how to incorporate intelligence into your internal program and/or client programs to improve efficiencies

Register Now!

Can’t make it? Register anyway – TrustArc will automatically send you an email with both the slides and recording after the webinar.


This content was originally posted on TrustArc’s website. Click here to view the original post.



What is your data worth?


How much compensation would you require to give a company complete access to your data? New studies demonstrate that prescribing a price tag to data may be the wrong approach to fines for noncompliance. Meanwhile, 51 CEOs write an open letter to Congress to request a federal consumer data privacy law, and the Internet Association joins them in their campaign. At the same time, Facebook is caught using Bluetooth in the background to track users and drive up profits.

Would you want your friends to know every facet of your digital footprint? How about your:

  • Location
  • Visited sites
  • Searched illnesses
  • Devices connected to the internet
  • Content read
  • Religious views
  • Political views
  • Photos
  • Purchasing habits


How about strangers? No? We didn’t think so. Then the question remains: why are we sharing non-anonymized or improperly anonymized copies of our personal information with companies?

Today, many individuals are regularly sharing their data unconsciously with companies who collect it for profit. This data is used to monitor behaviour and profile you for targeted advertising that will make big data and tech companies, like Facebook, $30 per year in revenue per North American user (Source). Due to the profitability of data mining and the increasing number of nine-figure fines for data breaches, researchers have become fascinated by the economics of privacy. 

A 2019 study in the Journal of Consumer Policy questioned how users value their data. In the study, individuals stated they would only be willing to pay $5/month to protect personal data. While the low price tag may sound like privacy is a low priority, it is more likely that individuals believe their privacy should be a given, rather than something they have to pay to receive. This theory is corroborated by the fact that when the question was reversed – asking how much users would accept in exchange for full access to their data – the median response was $80/month (Source).

While this study demonstrates a clear value placed on data from the majority, some individuals attributed a much higher cost and others said they would share data for free. Thus, the study concluded that “both willingness to pay and willingness to accept measures are highly unreliable guides to the welfare effects of retaining or giving up data privacy.” (Source)

With traditional measures of economic value called into question as a basis for fines for data breaches and illegal data harvesting, other influential figures in data privacy research were asked how to hold corporations accountable to privacy standards. Rebecca Kelly Slaughter, Federal Trade Commission (FTC) Commissioner, stated that “injury to the public can be difficult to quantify in monetary terms in the case of privacy violations.” (Source)

Rohit Chopra, a fellow FTC commissioner, also explained that current levels of monetary fines are not a strong deterrent for companies like Facebook, as their business model will remain untouched. As a result, the loss could be recouped through the further monetization of personal data. Consequently, both commissioners suggested that holding Facebook executives personally liable would be a stronger approach (Source).

If no price can equate to the value of personal data, and fines do not deter prolific companies like Facebook, should we continue asking what data is worth? Alessandro Acquisti, of Carnegie Mellon University, suggests that an alternative way to look at data privacy is to view it as a human right. This model of thinking poses an interesting line of inquiry for both big data players and lawmakers, especially as federal data privacy legislation gains popularity in the US (Source).

On September 10, 51 top CEOs, members of Business Roundtable, an industry lobbying organization, sent an open letter to Congress to request a US federal data privacy law that would supersede state-level privacy laws to simplify product design, compliance, and data management. Amongst the CEOs were the executives from Amazon, IBM, Salesforce, Johnson & Johnson, Walmart, and Visa.  

Throughout the letter, the giants blamed the state-level patchwork of privacy regulations for the disorder of consumer privacy in the United States. Today, companies face an increasing number of state and jurisdictional laws that uphold varying standards to which organizations must comply. This, the companies argue, is inefficient at protecting citizens, whereas a federal consumer data privacy law would provide reliable and consistent protections for Americans.

The letter also goes so far as to offer a proposed Framework for Consumer Privacy Legislation that the CEOs believe should be the base for future legislation. This framework states that data privacy law should…

  1. Champion Consumer Privacy and Promote Accountability
  2. Foster Innovation and Competitiveness
  3. Harmonize Regulations
  4. Achieve Global Interoperability

While a unified and consistent method of holding American companies accountable could benefit users, many leading privacy advocates, and even some tech giants, have questioned the CEOs’ intentions, regarding the proposal as a method “to aggregate any privacy lawmaking under one roof, where lobby groups can water-down any meaningful user protections that may impact bottom lines.” (Source)

This pattern of a disingenuous push for a federal privacy law continued last week as the Internet Association (IA), a trade group funded by the largest tech companies worldwide, launched a campaign to request the same. Members are largely made up of companies who make a profit through the monetization of consumer data, including Google, Microsoft, Facebook, Amazon, and Uber (Source).

In an Electronic Frontier Foundation (EFF) article, this campaign was referred to as a “disingenuous ploy to undermine real progress on privacy being made around the country at the state level.” (Source) Should this occur, the federal law would supersede state laws, like The Illinois Biometric Information Privacy Act (BIPA) that makes it illegal to collect biometric data without opt-in consent, and the California Consumer Privacy Act (CCPA) which will give state residents the right to access and opt-out of the sale of their personal data (Source). 

In the last quarter alone, the IA spent close to USD $176,000 trying, without success, to weaken the CCPA before it takes effect. Now, in conjunction with Business Roundtable and TechNet, it has called for a “weak national ‘privacy’ law that will preempt stronger state laws.” (Source)

One of the companies campaigning to develop a national standard is Facebook, who is caught up, yet again, in a data privacy scandal.

Apple’s new iOS 13 update looks to rework the smartphone operating system to prioritize privacy for users (Source). Recent “sneak peeks” showed that it will notify users of background activity from third-party apps’ surveillance infrastructure, which is used to generate profit by profiling individuals beyond their app usage. The culprit highlighted, unsurprisingly, is Facebook, which has been caught using Bluetooth to track nearby users.

While this may not seem like a big deal, in “[m]atching Bluetooth (and wi-fi) IDs that share physical location [Facebook could] supplement the social graph it gleans by data-mining user-to-user activity on its platform.” (Source) Through this, Facebook can track not just your location, but the nature of your relationships with others. In pairing Bluetooth-gathered interpersonal interactions with social tracking (likes, followers, posts, messaging), Facebook can escalate its ability to monitor and predict human behaviour.

While you can opt-out of location services on Facebook, this means you cannot use all aspects of the app. For instance, Facebook Dating requires location services to be enabled, a clause that takes away a user’s ability to make a meaningful choice about maintaining their privacy (Source).

In notifying users about apps using their data in the background, iOS 13 looks to bring back a measure of control to the user by making them aware of potential malicious actions or breaches of privacy.

In the wake of this, Facebook’s reaction has tested the bounds of reality. In an attempt to get out of the hot seat, they have rebranded the new iOS notifications as “reminders” (Source) and, according to Forbes, un-ironically informed users “that if they protect their privacy it might have an adverse effect on Facebook’s ability to target ads and monetize user data.” (Source) At the same time, Facebook PR has also written that “We’ll continue to make it easier for you to control how and when you share your location,” as if to take credit for Apple’s new product development (Source).

With such comments, it is clear that in the upcoming months, we will see how much individuals value their privacy and convenience. Between the debate over the value of data, who should govern consumer privacy rights, and another privacy breach by Facebook, the relevance of the data privacy conversation is evident. To stay up to date, sign up for our monthly newsletter and keep an eye out for our weekly blogs on privacy news.
