Leveraging GDPR “Legitimate Interests Processing” for Data Science

The GDPR is not intended to be a compliance overhead for controllers and processors. It is intended to bring higher, more consistent standards and processes for the secure treatment of personal data, and fundamentally to protect the privacy rights of individuals. Nowhere is this more true than in emerging data science, analytics, AI and ML environments, where the vast number of data sources creates a higher risk of identifying an individual’s personal and sensitive information.

The GDPR requires that personal data be collected for “specified, explicit and legitimate purposes,” and that a data controller define a separate legal basis for each and every purpose for which, say, customer data is used. If a bank customer takes out a loan, the bank can only use the collected account and transactional data to manage that customer’s account and fulfil its obligations in offering the loan. This is colloquially referred to as the “primary purpose” for which the data is collected. If the bank now wants to re-use this data for any purpose incompatible with or beyond the scope of the primary purpose, that is referred to as a “secondary purpose,” and each and every such secondary purpose requires a separate legal basis.

For the avoidance of doubt: if the bank wanted to use that customer’s data for profiling in a data science environment, then under GDPR the bank must document a legal basis for each and every separate purpose for which it stores and processes the data. ‘Cross-sell and up-sell,’ for example, is one purpose, while ‘customer segmentation’ is another, separate purpose. If relied upon as the lawful basis, consent must be freely given, specific, informed, and unambiguous, and an additional condition, such as explicit consent, is required when processing special categories of personal data, as described in GDPR Article 9. Additionally, in this example, the loan division of the bank cannot share data with its credit card or mortgage divisions without the informed consent of the customer. This should not be confused with a further and separate legal basis available to the bank: processing necessary for compliance with a legal obligation to which the controller is subject (AML, fraud, risk, KYC, etc.).

The challenge arises when selecting a legal basis for secondary purpose processing in a data science environment as this needs to be a separate and specific legal basis for each and every purpose. 

It quickly becomes an impractical exercise for the bank, and an annoying one for its customers, to attempt to obtain consent for every single purpose in a data science use case. Evidence shows, in any case, a very low rate of positive consent under this approach. Consent management under GDPR is also tightening up: no longer will ‘blackmail’ clauses or general and ambiguous consent clauses be deemed acceptable.

GDPR offers controllers a more practical and flexible legal basis for exactly these scenarios, and it encourages controllers to raise their standards for protecting the privacy of their customers, especially in data science environments. Legitimate interests processing (LIP) is an often misunderstood legal basis under GDPR. This is partly because reliance on LIP may entail additional technical and organisational controls to mitigate the possible impact or risk of a given data processing activity on an individual. Depending on the processing involved, the sensitivity of the data, and the intended purpose, traditional tactical data security solutions such as encryption and hashing may not go far enough to mitigate the risk to individuals for the LIP balancing test to come out in favour of the controller’s identified legitimate interest.

If approached correctly, GDPR LIP can provide a framework, with defined technical and organisational controls, to lawfully support controllers’ use of customer data in data science, analytics, AI and ML applications. Without it, controllers are more exposed to possible non-compliance with GDPR and to the risk of legal action, as we are seeing in many high-profile privacy-related lawsuits.

Legitimate Interests Processing is the most flexible lawful basis for secondary purpose processing of customer data, especially in data science use cases. But you cannot assume it will always be the most appropriate. It is likely to be most appropriate where you use an individual’s data in ways they would reasonably expect and which have a minimal privacy impact, or where there is a compelling justification for the processing.

If you choose to rely on GDPR LIP, you take on extra responsibility not only for implementing, where needed, technical and organisational controls to support and defend LIP compliance, but also for demonstrating the ethical and proper use of your customers’ data while fully respecting and protecting their privacy rights and interests. This may include implementing enterprise-class, fit-for-purpose systems and processes, not just paper-based ones. Automated privacy solutions are available today as examples of such technical and organisational controls: CryptoNumerics CN-Protect, for instance, offers a systems-based (Privacy by Design) risk assessment and scoring capability that detects the risk of re-identification, together with integrated privacy protection that retains the analytical value of data for data science while protecting the identity and privacy of the data subject.

Data controllers must first perform the GDPR three-part test to validate LIP as a legal basis. You need to:

  • identify a legitimate interest;
  • show that the processing is necessary to achieve it; and
  • balance it against the individual’s interests, rights and freedoms.

The legitimate interests can be your own interests as a controller or the interests of third parties. They can include commercial interests (marketing), individual interests (risk assessments) or broader societal benefits. The processing must be necessary: if you can reasonably achieve the same result in another, less intrusive way, legitimate interests will not apply. And you must balance your interests against the individual’s: if they would not reasonably expect the processing, or if it would cause them unjustified harm, their interests are likely to override your legitimate interests. Happily, conducting such assessments for accountability purposes is now easier than ever, for example with TrustArc’s Legitimate Interests Assessment (LIA) and Balancing Test, which identifies the benefits and risks of data processing, assigns numerical values to both sides of the scale, and uses conditional logic and back-end calculations to generate a full report on the use of legitimate interests at the business-process level.
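The “numerical values on both sides of the scale” idea can be sketched in a few lines of code. Everything below, the factors, weights and decision rule, is an illustrative assumption for this article, not TrustArc’s (or any vendor’s) actual methodology:

```python
# Illustrative sketch of a legitimate-interests balancing test.
# All factors, weights and thresholds are hypothetical assumptions.

BENEFIT_FACTORS = {
    "commercial_value": 3,      # e.g. improved customer segmentation
    "individual_benefit": 2,    # e.g. more relevant services
    "societal_benefit": 1,
}

RISK_FACTORS = {
    "data_sensitivity": 4,      # special-category data scores highest
    "unexpected_processing": 3, # outside what the individual would expect
    "potential_harm": 5,
}

def balancing_test(benefits: dict, risks: dict) -> str:
    """Score each side of the scale; risks that outweigh benefits mean
    legitimate interests is unlikely to be a defensible basis."""
    benefit_score = sum(BENEFIT_FACTORS[f] * v for f, v in benefits.items())
    risk_score = sum(RISK_FACTORS[f] * v for f, v in risks.items())
    if risk_score > benefit_score:
        return "rework or abandon: individual's interests likely override"
    return "proceed with documented safeguards"

# Example: a modest commercial benefit weighed against sensitive data
print(balancing_test(
    {"commercial_value": 2, "individual_benefit": 1},
    {"data_sensitivity": 3, "unexpected_processing": 1},
))
```

The point of encoding the test this way is accountability: the same inputs always produce the same, documented outcome at the business-process level.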

What are the benefits of choosing Legitimate Interest Processing?

Because this basis is particularly flexible, it may be applicable in a wide range of situations, including data science applications. It can also give you more ongoing control over long-term processing than consent, which an individual could withdraw at any time. Remember, though, that you still have to manage marketing opt-outs independently of whatever legal basis you use to store and process customer data.

It also promotes a risk-based approach to data compliance: you need to think about the impact of your processing on individuals, which can help you identify risks and take appropriate safeguards. This supports your obligation to ensure “data protection by design,” performing risk assessments for re-identification and demonstrating the privacy controls applied to balance privacy against the demand for retaining the analytical value of data in data science environments. This in turn contributes towards your PIAs (Privacy Impact Assessments), which form part of your DPIA (Data Protection Impact Assessment) requirements and obligations.

LIP as a legal basis, if implemented correctly and supported by the right organisational and technical controls, also provides a platform to support data collaboration and data sharing. However, you may need to demonstrate that the data has been sufficiently de-identified, including by showing that re-identification risk assessments cover not just direct identifiers but all indirect identifiers as well.
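The point about indirect identifiers can be made concrete with a minimal k-anonymity check, a standard measure of re-identification risk (this sketch is illustrative, not any vendor’s implementation). A record that is unique on a combination of quasi-identifiers such as age, postcode and gender is re-identifiable even though no direct identifier remains:

```python
from collections import Counter

def min_k(records, quasi_identifiers):
    """Return the smallest equivalence-class size over the given
    quasi-identifier columns. k = 1 means someone is unique, i.e.
    re-identifiable even with names and account numbers removed."""
    groups = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return min(groups.values())

# Toy dataset with direct identifiers already stripped
records = [
    {"age": 34, "zip": "94105", "gender": "F", "balance": 1200},
    {"age": 34, "zip": "94105", "gender": "F", "balance": 8800},
    {"age": 51, "zip": "10001", "gender": "M", "balance": 430},
]

# The third record is unique on (age, zip, gender): k = 1
print(min_k(records, ["age", "zip", "gender"]))
```

Raising k, by generalising the quasi-identifiers into age bands or partial postcodes, is exactly the kind of technical control that trades analytical precision for demonstrable privacy protection.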

Using LIP as a legal basis for processing may help you avoid bombarding people with unnecessary and unwelcome consent requests and can help avoid “consent fatigue.” It can also, if done properly, be an effective way of protecting the individual’s interests, especially when combined with clear privacy information and an upfront and continuing right to object to such processing. Lastly, using LIP not only gives you a legal framework to perform data science, it also provides a platform that demonstrates the proper and ethical use of customer data, a topic and business objective for most boards of directors.


About the Authors:

Darren Abernethy is Senior Counsel at TrustArc in San Francisco.  Darren provides product and legal advice for the company’s portfolio of consent, advertising, marketing and consumer-facing technology solutions, and concentrates on CCPA, GDPR, cross-border data transfers, digital ad tech and EMEA data protection matters. 

Ravi Pather of CryptoNumerics has spent the last 15 years helping large enterprises address data compliance requirements such as GDPR, PIPEDA, HIPAA, PCI DSS, data residency, data privacy and, more recently, CCPA. He has extensive experience helping large, global companies implement privacy compliance controls, particularly for the more complex secondary purpose processing of customer data in data lake and warehouse environments.

The Three Greatest Regulatory Threats to Your Data Lakes

Emerging privacy laws restrict the use of data lakes for analytics. But organizations that invest in privacy automation maintain the use of these valuable business resources for strategic operations and innovation.


Over the past five years, as businesses have increased their dependence on customer insights to make informed business decisions, the amount of data stored and processed in data lakes has risen to unprecedented levels. In parallel, privacy regulations have emerged across the globe. This has limited the functionality of data lakes and turned the analytical process from a corporate asset into a business nightmare.

Under GDPR and CCPA, data is restricted from being used for purposes beyond that which was initially specified — in turn, shutting off the flow of insights from data lakes. As a consequence, most data science and analytics activities fail to meet the standards of privacy regulations. Under GDPR, this can result in fines of up to 4% of a business’s annual global revenue.

However, businesses don’t need to choose between compliance and insights. Instead, a new mindset and approach should be adopted to meet both needs. To continue to thrive in the current regulatory climate, enterprises need to do three things:

  1. Anonymize data to preserve its use for analytics
  2. Manage the privacy governance strategy within the organization
  3. Apply privacy protection at scale to unlock data lakes


Anonymize data to preserve its use for analytics

While the restrictions vary slightly, privacy regulations worldwide establish that customer data should only be used for purposes that the subject is aware of and has given permission for. GDPR, for example, requires that if a business intends to use customer data for an additional purpose, it must first obtain consent from the individual. As a result, the data in data lakes can only be made available for use after processes have been implemented to notify and request permission from every subject for every use case. This is impractical and unreasonable. Not only will it result in a mass of requests for data erasure, but it will slow and limit the benefits of data lakes.

Don’t get us wrong. We think protecting consumer privacy is important. We just think this is the wrong way to go about it.

Instead, businesses should anonymize or pseudonymize the data in their data lakes to take data out of the scope of privacy regulations. This will unlock data lakes and protect privacy, regaining the business advantage of customer insights while protecting individuals. The best of both worlds. 


Manage the privacy governance strategy within the organization

Across an organization, stakeholders operate in isolation, pursuing their own objectives with individualized processes and tools. This has led to fragmentation between legal, risk and compliance, IT security, data science, and business teams. In consequence, a mismatch between values has led to dysfunction between privacy protection and analytics priorities. 

The solution is to implement an enterprise-wide privacy control system that generates quantifiable assessments of the re-identification risk and information loss. This enables businesses to set predetermined risk thresholds and optimize their compliance strategies for minimal information loss. By allowing companies to measure the balance of risk and loss, privacy stakeholder silos can be broken, and a balance can be found that ensures data lakes are privacy-compliant and valuable.
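One way such a control system can trade off re-identification risk against information loss is to generalise values until every group clears a predetermined risk threshold, while tracking how much precision was given up. The binning scheme and the information-loss metric below are simplified assumptions for illustration, not a product’s actual algorithm:

```python
from collections import Counter

def generalize_ages(ages, bin_width):
    """Coarsen ages into bins of the given width, e.g. 34 -> '30-39'."""
    return [f"{a - a % bin_width}-{a - a % bin_width + bin_width - 1}"
            for a in ages]

def smallest_group(values):
    """Size of the smallest equivalence class (the 'k' in k-anonymity)."""
    return min(Counter(values).values())

def anonymize_to_threshold(ages, k_threshold):
    """Widen age bins until every group has at least k members, and
    report information loss as the fraction of precision given up
    (an illustrative metric, not a standard one)."""
    for bin_width in (1, 5, 10, 25, 50):
        binned = generalize_ages(ages, bin_width)
        if smallest_group(binned) >= k_threshold:
            info_loss = 1 - 1 / bin_width
            return binned, bin_width, info_loss
    return ["*"] * len(ages), None, 1.0  # full suppression as a last resort

ages = [23, 24, 27, 31, 33, 38, 41, 44, 44, 49]
binned, width, loss = anonymize_to_threshold(ages, k_threshold=3)
print(width, round(loss, 2))
```

A quantified pair like this (risk threshold met, information loss incurred) is what lets legal, compliance and data science teams argue from the same numbers instead of from their silos.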


Apply privacy protection at scale to unlock data lakes

Anonymization is not as simple as removing direct personal identifiers such as names. Nor is manual deidentification a viable approach to ensuring privacy compliance in data lakes. In fact, the volume and velocity at which data is accumulated in data lakes make traditional methods of anonymization impossible. What’s more, without a quantifiable risk score, businesses can never be certain that their data is truly anonymized.

But applying blanket solutions like masking and tokenization strips the data of its analytical value. This dilemma is something most businesses struggle with. However, there is no need. Through privacy automation, companies can ensure defensible anonymization is applied at scale. 

Modern privacy automation solutions assess, quantify, and assure privacy protection by measuring the risk of re-identification. Then they apply advanced techniques such as differential privacy to the dataset to optimize for privacy-protection and preservation of analytical value.
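Differential privacy itself can be illustrated with the classic Laplace mechanism for a counting query. This is a textbook sketch under stated assumptions (a single count query with sensitivity 1), not any particular product’s implementation:

```python
import math
import random

def laplace_noise(scale):
    """Draw Laplace-distributed noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon):
    """Counting query with sensitivity 1: adding or removing one person
    changes the true count by at most 1, so Laplace noise with scale
    1/epsilon yields an epsilon-differentially-private answer."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(0)  # fixed seed so the sketch is reproducible
customers = [{"churn_risk": "high"}] * 40 + [{"churn_risk": "low"}] * 60
noisy = private_count(customers, lambda r: r["churn_risk"] == "high",
                      epsilon=0.5)
print(round(noisy))  # close to the true count of 40, but not exact
```

The analytical value survives (aggregate counts remain approximately right) while no single individual’s presence in the dataset can be confidently inferred from the answer.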

The law provides clear guidance about using anonymization to meet privacy compliance, demanding the implementation of organizational and technical controls. Data-driven businesses should de-identify their data lakes by integrating privacy automation solutions into their governance framework and data pipelines. Such action will enable organizations to regain the value of their data lakes and remove the threat of regulatory fines and reputational damage.

The privacy authorities are calling. Is your call centre data GDPR and CCPA compliant?

Every time someone calls your call centre, the conversation is recorded and transcribed into free-text data. This provides your business with a wealth of valuable data to derive insights from. The problem is, the way this data is typically used today may violate privacy regulations and put you at risk of nine-figure fines and reputational damage.

Call centres often record and manage extremely sensitive data. For example, at a bank, a customer will provide their name, account number, and the answer to a security question (such as their mother’s maiden name). At a wealth management office, someone may call in and talk about their divorce proceedings. This information is not only incredibly personal, but using it for additional purposes without consent is against the law.

Data is transcribed for training purposes. However, the data is often repurposed. Businesses rely on this data for everything from upselling to avoiding customer churn – not to mention the revenue some earn from selling data. 

But under GDPR, data cannot be used for additional purposes without the explicit consent of the data subject.  To comply with privacy regulations, when data science and analytics are performed on the transcripts, a business must first inform and ask permission for each and every instance of use. 

Every time a business asks for permission, they risk requests for data deletion and denials of use that render the transcripts useless. This is because people do not want their data to be exposed, let alone be used to monitor their behaviour.

However, this does not mean all your transcript data is unusable. Why? Because by anonymizing data, you can protect customer privacy and take the data out of scope of privacy regulations.

In other words, if you anonymize your call centre data, you can use the transcripts for any purpose.

However, anonymization of this kind of data is more complicated than applying traditional methods of privacy protection, like masking and tokenization. Audio transcripts are unstructured, so using traditional anonymization methods renders the data unusable.
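To see why, consider a naive pattern-based redaction pass over a transcript. The patterns and placeholder labels below are hypothetical; production systems rely on trained named-entity recognition precisely because names and free-form disclosures (like the divorce proceedings mentioned above) don’t follow regular patterns:

```python
import re

# Illustrative patterns only; real transcripts need trained NER models,
# since most PII in free text does not match a simple regex.
PATTERNS = {
    "ACCOUNT_NUMBER": re.compile(r"\b\d{8,12}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(transcript: str) -> str:
    """Replace each matched identifier with a typed placeholder so the
    transcript keeps its analytical structure without the raw PII."""
    for label, pattern in PATTERNS.items():
        transcript = pattern.sub(f"[{label}]", transcript)
    return transcript

call = ("Hi, this is regarding account 403298811245, "
        "you can reach me at 415-555-0132 or jane.doe@example.com.")
print(redact(call))
```

Even this toy pass shows the trade-off: the structured identifiers vanish, but a spoken name or a mother’s maiden name would sail straight through, which is why blanket masking alone is not defensible anonymization.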

If you use improperly anonymized transcript data for additional purposes, without consent, you will be found in violation of GDPR. This means your business can be fined up to 4% of your annual global revenue. Mistaking partially protected data for anonymized data, or hoping manual approaches to de-identification will work, is not legally acceptable. Just ask Google how that turned out for them.

To avoid this, businesses must utilize systematic privacy assessments that quantify the re-identification risk score of their data and establish automated privacy protection based on a predetermined risk threshold. With this, businesses can be certain of the anonymization of their transcripts and perform secondary actions without risking GDPR non-compliance.

State-of-the-art technologies will also enable businesses to measure and reduce the impact of privacy protection on the analytical value of data.

Call centre transcripts are a rich source of customer data that can generate valuable business insights. But blindly using this information can cost your business millions. Use an advanced privacy protection solution to anonymize your transcripts while retaining the analytical value. 

Breaching Data Privacy for a Social Cause

Data partnerships are increasingly justified as a social good, but in a climate where companies are losing consumer trust through data breaches, privacy concerns begin to outweigh the social benefits of data sharing. 


This week, Apple is gaining consumer trust with its revamped privacy page. Meanwhile, Facebook is becoming more wary about sharing a petabyte of data with Social Science One researchers due to increasing data privacy concerns. Also, law enforcement may be changing the genetic privacy game as it gains unprecedented access to millions of DNA records to solve homicide cases and identify victims.

Apple is setting the standard for taking consumer privacy seriously—Privacy as a Social Good

Apple is setting the stage for consumer privacy with its redesigned privacy page. Apple CEO Tim Cook announced, “At Apple, privacy is built into everything we make. You decide what you share, how you share it, and who you share it with. Here’s how we protect your data.” (Source)

There is no doubt that Apple is leveraging data privacy. When entering Apple’s new privacy landing page, bold letters are used to emphasize how privacy is a fundamental part of the company, essentially one of their core values (Source). 

Apple’s privacy page explains how they’ve designed their devices with their consumers’ privacy in mind. They also showcase how this methodology applies to their eight Apple apps: Safari browser, Apple Maps, Apple Photos, iMessage, Siri Virtual Assistant, Apple News, Wallet and Apple Pay, and Apple Health.

A privacy feature fundamental to many of Apple’s apps is that the data on an Apple device is locally stored and is never released to Apple’s servers unless the user consents to share their data, or the user personally shares his/her data with others. Personalized features, such as smart suggestions, are based on random identifiers.

  • Safari Browser blocks the data that websites collect about site visitors with an Intelligent Tracking Prevention feature and makes it harder for individuals to be identified by providing a simplified system profile for users. 
  • Apple Maps does not require users to sign in with their Apple ID. This eliminates the risk of user location and search history linking to their identity. Navigation is based on random identifiers as opposed to individual identifiers.
  • Apple Photos taken on Apple devices are processed locally and are not shared unless stored in the cloud or shared by the user.
  • iMessages aren’t shared with Apple and are protected by end-to-end encryption.
  • Siri, Apple’s voice-activated virtual assistant can process information without the information being sent to Apple’s servers. Data that is sent back to Apple is not associated with the user and is only used to update Siri.
  • Apple News curates personalized news and reading content based on random identifiers that are not associated with the user’s identity. 
  • Apple Wallet and Apple Pay create a device account number any time a new card is added. Transactional data is only shared between the bank and the individual.
  • Apple Health is designed to empower the user to share their personal health information with whom they choose. The data is encrypted and can only be accessed by the user via passcodes. 


Facebook realizes the ethical, legal, and technical concerns in sharing 1,000,000 gigabytes of data with social science researchers

Facebook has been on the wrong side of data privacy ever since the Cambridge Analytica scandal in 2018, when users’ data was obtained, without their consent, for political advertising. Now that Facebook is approaching privacy with users’ best interests in mind, tension is growing between the worlds of technology and social science.

Earlier this year, Facebook and Social Science One partnered in a new model of industry-academic research initiative to “help people better understand the broader impact of social media on democracy—as well as improve our work to protect the integrity of elections,” said Facebook (Source).

Facebook agreed to share 1,000,000 gigabytes of data with Social Science One for research and analysis, but has failed to meet its promises.

According to Facebook, it was almost impossible to apply anonymization techniques such as differential privacy to the necessary data without stripping it completely of its analytical value.   

Facebook half-heartedly released some data as it faced deadlines and mounting pressure, but what it released fell far short of what was promised. Facebook’s failure to share the data it agreed to undermines the proposed social benefit of using the data to study the impact of disinformation campaigns.

Facebook is torn between a commitment to contributing to a socially good cause without breaching the privacy of its users. 

This exemplifies how Facebook may not have been fully prepared to shift its business model from one that involved data monetization to a CSR-driven (corporate social responsibility) model where data sharing is used for research while keeping privacy in mind. 

Will Facebook eventually fulfill their promises?


Socially Beneficial DNA Data: Should Warrants be given to access Genealogy website databases?

At a police convention last week, Florida detective Michael Fields revealed how he received a valid law enforcement warrant to access GEDmatch.com data (Source).

GEDmatch is a genealogy website containing over a million users’ records. But does the social benefit outweigh the privacy violation to users whose data was exposed without their consent?

Last year, GEDmatch faced a mix of scrutiny and praise when it helped police identify the Golden State Killer by granting them access to its database (Source). After privacy concerns surfaced, GEDmatch updated its privacy terms: law enforcement access was limited to users who had opted in to share their data. Additionally, police authorities may only search for the purposes of “murder, nonnegligent manslaughter, aggravated rape, robbery or aggravated assault” cases (Source).

This recent warrant granted to detective Fields overrode GEDmatch’s privacy terms by allowing him to access the data of all users, even those who did not consent. This was the first time a judge agreed to a warrant of this kind. It changes the tone of genetic privacy, potentially setting a precedent about who has access to genetic data.


Is your data toxic or clean? How to prepare for CCPA

The CCPA is only a few months away from coming into effect. But businesses are not prepared. Currently, petabytes of consumer data rest in businesses’ data science and analytics environments. In many cases, this data is being used for purposes beyond that for which it was initially collected. 

All of this data is governed by the incoming CCPA, which will make it challenging for enterprises to derive consumer insights and expensive to operate. What’s worse, if your business makes a misstep, you will be at risk of class action lawsuits and reputational damage. As a result, most of the data sitting in data lakes and warehouses should be considered highly toxic for CCPA compliance.


Toxic data will harm your business:

The CCPA defines disclosure obligations and information governance. It will require most companies to overhaul their data systems to improve data discovery and access to information. While taking leaps forward for consumer privacy, the CCPA places a weighty burden on data-driven businesses. Not only does it require them to justify and disclose each and every purpose for which consumer data is used, but it prohibits the use of data for secondary purposes without giving consumers the opportunity to opt out.

Under the CCPA, violations carry civil penalties of up to (a) $2,500 per unintentional violation or (b) $7,500 per intentional violation, assessed after notice and a 30-day opportunity to cure have been provided. In addition, consumer lawsuits can result in statutory damages of up to $750 per consumer per incident. This means that in the CCPA era, a business with 10,000 affected customers is exposed to up to $7,500,000 in lawsuits. This genuine possibility could severely harm the bottom line.

Due to the cost of error, in the CCPA era, personal data, especially that which has been used for additional purposes, should be considered toxic data. This is because it carries significant business, operational, security, and compliance overheads. The good news is there is a way to clean the data and take it out of scope for the CCPA governance. The solution is to defensibly deidentify data.


Cleaning consumer data:

Under CCPA, consumer data used for additional purposes such as data science and analytics that has been correctly deidentified can be considered out of scope for CCPA compliance. To prepare for the CCPA, businesses should prioritize taking data from in-scope to out-of-scope through an automated and defensible deidentification system that can be implemented at an enterprise-level and architectural point of control.

Under the CCPA, defensibly deidentified personal data will not be subject to CCPA regulations. This clean data:

  • Is not governed by IT and security controls;
  • Does not need to follow segregation of duties;
  • Is not party to breach notification protocols;
  • Is not required in verifiable consumer requests;
  • Can be used for any purpose without notifying consumers or offering the opportunity to opt out.

Maintaining identifiable personal information, or toxic data, will cost businesses millions every year. When an automated, defensible deidentification strategy is just a click away, there is no excuse not to act.

Businesses essentially have two choices: (a) retain toxic data and spend millions ensuring CCPA compliance, or (b) deidentify their data using privacy automation to take it out of scope for CCPA. One option will save your brand and bottom line; the other is a mass of expensive regulatory complications and litigation exposures.

Is this a mirror world? Uber defends privacy and science centre exposes 174,000 names

Typically we expect Uber to be on the wrong side of a privacy debacle. But this week, the company claims to be defending its users’ privacy from the LA Department of Transportation. Meanwhile, the Ontario Science Centre experienced a data breach that exposed the personal information of 174,000 individuals. Are the upcoming state-level privacy laws the answer to consumers’ privacy concerns?

Uber claims LA’s data-tracking tool is a violation of state privacy laws

The LA Department of Transportation (LADOT) wants to use Uber’s dockless scooters and bikes to collect real-time trip data. But Uber has repeatedly refused, citing privacy concerns. The fight is coming to a head: on Monday, Uber threatened to file a lawsuit and seek a temporary restraining order (Source).

Last year, LADOT’s general manager, Reynolds, began developing a system to improve mobility in the city by enabling communication between the city and every form of transportation. In November, LADOT implemented a mobility data specification (MDS) software program, called Provider, that requires all dockless scooters and bikes operating in LA to send their trip data to the city headquarters.

Then a second piece of software was developed, Agency, which reports and alerts companies about their micro-mobility devices. For example, it can send alerts about an improperly parked scooter or an imminent street closure (Source).

This means the city has access to each and every trip consumers take. Yet, according to Reynolds, the data being gathered is essential to manage the effects of micro-mobility on the streets: “At LADOT, our job is to move people and goods as quickly and safely as possible, but we can only do that if we have a complete picture of what’s on our streets and where.” (Source)

Other cities across the country were thrilled by the results and are looking to implement similar MDS solutions.

In reality, the protocols have Big Brother-like implications, and many privacy stakeholders seem to side with Uber, determining that LADOT’s actions would in fact “constitute surveillance” (Source). This includes the EFF, which stated that “LADOT must start taking seriously the privacy of Los Angeles residents.” What’s more, in a letter to LA, the EFF wrote that “the MDS appears to violate the California Electronic Communications Privacy Act (CalECPA), which prohibits any government entity from compelling the production of electronic device information, including raw trip data generated by electronic bikes or scooters, from anyone other than the authorized possessor of the device without proper legal process.” (Source)

While Uber’s concerns seem valid, there is fear that LADOT will revoke its permit to operate because of its refusal to comply (Source). As of Tuesday, the company’s permit was suspended. But with the lawsuit looming, the public can expect the courts to decide the legality of the situation (Source).

Ontario Science Centre data breach exposes 174,000 names

This week the Ontario Science Centre explained that on August 16, 2019, it was made aware of a data breach affecting 174,000 people. The breach was discovered by Campaigner, the third-party company that handles mailings, newsletters, and invitations for the OSC.

Between July 23 and August 7, “someone made a copy of the science centre’s subscriber emails and names without authorization.” (Source)

Upon further investigation, it was learned that the perpetrator used a former Campaigner employee’s login credentials to access the data. While no other personal information was stolen, the sheer number of consumers affected highlights the potentially negative consequences of relying on trusted third parties.

Everyone whose data was compromised in this incident was alerted by the science centre and encouraged to ask further questions. In addition, the Ontario Information and Privacy Commissioner, Beamish, was alerted about the breach one day after the notices began going out to the public.

Moving forward, the Ontario Science Centre is “reviewing data security and retention policies” and working alongside Beamish to investigate the incident in full and ensure it is not repeated (Source).

Will more states adopt privacy laws in 2020?

January 1, 2020, marks the implementation of the California Consumer Privacy Act (CCPA). This upcoming law has spread across the media, but soon more state-level privacy laws are expected to reshape the privacy landscape in America. With a focus on consumer privacy and an increased risk of litigation, businesses are on the edge of their seats anticipating the states’ actions.

Bills in New York, New Jersey, Massachusetts, Minnesota, and Pennsylvania will be debated in the next few months. However, due to the challenge of mediating among all the stakeholders involved, several of the laws expected to pass this year were caught up in negotiations. Some have even fallen flat, like those in Arizona, Florida, Kentucky, Mississippi, and Montana. On the other hand, a few states are commissioning studies that will evaluate current privacy laws and where they should be updated or expanded, digging into data breaches and Internet privacy (Source).

Meanwhile, big tech is lobbying for a federal privacy law in an attempt to supersede state-level architecture (to learn more about this, read our blog).

Any way you look at it, more regulation is coming, and the shift in privacy values will create mass changes in the United States and across the globe. This is more necessary than ever in a mirror world where Uber claims to be on a mission to protect user privacy and the science centre comes clean about a massive data breach. The question remains: are privacy laws the answer to the data-driven world? Perhaps 2020 will be the year that makes businesses more privacy-conscious.
