The Consequences of Data Mishandling: Twitter, TransUnion, and WhatsApp


Who should you trust? This week highlights the personal privacy risks and organizational consequences when data is mishandled or used against the interests of the account holder. Twitter admits to using phone numbers provided for two-factor authentication to target ads, the personal information of 37,000 Canadians is leaked in a TransUnion cybersecurity attack, and GDPR investigations into Facebook’s WhatsApp and Twitter threaten billions in fines.
Twitter shared your phone number with advertisers.

Early this week, Twitter admitted to using phone numbers that users had provided for two-factor authentication to profile them and target ads. This allowed the company to create “Tailored Audiences,” an industry-standard product that enables “advertisers to target ads to customers based on the advertiser’s own marketing lists.” In other words, the profiles in the marketing list an advertiser uploaded were matched against Twitter’s user list using the phone numbers users had provided for security purposes.
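
To make the mechanics concrete, here is a minimal sketch of how marketing-list matching of this kind typically works: identifiers are normalized, hashed, and intersected. This is illustrative only, not Twitter’s actual pipeline, and the numbers and user IDs are invented.

```python
import hashlib

def normalize(phone: str) -> str:
    """Strip formatting so the same number matches however it was entered."""
    return "".join(ch for ch in phone if ch.isdigit())

def hash_phone(phone: str) -> str:
    """Hash the normalized number so raw numbers are never compared directly."""
    return hashlib.sha256(normalize(phone).encode()).hexdigest()

# The advertiser's uploaded marketing list, and the platform's 2FA records.
advertiser_list = {hash_phone(p) for p in ["+1 (555) 010-2000", "555-010-3000"]}
platform_2fa = {"user_a": "15550102000", "user_b": "15550104000"}

# The intersection becomes the targetable "tailored audience".
audience = [user for user, phone in platform_2fa.items()
            if hash_phone(phone) in advertiser_list]
print(audience)  # ['user_a']
```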

When users provided their phone numbers to enhance account security, they never realized this would be the tradeoff. This manipulative approach to gaining user information raises questions about Twitter’s data privacy protocols. Moreover, the fact that this confidential information fed an advertising product should leave you wondering what other information is made available to business partners, and how (Source).

Curiously, after realizing what had happened, rather than coming forward, the company rushed to hire Ads Policy Specialists to look into the problem.

On September 17, the company “addressed an ‘error’ that allowed advertisers to target users based on phone numbers” (Source). That same day, it posted a job advertisement for someone to train internal Twitter employees on ad policies and to join a team re-evaluating its advertising products.

Now, nearly a month later, Twitter has publicly admitted its mistake and said it is unsure how many users were affected. While the company insists no personal data was shared externally, and is clearly taking steps to ensure this doesn’t happen again, is it too late?

Third-Party Attacks: How Valid Login Credentials Led to Banking Information Exposure 

A cybersecurity breach at TransUnion highlights the rapidly increasing threat of third-party attacks and the challenge of preventing them. The personal data of 37,000 Canadians was compromised when a legitimate business customer’s login credentials were used illegally to harvest TransUnion data, including names, dates of birth, current and past home addresses, credit and loan obligations, and repayment histories. Bank account numbers were not exposed, but social insurance numbers may have been at risk. The compromise occurred between June 28 and July 11 but was not detected until August (Source).

While alarming, these attacks are very frequent, accounting for around 25% of cyberattacks in the past year. Daniel Tobok, CEO of Cytelligence Inc., reports that the threat of third-party attacks is increasing: more than ever, criminals are using the accounts of trusted third parties (customers, vendors) to gain access to their targets’ data. This method of entry is hard to detect because the attackers are often simulating the typical actions of legitimate users. In this case, the credentials of the leading division of Canadian Western Bank were used to log in and access the credit information of nearly 40,000 Canadians, an action not atypical of the bank’s regular activities (Source).
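
Because the queries ride on valid credentials, detection often falls back to behavioural baselines. Below is a toy sketch of the idea (the numbers are invented, and real systems model far more signals): flag a trusted account whose daily lookup volume departs sharply from its own history.

```python
from statistics import mean, stdev

history = [120, 95, 130, 110, 105, 98, 125]  # normal daily lookups
today = 2400                                 # a bulk-harvesting pattern

mu, sigma = mean(history), stdev(history)
if today > mu + 3 * sigma:  # flag anything far outside the usual baseline
    print("alert: anomalous usage on trusted third-party account")
```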

Cybersecurity attacks like this are what has caused the rise of two-factor authentication, which looks to enhance security (perhaps in every case other than Twitter’s). However, companies that invest only in technology solve only half the issue: the human side of cybersecurity is a much more serious threat than is often acknowledged. “As an attacker, you always attack the weakest link, and in a lot of cases unfortunately the weakest link is in front of the keyboard.” (Source)


Hefty fines loom over Twitter and Facebook as the Irish DPC closes its investigations.

The Data Protection Commission (DPC) in Ireland has recently finished investigations into Facebook’s WhatsApp and Twitter over GDPR breaches (Source). The investigations examined whether WhatsApp provided information about the app’s services in a transparent manner to both users and non-users, and how Twitter handled a data breach it reported in January 2019.

These cases have now moved into the decision-making phase, and the companies are at risk of fines of up to 4% of their global annual revenue. Against Facebook’s roughly $56 billion in 2018 revenue, that means the company could expect to pay more than $2 billion.

The decision now rests with Helen Dixon, Ireland’s chief data regulator, and a ruling is expected by the end of the year. These are landmark cases: the first Irish legal proceedings against US companies since GDPR came into effect a little over a year ago (May 2018) (Source). Big tech companies are on edge about the verdict, as the Irish DPC plays the largest GDPR supervisory role over most of them, since many use Ireland as the base for their EU headquarters. What’s more, the DPC has opened dozens of investigations into other major tech companies, including Apple and Google, and the chief data regulator’s decision may signal more of what’s to come (Source).

In the end, between Twitter’s data mishandling, the TransUnion third-party attack, and the GDPR investigations coming to a close, it is clear that businesses and the public alike must become more privacy-conscious: privacy is now affecting everyday operations and lives.



How Third Parties who Act as Brokers of Data will Struggle as the Future of Data Collaboration Changes


Today, everyone understands that, as The Economist put it, “data is the new oil.”

And few understand this better than data aggregators. Data aggregators can loosely be defined as third parties who broker data to other businesses. Verisk Analytics is perhaps the largest and best-known example, but there are many others: Yodlee, Plaid, MX, and more.

These data aggregators understand the importance of data, and how the right data can be leveraged to create value through data science for consumers and companies alike. But the future of data collaboration is starting to look very different. Their businesses may well start to struggle.

Why data aggregators face a tricky future

As the power of data has become more widely recognized, so too has the importance of privacy. In 2018, the European Union implemented the General Data Protection Regulation (GDPR), the most comprehensive data privacy regulation of its kind, with broad-sweeping jurisdiction. Regulation arrived just in time: a succession of privacy leaks across multiple industries drew highly negative media coverage, and Facebook was hit with a $5-billion fine by the US FTC.

Where once many were skeptical, today few people deny the importance of data privacy. Privacy has become a separate dimension, distinct from security. The data science community has come to understand that datasets must not only be secured against hackers but also de-identified, to ensure no individual’s information can be exposed when data is shared.
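
As a minimal illustration of what de-identification means in practice, the toy sketch below drops direct identifiers and generalizes quasi-identifiers before sharing. It is only a flavour of the idea; real pipelines must also guard against re-identification through combinations of fields.

```python
records = [
    {"name": "Ana Gil", "age": 34, "zip": "94107", "diagnosis": "flu"},
    {"name": "Ben Roy", "age": 36, "zip": "94110", "diagnosis": "cold"},
]

def deidentify(rec: dict) -> dict:
    """Drop direct identifiers; generalize quasi-identifiers before sharing."""
    decade = rec["age"] // 10 * 10
    return {
        "age_band": f"{decade}-{decade + 9}",  # 34 -> "30-39"
        "zip3": rec["zip"][:3] + "**",         # coarsen location
        "diagnosis": rec["diagnosis"],         # keep the analytic payload
    }

print([deidentify(r) for r in records])
```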

In the new era of privacy controls, third-party data aggregators will face two problems:

  1. Privacy Protection Requirements
    Using a third party to perform data collaboration is a flawed approach. No matter what regulations or protections you enforce, you are still moving your data out of your data centers and exposing your raw information (which contains both PII and IP-sensitive items) to someone else. Ultimately, third-party consortiums do not maintain a “privacy-by-design” framework, which is the standard required for GDPR compliance.

  2. Consumers Don’t Consent to Having their Data Used
    The GDPR requires that collectors of data also obtain their consumers’ consent for its use. If I have information that I’ve collected, I can only use it for the specific purpose the consumer has allowed. I cannot just share it with anyone, or use it however I like (a minimal sketch of such a purpose check follows the list).
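
Here is that minimal consent check as code. The ledger, user IDs, and purpose names are invented for illustration; the point is simply that every use of a record must be gated on a recorded, purpose-specific consent.

```python
# A hypothetical consent ledger mapping (user, purpose) -> granted?
consents = {
    ("user_123", "fraud_detection"): True,
    ("user_123", "ad_targeting"): False,
}

def can_use(user_id: str, purpose: str) -> bool:
    """Permit processing only for purposes the consumer explicitly allowed."""
    return consents.get((user_id, purpose), False)

assert can_use("user_123", "fraud_detection")
assert not can_use("user_123", "ad_targeting")  # consent explicitly withheld
assert not can_use("user_123", "resale")        # unknown purpose: denied by default
```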

These challenges are serious obstacles to data collaboration, and they will affect data aggregators the most because brokering data is their core value proposition. Many see data aggregators as uniquely flawed in their handling of these issues, and that has generated negative momentum against them: a recent Nevada state law, for example, requires qualifying data brokers to sign up for a public registry.

These aggregators need to get ahead of this, both to overcome the challenges to their business model and to avoid negative media attention.

How CryptoNumerics can help

At CryptoNumerics, we recognize the genuine ethical need for privacy. But we also recognize the vast good that data science can provide. In our opinion, no one should have to choose one over the other, so we have developed new technology that enables both.

CN-Insight uses a concept we refer to as Virtual Data Collaboration. Using technologies like secure multi-party computation and secret-sharing cryptography, CN-Insight enables companies to perform machine learning and data science across distributed datasets. Instead of succumbing to the deficits of the third-party consortium model, we enable companies to keep their datasets on-prem, without co-location or movement of any kind, and without exposing any raw information. The datasets are matched using feature engineering, and our technology enables enterprises to build models as if the datasets were combined.
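
To give a flavour of the underlying idea, here is a toy additive secret-sharing demo. It is not CN-Insight’s actual protocol, just the textbook primitive behind secure multi-party computation: each input is split into random-looking shares, parties compute on shares locally, and only the aggregate is ever reconstructed.

```python
import secrets

PRIME = 2**61 - 1  # all arithmetic happens modulo a large prime

def share(value: int, parties: int = 3) -> list:
    """Split a value into additive shares; any incomplete subset reveals nothing."""
    shares = [secrets.randbelow(PRIME) for _ in range(parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares) -> int:
    return sum(shares) % PRIME

# Two companies each share a private value with three compute parties...
a, b = share(42), share(58)
# ...each party adds the shares it holds, locally...
summed = [x + y for x, y in zip(a, b)]
# ...and only the aggregate sum is reconstructed.
print(reconstruct(summed))  # 100, with neither input ever exposed
```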

Data aggregators must give these challenges serious thought and make use of such technological innovations to stay ahead of an inflection point in their industry. Privacy is here to stay, and as the data brokers that lead the industry, they have an opportunity to play a powerful role in leading the way forward and improving their business future.



What do Trump, Google, and Facebook Have in Common?


This year, the Trump Administration declared the need for a national privacy law to supersede a patchwork of state laws. But, as the year comes to a close, and amidst the impeachment inquiry, time is running out. Meanwhile, Google plans to roll out encrypted web addresses, and Facebook stalls research into social media’s effect on democracy. Do these three seek privacy or power?

The Trump Administration, Google, and Facebook claim that privacy is a priority, and… well… we’re still waiting for the proof. Over the last year, the news has been awash with privacy scandals and data breaches. Every day we hear promises that privacy is a priority and that a national privacy law is coming, but so far, the evidence of action is lacking. This raises the question: are politicians and businesses using the guise of “privacy” to manipulate people? Let’s take a closer look.

Congress and the Trump Administration: National Privacy Law

Earlier this year, Congress and the Trump Administration agreed they wanted a new federal privacy law to protect individuals online. This rare occurrence was even supported and campaigned for by major tech firms (read our blog “What is your data worth” to learn more). However, despite months of talks, “a national privacy law is nowhere in sight [and] [t]he window to pass a law this year is now quickly closing.” (Source)

Disagreement over enforcement and state-level power is said to be holding back progress. Thus, while senators, including Roger Wicker, who chairs the Senate Commerce Committee, insist they are working hard, there are no public results; and with the impeachment inquiry, it is possible we will not see any for some time (Source). This means the White House will likely miss its self-appointed deadline of January 2020, when the CCPA goes into effect.

Originally, this plan was designed to avoid a patchwork of state-level legislation that can make compliance challenging for businesses and weaken privacy protections. It is not a simple process, and since “Congress has never set an overarching national standard for how most companies gather and use data,” much work is needed to develop a framework to govern privacy on a national level (Source). However, GDPR has shown in Europe that a large governing structure can successfully hold organizations accountable to privacy standards. But how much longer will US residents need to wait?

Google Encryption: Privacy or Power

Google has been trying to get an edge on the competition for years by leveraging the massive troves of user data it acquires. Undoubtedly, their work has led to innovation that has redefined the way our world works, but our privacy has paid the price. Like never before, our data has become the new global currency, and Google has had a central part to play in the matter.

Google has famously made privacy a priority and is currently working to enhance user privacy and security with encrypted web addresses.

Unencrypted web addresses are a major security risk: they make it simple for malicious actors to intercept web traffic and use fake sites to gather data. However, in denying hackers this ability, power is given to companies like Google, who will be able to collect more user data than ever before. The risk is “that control of encrypted systems sits with Google and its competitors.” (Source)

This is because encrypted DNS (DNS over HTTPS, or DoH) cuts internet service providers out of the middle of the lookup process and changes the mechanism through which we resolve specific web addresses. This could enable Google to become the centralized encrypted-DNS provider (Source).
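
For the curious, here is a minimal sketch of what a DoH lookup looks like from the client side: the DNS query travels inside ordinary HTTPS to whichever resolver the browser vendor chooses. Cloudflare’s public JSON endpoint is used here for illustration; Google operates an equivalent one at https://dns.google/resolve.

```python
import requests

resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=5,
)
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])  # resolved records, opaque to the ISP
```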

Thus, while DoH is certainly a privacy and security upgrade compared with the current DNS system, shifting lookups from local middle layers to major browser enterprises centralizes user data, raising anti-competitive and child-protection concerns. Further, it diminishes law enforcement’s ability to blacklist dangerous sites and monitor those who visit them. It also creates new opportunities for hackers by reducing defenders’ ability to gather cybersecurity intelligence from malware activity, an integral part of fulfilling government-mandated regulation (Source).

Nonetheless, this feature will roll out in a few weeks as the new default, despite calls from those with DoH concerns to wait until more is known about the potential fallout.

Facebook and the Disinformation Fact Checkers

Over the last few years, Facebook has developed a terrible reputation as one of the least privacy-centric companies in the world. But is it accurate? After the Cambridge Analytica scandal, followed by an endless series of data privacy debacles, Facebook has stalled its “disinformation fact-checkers” on the grounds of privacy problems.

In April of 2018, Mark Zuckerberg announced that the company would develop machine learning to detect and manage misinformation on Facebook (Source). It then promised to share this information with non-profit researchers, who would flag disinformation campaigns as part of an academic study on how social media is influencing democracies (Source).

To ensure that the data being shared could not be traced back to individuals, Facebook applied differential privacy techniques.
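
In essence, differential privacy adds calibrated random noise so that aggregate statistics stay useful while any single individual’s contribution is masked. A minimal sketch of the standard Laplace mechanism is below; the count and parameters are invented, and Facebook’s actual machinery is far more involved.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise scaled to sensitivity / epsilon."""
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# e.g. a privately released count of users who shared a given link:
print(dp_count(12840, epsilon=0.5))  # close to, but never exactly, the truth
```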

However, once the data was delivered, researchers complained that it did not include enough information about the disinformation campaigns to derive meaningful results. Some even insisted that Facebook was going against the original agreement (Source). As a result, some of the people funding this initiative are considering backing out.

Initially, Facebook was given a deadline of September 30 to provide the full data sets, or the entire research grants program would be shut down. While they have begun offering more data in response, the full data sets have not been provided.

A spokesperson from Facebook says, “This is one of the largest sets of links ever to be created for academic research on this topic. We are working hard to deliver on additional demographic fields while safeguarding individual people’s privacy.” (Source). 

While Facebook may be limiting academic research on democracies, perhaps it is finally prioritizing privacy. And, at the end of the day, with an ethical framework in place, the impact of social media on democracy can still be measured through technological advancement and academic research without compromising privacy.

In the end, it is clear that privacy promises hold the potential to manipulate people into action. While the US government may not have a national privacy law anywhere in sight, the motives behind Google’s encrypted links may be questionable, and Facebook’s sudden prioritization of privacy may cut off democratic research, at least privacy is becoming a hot topic, and that holds promise for a privacy-centric future for the public.

CryptoNumerics Partners with TrustArc on Privacy Insight Webinar


We’re excited to partner with TrustArc on their Privacy Insight Series on Thursday, September 26th at 12pm ET to talk about “Leveraging the Power of Automated Intelligence for Privacy Management”!

With the increasing prevalence of privacy technology, how can the privacy industry leverage the benefits of artificial intelligence and machine learning to drive efficiencies in privacy program management? Many papers have been written on managing the potential privacy issues of automated decision-making, but far fewer on how the profession can utilize the benefits of technology to automate and simplify privacy program management.

Privacy tools are starting to leverage technology to incorporate powerful algorithms to automate repetitive, time-consuming tasks. Automation can generate significant cost and time savings, increase quality, and free up the privacy office’s limited resources to focus on more substantive and strategic work. This session will bring together expert panelists who can share examples of leveraging intelligence within a wide variety of privacy management functions.


Key takeaways from this webinar:
  • Understand the difference between artificial intelligence, machine learning, intelligent systems, and algorithms
  • Hear examples of the benefits of using intelligence to manage privacy compliance
  • Understand how to incorporate intelligence into your internal program and/or client programs to improve efficiencies

Register Now!

Can’t make it? Register anyway – TrustArc will automatically send you an email with both the slides and recording after the webinar.


This content was originally posted on TrustArc’s website. Click here to view the original post.



What is your data worth?


How much compensation would you require to give a company complete access to your data? New studies suggest that attaching a price tag to data may be the wrong way to approach fines for noncompliance. Meanwhile, 51 CEOs write an open letter to Congress to request a federal consumer data privacy law, and the Internet Association joins them in their campaign. At the same time, Facebook is caught using Bluetooth in the background to track users and drive up profits.

Would you want your friends to know every facet of your digital footprint? How about your:

  • Location
  • Visited sites
  • Searched illnesses
  • Devices connected to the internet
  • Content read
  • Religious views
  • Political views
  • Photos
  • Purchasing habits


How about strangers? No? We didn’t think so. Then the question remains: why are we sharing non-anonymized or improperly anonymized copies of our personal information with companies?

Today, many individuals unknowingly share their data with companies who collect it for profit. This data is used to monitor behaviour and profile you for targeted advertising that earns big data and tech companies, like Facebook, $30 per year in revenue per North American user (Source). Due to the profitability of data mining and the increasing number of nine-figure fines for data breaches, researchers have become fascinated by the economics of privacy.

A 2019 study in the Journal of Consumer Policy questioned how users value their data. In the study, individuals stated they would only be willing to pay $5/month to protect their personal data. While the low price tag may sound like privacy is a low priority, it is more likely that individuals believe their privacy should be a given, rather than something they have to pay for. This theory is corroborated by the fact that when the question of ownership was reversed, and users were asked how much they would accept in exchange for full access to their data, the median response was $80/month (Source).

While this study demonstrates that the majority place a clear value on their data, some individuals attributed a much higher cost and others said they would share their data for free. Thus, the study concluded that “both willingness to pay and willingness to accept measures are highly unreliable guides to the welfare effects of retaining or giving up data privacy.” (Source)

In calling into question the ability of traditional measures of economic value to determine fines for data breaches and illegal data harvesting, other influential players in data privacy research were asked how to hold corporations accountable to privacy standards. Rebecca Kelly Slaughter, a Federal Trade Commission (FTC) commissioner, stated that “injury to the public can be difficult to quantify in monetary terms in the case of privacy violations.” (Source)

Rohit Chopra, a fellow FTC commissioner, also explained that current levels of monetary fines are not a strong deterrent for companies like Facebook, as their business model will remain untouched. As a result, the loss could be recouped through the further monetization of personal data. Consequently, both commissioners suggested that holding Facebook executives personally liable would be a stronger approach (Source).

If no price can equate to the value of personal data, and fines do not deter prolific companies like Facebook, should we continue asking what data is worth? Alessandro Acquisti, of Carnegie Mellon University, suggests an alternative: view data privacy as a human right. This model of thinking poses an interesting line of inquiry for both big data players and lawmakers, especially as federal data privacy legislation gains popularity in the US (Source).

On September 10, 51 top CEOs, members of Business Roundtable, an industry lobbying organization, sent an open letter to Congress to request a US federal data privacy law that would supersede state-level privacy laws to simplify product design, compliance, and data management. Among the CEOs were executives from Amazon, IBM, Salesforce, Johnson & Johnson, Walmart, and Visa.

Throughout the letter, the giants blamed the state-level patchwork of privacy regulations for the disorder of consumer privacy in the United States. Today, companies face an increasing number of state and jurisdictional laws that uphold varying standards to which organizations must comply. This, the companies argue, is an inefficient way to protect citizens, whereas a federal consumer data privacy law would provide reliable and consistent protections for Americans.

The letter also goes so far as to offer a proposed Framework for Consumer Privacy Legislation that the CEOs believe should be the base for future legislation. This framework states that data privacy law should…

  1. Champion Consumer Privacy and Promote Accountability
  2. Foster Innovation and Competitiveness
  3. Harmonize Regulations
  4. Achieve Global Interoperability

While a unified and consistent method of holding American companies accountable could benefit users, many leading privacy advocates, and even some tech giants, have questioned the intentions of the CEOs, regarding the proposal as a method “to aggregate any privacy lawmaking under one roof, where lobby groups can water-down any meaningful user protections that may impact bottom lines.” (Source)

This pattern of disingenuous pushes for a federal privacy law continued last week as the Internet Association (IA), a trade group funded by the largest tech companies worldwide, launched a campaign requesting the same. Its members are largely companies that profit from monetizing consumer data, including Google, Microsoft, Facebook, Amazon, and Uber (Source).

In an Electronic Frontier Foundation (EFF) article, this campaign was referred to as a “disingenuous ploy to undermine real progress on privacy being made around the country at the state level.” (Source) Should this occur, the federal law would supersede state laws, like The Illinois Biometric Information Privacy Act (BIPA) that makes it illegal to collect biometric data without opt-in consent, and the California Consumer Privacy Act (CCPA) which will give state residents the right to access and opt-out of the sale of their personal data (Source). 

In the last quarter alone, the IA spent close to USD $176,000 in an unsuccessful attempt to weaken the CCPA before it takes effect. Now, in conjunction with Business Roundtable and TechNet, it has called for a “weak national ‘privacy’ law that will preempt stronger state laws.” (Source)

One of the companies campaigning to develop a national standard is Facebook, who is caught up, yet again, in a data privacy scandal.

Apple’s new iOS 13 update looks to rework the smartphone operating system to prioritize privacy for users (Source). Recent “sneak peeks” showed that it will notify users of background activity from third-party apps: surveillance infrastructure used to generate profit by profiling individuals outside their app usage. The culprit highlighted, unsurprisingly, is Facebook, which has been caught using Bluetooth to track nearby users.

While this may not seem like a big deal, by “[m]atching Bluetooth (and wi-fi) IDs that share physical location [Facebook could] supplement the social graph it gleans by data-mining user-to-user activity on its platform.” (Source) Through this, Facebook can track not just your location, but the nature of your relationships with others. In pairing Bluetooth-gathered interpersonal interactions with social tracking (likes, followers, posts, messaging), Facebook can escalate its ability to monitor and predict human behaviour.

While you can opt-out of location services on Facebook, this means you cannot use all aspects of the app. For instance, Facebook Dating requires location services to be enabled, a clause that takes away a user’s ability to make a meaningful choice about maintaining their privacy (Source).

In notifying users about apps using their data in the background, iOS 13 looks to bring back a measure of control to the user by making them aware of potential malicious actions or breaches of privacy.

In the wake of this, Facebook’s reaction has tested the bounds of reality. In an attempt to get out of the hot seat, they have rebranded the new iOS notifications as “reminders” (Source) and, according to Forbes, un-ironically informed users “that if they protect their privacy it might have an adverse effect on Facebook’s ability to target ads and monetize user data.” (Source) At the same time, Facebook PR has also written that “We’ll continue to make it easier for you to control how and when you share your location,” as if to take credit for Apple’s new product development (Source).

With such comments, it is clear that in the upcoming months we will see how individuals weigh their privacy against convenience. Between the debate over the value of data, the question of who should govern consumer privacy rights, and another privacy breach by Facebook, the relevance of the data privacy conversation is evident. To stay up to date, sign up for our monthly newsletter and keep an eye out for our weekly blogs on privacy news.



The Power of AI and Big Data: What Will GDPR and CCPA Mean for You?


What does your data say about you? More than you think. A study from the University of Edinburgh highlights the power of AI and big data to predict the views of “silent” social media users, while a GDPR crackdown signals what companies operating in California can expect when the CCPA takes effect in January 2020.

Last week, a study from the University of Edinburgh demonstrated that AI can predict the political and religious views of “silent” social media users: individuals who do not post any content online. The findings showed that existing methods of analysis, which assess the text of an individual’s posts, were less effective than analyzing the way people engage with others’ content by liking or following specific posts and people. Even online, it seems, actions speak louder than words.

In the study, researchers examined the data of 2,000 public Twitter accounts and were able to reveal ~75% of people’s views on key issues, including atheism, feminism, and climate change, by assessing online actions (likes, followers, etc.) in combination with their personal posts. 
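
As a toy sketch of the idea (not the study’s actual code; the features and labels below are invented), a stance can be predicted from engagement signals alone, with no post text involved:

```python
from sklearn.linear_model import LogisticRegression

# Each row: [likes of topic posts, follows of topic accounts, retweets on topic]
X = [[12, 3, 5], [0, 0, 1], [8, 2, 4], [1, 0, 0]]
y = [1, 0, 1, 0]  # 1 = holds the view, 0 = does not

model = LogisticRegression().fit(X, y)
print(model.predict([[10, 2, 3]]))  # engagement alone drives the prediction
```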

Such findings could revolutionize the way social media users’ data is tracked, unlocking the ability to accurately predict information about “silent” users for the first time.

Researchers explained that this dangerous new approach leaves people vulnerable to being targeted with false information, as malicious users can better predict individuals’ behaviour and tailor information that appeals to their personal biases. In light of this, the researchers call for “improved privacy measures to prevent publicly available data being used to infer people’s personal views.” (Source)

Such findings highlight the potential abuse of power when pairing AI with big data, as well as the “need to develop regulations and counter algorithms to preserve the privacy of social media users.” (Source) However, changes in privacy protection are necessary not just on social media, but across all sources and activities.

Last week, GDPR regulators indicated a willingness to impose monumental fines in cases of extreme data breaches, as a record £183 million fine was doled out to British Airways after a breach on its website redirected users to a fake site that compromised 500,000 individuals’ personal data (Source). The size of the fine suggests that, one year in, GDPR authorities are calling on companies to respect data regulations under threat of pursuit by regulatory authorities.

The fine was equivalent to 1.5% of British Airways’ 2017 global revenue, a fraction of the potential financial penalty. In the future, we expect to see fines upwards of USD $3 billion, as the Information Commissioner’s Office (ICO) can assign noncompliance fines of up to 4% of a company’s global revenue. With such high stakes, and a national US data privacy law in the works, a transformation of corporate responsibility and accountability is coming.

In January 2020, the California Consumer Privacy Act (CCPA) will be implemented, marking a real change in data privacy (Source). Intended to increase the transparency of data usage by tech giants and data-trafficking companies, it will give Californians more control over their data. Starting in a few months’ time, all Californians will be able to request that their information be provided to them or deleted, and to opt out of its sale by retailers, restaurants, banks, and many other companies. In fact, the law applies to any for-profit business that does business in California or collects data on California residents and meets any of the following criteria: (a) has an annual revenue of over $25 million, (b) holds information on over 50,000 consumers, or (c) earns 50% or more of its annual revenue by selling user data (a toy applicability check is sketched below). This means that even companies with only an online presence that services Californians will be affected. According to the IAPP, approximately 500,000 US businesses will meet the criteria, including the likes of Starbucks Corp., Wells Fargo, and Mattel Inc.
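
That applicability test is simple enough to express directly. The sketch below encodes the three thresholds described above; the parameter names are ours, and a blog sketch is no substitute for legal advice.

```python
def ccpa_applies(annual_revenue_usd: float,
                 consumers_held: int,
                 revenue_share_from_data_sales: float) -> bool:
    """True if any one of the CCPA's three statutory thresholds is met."""
    return (annual_revenue_usd > 25_000_000
            or consumers_held > 50_000
            or revenue_share_from_data_sales >= 0.5)

# An online-only retailer with modest revenue but a large customer file:
print(ccpa_applies(10_000_000, 60_000, 0.0))  # True, via the consumer-count test
```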

The new rights given to Californians will require most companies to overhaul their data collection systems. As of today, the mass of disparate systems used to store data will not enable companies to provide accurate results to state residents. However, despite the looming deadline, a PricewaterhouseCoopers survey last year of over 300 executives at US companies with revenues of over USD $500 million showed that only 52% of respondents expected their company to be CCPA-compliant by January 2020. On the other hand, companies like Gap, which operate in Europe and had to come into compliance with GDPR last year, are at an advantage moving forward.

While CCPA infractions are not penalized the same way as GDPR’s, the fines could easily reach USD $1 million, because any business operating in California could face civil charges of $750 per violation per user. Sizeable data breaches in corporations with large customer bases will add up quickly (Source).
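
To see how quickly that scales, here is a back-of-the-envelope sketch (the affected-user counts are hypothetical):

```python
PENALTY_PER_USER = 750  # maximum civil penalty per violation per user

for affected_users in (1_400, 100_000, 1_000_000):
    print(f"{affected_users:>9,} users -> ${affected_users * PENALTY_PER_USER:,}")
# 1,400 users already exceeds $1 million; large breaches add up fast.
```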

In the wake of GDPR crackdowns, perhaps the prospect of monstrous regulatory penalties will motivate some companies to change rapidly, avoiding hefty fines and keeping customer trust high in the emerging digital economy.

On top of that, as more governments begin to implement privacy protection laws, the potential for companies to be held legally and financially accountable for their data breaches by multiple regulatory authorities, across levels of government, is a real possibility. The message is clear: as AI and big data continue to unlock new ways to analyze people, corporations must improve their data security and privacy protection techniques, or risk fines that will have a real impact on business operations.
