The Privacy Risk Most Data Scientists Are Missing



Data breaches are becoming increasingly common, and the cost of being involved in one is going up. A report from the Ponemon Institute (an IBM-sponsored research group) found that the average cost of a data breach in 2018 was $148 per record, up nearly 5% from 2017.

To satisfy privacy regulations, compliance teams use methods like masking and tokenization to protect their data — but these methods come at a cost.
Businesses often find that these solutions prevent data from being leveraged for analytics and, on top of that, still leave the data exposed.

Many data scientists and compliance departments protect and secure direct identifiers. They hide an individual’s name, or their social security number, and move on. The assumption is that removing these unique values de-identifies the dataset. Unfortunately, that is not the case.

In 2006, Netflix announced a $1 million competition for whoever could build the best movie-recommendation engine. To facilitate this, it released a large volume of subscriber ratings with direct identifiers redacted, so engineers could use Netflix’s actual data without compromising consumer privacy. However, researchers showed that the remaining indirect identifiers (also known as quasi-identifiers), taken in combination with public information such as IMDb reviews, could re-identify individual subscribers with high accuracy. The result was the exposure of Netflix customers’ viewing histories, a privacy lawsuit against Netflix, and, in 2010, the cancellation of a planned sequel to the competition.
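The linkage attack behind this kind of exposure is simple. The toy pandas sketch below (entirely invented data) shows how joining a "de-identified" table against a public auxiliary dataset on nothing but quasi-identifiers re-attaches a name to every row:

```python
import pandas as pd

# "De-identified" dataset: names removed, quasi-identifiers kept (toy data).
ratings = pd.DataFrame({
    "zip": ["90210", "10001", "60601"],
    "age": [34, 27, 45],
    "gender": ["F", "M", "F"],
    "rating": [5, 3, 4],
})

# Public auxiliary dataset that still carries names (e.g. a voter roll).
public = pd.DataFrame({
    "name": ["Alice", "Bob", "Carol"],
    "zip": ["90210", "10001", "60601"],
    "age": [34, 27, 45],
    "gender": ["F", "M", "F"],
})

# Joining on the quasi-identifiers re-attaches a name to every "anonymous" row.
reidentified = ratings.merge(public, on=["zip", "age", "gender"])
print(reidentified[["name", "rating"]])
```

With only three quasi-identifiers, every record in this toy example links back to exactly one named individual — which is why redacting direct identifiers alone is not de-identification.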

When it comes to the risk posed by indirect identifiers, it’s not a question of if, but when. That’s a lesson companies keep learning the hard way: Marriott, the hotel chain, suffered a breach of 500 million consumer records and faced $72 million in damages due to a failure to protect indirect identifiers.

Businesses face a dilemma. Do they redact all their data and leave it barren for analysis? Or do they leave indirect identifiers unprotected, creating an avenue of exposure that will eventually leak their customers’ private data?

Either option causes problems. It doesn’t have to be this way.

That’s why we founded CryptoNumerics. Our software uses AI to autonomously classify your datasets into direct, indirect, sensitive, and insensitive identifiers. We then apply cutting-edge data science techniques like differential privacy, k-anonymization, and secure multi-party computation to anonymize your data while preserving its analytical value. Your datasets are comprehensively protected and de-identified, while remaining usable for machine learning and data analysis.

Data is the new oil. Artificial intelligence and machine learning represent the future of technological value, and any company that does not keep up will be disrupted and left behind. Businesses cannot afford to leave data siloed or uncollected.

Likewise, data privacy is no longer an issue that can be ignored. Scandals like Cambridge Analytica and policies like GDPR prove that, yet the industry still underestimates key risks like indirect identifiers. Companies that use their data irresponsibly will feel the damage, but those that don’t use their data at all will be left behind. Choose not to fall into either category.

Join our newsletter



Announcing CN-Protect for Data Science


We are pleased to announce the launch of CN-Protect for Data Science.

CryptoNumerics announces CN-Protect for Data Science, a Python library that applies insight-preserving data privacy protection, enabling data scientists to build better quality models on sensitive data.  

Toronto – April 24, 2019 – CryptoNumerics, a Toronto-based enterprise software company, announced the launch of CN-Protect for Data Science, which enables data scientists to implement state-of-the-art privacy protection, such as differential privacy, directly in their data science stack while maintaining analytical value.

According to a 2017 Kaggle survey, two of the top 10 challenges data scientists face at work are data inaccessibility and privacy regulations, such as GDPR, HIPAA, and CCPA. Additionally, common privacy protection techniques, such as data masking, often destroy the analytical value of the data. CN-Protect for Data Science solves these issues by allowing data scientists to seamlessly privacy-protect datasets so that they retain their analytical value and can subsequently be used for statistical analysis and machine learning.

“Private information contained in data is preventing data scientists from obtaining insights that can help meet business goals. They either cannot access the data at all or receive a low-quality version with the private information removed,” said Monica Holboke, co-founder and CEO of CryptoNumerics. “With CN-Protect for Data Science, data scientists can incorporate privacy protection into their workflow with ease and deliver more powerful models to their organization.”

CN-Protect for Data Science is a privacy-protection Python library that works with Anaconda, scikit-learn, and Jupyter notebooks, integrating smoothly into the data scientist’s workflow. Data scientists will be able to:

  • Create and apply customized privacy protection schemes, streamlining the compliance process.
  • Preserve analytical value for model building while ensuring privacy protection.
  • Implement differential privacy and other state-of-the-art privacy protection techniques using only a few lines of code.
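To give a sense of what “a few lines of code” means here, the snippet below sketches the Laplace mechanism, the textbook building block of differential privacy. This is generic, illustrative NumPy code, not CN-Protect’s actual API:

```python
import numpy as np

def laplace_mean(values, lower, upper, epsilon, rng=None):
    """Release a differentially private mean of `values`.

    Each value is clipped to [lower, upper], so any one person can shift
    the mean by at most (upper - lower) / n -- the sensitivity -- and
    Laplace noise with scale sensitivity / epsilon masks that contribution.
    """
    rng = rng or np.random.default_rng()
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

ages = np.array([34, 27, 45, 52, 38, 29, 61, 44])
print(laplace_mean(ages, lower=0, upper=100, epsilon=1.0))
```

A smaller epsilon means more noise and stronger privacy; a larger one means a more accurate, less protected answer — the privacy/utility trade-off the press release refers to.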

CN-Protect for Data Science follows the successful launch of the CN-Protect desktop app in March. It is part of CryptoNumerics’ effort to bring insight-preserving data privacy protection to data science platforms and data engineering pipelines while complying with GDPR, HIPAA, and CCPA. CN-Protect editions for SAS, RStudio, Amazon AWS, Microsoft Azure, and Google GCP are coming soon.




Announcing CN-Protect Free Downloadable Software for Privacy-Protection


We are pleased to announce the launch of CN-Protect as free, downloadable software to create privacy-protected datasets. We believe:

 

  • Protecting consumer privacy is paramount.
  • Satisfying privacy regulations such as HIPAA, GDPR, and CCPA should not sacrifice analytical value.
  • Data scientists, privacy officers, and legal teams should have the ability to easily ensure privacy.

Today’s businesses face data breaches and misuse of consumer information on a regular basis. In response, governments have moved to protect their citizens through regulations like GDPR in Europe and CCPA in California. Organizations are scrambling to comply with these regulations without adversely impacting their business — but whatever the cost of compliance, people’s privacy should not be compromised.

Current approaches to de-identifying data, such as masking, tokenization, and aggregation, can leave data unprotected or strip its analytical value.

  • Data masking replaces sensitive values with realistic-looking but fake ones and is not reversible. Applied to all values, the data has no analytical use; applied to only some, it does not protect against re-identification.
  • Tokenization replaces sensitive values with non-sensitive tokens that map back to the original data; without access to the tokenization system, the tokens cannot be reversed. Tokenized fields lose all data utility, and re-identification is still possible through the untokenized fields.
  • Aggregation summarizes the data cumulatively so that no single individual stands out. It severely reduces analytical value, and if the data does not contain enough samples, re-identification is still possible.
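For concreteness, masking and tokenization can be sketched in a few lines of Python. This is a toy illustration only; production tokenization systems use hardened, access-controlled token vaults:

```python
import secrets

# Masking: overwrite the sensitive value with a same-shaped placeholder.
# Irreversible, and the masked field carries no analytical value.
def mask(value):
    return "*" * len(value)

# Tokenization: swap the value for a random token and record the mapping
# in a vault. Reversible only with access to the vault.
vault = {}

def tokenize(value):
    token = secrets.token_hex(8)
    vault[token] = value
    return token

def detokenize(token):
    return vault[token]

ssn = "123-45-6789"
print(mask(ssn))             # a run of asterisks, nothing recoverable
token = tokenize(ssn)
print(token)                 # random hex token, useless on its own
print(detokenize(token))     # original value, via the vault
```

Note that neither helper touches the other columns of a record — which is exactly why quasi-identifiers left in the clear can still re-identify people.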

CN-Protect leverages AI and the most advanced anonymization techniques, such as optimal k-anonymity and differential privacy, to protect your data while maintaining its analytical value. Furthermore, CN-Protect is easy to adopt: it is available as a downloadable application or as a plug-in for your favorite data science platform.

With CN-Protect you can:

  • Comply with privacy regulations such as HIPAA, GDPR, and CCPA;
  • Create privacy protected datasets while maintaining analytical value.

There are a variety of privacy models and data quality metrics available that you can choose from depending on your desired application. These privacy models use anonymization techniques to protect private information, while data quality metrics are used to balance those techniques against the analytical value of the data.

The following privacy models are available in CN-Protect:

  • Optimal k-Anonymity;
  • t-Closeness;
  • Differential Privacy, and more.
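As a concrete illustration of the first of these models, the toy Python snippet below (illustrative code, not CN-Protect’s) measures a dataset’s k and shows how generalizing quasi-identifiers raises it:

```python
import pandas as pd

def k_of(df, quasi_ids):
    """Smallest equivalence-class size over the quasi-identifiers:
    the dataset is k-anonymous for this k."""
    return int(df.groupby(quasi_ids).size().min())

df = pd.DataFrame({
    "zip": ["90210", "90211", "90210", "90211"],
    "age": [34, 46, 33, 45],
    "diagnosis": ["flu", "cold", "flu", "flu"],
})
print(k_of(df, ["zip", "age"]))   # 1: every (zip, age) pair is unique

# Generalize the quasi-identifiers: truncate zip codes, bucket ages by decade.
df["zip"] = df["zip"].str[:3] + "**"
df["age"] = (df["age"] // 10) * 10
print(k_of(df, ["zip", "age"]))   # 2: each class now holds two records
```

Every generalization step trades analytical precision for anonymity; an *optimal* k-anonymizer searches for the generalization that hits the target k while giving up the least data quality.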

You will be able to:

  • Specify parameters for the various privacy models that can be applied across your organization and fine-tune for your many applications;
  • Define acceptable levels of privacy risk for your organization and the intended use of your data;
  • Get quantifiable metrics that you can use for compliance;
  • Understand the impact of privacy protection on your statistical and machine learning models.

Stay ahead of regulations and protect your data. Download CN-Protect now for a free trial!




Weekly News #3



Experts predict that data privacy will take center stage in 2019 and that organizations will have to fully embrace it. Google and other cloud providers are already jumping into the privacy wave by offering de-identification tools for healthcare data.

Data privacy became a major topic in 2018. On one hand, GDPR came into effect in Europe, affecting organizations around the world. On the other, massive cases of data breaches and data misuse were reported, raising customer concerns and prompting legislators to propose new privacy laws.

2019 is expected to be the year in which organizations shift from treating privacy as a nice-to-have to a must-have. This shift will come partly from legislation but also from consumers demanding stronger data protection. Kristina Bergman, CEO of Integris Software Inc., predicts that 2019 will bring:

  • the rise of the Chief Information Security Officer;
  • privacy and security being treated as a continuum;
  • a growing conflict between privacy and the Data Industrial Complex;
  • the growth of data privacy automation.

In Canada, Howard Solomon interviewed four privacy and security experts, and these are their predictions:

  • David Senf, founder and chief analyst at the Toronto cyber consultancy Cyverity, predicts an increase in demand for cybersecurity experts to protect against data breaches.
  • Ann Cavoukian, Expert-in-Residence at Ryerson University’s Privacy by Design Centre of Excellence, predicts that 2019 will be a “privacy eye-opener” with a growth of decentralization and SmartData.
  • Imran Ahmad, a partner at the law firm of Blake, Cassels & Graydon LLP, advises that HR should become more involved in preventing data misuse.
  • Ahmed Etman, managing director for security at Accenture Canada, warns that organizations have to be careful of cyberattacks against their supply chain.

Meanwhile, some organizations are jumping into the privacy wave by launching products that help their customers make better use of their data while protecting privacy.

One thing we can be sure in 2019 is that data privacy and security will continue to make headlines.




Weekly News #2



New information on Facebook’s user data misuse causes a $30 billion market-value loss. US senators propose the Data Care Act to regulate privacy across the 50 states. Reporting data breaches is now mandatory in Canada. The Department of Health and Human Services wants to modify HIPAA.

Facebook lost $30 billion in market value after the New York Times published documents on December 18 detailing agreements Facebook had with companies like Microsoft, Netflix, Spotify, Amazon, and Yahoo to access Facebook users’ data. Netflix and Spotify, for example, could read users’ private messages. And that was not all: on December 14, Facebook notified its users of a bug in its Photo API that gave developers access to the non-shared photos of 5.6 million users.

Prompted by the recent data breaches, 15 US senators proposed the Data Care Act on Wednesday to regulate privacy across all 50 states. The Data Care Act’s main provisions are:

  • Duty of Care – Must reasonably secure individual-identifying data and promptly inform users of data breaches that involve sensitive information;
  • Duty of Loyalty – May not use individual-identifying data in ways that harm users;
  • Duty of Confidentiality – Must ensure that the duties of care and loyalty extend to third parties when disclosing, selling, or sharing individual-identifying data;
  • Federal and State Enforcement – A violation of the duties will be treated as a violation of an FTC rule with fine authority. States may also bring civil enforcement actions, but the FTC can intervene;
  • Rulemaking Authority – FTC is granted rulemaking authority to implement the Act.

On November 1st, reporting data breaches became mandatory in Canada. This is an important step for Canadian privacy regulation and one that will require a shift in how Canadian businesses operate: according to Statistics Canada, only 10% of businesses affected by a cyber attack report it.

The Department of Health and Human Services (HHS) issued a Request For Information (RFI) for input on how to modify HIPAA on the following issues:

  • Encouraging information-sharing for treatment and care coordination;
  • Facilitating parental involvement in care;
  • Addressing the opioid crisis and serious mental illness;
  • Accounting for disclosures of protected health information for treatment, payment, and health care operations;
  • Changing the current requirement for certain providers to make a good faith effort to obtain an acknowledgment of receipt of the Notice of Privacy Practices.

After a 2018 plagued by data breaches and landmark privacy regulation (GDPR), we can expect 2019 to be a year in which protecting privacy becomes a must for public and private organizations. SC Magazine offers eight privacy predictions for 2019, most of which revolve around regulations and their impact on the behavior of organizations and consumers.
