2 ways in which organizations can open their data in a privacy protected way

Opening data means making it accessible for any person to view, use, and share. While seemingly daunting, opening data helps researchers, scientists, governments, and companies better provide for the greater good of society. 

The McKinsey Global Institute predicted that open data could unlock USD 5 trillion per year in economic value globally. However, opening data can’t happen without privacy protection.

Data marketplaces have begun popping up to make data more accessible and, in some cases, to monetize it. These marketplaces are collections of data, with a degree of privacy protection, organized for buying and selling between companies for analytic purposes.

Once your company’s user data is properly privacy-protected, marketplaces are an excellent way to generate additional revenue, open up partnerships, and expand the value of your data.

Healthcare data marketplace

Healthcare is an industry that stands to benefit greatly from opening data. Making more data available can help find cures for rare diseases, monetizing data can provide much-needed financial resources to hospitals, and letting external experts analyze data can lead to discoveries.

To take advantage of this opportunity, Mayo Clinic, a not-for-profit, academic medical center committed to clinical practice, education, and research, announced the launch of a healthcare data platform. The medical center is looking to digitize 25 million pathology slides in the next two years, creating the largest source of labeled medical data in the world, easily accessible to doctors and researchers seeking the information necessary to diagnose or educate.

Mayo Clinic is emphasizing the importance of eliminating the risk of re-identification of Personal Health Information (PHI) while maintaining the value of the data. 

Marketplaces in other industries

Data marketplaces have appeared in other industries, from finance to marketing, and government. The available information is beneficial to everyone involved.

Governments have been leading the open data movement; for example, the United States Census Bureau created the leading source of statistical information on US citizens. Through its website, researchers can find data on employment, population, and other statistics relevant to the American people. This data is collected and de-identified such that no American can be re-identified, while the information remains valuable.

On the marketing side, there are a couple of examples. Salesforce launched Data Studio, and Oracle created the Oracle Data Marketplace. These projects allow companies to buy and sell data to better understand their customers and marketing activities.

How is CryptoNumerics contributing to open data?

Recently, we partnered with Clearsense, a healthcare data analytics company that is reimagining how healthcare organizations manage and utilize their data.

Clearsense helps healthcare organizations unlock the power of their disparate data sources to lower cost, increase revenue, and improve outcomes. 

Through the use of our products, CN-Protect and CN-Insight, Clearsense helps its healthcare partners in two ways:

  • Anonymize their data so that it can be shared. Following HIPAA standards and using state-of-the-art privacy protection techniques, the original datasets are transformed, resulting in a privacy-protected dataset that preserves its analytical value.
  • Perform privacy-protected analytics. There are cases in which various datasets need to be combined; however, due to regulatory restrictions, these datasets cannot be moved, limiting their usefulness. With the help of CN-Insight, Clearsense was able to overcome this challenge and perform analytics on datasets as if they were combined, without relocating them.

Clearsense can now offer its customers an opportunity to open up their data in a way that is compliant with regulations and cost-effective.

By protecting data privacy while maintaining data value, opening up collected information helps move us toward a privacy-safe future that benefits from the enormous amounts of data generated every second.

To learn more about our partnership with Clearsense, watch our webinar, Facilitating Multi-Site Research: Privacy, HIPAA and Data Residency.

To learn more about the Mayo Clinic project, read our blog, Two Paths of Data Monetization: Exploitation or Protection.

 



CCPA 1 month in review

The California Consumer Privacy Act (CCPA) is privacy legislation that regulates companies that collect and process data of California residents, even if the company is based elsewhere. The law requires that consumers are given the option to opt-out of data collection/selling, and/or have their data completely removed from those datasets. 

Additionally, any data that is collected still has to be protected. Not only does this protect consumers, but it also makes it easier for companies to comply with data deletion requests.

While CCPA came into effect on January 1st, it has yet to create the waves in privacy that many were hoping for. 

What is happening to my data privacy? 

As of right now, not too much. Many large companies, such as Facebook, have made changes to their privacy policies in order to be compliant; however, many others have been slow to do so. The rules of compliance remain a work in progress, generating confusion and a slow start for companies adjusting to the changing law.

Mary Stone Ross, associate director of the Electronic Privacy Information Center, says that enforcement of CCPA will likely not start for months and will be underfunded. On top of this, it appears that only about three CCPA cases are likely to be prosecuted per year.

Because of this, enforcement of CCPA will not begin until July, even though the law is already in effect.

Part of the legislation includes the opportunity to request my data. Is this something companies have started abiding by? 

While many companies are complying with CCPA and returning user data, others are making the interaction more complicated than necessary. Some companies are redirecting their customers to multiple outside organizations while others are offering to send data and then never following through. 

One writer at the Guardian requested her data from Instagram, and while she received 3.92 GB of data, there was plenty of information that the photo-sharing giant left out of her report.

Despite the 8,000 photos, direct messages, and search history, there was not much that couldn’t already be found in the app. The company failed to send the metadata that its data policy states it stores, such as the locations where photos were taken.

Instagram is not the only application to send incomplete information when requested. Spotify, a leading music streaming platform, complies with CCPA in sharing data. However, after denying one user’s original request, the platform responded with a mere 4.7-megabyte file, despite this person having a nine-year-old account.

Another social media platform, Twitter, sent users their files as JavaScript, making it nearly impossible for users without coding knowledge to understand the contents of their Twitter history.

Such companies are getting away with bare-minimum compliance, and they are allowed to do so. Companies like Instagram can send snippets of data when requested, and users cannot prove that they did not receive all of it.

Because CCPA is not yet fully enforced, companies can mislead users into thinking they are abiding by the law without adequately protecting their data.

Is my data still being sold? 

CCPA requires that companies provide users with the opportunity to opt out of data sharing and selling. However, in many cases, the option is buried in small print and difficult for a user to find.

Data aggregators have partnered with companies participating in data sharing and are the go-to when users want to opt out of it.

Acxiom is an example of a company easing the burden on consumers who want their data back. After a user submits their information on the Acxiom site, the authorized agent scours sites, requesting the deletion or viewing of that user’s data.

The issue with sites such as Acxiom is that the majority of internet users are unfamiliar with these types of applications. Thus, finding ways to view and delete your data becomes exhausting. 

The average Internet user spends over six hours online per day. With attention spans decreasing, the number of websites one person may visit per day could be well over 50. Users visiting a webpage for only one article, or for only a few minutes, are unlikely to spend extra time searching for a Do Not Sell link.

Because of this, companies remain incentivized to hide the opportunity for users to take control of their data. And while CCPA should protect the average user’s data, its actual impact remains unclear.



Facial Recognition added to the list of privacy concerns

Personal data privacy is a growing concern across the globe. And while we focus on where our clicks and metadata end up, a new front of privacy invasion is being introduced: the world of facial recognition.

Unbeknownst to the average person, facial recognition and tracking have infiltrated our lives in many ways and will only continue to grow in relevance as technology develops. 

Companies like Clearview AI and Microsoft are on two ends of the spectrum when it comes to facial recognition, with competing technologies and legislation fighting to protect or expose personal information. Data privacy remains an issue as well, as products like Apple’s Safari have been revealed to be leaking the very information they are sworn to protect.

Clearview AI is threatening privacy as we know it

Privacy concerns driven by facial recognition are growing and increasingly relevant.

Making big waves in facial recognition software is a company called Clearview AI, which has created a facial search engine of over 3 billion photos. On Sunday, January 18th, the New York Times (NYT) wrote a scathing piece exposing the 2017 start-up. Until now, Clearview AI has managed to keep its operations under wraps, quietly partnering with 600 law enforcement agencies. 

Submit a photo of one person to the Clearview software, and it spits out hundreds of pictures of that same person from all over the web. Not only are the images exposed, but also information about where they were taken, which can lead to uncovering vast amounts of data on one person.

For example, the software was able to identify a murder suspect just from their face appearing in a mirror reflection in another person’s gym photo.

The company is facing serious questions about privacy risk. Not only are millions of people’s faces stored in this software without their knowledge, but the chances of the software being used for unlawful purposes are incredibly high.

The NYT also revealed that the software can pair with augmented reality glasses: someone could walk down a busy street and identify every person they passed, including their address, age, and more.

Many services, including Facebook and Twitter, prohibit scraping users’ images. However, Clearview has violated those terms. When asked about the Facebook violation, CEO Hoan Ton-That brushed the question off, saying everybody does it.

As mentioned, hundreds of police agencies in both the U.S. and Canada have allegedly been using Clearview’s software to solve crimes since February of 2019. However, a Buzzfeed article has revealed that Clearview’s claim about helping to solve a 2019 subway terrorist threat is false. The incident was a selling point for the facial recognition company in partnering with hundreds of law enforcement agencies across the U.S. The NYPD has stated it was not involved at all.

This company has introduced a dangerous tool into the world, and there seems to be no coming back. While it has great potential to help solve serious criminal cases, the risk for citizens is astronomical. 

Microsoft at the front of facial recognition protection

To combat privacy violations like those raised by Clearview AI, cities such as San Francisco have recently banned facial recognition technologies, fearing a privacy invasion.

Appearing in front of the Washington State Senate, two Microsoft employees sponsored two proposed bills supporting the regulation of facial recognition technologies. Rather than banning facial recognition, these bills look to place restrictions and requirements on the technology’s owners.

Despite Microsoft offering facial recognition as a service, its president Brad Smith called for regulating facial recognition technologies in 2018.

Last year, similar bills, drafted by Microsoft, made it through the Washington Senate. However, they did not go forward, as the House made changes that Microsoft opposed. The House’s amendments included a certification that the technology worked for all skin tones and genders.

The first of these new Washington bills looks similar to the California Consumer Privacy Act, which Microsoft has stated it complies with. The bill also requires companies to inform their customers when facial recognition is being used, and companies would be unable to add a person’s face to their database without direct consent.

The second bill has been proposed by Joseph Nguyen, who is both a state senator and a program manager at Microsoft. This proposed bill focuses on government use of facial recognition technology. 

A section of the second bill requires that law enforcement agencies obtain a warrant before using the technology for surveillance. This requirement has been met with pushback from some law enforcement agencies, which argue that people have no expectation of privacy in public and thus a warrant is unnecessary.

Safari exposed as a danger to user privacy

On the data-tracking front, Google’s Information Security team has released a report detailing several security issues in the design of Apple’s Safari Intelligent Tracking Prevention (ITP).

ITP is meant to protect users from tracking across the web by preventing third-party-affiliated websites from receiving information that would allow them to identify the user. The report lists two of ITP’s main functionalities:

  • Establishing an on-device list of prevalent domains based on the user’s web traffic
  • Applying privacy restrictions to cross-site requests to domains designated as prevalent

The report, created by Google researchers Artur Janc, Krzysztof Kotowicz, Lucas Weichselbaum, and Roberto Clapis, describes five different attacks that exploit ITP’s design. These attacks are:

  • Revealing domains on the ITP list
  • Identifying individual visited websites
  • Creating a persistent fingerprint via ITP pinning
  • Forcing a domain onto the ITP list
  • Cross-site search attacks using ITP

(Source)

Even so, the report admits that its suggested workarounds “will not address the underlying problem.”

Most interesting from the report is that, in trying to address privacy issues, Apple’s Safari created more significant ones.

As facial recognition continues to appear in our daily lives, recognizing and educating people about the implications of these technologies is critical to how we move forward as an information-protected society.

To read more privacy blogs, click here

 



Processing personal data through anonymization methods

Companies are becoming increasingly reliant on user data to understand consumers better and improve performance. But with the rise of new privacy legislation and the growing concerns for personal data security, ensuring that your company is checking all the boxes in privacy protection is more critical than ever.

By utilizing privacy-protecting techniques, organizations can protect consumer information while still extracting value from it. These techniques include masking, k-anonymity, and differential privacy.

By understanding the potential and challenges of each technique, organizations can process personal data so that users are not re-identifiable.

Let’s look at each of these three privacy-protection techniques.

Masking is the process of replacing the values in a dataset with different values that, in many cases, resemble the structure of the original value. Unfortunately, masking tends to destroy the analytical value of the data, since the replacement breaks the relationships between values.

The ideal use case for masking is in DevOps environments where there is a need for data, but the analytical value is irrelevant. 
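To make this concrete, here is a minimal Python sketch of structure-preserving masking (an illustration only, not how CN-Protect implements it): each character is replaced with a random character of the same class, so the value’s format survives while its meaning, and any relationship to other values, is destroyed.

```python
import random
import string

def mask_value(value: str) -> str:
    """Replace each character with a random one of the same class,
    preserving the structure (length, separators, casing) of the value."""
    out = []
    for ch in value:
        if ch.isdigit():
            out.append(random.choice(string.digits))
        elif ch.isupper():
            out.append(random.choice(string.ascii_uppercase))
        elif ch.islower():
            out.append(random.choice(string.ascii_lowercase))
        else:
            out.append(ch)  # keep separators such as '-', '.', '@'
    return "".join(out)

# A masked SSN keeps the NNN-NN-NNNN shape but loses its real value,
# which is why masked data suits test environments rather than analytics.
print(mask_value("123-45-6789"))           # e.g. "507-91-2384"
print(mask_value("jane.doe@example.com"))  # e.g. "wqhk.rte@zmkvpla.fjd"
```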

k-anonymity’s objective is to reduce privacy risk by grouping individual records into “cohorts.” Grouping is achieved by using generalization (substitution of a specific value with a more general one) and suppression (removal of values) to make the quasi-identifiers (QIDs) of records in a group indistinguishable from one another. The k value defines the minimum number of records in a group; the higher the value, the higher the level of data protection.

While k-anonymity reduces the analytical value of the data, it still preserves enough value for data scientists to perform analytics and Machine Learning using the dataset.
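As a toy illustration of these ideas (a simplified sketch, not a production anonymization pipeline; the records and the k value are made up for the example), the code below generalizes two quasi-identifiers, age and ZIP code, and suppresses any record whose generalized group is still smaller than k:

```python
from collections import Counter

records = [
    {"age": 34, "zip": "90210", "diagnosis": "flu"},
    {"age": 36, "zip": "90211", "diagnosis": "asthma"},
    {"age": 35, "zip": "90213", "diagnosis": "flu"},
    {"age": 52, "zip": "10014", "diagnosis": "diabetes"},
]

def generalize(rec):
    """Generalization: substitute specific QID values with broader ones."""
    decade = rec["age"] // 10 * 10
    return {
        "age": f"{decade}-{decade + 9}",  # 34 -> "30-39"
        "zip": rec["zip"][:3] + "**",     # "90210" -> "902**"
        "diagnosis": rec["diagnosis"],    # sensitive value stays untouched
    }

def k_anonymize(rows, k=2):
    """Keep only records whose generalized QID group has at least k members."""
    generalized = [generalize(r) for r in rows]
    counts = Counter((r["age"], r["zip"]) for r in generalized)
    # Suppression: drop records that are still distinguishable.
    return [r for r in generalized if counts[(r["age"], r["zip"])] >= k]

for row in k_anonymize(records, k=2):
    print(row)
# The three "30-39"/"902**" records form a cohort of size 3 (>= k);
# the lone "50-59"/"100**" record is suppressed.
```

Each surviving record now shares its quasi-identifiers with at least k-1 others, which is exactly what makes individuals indistinguishable within a cohort.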

Differential privacy is a technique that provides a mathematical guarantee on how much information can be extracted about any individual.

Differential privacy uses a technique called perturbation, which adds random noise until it becomes incredibly difficult to know with certainty whether a specific individual is present in a dataset.

Differential privacy is one of the most promising privacy techniques; however, it can only be used with large data sets because applying perturbation to a small data set would destroy its analytical value.
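As a rough sketch of how perturbation works in practice (a textbook Laplace-mechanism example with made-up data, not a production differential-privacy library), the code below answers a count query with noise scaled to sensitivity/ε, so the reported answer barely depends on whether any one individual is in the dataset:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(values, predicate, epsilon=0.5):
    """Answer 'how many values satisfy the predicate?' with an epsilon-DP guarantee.

    A count query has sensitivity 1: adding or removing one person changes
    the true answer by at most 1, so Laplace(1/epsilon) noise is enough to
    hide any single individual's presence."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [34, 36, 35, 52, 41, 29, 60]
print(private_count(ages, lambda a: a > 40))  # true answer is 3; output is ~3 plus noise
```

A smaller ε means more noise and a stronger guarantee, which is also why the technique needs large datasets: on a small one, the noise required would overwhelm the signal.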

With these privacy techniques, privacy and analytics no longer have to be at odds. Companies that ignore them are exposing themselves to unnecessary risk.

Contact us today to learn how you can use CN-Protect to apply any of these techniques to protect your data while preserving its analytical value.

To read more privacy blogs, click here



Looking Ahead to LGPD, Brazil’s GDPR

Since the implementation and success of the General Data Protection Regulation (GDPR), privacy has emerged globally as a legislative hot topic, pushing governments and consumers to take control of their privacy. Taking inspiration from the EU’s regulation, Brazil has created its own privacy legislation, the Brazilian General Data Protection Law (LGPD).

LGPD looks to unify Brazil’s 40 current privacy laws for online security. Through this implementation, Brazil seeks to consolidate and control how companies collect, use, disclose, and process personal data. 

LGPD is set to come into effect in August 2020, leaving companies with less than seven months to prepare and comply with the new legislation. However, companies that are already compliant with GDPR will find most of the preparation already in place.

GDPR has influenced countries around the world to follow its lead on privacy protection, and LGPD is no different. As originally conceived in 2018, it bore almost identical provisions, before amendments and vetoes. Much like with GDPR, complying with LGPD is necessary for any organization to maintain not only customer trust but also its analytics and monetization.

Just like GDPR, the legislation defines its applicability in Article 3, covering any operation that processes personal data within Brazil’s territory. This means that companies not located in Brazil, but dealing with data processed within Brazil, are required to comply with LGPD.

The law applies to any processing operation carried out by a natural person or a legal entity, of public or private law, irrespective of the means used for processing or the country where the entity’s headquarters or the data is located, provided that:

  • The processing operation is carried out in Brazil
  • The purpose of the processing activity is to offer or provide goods or services, or the processing of data of individuals located in Brazil
  • The personal data was collected in Brazil

(Source)

Article 7 lists a limited number of situations where the processing of personal data is allowed. The definitions include:

  • Consent of the data subject (consent must be given in writing or otherwise provable, and can be revoked at any time)
  • Compliance with a legal or regulatory obligation by the controller
  • Use of anonymized data, which is not considered personal data as long as it cannot be re-identified

How is personal data defined? 

LGPD takes an even broader approach than GDPR in its definition of personal data. It thus applies not only to data that explicitly identifies a person, but also to data from which an identity can be inferred.

Under both GDPR and CCPA, anonymized data remains viable for companies to use for monetization or analytics. LGPD, however, states that anonymized data is still considered personal when used for tasks such as behavioral tracking.

Article 18 of LGPD defines nine rights of personal data subjects. These include:

  • Confirmation of the existence of their data being processed
  • Access to the data
  • Correction of any incomplete, inaccurate, or out-of-date data
  • Anonymization, blocking, or deletion of unnecessary or noncompliant data
  • Portability of their data, i.e., transfer by express request to another service or processor
  • Deletion of personal data
  • Information about the public and private entities with which the controller has shared data
  • Information about the possibility of denying consent and the consequences of doing so
  • Revocation of consent

Who is the ANPD? 

The National Data Protection Authority (ANPD) is mentioned many times throughout the legislation as the body responsible for overseeing the enforcement of privacy and data protection laws in Brazil. The ANPD is responsible for monitoring compliance, issuing guidelines, and enforcing data protection laws throughout the country.

Central powers of the ANPD include:

  • Issuing guidelines for the implementation of LGPD, data protection, and privacy
  • Examining complaints
  • Investigating violations and applying sanctions
  • Preparing studies and educating society
  • Encouraging the adoption of standards for services and products that facilitate data subjects’ control over their data
  • Promoting cooperative actions with data protection authorities from other countries

This role is similar to that of France’s National Commission for Information Technology and Liberties (CNIL), put in place alongside the introduction of GDPR. Like the ANPD, the CNIL’s role includes ensuring compliance, informing data controllers of their responsibilities, rights, and obligations, and controlling and sanctioning.

How is LGPD different from GDPR? 

One of the main successes of GDPR was its hefty fines for companies, such as Google’s penalty of 50 million euros. Significant fines place companies at risk of losing tens of millions of dollars. However, LGPD has put forward much lighter fines and warnings for violations:

| Sanction | LGPD | GDPR |
| --- | --- | --- |
| Warning | No specific time frame for response, other than “an appropriate time period” | A reply must be issued within 72 hours of receiving the notification |
| Fines | Up to 2% of revenue in Brazil or R$50,000,000 (whichever is higher), equivalent to about USD 12.9 million | Up to 4% of annual revenue or €20 million (whichever is higher), equivalent to about USD 22 million |

As shown in the table above, LGPD’s fines and warnings are significantly smaller than those of GDPR.

Both GDPR and LGPD require data protection officers (DPOs), who are responsible for confirming that organizations are complying with personal data protection. However, under GDPR, the DPO is appointed by the data controller and processor, and only companies meeting specific requirements need one. LGPD instead requires that every company have an assigned DPO, appointed by the data processor.

GDPR has made clear the importance of compliance, and the introduction of LGPD is no different. Protecting consumer privacy is important not only for complying with LGPD but also for maintaining data analytics and monetization.

LGPD is a significant step in privacy regulations. Success in Brazil’s data privacy regulations could influence more countries to move towards these same regulations, just as GDPR has influenced many countries. As these privacy regulations begin to emerge across the globe, acting now is more important than ever in order to comply and succeed in the new privacy era. 

To read more privacy blogs, click here.

 
