The first step to privacy-protecting your data is to understand privacy attributes

To effectively protect people’s data, businesses need a risk metric to evaluate privacy exposure and the effectiveness of protection actions. And while many factors contribute to the overall privacy risk metric, the privacy risk of each dataset is the most important one.

Calculating privacy risk requires understanding that each value in the dataset has a specific privacy attribute, depending on how unique the value is and its relationship to other values in the dataset. For example, an email address is unique while gender is not; however, by combining gender, age, and zip code, the risk of re-identification becomes very high.

There are three privacy attributes that any value in a dataset can have:

Direct identifiers: These values are highly unique, representing close to a 100% risk of re-identification. Examples of direct identifiers are name, social security number, credit card number, and email address.

Quasi-identifiers: These values are not unique, so their individual privacy risk is low; however, when combined with other quasi-identifiers, the risk increases considerably. In a well-known re-identification case, an MIT student was able to identify the governor of Massachusetts using only gender, zip code, and birthday.

Sensitive attributes: While the uniqueness of these values can vary, their main characteristic is that disclosing them could harm the individual they relate to. For example, disclosing a health diagnosis could lead to discrimination.

In general, businesses have a clear understanding of direct identifiers and how to deal with them. Unfortunately, in many cases, quasi-identifiers and sensitive attributes are not considered, leaving the business exposed to high privacy risks. 
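To make the quasi-identifier risk concrete, one common way to estimate re-identification risk is to count how many records share each combination of quasi-identifier values (the group size used in k-anonymity): a record in a group of one is effectively as identifying as a direct identifier. The sketch below uses hypothetical records and column choices, not CN-Protect’s actual algorithm.

```python
from collections import Counter

# Hypothetical records: (gender, age, zip code) are the quasi-identifiers.
records = [
    ("F", 34, "02138"),
    ("F", 34, "02138"),
    ("M", 51, "02139"),
    ("F", 29, "02138"),
]

# Count how many records share each quasi-identifier combination.
group_sizes = Counter(records)

# A record's re-identification risk is 1 / (size of its group):
# a unique combination (group of 1) carries a 100% risk.
for record in records:
    risk = 1 / group_sizes[record]
    print(record, f"risk={risk:.2f}")
```

Even in this tiny example, two of the four records are unique on their quasi-identifiers alone, which is exactly the exposure that combining gender, age, and zip code creates.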

Now you know how to classify the different values in your dataset using privacy attributes. Software like CN-Protect can help you streamline this task through its smart classification algorithm, which leverages AI to learn from your specific requirements and use cases.

Facebook collecting healthcare data

As many of our previous blogs have highlighted, COVID-19 is severely impacting the tech world. Privacy regulations have been a hot topic of debate between governments, big tech companies, and their users.

Facebook has joined the top companies taking advantage of user data in COVID-19 research. Meanwhile, Brazil’s LGPD faces pushback on its enforcement because of COVID-19. In contrast to Brazil, US senators are introducing a new privacy bill to ensure Americans’ data privacy remains protected.


Facebook collecting symptom data

In the current pandemic climate, tech companies of all sizes have stepped up to provide solutions and aid to governments and citizens struggling to cope with COVID-19. As we’ve highlighted in our previous blog posts, Google and Apple have been at the frontlines of introducing systems that protect user privacy while driving change in how communities track the virus.

Following closely behind, Facebook has introduced its attempt to work with user data for the greater good of COVID-19 research.

Facebook announced partnerships with several American universities to begin collecting symptom data in various countries. Facebook’s CEO and founder told The Verge that the information could help highlight COVID hotspots across the globe, especially in places where governments have neglected to address the virus’s severity.

Facebook has been working throughout this pandemic to demonstrate how aggregated and anonymized data can be used for good.

However, not everyone is taking to Facebook’s sudden praise for user data control. One article highlighted how the company is still being investigated by the FTC over privacy issues.

Facebook’s long list of privacy invasions is raising concerns not over how the data is currently being used, but over how it will be handled after the pandemic has subsided.


Brazil pushes back privacy legislation.

At the beginning of this year, we wrote an article outlining Brazil’s first data protection act, the LGPD. This privacy legislation closely follows the EU’s GDPR and will unify the country’s 40 existing privacy laws.

Before COVID-19’s effect on countries like Brazil, many tech companies were already pressuring the Brazilian government to change the LGPD’s effective date.

On April 29th, the Brazilian president delayed the applicability date of the LGPD to May 3rd, 2021. By issuing this provisional measure, the president gave the Brazilian Congress 16 days to approve the new LGPD implementation.

If Congress does not approve this new date by May 15th, it must vote on a new LGPD date; if it does not, the LGPD will come into effect on August 14th, 2020.

Brazil’s senate has since voted to move the law’s introduction to January 2021, with sanctions coming into force in August 2021. This means all lawsuits and complaints can be filed as of January 1st, and enforcement actions will begin on August 1st (source).


America introduces new privacy law.

Much like Brazil’s privacy legislation being affected by COVID-19, some US senators have stepped up to ensure the privacy of American citizens’ data.

The few senators proposing this bill have said they are working to “hold businesses accountable to consumers if they use personal data to fight the COVID-19 pandemic.”

This bill does not target contact tracing apps like those proposed by Apple and Google. However, it does ensure that these companies are effectively using data and protecting it. 

The bill requires companies to gain consent from users in order to collect any health or location data. It also forces companies to ensure that the information they collect is properly anonymized and cannot be re-identified. Finally, it requires these tech companies to delete all identifiable information once COVID-19 has subsided and tracking apps are no longer necessary.

The bill has wide acceptance across the congressional floor and will be enforced by state attorneys general. This privacy bill is being considered a big win for Americans’ privacy rights, especially given past privacy trust issues between big tech companies and their users.



Location data and your privacy

As technology grows to surround the entirety of our lives, it comes as no surprise that our every move is tracked and stored by the very apps we trust with our information. With the current COVID-19 pandemic, the consequences of inviting these big tech companies into our every movement are being revealed.

At this point, most technology users understand what information they give to companies, such as their birthdays, access to pictures, or other sensitive information. However, some may be unaware of the amount of location data that companies collect and how it affects their data privacy.

Location data volume expected to grow

We have created over 90% of the world’s data since 2017. As wearable technology continues to grow in popularity, the amount of data a person creates each day is on a steady incline.

One study reported that by 2025, the number of IoT-enabled devices installed worldwide is expected to hit 75 billion. This astronomical number highlights how intertwined technology is with our lives, and also how welcoming we are of technology whose data-collection practices people may not fully understand.

Marketers, companies, and advertisers will increasingly look to location-based information as its volume grows. A recent study found that more than 84% of marketers use location data.

The last few years have seen a boost in big tech companies giving their users more control over how their data is used. One example is in 2019 when Apple introduced pop-ups to remind users when apps are using their location data.

Location data is saved and stored so companies can easily serve personalized ads and products to you. Understanding what your devices collect from you, and how to limit data sharing on your devices, is crucial as we move forward in the technological age.

Click here to read our past article on location data in the form of wearable devices. 

COVID-19 threatens location privacy

Risk the privacy of thousands of people, or save thousands of lives? That seems to be the question throughout this pandemic, and one that is running out of time for debate. Companies across the big 100 have stepped up to volunteer their anonymized data, including SAS, Google, and Apple.

One of the largest concerns is not how this data is being used in this pandemic, but how it could be abused in the future. 

One Forbes article drew a comparison to the regret many faced after sharing DNA with sites like 23andMe, which led to health insurance issues or entanglement in criminal investigations.

As companies like Google, Apple, and Facebook step up to the COVID-19 technology race, many are expressing concern, since these companies have not proven reliable at anonymizing user data.

In addition to the data-collection concern, governments and big tech companies are looking into contact-tracing applications. Using civilian location data for surveillance purposes, while framed as serving the greater good of health and safety, raises multiple red flags about how our phones can be used to surveil our every movement. To read more about this involvement in contact tracing apps, read our latest article.

Each company has stated that it anonymizes the data it collects. However, in this pandemic age, even anonymized information can still be exploited, especially at the hands of government intervention.

With all this said, big tech holds power over our information and is playing a vital role in the COVID-19 response. Paying close attention to how user data is managed post-pandemic will be valuable in exposing how these companies handle user information.




Privacy: The Most Talked About Gadget of CES 2020

This week Las Vegas once again hosted the Consumer Electronics Show (CES), accompanied by a range of flashy new gadgets. Most significant among the mix: privacy.

Technology front runners such as Facebook, Amazon, and Google took the main stage in unveiling data privacy changes in their products, as well as headlining discussions surrounding the importance of consumer privacy. However, through each reveal, attendees noticed gaps and missteps in these companies’ attempts at privacy.

Facebook: A New Leader in Data Privacy? 

This year, Facebook attempted to portray itself as a changed company in the eyes of privacy. Complete with comfortable seating and flowers, Facebook’s CES booth revealed a company dedicated to customer privacy, pushing the idea that Facebook does not sell customer data. 

Facebook relaunched a new-and-improved “Privacy Checkup”, originally created in 2014, complete with easy-to-manage data-sharing settings. Facebook took the opportunity at this year’s CES to display added features such as the ability to turn off facial recognition, manage who can see a user’s account or posts, and remove or add preferences based on personal browsing history.

While these changes to privacy settings are a step in the right direction towards protecting user data, attendees could not help but notice the side-stepping of significant data privacy initiatives that Facebook is ignoring, most notably the lack of user control over how advertisers use personal information.

Ring’s New Control Center: Fix or Flop?

Ring has been a hot commodity in household security since its purchase by Amazon in 2018. However, recently, the company has come under fire for its law enforcement partnerships. 

In light of mounting hacking concerns, the home security company used CES to announce a new dashboard for both Apple and Android users labeled “the control center”. This center lets users manage connected Ring devices and third-party devices, and provides options for how law enforcement may request access to Ring videos.

Ring has missed its customers’ initial requests for additions such as suspicious-activity detection or notifications of new account logins. Instead, Ring has continued to add software that places the onus on users to protect themselves. Customers view this so-called privacy update as nothing more than a “cosmetic redesign”. The device continues to provide no significant hacker protection, and therefore no notable privacy protection for its customers.

Google Assistant: New Front-Runner in Privacy Adjustments

Each year, Google is celebrated for taking full advantage of CES to immerse visitors in the company’s technology. This year, Google’s efforts focused on Google Assistant.

After last year’s confirmation that third-party workers were monitoring Google Assistant recordings, Google’s efforts to address data privacy have been at the forefront of this year’s CES panel. On January 7, 2020, Google announced new features for its Assistant, reaffirming its dedication to privacy protection. Users are now able to ask their assistant questions such as:

  • “Are you saving my audio data?”
  • “Hey google, delete everything I said to you this week”
  • “Hey Google, that wasn’t for you”
  • “How are you keeping my information private?”

Source

Of these new user commands, the most significant is “are you saving my audio data?” This command allows users to determine whether or not they have opted into letting Google save their audio data.

However, some Google Assistant users are accusing Google of placing the onus on the user instead of creating a product that protects them. Similar to the Ring controversy, there is frustration that Google is missing the mark in understanding the privacy demands of its users. All that being said, Google is one of the few companies taking steps that meaningfully change how user information is stored.

It is clear that this year’s CES, while still delivering new and exciting ‘gadgets of the future’, experienced a shift towards privacy as the most significant technological topic. While leading tech companies made that shift clear, many continue to miss the mark in understanding the privacy their users want.

Facebook, Ring, and Google each brought forward privacy changes of topical interest while continuing to misunderstand what it means to keep their users’ information private. Thus, the question we must ask ourselves as consumers of these products continues to be: are these minimal changes enough for us to keep entrusting our information to them?



Breaching Data Privacy for a Social Cause

Data partnerships are increasingly justified as a social good, but in a climate where companies are losing consumer trust through data breaches, privacy concerns begin to outweigh the social benefits of data sharing. 


This week, Apple is gaining consumer trust with its revamped Privacy Page. Facebook follows Apple’s lead as it becomes more wary about sharing a petabyte of data with Social Science One researchers due to increasing data privacy concerns. Also, law enforcement may be changing the genetic privacy game as it gains unprecedented access to millions of DNA records to solve homicide cases and identify victims.

Apple is setting the standard for taking consumer privacy seriously—Privacy as a Social Good

Apple is setting the stage for consumer privacy with its redesigned privacy page. Apple CEO Tim Cook announced, “At Apple, privacy is built into everything we make. You decide what you share, how you share it, and who you share it with. Here’s how we protect your data.” (Source)

There is no doubt that Apple is leveraging data privacy. When entering Apple’s new privacy landing page, bold letters are used to emphasize how privacy is a fundamental part of the company, essentially one of their core values (Source). 

Apple’s privacy page explains how they’ve designed their devices with their consumers’ privacy in mind. They also showcase how this methodology applies to their eight Apple apps: Safari browser, Apple Maps, Apple Photos, iMessage, Siri Virtual Assistant, Apple News, Wallet and Apple Pay, and Apple Health.

A privacy feature fundamental to many of Apple’s apps is that data on an Apple device is stored locally and never released to Apple’s servers unless the user consents to share it, or personally shares it with others. Personalized features, such as smart suggestions, are based on random identifiers.

  • Safari Browser blocks the data that websites collect about site visitors with an Intelligent Tracking Prevention feature and makes it harder for individuals to be identified by providing a simplified system profile for users. 
  • Apple Maps does not require users to sign in with their Apple ID. This eliminates the risk of user location and search information history linking to their identity. Navigation is based on random identifiers as opposed to individual identifiers.  

  • Apple Photos taken on Apple devices are processed locally and are not shared unless stored in the cloud or shared by the user.

  • iMessages aren’t shared with Apple and are encrypted via end-to-end device encryption.
  • Siri, Apple’s voice-activated virtual assistant can process information without the information being sent to Apple’s servers. Data that is sent back to Apple is not associated with the user and is only used to update Siri.
  • Apple News curates personalized news and reading content based on random identifiers that are not associated with the user’s identity. 
  • Apple Wallet and Apple Pay create a device account number anytime a new card is added. Transactional data is only shared between the bank and the individual.
  • Apple Health is designed to empower the user to share their personal health information with whom they choose. The data is encrypted and can only be accessed by the user via passcodes. 


Facebook realizes the ethical, legal, and technical concerns in sharing 1,000,000 gigabytes of data with social science researchers

Facebook has been on the wrong side of data privacy ever since the Cambridge Analytica scandal in 2018, where users’ data was obtained, without their consent, for political advertising. Now that Facebook is approaching privacy with users’ best interests in mind, tension is building between the worlds of technology and social science.

Earlier this year, Facebook and Social Science One partnered in a new model of industry-academic partnership initiative to “help people better understand the broader impact of social media on democracy—as well as improve our work to protect the integrity of elections.” said Facebook (Source). 

Facebook agreed to share 1,000,000 gigabytes of data with Social Science One to conduct research and analysis, but has failed to meet its promises.

According to Facebook, it was almost impossible to apply anonymization techniques such as differential privacy to the necessary data without stripping it completely of its analytical value.   
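Differential privacy, the technique mentioned above, typically works by adding calibrated noise to aggregate statistics: the smaller the privacy budget ε, the stronger the privacy but the noisier (less analytically useful) the result. The sketch below illustrates that tradeoff with the Laplace mechanism; the counts and ε values are illustrative assumptions, not Facebook’s actual pipeline.

```python
import random

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Return a differentially private count via the Laplace mechanism.

    Noise scale = sensitivity / epsilon: a smaller epsilon (stronger
    privacy guarantee) means larger noise and lower analytical value.
    """
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) noise as the difference of two exponentials.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

true_count = 10_000  # e.g., users who engaged with a given post
print(dp_count(true_count, epsilon=1.0))   # modest noise, usable statistic
print(dp_count(true_count, epsilon=0.01))  # heavy noise, little utility
```

At very small ε the noise can swamp the signal entirely, which is exactly the “stripped of its analytical value” problem Facebook describes.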

Facebook half-heartedly released some data as deadlines and pressure mounted, but what it released fell far short of what it promised. Facebook’s failure to share the agreed-upon data undermines the proposed social benefit of using it to study the impact of disinformation campaigns.

Facebook is torn between a commitment to contributing to a socially good cause without breaching the privacy of its users. 

This exemplifies how Facebook may not have been fully prepared to shift its business model from one that involved data monetization to a CSR-driven (corporate social responsibility) model where data sharing is used for research while keeping privacy in mind. 

Will Facebook eventually fulfill their promises?


Socially Beneficial DNA Data: Should Warrants be given to access Genealogy website databases?

At a police convention last week, Floridian detective Michael Fields revealed how he received a valid law enforcement warrant to access GEDmatch.com data (Source).

GEDmatch is a genealogy website that contains over a million users’ records. But, does the social benefit accrued outweigh the privacy violation to users whose data was exposed without their consent?

Last year, GEDmatch faced a mix of scrutiny and praise when it helped police identify the Golden State Killer after granting them access to its database (Source). After privacy concerns surfaced, GEDmatch updated its privacy terms: access was only permitted to law enforcement for users who opted in to share their data. Additionally, police authorities are limited to searching for the purposes of “murder, nonnegligent manslaughter, aggravated rape, robbery or aggravated assault” cases (Source).

The recent warrant granted to detective Fields overrode GEDmatch’s privacy terms by allowing him to access the data of all users, even those who did not consent. This was the first time a judge agreed to a warrant of this kind. It changes the tone in genetic privacy, potentially setting a precedent about who has access to genetic data.




The Consequences of Data Mishandling: Twitter, TransUnion, and WhatsApp

Who should you trust? This week highlights the personal privacy risks and organizational consequences when data is mishandled or utilized against the best interest of the account holder. Twitter provides advertisers with user phone numbers that had been used for two-factor authentication, 37,000 Canadians’ personal information is leaked in a TransUnion cybersecurity attack, and a GDPR-related investigation into Facebook and Twitter threatens billions in fines.
Twitter shared your phone number with advertisers.

Early this week, Twitter admitted to using the phone numbers of users, which had been provided for two-factor authentication, to help profile users and target ads. This allowed the company to create “Tailored Audiences,” an industry-standard product that enables “advertisers to target ads to customers based on the advertiser’s own marketing lists.” In other words, the profiles in the marketing list an advertiser uploaded were matched to Twitter’s user list with the phone numbers users provided for security purposes.

When users provided their phone numbers to enhance account security, they never realized this would be the tradeoff. This manipulative approach to gaining user information raises questions about Twitter’s data privacy protocols. Moreover, the fact that Twitter provided this confidential information to advertisers should leave you wondering what other information is made available to business partners, and how (Source).

Curiously, after realizing what happened, rather than come forward, the company rushed to hire Ads Policy Specialists to look into the problem. 

On September 17, the company “addressed an ‘error’ that allowed advertisers to target users based on phone numbers” (Source). That same day, it posted a job advertisement for someone to train internal Twitter employees on ad policies and to join a team re-evaluating its advertising products.

Now, nearly a month later, Twitter has publicly admitted their mistake and said they are unsure how many users were affected. While they insist no personal data was shared externally, and are clearly taking steps to ensure this doesn’t occur again, is it too late?

Third-Party Attacks: How Valid Login Credentials Led to Banking Information Exposure 

A cybersecurity breach at TransUnion highlights the rapidly increasing threat of third-party attacks and the challenge of preventing them. The personal data of 37,000 Canadians was compromised when a legitimate business customer’s login credentials were used illegally to harvest TransUnion data. The compromised data includes names, dates of birth, current and past home addresses, credit and loan obligations, and repayment history. While this may not include bank account numbers, social insurance numbers may also have been at risk. The compromise occurred between June 28 and July 11 but was not detected until August (Source).

While alarming, these attacks are very frequent, accounting for around 25% of cyberattacks in the past year. Daniel Tobok, CEO of Cytelligence Inc. reports that the threat of third party attacks is increasing, as more than ever, criminals are using the accounts of trusted third parties (customers, vendors) to gain access to their targets’ data. This method of entry is hard to detect due to the nature of the actions taken. In fact, often the attackers are simulating the typical actions taken by the users. In this case, the credentials for the leading division of Canadian Western Bank were used to login and access the credit information of nearly 40,000 Canadians, an action that is not atypical of the bank’s regular activities (Source).

Cybersecurity attacks like this are what has driven the rise of two-factor authentication, which looks to enhance security (perhaps in every case other than Twitter’s). However, if companies only invest in hardware, they solve only half the issue: the human side of cybersecurity is a much more serious threat than is often acknowledged or considered. “As an attacker, you always attack the weakest link, and in a lot of cases unfortunately the weakest link is in front of the keyboard.” (Source)


Hefty fines loom over Twitter and Facebook as the Irish DPC closes its investigation.

The Data Protection Commission (DPC) in Ireland has recently finished an investigation into Facebook’s WhatsApp and Twitter over breaches to GDPR (Source). These investigations looked into whether or not WhatsApp provided information about the app’s services in a transparent manner to both users and non-users, and about a Twitter data breach notification in January 2019.

Now, these cases have moved on to the decision-making phase, and the companies are at risk of fines of up to 4% of their global annual revenue. This means Facebook could expect to pay more than $2 billion.

This decision moves to Helen Dixon, Ireland’s chief data regulator, and we expect to hear by the end of the year. These are landmark cases, as the first Irish legal proceedings connected to US companies since GDPR came into effect a little over a year ago (May 2018) (Source). Big tech companies are on edge about the verdict, as the Irish DPC plays the largest GDPR supervisory role over most big tech companies, due to the fact that many use Ireland as the base for their EU headquarters. What’s more, the DPC has opened dozens of investigations into other major tech companies, including Apple and Google, and perhaps the chief data regulator’s decision will signal more of what’s to come (Source).

In the end, between Twitter’s data mishandling, the TransUnion third-party attack, and the GDPR investigation coming to a close, it is clear that businesses and the public must become more privacy-conscious: privacy is affecting everyday operations and lives.
