Consumer purchasing decisions rely on product privacy

79% of Americans are concerned about the way companies use their data, and they are starting to act on that concern by avoiding products, such as Fitbit devices after Google's acquisition of the company. *Privacy Not Included, a shopping guide from Mozilla, signals that these privacy concerns will shape what (and from whom) consumers buy over the holidays.

Consumers are concerned about the ways businesses are using their data

A Pew Research Center study investigated how Americans feel about the state of privacy, and the findings make their concerns plain:

    • 60% believe it is not possible to go through daily life without companies and the government collecting their personal data.
    • 79% are concerned about the way companies are using their data.
    • 72% say they gain nothing or very little from company data collected about them.
    • 81% say that the risks of data collection by companies outweigh the benefits.

This study determined that most people feel they have no control over the data that is collected on them and how it is used.

Evidently, consumers lack trust in companies and do not believe that most have their best interests at heart. In the past, this mattered less, but today, businesses live and die by their privacy reputation. That shift is reflected in the wave of privacy regulations emerging across the world, including the GDPR, CCPA, and LGPD.

However, for many consumers the legal minimum outlined in privacy regulations is not enough: meeting the basic requirements without embedding privacy into your business model will not win their trust.

This is playing out with Fitbit, where many users are pledging to toss their devices in light of the Google acquisition. Google’s reputation has been tarnished in recent months by a €50 million GDPR fine and by the backlash over its secret harvesting of health records in the Ascension partnership.

Google’s acquisition of Fitbit highlights the risks of a failure to prioritize privacy

On November 1, Google announced its acquisition of Fitbit for $2.1 billion in an effort, we presume, to breach the final frontier of data: health information. Fitbit users are now pushing back against the fact that Google will have access not just to their search data, location, and behaviour, but to their every heartbeat.

As a consequence, thousands of people have threatened to discard their Fitbits and begun searching for alternatives, like the Apple Watch. This validates the Pew study and confirms that prioritizing privacy is a competitive advantage.

Despite Google’s claims that it will not sell personal information or health data, Fitbit users are doubtful. One user said, “I’m not only afraid of what they can do with the data currently, but what they can do with it once their AI advances in 10 or 20 years.”

This fear hinges on the general concern over how big tech uses consumer data, but it is heightened by Google’s history of failing to prioritize privacy. After all, why would Google invest $2.1 billion if it did not expect to profit from the asset? It can only be assumed that Google intends to use this data to break into the healthcare space. This notion is supported by its partnership with Ascension, through which it has been secretly harvesting the personal information of 50 million Americans, and by the fact that it has begun hiring healthcare executives.

Privacy groups are pushing regulators to block the acquisition, which was originally planned to close in 2020.

Without Privacy by Design, sales will drop

On November 20, Mozilla launched its third annual *Privacy Not Included report, which evaluates whether connected gadgets and toys on the market are trustworthy. This “shopping guide” looks to “arm shoppers with the information they need to choose gifts that protect the privacy of their friends and family. And, spur the tech industry to do more to safeguard customers.” (Source)

This year, 76 products across six categories of gifts (Toys & Games; Smart Home; Entertainment; Wearables; Health & Exercise; and Pets) were evaluated based on their privacy policies, product specifications, and encryption/bug bounty programs.

To receive a badge, products must:

    • Use encryption
    • Have automatic security updates
    • Feature strong password mechanics
    • Manage security vulnerabilities
    • Offer accessible privacy policies

62 of those products met the Minimum Security Standards, but Ashley Boyd, Mozilla’s Vice President of Advocacy, warns that this is not enough, because “Even though devices are secure, we found they are collecting more and more personal information on users, who often don’t have a whole lot of control over that data.”

8 products, on the other hand, failed to meet the Minimum Security Standards, including:

    • Ring Video Doorbell
    • Ring Indoor Cam
    • Ring Security Cams
    • Wemo Wifi Smart Dimmer
    • Artie 3000 Coding Robot
    • Litter-Robot 3 Connect
    • OurPets SmartScoop Intelligent Litter Box
    • Petsafe Smart Pet Feeder

These products fail to protect consumer privacy and to adequately convey the risks associated with their use. They are consumers’ worst nightmare, and the very reason 79% are concerned about the way companies use their data.

The study revealed an evident lack of privacy prioritization across businesses, especially small ones, even where security measures were strong. And those that did prioritize privacy tended to make customers pay for it. This signals that the market is looking for more privacy-focused products, and that there is room to move in.

Businesses should embed privacy into the framework of their products and make the strictest privacy settings the default. In effect, privacy operations management must be a guiding creed from stage one, across IT systems, business practices, and data systems. This is what is known as Privacy by Design and Privacy by Default. These principles address the increasing awareness of data privacy and ensure that businesses consider consumer values throughout the product lifecycle. To learn more, read this: https://cryptonumerics.com/privacy-compliance/.
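As a minimal sketch of what Privacy by Default can look like in code (the class and setting names here are hypothetical, not any specific product’s API), every data-sharing option starts in its most restrictive state and changes only on an explicit, recorded user action:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PrivacySettings:
    # Privacy by Default: every sharing option starts disabled (opt-in, never opt-out).
    share_usage_analytics: bool = False
    share_location: bool = False
    personalized_ads: bool = False
    consent_log: list = field(default_factory=list)

    def grant_consent(self, setting: str) -> None:
        """Enable one option only on an explicit user action, and record when."""
        if setting not in ("share_usage_analytics", "share_location", "personalized_ads"):
            raise ValueError(f"Unknown privacy setting: {setting}")
        setattr(self, setting, True)
        self.consent_log.append((setting, datetime.now(timezone.utc).isoformat()))

settings = PrivacySettings()              # strictest settings out of the box
settings.grant_consent("share_location")  # flipped only by explicit consent
```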

Customers vote with their money. Coupling the Pew study results with the Fitbit case, it is clear that customers are privacy-conscious and willing to boycott not only products but also companies that do not share their values. This week serves as a lesson that businesses must act quickly to bring their products in line with privacy values, to move beyond basic regulatory requirements, and to meet the demands of customers.



Your health records are online, and Amazon wants you to wear Alexa on your face

This week’s news was flooded with stories of sensitive medical information landing on the internet and, perhaps, in the wrong hands. Sixteen million patient scans were exposed online, the European Court of Justice ruled that Google does not need to remove links to sensitive information globally, and Amazon released new Alexa products for you to wear everywhere you go.

Over five million patients have had their privacy breached and their private health information exposed online. These documents contain highly sensitive data, like names, birthdays, and in some cases, social security numbers. Worse, the list of compromised medical record systems is rapidly increasing, and the data can all be accessed with a traditional web browser. In fact, Jackie Singh, a cybersecurity researcher and chief executive of the consulting firm Spyglass Security, reports “[i]t’s not even hacking,” because the data is so easily accessible to the average person (Source).

One of these systems belongs to MobilexUSA, whose records, showing patients’ names, dates of birth, doctors, and lists of procedures, were found online (Source).

Experts report that this could be a direct violation of HIPAA, and many warn that the potential consequences of this leak are devastating: medical data is so sensitive that, in the wrong hands, it could be used maliciously (Source).

According to Oleg Pianykh, the director of medical analytics at Massachusetts General Hospital’s radiology department, “[m]edical-data security has never been soundly built into the clinical data or devices, and is still largely theoretical and does not exist in practice.” (Source)

Such a statement signals a privacy crisis in the healthcare industry that desperately needs fixing. According to Pianykh, the problem is not a lack of regulatory standards, but rather that “medical device makers don’t follow them.” (Source) If that is the case, should we expect HIPAA to be enforced as aggressively as the GDPR has been?

With patients’ privacy up in the air in the US, citizens’ “Right to be Forgotten” in the EU is also being questioned.

The “Right to be Forgotten” states that “personal data must be erased immediately where the data are no longer needed for their original processing purpose, or the data subject has withdrawn [their] consent” (Source). This means that upon request, a data “controller” must erase any personal data in whatever means necessary, whether that is physical destruction or permanently over-writing data with “special software.” (Source)
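To make the erasure mechanics concrete, here is a minimal sketch of how a controller might honour such a request. The table name, schema, and helper functions are hypothetical, and on modern SSDs and journaling filesystems an in-place overwrite is not a guaranteed destruction method:

```python
import os
import sqlite3

def erase_subject(db_path: str, subject_id: str) -> None:
    """Honour an erasure request by deleting every row tied to the data subject."""
    with sqlite3.connect(db_path) as conn:
        conn.execute("DELETE FROM personal_data WHERE subject_id = ?", (subject_id,))
        conn.commit()

def overwrite_and_delete(path: str) -> None:
    """Approximate 'permanently over-writing' a file: zero it out, then unlink it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)
```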

When this law was codified in the General Data Protection Regulation (GDPR), it was implemented to govern within Europe. Yet France’s CNIL fined Google, an American company, $110,000 in 2016 for refusing to remove private data from search results. Google argued that the changes should not need to be applied to the google.com domain or other non-European sites (Source).

On Tuesday, the European Court of Justice agreed and ruled that Google is under no obligation to extend EU rules beyond European borders by removing links to sensitive personal data worldwide (Source). However, the court made a point of noting that Google “must impose new measures to discourage internet users from going outside the EU to find that information.” (Source) This decision sets a precedent for how far a nation’s laws reach when it comes to digital data.

While the EU has a firm stance on the right to be forgotten, Amazon makes clear that you can “automatically delete [your] voice data”… every three to eighteen months (Source). The lack of immediate erasure is potentially troublesome for those concerned with their privacy, especially alongside the new product launch, which will move Alexa out of your home and onto your body.

On Wednesday, Amazon launched Alexa earbuds (Echo Buds), glasses (Echo Frames), and rings (Echo Loop). The earbuds are available on the marketplace, but the latter two are an experiment and are only available by invitation for the time being (Source). 

With these products, you will be able to access Alexa support wherever you are and, in the case of the Echo Buds, harness the noise-reduction technology of Bose for only US$130 (Source). However, while these products promise to make your life more convenient, using them will let Amazon monitor your daily routines, behaviour, quirks, and more.

Amazon specified that its goal is to make Alexa “ubiquitous” and “ambient” by spreading it everywhere, including our homes, appliances, cars, and now, our bodies. Yet, even as it opens up about this strategy for lifestyle dominance, Amazon claims to prioritize privacy, as the first tech giant to allow users to opt out of having their voice data transcribed and listened to by employees. Despite this, it is clear that “Alexa’s ambition and a truly privacy-centric customer experience do not go hand in hand.” (Source)

With Amazon spreading into wearables, Google winning the “Right to be Forgotten” case, and patient records being exposed online, this week is shaping up to be a black mark on user privacy. Stay tuned for our next weekly news blog to see how things develop.



How Google Can Solve its Privacy Problems

Google and the University of Chicago’s Medical Center have made headlines for the wrong reasons. According to a June 26th New York Times report, a lawsuit filed in the US District Court for Northern Illinois alleged that a data-sharing partnership between the University of Chicago’s Medical Center and Google had “shared too much personal information” without appropriate consent. Though the data sets had ostensibly been anonymized, the potential for re-identification was too high, and so the partnership had allegedly compromised the privacy rights of the individual named in the lawsuit.

The project was touted as a way to improve predictions in medicine and realize the utility of electronic health records through data science. Its coverage today instead focuses on risks to patients and invasions of privacy. Across industries like finance, retail, telecom, and more, the same potential for positive impact through data science exists, as does the potential for exposure-risk to consumers. The potential value created through data science is such that institutions must figure out how to address privacy concerns.

No one wants their medical records and sensitive information to be exposed. Yet, they do want research to progress and to benefit from innovation. That is the dilemma faced by individuals today. People are okay with their data being used in medical research, so long as their data is protected and cannot be used to re-identify them. So where did the University of Chicago go wrong in sharing data with Google — and was it a case of negligence, ignorance, or a lack of investment?

The basis of the lawsuit claims that the data shared between the two parties were still susceptible to re-identification through inference attacks and mosaic effects. Though the data sets had been stripped of direct identifiers and anonymized, they still contained date stamps of when patients checked in and out of the hospital. When combined with other data that Google held separately, like location data from phones and mapping apps, the university’s data could be used to re-identify individuals in the data set. Free text medical notes from doctors, though de-identified in some fashion, were also contained in the data set, further compounding the exposure of private information.
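A toy example makes this concrete. The records below are fabricated and the join is deliberately simplified, but they show how an “anonymized” data set that retains timestamps can be re-identified by linking it against separately held, identified location data:

```python
# Fabricated illustration of a linkage (mosaic-effect) attack: the hospital
# data carries no names, but it keeps check-in timestamps; a separately held,
# identified location history covers the same place and times.

hospital_records = [
    {"record_id": "A17", "check_in": "2019-05-02T09:14", "notes": "..."},
    {"record_id": "B42", "check_in": "2019-05-02T11:03", "notes": "..."},
]

location_history = [
    {"name": "Jane Doe", "place": "University Hospital", "time": "2019-05-02T09:14"},
    {"name": "John Roe", "place": "University Hospital", "time": "2019-05-02T11:03"},
]

# Joining on time re-attaches names to the "anonymized" records.
reidentified = {
    rec["record_id"]: ping["name"]
    for rec in hospital_records
    for ping in location_history
    if ping["time"] == rec["check_in"]
}
print(reidentified)  # {'A17': 'Jane Doe', 'B42': 'John Roe'}
```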

Inference attacks and mosaic-effect methods combine information from different data sets to re-identify individuals. They are now well-documented realities, and institutions cannot be excused for being ignorant of them. Indirect identifiers must therefore also be assessed for re-identification risk and included in any privacy-protection plan.

Significant advances in data science have led to improvements in data privacy technologies and in controls for data collaboration. Autonomous, systematic metadata classification and re-identification risk assessment and scoring are two processes that would have made an immediate difference in this case. Differential privacy and secure multiparty computation are two others.
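As a minimal sketch of one of these techniques, differential privacy’s Laplace mechanism adds calibrated noise to an aggregate statistic before release; the epsilon value and the count query below are illustrative choices, not a recommendation:

```python
import numpy as np

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with differential privacy via the Laplace mechanism."""
    sensitivity = 1.0  # adding or removing one person changes a count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# e.g., publish how many patients checked in on a given day,
# without letting any single patient's presence be inferred exactly
noisy_count = dp_count(true_count=128, epsilon=0.5)
```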

Privacy automation systems encompassing these technologies are a reality today. Privacy management is often seen as an additional overhead cost to data science projects; that is a mistake. Tactical use of data security solutions, like encryption and hashing, to privacy-protect data sets is also not enough, as the victims of this case can attest.
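To see why hashing alone falls short, consider this fabricated example: when the input space is small, an attacker can simply enumerate it and reverse every “pseudonymized” value with a lookup table:

```python
import hashlib

# A "pseudonymized" identifier: a hashed four-digit phone extension.
hashed_record = hashlib.sha256(b"555-0173").hexdigest()

# The attacker hashes all 10,000 candidates and builds a reverse lookup.
rainbow = {
    hashlib.sha256(f"555-{n:04d}".encode()).hexdigest(): f"555-{n:04d}"
    for n in range(10_000)
}
print(rainbow[hashed_record])  # -> '555-0173': the "protected" value, recovered
```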

As we saw with cybersecurity over the last decade, it took several years of continued data theft and headline-making hacks before organizations implemented advanced security and intrusion detection systems. Cybersecurity solutions are now seen as an essential component of an enterprise’s infrastructure, with board-level commitment to keeping company data safe and the brand untarnished. Boards must likewise reflect on the negative outcomes of lawsuits like this one, where the identities of customers are compromised and their trust damaged.

Today, data science projects without advanced, automated privacy-protection solutions should not pass internal privacy governance and data compliance. Additionally, these projects should not use customer data, even anonymized data, until automated privacy risk assessment solutions can accurately reveal the level of re-identification risk (inclusive of inference attacks and the mosaic effect).
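One simple proxy for such a risk score is k-anonymity over the quasi-identifiers: how small is the smallest group of records sharing the same indirect identifiers? The column names and release threshold below are hypothetical:

```python
from collections import Counter

def smallest_group_size(records: list[dict], quasi_identifiers: list[str]) -> int:
    """Return k as in k-anonymity: the size of the smallest group of records
    that share identical values across every quasi-identifier column."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

records = [
    {"zip": "60637", "birth_year": 1984, "check_in_date": "2019-05-02"},
    {"zip": "60637", "birth_year": 1984, "check_in_date": "2019-05-02"},
    {"zip": "60615", "birth_year": 1971, "check_in_date": "2019-05-03"},
]

k = smallest_group_size(records, ["zip", "birth_year", "check_in_date"])
if k < 5:  # the release threshold is an illustrative policy choice
    print(f"High re-identification risk: a group of only {k} record(s)")
```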

With the sensitivity around privacy in data science projects in today’s public discourse, any enterprise not investing in and implementing advanced privacy management systems exposes itself as having no regard for the ethical use of customer data. The potential for harm is not a matter of if, but when.
