The Consequences of Data Mishandling: Twitter, TransUnion, and WhatsApp


Who should you trust? This week highlights the personal privacy risks and organizational consequences when data is mishandled or used against the best interests of account holders. Twitter gave advertisers access to phone numbers users had provided for two-factor authentication, the personal information of 37,000 Canadians was leaked in a TransUnion cybersecurity attack, and GDPR-related investigations into Facebook’s WhatsApp and Twitter threaten billions in fines.
Twitter shared your phone number with advertisers.

Early this week, Twitter admitted to using the phone numbers of users, which had been provided for two-factor authentication, to help profile users and target ads. This allowed the company to create “Tailored Audiences,” an industry-standard product that enables “advertisers to target ads to customers based on the advertiser’s own marketing lists.” In other words, the profiles in the marketing list an advertiser uploaded were matched to Twitter’s user list with the phone numbers users provided for security purposes.

When users provided their phone numbers to enhance account security, they never realized that this would be the tradeoff. This manipulative approach to gaining user information raises questions about Twitter’s data privacy protocols. Moreover, the fact that the company made this confidential information available to advertisers should leave you wondering what other information is shared with business partners, and how (Source).

Curiously, after realizing what had happened, rather than coming forward, the company rushed to hire Ads Policy Specialists to look into the problem.

On September 17, the company “addressed an ‘error’ that allowed advertisers to target users based on phone numbers” (Source). That same day, they posted a job advertisement for someone to train internal Twitter employees on ad policies and to join a team re-evaluating the company’s advertising products.

Now, nearly a month later, Twitter has publicly admitted their mistake and said they are unsure how many users were affected. While they insist no personal data was shared externally, and are clearly taking steps to ensure this doesn’t occur again, is it too late?

Third-Party Attacks: How Valid Login Credentials Led to Banking Information Exposure 

A cybersecurity breach at TransUnion highlights the rapidly increasing threat of third-party attacks and the challenge of preventing them. The personal data of 37,000 Canadians was compromised when a legitimate business customer’s login credentials were used illegally to harvest TransUnion data. The exposed information includes names, dates of birth, current and past home addresses, credit and loan obligations, and repayment history. Bank account numbers do not appear to have been included, but social insurance numbers may also have been at risk. The compromise occurred between June 28 and July 11 but was not detected until August (Source).

While alarming, these attacks are very common, accounting for around 25% of cyberattacks in the past year. Daniel Tobok, CEO of Cytelligence Inc., reports that the threat of third-party attacks is increasing as, more than ever, criminals use the accounts of trusted third parties (customers, vendors) to gain access to their targets’ data. This method of entry is hard to detect because attackers often simulate the typical actions of legitimate users. In this case, the credentials for the leading division of Canadian Western Bank were used to log in and access the credit information of nearly 40,000 Canadians, activity that is not atypical of the bank’s regular operations (Source).

Cybersecurity attacks like this are what has driven the rise of two-factor authentication, which aims to enhance security (perhaps in every case other than Twitter’s). However, if companies only invest in hardware, they only solve half the issue: the human side of cybersecurity is a much more serious threat than is often acknowledged or considered. “As an attacker, you always attack the weakest link, and in a lot of cases unfortunately the weakest link is in front of the keyboard.” (Source)

 

Hefty fines loom over Twitter and Facebook as the Irish DPC closes its investigations.

The Data Protection Commission (DPC) in Ireland has recently finished investigations into Facebook’s WhatsApp and Twitter over breaches of GDPR (Source). These investigations examined whether WhatsApp provided information about the app’s services to both users and non-users in a transparent manner, and how Twitter handled a data breach it notified regulators of in January 2019.

These cases have now moved into the decision-making phase, and the companies are at risk of fines of up to 4% of their global annual revenue. This means Facebook could expect to pay more than $2 billion.

The decision now rests with Helen Dixon, Ireland’s chief data regulator, and a ruling is expected by the end of the year. These are landmark cases: the first Irish proceedings against US companies since GDPR came into effect a little over a year ago, in May 2018 (Source). Big tech companies are on edge about the verdict, as the Irish DPC plays the largest GDPR supervisory role over most of them, because many base their EU headquarters in Ireland. What’s more, the DPC has opened dozens of investigations into other major tech companies, including Apple and Google, and the chief data regulator’s decision may signal more of what’s to come (Source).

In the end, between Twitter’s data mishandling, the TransUnion third-party attack, and the GDPR investigations coming to a close, it is clear that privacy is affecting everyday operations and lives, and that businesses and the public alike must become more privacy-conscious.



Forget Third-party Datasets – the Future is Data Partnerships that Balance Compliance and Analytical Value


Organizations are constantly gathering information from their customers. However, they are always driven to acquire extra data on top of this. Why? Because more data means better insights into customers and a better ability to identify potential leads and cross-sell products. Historically, to acquire more data, organizations would purchase third-party datasets. Though these come with their own problems, such as occasionally poor data quality, the benefits used to outweigh the drawbacks.

But not anymore. Unfortunately for organizations, since the introduction of the EU General Data Protection Regulation (GDPR), buying third-party data has become extremely risky. 

GDPR has changed the way in which data is used and managed, by requiring customer consent in all scenarios other than those in which the intended use falls under a legitimate business interest. Since third-party data is acquired by the aggregator from other sources, in most cases, the aggregators don’t have the required consent from the customers. This puts any third-party data purchaser in a non-compliant situation that could expose them to fines, reputational damage, and additional overhead compliance costs.

If organizations can no longer rely on third-party data, how can they maximize the value of the data they already have? 

By changing their focus. 

The importance of data partnerships and second-party data

Instead of acquiring third-party data, organizations should establish data partnerships and access second-party data. This new approach has two main advantages. One, second-party data is simply the first-party data of another organization, so it is of high quality. Two, there are no concerns about customer consent, as the organization that owns the data has obtained consent directly from its customers.

That said, to establish a successful data partnership, there are three things that have to be taken into consideration: privacy protection, IP protection, and data analytical value.   

Privacy Protection

Even when customer consent is present, the data that is going to be shared should be privacy-protected in order to comply with GDPR, safeguard customer information, and minimize risk. Privacy protection should be understood as a reduction in the probability of re-identifying a specific individual in a dataset. GDPR, like other privacy regulations, refers to anonymization as the maximum level of privacy protection, wherein an individual can no longer be re-identified.

Privacy protection can be achieved with different techniques. Common approaches include differential privacy, encryption, noise addition, and suppression. Regardless of which privacy technique is applied, it is important to always measure the resulting risk of re-identification.
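As a concrete illustration of one of these techniques, the sketch below applies the Laplace mechanism from differential privacy to a simple count query. The function name, figures, and epsilon value are illustrative assumptions for this post, not part of any particular product.

```python
# Minimal sketch of the Laplace mechanism from differential privacy.
# The function, figures, and epsilon value are illustrative only.
import numpy as np

def laplace_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Return a noisy count satisfying epsilon-differential privacy.

    Adding or removing one individual changes a count by at most 1 (the
    sensitivity), so Laplace noise with scale sensitivity/epsilon masks
    any single person's contribution.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: publish how many shared customers fall into a segment without
# letting the published number betray any one individual's presence.
print(round(laplace_count(true_count=1523, epsilon=0.5), 1))
```

A smaller epsilon means more noise and stronger privacy; the trade-off against analytical value is exactly the balance discussed later in this post.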

IP (Intellectual Property) Protection

Some organizations are comfortable selling their data. Others, however, are very reluctant, because they understand that once the data is sold, they can no longer control it, and all of its value and IP is lost. IP control is a big barrier when trying to establish data partnerships.

Fortunately, there is a way to establish data partnerships and ensure that IP remains protected.

Recent advances in cryptographic techniques have made it possible to collaborate with data partners and extract insights without having to expose the raw data. The first of these techniques is called Secure Multiparty Computation.

As its name implies, with Secure Multiparty Computation, multiple parties can perform computations on their datasets as if they were collocated but without revealing any of the original data to any of the parties. The second technique is Fully Homomorphic Encryption. With this technique, data is encrypted in a way in which computations can be performed without the need for decrypting the data. 
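Fully homomorphic schemes are computationally heavy, so as a hedged illustration of the same idea (computing on ciphertexts without ever decrypting the inputs), the sketch below uses the simpler, additively homomorphic Paillier scheme via the open-source python-paillier (`phe`) library. The partner values are invented for the example.

```python
# Hedged sketch: computing on encrypted values with the additively
# homomorphic Paillier scheme (pip install phe). This is a simpler cousin
# of Fully Homomorphic Encryption, used here purely for illustration.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Each partner encrypts its value locally; only ciphertexts are exchanged.
enc_a = public_key.encrypt(120_000)  # partner A's figure (illustrative)
enc_b = public_key.encrypt(95_000)   # partner B's figure (illustrative)

# Anyone holding the ciphertexts can add them without seeing the inputs.
enc_total = enc_a + enc_b

# Only the holder of the private key can decrypt the combined result.
print(private_key.decrypt(enc_total))  # 215000
```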

Because the original raw data is never exposed across partners, both of these advanced techniques allow organizations to augment their data, extract insights and protect IP safely and securely.

Analytical Value

The objective of any data partnership is to acquire more insights into customers and prospects. For this reason, any additional data that is acquired needs to add analytical value. But maintaining this value becomes difficult when organizations need to preserve privacy and IP protection. 

Fortunately, there is a solution. Firstly, organizations should identify the common individuals in both datasets. This is extremely important, because acquired data only adds value where it can be linked to individuals you already know about. By using Secure Multiparty Computation, the data can be matched and common individuals identified without exposing any of the sensitive original data.
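A full SMC-based matching protocol is beyond a short example, but the sketch below shows the record-linkage step it replaces: a naive hashed join on a shared identifier. Exchanged hashes can still be brute-forced, which is exactly why cryptographic approaches such as private set intersection are preferred; the email addresses here are invented.

```python
# Illustrative baseline only: a naive hashed join on a shared identifier.
# Real SMC-based matching avoids even exchanging hashes, which can be
# brute-forced; this sketch just shows the linkage step the secure
# protocol performs privately.
import hashlib

def pseudonymise(identifier: str) -> str:
    """Hash an identifier (for example an email address) before exchange."""
    return hashlib.sha256(identifier.strip().lower().encode()).hexdigest()

party_a_customers = {"alice@example.com", "bob@example.com", "carol@example.com"}
party_b_customers = {"bob@example.com", "dave@example.com"}

hashed_a = {pseudonymise(c) for c in party_a_customers}
hashed_b = {pseudonymise(c) for c in party_b_customers}

# The overlap tells both partners which records they have in common.
common = hashed_a & hashed_b
print(f"{len(common)} common individual(s) found")  # 1
```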

Secondly, organizations must use software that balances privacy and information loss. Without this, the resulting data will be high on privacy protection and extremely low on analytical value, making it useless for extracting insights.

Thanks to the new privacy regulations sweeping the world, acquiring third-party datasets has become extremely risky and costly. Organizations should change their strategy and engage in data partnerships that will provide them with higher quality data. However, for these partnerships to add real value, privacy and IP have to be protected, and data has to maintain its analytical value.

For more about CryptoNumerics’ privacy automation solutions, read our blog here.



What do Trump, Google, and Facebook Have in Common?


This year, the Trump Administration declared the need for a national privacy law to supersede a patchwork of state laws. But, as the year comes to a close, and amidst the impeachment inquiry, time is running out. Meanwhile, Google plans to roll out encrypted web addresses, and Facebook stalls research into social media’s effect on democracy. Do these three seek privacy or power?

The Trump Administration, Google, and Facebook claim that privacy is a priority, and… well… we’re still waiting for the proof. Over the last year, the news has been awash with privacy scandals and data breaches. Every day we hear promises that privacy is a priority and that a national privacy law is coming, but so far, the evidence of action is lacking. This raises the question: are politicians and businesses using the guise of “privacy” to manipulate people? Let’s take a closer look.

Congress and the Trump Administration: National Privacy Law

Earlier this year, Congress and the Trump Administration agreed they wanted a new federal privacy law to protect individuals online. This rare occurrence was even supported and campaigned for by major tech firms (read our blog “What is your data worth” to learn more). However, despite months of talks, “a national privacy law is nowhere in sight [and] [t]he window to pass a law this year is now quickly closing.” (Source)

Disagreement over enforcement and state-level power is said to be holding back progress. Thus, while senators, including Roger Wicker, who chairs the Senate Commerce Committee, insist they are working hard, there are no public results; and with the impeachment inquiry, it is possible we will not see any for some time (Source). This means the White House will likely miss its self-appointed deadline of January 2020, when the CCPA goes into effect.

Originally, this plan was designed to avoid a patchwork of state-level legislation that can make compliance challenging for businesses and weaken privacy protections. It is not a simple process, and since “Congress has never set an overarching national standard for how most companies gather and use data,” much work is needed to develop a framework to govern privacy on a national level (Source). However, GDPR has shown in Europe that a large governing structure can successfully hold organizations accountable to privacy standards. But how much longer will US residents need to wait?

Google Encryption: Privacy or Power

Google has been trying to gain an edge over the competition for years by leveraging the massive troves of user data it acquires. Undoubtedly, its work has led to innovation that has redefined the way our world works, but our privacy has paid the price. Like never before, data has become the new global currency, and Google has had a central part to play in the matter.

Google has famously made privacy a priority and is currently working to enhance user privacy and security with encrypted web addresses.

Unencrypted web addresses are a major security risk, as they make it simple for malicious actors to intercept web traffic and use fake sites to gather data. However, denying hackers this ability hands power to companies like Google, which will be able to collect more user data than ever before. The risk is “that control of encrypted systems sits with Google and its competitors.” (Source)

This is because DNS over HTTPS (DoH), the protocol behind these encrypted lookups, cuts out the middle layer of ISPs and changes the mechanism through which we reach specific web pages. This could enable Google to become the centralized encrypted-DNS provider (Source).

Thus, while DoH is certainly a privacy and security upgrade over the current DNS system, shifting from local middle layers to major browser enterprises centralizes user data, raising anti-competitive and child-protection concerns. Further, it diminishes law enforcement’s ability to blacklist dangerous sites and monitor those who visit them. It also opens new opportunities for hackers by reducing defenders’ ability to gather cybersecurity intelligence from malware activity, intelligence gathering that is an integral part of fulfilling government-mandated regulation (Source).
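For a sense of what the shift looks like in practice, here is a hedged sketch that resolves a domain over DNS-over-HTTPS using Google’s public JSON resolver endpoint (dns.google/resolve). The endpoint and response fields reflect Google’s published API as we understand it; they are assumptions for illustration, not details from the article.

```python
# Hedged sketch: resolving a name over DNS-over-HTTPS (DoH) instead of
# classic UDP DNS, via Google's public JSON resolver endpoint.
import requests

resp = requests.get(
    "https://dns.google/resolve",
    params={"name": "example.com", "type": "A"},
    timeout=5,
)
resp.raise_for_status()

# The lookup travels inside ordinary HTTPS traffic, so local ISPs and
# middleboxes cannot read or log which domain was queried -- the resolver
# operator (here, Google) is the party that sees it instead.
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])
```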

Nonetheless, this feature will roll out in a few weeks as the new default, despite calls from those with DoH concerns to wait until more is known about the potential fallout.

Facebook and the Disinformation Fact Checkers

Over the last few years, Facebook has developed a terrible reputation as one of the least privacy-centric companies in the world. But is that reputation accurate? After the Cambridge Analytica scandal, followed by an endless string of data privacy debacles, Facebook is now stalling its “disinformation fact-checkers” on the grounds of privacy problems.

In April of 2018, Mark Zuckerberg announced that the company would develop machine learning to detect and manage misinformation on Facebook (Source). It then promised to share this information with non-profit researchers who would flag disinformation campaigns as part of an academic study on how social media is influencing democracies (Source).

To ensure that the data being shared could not be traced back to individuals, Facebook applied differential privacy techniques.

However, once the data was delivered, researchers complained that it did not include enough information about the disinformation campaigns to allow them to derive meaningful results. Some even insisted that Facebook was going against the original agreement (Source). As a result, some of the people funding this initiative are considering backing out.

Initially, Facebook was given a deadline of September 30 to provide the full data sets, or the entire research grants program would be shut down. While they have begun offering more data in response, the full data sets have not been provided.

A spokesperson from Facebook says, “This is one of the largest sets of links ever to be created for academic research on this topic. We are working hard to deliver on additional demographic fields while safeguarding individual people’s privacy.” (Source). 

While Facebook may be limiting academic research on democracies, perhaps it is finally prioritizing privacy. And, at the end of the day, with an ethical framework to move forward, technological advancement, and academic research, the impact of social media on democracy can still be measured without compromising privacy.

In the end, it is clear that privacy promises hold the potential to manipulate people into action. The US government has no national privacy law in sight, the motives behind Google’s encrypted lookups are questionable, and Facebook’s sudden prioritization of privacy may cut off democratic research. But at least privacy is becoming a hot topic, and that holds promise for a privacy-centric future for the public.

The Key to Anonymizing Datasets Without Destroying Their Analytical Value


Enterprise need for “anonymised” data lies at the core of everything from modern medical research, to personalised recommendations, to modern data science, to ML and AI techniques for profiling customers for upselling and market segmentation. At the same time, anonymised data forms the legal foundation for demonstrating compliance with privacy regimes such as GDPR, CCPA, HIPAA, and the other established and emerging data residency and privacy laws around the world.

For example, GDPR Recital 26 defines anonymous information as “information which does not relate to an identified or identifiable natural person” or “personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable.” Under GDPR, only properly anonymised information falls outside the regulation’s scope; anything less remains personal data and carries the full compliance obligations of the law.


The perils of poorly or partially anonymised data

Why is anonymised data such a central part of demonstrating legal and regulatory privacy compliance? And why does failing to comply expose organisations to the risk of significant fines, and brand and reputational damage?

Because if the individuals in a dataset can be re-identified, then their promised privacy protections evaporate. Hence “anonymisation” is the process of removing personal identifiers, both direct and indirect, that may lead to an individual being identified. An individual may be directly identified from their name, address, postcode, telephone number, photograph or image, or some other unique personal characteristics. An individual may also be indirectly identifiable when certain information is combined or linked together with other sources of information, including their place of work, job title, salary, gender, age, their postcode or even the fact that they have a particular medical diagnosis or condition.

Anonymisation is so relevant to legislation such as GDPR because recent research has conclusively shown that poorly or partially anonymised data can lead to an individual being identified simply by combining that data with another dataset. In 2008, individuals were re-identified from an anonymised Netflix dataset of film ratings by comparing the ratings information with public scores on the IMDb film website. In 2014, the home addresses of New York taxi drivers were identified from an anonymised dataset of individual taxi trips in the city.

In 2018, the University of Chicago medical team shared anonymised patient records with Google, which included appointment date and time stamps and medical notes. A pending 2019 class action lawsuit brought against Google and the university claims that Google can combine the appointment date and time stamps with other records it holds from Waze, Android phones, and other location sources to re-identify these individuals.

And data compliance isn’t the only reason that organizations need to be smart about how they anonymise data. An equally significant issue is that full anonymisation tends to devalue the data, rendering it less useful for purposes such as data science, AI and ML, and other applications looking to gain insights and extract value. This is particularly true of indirectly identifying information.

The challenges of anonymisation present businesses with a dilemma: fully anonymising directly and indirectly identifying customer data keeps them compliant, but it renders that data less valuable and useful, while partially anonymising it increases the risk of individuals being re-identified.


How to anonymise datasets without wiping out their analytical value

The good news is that it is possible to create fully compliant anonymised datasets and still retain the analytical value of data for data science and AI and ML applications. You just need the right software.

The first challenge is to understand the risk of re-identification of an individual or individuals from a dataset. This cannot be done manually or by simply scanning a dataset; a systematic and automated approach has to be applied to assess the risk of re-identification. This risk assessment forms a key part of demonstrating your Privacy Impact Assessment (PIA), especially in data science and data lake environments. How many unique individuals, or combinations of identifying attributes, exist in a dataset that could identify an individual directly or indirectly? For example, say there are three twenty-eight-year-old males living in a certain neighbourhood in Toronto. As there are only three such individuals, if this information were combined with one other piece of information, such as employer, car driven, or medical condition, then there is a high probability of being able to identify an individual.
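A minimal sketch of such a risk check is shown below, assuming the data sits in a pandas DataFrame with the quasi-identifiers from the example (age, gender, neighbourhood); the column names and values are invented. The smallest group size over the quasi-identifiers is the dataset’s k-anonymity, and groups of size one are the unique, high-risk individuals.

```python
# Hedged sketch of a re-identification risk check on invented data.
import pandas as pd

df = pd.DataFrame({
    "age":           [28, 28, 28, 41, 41, 35],
    "gender":        ["M", "M", "M", "F", "F", "M"],
    "neighbourhood": ["M5V", "M5V", "M5V", "M4C", "M4C", "M5V"],
})

quasi_identifiers = ["age", "gender", "neighbourhood"]

# Size of each equivalence class; k-anonymity is the smallest class size.
class_sizes = df.groupby(quasi_identifiers).size()
print(class_sizes)
print("k-anonymity of the dataset:", class_sizes.min())  # 1 here: high risk
```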

Once we’re armed with this risk assessment information, modern systems-based approaches to anonymisation can be applied. In the first example, using a generalisation technique, we can generalise the indirect identifiers in such a manner that the analytical value of the data is retained while we still meet our privacy compliance objective of fully anonymising the dataset. So, with the twenty-eight-year-old males living in a certain neighbourhood in Toronto, we can generalise gender to show that there are nine twenty-eight-year-old individuals living there, thereby reducing the risk of any one individual being identified.

Another example is age binning, where the analytical value of the data is preserved by generalising the age attribute. By binning the age “28” to a range such as “25 to 30,” we now show that there are 15 individuals aged 25 to 30 living in the Toronto neighbourhood, further reducing the risk of identification of an individual.
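Both generalisations can be expressed directly in code. Below is a hedged sketch with pandas, on invented data; the bin edges and labels are illustrative.

```python
# Hedged sketch of the generalisations described above, on invented data.
import pandas as pd

df = pd.DataFrame({
    "age":    [26, 27, 28, 28, 28, 29, 30],
    "gender": ["M", "M", "M", "F", "F", "F", "F"],
})

# Generalise gender: collapse it to a single category so male and female
# records fall into the same equivalence class.
df["gender"] = "*"

# Age binning: replace the exact age with a 25-30 range.
df["age_range"] = pd.cut(df["age"], bins=[24, 30], labels=["25-30"])

# Every record now belongs to one large group, so re-identification risk drops.
print(df.groupby(["age_range", "gender"]).size())
```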

In the above examples, two key technologies enable us to fully anonymize datasets while retaining the analytical value: 

  1. An automated risk assessment feature that measures the risk of re-identification in each and every dataset in a consistent and defensible manner across the enterprise. 
  2. The application of anonymisation using privacy-protection actions such as generalisation, hierarchies, and differential privacy techniques.

Using these two techniques, enterprises can start to overcome the anonymisation dilemma.

 




How Secure Multi-Party Computation (SMC) allows you to Perform Advanced Data Collaboration Without Exposing your Data


Data collaboration is the process of combining datasets together to generate new value from data-driven insights. The datasets being combined can come from different organizations, or they can come from data silos internal to an organization.

A number of use cases are possible through data collaboration: fraud detection, advances in healthcare research, real-world data analysis, cross-selling, churn analysis, and more. However, there are significant blockers to realizing the potential benefits of data collaboration, some of them severe enough to stymie potentially valuable collaborations. The blockers originate from a host of areas: fear of losing IP (intellectual property), privacy regulations, data residency restrictions, and reputational risk, to name a few.

However, with the right technology, it is possible to remove these blockers. It is possible to engage in advanced data collaboration without exposing, sharing or moving raw data.  


The power of Secure Multi-Party Computation (SMC) 

How? By using advanced cryptography. This enables organizations to acquire the sort of insights produced by combining data in a central location, but without ever moving or exposing the data. This best-of-both-worlds approach sees participants make use of Secure Multi-Party Computation (SMC). SMC enables a number of parties to jointly compute a function over a set of inputs that they wish to keep private. The approach is at the core of CryptoNumerics’ CN-Insight.

There is a range of problems that SMC can solve – from simple queries and statistics to training machine learning models. In all cases, raw data is not moved or exposed. 

A classic example of SMC is the millionaires’ problem. There are two millionaires, Alice and Bob, who would like to know who has more wealth without revealing their actual wealth. They really just want to know the answer to “is A > B?” without revealing A or B, where A is Alice’s wealth and B is Bob’s wealth.

They can accomplish this by using SMC – specifically, a protocol called oblivious transfer. As defined by the Encyclopedia of Cryptography and Security, Oblivious transfer (OT) is “a two-party protocol between a sender and a receiver, by which the sender transfers some information to the receiver, the sender remaining oblivious, however, to what information the receiver actually obtains.”


Secret Splitting

Another good example of a sophisticated SMC protocol is secret splitting. Here, the secret information or data is split into multiple secret shares. All of the shares are required to retrieve the original information; it is impossible to reconstruct it unless every share is combined.

Once the data has been split into secret shares, the shares can be distributed among the participants in the collaboration, who are called parties. For example, suppose there are three parties and each party creates three secret shares of its data. One share is then transferred to each of the other parties, so that in the end each party holds one secret share of every other party’s data. If the parties want to know the sum or the average of their data, each party performs those operations locally on the shares it holds; the results are themselves secret shares of the sum or the average. The parties then exchange these result shares, so that each party holds all shares of the result and can sum them up to reveal the answer. Many mathematical operations can be performed in this way, which enables the building of machine learning models in a data collaboration without exposing, sharing or moving the data.

Secret shares are created by applying one-time pad encryption, which offers absolute security. For example, to create two secret shares, one share would be completely random and the other would be the result of subtracting the random share from the original data (the secret). More shares can be created by using more random shares. For absolute security, the random shares must be uniformly random, which is only possible when the arithmetic is performed in a finite field. Finite fields work like a clock, in that 1 o’clock can be 1 pm or 1 am: the numbers wrap around so that every value has an equal likelihood of appearing.
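Below is a minimal sketch of this additive secret-sharing scheme for the three-party sum described above. The prime modulus, party names, and values are illustrative; real deployments such as CN-Insight use hardened, audited protocols rather than a toy script like this.

```python
# Minimal sketch of additive secret sharing over a finite field, following
# the description above. Values and the prime modulus are illustrative.
import secrets

P = 2**61 - 1  # a prime; all arithmetic "wraps around" modulo P, like a clock

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares that sum to it modulo P."""
    random_shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    last_share = (value - sum(random_shares)) % P
    return random_shares + [last_share]

def reconstruct(shares: list[int]) -> int:
    """Recombine shares; any subset short of all of them reveals nothing."""
    return sum(shares) % P

# Three parties each split their private value into three shares and
# distribute them, so every party ends up holding one share per input.
inputs = {"party_a": 120, "party_b": 75, "party_c": 300}
all_shares = {name: share(v, 3) for name, v in inputs.items()}

# Each party locally sums the shares it holds (one per input) ...
local_sums = [sum(all_shares[name][i] for name in inputs) % P for i in range(3)]

# ... and only these result shares are combined to reveal the total.
print(reconstruct(local_sums))  # 495, without any raw input being revealed
```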


SMC requires software

In order to utilize SMC in data collaboration, organizations require the appropriate software (such as CryptoNumerics’ CN-Insight). This software must be installed locally, so that it can access the data and establish a secure line of communication with the other organizations. The parties must then agree upon the type of analysis and the way in which the result will be distributed. It is possible to control who gets the result: all parties, a subset of parties, or no parties at all. In the no-party case, the result remains a secret share, and in order to use it, the parties must engage in the SMC protocol.

SMC enables simple queries, statistical analysis, and the building of ML models – all without exposing, sharing or moving raw data. The result is IP and reputation protection, and the satisfaction of privacy and data residency regulations.

For more about CryptoNumerics’ privacy automation solutions, read our blog here.

Your health records are online, and Amazon wants you to wear Alexa on your face


This week’s news was flooded with a wealth of sensitive medical information landing on the internet, and perhaps, in the wrong hands. Sixteen million patient scans were exposed online, the European Court of Justice ruled Google does not need to remove links to sensitive information, and Amazon released new Alexa products for you to wear everywhere you go.

Over five million patients have had their privacy breached and their private health information exposed online. These documents contain highly sensitive data, like names, birthdays, and in some cases, social security numbers. Worse, the list of compromised medical record systems is rapidly increasing, and the data can all be accessed with a traditional web browser. In fact, Jackie Singh, a cybersecurity researcher and chief executive of the consulting firm Spyglass Security, reports “[i]t’s not even hacking,” because the data is so easily accessible to the average person (Source).

One of these systems belongs to MobilexUSA, whose records, showing patients’ names, dates of birth, doctors, and lists of procedures, were found online (Source).

Experts report that this could be a direct violation of HIPAA and many warn that the potential consequences of this leak are devastating, as medical data is so sensitive, and if in the wrong hands, could be used maliciously (Source).

According to Oleg Pianykh, the director of medical analytics at Massachusetts General Hospital’s radiology department, “[m]edical-data security has never been soundly built into the clinical data or devices, and is still largely theoretical and does not exist in practice.” (Source)

Such a statement signals a privacy crisis in the healthcare industry that desperately needs a fix. According to Pianykh, the problem is not a lack of regulatory standards, but rather that “medical device makers don’t follow them.” (Source) If that is the case, should we expect HIPAA to crack down the same way GDPR has?

With patient privacy up in the air in the US, citizens’ “Right to be Forgotten” in the EU is also being questioned.

The “Right to be Forgotten” states that “personal data must be erased immediately where the data are no longer needed for their original processing purpose, or the data subject has withdrawn [their] consent” (Source). This means that, upon request, a data “controller” must erase any personal data by whatever means necessary, whether that is physical destruction or permanently overwriting the data with “special software.” (Source)

When this law was codified in the General Data Protection Regulation (GDPR), it was implemented to govern within Europe. Yet France’s CNIL fined Google, an American company, $110,000 in 2016 for refusing to remove private data from search results. Google argued the changes should not need to be applied to the google.com domain or other non-European sites (Source).

On Tuesday, the European Court of Justice agreed and ruled that Google is under no obligation to extend EU rules beyond European borders by removing links to sensitive personal data (Source). However, the court made a distinct point that Google “must impose new measures to discourage internet users from going outside the EU to find that information.” (Source) This decision sets a precedent for how far a nation’s laws reach beyond its borders when it comes to digital data.

While the EU has a firm stance on the right to be forgotten, Amazon makes clear that you can “automatically delete [your] voice data”… every three to eighteen months (Source). The lack of immediate erasure is potentially troublesome for those concerned with their privacy, especially alongside the new product launch, which will move Alexa out of your home and onto your body.

On Wednesday, Amazon launched Alexa earbuds (Echo Buds), glasses (Echo Frames), and rings (Echo Loop). The earbuds are available on the marketplace, but the latter two are an experiment and are only available by invitation for the time being (Source). 

With these products, you will be able to access Alexa support wherever you are and, in the case of the Echo Buds, harness the noise-reduction technology of Bose for only US$130 (Source). However, while these products promise to make your life more convenient, using them will allow Amazon to monitor your daily routines, behaviour, quirks, and more.

Amazon specified that its goal is to make Alexa “ubiquitous” and “ambient” by spreading it everywhere, including our homes, appliances, cars, and now, our bodies. Yet, at the same time as it opens up about its strategy for lifestyle dominance, Amazon claims to prioritize privacy, as the first tech giant to allow users to opt out of having their voice data transcribed and listened to by employees. Despite this, it is clear that “Alexa’s ambition and a truly privacy-centric customer experience do not go hand in hand.” (Source)

With Amazon spreading into wearables, Google winning the “Right to be Forgotten” case, and patient records being exposed online, this week is wrapping up to be a black mark on user privacy. Stay tuned for our next weekly news blog to learn about how things shape up. 
