
IoT and everyday life: how interconnected are we?

The Internet of Things (IoT) is a term spanning a variety of ‘smart’ applications, from smart fridges to smart cities. The idea behind ‘smart’, or IoT, is the connectedness between everything and the internet.

It’s hard to grasp the amount of data one person creates each day, let alone where IoT fits into it. And with this new era of ‘smart’ everything, that understanding is pushed even further out of reach.

To understand just how closely our smart technologies follow our everyday behaviours, let’s focus on just one person’s use of a smartwatch.

But first, what are the implications of a smartwatch? This wearable technology gained popularity starting in 2012, giving users the ability to track their health and set fitness goals with a tap on the wrist. Since then, smartwatches have infiltrated all sorts of markets, adding the ability to pay with the watch, take phone calls, or update a Facebook status.

The technology in our lives has become so interconnected that de-identifying our data, while achievable, is complicated to do at scale. Take the smartwatch: the unique footprint we recreate each day is logged and monitored through the small screen on our wrist. And while the data it creates is anonymized to an extent, that is not sufficient.

But why not? After all, technology has moved mountains in the last decade. To better understand this connectedness of our data, let’s follow one person’s day from the point of view of just their smartwatch.

Imagine Tom is a 30-year-old man in excellent health who, like the rest of us, follows a pretty general routine during his workweek. Outside of the many technologies that collect Tom’s data, what might just his smartwatch collect? 

Let’s take a look. 

Every morning, Tom’s smartwatch alerts him at 7:30 am to wake up and start his day. After a few days of logging Tom’s breathing patterns and heart rate, and monitoring his previous alarm settings, Tom’s smartwatch has learned the average time Tom should be awake and alerts Tom to set a 7:30 alarm each night before bed. 

Before Tom ever has to tell his watch what time he gets up in the morning, his watch already knows.

Similar to its alarm system, Tom’s smartwatch knows and labels the six specific places where Tom spends most of his time during the week. Tom didn’t have to tell his watch where he was or why; based on the hours of the day Tom spends at each location, along with his sleeping patterns and other movements, his watch already knows.

Not only are these places determined from his geographic location, but also from the other information his watch creates.

When Tom is at the gym, his elevated heart rate and calories burned are logged. When Tom goes to his local grocery store or coffee shop, he uses his smartwatch to pay. At his workplace, Tom’s watch records the amount of time spent at the location and is able to determine that the two main places Tom spends his time are his home and his work.

Based on a collection of spatio-temporal data, transactional data, health data and repeated behaviour, it is easy to create a very accurate picture of who Tom is.

Let’s keep in mind that this is all created without Tom having to explicitly tell his smartwatch where he is or what he is doing at each minute. Tom’s smartwatch operates on learned behaviours based on the unique pattern Tom creates each day.
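To make this concrete, here is a minimal, hypothetical sketch of how a handful of passively logged smartwatch events can be aggregated into a detailed profile. The event log, field names and thresholds below are invented purely for illustration; they are not taken from any real device or product.

    from collections import Counter
    from statistics import mean

    # Hypothetical smartwatch events: (day, hour, rounded location, heart rate, payment merchant or None)
    events = [
        ("Mon", 7,  (43.65, -79.38), 62,  None),           # wakes at home
        ("Mon", 9,  (43.66, -79.40), 70,  "coffee shop"),  # pays on the way to work
        ("Mon", 10, (43.66, -79.41), 68,  None),           # at the office
        ("Mon", 18, (43.64, -79.39), 142, None),           # gym: elevated heart rate
        ("Tue", 7,  (43.65, -79.38), 60,  None),
        ("Tue", 9,  (43.66, -79.40), 72,  "coffee shop"),
        ("Tue", 10, (43.66, -79.41), 69,  None),
        ("Tue", 19, (43.64, -79.39), 150, None),
    ]

    # The most-visited locations become "home", "work" and "gym" without the user ever labelling them.
    top_places = Counter(loc for _, _, loc, _, _ in events).most_common(3)

    # The hour of the earliest event each day approximates wake-up time.
    first_seen = {}
    for day, hour, *_ in events:
        first_seen[day] = min(hour, first_seen.get(day, 24))
    avg_wake_hour = mean(first_seen.values())

    # Hours with an elevated heart rate suggest a workout routine; merchants reveal spending habits.
    workout_hours = sorted({hour for _, hour, _, hr, _ in events if hr > 120})
    merchants = {m for *_, m in events if m}

    print("Most visited places:", top_places)
    print("Approximate wake-up hour:", avg_wake_hour)
    print("Likely workout hours:", workout_hours)
    print("Regular merchants:", merchants)

Even this toy aggregation surfaces a likely home, workplace, gym, wake-up time and spending habit without the wearer ever labelling anything.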

This small peek into Tom’s life, according to his watch, isn’t even much of a “peek” at all. We could analyze the data retained by his smartwatch with each purchase, each change of location, or only the data pertaining to his health.

This technology is found in our cars, fridges, phones and TVs. Understanding how just one device collects and infers so much about you is therefore critical to how we interact with these technologies. What’s essential to understand next is how this data is handled, protected and shared.

The more advanced our technology gets, the easier it becomes to link the data it collects back to a specific person. It’s more important than ever to understand the impacts of our technology use, what data of ours is being collected, and where it is going.

At CryptoNumerics we have been developing a solution that can de-identify this data without destroying its analytical value. 

If your company has transactional and/or spatio-temporal data that needs to be privacy-protected, contact us to learn more about our solution.

Data sharing is an issue across industries

Privacy, as many of our previous blogs have emphasized, is essential not only to the business-customer relationship but also on a moral level. The recent Fitbit acquisition by Google has created big waves in the privacy sphere, as customers’ health data is at risk given Google’s past dealings with personal information. On the topic of healthcare data, the recent Coronavirus panic has thrown patient privacy out the window as fear of the spreading virus rises. Finally, data sharing continues to raise eyebrows as the popular social media app TikTok scrambles to protect its privacy reputation.

Fitbit acquisition causing major privacy concerns

From its in-home voice assistant to the world’s most-used search engine, Google has infiltrated most aspects of everyday life. There are seemingly no corners left untouched by the company.

In 2014, Google released Wear OS, a smartwatch platform for monitoring health and pairing with a phone. While wearable technology has soared to the top of the charts as a popular way to track and manage health and lifestyle, Google’s Wear OS has not gained the popularity needed to remain a strong competitor.

In November of last year, Google announced its acquisition of Fitbit for $2.1 billion. Fitbit has sold over 100 million devices and is worn by over 28 million people, 24 hours a day, 7 days a week. Many are calling this Google’s attempt to recover from its failing project.

But there is more to this acquisition than staying on top of the market: personal data.

Google’s poor privacy reputation is falling onto Fitbit, as fears grow that the personal information Fitbit holds, like sleep patterns or heart rate, will fall into the hands of third parties and advertisers.

Healthcare is a large market, one which Google has been quietly buying into for years. Accessing personal health information gives Google an edge in the healthcare partnerships it’s been looking for.

Fitbit has come under immense scrutiny after its announced partnership with Google, seeing sales drop 5% in 2019. Many are urging Fitbit consumers to ditch their products amidst the acquisition.

However, Fitbit still maintains that users will be in full control of their data and that the company will not sell personal information to Google.

The partnership will be followed with a close eye going forward, as government authorities such as the Australian Competition and Consumer Commission open inquiries into the companies’ intentions.

TikTok scrambling to fix privacy reputation

TikTok is a social media app that has taken over video streaming services. With over 37 million users in the U.S. last year, TikTok has been downloaded over 1 billion times, and that number is expected to rise 22% this year.

While the app reports these remarkably high download numbers, it has been repeatedly reprimanded for its poor privacy policy and its inability to protect users’ information. After the app was already banned at companies across the U.S., Republican Senator Josh Hawley introduced legislation to prohibit federal workers from using it. This follows several security flaws reported against the app in January involving user location and access to user information.

The CEO of Reddit recently criticized TikTok, saying he tells people, “don’t install that spyware on your phone.”

These privacy concerns stem from the app’s connection with the Chinese government. In 2017, the viral app Musical.ly was acquired for $1 billion by the Beijing-based company ByteDance and merged into TikTok. Chinese law requires companies to comply with government intelligence operations if asked, meaning apps like TikTok would have no authority to decline government access to their data.

In response to the privacy backlash, the company stated last year that all of its data centers are located entirely outside of China. However, its privacy policy does state that it shares a variety of user data with third parties.

In new attempts to combat privacy concerns, Roland Cloutier, formerly of ADP, has been hired as Chief Information Security Officer to oversee information privacy issues within the popular app.

With Cloutier’s long history in cybersecurity, there is hope that the wildly popular app will soon gain a better privacy reputation.

Coronavirus raising concerns over personal information

The Coronavirus is a deadly, fast-spreading respiratory illness that has moved quickly throughout China and has now been reported in 33 countries across the world.

Because of this, China has been thrown into a rightful panic and has gone to great lengths to contain the virus’s spread. However, in working to prevent its continued spread, many say patient privacy is being thrown out the window.

Last month, China released a ‘close contact’ app that checks whether users have been around people who have contracted the virus. The app assigns each user a colour code: green for safe, yellow for a required 7-day quarantine, and red for a 14-day quarantine.

Not only is the app required to enter public places like subways or malls, but the data is also shared with police. 

The New York Times reported that the app sends a person’s location, city name and an identifying code number to the authorities. China’s already high-tech surveillance has reached new limits, as the Times reports that surveillance cameras placed around neighbourhoods are being closely monitored to watch residents who present yellow or red codes.

South Korea has also thrown patient privacy to the wind, as text messages are sent out highlighting every movement of individuals who have contracted the virus. One individual’s extra-marital affair was exposed through the string of messages, revealing his every move before contracting the virus, according to the Guardian.

The question on everyone’s mind now is, what happens to privacy when the greater good is at risk?

For more privacy blogs, click here



Facial Recognition added to the list of privacy concerns

Personal data privacy is a growing concern across the globe. And while we focus on where our clicks and metadata end up, a new frontier of privacy invasion is being introduced: the world of facial recognition.

Unbeknownst to the average person, facial recognition and tracking have infiltrated our lives in many ways and will only continue to grow in relevance as technology develops. 

Companies like Clearview AI and Microsoft are on opposite ends of the spectrum when it comes to facial recognition, as competing technologies and legislation fight to protect or expose personal information. Data privacy remains an issue as well, as products like Apple’s Safari are revealed to be leaking the very information they’re sworn to protect.

Clearview AI is threatening privacy as we know it.

Privacy concerns around facial recognition are growing, and they are more relevant than ever.

Making big waves in facial recognition software is a company called Clearview AI, which has created a facial search engine of over 3 billion photos. On Sunday, January 18th, the New York Times (NYT) wrote a scathing piece exposing the 2017 start-up. Until now, Clearview AI has managed to keep its operations under wraps, quietly partnering with 600 law enforcement agencies. 

By submitting a photo of one person to the Clearview software, users receive hundreds of pictures of that same person from all over the web. Not only are the images exposed, but so is information about where they were taken, which can lead to discovering vast amounts of data about one person.
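Clearview has not published how its system works, but facial search engines of this kind generally convert each face into a numeric embedding and then look up the closest matches. The sketch below uses made-up embedding vectors and example URLs purely to illustrate that nearest-neighbour idea; it is not Clearview’s implementation.

    import numpy as np

    # Hypothetical index: face embeddings (e.g. produced by a neural network) keyed by the page each photo was scraped from.
    index = {
        "https://example.com/gym-photo":     np.array([0.91, 0.10, 0.33]),
        "https://example.com/wedding-album": np.array([0.88, 0.12, 0.35]),
        "https://example.com/other-person":  np.array([0.05, 0.97, 0.20]),
    }

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def search(query_embedding: np.ndarray, top_k: int = 2):
        """Return the scraped pages whose face embeddings are closest to the query face."""
        scored = [(url, cosine_similarity(query_embedding, emb)) for url, emb in index.items()]
        return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

    # One submitted photo, reduced to an embedding, pulls back every page where a similar face appears.
    query = np.array([0.90, 0.11, 0.34])
    print(search(query))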

For example, this software was able to find a murder suspect just by their face showing up in a mirror reflection of another person’s gym photo. 

The company is being questioned for serious privacy risk concerns. Not only are millions of people’s faces stored on this software without their knowledge, but the chances of this software being used for unlawful purposes are incredibly high.

The NYT also revealed that the software can pair with augmented reality glasses: someone could walk down a busy street and identify every person they passed, along with their address, age, and more.

Many services, including Facebook and Twitter, prohibit scraping users’ images. However, Clearview has violated those terms. When asked about the Facebook violation, the CEO, Mr. Ton-That, brushed it off, saying everybody does it.

As mentioned, hundreds of police agencies in both the U.S. and Canada have allegedly been using Clearview’s software to solve crimes since February of 2019. However, a BuzzFeed article has just revealed that Clearview’s claim about helping to solve a 2019 subway terrorism threat is false. The claim was a selling point the facial recognition company used to partner with hundreds of law enforcement agencies across the U.S. The NYPD has stated that Clearview was not involved at all.

This company has introduced a dangerous tool into the world, and there seems to be no coming back. While it has great potential to help solve serious criminal cases, the risk for citizens is astronomical. 

Microsoft at the forefront of facial recognition protection

To combat privacy violations similar to the concerns brought forward with Clearview AI, cities like San Francisco have recently banned facial recognition technologies, fearing a privacy invasion.

Appearing in front of the Washington State Senate, two Microsoft employees sponsored two proposed bills supporting the regulation of facial recognition technologies. Rather than banning facial recognition, these bills look to place restrictions and requirements on the technology owners.

Despite Microsoft offering facial recognition as a service, its president Brad Smith called for regulating facial recognition technologies in 2018.

Last year, similar bills, drafted by Microsoft, made it through the Washington Senate. However, they did not go forward because the House made changes that Microsoft opposed, including a requirement to certify that the technology worked for all skin tones and genders.

The first of these new Washington bills looks similar to the California Consumer Privacy Act, which Microsoft has stated it complies with. This bill also requires companies to inform their customers when facial recognition is being used. Companies would be unable to add a person’s face to their database without direct consent.

The second bill has been proposed by Joseph Nguyen, who is both a state senator and a program manager at Microsoft. This proposed bill focuses on government use of facial recognition technology. 

A section of the second bill requires that law enforcement agencies obtain a warrant before using the technology for surveillance. This requirement has been met with pushback from some in law enforcement, who say that people don’t have an expectation of privacy in public and thus the demand for a warrant is unnecessary.

Safari exposed as a danger to user privacy.

On the topic of data tracking, Google’s information security team has released a report detailing several security issues in the design of Apple’s Safari Intelligent Tracking Prevention (ITP).

ITP is used to protect users from cross-site tracking by preventing third-party-affiliated websites from receiving information that would allow the user to be identified. The report lists two of ITP’s main functionalities (the first is sketched conceptually just after the list):

  • Establishing an on-device list of prevalent domains based on the user’s web traffic
  • Applying privacy restrictions to cross-site requests to domains designated as prevalent
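WebKit’s actual classifier uses several signals and is more sophisticated than this, but a toy sketch of the first functionality might look like the following: a per-domain record of cross-site appearances, with domains above an invented threshold marked as ‘prevalent’ and subjected to restrictions.

    from collections import defaultdict

    # Toy model of an on-device prevalence list (not WebKit's actual algorithm or thresholds).
    PREVALENCE_THRESHOLD = 3  # invented value for illustration

    # third-party domain -> set of distinct first-party sites it has appeared on
    cross_site_appearances = defaultdict(set)

    def observe_third_party_request(third_party_domain: str, first_party_site: str) -> None:
        """Record that a third-party domain loaded while the user browsed a first-party site."""
        cross_site_appearances[third_party_domain].add(first_party_site)

    def is_prevalent(domain: str) -> bool:
        """A domain seen across many unrelated sites is classified as a likely tracker."""
        return len(cross_site_appearances[domain]) >= PREVALENCE_THRESHOLD

    def handle_cross_site_request(domain: str) -> str:
        # The second functionality: cross-site requests to prevalent domains lose identifying state.
        return "strip cookies and referrer" if is_prevalent(domain) else "allow"

    for site in ["news.example", "shop.example", "blog.example"]:
        observe_third_party_request("tracker.example", site)

    print(handle_cross_site_request("tracker.example"))  # -> strip cookies and referrer
    print(handle_cross_site_request("cdn.example"))      # -> allow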

The report, created by Google researchers Artur Janc, Krzysztof Kotowicz, Lucas Weichselbaum, and Roberto Clapis, describes five different attacks that exploit ITP’s design. These attacks are:

  • Revealing domains on the ITP list
  • Identifying individual visited websites
  • Creating a persistent fingerprint via ITP pinning
  • Forcing a domain onto the ITP list
  • Cross-site search attacks using ITP

(Source)

Even so, the workarounds suggested in the report “will not address the underlying problem.”

The most interesting takeaway from the report is that, in trying to address privacy issues, Apple’s Safari created even bigger ones.

As facial recognition continues to appear in our daily lives, recognizing and educating on the implications these technologies will have is critical to how we move forward as an information-protected society.

To read more privacy blogs, click here

 



Two Paths of Data Monetization: Exploitation or Protection

Companies are relying more than ever on collecting consumer data to drive their business further. Despite efforts made by legislation and consumer pleas, organizations continue to store and use user data improperly, leading to breaches of consumer privacy.

Privacy regulations have shed some light on how companies have been using their customers’ data; 79% of Americans now say they are concerned about the way big companies use their data.

Strides have been made toward proper ways to protect (while still utilizing) consumer data. All the while, social media leaders choose exploitation over protection.

Grindr at the forefront of a significant privacy scandal

On January 14, 2020, the Norwegian Consumer Council (NCC) released a 186-page report outlining ten leading apps that are violating the GDPR and placing their users’ data at risk of exploitation.

Most significant among the list was the popular LGBT dating app Grindr. Grindr asks users for personal information to pair them with potential partners. In doing so, users have unknowingly put their private, sensitive information into the hands of multiple third-party companies, leaving it at risk of tracking, monetization, and exposure.

The NCC report found that a Twitter-owned company, MoPub, is “acting as an advertising mediator for Grindr, facilitating transmissions containing personal data to other ad tech companies.”

Grindr’s privacy policy states that the company shares user data with MoPub. However, MoPub’s privacy policy then says that it shares this data with over 160 of its partners, and asks users to read each of its partners’ privacy policies to determine where their information goes from there.

One MoPub partner is AT&T, a telecommunications company that, on its own, has an estimated 1,000 partners. This means that not only is AT&T taking data from Grindr to adjust its marketing methods, but user data also has the potential to be sold far beyond MoPub’s 160 partners.

This information includes raw IP addresses, GPS locations, advertising IDs, and any specific keywords a third party requests. It could reveal personal information such as HIV status, drug use, or sexual preference.

Grindr and the other Android apps listed in the NCC report are an excellent representation of how in the dark users are about where their information is being sold. The report shows that the privacy policies of these organizations are written in ways that prevent users from understanding or grasping the significance of the data sharing taking place.

The NCC has filed complaints under the GDPR, hoping that action will be taken to protect user information.

The problem is not that Grindr has shared data with third parties; the issue is that raw data was shared, with no attempt to de-identify or in any way anonymize it, both to comply with the GDPR and to keep user data private.
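To illustrate the difference between sharing raw records and making even a basic de-identification effort, here is a minimal sketch. The field names, rounding choices and salting scheme are invented for illustration and do not describe any particular company’s pipeline; the point is simply that coarsening location, truncating the IP address, replacing the stable advertising ID with a rotating pseudonym, and dropping sensitive keywords greatly reduces how precisely a record points back to one person.

    import hashlib
    import secrets

    # A hypothetical raw record of the kind the NCC report describes being passed to ad partners.
    raw_record = {
        "ip": "203.0.113.42",
        "gps": (59.91342, 10.75245),
        "advertising_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
        "keywords": ["hiv_status", "dating"],
    }

    ROTATING_SALT = secrets.token_hex(16)  # rotated periodically so pseudonyms cannot be linked over time

    def generalize(record: dict) -> dict:
        """A basic de-identification pass: coarsen, truncate, pseudonymize, and drop sensitive fields."""
        lat, lon = record["gps"]
        return {
            "ip_prefix": ".".join(record["ip"].split(".")[:2]) + ".0.0",  # keep only the /16 network
            "gps_rounded": (round(lat, 2), round(lon, 2)),                # roughly 1 km instead of metres
            "pseudonym": hashlib.sha256(
                (ROTATING_SALT + record["advertising_id"]).encode()
            ).hexdigest()[:12],
            # sensitive keywords are dropped entirely rather than shared
        }

    print(generalize(raw_record))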

Mayo Clinic pairs with Nference to protect patient data while creating a lexicon of diseases

Just last week, the Minnesota-based not-for-profit academic medical center Mayo Clinic announced that it has partnered with Nference to collate years of clinical research and patient information and perform analytics on de-identified data. In doing so, the two organizations aim to create an easily accessible data platform where doctors and other researchers can find the information necessary to diagnose or educate.

Nference is an augmented intelligence company that looks to make biomedical knowledge accessible for medical researchers, scientists, and practitioners through evolving software. 

Nference looks to de-identify patient data by removing identifiers such as patient names and locations, along with other Protected Health Information (PHI). Nference has ensured that its software complies with HIPAA and mitigates re-identification risk.
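Nference has not published the details of its pipeline, but a minimal sketch of the general idea, loosely inspired by HIPAA’s Safe Harbor guidance, looks like this (the record fields and generalization rules are invented for illustration): direct identifiers are removed outright, while quasi-identifiers such as age and ZIP code are generalized so they no longer single out a patient.

    DIRECT_IDENTIFIERS = {"name", "email", "phone", "medical_record_number"}

    def deidentify(record: dict) -> dict:
        """Drop direct identifiers and generalize quasi-identifiers in a patient record."""
        clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

        # Generalize age into a band; Safe Harbor additionally collapses ages over 89.
        if "age" in clean:
            age = clean.pop("age")
            clean["age_band"] = "90+" if age >= 90 else f"{(age // 10) * 10}-{(age // 10) * 10 + 9}"

        # Keep only the first three ZIP digits, a common coarse geographic unit.
        if "zip" in clean:
            clean["zip3"] = clean.pop("zip")[:3]

        return clean

    patient = {
        "name": "Jane Doe",
        "medical_record_number": "MRN-0042",
        "age": 67,
        "zip": "55905",
        "diagnosis": "type 2 diabetes",
    }

    print(deidentify(patient))
    # {'diagnosis': 'type 2 diabetes', 'age_band': '60-69', 'zip3': '559'}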

Concerns about patient consent in this partnership are avoided, as all information is appropriately de-identified. Because this is not a clinical trial or a study providing medical care that requires consent, Nference can access the data, apply proper de-identification techniques, and perform advanced analytics. Nference co-founder Venky Soundararajan said that through this partnership, “the lexicon of diseases will be fundamentally redefined.”

This partnership understands the significance of utilizing its years of medical data while maintaining patient confidentiality and avoiding the chances of patient re-identification.

The Mayo Clinic partnership and the NCC’s study of Android applications expose two very different approaches to data analytics and privacy protection, and a divide in how data monetization is achieved.

In comparing these two methods of monetizing data, there is a clear right and wrong approach. De-identifying consumer data not only mitigates the risk of privacy breaches, regulatory fines, and loss of consumer trust, but also maintains data value so that analytics remains profitable.

Privacy is a conversation that will only grow in relevance and legislation. Complying with privacy laws and protecting an organization’s consumers is crucial as we move further into the technological age. The question thus remains: do you know where your data is going?



Privacy: The Most Talked About Gadget of CES 2020

This week, Las Vegas once again hosted the Consumer Electronics Show (CES), accompanied by a range of flashy new gadgets. Most significant among the mix: privacy.

Technology front runners such as Facebook, Amazon, and Google took the main stage in unveiling data privacy changes in their products, as well as headlining discussions surrounding the importance of consumer privacy. However, through each reveal, attendees noticed gaps and missteps in these companies’ attempts at privacy.

Facebook: A New Leader in Data Privacy? 

This year, Facebook attempted to portray itself as a changed company when it comes to privacy. Complete with comfortable seating and flowers, Facebook’s CES booth presented a company dedicated to customer privacy, pushing the idea that Facebook does not sell customer data.

Facebook relaunched its “Privacy Checkup”, originally created in 2014, as a new-and-improved tool complete with easy-to-manage data-sharing settings. Facebook took the opportunity at this year’s CES to display added features such as the ability to turn off facial recognition, manage who can see an account or its posts, and add or remove ad preferences based on personal browsing history.

While these changes to privacy settings are a step in the right direction towards protecting user data, attendees could not help but notice the significant data privacy initiatives Facebook is side-stepping, most notably the lack of user control over how advertisers use personal information.

Ring’s New Control Center: Fix or Flop?

Ring has been a hot commodity in household security since its purchase by Amazon in 2018. However, recently, the company has come under fire for its law enforcement partnerships. 

In light of mounting hacking concerns, the home security company used CES to announce a new dashboard for both Apple and Android users called the “Control Center”. The center lets users manage connected Ring devices and third-party devices, and provides options governing whether law enforcement can request access to Ring videos.

Ring has missed its customers’ initial requests for additions such as suspicious-activity detection or notifications for new account logins. Instead, it has continued to add software that places the onus on users to protect themselves. Customers view this so-called privacy update as nothing more than a “cosmetic redesign”. The device still provides no significant protection against hackers, and therefore no notable privacy protection for its customers.

Google Assistant: New Front-Runner in Privacy Adjustments

Each year, Google is celebrated for taking full advantage of CES to immerse visitors in the company’s technology. This year, Google’s efforts focused on Google Assistant.

After last year’s confirmation that third-party workers were monitoring Google Assistant, Google’s efforts to address data privacy have been at the forefront of this year’s CES presence. On January 7, 2020, Google announced new features for its Assistant, reaffirming its dedication to privacy protection. Users are now able to ask their Assistant questions and make requests such as:

  • “Are you saving my audio data?”
  • “Hey google, delete everything I said to you this week”
  • “Hey Google, that wasn’t for you”
  • “How are you keeping my information private?”

Source

Of these new user commands, the most significant is “Are you saving my audio data?” This command lets users check whether their Assistant is set to allow Google to retain their audio.

However, some Google Assistant users are accusing Google of placing the onus on the user instead of creating a product that protects them. As with the Ring controversy, there is frustration that Google is missing the mark in understanding the privacy demands of its users. That said, Google is one of the few companies taking steps in the right direction to meaningfully change how user information is stored.

It is clear that this year’s CES, while still delivering new and exciting ‘gadgets of the future’, experienced a shift towards privacy as the most significant technological topic. While that shift was embraced by most leading tech companies, many continue to miss the mark in understanding the privacy their users want.

Facebook, Ring and Google each brought forward privacy changes of topical interest while continuing to misunderstand what it means to keep their users’ information private. Thus the question we must ask ourselves as consumers of these products remains: are these minimal changes enough for us to keep entrusting them with our information?



Breaching Data Privacy for a Social Cause

Data partnerships are increasingly justified as a social good, but in a climate where companies are losing consumer trust through data breaches, privacy concerns begin to outweigh the social benefits of data sharing. 

 

This week, Apple is gaining consumer trust with its revamped Privacy Page. Facebook follows Apple’s lead as it becomes more wary about sharing a petabyte of data with Social Science One researchers due to increasing data privacy concerns. Also, law enforcement may be changing the genetic privacy game as it gains unprecedented access to millions of DNA records to solve homicide cases and identify victims.

Apple is setting the standard for taking consumer privacy seriously—Privacy as a Social Good

Apple is setting the stage for consumer privacy with its redesigned privacy page. Apple CEO Tim Cook announced, “At Apple, privacy is built into everything we make. You decide what you share, how you share it, and who you share it with. Here’s how we protect your data.” (Source)

There is no doubt that Apple is leveraging data privacy as a selling point. On Apple’s new privacy landing page, bold letters emphasize that privacy is a fundamental part of the company, essentially one of its core values (Source).

Apple’s privacy page explains how they’ve designed their devices with their consumers’ privacy in mind. They also showcase how this methodology applies to their eight Apple apps: Safari browser, Apple Maps, Apple Photos, iMessage, Siri Virtual Assistant, Apple News, Wallet and Apple Pay, and Apple Health.

A privacy feature fundamental to many of Apple’s apps is that the data on an Apple device is stored locally and is never released to Apple’s servers unless the user consents to share it or personally shares it with others. Personalized features, such as smart suggestions, are based on random identifiers rather than the user’s identity (a small sketch of this idea follows the list below).

  • Safari Browser blocks the data that websites collect about site visitors with an Intelligent Tracking Prevention feature and makes it harder for individuals to be identified by providing a simplified system profile for users. 
  • Apple Maps does not require users to sign in with their Apple ID. This eliminates the risk of user location and search information history linking to their identity. Navigation is based on random identifiers as opposed to individual identifiers.  

  • Photos taken on Apple devices are processed locally and are not shared unless stored in the cloud or shared by the user.

  • iMessages aren’t shared with Apple and are encrypted via end-to-end device encryption.
  • Siri, Apple’s voice-activated virtual assistant, can process information without it being sent to Apple’s servers. Data that is sent back to Apple is not associated with the user and is only used to update Siri.
  • Apple News curates personalized news and reading content based on random identifiers that are not associated with the user’s identity. 
  • Apple Wallet and Pay creates a device account number anytime a new card is added. Transactional data is only shared between the bank and the individual.
  • Apple Health is designed to empower the user to share their personal health information with whom they choose. The data is encrypted and can only be accessed by the user via passcodes. 
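The ‘random identifiers’ mentioned above can be illustrated with a small, hypothetical sketch (this is not Apple’s implementation): requests tagged with a stable account ID can all be linked back to one person over time, while requests tagged with a freshly generated random identifier cannot.

    import uuid

    STABLE_ACCOUNT_ID = "user-12345"  # a persistent identifier: every request links back to the same person

    def request_with_stable_id(query: str) -> dict:
        return {"id": STABLE_ACCOUNT_ID, "query": query}

    def request_with_random_id(query: str) -> dict:
        # A fresh random identifier per request (or per session) carries no link to the account.
        return {"id": str(uuid.uuid4()), "query": query}

    # Two searches by the same person:
    print(request_with_stable_id("coffee near me"))   # both carry "user-12345" and are trivially linkable
    print(request_with_stable_id("pharmacy hours"))
    print(request_with_random_id("coffee near me"))   # different random ids: the server cannot tie them together
    print(request_with_random_id("pharmacy hours"))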

 

Facebook realizes the ethical, legal, and technical concerns in sharing 1,000,000 gigabytes of data with social science researchers

Facebook has been on the wrong side of data privacy ever since the Cambridge Analytica scandal in 2018, in which users’ data was obtained without their consent for political advertising. Now that Facebook is approaching privacy with users’ best interests in mind, tension is forming between the worlds of technology and social science.

Earlier this year, Facebook and Social Science One partnered in a new model of industry-academic collaboration to “help people better understand the broader impact of social media on democracy—as well as improve our work to protect the integrity of elections,” said Facebook (Source).

Facebook agreed to share 1,000,000 gigabytes of data with Social Science One to conduct research and analysis but has failed to meet its promises.

According to Facebook, it was almost impossible to apply anonymization techniques such as differential privacy to the necessary data without stripping it completely of its analytical value.   
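Differential privacy, the technique Facebook cited, adds calibrated random noise to aggregate results so that no single individual’s presence can be inferred; the trade-off is that stronger privacy (a smaller epsilon) means noisier, less useful answers. Below is a minimal sketch of the standard Laplace mechanism for a counting query, with an invented dataset size and epsilon values chosen only for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def private_count(true_count: int, epsilon: float) -> float:
        """Laplace mechanism for a counting query: the sensitivity of a count is 1."""
        noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    true_count = 42  # e.g. users in a small demographic slice who shared a given URL

    # A large epsilon (weak privacy) barely perturbs the answer;
    # a small epsilon (strong privacy) can swamp it entirely.
    for epsilon in [1.0, 0.1, 0.01]:
        print(epsilon, round(private_count(true_count, epsilon), 1))

For small subgroup counts, the noise needed for strong privacy can overwhelm the signal, which is the kind of analytical-value loss described above.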

Facebook half-heartedly released some data as deadlines and pressure mounted, but what it released fell far short of what it promised. Facebook’s failure to share the data it agreed to undermines the proposed social benefit of using the data to study the impact of disinformation campaigns.

Facebook is torn between its commitment to a socially beneficial cause and the risk of breaching the privacy of its users.

This exemplifies how Facebook may not have been fully prepared to shift its business model from one that involved data monetization to a CSR-driven (corporate social responsibility) model where data sharing is used for research while keeping privacy in mind. 

Will Facebook eventually fulfill their promises?

 

Socially Beneficial DNA Data: Should Warrants be given to access Genealogy website databases?

At a police convention last week, Florida detective Michael Fields revealed how he received a valid law enforcement request to access GEDmatch.com data (Source).

GEDmatch is a genealogy website that contains over a million users’ records. But does the social benefit accrued outweigh the privacy violation to users whose data was exposed without their consent?

Last year, GEDmatch faced a mix of scrutiny and praise when it helped police identify the Golden State Killer by granting access to its database (Source). After privacy concerns surfaced, GEDmatch updated its privacy terms: law enforcement could only access data from users who opted in to share it. Additionally, police are limited to searches related to “murder, nonnegligent manslaughter, aggravated rape, robbery or aggravated assault” cases (Source).

The recent warrant granted to Detective Fields overrode GEDmatch’s privacy terms by allowing him to access the data of all users, even those who did not consent. This was the first time a judge agreed to a warrant of this kind. It changes the tone of genetic privacy, potentially setting a precedent about who has access to genetic data.

 
