Facial recognition, data marketplaces and AI changing the future of data privacy


With the emerging Artificial Intelligence (AI) market comes the ever-so-popular privacy discourse. Data regulations are being introduced left and right, but while effective, they do not yet account for growing technologies like facial recognition or data marketplaces. 

Companies like Clearview AI are once again making headlines after receiving cease-and-desist letters from Big Tech, despite there being no current facial recognition laws they are violating. Meanwhile, Nature released an article calling for an international code of conduct for genomic research aggregation. And at the intersection of AI and healthcare, Microsoft has announced a $40-million AI for Health initiative.  

Facial recognition company hit with cease-and-desist  

A few weeks ago, we released a blog introducing the facial recognition start-up, Clearview AI, as a threat to privacy.

Since then, Clearview AI has continued to make headlines, and most recently, has received cease-and-desist letters from Big Tech companies like Google, Facebook and Twitter. 

To recap, Clearview AI is a facial recognition company that has created a database of over 3 billion searchable faces, scraped from different social media platforms. The company has introduced its software to more than 600 police departments across Canada and the US. 

The company’s CEO, Hoan Ton-That, has repeatedly defended the company, telling CBS:

“Google can pull in information from all different websites, so if it’s public, you know, and it’s out there, it could be inside Google’s search engine; it can be inside ours as well.”

Google then responded, calling this comparison ‘inaccurate.’ Google is an opt-in public search engine: it gives sites choices about what they make available and offers ways to withdraw images. Clearview provides none of these options, and even goes as far as keeping images in its database after they have been deleted from the source.

While Google and Facebook have both sent Clearview cease-and-desist letters, Clearview maintains that it is within its First Amendment rights to use the information. One privacy attorney told CNET, “I don’t really buy it. It’s really frightening if we get into a world where someone can say, ‘The First Amendment allows me to violate everyone’s privacy.’” 

While cities like San Francisco have started banning facial recognition, there are currently no federal laws addressing it as an issue, thus allowing more leeway for companies like Clearview AI to create potentially dangerous software.  

Opening up genomic data for researchers across the world

With the introduction of new healthcare initiatives, privacy becomes more relevant than ever. Healthcare data contains some of an individual’s most sensitive information, so the idea of Big Tech buying and selling such personal data is unsettling.

Last week, Nature, an international journal of science, reported that over 800 terabytes of genomic data are now available to investigators all over the world. The eight authors worked explicitly to protect the privacy of the thousands of patients and volunteers who consented to have their data used in this research.

The article reports that the six-year collection of 2,658 cancer genomes across 468 institutions in 34 different countries is creating an open market of genome data. The project, called the Pan-Cancer Analysis of Whole Genomes (PCAWG), was the first attempt to aggregate a variety of subprojects and release a dataset globally.

A significant emphasis of this article was on the lack of clarity within the healthcare research community on how to protect data in compliance with the ongoing changes to privacy legislation.

Some of the challenges in these genomic marketplaces lie in complying with a variety of privacy legislation while also ensuring that no individual can be re-identified from the information. Protecting patient data is not just a legislative issue but a moral one. 
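
One concrete way to assess whether a release resists re-identification is to measure its k-anonymity: the size of the smallest group of records that share the same quasi-identifier values. Below is a minimal sketch of that check; the field names and records are hypothetical, not real PCAWG metadata.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the k-anonymity of a dataset: the size of the smallest
    group of records sharing identical quasi-identifier values."""
    groups = Counter(
        tuple(record[qi] for qi in quasi_identifiers) for record in records
    )
    return min(groups.values())

# Hypothetical study metadata (illustrative fields and values only)
records = [
    {"age_range": "40-49", "country": "CA", "variant": "BRCA1"},
    {"age_range": "40-49", "country": "CA", "variant": "TP53"},
    {"age_range": "50-59", "country": "US", "variant": "KRAS"},
]

# k = 1 here: one record is unique on (age_range, country),
# so that individual could be singled out.
print(k_anonymity(records, ["age_range", "country"]))
```

A dataset with k = 1 on its quasi-identifiers is exactly the situation the authors worry about: a single linkable record is enough to re-identify someone.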

Most of the privacy uncertainty came from questions of what vetting should occur before someone gains access to the information, and what checks should be made before the data is shared internationally.

As the article says, “Genomic researchers urgently need clear data-sharing rules that are harmonized across jurisdictions.” The report calls for an international code of conduct to overcome the hurdles posed by the different emerging privacy regulations. 

The article also notes that the Biobanking and BioMolecular Resources Research Infrastructure (BBMRI-ERIC) announced back in 2017 that it would develop an EU Code of Conduct on Health-Related Data.

Microsoft to add another installment to AI for Good

The ability to collect patient data and share it in an open market for researchers and doctors is helping diagnose and treat patients faster than ever before. In addition, AI is seen as another vital tool for the growing healthcare industry.

Last week, Microsoft announced the fifth installment of its ‘AI for Good’ project: ‘AI for Health.’ Like the earlier installments, this project will support healthcare initiatives by providing access to cash grants, AI tools, cloud computing, and Microsoft researchers. 

The project will focus on three different AI strategies: 

  • Accelerating medical research
  • Increasing the understanding of mortality to guard against global health crises
  • Reducing health inequities 

The program will emphasize supporting individual non-profits and under-served communities. In a video, Microsoft also highlighted its focus on addressing Sudden Infant Death Syndrome, eliminating leprosy, and ending diabetic-retinopathy-driven blindness, in partnership with different not-for-profits. 

AI is essential to healthcare, and healthcare generates vast amounts of data that companies like Microsoft are utilizing. But with this, privacy has to remain at the forefront. 

As with Nature’s dataset, protecting user information is both critical and complicated when trying to extract the data’s analytical value while complying with privacy regulations. Microsoft announced that it will use differential privacy as its privacy solution. 

Like Microsoft, we at CryptoNumerics use differential privacy as a method of anonymization that preserves data value. Learn more about differential privacy and CryptoNumeric solutions.
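
To illustrate the idea: differential privacy adds calibrated random noise to query results so that no single individual’s presence in the data can be inferred. Below is a minimal sketch of the Laplace mechanism for a count query; the data is made up, and this is not Microsoft’s or CryptoNumerics’ actual implementation.

```python
import random

def dp_count(values, predicate, epsilon):
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1, so adding Laplace noise with
    scale 1/epsilon gives epsilon-differential privacy. The difference
    of two Exp(epsilon) draws is exactly Laplace(0, 1/epsilon).
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical patient ages (illustrative data only)
ages = [34, 67, 45, 71, 52, 68]

# Noisy answer to "how many patients are over 65?" (true answer: 3)
print(dp_count(ages, lambda a: a > 65, epsilon=1.0))
```

Smaller epsilon means more noise and stronger privacy; larger epsilon means answers closer to the truth but weaker guarantees.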

 

Join our newsletter


Facial Recognition added to the list of privacy concerns


Personal data privacy is a growing concern across the globe. And while we focus on where our clicks and metadata end up, there is a new section of privacy invasion being introduced: the world of facial recognition. 

Unbeknownst to the average person, facial recognition and tracking have infiltrated our lives in many ways and will only continue to grow in relevance as technology develops. 

Companies like Clearview AI and Microsoft sit at opposite ends of the spectrum when it comes to facial recognition, as competing technologies and legislation fight to protect or expose personal information. Data privacy remains an issue elsewhere as well: products like Apple’s Safari have been revealed to be leaking the very information they are sworn to protect. 

Clearview AI is threatening privacy as we know it.

Privacy concerns around facial recognition are growing fast, and for good reason.

Making big waves in facial recognition software is a company called Clearview AI, which has created a facial search engine of over 3 billion photos. On Sunday, January 18th, the New York Times (NYT) published a scathing piece exposing the 2017 start-up. Until then, Clearview AI had managed to keep its operations under wraps, quietly partnering with 600 law enforcement agencies. 

By submitting the photo of one person to the Clearview software, a user gets back hundreds of pictures of that same person from all over the web. Not only are the images exposed, but so is information about where they were taken, which can lead to the discovery of mass amounts of data on one person. 

For example, this software was able to find a murder suspect just by their face showing up in a mirror reflection of another person’s gym photo. 

The company is facing serious privacy-risk concerns. Not only are millions of people’s faces stored in this software without their knowledge, but the chances of the software being used for unlawful purposes are incredibly high.

The NYT also revealed that the software pairs with augmented-reality glasses; someone could take a walk down a busy street and identify every person they passed, including addresses, ages, and more. 

Many services, including Facebook and Twitter, prohibit scraping users’ images — terms that Clearview has violated. When asked about the Facebook violation, CEO Hoan Ton-That shrugged it off, saying everybody does it. 

As mentioned, hundreds of police agencies in both the U.S. and Canada have allegedly been using Clearview’s software to solve crimes since February of 2019. However, a BuzzFeed article has revealed that Clearview’s claim about helping to solve a 2019 subway terrorist threat is false. The incident was a selling point the facial recognition company used to partner with hundreds of law enforcement agencies across the U.S.; the NYPD has stated it was not involved at all. 

This company has introduced a dangerous tool into the world, and there seems to be no coming back. While it has great potential to help solve serious criminal cases, the risk for citizens is astronomical. 

Microsoft at the front of facial recognition protection

To combat privacy violations like the concerns brought forward with Clearview AI, cities like San Francisco have recently banned facial recognition technologies, fearing a privacy invasion. 

Appearing in front of the Washington State Senate, two Microsoft employees sponsored two proposed bills supporting the regulation of facial recognition technologies. Rather than banning facial recognition, these bills look to place restrictions and requirements on the technology’s owners.

Despite Microsoft offering facial recognition as a service, its president Brad Smith called for regulating facial recognition technologies in 2018.

Last year, similar bills drafted by Microsoft made it through the Washington Senate. However, they did not go forward, as the House made changes that Microsoft opposed, including a requirement to certify that the technology worked for all skin tones and genders.

The first of the new Washington bills looks similar to the California Consumer Privacy Act, which Microsoft has stated it complies with. The bill also requires companies to inform their customers when facial recognition is being used; companies would be unable to add a person’s face to their database without direct consent.

The second bill has been proposed by Joseph Nguyen, who is both a state senator and a program manager at Microsoft. This proposed bill focuses on government use of facial recognition technology. 

A section of the second bill requires that law enforcement agencies obtain a warrant before using the technology for surveillance. This requirement has met pushback from some in law enforcement, who argue that people have no expectation of privacy in public, and that a warrant is therefore unnecessary. 

Safari exposed as a danger to user privacy.

On the data-tracking front, Google’s Information Security team has released a report detailing several security issues in the design of Apple’s Safari Intelligent Tracking Prevention (ITP).

ITP is meant to protect users from cross-site tracking by preventing third-party-affiliated websites from receiving information that would allow them to identify the user. The report lists two of ITP’s main functionalities:

  • Establishing an on-device list of prevalent domains based on the user’s web traffic
  • Applying privacy restrictions to cross-site requests to domains designated as prevalent
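
Conceptually, the first of these functions can be modeled as counting how many distinct first-party sites a third-party domain shows up on. The sketch below is a toy illustration of that idea only; WebKit’s real classifier is an on-device machine-learning model, and the domain names and threshold here are invented.

```python
from collections import defaultdict

class PrevalentDomainTracker:
    """Toy model of ITP's prevalent-domain list: flag domains that
    appear as third parties across many distinct first-party sites."""

    def __init__(self, threshold=3):
        # third-party domain -> set of first-party sites it was seen on
        self.seen_on = defaultdict(set)
        self.threshold = threshold

    def record_visit(self, first_party, third_parties):
        for tp in third_parties:
            if tp != first_party:
                self.seen_on[tp].add(first_party)

    def is_prevalent(self, domain):
        return len(self.seen_on[domain]) >= self.threshold

tracker = PrevalentDomainTracker()
for site in ["news.example", "shop.example", "blog.example"]:
    tracker.record_visit(site, ["tracker.example"])

# tracker.example was seen on 3 distinct sites, so it gets flagged
print(tracker.is_prevalent("tracker.example"))
```

Once a domain is flagged as prevalent, ITP's second function kicks in and restricts its cross-site requests — and, as the report shows, the contents of that per-user list are themselves a side channel.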

The report, created by Google researchers Artur Janc, Krzysztof Kotowicz, Lucas Weichselbaum, and Roberto Clapis, describes five attacks that exploit ITP’s design: 

  • Revealing domains on the ITP list
  • Identifying individual visited websites
  • Creating a persistent fingerprint via ITP pinning
  • Forcing a domain onto the ITP list
  • Cross-site search attacks using ITP

(Source)

Even so, the report warns that the advised workarounds “will not address the underlying problem.” 

Most interesting is that, in trying to address privacy issues, Apple’s Safari created more significant ones.

As facial recognition continues to appear in our daily lives, recognizing and educating ourselves on the implications of these technologies is critical to how we move forward as an information-protected society.

To read more privacy blogs, click here

 



Avoid Data Breaches and Save Your Company Money


Tips on how to avoid the privacy risks and breaches that big companies face today. How much data breaches cost in 2019. Why consumers are shying away from sharing their data. And an airline phishing scam that could prove fatal in the long run.

Stay Ahead of the Privacy Game

The Equifax data breach is another wake-up call for all software companies. There’s so much going on today with regard to data exposure, fraud, and threats. Especially with new laws being proposed, companies should take the necessary steps to avoid penalties and breaches. Here are some ways you can stay ahead of the privacy game. 

  1. Get your own security hackers – Many companies have their own cybersecurity teams to test for failures, threats, and more. Companies also hire outside hackers to uncover weaknesses in their privacy or security tactics. “Companies can also host private or public ‘bug bounty’ competitions where hackers are rewarded for detecting vulnerabilities” (Source).
  2. Establish trust with certificates of compliance – Earn your customers’ trust by achieving certificates of compliance. The baseline certification is ISO 27001. If your company offers cloud services, you can attain a SOC 2 Type II attestation.
  3. Limit the data you collect – Some companies ask for too much information. Why ask for a credit card number when you are offering a free trial? If users love the product or service, they themselves will offer to pay for the full version. Have faith in your product or service.
  4. Keep the data only for as long as needed – Keeping data for long periods when you don’t need it is simply a risk for your company. Think about it: as a consumer yourself, how would you react if your personal data were compromised because of a trial you signed up for years ago? (Source)

How much does a data breach cost today?

According to a 2019 IBM + Ponemon Institute report, the average data breach costs a company approximately USD$1.25 million to USD$8.19 million, depending on the country and industry.

Each lost record costs companies an average of USD $148, based on the report’s results, which surveyed 507 organizations across 16 regions of the world and 17 industries. The U.S. has the highest average breach cost, at USD $8.19 million, and healthcare is the most expensive industry, sitting at an average of USD $6.45 million. 

However, the report isn’t all negative: it also provides tips to improve your data privacy. You can reduce the cost of a potential data breach by up to USD $720,000 through simple mitigating steps, such as forming an incident response team or having encryption in place (Source).
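
As a rough illustration of the report’s arithmetic, a back-of-envelope cost estimate can be built from the per-record figure. This assumes cost scales linearly with record count, which is a simplification of the report’s actual model.

```python
AVG_COST_PER_RECORD_USD = 148      # per-record figure from the 2019 report
MAX_MITIGATION_SAVINGS_USD = 720_000  # cited savings from mitigating steps

def estimate_breach_cost(num_records, mitigations_in_place=False):
    """Back-of-envelope breach cost: linear per-record cost, minus the
    report's cited mitigation savings (floored at zero)."""
    cost = num_records * AVG_COST_PER_RECORD_USD
    if mitigations_in_place:
        cost = max(0, cost - MAX_MITIGATION_SAVINGS_USD)
    return cost

print(estimate_breach_cost(25_000))                             # 3700000
print(estimate_breach_cost(25_000, mitigations_in_place=True))  # 2980000
```

Even on this crude model, a mid-sized breach of 25,000 records lands in the millions, which is why the report treats incident response and encryption as cheap insurance.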

Consumers more and more hesitant to share their data

Marketers and data scientists everywhere – beware. A survey of 1,000 Americans conducted by the Advertising Research Foundation indicates that consumers’ willingness to share data with companies has decreased drastically since last year. “I think the industry basically really needs to communicate the benefits to the consumer of more relevant advertising,” said ARF Chief Research Officer Paul Donato. It is important to remember that not all consumers will happily give up their data for better-personalized advertisements (Source).

Air New Zealand breach could pose long-term effects

Air New Zealand’s recent phishing scam from earlier this week has caused fear among customers. The data breach exposed about 112,000 Air New Zealand Airpoints customers to long-term privacy concerns. 

Victims received emails requesting that they disclose personal information, and many responded with details like passport and credit card numbers. 

“The problem is, the moment things are out there, then they can be used as a means to gain further information,” said Dr. Panos Patros, a specialist in cybersecurity at the University of Waikato. “Now they have something of you so then they can use it in another attack or to confuse someone else” (Source).

A good practice for situations like this is to regularly change your passwords and monitor your credit card statements. Refrain from putting common security-question information on your social media, such as the first school you attended or your first pet’s name. Additionally, delete all suspicious emails immediately, without opening them (Source). 



Facial Recognition Technology is Shaking Up the States


Many states in America are employing facial recognition devices at borders to screen travelers. However, some jurisdictions, such as San Francisco and cities in Massachusetts, have banned the use of these devices, and the American Civil Liberties Union (ACLU) is pushing for a nationwide ban. 

It is still unclear how the confidential data gathered by the facial recognition devices will be used. Could it be shared with other branches of the government, such as ICE? 

ICE, or Immigration and Customs Enforcement, has been in the public eye for some time now for its arrests of undocumented workers and immigration offenders. 

“Any time in the last three to four years that any data collection has come up, immigrants’ rights … have certainly been part of the argument,” says Brian Hofer, who is part of Oakland’s Privacy Advisory Commission. “Any data collected is going to be at risk when [ICE is] on a warpath, looking for anything they can do to arrest people. We’re definitely trying to minimize that exposure”.

This unregulated data is what helps ICE locate and monitor undocumented people it accuses of violating laws (Source).

Now Microsoft is listening to your Skype calls

A new day, a new privacy scandal. This week, Microsoft and Skype employees were revealed to be reviewing real consumers’ video chats to check the quality of the software and its translations. 

The problem is that, like most tech companies, they are keeping their customers in the dark about this. Microsoft has not told consumers it does this, though the company claims to have its users’ permission. 

“I recommend users refrain from revealing any identifying information while using Skype Translation, and Cortana. Unless you identify yourself in the recording, there’s almost no way for a human analyst to figure out who you are”, says privacy advocate Paul Bischoff (Source).

Essentially, Alexa, Siri, Google Home, and Skype are listening to your conversations. Yet instead of avoiding these products, we are compromising our privacy for convenience and efficiency. 

Canadians want more healthcare tech, regardless of privacy risks

New studies indicate that Canadians are open to a future where healthcare is further enhanced with technology, despite privacy concerns. 

The advantages of these innovations include reduced medical errors, less data loss, better-informed patients, and much more. 84% of respondents wanted to access their health data on an electronic platform rather than in hard-copy files. 

Dr. Gigi Osler, president of the Canadian Medical Association, states, “We’ve got hospitals that still rely on pagers and fax machines, so the message is clear that Canada’s health system needs an upgrade and it’s time to modernize”. 

Furthermore, most respondents look forward to the possibility of online doctor visits, believing that treatment could be faster and more convenient (Source).

After all, if we bank, shop, read, watch movies and socialize online, why can’t we get digital treatment too? 
