Google and Apple to lead data privacy in the global pandemic

What happens to privacy in a global pandemic? This question continues to be debated as countries like Canada, the United States, and the United Kingdom move into what is assumed to be the peak of COVID-19's spread.

The world watched as countries like South Korea and China introduced drastic measures to track their citizens, essentially stripping their privacy rights. But as case numbers continue to rise in the Western world, governments are looking to implement similar tracking technologies on their own citizens' devices.

At the frontline of contact-tracing app development is NHSX, the U.K. National Health Service's health technology unit. The U.K.'s contact-tracing app would track COVID-19-positive patients and alert the people they had been in contact with.

However, before the app could launch, big tech companies Google and Apple released their joint contact-tracing system, which limits invasive apps on their devices and has therefore derailed the NHSX app's development.

Google and Apple have announced that they are not releasing an app themselves, but rather a set of "privacy-focused APIs" to ensure that governments do not push invasive apps onto their citizens' devices.

Countries like Singapore that already run contact-tracing apps have hit problems Google and Apple are looking to avoid, including requiring citizens to leave their phones unlocked and severe battery drain.

Google and Apple have stated that their Bluetooth-based system will run in the background and work even when the phone is locked. They have also said the system will be shut down once the pandemic is over.
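
For intuition, here is a much-simplified sketch of the rolling-identifier idea behind such a Bluetooth system: the phone broadcasts short-lived tokens derived from a daily key, so contacts can be matched later without exchanging names, locations, or stable device IDs. The key derivation, labels, and interval length below are assumptions for illustration, not the published specification.

```python
# A simplified illustration of privacy-preserving Bluetooth contact tracing.
# Key derivation, labels, and interval length are assumptions for this sketch,
# not the actual Apple/Google specification.
import hashlib
import hmac
import os
import time

daily_key = os.urandom(16)  # rotated daily; stays on the device unless the user reports a positive test

def rolling_proximity_id(key: bytes, interval: int) -> bytes:
    """Short-lived token broadcast over Bluetooth; changes every ~15 minutes."""
    return hmac.new(key, f"RPI-{interval}".encode(), hashlib.sha256).digest()[:16]

current_interval = int(time.time() // (15 * 60))  # current 15-minute window
beacon = rolling_proximity_id(daily_key, current_interval)

# Phones record the beacons they hear. If a user tests positive, only their
# daily keys are published; other phones recompute the tokens locally and
# check for matches, so no central server learns who met whom.
print(beacon.hex())
```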

The two big tech companies have set a high standard for privacy in the pandemic age. They will have to grant permission not only for government applications to go live but also for health authorities to access the technology (source). They have also said they are developing policies on whether tracing apps will be allowed to gather location data.

One Oxford University researcher said that around two-thirds of a country's population would need to participate for contact tracing to be effective. However, the top U.S. infectious-disease expert says that many Americans would be inclined to reject any contact-tracing app known to collect their location data.

The idea behind the Google/Apple partnership is to ensure governments do not force highly invasive technologies onto their citizens, and that while the world is engulfed in chaos, personal privacy remains as intact as possible.

The NHSX has continued its app development, though it is reportedly in close contact with the Apple/Google partnership. The European Commission told one reporter that "mobile apps should be based on anonymized data and work with other apps in E.U. countries."

As the world struggles to contain the virus's spread, systems such as the Google/Apple partnership could have a great effect on how COVID-19 is managed. Going forward, it is important to pay attention not only to how our data is being managed, but also to how our anonymized data can help save others.

 



How can working from home affect your data privacy?

On March 11, the World Health Organization declared the coronavirus (COVID-19) outbreak a global pandemic, sending the world into a frenzy. Since that declaration, countries around the world have shut borders, closed schools, asked citizens to stay indoors, and sent workers home.

While the world may appear to be at a standstill, some jobs still need to get done. Like us at CryptoNumerics, companies have sent their workers home with the tools they need to complete their regularly scheduled tasks from the comfort of their own homes. 

However, with a new influx of people working from home, insecure networks, websites, and AI tools can leave company information vulnerable. In this article, we'll go over where your privacy may be at risk during this work-from-home season.

Zoom’s influx of new users raises privacy concerns.

Zoom is a video-conferencing company used to host meetings, online chats, and online collaboration. With people across the world required to work or attend school online, Zoom has seen a substantial increase in users. In February, Zoom shares rose 40%, and in the first three months of 2020 it added more monthly active users than in all of 2019 (Source).

While this influx and global exposure are significant for any company, this unprecedented level of usage can expose holes in Zoom's privacy protection efforts, a concern that many are starting to raise.

Zoom's growing demand makes it a big target for third parties, such as hackers, looking to gain access to sensitive or personal data. Zoom is being used by companies large and small, as well as by students across university campuses, which means a great deal of important, sensitive data could very well be vulnerable.

Some university professors have decided against Zoom telecommuting, saying the Zoom privacy policy, which states that the company may collect information about recorded meetings that take place in video conferences, raises too many personal privacy concerns.

On a personal privacy level, Zoom gives the administrator of a conference call the ability to see when a caller has moved to another webpage for over 30 seconds. Many are calling this feature a violation of employee privacy.

Internet-rights advocates have urged Zoom to publish transparency reports detailing how it manages data privacy and security.

Is your Alexa listening to your work conversations?

Both Google Home and Amazon's Alexa have previously made headlines for listening in on homes without being called upon and saving conversation logs.

Last April, Bloomberg released a report on Amazon workers listening to and transcribing conversations heard through Alexa devices in people's homes. Bloomberg reported that most voice-assistant technologies rely on human help to improve the product, and that Amazon employees were not only listening to Alexa devices that users had never activated but also sharing what they heard with co-workers.

Amazon claims the recordings sent to "Alexa reviewers" are tagged only with an account number, not an address or full name that could identify a user. Even so, the notion of strangers hearing full, personal conversations is uncomfortable.

With the world sent home to work and over 100 million Alexa devices in American homes, there should be some concern about the degree to which these speakers are listening in on your work conversations.

Our advice during this work-from-home long haul? Review your online application privacy settings, and be cautious of what devices may be listening when you have important meetings or calls.



Banking and fraud detection; what is the solution?

As the year comes to a close, we must reflect on the most significant events in the world of privacy and data science, so that we can learn from the challenges and improve moving forward.

In the past year, the General Data Protection Regulation (GDPR) has had the most significant impact on data-driven businesses. The privacy law has transformed data analytics capabilities and inspired a wave of sweeping legislation worldwide: the CCPA in the United States, the LGPD in Brazil, and the PDPB in India. Not only has this regulation moved the needle on privacy management and prioritization, but it has also knocked major companies to the ground with harsh fines.

Since its implementation in 2018, €405,871,210 in fines have been actioned against violators, signalling that DPA supervisory authorities have no mercy in their pursuit of unethical and illegal business practices. This is only the beginning: the further we get into the data privacy era, the stricter regulatory authorities will become. With the next wave of laws taking effect on January 1, 2020, businesses can expect to feel pressure from all directions, not just the European Union.

 

The two most breached GDPR requirements are Article 5 and Article 32.

These articles place importance on maintaining data for only as long as is necessary and seek to ensure that businesses implement advanced measures to secure data. They also signal the business value of anonymization and pseudonymization. After all, once data has been anonymized (de-identified), it is no longer considered personal, and GDPR no longer applies.

Article 5 affirms that data shall be “kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed.”

Article 32 references the importance of “the pseudonymization and encryption of personal data.”
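
To make the Article 32 terminology concrete, here is a minimal sketch of pseudonymization: a direct identifier is replaced with a keyed token so records stay linkable for analytics without storing the identifier in the clear. The key and field below are hypothetical.

```python
# A minimal sketch of pseudonymization in the Article 32 sense.
# The secret key is hypothetical and must be stored apart from the data.
import hashlib
import hmac

SECRET_KEY = b"example-key-kept-outside-the-dataset"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

print(pseudonymize("alice@example.com"))  # same input always yields the same token
```

Note that pseudonymized data is still personal data under GDPR, since the key holder can re-link it; only proper anonymization takes data out of scope.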

The frequency of failures to comply with these articles signals the need for risk-aware anonymization. Businesses urgently need a data anonymization solution that optimizes privacy risk reduction and data value preservation, allowing them to measure the risk of their datasets, apply advanced anonymization techniques, and minimize the analytical value lost in the process.
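
As a much-simplified sketch of what "risk-aware" means in practice: score a dataset by the size of its smallest group of indistinguishable records, then generalize quasi-identifiers until the score clears a chosen threshold. The columns, records, and threshold below are all hypothetical.

```python
# A toy version of risk-aware anonymization: measure worst-case
# re-identification risk, then generalize until it clears a threshold.
import pandas as pd

QUASI_IDENTIFIERS = ["age", "postcode"]  # hypothetical quasi-identifier columns

def reidentification_risk(df: pd.DataFrame) -> float:
    """Worst case: 1 / size of the smallest group of indistinguishable rows."""
    return 1.0 / df.groupby(QUASI_IDENTIFIERS).size().min()

def generalize_age(df: pd.DataFrame, width: int) -> pd.DataFrame:
    """Coarsen exact ages into bands such as '30-34' to enlarge groups."""
    out = df.copy()
    low = out["age"] // width * width
    out["age"] = low.astype(str) + "-" + (low + width - 1).astype(str)
    return out

records = pd.DataFrame({
    "age":      [34, 36, 33, 38, 51, 52],
    "postcode": ["M5V", "M5V", "M5V", "M5V", "M4C", "M4C"],
})

RISK_THRESHOLD = 0.5  # accept at most a 50% chance of singling someone out
for width in (5, 10, 20):
    candidate = generalize_age(records, width)
    if reidentification_risk(candidate) <= RISK_THRESHOLD:
        break
# Here age bands of width 5 already suffice: every row shares its
# (age band, postcode) pair with at least one other row.
```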

If such a solution is implemented, data collection on EU citizens remains possible in the GDPR era, and businesses can continue to obtain insights without risking their reputation and revenue, all in a way that respects privacy.

Sadly, not everyone has gotten the message: nearly 130 fines have been actioned so far.

The top five regulatory fines

GDPR carries a weighty fine: 4% of a business's annual global turnover, or €20M, whichever is greater. A fine of this size could significantly derail a business, and paired with brand and reputational damage, it is evident that GDPR penalties should push businesses to rethink the way they handle data.
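
The "whichever is greater" rule is easy to state in code; the turnover figure below is invented for illustration.

```python
# The GDPR Article 83(5) maximum-fine rule described above.
def max_gdpr_fine_eur(annual_global_turnover_eur: float) -> float:
    """Greater of 4% of annual global turnover or a flat EUR 20M."""
    return max(0.04 * annual_global_turnover_eur, 20_000_000)

print(max_gdpr_fine_eur(12_000_000_000))  # EUR 12B turnover -> EUR 480M exposure
```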

1. €204.6M: British Airways

Article 32: Insufficient technical and organizational measures to ensure information security

User traffic was directed to a fraudulent site because of improper security measures, compromising the personal data of 500,000 customers.

2. €110.3M: Marriott International

Article 32: Insufficient technical and organizational measures to ensure information security

The records of 339 million guests were exposed in a data breach due to insufficient due diligence and a lack of adequate security measures.

3. €50M: Google

Articles 13, 14, 6, and 5: Insufficient legal basis for data processing

Google was found to have breached Articles 13, 14, 6, and 5 because it created user accounts during the configuration of Android phones without obtaining meaningful consent. It then processed this information without a legal basis, lacking transparency and providing insufficient information.

4. €18M: Austrian Post

Articles 5 and 6: Insufficient legal basis for data processing

Austrian Post created more than three million profiles of Austrians and resold their personal information to third parties, such as political parties. The data included home addresses, personal preferences, habits, and party affinity.

5. €14.5M: Deutsche Wohnen SE

Articles 5 and 25: Non-compliance with general data processing principles

Deutsche Wohnen stored tenant data in an archive system that could not delete information that was no longer necessary. This made it possible to access years-old sensitive information, such as tax records and health insurance details, for purposes beyond those described at the original point of collection.

Privacy laws like GDPR seek to prevent data controllers from accessing personally identifiable information without consent and from handling data in ways a subject is unaware of. If these fines teach us anything, it is that investing in technical and organizational measures is a must today. Many of these fines could have been avoided had businesses implemented Privacy by Design: privacy must be considered throughout the business cycle, from conception to consumer use.

Businesses cannot afford violations. With risk-aware privacy software, they can continue to analyze data while protecting privacy, backed by the guarantee of a privacy risk score.

Resolution idea for next year: Avoid ending up on this list in 2020 by adopting risk-aware anonymization.



Facial Recognition added to the list of privacy concerns

Personal data privacy is a growing concern across the globe. And while we focus on where our clicks and metadata end up, a new frontier of privacy invasion is emerging: facial recognition.

Unbeknownst to the average person, facial recognition and tracking have already infiltrated our lives in many ways and will only grow in relevance as the technology develops.

Companies like Clearview AI and Microsoft sit at two ends of the facial recognition spectrum, with competing technologies and legislation fighting to protect or expose personal information. Data privacy remains an issue elsewhere too, as products like Apple's Safari are revealed to be leaking the very information they are sworn to protect.

Clearview AI is threatening privacy as we know it.

Privacy concerns around facial recognition are growing, and for good reason.

Making big waves in facial recognition software is Clearview AI, a company that has built a facial search engine of over 3 billion photos. On Sunday, January 18th, the New York Times (NYT) published a scathing piece exposing the 2017 start-up, which until now had kept its operations under wraps while quietly partnering with 600 law enforcement agencies.

Submit a photo of one person to the Clearview software, and it spits out hundreds of pictures of that same person from all over the web. Not only are the images exposed, but so is information about where they were taken, which can lead to discovering mass amounts of data on one person.

For example, the software was able to find a murder suspect whose face appeared in the mirror reflection of another person's gym photo.

The company is facing serious questions about privacy risk. Not only are millions of people's faces stored in this software without their knowledge, but the chances of the software being used for unlawful purposes are incredibly high.

The NYT also reported that the software can pair with augmented-reality glasses: someone could walk down a busy street and identify every person they passed, along with their address, age, and more.

Many services, including Facebook and Twitter, prohibit scraping users' images; Clearview has violated those terms. When asked about the Facebook violation, the CEO, Mr. Ton-That, shrugged it off, saying everybody does it.

As mentioned, hundreds of police agencies in both the U.S. and Canada have allegedly been using Clearview's software to solve crimes since February 2019. However, a BuzzFeed article has revealed that Clearview's claim of helping to solve a 2019 subway terrorism case is false; the incident had been a selling point in the company's pitches to law enforcement across the U.S. The NYPD has stated that Clearview was not involved at all.

This company has introduced a dangerous tool into the world, and there seems to be no going back. While it has great potential to help solve serious criminal cases, the risk to citizens is astronomical.

Microsoft at the front of facial recognition protection

To combat privacy violations like those raised by Clearview AI, cities like San Francisco have recently banned facial recognition technologies, fearing privacy invasion.

Appearing before the Washington State Senate, two Microsoft employees backed two proposed bills supporting the regulation of facial recognition technologies. Rather than banning facial recognition, the bills would place restrictions and requirements on the technology's owners.

Despite Microsoft offering facial recognition as a service, its president, Brad Smith, called for regulation of facial recognition technologies back in 2018.

Last year, similar bills drafted by Microsoft made it through the Washington Senate but did not go forward after the House made changes that Microsoft opposed, including a requirement to certify that the technology worked for all skin tones and genders.

The first of the new Washington bills looks similar to the California Consumer Privacy Act, which Microsoft has stated it complies with. The bill would also require companies to inform customers when facial recognition is being used and would bar them from adding a person's face to their databases without direct consent.

The second bill was proposed by Joseph Nguyen, who is both a state senator and a program manager at Microsoft, and focuses on government use of facial recognition technology.

One section of the second bill would require law enforcement agencies to obtain a warrant before using the technology for surveillance. This requirement has drawn heat from some in law enforcement, who argue that people have no expectation of privacy in public and that demanding a warrant is therefore unnecessary.

Safari exposed as a danger to user privacy.

On the data-tracking front, Google's Information Security team has released a report detailing several security issues in the design of Apple's Safari Intelligent Tracking Prevention (ITP).

ITP is meant to protect users from cross-site tracking by preventing third-party-affiliated websites from receiving information that would allow them to identify the user. The report lists two of ITP's main functionalities:

  • Establishing an on-device list of prevalent domains based on the user’s web traffic
  • Applying privacy restrictions to cross-site requests to domains designated as prevalent

The report, written by Google researchers Artur Janc, Krzysztof Kotowicz, Lucas Weichselbaum, and Roberto Clapis, describes five attacks that exploit ITP's design:

  • Revealing domains on the ITP list
  • Identifying individual visited websites
  • Creating a persistent fingerprint via ITP pinning
  • Forcing a domain onto the ITP list
  • Cross-site search attacks using ITP

(Source)

Worse, the researchers caution that the workarounds suggested in the report "will not address the underlying problem."

The report's most striking takeaway is that in trying to address privacy issues, Apple's Safari created more significant ones.

As facial recognition continues to appear in our daily lives, recognizing and educating ourselves on the implications of these technologies is critical to how we move forward as an information-protected society.


 



The privacy authorities are calling. Is your call centre data GDPR and CCPA compliant?

Every time someone calls your call centre, the conversation is recorded and transcribed into free-text data, providing your business with a wealth of valuable data to derive insights from. The problem is that the way you use this data today may violate privacy regulations, putting you at risk of nine-figure fines and reputational damage.

Call centres often record and manage extremely sensitive data. For example, at a bank, a customer will provide their name, account number, and the answer to a security question (such as their mother’s maiden name). At a wealth management office, someone may call in and talk about their divorce proceedings. This information is not only incredibly personal, but using it for additional purposes without consent is against the law.

Data is transcribed for training purposes, but it is often repurposed: businesses rely on it for everything from upselling to avoiding customer churn, not to mention the revenue some earn from selling data.

But under GDPR, data cannot be used for additional purposes without the explicit consent of the data subject. To comply, a business must inform customers and ask permission for each and every use before performing data science and analytics on the transcripts.

Every time a business asks for permission, it risks deletion requests and denials of use that render the transcripts useless. This is because people do not want their data exposed, let alone used to monitor their behaviour.

However, this does not mean all your transcript data is null and void. Why? Because by anonymizing data, you can protect customer privacy and take the data out of scope of privacy regulations.

In other words, if you anonymize your call centre data, you can use the transcripts for any purpose.

However, anonymizing this kind of data is more complicated than applying traditional privacy protection methods like masking and tokenization. Audio transcripts are unstructured, so traditional anonymization methods render the data unusable.
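
To see why, consider what traditional masking looks like on free text. The minimal sketch below redacts obviously patterned identifiers; the pattern names are hypothetical, and note what it cannot catch: a spoken name, a mother's maiden name, or contextual details that still identify the caller.

```python
# A minimal sketch of traditional masking on a transcript. Regex patterns
# catch structured identifiers but miss names and contextual details,
# which is why unstructured text needs measured, risk-aware anonymization.
import re

PATTERNS = {
    "ACCOUNT_NUMBER": re.compile(r"\b\d{8,12}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def mask(transcript: str) -> str:
    """Replace structured identifiers with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        transcript = pattern.sub(f"[{label}]", transcript)
    return transcript

print(mask("My account number is 123456789 and my email is jo@example.com."))
# The caller's name, or their mother's maiden name, passes through untouched.
```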

If you use improperly anonymized transcript data for additional purposes without consent, you will be found in violation of GDPR, and your business can be fined up to 4% of its revenue. Mistaking partially protected data for anonymized data, or hoping manual approaches to de-identification will suffice, is not legally acceptable. Just ask Google how that turned out.

To avoid this, businesses must use systematic privacy assessments that quantify the re-identification risk of their data and automate privacy protection against a predetermined risk threshold. With this, businesses can be certain their transcripts are anonymized and can perform secondary analyses without risking GDPR non-compliance.

State-of-the-art technologies will also enable businesses to measure and reduce the impact of privacy protection on the analytical value of data.

Call centre transcripts are a rich source of customer data that can generate valuable business insights, but blindly using this information can cost your business millions. Use an advanced privacy protection solution to anonymize your transcripts while retaining their analytical value.



Is this a mirror world? Uber defends privacy and science centre exposes 174,000 names

Typically, we expect Uber to be on the wrong side of a privacy debacle. But this week, the company claims to be defending its users' privacy from the LA Department of Transportation. Meanwhile, the Ontario Science Centre has experienced a data breach that exposed the personal information of 174,000 individuals. Are the upcoming state-level privacy laws the answer to consumers' privacy concerns?

Uber claims LA’s data-tracking tool is a violation of state privacy laws.

The LA Department of Transportation (LADOT) wants to use Uber's dockless scooters and bikes to collect real-time trip data, but Uber has repeatedly refused, citing privacy concerns. The fight is coming to a head: on Monday, Uber threatened to file a lawsuit and seek a temporary restraining order (Source).

Last year, LADOT general manager Reynolds began developing a system to improve mobility in the city by enabling communication between the city and every form of transportation. In November, LADOT implemented a mobility data specification (MDS) program called Provider, which mandated that all dockless scooters and bikes operating in LA send their trip data to city headquarters.

Then a second piece of software, Agency, was developed to report and send alerts to companies about their micro-mobility devices, for example about an improperly parked scooter or an imminent street closure (Source).
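
For context, a single trip record shared under MDS looks roughly like the sketch below (fields simplified from the open MDS "Provider" schema; all values invented). Even with no rider name attached, precise coordinates and timestamps make recurring trips easy to link back to a person.

```python
# Roughly the shape of one trip record shared with the city under MDS.
# Field names are simplified from the open "Provider" schema; values invented.
trip = {
    "provider_name": "ExampleScooterCo",
    "device_id": "veh-4821",            # stable vehicle identifier
    "trip_id": "trip-7f2a",
    "start_time": 1572912000,           # Unix timestamps
    "end_time": 1572912900,
    "route": [                          # GPS points along the whole trip
        {"lat": 34.0522, "lng": -118.2437},
        {"lat": 34.0407, "lng": -118.2468},
    ],
}
# No rider name appears, yet a home-to-office route at a recurring hour is
# often enough to single out one person, which is the core of Uber's objection.
```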

This would give the city access to each and every trip consumers take. Yet, according to Reynolds, the data is essential to managing the effects of micro-mobility on the streets: "At LADOT, our job is to move people and goods as quickly and safely as possible, but we can only do that if we have a complete picture of what's on our streets and where." (Source)

Other cities across the country, impressed by the results, are looking to implement similar MDS solutions.

In reality, the protocols have Big Brother-like implications, and many privacy stakeholders seem to side with Uber, determining that LADOT's actions would in fact "constitute surveillance" (Source). This includes the EFF, which stated that "LADOT must start taking seriously the privacy of Los Angeles residents." What's more, in a letter to LA, the EFF wrote that "the MDS appears to violate the California Electronic Communications Privacy Act (CalECPA), which prohibits any government entity from compelling the production of electronic device information, including raw trip data generated by electronic bikes or scooters, from anyone other than the authorized possessor of the device without proper legal process." (Source)

While Uber's concerns appear valid, there is fear that LADOT will revoke its permit to operate because of its refusal to comply (Source). As of Tuesday, the company's permit was suspended, but with the lawsuit looming, the public can expect the courts to decide the legality of the situation (Source).

Ontario Science Centre data breach exposes 174,000 names

This week the Ontario Science Centre explained that on August 16, 2019, it was made aware of a data breach affecting 174,000 people. The breach was discovered by Campaigner, the third-party company that handles mailings, newsletters, and invitations for the OSC.

Between July 23 and August 7, "someone made a copy of the science centre's subscriber emails and names without authorization." (Source)

Upon further investigation, it was learned that the perpetrator used login credentials belonging to a former Campaigner employee to access the data. While no other personal information was stolen, the sheer number of consumers affected highlights the potential consequences of relying on trusted third parties.

Anyone whose data was compromised in this incident was alerted by the science centre and encouraged to ask further questions. In addition, Ontario Information and Privacy Commissioner Beamish was notified of the breach one day after the notices began going out to the public.

Moving forward, the Ontario Science Centre is "reviewing data security and retention policies" and working alongside Beamish to investigate the incident in full and ensure it is not repeated in the future (Source).

Will more states adopt privacy laws in 2020?

January 1, 2020, marks the implementation of the California Consumer Privacy Act (CCPA). The upcoming law has dominated media coverage, but more state-level privacy laws are expected soon, and they will reshape the privacy landscape in America. With a focus on consumer privacy and an increased risk of litigation, businesses are on the edge of their seats anticipating the states' actions.

Bills in New York, New Jersey, Massachusetts, Minnesota, and Pennsylvania will be debated in the next few months. However, given the challenge of mediating among all the stakeholders involved, several laws expected to pass this year are caught up in negotiations, and some have fallen flat, like those in Arizona, Florida, Kentucky, Mississippi, and Montana. On the other hand, a few states are commissioning studies that will evaluate current privacy laws, and where they should be updated or expanded, by digging into data breaches and Internet privacy (Source).

Meanwhile, big tech is lobbying for a federal privacy law in an attempt to supersede the state-level architecture (to learn more about this, read our blog).

Any way you look at it, more regulation is coming, and the shift in privacy values will create mass change in the United States and across the globe. It is more necessary than ever in a mirror world where Uber claims to be on a mission to protect user privacy and the science centre comes clean about a massive data breach. The question remains: are privacy laws the answer to the data-driven world? Perhaps 2020 will be the year businesses become more privacy-conscious.
