Facebook collecting healthcare data


As many of our previous blogs have highlighted, COVID-19 is severely impacting the tech world. Privacy regulations have been a hot topic for debate between governments, Big Tech, and users. 

Facebook has joined the top companies taking advantage of user data in COVID-19 research. Meanwhile, Brazil’s LGPD faces pushback on its enforcement because of COVID-19. In contrast to Brazil, US senators are introducing a new privacy bill to ensure Americans’ data privacy remains protected.

 

Facebook collecting symptom data

In the current pandemic climate, tech companies of all sizes have stepped up to provide solutions and aid to governments and citizens struggling to cope with COVID-19. As we’ve highlighted in our previous blog posts, Google and Apple have been at the frontlines of introducing systems that protect user privacy while changing how communities track the virus.

 Following closely behind, Facebook has introduced its attempt to work with user data for the greater good of COVID-19 research. 

Facebook announced its partnerships with several American universities to begin collecting symptom data in different countries. Facebook’s CEO and founder told The Verge that the information could help highlight COVID hotspots across the globe, especially in places where governments have neglected to address the virus’s severity.

Facebook has been working throughout this pandemic to demonstrate how aggregated and anonymized data can be used for good. 

However, not everyone is won over by Facebook’s sudden embrace of user data control. One article highlighted that the company is still being investigated by the FTC over privacy issues.

Facebook’s long list of privacy invasions is raising concerns not over how the data is currently being used, but over how it will be handled after the pandemic has subsided. 

 

Brazil pushes back privacy legislation.

At the beginning of this year, we wrote an article outlining Brazil’s first data protection act, the LGPD. This privacy legislation closely follows the EU’s GDPR and will unify the country’s 40 existing privacy laws. 

Before COVID-19’s effect on countries like Brazil, many tech companies were already pressuring the Brazilian government to change the LGPD’s effective date.

On April 29th, the Brazilian president delayed the applicability date of the LGPD to May 3rd, 2021. By issuing this provisional measure, the Brazilian Congress was given 16 days to approve the new LGPD implementation. 

If Congress does not approve the new date by May 15th, it must vote on a new LGPD date; if it does not, the LGPD will come into effect on August 14th, 2020. 

Brazil’s senate has since voted to move the law’s introduction to January 2021, with sanctions coming into force in August 2021. This means lawsuits and complaints can be filed as of January 1st, and enforcement actions can be taken as of August 1st (source).

 

America introduces new privacy law.

Much like Brazil’s privacy legislation being affected by COVID-19, some US senators have stepped up to ensure the privacy of American citizens’ data.

The few senators proposing this bill have said they are working to “hold businesses accountable to consumers if they use personal data to fight the COVID-19 pandemic.”

This bill does not target contact tracing apps like those proposed by Apple and Google. However, it does ensure that these companies are effectively using data and protecting it. 

The bill requires companies to gain consent from users in order to collect any health or location data. It also forces companies to ensure that the information they collect is properly anonymized and cannot be re-identified. Finally, the bill requires these tech companies to delete all identifiable information once COVID-19 has subsided and tracking apps are no longer necessary. 

The bill has wide acceptance across the congressional floor and will be enforced by state attorneys general. This privacy bill is being considered a big win for Americans’ privacy rights, especially given past privacy trust issues between big tech companies and their users. 

Location data and your privacy


As technology grows to surround the entirety of our lives, it comes as no surprise that our every move is tracked and stored by the very apps we trust with our information. With the current COVID-19 pandemic, the consequences of inviting big tech into our every movement are being revealed. 

At this point, most technology users understand the information they knowingly give to companies, such as their birthdays, access to pictures, or other sensitive information. However, many may be unaware of the amount of location data that companies collect and how that affects their data privacy. 

Location data volume expected to grow

We have created over 90% of the world’s data since 2017. As wearable technology continues to grow in popularity, the amount of data a person creates each day is on a steady incline. 

One study reported that by 2025, the number of IoT-enabled devices installed worldwide is expected to hit 75 billion. This astronomical number highlights how intertwined technology is with our lives, but also how welcoming we are of technology whose data collection practices many people remain unaware of. 

Marketers, companies and advertisers will increasingly look to use location-based information as its volume grows. A recent study found that more than 84% of marketers use location data in their marketing efforts. 

The last few years have seen a boost in big tech companies giving their users more control over how their data is used. One example is in 2019 when Apple introduced pop-ups to remind users when apps are using their location data.

Location data is saved and stored for the benefit of companies to easily direct personalized ads and products to your viewing. Understanding what your devices collect from you, and how to eliminate data sharing on your devices is crucial as we move forward in the technological age. 

Click here to read our past article on location data in the form of wearable devices. 

COVID-19 threatens location privacy

Risking the privacy of thousands of people or saving thousands of lives seems to be the question throughout this pandemic; a question that is running out of time for debate. Companies across the big 100 have stepped up to volunteer their anonymized data, including SAS, Google and Apple. 

One of the largest concerns is not how this data is being used in this pandemic, but how it could be abused in the future. 

One Forbes article brought up a comparison of the regret many are faced with after sharing DNA with sites like 23andMe, leading to health insurance issues or run-ins with criminal activity. 

As companies like Google, Apple and Facebook step up to the COVID-19 technology race, many are expressing concerns, as these companies have not proven reliable at anonymizing user data. 

In addition to the data-collecting concern, governments and big tech companies are looking into contact-tracing applications. Civilian location data being used for surveillance purposes, even when presented as being for the greater good of health and safety, raises multiple red flags about how our phones can be used to surveil our every movement. To read more about this involvement in contact tracing apps, read our latest article.

Each company has stated that it anonymizes its collected data. However, in this pandemic age, anonymized information can still be exploited, especially at the hands of government intervention. 

With all this said, big tech holds power over our information and is playing a vital role in the COVID-19 response. Paying close attention to how user data is managed post-pandemic will be valuable in exposing how these companies handle user information.

 

Google and Apple to lead data privacy in the global pandemic


What happens to privacy in a global pandemic? This question continues to be debated as countries like Canada, the United States and the United Kingdom move into what is assumed to be the peak of COVID-19’s spread within their borders. 

The world watched as countries like South Korea and China introduced grave measures to track their citizens, essentially stripping their privacy rights. But as numbers continue to rise in the western world, governments are looking to implement similar tracking technologies on their own citizens’ devices. 

At the frontline of tracing-app development is the U.K.’s National Health Service health technology development unit (NHSX). The U.K.’s contact-tracing app would track COVID-19 positive patients and alert the people they had been in contact with. 

However, prior to the app’s launch, big tech companies Google and Apple released their joint contact-tracing system, limiting invasive apps on their devices and thereby derailing the app’s development. 

Google and Apple have announced that they are not releasing an app themselves, but instead a set of “privacy-focused APIs” to ensure that governments are not releasing invasive apps onto their citizens’ devices. 

Countries like Singapore that already run contact tracing apps on phones have hit problems that Google and Apple are looking to avoid, including requiring citizens to leave their phones unlocked and severe battery drain. 

Google and Apple have stated that their Bluetooth-based system will run in the background and work even when the phone is locked. They have also said that the system will cease to run once the pandemic is over. 
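The rotating-identifier approach behind such Bluetooth systems can be illustrated with a deliberately simplified sketch. This is not the actual Apple/Google protocol (which specifies its own key derivation and encryption); the function names and interval scheme below are hypothetical, but the privacy idea is the same: phones broadcast short-lived random identifiers, and only the daily key of a user who tests positive is ever shared.

```python
import hashlib
import os

def daily_key() -> bytes:
    """A fresh random key generated on-device each day; it never leaves
    the phone unless the user tests positive and chooses to share it."""
    return os.urandom(16)

def rolling_id(key: bytes, interval: int) -> bytes:
    """Derive a short-lived broadcast identifier from the daily key.
    Identifiers rotate each interval so devices can't be tracked over time."""
    return hashlib.sha256(key + interval.to_bytes(4, "big")).digest()[:16]

# Alice's phone broadcasts rotating IDs; Bob's phone stores the IDs it hears.
alice_key = daily_key()
heard_by_bob = {rolling_id(alice_key, i) for i in range(96)}  # ~15-min intervals

# If Alice tests positive, she uploads only her daily key. Bob re-derives her
# identifiers locally and checks for a match -- no location data is involved.
derived = {rolling_id(alice_key, i) for i in range(96)}
print(bool(heard_by_bob & derived))  # True -> Bob is alerted to possible exposure
```

Because the matching happens entirely on-device, neither the platform vendors nor a health authority learns who met whom.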

The two big tech companies have set a high standard for privacy in the pandemic age. They will have to grant permission not only for the government applications to go live, but also for health authorities to access the technology (source). They have also said that they are developing policies on whether tracing apps will be allowed to gather location data. 

One Oxford University researcher said that around two-thirds of a country’s population would need to participate for contact tracing to be effective. However, the top U.S. infectious disease expert says that many Americans would be inclined to reject any contact-tracing app that knowingly collects their location data.

The idea behind the Google/Apple partnership is to ensure governments are not forcing highly invasive technologies onto their citizens, and that while the world is engulfed in chaos, personal privacy remains as intact as possible.

The NHSX has continued with its app development. However, it is alleged that they are in close contact with the Apple/Google partnership. The European Commission told one reporter that “mobile apps should be based on anonymized data and work with other apps in E.U. countries.” 

As the world struggles to contain the virus’s spread, apps and systems such as the Google/Apple partnership could have a great effect on how COVID-19 is managed. Going forward, it’s important to pay attention not only to how our data is being managed, but also to how our anonymized data can help save others.

 

Data sharing in a global pandemic


As the world continues to brace for the impact of COVID-19, data privacy remains a central concern in the development of working from home and containing the spread. Last week, Google released an anonymized dataset of location data outlining hotspots of group gatherings. In addition, UK officials are looking to introduce contact tracing applications. For companies containing the spread within their offices, Zoom has risen as a trusted video conference app. However, in the last few weeks, serious privacy concerns have mounted.  

Google releases location data.

On Friday, Google released its COVID-19 Community Mobility Reports. These reports are a collection of data from users who have opted in to sharing their location history with the search giant. This location history comes from Google Maps, where the data is aggregated and anonymized. 

Google says that by releasing this data, public health officials are able to determine which businesses are most crowded. This helps inform large-scale decisions such as curfews, stay-at-home orders, or which businesses are essential enough to remain open. 

The reports are open for public viewing, opening up the data of 131 countries, with certain countries displaying regional data such as provinces or states. After selecting the country, Google creates a PDF file with the data for download. 

Each PDF report contains six categories of location data. These include: 

  • Retail and recreation (restaurants, shopping centers, libraries, etc.)
  • Grocery and pharmacy (supermarkets, drug stores)
  • Parks (beaches, dog parks)
  • Transit stations (subways, bus and train stations)
  • Offices 
  • Residences 

(source)
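The published reports express each category as a percentage change in visits relative to a pre-pandemic baseline. A rough sketch of that style of aggregation, using made-up visit counts rather than Google’s actual pipeline:

```python
# Hypothetical daily visit counts per category (illustrative, not real data).
baseline = {"retail": 1200, "grocery": 800, "parks": 300, "transit": 950}
current  = {"retail":  480, "grocery": 720, "parks": 420, "transit": 310}

def percent_change(base: int, now: int) -> int:
    """Change in visits relative to the baseline period, as a percentage."""
    return round((now - base) / base * 100)

report = {cat: percent_change(baseline[cat], current[cat]) for cat in baseline}
print(report)  # {'retail': -60, 'grocery': -10, 'parks': 40, 'transit': -67}
```

Publishing only these aggregate deltas, rather than individual location traces, is what lets the reports inform policy without exposing any one user’s movements.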

Creating these reports comes after weeks of requests from public health officials asking for applications to test a person’s contact with an infected patient. While Google’s data is unable to determine that, these datasets may help cities or countries determine preventive measures. 

Other countries have used similar but more aggressive technology built on location data. At the beginning of March, we released an article about Korea’s efforts to stop the spread by using people’s locations to track whether they leave their houses. 

Another news article revealed that Taiwan is also using location data to track its citizens, going as far as calling phones twice a day to ensure citizens are not simply leaving their houses without their phones. 

Google stated that its data will cover the past 48-72 hours, and has yet to determine how often the data will be updated.

Contact Tracing Apps

Similar to the data released by Google, there is more pressure on governments to introduce contact tracing apps like the ones seen in Korea or Taiwan.

In the UK, researchers have begun compiling papers to discuss how privacy can be handled and mishandled in these tracing apps.

One researcher, Dr. Yves-Alexandre de Montjoye, created a whitepaper outlining eight questions for understanding how privacy is protected in these types of apps. 

These eight questions include: 

  • How do you limit personal data gathered by the app developers?
  • How do you protect the anonymity of every user? 
  • Does the app reveal to its developers the identity of users who are at risk? 
  • Could the app be used by users to learn who is infected or at risk, even in their social circle? 
  • Does the app allow users to learn any personal information about other users? 
  • Could external parties exploit the app to track users or find out who’s infected? 
  • Do you put in place additional measures to protect the personal data of infected and at-risk users?
  • How can users verify that the system does what it says? 

(source)

As governments move quickly to contain the spread of COVID-19, measures like contact tracing apps are being seriously considered for introduction. However, patient privacy should not disappear. As the world braces for an even more significant influx of COVID-19 cases, it is in the hands of government officials and big tech to work together to contain the spread while maintaining data privacy.

Zoom faced with data sharing lawsuit

A few weeks ago, we released an article outlining small privacy concerns about introducing Zoom into the work-from-home environment. Since that article, Zoom has grown not only in popularity but in significant privacy concerns as well. 

On March 26, Vice News released an article detailing Zoom’s relationship with Facebook. 

Vice reported that Zoom sends analytics data to Facebook, even when a Zoom user doesn’t have a Facebook account. While this type of data transfer is not uncommon between companies using the Facebook SDK, Zoom’s own privacy policy makes no mention of this data sharing.

The article reports that Zoom notifies Facebook when a user opens its app, along with specific identifying details that companies can use to target users with advertisements. 

Zoom responded to the report stating it would remove the Facebook SDK. However, it wasn’t long before the video-conferencing company was hit with a lawsuit. 

But the privacy concerns don’t just come from data sharing. The past few weeks have seen numerous reports of account hacking, allegedly missing end-to-end encryption, password stealing, leaks, and microphone/camera hijacking.

And these claims are only just starting to roll in. As Zoom rises to become one of the most-used video-conferencing technologies during this work-from-home burst, the next few weeks could see thousands of data privacy violations.

4 techniques for data science


With growing tension between privacy and analytics, the job of data scientists and data architects has become more complicated. The responsibility of data professionals is not just to maximize the value of the data, but to find ways in which data can be privacy protected while preserving its analytical value.

The reality today is that regulations like GDPR and CCPA have disrupted the way in which data flows through organizations. Now data is being siloed and protected using techniques that are not suited for the data-driven enterprise. Data professionals are left with long processes to access the information they need and, in many cases, the data they receive has no analytical value after it has been protected. 

This emphasizes the importance of using adequate privacy protection tactics to ensure that personally identifiable information (PII) is accessible in a privacy-protected manner and that it can be used for analytics.

To satisfy GDPR and CCPA, organizations can choose between three options: pseudonymization, anonymization, and consent. 

Pseudonymization is replacing direct identifiers, like names or emails, with pseudonyms to protect the privacy of the individual. However, this process is still within the scope of the privacy regulations, and the risk of re-identification remains very high.
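A minimal sketch of what pseudonymization can look like in practice, using a keyed hash so the same input always maps to the same pseudonym. The field names and key handling are illustrative, not a production design:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me-regularly"  # hypothetical key; keep in a secrets manager

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a consistent keyed pseudonym."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Alice Smith", "email": "alice@example.com", "zip": "90210"}
pseudonymized = {
    "name": pseudonymize(record["name"]),
    "email": pseudonymize(record["email"]),
    "zip": record["zip"],  # quasi-identifier left intact -- this is exactly why
                           # re-identification risk remains high
}
```

Because quasi-identifiers like the zip code survive untouched, pseudonymized data stays within the scope of GDPR and CCPA.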

Anonymization, on the other hand, looks at both direct identifiers and quasi-identifiers and transforms the data so that it falls out of scope of privacy regulations while remaining usable for analytics. 

Consent requires organizations to ask customers for permission to use their data, which opens up the opportunity for opt-outs. If the usage of the data changes, as it often does in an analytics environment, consent may very well be required again each time.

There are four main techniques that can help data professionals with privacy protection. All of them have different impacts on both privacy protection and data quality. These are: 

Masking: A de-identification technique that focuses on the redaction or transformation of information within a dataset to prevent exposure. 
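A small illustrative example of masking, redacting parts of an email address and a phone number while keeping enough structure for the data to stay useful (the formats and rules here are assumptions):

```python
import re

def mask_email(email: str) -> str:
    """Keep the first character and the domain; redact the rest of the local part."""
    local, _, domain = email.partition("@")
    return local[0] + "*" * (len(local) - 1) + "@" + domain

def mask_phone(phone: str) -> str:
    """Redact all but the last four digits."""
    digits = re.sub(r"\D", "", phone)
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_email("alice@example.com"))  # a****@example.com
print(mask_phone("416-555-1234"))       # ******1234
```

Masking prevents casual exposure, but as the following techniques show, stronger guarantees are needed when the data must also resist deliberate re-identification.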

K-anonymity: This privacy model ensures that each individual is indistinguishable from at least k-1 other individuals based on their attributes in a dataset.
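A quick sketch of how k-anonymity can be measured on a toy dataset: the k value is the size of the smallest group of records that share the same quasi-identifier values. The generalized age ranges and zip prefixes below are illustrative:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return k: the size of the smallest equivalence class over the
    quasi-identifier columns."""
    classes = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(classes.values())

rows = [
    {"age": "30-40", "zip": "902**", "diagnosis": "flu"},
    {"age": "30-40", "zip": "902**", "diagnosis": "cold"},
    {"age": "20-30", "zip": "100**", "diagnosis": "flu"},
    {"age": "20-30", "zip": "100**", "diagnosis": "covid"},
]
print(k_anonymity(rows, ["age", "zip"]))  # 2 -> each person blends with 1 other
```

In practice, ages and zip codes are generalized (as in the ranges above) until every equivalence class reaches the desired k.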

Differential Privacy: A technique applied to an algorithm that mathematically guarantees the output barely changes whether or not any single individual is in the dataset. It is achieved through the addition of noise to the algorithm. 
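A minimal sketch of the Laplace mechanism for a counting query, the textbook way to achieve differential privacy. The epsilon value and query are illustrative; a counting query has sensitivity 1, so the noise is drawn from a Laplace distribution with scale 1/epsilon:

```python
import random

def dp_count(values, predicate, epsilon=1.0):
    """Release a count with Laplace noise of scale 1/epsilon.
    A count changes by at most 1 when one person is added or removed."""
    true_count = sum(1 for v in values if predicate(v))
    # The difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

ages = [25, 34, 41, 29, 52]
noisy = dp_count(ages, lambda a: a >= 30)  # true answer is 3, plus noise
```

Smaller epsilon means more noise and stronger privacy; the analyst trades a little accuracy for a mathematical guarantee about each individual.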

Secure Multi-Party Computation: This is a cryptographic technique where a group of parties can compute a function over their inputs while keeping their inputs private.
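A toy sketch of secure multi-party computation via additive secret sharing: three parties learn the sum of their private inputs without any party seeing another's value. Real protocols add integrity checks and secure channels that this sketch omits:

```python
import random

MOD = 2**61 - 1  # large modulus; individual shares look like random numbers

def share(secret, n_parties=3):
    """Split a secret into n additive shares that sum to it mod MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

# Three parties jointly compute the sum of their private inputs.
inputs = [52_000, 61_000, 48_000]          # each party's private value
all_shares = [share(x) for x in inputs]    # each party shares its input
# Party i sums the i-th share of every input -- it sees only random-looking values.
partial_sums = [sum(col) % MOD for col in zip(*all_shares)]
total = reconstruct(partial_sums)
print(total)  # 161000
```

Each party holds one share of every input, so no single party (or any group smaller than all of them) can recover anyone's private value.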

Keep your eyes peeled in the next few weeks for our whitepaper, which will explore these four techniques in further detail.

IoT and everyday life; how interconnected are we?


The Internet of Things (IoT) is a term spanning a variety of ‘smart’ applications, ranging from smart fridges to smart cities. The idea of ‘smart’, or IoT, is the connectedness between everything and the internet. 

It’s hard to grasp the amount of data one person creates each day, let alone to understand where IoT fits into that. And with this new era of ‘smart’ everything, that understanding is pushed even further out of reach. 

To understand just how much our smart technologies follow our everyday behaviours, let’s focus on only one person’s use of a smartwatch. 

But first, what are the implications of a smartwatch? This wearable technology gained popularity starting in 2012, giving users the ability to track their health and set fitness goals at the tap of their wrist. Since then, smartwatches have infiltrated all sorts of markets, letting users pay, take phone calls, or update a Facebook status from their wrist.

The technology in our lives has become so interconnected that de-identifying our data on a grand scale, while achievable, is complicated. Take the smartwatch: our unique footprint, recreated each day, is logged and monitored through the small screen on our wrist. While the data created is anonymized to an extent, it’s not sufficient.

But why not? After all, technology has moved mountains in the last decade. To better understand this connectedness of our data, let’s follow one person’s day through the point of view of just their smartwatch. 

Imagine Tom is a 30-year-old man in excellent health who, like the rest of us, follows a pretty general routine during his workweek. Outside of the many technologies that collect Tom’s data, what might just his smartwatch collect? 

Let’s take a look. 

Every morning, Tom’s smartwatch alerts him at 7:30 am to wake up and start his day. After a few days of logging Tom’s breathing patterns and heart rate, and monitoring his previous alarm settings, Tom’s smartwatch has learned the average time Tom should be awake and alerts Tom to set a 7:30 alarm each night before bed. 

Before ever having to tell his watch which time he gets up in the morning, his watch already knows. 

Similar to his smartwatch’s alarm system, the watch knows and labels the locations of six specific places where Tom spends most of his time during the week. Tom didn’t have to tell his watch where he was and why; based on the hours of the day Tom spends at each location, combined with his sleeping patterns and other movements, his watch already knows. 

Not only are these places determined from his geographical location, but from the other information his watch creates. 

When Tom is at the gym, his elevated heart rate and burned calories are logged. When Tom goes to his local grocery store or coffee shop, he uses his smartwatch to pay. At his workplace, Tom’s watch records the amount of time spent at the location, and is able to determine that the two main places Tom spends his time are his home and his work. 

Based on a collection of spatial-temporal data, transactional data, health data and repeated behaviour, it is easy to create a very accurate picture of who Tom is.

Let’s keep in mind that this is all created without Tom having to explicitly tell his smartwatch where he is or what he is doing at each minute. Tom’s smartwatch operates on learned behaviours based on the unique pattern Tom creates each day.

This small peek into Tom’s life, according to his watch, isn’t even much of a “peek” at all. We could analyze the data retained by his smartwatch with each purchase, each change of location, or only the data pertaining to his health. 

This technology is seen in our cars, fridges, phones and TVs. Thus, understanding how just one device collects and understands so much about your person is critical to how we interact with these technologies. What’s essential to understand next is how this data is dealt with, protected and shared. 

The more advanced our technology gets, the easier it is to identify a person based on the data the technology collects. It’s more important than ever to understand the impacts of our technology use, what data of ours is being collected, and where it is going. 

At CryptoNumerics we have been developing a solution that can de-identify this data without destroying its analytical value. 

If your company has transactional and/or spatio-temporal data that needs to be privacy-protected, contact us to learn more about our solution.