FaceApp and Facebook: Under the Magnifying Glass

FaceApp is Under Heavy Scrutiny After Making a Comeback

The U.S. government has aired its concerns about the privacy risks of FaceApp, the trending face-editing photo app. With the 2020 presidential election campaigns underway, the FBI and the Federal Trade Commission are conducting a national security and privacy investigation into the app.

The fine print of the app’s terms of use and privacy policy is rather shocking, according to information security expert Nick Tella. It states that as a user, you “grant FaceApp a perpetual, irrevocable, non-exclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels now known or later developed, without compensation to you”.

Social media experts and journalists don’t deny that if users are downloading the app, they are willingly handing over their data because of the above terms of use. However, government bodies and other institutions are aiming to make regulations stronger and ensure data protection is effectively enforced. 

On the other side, FaceApp has denied any accusations of selling or misusing user data. In a statement cited by TechCrunch, the company said that “99% of users don’t log in; therefore, we don’t have access to any data that could identify a person”. It also assured the public that it deletes ‘most images’ from its servers within 48 hours of upload. Furthermore, the company added that its research and development team is its only team based in Russia and that its servers are in the U.S.

With everything going on in the world around privacy and the misuse of user data, we must ask ourselves: should we think twice before trusting apps like FaceApp?

Facebook to Pay $5 Billion USD in Fines

On Friday, July 12th, the FTC and Facebook finalized a $5 billion USD settlement to resolve last year’s Cambridge Analytica data misuse. Unfortunately, concerns remain over whether Facebook will change any of its privacy policies or data practices after paying this fine. “None of the conditions in the settlement will impose strict limitations on Facebook’s ability to collect and share data with third parties,” according to the New York Times.

Although the FTC has approved this settlement, it still needs to be approved by the Justice Department, which rarely rejects agreements reached by the FTC.

Join our newsletter

How Google Can Solve its Privacy Problems

Google and the University of Chicago Medical Center have made headlines for the wrong reasons. According to a June 26th New York Times report, a lawsuit filed in the U.S. District Court for the Northern District of Illinois alleges that a data-sharing partnership between the University of Chicago Medical Center and Google “shared too much personal information” without appropriate consent. Though the datasets had ostensibly been anonymized, the potential for re-identification was too high, compromising the privacy rights of the individual named in the lawsuit.

The project was touted as a way to improve prediction in medicine and realize the utility of electronic health records through data science. Its coverage today instead focuses on risks to patients and invasions of privacy. Across industries like finance, retail, and telecom, the same potential for positive impact through data science exists, as does the potential for exposure risk to consumers. The value at stake through data science is such that institutions must figure out how to address privacy concerns.

No one wants their medical records and sensitive information to be exposed. Yet they do want research to progress, and to benefit from innovation. That is the dilemma faced by individuals today. People are okay with their data being used in medical research, so long as their data is protected, and cannot be used to re-identify them. So where did the University of Chicago go wrong in sharing data with Google — and was it a case of negligence, ignorance, or a lack of investment?

The lawsuit claims that the data shared between the two parties was still susceptible to re-identification through inference attacks and the mosaic effect. Though the datasets had been stripped of direct identifiers and anonymized, they still contained date stamps of when patients checked in and out of the hospital. When combined with other data that Google held separately, like location data from phones and mapping apps, these stamps could be used to re-identify individuals in the dataset. Free-text medical notes from doctors, though de-identified in some fashion, were also included, further compounding the exposure of private information.

Inference attacks and mosaic-effect methods combine information from different datasets to re-identify individuals. They are now well-documented realities that institutions cannot be excused for ignoring. Indirect identifiers must therefore also be assessed for re-identification risk and included when considering privacy protection. What most people are unaware of is that indirect identifiers can be protected without decimating the analytical value of the data required for data science, analytics, and ML.
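
To make the mosaic effect concrete, here is a minimal, hypothetical sketch (the record IDs, names, and timestamps are invented for illustration) of how an “anonymized” hospital dataset can be linked to auxiliary location data using nothing more than date stamps:

```python
from datetime import datetime

# Hypothetical "anonymized" hospital records: direct identifiers removed,
# but check-in timestamps (quasi-identifiers) remain.
hospital_records = [
    {"record_id": "R1", "check_in": datetime(2019, 3, 4, 9, 15)},
    {"record_id": "R2", "check_in": datetime(2019, 3, 5, 14, 0)},
]

# Auxiliary data an attacker might hold separately, e.g. phone location
# pings placing named individuals at the hospital at specific times.
location_pings = [
    {"name": "Alice", "at_hospital": datetime(2019, 3, 4, 9, 15)},
    {"name": "Bob", "at_hospital": datetime(2019, 3, 5, 14, 0)},
]

def reidentify(records, pings):
    """Link the two datasets on the shared timestamp quasi-identifier."""
    return {
        rec["record_id"]: ping["name"]
        for rec in records
        for ping in pings
        if rec["check_in"] == ping["at_hospital"]
    }

print(reidentify(hospital_records, location_pings))
# {'R1': 'Alice', 'R2': 'Bob'}
```

Because a precise check-in time is nearly unique to one person, each timestamp acts as a fingerprint; no name or address in the hospital data is needed for the linkage to succeed.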

Significant advancements in data science have led to improvements in data privacy technologies and controls for data collaboration. Autonomous, systematic metadata classification and re-identification risk assessment and scoring are two that would have made an immediate difference in this case. Differential Privacy and Secure Multiparty Computation are two others.
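
As a rough illustration of the first of these two techniques, here is a minimal sketch (not any particular vendor’s implementation) of the classic Laplace mechanism behind Differential Privacy: a count is released with calibrated random noise so that any single individual’s presence or absence changes the output distribution only slightly.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.uniform(-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count, epsilon):
    """Release a count under epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so the noise scale is 1 / epsilon.
    """
    return true_count + laplace_noise(1.0 / epsilon)

# Smaller epsilon = more noise = stronger privacy guarantee.
noisy = dp_count(1000, epsilon=0.5)
```

The design trade-off is explicit: epsilon tunes how much analytical accuracy is sacrificed for how much privacy, which is exactly the kind of measurable control the University of Chicago dataset lacked.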

Privacy automation systems encompassing these technologies are a reality today. Privacy management is often seen as an additional overhead cost to data science projects. That is a mistake. Tactical use of data security solutions like encryption and hashing to privacy-protect datasets is also not enough, as attested to by this case involving Google and the University of Chicago Medical Center.

As we saw with cybersecurity over the last decade, it took several years of continued data theft and hacks making headlines before organizations implemented advanced cybersecurity and intrusion detection systems. Cybersecurity solutions are now seen as an essential component of an enterprise’s infrastructure, with a board-level commitment to keep company data safe and the brand untarnished. Boards must reflect on the negative outcomes of lawsuits like this one, where the identities of customers are compromised and their trust damaged.

Today’s data science projects, without advanced automated privacy protection solutions, should not pass internal privacy governance and data compliance. Additionally, these projects should not use customer data, even anonymized data, until automated privacy risk assessment solutions can accurately reveal the level of re-identification risk (inclusive of inference attacks and the mosaic effect).

With the sensitivity around privacy in data science projects in our public discourse today, any enterprise not investing in and implementing advanced privacy management systems exposes itself as having no regard for the ethical use of customer data. The potential for harm is not a matter of if, but when.

Do You Know What Your Data is Worth?

Your data is more than your name, age, gender, and address. Your Google searches, tweets, comments, time spent on videos and posts, purchase behaviours, smart home assistant commands and much more is also your data.

There is a new bill in the U.S. Senate that hopes to force technology companies to disclose to each of their users the actual value of their data. While this proposed law seeks to further protect individuals’ privacy, evaluating the exact value of someone’s data is more difficult than it seems. Current evaluations range from $1.00 USD for an average person’s data to $100.00 USD for someone with an active social media presence.

Data sensitivity doesn’t just come from the data itself, but also from how companies and agencies can use the data to exert influence. Shoshana Zuboff, author of The Support Economy: Why Corporations Are Failing Individuals, expands on this, claiming that tech giants like Google and Facebook practice surveillance capitalism with the intention of shaping consumer behaviour toward a more profitable future.

The truth is, datafication, the process of transforming a business into a data-driven enterprise, doesn’t affect everyone equally. Women, minorities, and people with low incomes are affected much more than the rest. Thus, the new proposed bill aims to address these concerns.

Three Tips to Maintain Data Privacy for Marketers

A large chunk of a digital marketer’s time is spent understanding and working with consumer data. Marketers analyze consumer data daily, from click-through rates to unsubscribe rates. The more data they have, the more powerful their personalization efforts become: from relevant product recommendations to a consumer’s preferred communication method.

Additionally, marketers must know and comply with privacy regulations, such as GDPR, HIPAA, and CCPA. Here are three tips you can use to prepare for new privacy regulations without sacrificing your digital marketing efforts:

  • Conduct regular data reviews to make sure company policies are up to date
  • Know how the data is collected, used, analyzed and shared
  • Use technology that allows you to gain insights from data while protecting people’s privacy

Data Sharing in the Healthcare Field

When it comes to the use of healthcare data, many ethical questions arise. Who is responsible for the safety of health data? Who owns co-produced clinical trial data?  

“We owe it to patients participating in research to make the data they help generate widely and responsibly available. The majority desire data sharing, viewing it as a natural extension of their commitment to advance scientific research” (Source).

Researchers can develop new cures and speed up innovation through data sharing. However, data is not easily shared, especially in the healthcare field. To address this problem, researchers from universities such as Yale and Stanford are creating a set of good data-sharing practices for both healthcare facilities and pharmaceutical companies. They have also partnered with key stakeholders and end users to ensure a well-rounded approach to their guidelines.


Where is Your Data Coming From?

Where is your data coming from and how is it being used? As an employer, learn how you can make sure your employees’ and candidates’ data is safe.

Meaningful data helps companies reach a higher number of potential customers and improve customer retention. However, since not all data is reliable, it is crucial to know where the data comes from.

There are many ways in which data can be gathered and used to enhance marketing.

Zero-party data is collected directly from the source and it is freely given to support brand experience, not sales.

First-party data is also collected directly from the consumer through transactions, social media interactions, or website traffic reports.

Second-party data is another company’s first-party data acquired to complement first-party data.

Third-party data is data gathered from various sources used to attain depth and scale. This data tends to be outdated. 

All of the above can significantly improve marketing strategies and therefore, business outcomes. However, ensuring privacy and ethical use of data should come first when working with personal information.

Employers: Stand Behind Your Employees’ Privacy Rights

Employers collect data from their employees, who give it away in hopes of better productivity, benefits or well-being. 

A survey of almost 1,500 C-suite executives shows that only 30% feel they are using their employees’ data responsibly. It is even worse for job applicants, who are willing to share some of their most private information, such as previous salary and social security number, in hopes of securing an interview.

It doesn’t have to be this way. Here are some ways you can ensure your employees have explicit control over their personal information: 

  • Provide a clear and brief consent notice and privacy policy on your career page
  • Bring candidates straight to your career page, instead of through third-party job boards
  • Use your influence to encourage job boards to disclose where they sent candidates’ data and how it is used
  • Implement access controls

Would You Trade Your Social Media for More Privacy?

A recent Kaspersky report shows that about 40% of consumers would indeed give up their social media accounts in exchange for a guarantee that their data would remain private forever.

With social media taking up such a huge chunk of our time and lives, it is hard to believe that someone could give it up. But what if giving it up meant your data remained private from now on? Unfortunately, even giving up social media is not enough; protecting privacy is an ongoing process, not a one-time deal.

Here’s how Kaspersky recommends keeping your digital privacy safe:

  • Regularly change your access controls, passwords, and privacy settings
  • Do not download or open suspicious files
  • Start using reliable software and security protocols, such as CN-Protect, to reduce the risk of privacy violations

The Safety of Healthcare Data is a Top Priority

There is no doubt that medical data and healthcare records are highly sensitive. However, recent events have shone a light on this data not being secure enough. How can we reduce privacy risk while still allowing researchers to benefit us all with our medical data?

Pressure builds to secure health care data

Due to recent healthcare data breaches, there has been a strong push for the US federal government to increase protections for personal medical information. This is a growing concern, especially as more healthcare processes shift from paper to online and as more of the data is turned into analytics for better patient care.

For example, reporter Maggie Miller states that “one major recent data breach led to the personal information of 20 million customers of blood testing groups Quest Diagnostics, LabCorp and Opko Health being exposed”.

Until now, much of the momentum has been in efforts to urge lawmakers to focus on the sale and use of data in the social media space. However, in light of recent breaches, much more attention has turned to the importance of securing health records and medical data.

Evidence That Consumers Are Now Putting Privacy Ahead Of Convenience: Gartner

Gartner researchers have found that a considerable number of consumers and employees are not willing to trade their data’s security, safety, and peace of mind for more convenience.

With that in mind, many companies and organizations are redefining their internal views of customer data.

Chris Howard, a distinguished research vice president at Gartner, states that “As a CIO, you have a mandate to maintain data protections on sensitive data about consumers, citizens and employees. This typically means putting someone in charge of a privacy management program, detecting and promptly reporting breaches, and ensuring that individuals have control of their data. This is a board-level issue, yet barely half of organizations have adequate controls in place” (Source).

Recently, at the Gartner IT Symposium in Toronto, he argued that companies must be able to change their practices and become more adaptive to privacy-related demands. Gartner calls this the ‘ContinuousNext’ approach, and they hope it will build momentum through digital transformation and beyond. 

The steady erosion of privacy at home 

Most public areas are under the watch of AI cameras, cellphone companies, and advertisers that watch your every move. 

At home, all these internet-connected gadgets are watching you too: smart assistants, connected light bulbs, video doorbells, Wi-Fi thermostats, you name it.

The problem: these devices learn your voice, interests, habits, TV preferences, meals, times home and away, and all other types of sensitive data. The gadgets then relay this information back to the companies that made them.

But can people switch back to their old ways? Can people go back to regular temperature control systems, TVs that aren’t smart, and human assistants rather than robotic ones?

Nevertheless, the Supreme Court has indeed placed new boundaries on digital snooping, especially snooping without warrants or consent. What does the future look like for a world that cannot live without tech?

Lack of Quality Data is Hurting Patients

Less data means fewer healthcare breakthroughs, which means less longevity. So let’s promote data sharing for healthcare benefits, because sharing is caring.

Having high-quality healthcare records means improved and advanced insights for patient care, increased operational efficiency and of course, possibly finding cures for diseases. Without this data, the effectiveness of health-related research declines.

Unfortunately, the truth in the healthcare industry is that data is being siloed and not shared because it represents a liability. Yet healthcare organizations are facing greater demand than ever to share their data with researchers for analytics.

Why is it difficult to access/share healthcare data?

There are several reasons why healthcare organizations do not engage in data sharing with health care researchers. Here are a few of them:

  • Staff unfamiliar with new de-identification technology, which increases the risk of re-identification
  • Healthcare organizations engaging in methods that produce high-risk datasets
  • Organizations being unable to be 100% confident that privacy is protected while sharing data
  • Healthcare organizations preferring to sell data over sharing it (Source)

When dealing with sensitive PII and PHI, the need to balance the demand for quality data against privacy regulation poses a trade-off. Even today, 2 out of 3 people do not have confidence in their organization’s risk compliance to protect patients’ individual data (Source).

In healthcare research today, researchers are often unable to acquire data because information is siloed. “Healthcare big data silos make it nearly impossible for providers, pharmacies, and other stakeholders to work together for truly coordinated care”, states Brent Clough, CEO of Trio Health. “This siloed nature of healthcare prevents physicians, pharmaceutical companies, manufacturers and payers from accessing and interpreting important data sets, instead, encouraging each group to make decisions based upon a part of the information rather than the whole” (Source). To break these silos down, researchers need what are known as ‘legislative directives’ to allow data sharing. These directives not only encourage data sharing but also include security requirements to protect personally identifiable information (PII) and protected health information (PHI) (Source).

If we break through all or most of these barriers, the possibilities of what we can do with all the financial, clinical, R&D, administration and operational data available are endless.

Why is this data useful?

In general, the largest use for data is primary analysis, especially quality assurance (QA): 72% of surveyed respondents say they use data for QA (Source). That translates directly to the healthcare sector, where quality patient care is the goal.

Quality patient care is missing in many healthcare practices. In the United States, 5% of diagnoses are incorrect, and these diagnostic errors contribute to about 10% of patient deaths (Source). Naturally, opening up data will not only improve patients’ health, but will also boost healthcare professionals’ overall reputation and a nation’s economy.

One great use of healthcare analytics is robotics, which enables further data-driven decision-making, improved operations, lower costs, and focused clinical effectiveness. Artificial Intelligence (A.I.) is a great example of a promising future in healthcare. Prospective advantages of data sharing to enhance A.I. in the health sector include:

  • Robotic exams and surgeries
  • Workflow optimization
  • Improved hospital supplies inventory control
  • Better health record organization.

In the near future, a routine doctor’s visit could be replaced by regular monitoring of a patient’s health status and consultations from home.

Who is benefitting from this?

Everyone benefits when healthcare organizations share their data with researchers. Healthcare professionals are able to provide better service, researchers are able to turn raw data into useful findings for both professionals and patients, patients are diagnosed efficiently and effectively, and so on.

For example, the pharmaceutical company AstraZeneca has a long-term alliance with HealthCore, a health services research company, to conduct first-hand research and determine the most effective and cost-efficient treatments for certain chronic diseases. AstraZeneca aims to use HealthCore’s data, along with its own trial data, to make informed financial and healthcare decisions (Source).

Another example of data sharing enhancing healthcare can be seen in diagnostics. A recent video by IBM and Medtronic shows their intention to have insulin pumps work autonomously, regularly checking blood-glucose levels and injecting insulin when needed, in hopes of preventing disruption to the user’s daily life and making treatment as easy as possible (Source).

Humber River Hospital in Canada is increasing its quality of service by automating 80% of its back-end services (such as pharmacy, laundry, and food delivery) with robots and other technologies, freeing staff to spend more time face-to-face with patients (Source).

Businesses can benefit from data sharing too. For example, last year Apple partnered with 13 healthcare systems, including Johns Hopkins, so that patients could download their electronic health data onto Apple devices, of course, with consent (Source).

Without this crucial healthcare research, innovation in medicine and disease management is at a standstill, which hurts us all.

CryptoNumerics enables data sharing using state-of-the-art privacy techniques like secure multiparty computation. Our software allows healthcare organizations to satisfy privacy regulations and data residency restrictions while unlocking valuable insights in the healthcare field to better serve patients. Visit our website to find out more.
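
To give a flavour of the idea behind secure multiparty computation (a generic textbook sketch, not CryptoNumerics’ actual protocol), here is additive secret sharing, where parties jointly compute a sum without any one of them ever seeing another’s raw value:

```python
import random

MODULUS = 2**61 - 1  # arithmetic is done modulo a large prime

def share(secret, n_parties):
    """Split a secret into n random shares that sum to it (mod MODULUS)."""
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MODULUS)
    return shares

# Three hospitals each hold a private patient count.
counts = [120, 340, 95]

# Each hospital splits its count into three shares, one per party.
all_shares = [share(c, 3) for c in counts]

# Party i receives the i-th share of every input and sums locally;
# each share alone is a uniformly random number revealing nothing.
partial_sums = [sum(s[i] for s in all_shares) % MODULUS for i in range(3)]

# Combining the partial sums reveals only the total, never the inputs.
total = sum(partial_sums) % MODULUS
print(total)  # 555
```

No single party ever holds more than random-looking shares, yet the combined result is the exact aggregate; this is the property that lets organizations collaborate on analytics without exchanging raw records.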
