What do Trump, Google, and Facebook Have in Common?

This year, the Trump Administration declared the need for a national privacy law to supersede a patchwork of state laws. But, as the year comes to a close, and amidst the impeachment inquiry, time is running out. Meanwhile, Google plans to roll out encrypted web addresses, and Facebook stalls research into social media’s effect on democracy. Do these three seek privacy or power?

The Trump Administration, Google, and Facebook claim that privacy is a priority, and… well… we’re still waiting for the proof. Over the last year, the news has been awash with privacy scandals and data breaches. Every day we hear promises that privacy is a priority and that a national privacy law is coming, but so far, the evidence of action is lacking. This raises the question: are politicians and businesses using the guise of “privacy” to manipulate people? Let’s take a closer look.

Congress and the Trump Administration: National Privacy Law

Earlier this year, Congress and the Trump Administration agreed they wanted a new federal privacy law to protect individuals online. This rare occurrence was even supported and campaigned for by major tech firms (read our blog “What is your data worth” to learn more). However, despite months of talks, “a national privacy law is nowhere in sight [and] [t]he window to pass a law this year is now quickly closing.” (Source)

Disagreements over enforcement and state-level power are said to be holding back progress. Thus, while senators, including Roger Wicker, who chairs the Senate Commerce Committee, insist they are working hard, there are no public results; and with the impeachment inquiry, it is possible we will not see any for some time (Source). This means that the White House will likely miss its self-imposed deadline of January 2020, when the CCPA goes into effect.

Originally, this plan was designed to avoid a patchwork of state-level legislation that can make compliance challenging for businesses and weaken privacy protections. It is not a simple process, and since “Congress has never set an overarching national standard for how most companies gather and use data,” much work is needed to develop a framework to govern privacy on a national level (Source). However, Europe’s GDPR provides evidence that a large governing structure can successfully hold organizations accountable to privacy standards. But how much longer will US residents need to wait?

Google Encryption: Privacy or Power

Google has been trying to get an edge over the competition for years by leveraging the massive troves of user data it acquires. Undoubtedly, its work has led to innovation that has redefined the way our world works, but our privacy has paid the price. Like never before, our data has become the new global currency, and Google has had a central part to play in the matter.

Google has famously made privacy a priority and is currently working to enhance user privacy and security with encrypted web addresses.

Unencrypted web addresses are a major security risk, as they make it simple for malicious actors to intercept web traffic and use fake sites to gather data. However, denying hackers this ability hands power to companies like Google, which will be able to collect more user data than ever before. The risk is “that control of encrypted systems sits with Google and its competitors.” (Source)

This is because encryption cuts out the middle layer of ISPs and can change the mechanisms through which we access specific web pages. This could enable Google to become the centralized provider of encrypted DNS (Source).

Thus, while DNS over HTTPS (DoH) is certainly a privacy and security upgrade over the current DNS system, shifting resolution from local middle layers to major browser enterprises centralizes user data, raising anti-competitive and child-protection concerns. Further, it diminishes law enforcement’s ability to blacklist dangerous sites and monitor those who visit them. It also opens new opportunities for hackers by reducing security teams’ ability to gather cybersecurity intelligence from malware activity, which is integral to fulfilling government-mandated regulation (Source).
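To make the mechanism concrete, here is a minimal Python sketch of how an encrypted-DNS lookup is structured, using the query format of Cloudflare’s public DoH JSON API as an illustration. The endpoint URL and response fields are assumptions drawn from public documentation, and the response is canned here so the sketch runs offline:

```python
import json
from urllib.parse import urlencode

# A browser using DoH sends an ordinary HTTPS request to its resolver, so
# on-path observers (e.g. the ISP) see only encrypted traffic to that one
# provider -- which is exactly how lookups become centralized.
def doh_query_url(domain, record_type="A",
                  resolver="https://cloudflare-dns.com/dns-query"):
    """Build a DoH JSON-API query URL (Cloudflare's public endpoint is
    shown for illustration; Google and others expose similar APIs)."""
    return resolver + "?" + urlencode({"name": domain, "type": record_type})

# Shape of a typical DoH JSON response, canned so the sketch runs offline.
sample_response = json.loads(
    '{"Status": 0, "Answer": [{"name": "example.com",'
    ' "type": 1, "TTL": 300, "data": "93.184.216.34"}]}'
)

def first_address(response):
    """Pull the first A-record address out of a DoH JSON answer."""
    return response["Answer"][0]["data"]

print(doh_query_url("example.com"))
print(first_address(sample_response))
```

The point of the sketch is the trust shift: with plain DNS, the ISP’s resolver answers the query; with DoH, whichever resolver the browser is configured to use sees every lookup.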

Nonetheless, this feature will roll out in a few weeks as the new default, despite the desire from those with DoH concerns to wait until learning more about the potential fallout.

Facebook and the Disinformation Fact Checkers

Over the last few years, Facebook has developed a terrible reputation as one of the least privacy-centric companies in the world. But is it accurate? After the Cambridge Analytica scandal, followed by an endless string of data-privacy debacles, Facebook is now stalling its “disinformation fact-checkers” on the grounds of privacy problems.

In April of 2018, Mark Zuckerberg announced that the company would develop machine learning to detect and manage misinformation on Facebook (Source). It then promised to share this information with non-profit researchers who would flag disinformation campaigns as part of an academic study on how social media is influencing democracies (Source).

To ensure that the data being shared could not be traced back to individuals, Facebook applied differential privacy techniques.

However, upon receiving this information, researchers complained that the data did not include enough detail about the disinformation campaigns to allow them to derive meaningful results. Some even insisted that Facebook was going against the original agreement (Source). As a result, some of the people funding this initiative are considering backing out.

Initially, Facebook was given a deadline of September 30 to provide the full data sets, or the entire research grants program would be shut down. While they have begun offering more data in response, the full data sets have not been provided.

A spokesperson from Facebook says, “This is one of the largest sets of links ever to be created for academic research on this topic. We are working hard to deliver on additional demographic fields while safeguarding individual people’s privacy.” (Source). 

While Facebook may be limiting academic research on democracies, perhaps it is finally prioritizing privacy. And, at the end of the day, with an ethical framework to move forward, through technological advancement and academic research, the impact of social media on democracy is still measurable without compromising privacy.

In the end, it is clear that privacy promises hold the potential to manipulate people into action. While the US government may not have a national privacy law anywhere in sight, the motives behind Google’s encrypted links may be questionable, and Facebook’s sudden prioritization of privacy may cut out democratic research, at least privacy is becoming a hot topic, and that holds promise for a privacy-centric future for the public.

Join our newsletter
Your health records are online, and Amazon wants you to wear Alexa on your face

This week’s news was flooded with reports of sensitive medical information landing on the internet, and perhaps in the wrong hands. Sixteen million patient scans were exposed online, the European Court of Justice ruled Google does not need to remove links to sensitive information, and Amazon released new Alexa products for you to wear everywhere you go.

Over five million patients have had their privacy breached and their private health information exposed online. These documents contain highly sensitive data, like names, birthdays, and in some cases, social security numbers. Worse, the list of compromised medical record systems is rapidly increasing, and the data can all be accessed with a traditional web browser. In fact, Jackie Singh, a cybersecurity researcher and chief executive of the consulting firm Spyglass Security, reports “[i]t’s not even hacking,” because the data is so easily accessible to the average person (Source).

One of these systems belongs to MobilexUSA, whose records, showing patients’ names, dates of birth, doctors, and lists of procedures, were found online (Source).

Experts report that this could be a direct violation of HIPAA and many warn that the potential consequences of this leak are devastating, as medical data is so sensitive, and if in the wrong hands, could be used maliciously (Source).

According to Oleg Pianykh, the director of medical analytics at Massachusetts General Hospital’s radiology department, “[m]edical-data security has never been soundly built into the clinical data or devices, and is still largely theoretical and does not exist in practice.” (Source)

Such a statement signals a privacy crisis in the healthcare industry that desperately needs fixing. According to Pianykh, the problem is not a lack of regulatory standards, but rather that “medical device makers don’t follow them.” (Source) If that is the case, should we expect HIPAA to crack down the same way GDPR has?

With patients’ privacy up in the air in the US, citizens’ “Right to be Forgotten” in the EU is also being questioned.

The “Right to be Forgotten” states that “personal data must be erased immediately where the data are no longer needed for their original processing purpose, or the data subject has withdrawn [their] consent” (Source). This means that upon request, a data “controller” must erase any personal data by whatever means necessary, whether that is physical destruction or permanently overwriting the data with “special software.” (Source)

When this law was codified in the General Data Protection Regulation (GDPR), it was implemented to apply within Europe. Yet France’s CNIL fined Google, an American company, $110,000 in 2016 for refusing to remove private data from search results. Google argued the changes should not need to be applied to the google.com domain or other non-European sites (Source).

On Tuesday, the European Court of Justice agreed and ruled that Google is under no obligation to extend EU rules beyond European borders by removing links to sensitive personal data (Source). However, the court made a distinct point that Google “must impose new measures to discourage internet users from going outside the EU to find that information.” (Source) This decision sets a precedent for how far a region’s laws reach beyond its borders when it comes to digital data.

While the EU has a firm stance on the right to be forgotten, Amazon makes clear that you can “automatically delete [your] voice data”… every three to eighteen months (Source). The lack of immediate erasure is potentially troublesome for those concerned with their privacy, especially alongside the new product launch, which will move Alexa out of your home and onto your body.

On Wednesday, Amazon launched Alexa earbuds (Echo Buds), glasses (Echo Frames), and rings (Echo Loop). The earbuds are available on the marketplace, but the latter two are an experiment and are only available by invitation for the time being (Source). 

With these products, you will be able to access Alexa support wherever you are and, in the case of the Echo Buds, harness the noise-reduction technology of Bose for USD $130 (Source). However, while these products promise to make your life more convenient, using them will let Amazon monitor your daily routines, behaviour, quirks, and more.

Amazon has specified that its goal is to make Alexa “ubiquitous” and “ambient” by spreading it everywhere, including our homes, appliances, cars, and now, our bodies. Yet, even as it opens up about this strategy for lifestyle dominance, Amazon claims to prioritize privacy as the first tech giant to let users opt out of having their voice data transcribed and listened to by employees. Despite this, it is clear that “Alexa’s ambition and a truly privacy-centric customer experience do not go hand in hand.” (Source)

With Amazon spreading into wearables, Google winning the “Right to be Forgotten” case, and patient records being exposed online, this week is wrapping up to be a black mark on user privacy. Stay tuned for our next weekly news blog to learn about how things shape up. 



CryptoNumerics Partners with TrustArc on Privacy Insight Webinar

We’re excited to partner up with TrustArc on their Privacy Insight Series on Thursday, September 26th at 12pm ET to talk about “Leveraging the Power of Automated Intelligence for Privacy Management”! 

With the increasing prevalence of privacy technology, how can the privacy industry leverage the benefits of artificial intelligence and machine learning to drive efficiencies in privacy program management? Many papers have been written on managing the potential privacy issues of automated decision-making, but far fewer on how the profession can utilize the benefits of technology to automate and simplify privacy program management.

Privacy tools are starting to leverage technology to incorporate powerful algorithms to automate repetitive, time-consuming tasks. Automation can generate significant cost and time savings, increase quality, and free up the privacy office’s limited resources to focus on more substantive and strategic work. This session will bring together expert panelists who can share examples of leveraging intelligence within a wide variety of privacy management functions.

 

Key takeaways from this webinar:
  • Understand the difference between artificial intelligence, machine learning, intelligent systems, and algorithms
  • Hear examples of the benefits of using intelligence to manage privacy compliance
  • Understand how to incorporate intelligence into your internal program and/or client programs to improve efficiencies

Register Now!

Can’t make it? Register anyway – TrustArc will automatically send you an email with both the slides and recording after the webinar.

To read more privacy articles, click here.

This content was originally posted on TrustArc’s website. Click here to view the original post.



Google Prioritizes Privacy Amidst the YouTube Scandal and Facebook Dating Launch

Photo by rawpixel.com from Pexels

In the wake of the University of Zurich study and YouTube’s ongoing children’s-privacy troubles, it is clear that data security is of the utmost importance, both to protect citizens’ rights and to shield companies from noncompliance fines and scandals. However, today’s standards are minimal, and conventional techniques, such as anonymization, are not enough. Recurring privacy issues are also evident at Facebook, leaving users questioning what Facebook Dating means for their privacy. In spite of this, the launch of Google’s open-source differential privacy library for companies is a clear signal that organizations are making data security a priority.

The University of Zurich published a study on September 2, 2019, indicating that researchers were able to “identify the participants in confidential legal cases, even though such participants had been anonymized.” (Source) Through the combination of AI and big data, researchers were able to de-anonymize the data of 84% of 120,000+ participants in less than one hour. This demonstrates that anonymization performed without considering publicly available information is vulnerable to “linkage” attacks and cannot guarantee privacy, and the speed and ease with which these results were obtained set an alarming precedent. More specifically, this case signals that government systems cannot protect the privacy of plaintiffs and defendants, and that enhanced privacy technology and automation are needed to prevent confidential information from being released.
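The kind of linkage attack at the heart of the study can be illustrated with a toy Python sketch. All names and records below are hypothetical, and the real study was far more sophisticated, but the principle — joining “anonymized” records to public data on shared quasi-identifiers — is the same:

```python
# "Anonymized" case records: direct identifiers removed, but
# quasi-identifiers (birth year, postcode) left intact.
anonymized_cases = [
    {"case": "A-17", "birth_year": 1975, "postcode": "8001", "role": "plaintiff"},
    {"case": "B-03", "birth_year": 1988, "postcode": "8032", "role": "defendant"},
]

# Publicly available auxiliary data (hypothetical), e.g. registries or news.
public_records = [
    {"name": "R. Meier", "birth_year": 1975, "postcode": "8001"},
    {"name": "L. Keller", "birth_year": 1988, "postcode": "8032"},
]

def link(cases, aux):
    """Re-identify cases by joining on the shared quasi-identifiers."""
    matches = {}
    for c in cases:
        hits = [p["name"] for p in aux
                if (p["birth_year"], p["postcode"]) == (c["birth_year"], c["postcode"])]
        if len(hits) == 1:  # a unique match means the record is re-identified
            matches[c["case"]] = hits[0]
    return matches

print(link(anonymized_cases, public_records))
```

Here both “anonymized” participants are re-identified by a simple join, which is why anonymization that ignores public auxiliary data offers so little protection.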

However, the government is not the only one whose data security was called into question last week, as Google agreed to pay a record $170 million for illegally harvesting children’s data (Source). On Wednesday, the Federal Trade Commission and New York’s attorney general reached an agreement requiring the company to pay the fine and improve children’s privacy protection on YouTube. The repercussions are a direct result of YouTube collecting the personal data of children under the age of thirteen without parental permission and targeting them with behavioural ads, in violation of the Children’s Online Privacy Protection Act (COPPA). Under the settlement, child-directed content must be designated as such, and behavioural ads on that content must be prevented. Further, YouTube will not obtain previously shared data without parental consent.

The public is largely displeased with these sanctions, calling them a mere “slap on the wrist,” and some industry professionals share that view. Jeffery Chester, executive director of the Center for Digital Democracy, says, “It’s the equivalent of a cop pulling somebody over for speeding at 110 miles an hour – and they get off with a warning.” (Source) Such expressions indicate a trend toward increased corporate responsibility and legal repercussions for privacy breaches.

Between the University of Zurich study and the ramifications of YouTube’s illegal data gathering reports last week, it is evident that companies should question how well protected their datasets are. Without adequate systems in place, they risk noncompliance fines and a loss of public trust.

This lack of trust follows Facebook, as users question what Facebook Dating will mean for their data in spite of its new privacy and security features (Source). Announced on Thursday, this service is set to roll out across the US to allow users to create a separate dating-specific profile and be matched with other users based on location, indicated preferences, events attended, and groups, amongst other factors. Some of the features include the ability to hide a profile from friends or to share plans with select people. Beyond this, Facebook will offer “Secret Crush” on Instagram, which allows individuals to compile a list of friends they are interested in, and to be matched if the crush also lists them.

This data is likely not as safe as Facebook suggests. Digital strategist Jason Kelley states, “If you’re trying to avoid dating services that have red flags, you can’t really find one that has more red flags than Facebook.” (Source) After all, just days ago, roughly 200 million Facebook users’ phone numbers were exposed online (Source).

The primary concern, given Facebook’s history of mishandling personal data, is Facebook’s ability to develop a more sophisticated ad profile based on their dating information. This includes “what kinds of people users like, whom they match with, and even how dates go” (Source). Mark Weinstein, a privacy advocate and founder of social network MeWe, even goes so far as to say that “Facebook will use Facebook Dating as a new portal into users’ lives; collecting, targeting, and selling dating history, romantic preferences, emotions, sexual interests, fetishes, everything.” (Source)

An additional concern, given Facebook’s track record, is that while the company reports dating profiles will not be connected to users’ Facebook activity, sensitive information, like sexual orientation, could still be at risk of exposure.

The immense cloud of doubt surrounding Facebook is one other organizations hope to avoid, and as a result, there has been an increased focus on privacy protection. Google made this a priority when launching its open-source differential privacy library this week (Source).

With Google’s new library, developers are able to “take this library and build their own tools that can work with aggregate data without revealing personally identifiable information either inside or outside their companies.” (Source) This is made possible by differential privacy, one of the most advanced privacy-protection techniques available today.

Differential privacy enables the public sharing of information about a dataset while maintaining the confidentiality of individuals in the dataset. Using this method of security, according to Miguel Guevara, a Privacy and Data Protection Product Manager at Google, is vital because “without strong privacy protections, you risk losing the trust of your citizens, customers, and users.” (Source)
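As a rough illustration of the idea, here is a minimal sketch of the classic Laplace mechanism applied to a simple count. This shows the general technique only, not how Google’s library is implemented:

```python
import math
import random

def laplace_noise(scale):
    """Draw one sample from a Laplace(0, scale) distribution via inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon):
    """Release how many records satisfy `predicate`, plus Laplace noise.
    A count has sensitivity 1 (adding or removing one person changes it
    by at most 1), so noise of scale 1/epsilon gives epsilon-differential
    privacy for that release."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [34, 29, 41, 52, 38]
# The released value hovers around the true count (3) but is never exact,
# so no single person's presence in the dataset can be confidently inferred.
print(private_count(ages, lambda a: a > 35, epsilon=0.5))
```

Smaller values of epsilon add more noise and give stronger privacy; aggregate answers remain useful while any one individual’s contribution stays hidden.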

Google’s prioritization of privacy through its open-source launch signals that improved privacy protection is expected in today’s market. This corroborates the needs outlined in the University of Zurich study, the fines Google faces over the YouTube scandal, and the impact Facebook’s history looks to have on trust in its new service. Thus, it is evident: data security and privacy protection are more consequential than ever.



Rewarded for sharing your data? Sign me up!

Companies are now starting to pay users for their data in an effort to be more ethical. A large Bluetooth security flaw has been detected that could prove harmful to millions. Blockchain’s future is looking bright as privacy-preserving technology booms. And the Canadian federal elections are being ‘watched’ for their history of ‘watching’ the public.

Rewarded for sharing your data? Sign me up!

Drop Technologies has secured USD $44 million in investments to grow a technology-based alternative to traditional customer loyalty programs. With over three million users already signed up, as well as 300 brands on its platform, such as Expedia and Postmates, the company is headed in the right direction.

Given that Facebook and other tech giants are monetizing data without user permission, getting paid for it doesn’t seem like a bad idea after all. “I’m a Facebook user and an Instagram user, and these guys are just monetizing my data left and right, without much transparency,” said Onsi Sawiris, a managing partner at New York’s HOF Capital. “At least if I’m signing up for Drop, I know that if they’re using my data I will get something in return, and it’s very clear” (Source).

This alternative to rewards programs tracks your spending with its 300+ partner brands and lets you earn points that you can spend at companies such as Starbucks or Uber Eats. If it works as an alternative to credit card rewards, it will benefit consumers looking for extra savings on their purchases. So don’t drop it till you try it!

Bluetooth proving to be a potential data breach vulnerability 

Researchers have discovered a flaw that leaves millions of Bluetooth users vulnerable to data breaches. This flaw enables attackers to interfere while two users are trying to connect without being detected, as long as they’re within a certain range. From music to conversations, to data entered through a Bluetooth device, anything could be at risk. “Upon checking more than 14 Bluetooth chips from popular manufacturers such as Qualcomm, Apple, and Intel, researchers discovered that all the tested devices are vulnerable to attacks” (Source). 

Fortunately, some companies such as Apple and Intel have already implemented security upgrades on their devices. Users are also advised to keep their security, software, and firmware updated at all times. 

Get ready for blockchain advancements like never before

For the past decade, blockchain has been used to build an ecosystem in which cryptocurrencies and peer-to-peer transactions are just a few of the many use cases (Source).

Traditionally, data is shared across centralized networks, leaving systems vulnerable to attacks. However, with decentralization as an added security measure to blockchain, the threat of a single point of failure across a distributed network is eradicated. 

As more and more companies turn to blockchain for more efficient data sharing and easier data transfers, privacy is often overlooked.

In most public blockchains today, transactions are visible to all nodes of a network. Given the sensitive nature of the data, this transparency naturally raises privacy concerns and comes at a cost. With digital transformation happening all around us, privacy protection cannot be ignored.

To address privacy, many blockchain companies are employing privacy-preserving mechanisms on their infrastructures, from zero-knowledge proofs to cryptographic techniques such as secure multi-party computation (MPC). These mechanisms protect data as it is shared and only reveal the specific elements needed for a specific task (Source).
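As a toy illustration of the principle behind MPC, here is a sketch of additive secret sharing, a building block of many such protocols. This is a teaching example under simplified assumptions, not any particular blockchain’s implementation:

```python
import random

PRIME = 2**61 - 1  # field modulus; any large prime works for this sketch

def share(secret, n=3):
    """Split `secret` into n additive shares that sum to it mod PRIME.
    Any n-1 shares look uniformly random and reveal nothing on their own."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine all shares to recover the secret."""
    return sum(shares) % PRIME

# Two parties jointly compute a sum without revealing their inputs:
a_shares = share(25000)  # party A's private value
b_shares = share(31000)  # party B's private value
# Each share-holder adds the shares it received; only the total is revealed.
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 56000
```

The design choice to note: arithmetic on shares mirrors arithmetic on the underlying secrets, which is what lets a network compute over data that no single node can read.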

Cost efficiencies and a better understanding of consumer needs are just a few of the advantages of the privacy-preserving mechanisms being introduced. As data and privacy go hand in hand in the future, equitability and trust will be the key to unlocking new possibilities that enhance life as we know it (Source).

Upcoming Canadian elections could turn into surveillance problem

Once again, the Canadian federal elections are raising concerns about interference and disruption through the misuse of personal data. In the past, political parties have been known to use their power to influence populations who are not aware of how their data is being used. 

Since data has played a major role in past elections, this could become a surveillance issue: experts who study surveillance say that harnessing data has been the key to electoral success. “Politicians the world over now believe they can win elections if they just have better, more refined and more accurate data on the electorate” (Source).

A related issue is a lack of transparency between voters and electoral candidates. “There is a divide between how little is publicly known about what actually goes on in platform businesses that create online networks, like Facebook or Twitter, and what supporters of proper democratic practices argue should be known” (Source).

The officials of this upcoming election should be paying close attention to the public’s personal data and how it is being used.



A deep dive into Facebook’s privacy today

This week we take an in-depth look into what privacy looks like for Facebook. First, we will explore what user data Facebook is collecting. Then, we will look at how Facebook is invading users’ privacy… again. Finally, we will discuss the new privacy scam directed at Facebook.

See and control what Facebook collects from you

Last year, Facebook announced their upcoming release of a tool to ‘clear history’ and delete data that third-party websites and apps share with the social media giant. Fast-forward to today, the company has kept its word and has released the tool in Ireland, South Korea, and Spain. 

The tool, known as ‘Off-Facebook Activity’, allows you to see and control what information apps and websites have collected about you and sent to Facebook. It will show you information about your online activities, the questions you search on Google, and your online shopping history. However, while it offers the option to disconnect the data, it cannot delete it.

If you choose to clear your activity, Facebook will simply remove your identifying information from the data and unlink it from your account. It will not delete the data (Source).
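The distinction between unlinking and deleting can be sketched in a few lines. The data, field names, and logic below are hypothetical illustrations of the behaviour described, not Facebook’s actual pipeline:

```python
# A hypothetical log of events that sites and apps sent to the platform.
activity_log = [
    {"user_id": "u123", "site": "shop.example", "event": "purchase"},
    {"user_id": "u123", "site": "news.example", "event": "view"},
]

def clear_history(log, user_id):
    """'Clearing' removes the identifier, unlinking the events from the
    account -- but the event records themselves are kept."""
    for event in log:
        if event["user_id"] == user_id:
            event["user_id"] = None  # unlinked, not deleted
    return log

clear_history(activity_log, "u123")
print(activity_log)  # the events remain; they just no longer point at u123
```

In other words, the history still exists in aggregate; what disappears is the link between those events and your profile.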

This is a step in the right direction, as it is the first time Facebook has allowed users to control or even see this information.

Facebook’s voice transcripts more invasive

Facebook has been transcribing users’ audio clips for quality control and to improve the accuracy of their services. Unlike Alexa or Google Home workers listening to user recordings, Facebook’s audio does not come from users giving smart assistants commands but from human-to-human communication. Bloomberg reported that Facebook contractors were kept in the dark with regards to where the audio came from and why these audio clips needed to be transcribed. 

While Google, Apple, and Facebook have temporarily suspended human audio reviews, Amazon has chosen to let its users opt-out (Source).

Another Facebook privacy scam, and this time it’s not Facebook’s fault

People have been reposting and resharing a viral message that purportedly notifies Facebook of their rights as users.

“Don’t forget tomorrow starts the new Facebook rule where they can use your photos. Don’t forget Deadline today!!! It can be used in court cases in litigation against you. Everything you’ve ever posted becomes public from today Even messages that have been deleted or the photos not allowed. It costs nothing for a simple copy and paste, better safe than sorry. Channel 13 News talked about the change in Facebook’s privacy policy. I do not give Facebook or any entities associated with Facebook permission to use my pictures, information, messages or posts, both past and future. With this statement, I give notice to Facebook it is strictly forbidden to disclose, copy, distribute, or take any other action against me based on this profile and/or its contents. The content of this profile is private and confidential information. The violation of privacy can be punished by law (UCC 1-308- 1 1 308-103 and the Rome Statute. NOTE: Facebook is now a public entity. All members must post a note like this. If you prefer, you can copy and paste this version. If you do not publish a statement at least once it will be tacitly allowing the use of your photos, as well as the information contained in the profile status updates. FACEBOOK DOES NOT HAVE MY PERMISSION TO SHARE PHOTOS OR MESSAGES.”

It is not real, it is a scam, and there are several reasons why:

1. The message is written poorly with no attention to capitalization and grammar.

2. You cannot end up in court simply for using social media.

3. Facebook does not own your content, and the message contains several discrepancies.

4. Posting a statement on your Facebook timeline that is contrary to Facebook’s privacy terms has no legal effect nor does it change Facebook’s privacy policies (Source).

However, if you are still wary about your privacy being at risk, take some measures to be safer. Change your privacy controls. Don’t post content that you don’t want shared. Or simply cancel your account for the best protection guaranteed.

 
