Is this a mirror world? Uber defends privacy and science centre exposes 174,000 names

Typically, we expect Uber to be on the wrong side of a privacy debacle. But this week, the company claims to be defending its users’ privacy from the LA Department of Transportation. Meanwhile, the Ontario Science Centre reports a data breach that exposed the personal information of 174,000 individuals. Are the upcoming state-level privacy laws the answer to consumers’ privacy concerns?

Uber claims LA’s data-tracking tool is a violation of state privacy laws.

The LA Department of Transportation (LADOT) wants to use Uber’s dockless scooters and bikes to collect real-time trip data, but Uber has repeatedly refused due to privacy concerns. The fight is coming to a head: on Monday, Uber threatened to file a lawsuit and seek a temporary restraining order (Source).

Last year, LADOT’s general manager, Reynolds, began developing a system to improve mobility in the city by enabling communication between the city and every form of transportation. To do so, LADOT implemented a Mobility Data Specification (MDS) software program, called Provider, in November, which mandated that all dockless scooters and bikes operating in LA send their trip data to city headquarters.

Then a second piece of software, Agency, was developed to report and send alerts to companies about their micro-mobility devices; for example, it would flag an improperly parked scooter or an imminent street closure (Source).
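
To make the privacy stakes concrete, here is a minimal sketch of the kind of trip record a scooter operator might report to a city under an MDS-style Provider feed. The field names and values below are simplified assumptions for illustration, not the exact MDS schema.

```python
# Illustrative sketch only: a simplified, hypothetical trip record of the kind an
# MDS-style "Provider" feed might expose to a city. Field names are assumptions,
# not the exact MDS schema.
import json

trip_record = {
    "provider_name": "example-scooter-co",   # hypothetical operator
    "device_id": "b7c9-example",              # vehicle identifier
    "trip_id": "3f2a-example",                # unique trip identifier
    "start_time": 1573221600,                 # Unix timestamps
    "end_time": 1573222320,
    "route": {                                # GeoJSON-style breadcrumb of the trip
        "type": "FeatureCollection",
        "features": [
            {"type": "Feature",
             "geometry": {"type": "Point", "coordinates": [-118.2437, 34.0522]},
             "properties": {"timestamp": 1573221600}},
        ],
    },
}

# The privacy concern: every record ties a precise origin, destination, and path
# to a single device, so repeated trips could re-identify an individual rider.
print(json.dumps(trip_record, indent=2))
```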

This would give the city access to every single trip consumers take. Yet according to Reynolds, the data being gathered is essential to manage the effects of micro-mobility on the streets: “At LADOT, our job is to move people and goods as quickly and safely as possible, but we can only do that if we have a complete picture of what’s on our streets and where.” (Source)

Other cities across the country were thrilled by the results and are looking to implement similar MDS solutions.

In reality, the protocols carry Big Brother-like implications, and many privacy stakeholders seem to side with Uber, determining that LADOT’s actions would, in fact, “constitute surveillance” (Source). This includes the EFF, which stated that “LADOT must start taking seriously the privacy of Los Angeles residents.” What’s more, in a letter to LA, the EFF wrote that “the MDS appears to violate the California Electronic Communications Privacy Act (CalECPA), which prohibits any government entity from compelling the production of electronic device information, including raw trip data generated by electronic bikes or scooters, from anyone other than the authorized possessor of the device without proper legal process.” (Source)

While Uber’s concerns appear valid, there is fear that LADOT will revoke the company’s permit to operate because of its refusal to comply (Source). As of Tuesday, the permit was suspended. But with the lawsuit looming, the public can expect the courts to decide the legality of the situation (Source).

Ontario Science Centre data breach exposes 174,000 names

This week, the Ontario Science Centre explained that on August 16, 2019, it was made aware of a data breach affecting 174,000 people. The breach was discovered by Campaigner, the third-party company that handles mailings, newsletters, and invitations for the OSC.

Between July 23 and August 7, “someone made a copy of the science centre’s subscriber emails and names without authorization” (Source).

Upon further investigation, it was learned that the perpetrator used the login credentials of a former Campaigner employee to access the data. While no other personal information was stolen, the sheer number of consumers affected highlights the potentially negative consequences of relying on trusted third parties.

Anyone whose data was compromised in this incident was notified by the science centre and encouraged to reach out with any further questions. In addition, Ontario’s Information and Privacy Commissioner, Beamish, was alerted to the breach one day after the notices began going out to the public.

Moving forward, the Ontario Science Centre is “reviewing data security and retention policies” and working alongside Beamish to investigate the incident in full and ensure it is not repeated (Source).

Will more states adopt privacy laws in 2020?

January 1, 2020, marks the implementation of the California Consumer Privacy Act (CCPA). This upcoming law has dominated media coverage, but more state-level privacy laws are expected soon, and they will reshape the privacy landscape in America. With a focus on consumer privacy and an increased risk of litigation, businesses are on the edge of their seats anticipating the states’ actions.

Bills in New York, New Jersey, Massachusetts, Minnesota, and Pennsylvania will be debated in the next few months. However, due to the challenge of mediating among all the stakeholders involved, several of the bills expected to pass this year got caught up in negotiations. Some have even fallen flat, like those in Arizona, Florida, Kentucky, Mississippi, and Montana. On the other hand, a few states are commissioning studies that will evaluate current privacy laws and determine where they should be updated or expanded by digging into data breaches and Internet privacy (Source).

Meanwhile, big tech is lobbying for a federal privacy law in an attempt to supersede state-level legislation (to learn more about this, read our blog).

Any way you look at it, more regulations are coming, and the shift in privacy values will create sweeping changes in the United States and across the globe. This is more necessary than ever in a mirror world where Uber claims to be on a mission to protect user privacy and the science centre comes clean about a massive data breach. The question remains: are privacy laws the answer to a data-driven world? Perhaps 2020 will be the year that makes businesses more privacy-conscious.



The Consequences of Data Mishandling: Twitter, TransUnion, and WhatsApp

Who should you trust? This week highlights the personal privacy risks and organizational consequences when data is mishandled or used against the best interests of the account holder. Twitter provided advertisers with user phone numbers that had been collected for two-factor authentication, the personal information of 37,000 Canadians was leaked in a TransUnion cybersecurity attack, and a GDPR-related investigation into Facebook and Twitter threatens billions in fines.

Twitter shared your phone number with advertisers.

Early this week, Twitter admitted to using phone numbers that users had provided for two-factor authentication to help profile those users and target ads. This allowed the company to create “Tailored Audiences,” an industry-standard product that enables “advertisers to target ads to customers based on the advertiser’s own marketing lists.” In other words, the profiles in the marketing list an advertiser uploaded were matched against Twitter’s user list using the phone numbers users had provided for security purposes.
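
For readers wondering how a marketing list gets matched to security phone numbers in practice, here is a hedged sketch of the generic approach ad platforms commonly use: normalize and hash the identifiers, then intersect the lists. The names and numbers are hypothetical, and this is not a description of Twitter’s actual pipeline.

```python
# Illustrative sketch only (not Twitter's actual pipeline). Custom-audience matching
# of this kind typically normalizes identifiers, hashes them (SHA-256 is common),
# and intersects the advertiser's hashed list with the platform's own records.
import hashlib

def normalize_and_hash(phone: str) -> str:
    """Strip formatting characters and hash the remaining digits."""
    digits = "".join(ch for ch in phone if ch.isdigit())
    return hashlib.sha256(digits.encode()).hexdigest()

# Numbers users gave the platform for two-factor authentication (hypothetical).
platform_2fa_numbers = {normalize_and_hash("+1 555 010 4477"): "user_123"}

# The advertiser's uploaded marketing list (hypothetical).
advertiser_list = [normalize_and_hash(p) for p in ["+1-555-010-4477", "+1-555-010-9999"]]

# The intersection becomes the "Tailored Audience" that can now be shown ads.
matched = [platform_2fa_numbers[h] for h in advertiser_list if h in platform_2fa_numbers]
print(matched)  # ['user_123']
```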

When users provided their phone numbers to enhance account security, they never realized this would be the tradeoff. This manipulative approach to gathering user information raises questions about Twitter’s data privacy protocols. Moreover, the fact that this confidential information was made available to advertisers should leave you wondering what other information is shared with business partners, and how (Source).

Curiously, after realizing what happened, rather than coming forward, the company rushed to hire Ads Policy Specialists to look into the problem.

On September 17, the company “addressed an ‘error’ that allowed advertisers to target users based on phone numbers” (Source). That same day, it posted a job advertisement for someone to train internal Twitter employees on ad policies and join a team re-evaluating its advertising products.

Now, nearly a month later, Twitter has publicly admitted its mistake and said it is unsure how many users were affected. While the company insists no personal data was shared externally and is clearly taking steps to ensure this doesn’t happen again, is it too late?

Third-Party Attacks: How Valid Login Credentials Led to Banking Information Exposure 

A cybersecurity breach at TransUnion highlights the rapidly increasing threat of third-party attacks and the challenge of preventing them. The personal data of 37,000 Canadians was compromised when a legitimate business customer’s login credentials were used illegally to harvest TransUnion data, including names, dates of birth, current and past home addresses, credit and loan obligations, and repayment histories. While bank account numbers were not included, social insurance numbers may also have been at risk. The compromise occurred between June 28 and July 11 but was not detected until August (Source).

While alarming, these attacks are very frequent, accounting for around 25% of cyberattacks in the past year. Daniel Tobok, CEO of Cytelligence Inc., reports that the threat of third-party attacks is increasing: more than ever, criminals are using the accounts of trusted third parties (customers, vendors) to gain access to their targets’ data. This method of entry is hard to detect because attackers often simulate the typical actions of legitimate users. In this case, the credentials of the leasing division of Canadian Western Bank were used to log in and access the credit information of nearly 40,000 Canadians, activity that is not atypical of the bank’s regular operations (Source).

Cybersecurity attacks like this are what has driven the rise of two-factor authentication, which aims to enhance security (perhaps in every case other than Twitter’s). However, if companies only invest in hardware, they solve only half the issue, for the human side of cybersecurity is a much more serious threat than is often acknowledged or considered. “As an attacker, you always attack the weakest link, and in a lot of cases unfortunately the weakest link is in front of the keyboard.” (Source)

Hefty fines loom over Twitter and Facebook as the Irish DPC closes its investigations.

The Data Protection Commission (DPC) in Ireland has recently finished investigations into Facebook’s WhatsApp and Twitter over breaches of the GDPR (Source). The investigations examined whether WhatsApp provided information about the app’s services in a transparent manner to both users and non-users, and looked into a Twitter data breach notification made in January 2019.

These cases have now moved on to the decision-making phase, and the companies are at risk of fines of up to 4% of their global annual revenue. This means Facebook could expect to pay more than $2 billion.

The decision now moves to Helen Dixon, Ireland’s chief data regulator, and a ruling is expected by the end of the year. These are landmark cases: the first Irish legal proceedings connected to US companies since the GDPR came into effect a little over a year ago, in May 2018 (Source). Big tech companies are on edge about the verdict, as the Irish DPC plays the largest GDPR supervisory role over most of them because many use Ireland as the base for their EU headquarters. What’s more, the DPC has opened dozens of investigations into other major tech companies, including Apple and Google, and the chief data regulator’s decision may signal more of what’s to come (Source).

In the end, businesses and the public alike must become more privacy-conscious. Between Twitter’s data mishandling, the TransUnion third-party attack, and the GDPR investigations coming to a close, it is clear that privacy is affecting everyday operations and lives.


What do Trump, Google, and Facebook Have in Common?

This year, the Trump Administration declared the need for a national privacy law to supersede a patchwork of state laws. But as the year comes to a close, and amidst the impeachment inquiry, time is running out. Meanwhile, Google plans to roll out encrypted web addresses, and Facebook stalls research into social media’s effect on democracy. Do these three seek privacy or power?

The Trump Administration, Google, and Facebook claim that privacy is a priority, and… well… we’re still waiting for the proof. Over the last year, the news has been awash with privacy scandals and data breaches. Every day we hear promises that privacy is a priority and that a national privacy law is coming, but so far, the evidence of action is lacking. This raises the question: are politicians and businesses using the guise of “privacy” to manipulate people? Let’s take a closer look.

Congress and the Trump Administration: National Privacy Law

Earlier this year, Congress and the Trump Administration agreed they wanted a new federal privacy law to protect individuals online. This rare occurrence was even supported and campaigned for by major tech firms (read our blog “What is your data worth” to learn more). However, despite months of talks, “a national privacy law is nowhere in sight [and] [t]he window to pass a law this year is now quickly closing.” (Source)

Disagreements over enforcement and state-level power are said to be holding back progress. Thus, while senators, including Senate Commerce Committee chair Roger Wicker, insist they are working hard, there are no public results, and with the impeachment inquiry, it is possible we will not see any for some time (Source). This means the White House will likely miss its self-appointed deadline of January 2020, when the CCPA goes into effect.

Originally, this plan was designed to avoid a patchwork of state-level legislation that can make it challenging for businesses to comply and can weaken privacy protections. It is not a simple process, and since “Congress has never set an overarching national standard for how most companies gather and use data,” much work is needed to develop a framework to govern privacy on a national level (Source). However, GDPR in Europe is evidence that a large governing structure can successfully hold organizations accountable to privacy standards. But how much longer will US residents need to wait?

Google Encryption: Privacy or Power

Google has been trying to get an edge over the competition for years by leveraging the massive troves of user data it acquires. Undoubtedly, its work has led to innovation that has redefined the way our world works, but our privacy has paid the price. Like never before, our data has become the new global currency, and Google has played a central part in that shift.

Google has famously made privacy a priority and is currently working to enhance user privacy and security with encrypted web addresses.

Unencrypted web addresses are a major security risk, as they make it simple for malicious actors to intercept web traffic and use fake sites to gather data. However, in denying hackers this ability, power is handed to companies like Google, which will be able to collect more user data than ever before. The risk is “that control of encrypted systems sits with Google and its competitors.” (Source)

This is because encryption cuts out the middle layer of ISPs and can change the mechanisms through which we look up specific web pages. This could enable Google to become the centralized encrypted-DNS provider (Source).
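
To see what that shift looks like in practice, here is a minimal sketch of a DNS over HTTPS (DoH) lookup, assuming Google’s public dns.google JSON resolver endpoint. The query rides inside HTTPS, so the ISP in the middle no longer sees which domain was requested, but the resolver operator does.

```python
# A minimal sketch of a DNS-over-HTTPS (DoH) lookup via Google's public JSON
# resolver (dns.google). The point: the query travels inside HTTPS, so a network
# middleman sees only encrypted traffic to the resolver, while the resolver
# operator still sees every domain you look up.
import json
import urllib.request

def doh_lookup(domain: str, record_type: str = "A") -> list:
    # Note: domain is not URL-encoded here; fine for simple hostnames.
    url = f"https://dns.google/resolve?name={domain}&type={record_type}"
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    # "Answer" may be absent if the name does not resolve.
    return [ans["data"] for ans in payload.get("Answer", [])]

if __name__ == "__main__":
    print(doh_lookup("example.com"))  # e.g. ['93.184.216.34']
```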

Thus, while DoH is certainly a privacy and security upgrade over the current DNS system, shifting from local middle layers to major browser enterprises centralizes user data, raising anti-competitive and child-protection concerns. It also diminishes law enforcement’s ability to blacklist dangerous sites and monitor those who visit them, and it creates new opportunities for hackers by reducing defenders’ ability to gather cybersecurity intelligence from malware activity, an integral part of fulfilling government-mandated regulation (Source).

Nonetheless, the feature will roll out in a few weeks as the new default, despite calls from those with DoH concerns to wait until more is known about the potential fallout.

Facebook and the Disinformation Fact Checkers

Over the last few years, Facebook has developed a terrible reputation as one of the least privacy-centric companies in the world. But is that accurate? After the Cambridge Analytica scandal, followed by an endless string of data privacy debacles, Facebook is now stalling its “disinformation fact-checkers” on the grounds of privacy problems.

In April 2018, Mark Zuckerberg announced that the company would develop machine learning to detect and manage misinformation on Facebook (Source). It then promised to share this information with non-profit researchers, who would flag disinformation campaigns as part of an academic study of how social media influences democracies (Source).

To ensure that the data being shared could not be traced back to individuals, Facebook applied differential privacy techniques.
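
The article doesn’t detail Facebook’s exact technique, but the core idea of differential privacy can be shown with a generic Laplace-mechanism sketch: add calibrated noise to aggregate statistics so that no single user’s presence can be inferred from the released numbers.

```python
# A minimal, generic sketch of differential privacy's core idea (not Facebook's
# actual method): release aggregate statistics with calibrated Laplace noise so
# no single person's contribution can be inferred.
import random

def laplace_noise(scale: float) -> float:
    """The difference of two i.i.d. exponentials is Laplace(0, scale)."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(true_count: int, epsilon: float) -> float:
    """A counting query has sensitivity 1, so the noise scale is 1/epsilon."""
    return true_count + laplace_noise(1.0 / epsilon)

# e.g. "how many accounts shared this URL": the released number stays useful,
# while any one account's contribution is hidden in the noise.
print(round(dp_count(true_count=10_482, epsilon=0.5)))
```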

However, once the data was delivered, researchers complained that it did not include enough information about the disinformation campaigns to allow them to derive meaningful results. Some even insisted that Facebook was going against the original agreement (Source). As a result, some of the funders of this initiative are considering backing out.

Initially, Facebook was given a deadline of September 30 to provide the full data sets, or the entire research grants program would be shut down. While the company has begun offering more data in response, the full data sets have not been provided.

A spokesperson from Facebook says, “This is one of the largest sets of links ever to be created for academic research on this topic. We are working hard to deliver on additional demographic fields while safeguarding individual people’s privacy.” (Source). 

While Facebook may be limiting academic research on democracies, perhaps it is finally prioritizing privacy. And at the end of the day, with an ethical framework in place, the impact of social media on democracy can still be measured through technological advancement and academic research without compromising privacy.

In the end, it is clear that privacy promises hold the potential to move people to action. The US government may not have a national privacy law anywhere in sight, the motives behind Google’s encrypted lookups may be questionable, and Facebook’s sudden prioritization of privacy may cut off democratic research, but at least privacy is becoming a hot topic, and that holds promise for a privacy-centric future for the public.



Your health records are online, and Amazon wants you to wear Alexa on your face

This week’s news was flooded with sensitive medical information landing on the internet and, perhaps, in the wrong hands. Sixteen million patient scans were exposed online, the European Court of Justice ruled Google does not need to remove links to sensitive information, and Amazon released new Alexa products for you to wear everywhere you go.

Over five million patients have had their privacy breached and their private health information exposed online. These documents contain highly sensitive data, such as names, birth dates, and, in some cases, Social Security numbers. Worse, the list of compromised medical record systems is rapidly growing, and the data can all be accessed with a standard web browser. In fact, Jackie Singh, a cybersecurity researcher and chief executive of the consulting firm Spyglass Security, reports “[i]t’s not even hacking,” because the data is so easily accessible to the average person (Source).

One of these systems belongs to MobilexUSA, whose records, showing patients’ names, dates of birth, doctors, and lists of procedures, were found online (Source).

Experts report that this could be a direct violation of HIPAA, and many warn that the potential consequences of the leak are devastating: medical data is highly sensitive and, in the wrong hands, could be used maliciously (Source).

According to Oleg Pianykh, the director of medical analytics at Massachusetts General Hospital’s radiology department, “[m]edical-data security has never been soundly built into the clinical data or devices, and is still largely theoretical and does not exist in practice.” (Source)

Such a statement signals a privacy crisis in the healthcare industry that desperately needs a fix. According to Pianykh, the problem is not a lack of regulatory standards, but rather that “medical device makers don’t follow them.” (Source) If that is the case, should we expect HIPAA enforcement to crack down the same way GDPR has?

With patient privacy up in the air in the US, citizens’ “Right to be Forgotten” in the EU is also being questioned.

The “Right to be Forgotten” states that “personal data must be erased immediately where the data are no longer needed for their original processing purpose, or the data subject has withdrawn [their] consent” (Source). This means that upon request, a data “controller” must erase any personal data by whatever means necessary, whether that is physical destruction or permanently overwriting the data with “special software.” (Source)
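
As a toy illustration of the “overwrite, then delete” approach mentioned above, here is a hedged sketch in Python. It is illustrative only: on SSDs, journaling filesystems, and backed-up systems, overwriting a single file path does not guarantee the data is unrecoverable, and GDPR erasure must also cover copies and downstream processors.

```python
# Toy sketch of "overwrite, then delete". Illustrative only: not sufficient on
# SSDs or journaling filesystems, and real erasure must also reach backups and
# any copies held by processors.
import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # overwrite contents with random bytes
            f.flush()
            os.fsync(f.fileno())        # push the overwrite to disk
    os.remove(path)                     # finally unlink the file

# Example (hypothetical file):
# overwrite_and_delete("/tmp/subject_123_profile.json")
```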

When this law was codified in the General Data Protection Regulation (GDPR), it was implemented to govern Europe. Yet France’s CNIL fined Google, an American company, $110,000 in 2016 for refusing to remove private data from search results. Google argued the changes should not need to be applied to the google.com domain or other non-European sites (Source).

On Tuesday, the European Court of Justice agreed, ruling that Google is under no obligation to extend EU rules beyond European borders by removing links to sensitive personal data (Source). However, the court made a point of noting that Google “must impose new measures to discourage internet users from going outside the EU to find that information.” (Source) The decision sets a precedent for how far a nation’s laws reach beyond its borders when it comes to digital data.

While the EU has a firm stance on the right to be forgotten, Amazon makes clear that you can “automatically delete [your] voice data”… every three to eighteen months (Source). The lack of immediate erasure is potentially troubling for those concerned about their privacy, especially alongside the new product launch, which will move Alexa out of your home and onto your body.

On Wednesday, Amazon launched Alexa earbuds (Echo Buds), glasses (Echo Frames), and a ring (Echo Loop). The earbuds are available on the marketplace, but the latter two are experimental and only available by invitation for the time being (Source).

With these products, you will be able to access Alexa wherever you are and, in the case of the Echo Buds, harness Bose noise-reduction technology for only USD $130 (Source). However, while these products promise to make your life more convenient, using them will let Amazon monitor your daily routines, behaviour, quirks, and more.

Amazon has specified that its goal is to make Alexa “ubiquitous” and “ambient” by spreading it everywhere, including our homes, appliances, cars, and now our bodies. Yet even as it opens up about this strategy for lifestyle dominance, Amazon claims to prioritize privacy as the first tech giant to let users opt out of having their voice data transcribed and listened to by employees. Despite this, it is clear that “Alexa’s ambition and a truly privacy-centric customer experience do not go hand in hand.” (Source)

With Amazon spreading into wearables, Google winning the “Right to be Forgotten” case, and patient records being exposed online, this week is wrapping up to be a black mark on user privacy. Stay tuned for our next weekly news blog to learn how things shape up.



CryptoNumerics Named Strata Data Top 3 Disruptive Finalist

TORONTO, September 24, 2019 — Cutting-edge techniques and technology. We’ve been named a Strata Data Top 3 “Disruptive Start-up” Finalist!

CryptoNumerics is proud to have been named a top 3 “Disruptive Startup” finalist by Strata. The Strata Data Awards recognize the most innovative startups, leaders, and data science projects from around the world, and we are honoured to be amongst them. Being recognized alongside some of the most innovative and advanced new ideas validates the increasing importance of privacy in the world of big data.

Our team will be at the Strata Data Conference in New York from September 24-26 to engage with industry leaders and showcase the revolutionary business impact of our solutions. Stop by our booth (P21) to discuss how we can help automate your big data privacy protection and to explore the intersections between cutting-edge data science and privacy.

Please text “CRYPTO” to 22333 to vote for us and show us your support!

About CryptoNumerics:

CryptoNumerics is where data privacy meets data science. The company creates enterprise-class software solutions, including privacy automation and virtual data collaboration, that Fortune 1000 enterprises are deploying to address privacy regulations such as the GDPR, CCPA, and PIPEDA, while still driving data science and innovation projects to obtain greater business and customer insights. CryptoNumerics’ privacy automation reduces corporate liability and protects brand value from privacy non-compliance exposures.
