What do Trump, Google, and Facebook Have in Common?

This year, the Trump Administration declared the need for a national privacy law to supersede a patchwork of state laws. But as the year comes to a close amidst the impeachment inquiry, time is running out. Meanwhile, Google plans to roll out encrypted web addresses, and Facebook is stalling research into social media’s effect on democracy. Do these three seek privacy or power?

The Trump Administration, Google, and Facebook all claim that privacy is a priority, and… well… we’re still waiting for the proof. Over the last year, the news has been awash with privacy scandals and data breaches. Every day we hear promises that privacy is a priority and that a national privacy law is coming, but so far, the evidence of action is lacking. This raises the question: are politicians and businesses using the guise of “privacy” to manipulate people? Let’s take a closer look.

Congress and the Trump Administration: National Privacy Law

Earlier this year, Congress and the Trump Administration agreed they wanted a new federal privacy law to protect individuals online. This rare occurrence was even supported and campaigned for by major tech firms (read our blog “What is your data worth” to learn more). However, despite months of talks, “a national privacy law is nowhere in sight [and] [t]he window to pass a law this year is now quickly closing.” (Source)

Disagreement over enforcement and state-level power is said to be holding back progress. Thus, while senators, including Roger Wicker, who chairs the Senate Commerce Committee, insist they are working hard, there are no public results; and with the impeachment inquiry underway, it is possible we will not see any for some time (Source). This means the White House will likely miss its self-appointed deadline of January 2020, when the CCPA goes into effect.

Originally, this plan was designed to avoid a patchwork of state-level legislation that can make compliance challenging for businesses and weaken privacy protections. It is not a simple process, and since “Congress has never set an overarching national standard for how most companies gather and use data,” much work is needed to develop a framework to govern privacy on a national level (Source). However, the GDPR has shown in Europe that a large governing structure can successfully hold organizations accountable to privacy standards. But how much longer will US residents need to wait?

Google Encryption: Privacy or Power

Google has been trying to gain an edge over the competition for years by leveraging the massive troves of user data it acquires. Undoubtedly, this work has led to innovation that has redefined the way our world works, but our privacy has paid the price. Like never before, our data has become the new global currency, and Google has had a central part to play in the matter.

Google has famously made privacy a priority and is currently working to enhance user privacy and security with encrypted web addresses.

Unencrypted web addresses are a major security risk, as they make it simple for malicious actors to intercept web traffic and use fake sites to gather data. However, in denying hackers this ability, power is handed to companies like Google, which will be able to collect more user data than ever before. The risk is “that control of encrypted systems sits with Google and its competitors.” (Source)

This is because encryption cuts out the middle layer of ISPs and can change the mechanisms through which we access specific web pages. This could enable Google to become the centralized provider of encrypted DNS (Source).
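To make the mechanism concrete, here is a minimal sketch of a DNS-over-HTTPS (DoH) lookup, using Cloudflare’s public JSON endpoint as the resolver (Google operates an equivalent service). Instead of a plaintext UDP query that an ISP can read, the hostname travels inside an ordinary HTTPS request to the resolver:

```python
# Minimal DNS-over-HTTPS (DoH) lookup sketch using Cloudflare's public
# JSON endpoint. Illustrative only: a browser's built-in DoH client is
# more involved, but the idea is the same. The DNS query is wrapped in
# an ordinary HTTPS request that intermediaries cannot read.
import requests

def doh_lookup(hostname: str) -> list:
    response = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": hostname, "type": "A"},
        headers={"accept": "application/dns-json"},
        timeout=5,
    )
    response.raise_for_status()
    # Each answer record carries the resolved IP in its "data" field.
    return [record["data"] for record in response.json().get("Answer", [])]

print(doh_lookup("example.com"))
```

The privacy gain is real, since the ISP sees only encrypted traffic. But every lookup now flows through whichever provider the browser chooses, which is exactly the centralization concern raised above.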

Thus, while DoH is certainly a privacy and security upgrade over the current DNS system, shifting from local middle layers to major browser enterprises centralizes user data, raising anti-competitive and child-protection concerns. Further, it diminishes law enforcement’s ability to blacklist dangerous sites and monitor those who visit them. It also weakens cybersecurity teams’ ability to gather intelligence from malware activity, an integral part of fulfilling government-mandated regulation, and thereby opens new opportunities for hackers (Source).

Nonetheless, this feature will roll out in a few weeks as the new default, despite calls from those with DoH concerns to wait until more is known about the potential fallout.

Facebook and the Disinformation Fact Checkers

Over the last few years, Facebook has developed a terrible reputation as one of the least privacy-centric companies in the world. But is that reputation accurate? After the Cambridge Analytica scandal, followed by an endless series of data privacy debacles, Facebook is now stalling its “disinformation fact-checkers” on the grounds of privacy problems.

In April of 2018, Mark Zuckerberg announced that the company would develop machine learning to detect and manage misinformation on Facebook (Source). It then promised to share this information with non-profit researchers who would flag disinformation campaigns as part of an academic study on how social media is influencing democracies (Source).

To ensure that the data being shared could not be traced back to individuals, Facebook applied differential privacy techniques.
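Facebook has not published its exact method, but the core idea behind differential privacy can be shown in a few lines: add calibrated random noise to aggregate statistics so that no single person’s presence in the data can be inferred. Here is a minimal sketch of the textbook Laplace mechanism, assuming a simple counting query:

```python
# A minimal sketch of the Laplace mechanism, the textbook differential
# privacy technique. Illustrative only; Facebook's actual
# implementation has not been made public.
import numpy as np

def noisy_count(true_count: int, epsilon: float) -> float:
    """Release a count perturbed with Laplace noise.

    For a counting query, adding or removing one person changes the
    result by at most 1 (the sensitivity), so noise is drawn from
    Laplace(sensitivity / epsilon). Smaller epsilon means more noise
    and stronger privacy.
    """
    sensitivity = 1.0
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# e.g. "how many users shared this link?" answered with a noisy value
print(noisy_count(true_count=1042, epsilon=0.5))
```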

However, upon receiving this information, researchers complained that the data did not include enough detail about the disinformation campaigns to allow them to derive meaningful results. Some even insisted that Facebook was going against the original agreement (Source). As a result, some of the people funding this initiative are considering backing out.

Initially, Facebook was given a deadline of September 30 to provide the full data sets, or the entire research grants program would be shut down. While they have begun offering more data in response, the full data sets have not been provided.

A spokesperson from Facebook says, “This is one of the largest sets of links ever to be created for academic research on this topic. We are working hard to deliver on additional demographic fields while safeguarding individual people’s privacy.” (Source). 

While Facebook may be limiting academic research on democracies, perhaps it is finally prioritizing privacy. And at the end of the day, with an ethical framework, technological advancement, and academic research, the impact of social media on democracy can still be measured without compromising privacy.

In the end, it is clear that privacy promises hold the potential to manipulate people into action. The US government may not have a national privacy law anywhere in sight, the motives behind Google’s encrypted web addresses may be questionable, and Facebook’s sudden prioritization of privacy may cut off democratic research. But at least privacy is becoming a hot topic, and that holds promise for a privacy-centric future for the public.

The Key to Anonymizing Datasets Without Destroying Their Analytical Value

Enterprise need for “anonymised” data lies at the core of everything from modern medical research, to personalised recommendations, to modern data science, to ML and AI techniques for profiling your customers for upselling and market segmentation. At the same time, anonymised data forms the legal foundation for demonstrating compliance with privacy regimes such as GDPR, CCPA, HIPAA, and other established and emerging data residency and privacy laws from around the world.

For example, the GDPR Recital 26 defines anonymous information as “information which does not relate to an identified or identifiable natural person” or “personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable.” Under GDPR law, only properly anonymized information can be handled or utilized by enterprises.


The perils of poorly or partially anonymised data

Why is anonymised data such a central part of demonstrating legal and regulatory privacy compliance? And why does failing to comply expose organisations to the risk of significant fines, and brand and reputational damage?

Because if the individuals in a dataset can be re-identified, then their promised privacy protections evaporate. Hence “anonymisation” is the process of removing personal identifiers, both direct and indirect, that may lead to an individual being identified. An individual may be directly identified from their name, address, postcode, telephone number, photograph or image, or some other unique personal characteristics. An individual may also be indirectly identifiable when certain information is combined or linked together with other sources of information, including their place of work, job title, salary, gender, age, their postcode or even the fact that they have a particular medical diagnosis or condition.

Anonymisation is so relevant to legislation such as GDPR because recent research has conclusively shown that poorly or partially anonymised data can lead to an individual being identified simply by combining that data with another dataset. In 2008, individuals were re-identified from an anonymised Netflix dataset of film ratings by comparing the ratings information with public scores on the IMDb film website. In 2014, the home addresses of New York taxi drivers were identified from an anonymised dataset of individual taxi trips in the city.

In 2018, the University of Chicago medical team shared with Google anonymised patient records which included appointment date and time stamps and medical notes. A class-action lawsuit pending in 2019 against Google and the University claims that Google can combine the appointment date and time stamps with other records it holds from Waze, Android phones, and other location sources to re-identify these individuals.

And data compliance isn’t the only reason that organizations need to be smart about how they anonymise data. An equally major issue is that full anonymisation tends to devalue the data, rendering it less useful for purposes such as data science, AI and ML, and other applications looking to gain insights and extract value. This is particularly true of indirect identifying information.

The challenges of anonymisation present businesses with a dilemma: fully anonymising directly and indirectly identifying customer data keeps them compliant, but it renders that data less valuable and useful, while partially anonymising it increases the risk of individuals being identified.


How to anonymise datasets without wiping out their analytical value

The good news is that it is possible to create fully compliant anonymised datasets and still retain the analytical value of data for data science, and AI and ML applications. You just need the right software.

The first challenge is to understand the risk of re-identification of an individual or individuals from a dataset. This cannot be done manually or by simply scanning a dataset; a systematic, automated approach has to be applied to assess the risk of re-identification. This risk assessment forms a key part of demonstrating your Privacy Impact Assessment (PIA), especially in data science and data lake environments. How many unique individuals or identifying attributes exist in a dataset that can identify an individual directly or indirectly? For example, say there are three twenty-eight-year-old males living in a certain neighbourhood in Toronto. As there are only three such individuals, if this information were combined with one other piece of information, such as employer, car driven, or medical condition, then there is a high probability of being able to identify the individual.
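As a sketch of what such an automated assessment computes, consider counting how many records share each combination of indirect identifiers (quasi-identifiers). The column names and records below are hypothetical; in k-anonymity terms, a group of size k = 3 is exactly the high-risk case described above:

```python
# Hypothetical re-identification risk check: for each combination of
# quasi-identifiers, count how many records are indistinguishable (k).
import pandas as pd

df = pd.DataFrame({
    "age":           [28, 28, 28, 41, 41],
    "gender":        ["M", "M", "M", "F", "F"],
    "neighbourhood": ["Riverdale", "Riverdale", "Riverdale", "Annex", "Annex"],
})
quasi_identifiers = ["age", "gender", "neighbourhood"]

# k = size of the group each record belongs to; small k means high risk.
df["k"] = df.groupby(quasi_identifiers)["age"].transform("size")
print(df[df["k"] <= 3])  # flag records at high risk of re-identification
```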

Once we’re armed with this risk assessment information, modern systems-based approaches to anonymisation can be applied. Using a generalisation technique, we can generalise the indirect identifiers in such a manner that the analytical value of the data is retained while we also meet our privacy compliance objective of fully anonymising the dataset. So with the twenty-eight-year-old males living in a certain neighbourhood in Toronto, we can generalise gender to show that there are nine twenty-eight-year-old individuals living there, thereby reducing the risk of an individual being identified.

Another example is age binning, where the analytical value of the data is preserved by generalising the age attribute. By binning the age “28” to a range such as “25 to 30,” we now show that there are 15 individuals aged 25 to 30 living in the Toronto neighbourhood, further reducing the risk of identification of an individual.
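Here is a self-contained sketch of that binning step, with hypothetical ages chosen to match the example (three people aged exactly 28, fifteen aged 25 to 30):

```python
# Age binning: generalise exact ages into ranges so that groups grow
# larger and individuals become harder to single out. Ages are made up.
import pandas as pd

ages = pd.Series([25, 26, 27, 28, 28, 28, 29, 29, 30, 25, 26, 27, 26, 29, 30])

# Before: exact ages form small, risky groups (three people aged 28).
print(ages.value_counts().sort_index())

# After: one coarse bin covers all fifteen individuals.
age_ranges = pd.cut(ages, bins=[24, 30], labels=["25 to 30"])
print(age_ranges.value_counts())  # "25 to 30": 15
```

The same pattern extends to generalisation hierarchies (postcode to city to region): keep generalising until every group is large enough to meet the chosen risk threshold.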

In the above examples, two key technologies enable us to fully anonymize datasets while retaining the analytical value: 

  1. An automated risk assessment feature that identifies the risk of re-identification in each and every dataset, in a consistent and defensible manner, across the enterprise.
  2. The application of anonymisation protection using privacy protection actions such as generalisation, hierarchies, and differential privacy techniques.

Using these two techniques, enterprises can start to overcome the anonymisation dilemma.

 




What is your data worth?

How much compensation would you require to give a company complete access to your data? New studies demonstrate that assigning a price tag to data may be the wrong way to go about fines for noncompliance. Meanwhile, 51 CEOs write an open letter to Congress to request a federal consumer data privacy law, and the Internet Association joins them in their campaign. At the same time, Facebook is caught using Bluetooth in the background to track users and drive up profits.

Would you want your friends to know every facet of your digital footprint? How about your:

  • Location
  • Visited sites
  • Searched illnesses
  • Devices connected to the internet
  • Content read
  • Religious views
  • Political views
  • Photos
  • Purchasing habits


How about strangers? No? We didn’t think so. Then, the question remains, why are we sharing non-anonymized or improperly-anonymized copies of our personal information with companies? 

Today, many individuals are regularly sharing their data unconsciously with companies who collect it for profit. This data is used to monitor behaviour and profile you for targeted advertising that will make big data and tech companies, like Facebook, $30 per year in revenue per North American user (Source). Due to the profitability of data mining and the increasing number of nine-figure fines for data breaches, researchers have become fascinated by the economics of privacy. 

A 2019 study in the Journal of Consumer Policy questioned how users value their data. In the study, individuals stated they would only be willing to pay $5/month to protect personal data. While the low price tag may sound like privacy is a low priority, it is more likely that individuals believe their privacy should be a given, rather than something they have to pay to receive. This theory is corroborated by the fact that when the question was reversed, and users were asked how much they would accept in exchange for full access to their data, the median response was $80/month (Source).

While this study demonstrates a clear value placed on data from the majority, some individuals attributed a much higher cost and others said they would share data for free. Thus, the study concluded that “both willingness to pay and willingness to accept measures are highly unreliable guides to the welfare effects of retaining or giving up data privacy.” (Source)

With traditional measures of economic value called into question as a way to determine fines for data breaches and illegally harvested data, other influential players in data privacy research were asked how to go about holding corporations accountable to privacy standards. Rebecca Kelly Slaughter, a Federal Trade Commission (FTC) Commissioner, stated that “injury to the public can be difficult to quantify in monetary terms in the case of privacy violations.” (Source)

Rohit Chopra, a fellow FTC commissioner, also explained that current levels of monetary fines are not a strong deterrent for companies like Facebook, as their business model will remain untouched. As a result, the loss could be recouped through the further monetization of personal data. Consequently, both commissioners suggested that holding Facebook executives personally liable would be a stronger approach (Source).

If no price can equate to the value of personal data, and fines do not deter prolific companies like Facebook, should we continue asking what data is worth? Alessandro Acquisti, of Carnegie Mellon University, suggests an alternative: view data privacy as a human right. This model of thinking poses an interesting line of inquiry for both big data players and lawmakers, especially as federal data privacy legislation increases in popularity in the US (Source).

On September 10, 51 top CEOs, members of Business Roundtable, an industry lobbying organization, sent an open letter to Congress to request a US federal data privacy law that would supersede state-level privacy laws to simplify product design, compliance, and data management. Among the CEOs were executives from Amazon, IBM, Salesforce, Johnson & Johnson, Walmart, and Visa.

Throughout the letter, the giants blamed the state-level patchwork of privacy regulations for the disorder of consumer privacy in the United States. Today, companies face an increasing number of state and jurisdictional laws that uphold varying standards to which organizations must comply. This, the companies argue, is an inefficient way to protect citizens, whereas a federal consumer data privacy law would provide reliable and consistent protections for Americans.

The letter also goes so far as to offer a proposed Framework for Consumer Privacy Legislation that the CEOs believe should be the base for future legislation. This framework states that data privacy law should…

  1. Champion Consumer Privacy and Promote Accountability
  2. Foster Innovation and Competitiveness
  3. Harmonize Regulations
  4. Achieve Global Interoperability

While a unified and consistent method of holding American companies accountable could benefit users, many leading privacy advocates, and even some tech giants, have questioned the CEOs’ intentions, regarding the proposal as a method “to aggregate any privacy lawmaking under one roof, where lobby groups can water-down any meaningful user protections that may impact bottom lines.” (Source)

This pattern of a disingenuous push for a federal privacy law continued last week as the Internet Association (IA), a trade group funded by the largest tech companies worldwide, launched a campaign to request the same. Members are largely made up of companies who make a profit through the monetization of consumer data, including Google, Microsoft, Facebook, Amazon, and Uber (Source).

In an Electronic Frontier Foundation (EFF) article, this campaign was referred to as a “disingenuous ploy to undermine real progress on privacy being made around the country at the state level.” (Source) Should this occur, the federal law would supersede state laws like the Illinois Biometric Information Privacy Act (BIPA), which makes it illegal to collect biometric data without opt-in consent, and the California Consumer Privacy Act (CCPA), which will give state residents the right to access and opt out of the sale of their personal data (Source).

In the last quarter alone, the IA has spent close to USD $176,000 trying, without success, to weaken the CCPA before it takes effect. Now, in conjunction with Business Roundtable and TechNet, it has called for a “weak national ‘privacy’ law that will preempt stronger state laws.” (Source)

One of the companies campaigning for a national standard is Facebook, which is caught up, yet again, in a data privacy scandal.

Apple’s new iOS 13 update looks to rework the smartphone operating system to prioritize privacy for users (Source). Recent “sneak peeks” showed that it will notify users of background activity from third-party apps, the surveillance infrastructure used to generate profit by profiling individuals outside their app usage. The culprit highlighted, unsurprisingly, is Facebook, which has been caught using Bluetooth to track nearby users.

While this may not seem like a big deal, by “[m]atching Bluetooth (and wi-fi) IDs that share physical location, [Facebook could] supplement the social graph it gleans by data-mining user-to-user activity on its platform.” (Source) Through this, Facebook can track not just your location, but the nature of your relationships with others. In pairing Bluetooth-gathered interpersonal interactions with social tracking (likes, followers, posts, messaging), Facebook can escalate its ability to monitor and predict human behaviour.

While you can opt out of location services on Facebook, doing so means you cannot use all aspects of the app. For instance, Facebook Dating requires location services to be enabled, a clause that takes away a user’s ability to make a meaningful choice about maintaining their privacy (Source).

In notifying users about apps using their data in the background, iOS 13 looks to bring back a measure of control to the user by making them aware of potential malicious actions or breaches of privacy.

In the wake of this, Facebook’s reaction has tested the bounds of reality. In an attempt to get out of the hot seat, they have rebranded the new iOS notifications as “reminders” (Source) and, according to Forbes, un-ironically informed users “that if they protect their privacy it might have an adverse effect on Facebook’s ability to target ads and monetize user data.” (Source) At the same time, Facebook PR has also written that “We’ll continue to make it easier for you to control how and when you share your location,” as if to take credit for Apple’s new product development (Source).

With such comments, it is clear that in the upcoming months, we will see how much individuals value their privacy and convenience. Between the debate over the value of data, who should govern consumer privacy rights, and another privacy breach by Facebook, the relevance of the data privacy conversation is evident. To stay up to date, sign up for our monthly newsletter and keep an eye out for our weekly blogs on privacy news.



CryptoNumerics Launches Re-Identify.com

Does your anonymized data protect privacy? Recent research demonstrates that conventional anonymization techniques are ineffective, exposing companies to fines. CryptoNumerics just launched Re-Identify.com, a web-based tool that measures the risk of re-identification and shows whether your data complies with current privacy regulation standards.

TORONTO, September 9, 2019 - Today CryptoNumerics, an enterprise software company, announced the release of Re-Identify.com, a web-based tool for measuring a dataset’s risk of re-identification. Users can upload a previously anonymized dataset that will be analyzed to produce an instant privacy risk score and an estimated cost of a data breach. Users can also view how CN-Protect, CryptoNumerics’ privacy automation solution, will anonymize their data in a way that substantially reduces the risk of re-identification while maintaining analytical value. This demonstration tool can be accessed via CryptoNumerics.com or Re-identify.com.

Gathering personal data has become increasingly valuable for driving better business decisions through insights and data analysis. In recent years, regulations have required organizations to protect their customers’ data. To comply, many organizations have implemented anonymization techniques. However, current research demonstrates that these methods are ineffective at protecting datasets against re-identification. As a result, organizations are in direct violation of the standards of anonymization outlined in the European General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), exposing them to non-compliance penalties and fines. These can include a fine of up to 4% of annual global turnover, a ban on data processing, and/or a suspension of data transfers to third countries (Source).

GDPR and CCPA emphasize that for data to be considered anonymous, every person in a dataset must be protected against the risk of re-identification. This includes datasets that have no apparent identifiers but can still be used to re-identify an individual by linking or combining with other available data. 

Citizens are increasingly concerned with confidentiality, privacy, and ethical usage of their personal information. At the same time, data is an essential driver of business and technological advancement that looks to revolutionize the future. As such, companies must learn to comply with the increasingly strict protection laws while retaining the ability to achieve accurate and detailed insights. With CryptoNumerics’ new privacy tool, businesses can quickly gain insights from their data while protecting people’s privacy.

The CryptoNumerics team is excited to be at the Strata Data Conference in New York from September 24-26 to engage with industry leaders and showcase the business impact of its solutions. The team is proud to have been named a finalist for “Most Disruptive Startup.” Being recognized amongst some of the most innovative and advanced new ideas validates the increasing importance of privacy in the world of big data.

Try Re-Identify.com today.

 

About CryptoNumerics:

CryptoNumerics is where data privacy meets data science. The company creates enterprise-class software solutions which include privacy automation and virtual data collaboration that Fortune 1000 enterprises are deploying to address privacy compliance such as the GDPR, CCPA, and PIPEDA, while still driving data science and innovation projects to obtain greater business and customer insights. CryptoNumerics’ privacy automation reduces corporate liability and protects brand value from privacy non-compliance exposures.



Rewarded for sharing your data? Sign me up!

Companies are now starting to pay users for their data in an effort to be more ethical. A large Bluetooth security flaw has been detected that could prove harmful to millions. Blockchain’s future is looking bright as privacy-preserving technology booms. And Canada’s federal elections are being ‘watched’ for their history of ‘watching’ the public.

Rewarded for sharing your data? Sign me up!

Drop Technologies has secured USD $44 million in investments toward growing a technology-based alternative to traditional customer loyalty programs. With over three million users already signed up, as well as 300 brands on its platform, such as Expedia and Postmates, the company is headed in the right direction.

Given that Facebook and other tech giants are monetizing data without user permission, getting paid for it doesn’t seem like a bad idea after all. “I’m a Facebook user and an Instagram user, and these guys are just monetizing my data left and right, without much transparency,” said Onsi Sawiris, a managing partner at New York’s HOF Capital. “At least if I’m signing up for Drop, I know that if they’re using my data I will get something in return, and it’s very clear” (Source).

This alternative to rewards programs tracks your spending across its 300+ partner brands and lets you earn points that you can spend at companies such as Starbucks or Uber Eats. As an alternative to credit card rewards, it will be beneficial to consumers looking for extra savings on their purchases. So don’t drop it till you try it!

Bluetooth proving to be a potential data breach vulnerability 

Researchers have discovered a flaw that leaves millions of Bluetooth users vulnerable to data breaches. This flaw enables attackers to interfere while two users are trying to connect without being detected, as long as they’re within a certain range. From music to conversations, to data entered through a Bluetooth device, anything could be at risk. “Upon checking more than 14 Bluetooth chips from popular manufacturers such as Qualcomm, Apple, and Intel, researchers discovered that all the tested devices are vulnerable to attacks” (Source). 

Fortunately, some companies such as Apple and Intel have already implemented security upgrades on their devices. Users are also advised to keep their security, software, and firmware updated at all times. 

Get ready for blockchain advancements like never before

For the past decade, blockchain has been used to build an ecosystem where cryptocurrencies and peer-to-peer transactions are just a few of the many use cases (Source).

Traditionally, data is shared across centralized networks, leaving systems vulnerable to attacks. However, with decentralization as an added security measure to blockchain, the threat of a single point of failure across a distributed network is eradicated. 

As more and more companies turn to blockchain to gain the benefits of more efficient data sharing and easier data transfers, privacy is often overlooked.

In most public blockchains today, transactions are visible to all nodes of a network. Naturally, the issue of privacy is raised due to the sensitive nature of the data, and this transparency comes at a cost. With digital transformation happening all around us, privacy protection cannot be ignored.

To address privacy, many blockchain companies are employing privacy-preserving mechanisms on their infrastructures, from zero-knowledge proofs to cryptographic techniques such as secure multi-party computation (MPC). These mechanisms keep data protected as it is shared and reveal only the specific elements needed for a specific task (Source).
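To give a flavour of these techniques, here is a minimal sketch of one MPC building block, additive secret sharing: a value is split into random shares that are individually meaningless but can be combined, and even added together, without any party seeing the underlying inputs. This is a simplified teaching example, not any specific blockchain’s protocol:

```python
# Additive secret sharing, a basic building block of secure multi-party
# computation (MPC). Simplified illustration; real MPC protocols add
# authentication, malicious-security checks, and multiplication support.
import secrets

PRIME = 2**61 - 1  # work modulo a large prime

def share(value: int, parties: int) -> list:
    """Split value into `parties` random shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list) -> int:
    return sum(shares) % PRIME

a_shares = share(10, 3)
b_shares = share(32, 3)

# Each party adds its own shares locally; no one ever sees 10 or 32.
sum_shares = [(a + b) % PRIME for a, b in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 42
```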

Cost efficiencies and a better understanding of consumer needs are just a few of the advantages of the privacy-preserving mechanisms being introduced. As data and privacy go hand in hand in the future, equitability and trust will be our key to unlocking new possibilities that enhance life as we know it (Source).

Upcoming Canadian elections could turn into surveillance problem

Once again, the Canadian federal elections are raising concerns about interference and disruption through the misuse of personal data. In the past, political parties have been known to use their power to influence populations who are not aware of how their data is being used. 

Since data has played a major role in elections, this could become a surveillance issue: experts who study surveillance say that harnessing data has been the key to electoral success in past elections. “Politicians the world over now believe they can win elections if they just have better, more refined and more accurate data on the electorate” (Source).

A related issue is a lack of transparency between voters and electoral candidates. “There is a divide between how little is publicly known about what actually goes on in platform businesses that create online networks, like Facebook or Twitter, and what supporters of proper democratic practices argue should be known” (Source).

The officials of this upcoming election should be paying close attention to the public’s personal data and how it is being used.



How to Decode a Privacy Policy

How to Decode a Privacy Policy

91% of Americans skip privacy policies before downloading apps. It is no secret that people and businesses are taking advantage of that, given that there’s a new app scandal, data breach, or hack every day. For example, take a look at the FaceApp fiasco from last month.

In its terms of use, FaceApp clearly states the following:

 “You grant FaceApp a perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels now known or later developed, without compensation to you. When you post or otherwise share User Content on or through our Services, you understand that your User Content and any associated information (such as your [username], location or profile photo) will be visible to the public” (Source).

However, these documents are actually important, especially since they disclose legal information about your data: what the company will do with it, how they will use it, and with whom they will share it.

So let’s look at the most efficient way to read through these excruciating documents: search for specific terms with a keyword or key-phrase search (see the sketch after the list below). The following terms are a great starting point:

  • Third parties
  • Except
  • Retain
  • Opt-out
  • Delete
  • With the exception of
  • Store/storage
  • Rights 
  • Public 
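Here is a small, hypothetical helper to automate that search: save the policy text to a file and print every sentence that mentions a term of interest. The file name and term list below are just examples:

```python
# Flag sentences in a privacy policy that mention key terms.
# The term list and file name below are illustrative examples.
import re

KEY_TERMS = ["third parties", "except", "retain", "opt-out", "delete",
             "with the exception of", "store", "storage", "rights", "public"]

def flag_sentences(policy_text: str) -> None:
    # Naive sentence split on terminal punctuation followed by whitespace.
    for sentence in re.split(r"(?<=[.!?])\s+", policy_text):
        hits = [term for term in KEY_TERMS if term in sentence.lower()]
        if hits:
            print(f"[{', '.join(hits)}] {sentence.strip()}")

with open("privacy_policy.txt") as f:  # a saved copy of the policy
    flag_sentences(f.read())
```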

“All consumers must understand the threats, their rights, and what companies are asking you to agree to in return for downloading any app,” Adam Levin, Founder of CyberScout says. “We’re living in an instant-gratification society, where people are more willing to agree to something because they want it right now. But this usually comes at a price” (Source).

New York Passes Data Breach Law

New York recently passed a law known as the SHIELD Act, or the Stop Hacks and Improve Electronic Data Security Act. The act applies to businesses that collect personal data from New York residents. Below are some of its key provisions; the act:

  • requires notification to affected consumers when there is a security breach,
  • broadens the scope of covered information,
  • expands the definition of what constitutes a data breach,
  • and extends the notification requirement to any entity with the private information of a New York resident (Source)

Why Apple Won’t Let You Delete Siri Recordings

Apple claims to protect its users’ privacy by not letting them delete their specific recordings. “Apple’s Siri recordings are given a random identifier each time the voice assistant is activated. That practice means Apple can’t find your specific voice recordings. It also means voice recordings can’t be traced back to a specific account or device” (Source).

After it was reported that contractors were listening to private Siri conversations, including doctor discussions and intimate encounters, Apple needed to change its privacy policies. 

Siri works differently from its rivals because Google Assistant and Alexa data is connected directly with a user’s account for personalization and customer service reasons. Apple, by contrast, does not rely as heavily on ad revenue and customer personalization; it relies on its hardware products and services.

LAPD Data Breach Exposes 2,500 Officers’ Data

The PII of about 17,500 LAPD applicants and 2,500 officers has been stolen in a recent data breach, with information such as names, IDs, addresses, dates of birth and employee IDs compromised.

LAPD and the city are working together to understand the severity and impact of the breach. 

“We are also taking steps to ensure the department’s data is protected from any further intrusions,” the LAPD said. “The employees and individuals who may have been affected by this incident have been notified, and we will continue to update them as we progress through this investigation” (Source).
