Avoid Data Breaches and Save Your Company Money

Tips on avoiding the privacy risks and breaches big companies face today. How much data breaches cost in 2019. Why consumers are shying away from sharing their data. Why an airline phishing scam could have long-term consequences.

Stay Ahead of the Privacy Game

The Equifax data breach is another wake-up call for all software companies. There is so much going on today with regard to data exposure, fraud, and threats. Especially with new laws being proposed, companies should take the necessary steps to avoid penalties and breaches. Here are some ways you can stay ahead of the privacy game.

  1. Get your own security hackers – Many companies have their own cybersecurity teams to probe for failures, threats, and vulnerabilities. Companies also hire outside hackers to uncover weaknesses in their privacy or security practices. “Companies can also host private or public ‘bug bounty’ competitions where hackers are rewarded for detecting vulnerabilities” (Source).
  2. Establish trust with certificates of compliance – Earn your customers’ trust by achieving compliance certifications. The baseline certification is ISO 27001. If your company offers cloud services, you can also attain a SOC 2 Type II report.
  3. Limit the data you collect – Some companies ask for too much information, for example, requiring a credit card number when a user signs up for a free trial. Why ask for a credit card when the trial is free? If users love the product or service, they will offer to pay for the full version themselves. Have faith in your product or service.
  4. Keep data only as long as needed – Holding on to data long after you need it is simply a risk to your company. Think about it: as a consumer yourself, how would you react if your personal data were compromised because of a trial you signed up for years ago? (Source)

How much does a data breach cost today?

According to a 2019 IBM + Ponemon Institute report, the average data breach costs a company approximately USD$1.25 million to USD$8.19 million, depending on the country and industry.

Each record costs companies an average of USD$148, according to the report, which surveyed 507 organizations across 16 regions and 17 industries. The U.S. tops the list with the highest average breach cost, at USD$8.19 million. Healthcare is the most expensive industry in terms of data breach costs, at an average of USD$6.45 million.

However, the report isn’t all negative, as it provides tips to improve your data privacy. You can reduce the cost of a potential data breach by up to USD$720,000, through simple mitigating steps such as an incident response team or having encryption in place (Source).

Consumers more and more hesitant to share their data

Marketers and data scientists, beware: a survey of 1,000 Americans conducted by the Advertising Research Foundation indicates that consumers’ willingness to share data with companies has dropped drastically since last year. “I think the industry basically really needs to communicate the benefits to the consumer of more relevant advertising,” said ARF Chief Research Officer Paul Donato. It is important to remember that not all consumers will happily give up their data for better-personalized advertisements (Source).

Air New Zealand breach could pose long-term effects

Air New Zealand’s recent phishing scam from earlier this week has caused alarm. The data breach exposed about 112,000 Air New Zealand Airpoints customers to long-term privacy concerns.

Victims received emails requesting personal information, and responded with details such as passport and credit card numbers.

“The problem is, the moment things are out there, then they can be used as a means to gain further information,” said Dr. Panos Patros, a specialist in cybersecurity at the University of Waikato. “Now they have something of you so then they can use it in another attack or to confuse someone else” (Source).

A good practice in situations like this is to change your passwords regularly and monitor your credit card statements. Refrain from posting common security-question answers, such as the first school you attended or your first pet’s name, on social media. Additionally, delete suspicious emails immediately without opening them (Source).

Join our newsletter


How to Decode a Privacy Policy

91% of Americans skip reading privacy policies before downloading apps. It is no secret that people and businesses take advantage of that: there’s a new app scandal, data breach, or hack every day. Take, for example, the FaceApp fiasco from last month.

In their terms of use, they clearly state the following:

 “You grant FaceApp a perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels now known or later developed, without compensation to you. When you post or otherwise share User Content on or through our Services, you understand that your User Content and any associated information (such as your [username], location or profile photo) will be visible to the public” (Source).

However, these documents deserve attention: they disclose legal information about your data, including what the company will do with it, how it will be used, and with whom it will be shared.

So let’s look at the most efficient way to read through these excruciating documents. Search for specific terms by doing a keyword or key phrase search. The following terms are a great starting point: 

  • Third parties
  • Except
  • Retain
  • Opt-out
  • Delete
  • With the exception of
  • Store/storage
  • Rights 
  • Public 
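
The keyword search described above is easy to automate. Below is a minimal sketch (the sample policy text and helper name are invented for illustration) that flags every sentence in a policy containing one of the key terms:

```python
import re

# Terms that often flag data-sharing and retention clauses (the list above).
KEY_TERMS = [
    "third parties", "except", "retain", "opt-out", "delete",
    "with the exception of", "store", "storage", "rights", "public",
]

def scan_policy(text):
    """Return (matched_terms, sentence) pairs for sentences with key terms."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    hits = []
    for sentence in sentences:
        lowered = sentence.lower()
        matched = [t for t in KEY_TERMS if t in lowered]
        if matched:
            hits.append((matched, sentence.strip()))
    return hits

# Invented sample policy text for demonstration.
policy = (
    "We retain your data for as long as your account is active. "
    "We never sell your data, except to trusted third parties. "
    "You may opt-out of marketing emails at any time."
)

for terms, sentence in scan_policy(policy):
    print(terms, "->", sentence)
```

Even a crude scan like this surfaces the clauses worth reading in full before you tap "Agree."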

“All consumers must understand the threats, their rights, and what companies are asking you to agree to in return for downloading any app,” Adam Levin, Founder of CyberScout says. “We’re living in an instant-gratification society, where people are more willing to agree to something because they want it right now. But this usually comes at a price” (Source).

New York Passes Data Breach Law

New York recently passed a law known as the SHIELD Act, or the Stop Hacks and Improve Electronic Data Security Act, which applies to any business that collects the personal data of New York residents. Among other things, the act:

  • requires notification to affected consumers when there is a security breach,
  • broadens the scope of covered information,
  • expands the definition of a data breach,
  • and extends the notification requirement to any entity holding the private information of a New York resident (Source).

Why Apple Won’t Let You Delete Siri Recordings

Apple says the same design that protects its users’ privacy is the reason it cannot let them delete specific recordings. “Apple’s Siri recordings are given a random identifier each time the voice assistant is activated. That practice means Apple can’t find your specific voice recordings. It also means voice recordings can’t be traced back to a specific account or device” (Source).

After it was reported that contractors were listening to private Siri conversations, including doctor discussions and intimate encounters, Apple needed to change its privacy policies. 

Siri works differently from rivals like Google Assistant and Alexa, whose data is tied directly to a user’s account for personalization and customer service. Apple doesn’t rely as heavily on ad revenue and customer personalization; its business rests on hardware products and services.

LAPD Data Breach Exposes 2,500 Officers’ Data

The PII of about 17,500 LAPD applicants and 2,500 officers was stolen in a recent data breach, with information such as names, IDs, addresses, dates of birth, and employee IDs compromised.

LAPD and the city are working together to understand the severity and impact of the breach. 

“We are also taking steps to ensure the department’s data is protected from any further intrusions,” the LAPD said. “The employees and individuals who may have been affected by this incident have been notified, and we will continue to update them as we progress through this investigation” (Source).

Capital One: An Expensive Lesson to Learn

As part of their business practices, organizations are uploading private customer information to the Cloud. However, focusing only on how secure that data is, without thinking about privacy, is a mistake.

Capital One’s recent data breach proves that organizations need to be more conscious and proactive about their data protection efforts to prevent potential privacy exposure risks. Organizations have an obligation to ensure their customers’ data is fully privacy-protected before it is uploaded to the Cloud. This doesn’t just mean eliminating or encrypting client names, IDs, etc. It also entails understanding the risks of re-identification and applying as many privacy-protecting techniques as needed.

Capital One’s $150 Million USD Mistake

This month, one of the United States’ largest credit card issuers, Capital One, publicly disclosed a massive data breach affecting over 106 million people. Full names, addresses, postal codes, phone numbers, email addresses, dates of birth, SINs/SSNs, credit scores, bank balances, and income amounts were compromised (Source).

Former AWS systems engineer Paige Thompson was arrested for computer fraud and abuse after gaining unauthorized access to Capital One customer data and credit card applications (Source). “Thompson accessed the Capital One data through exploiting a ‘misconfiguration’ of a firewall on a web application, allowing her to determine where the information was stored,” F.B.I. officials stated. “These systems are very complex and very granular. People make mistakes” (Source).

To make amends, Capital One is providing affected customers with free credit monitoring and identity theft insurance. It will also notify customers whose data has been compromised (Source).

Unfortunately, the company expects the breach to cost about $150 million USD, driven by customer notifications, credit monitoring, technology costs, and legal support.

How the breach could have been avoided

Simply encrypting data clearly isn’t enough: Thompson was able to exploit a security vulnerability and decrypt the data (Source).

Organizations should apply as many privacy-protecting techniques as possible to their dataset to minimize risks of customer re-identification in case of a data breach.

One way in which data can be privacy-protected to reduce the risk of re-identification is by anonymizing it. The best privacy technique to accomplish anonymization is differential privacy, which uses mathematical guarantees to hide whether an individual is present in a data set or not. 
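
As a minimal sketch of the underlying idea (not any vendor's implementation; the query and numbers are invented), the classic Laplace mechanism of differential privacy adds noise scaled to 1/epsilon to a counting query:

```python
import random

def dp_count(true_count, epsilon):
    """Release a count with Laplace noise calibrated to sensitivity 1.

    The difference of two Exponential(epsilon) draws follows a
    Laplace(0, 1/epsilon) distribution, so adding or removing any one
    individual shifts the output distribution by at most a factor of
    e^epsilon -- the core differential privacy guarantee.
    """
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical query: how many patients checked in on a given day?
noisy_answer = dp_count(412, epsilon=0.5)  # typical error around 1/epsilon = 2
```

Smaller epsilon means stronger privacy but noisier answers, so choosing epsilon is a deliberate privacy/utility trade-off.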

A second way to reduce the risk of re-identification is to combine pseudonymization of direct identifiers with generalization and suppression of indirect identifiers. Optimal k-anonymity is a privacy technique that generalizes and suppresses data until any individual is indistinguishable from at least k − 1 others in the data set.
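
To make the k-anonymity idea concrete, here is a toy sketch (the records, column names, and age-banding scheme are invented for illustration). k is the size of the smallest group of records sharing the same quasi-identifier values, and generalizing an indirect identifier like age raises it:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest equivalence-class size over the quasi-identifier columns."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

def generalize_age(record):
    """Generalize an exact age to a 10-year band (e.g. 34 -> '30-39')."""
    band = (record["age"] // 10) * 10
    out = dict(record)
    out["age"] = f"{band}-{band + 9}"
    return out

# Invented sample records.
records = [
    {"age": 34, "zip": "10012", "diagnosis": "flu"},
    {"age": 36, "zip": "10012", "diagnosis": "asthma"},
    {"age": 31, "zip": "10012", "diagnosis": "flu"},
]

raw_k = k_anonymity(records, ["age", "zip"])      # every record is unique
generalized = [generalize_age(r) for r in records]
gen_k = k_anonymity(generalized, ["age", "zip"])  # records now blend together
```

Real systems search for the generalization that reaches a target k while suppressing as little data as possible, which is what "optimal" refers to above.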

Organizations should elevate their understanding of privacy protection to the same level at which they understand cybersecurity. There are two essential questions every organization needs to be able to answer:

  1. What is the re-identification risk of my data?
  2. What privacy-protecting techniques can we implement throughout our data pipeline?

To learn more about how CryptoNumerics can help you privacy-protect your data, click here.

Your Employer Is Watching You

This Week: Your employer has too much information about you. Lessons to be learned from Facebook’s $5 billion USD settlement. Why Snapchat is different.

Think Facebook and Amazon have too much of your personal data? Think again.

The truth is, your employer has much more of your private data than your social media, banking, or e-commerce accounts. 

The majority of employees feel uncomfortable with their employers tracking their activity in the workplace, on the network, or on their devices. However, this is slowly evolving: a Gartner study found that as employers become more transparent about monitoring, employees become more willing to accept being watched.

Regardless, there is still a significant power imbalance. Unfortunately, employers have broad latitude to install monitoring tools and tracking systems on employee devices and internet connections, and there is still no federal regulation preventing workplace surveillance.

There are three key ways employers monitor their employees: 

  1. Location tracking via an employee ID badge or a company device.
  2. Communication tracking by monitoring email, Slack messaging, and keystroke logging. 
  3. Health monitoring, such as sleeping patterns and fitness through wellness programs. 

Here are some steps employees can take to protect themselves from their employer’s surveillance systems:

  1. Assume you are always being watched. Anything you do on the company’s devices, Wi-Fi, email, messaging platform, etc. could be tracked. 
  2. Keep it professional. Keep your work and personal devices separate. Anything on the company Wi-Fi can be scanned.
  3. Understand what information you are giving your employer. Carefully read over documents and contracts, like the company’s privacy policy, union rules, and your employment contract.

Three takeaways from the $5 Billion USD FTC and Facebook settlement

Known as the largest fine ever imposed by the FTC, the settlement reached with Facebook has three key takeaways:

  1. The size of the fine itself. Facebook did agree to this amount, but is $5 billion USD significant enough to push Facebook to change its policies?
  2. The structural remedies required in addition to the fine. For example, the company must create a committee that deals exclusively with privacy.
  3. The definitions at the beginning of the settlement order. Some of these may signal how the FTC interprets current laws and regulations. For example, the order defines and clarifies the meaning of “covered information” and “personally identifiable information” (PII), terms understood differently around the world.

For more information on this settlement, click here to watch the IAPP video segment. 

Snap has risen above tech giants

Snap cares about your privacy way more than you can imagine. Their most used app, Snapchat, was originally designed for private conversations. It has unique features such as automatic content deletion, private posts, increased user privacy control and much more. “We’ve invested a lot in privacy, and we care a lot about the safety of our community,” CEO Evan Spiegel said in a quarterly earnings call. 

Several brand-safety-conscious companies, like Procter & Gamble, have boycotted Google and YouTube after inappropriate videos were posted openly, and are now prioritizing platforms that take brand safety seriously. Snap is hoping to secure a venture with P&G, as the two companies’ values on privacy and user safety are aligned.

FaceApp and Facebook Under the Magnifying Glass

FaceApp is Under Heavy Scrutiny After Making a Comeback

The U.S. government has aired concerns about privacy risks in the trending face-editing photo app FaceApp. With the 2020 presidential election campaigns underway, the FBI and the Federal Trade Commission are conducting a national security and privacy investigation into the app.

Written in the fine print, the app’s terms of use and privacy policy are rather shocking, according to information security expert Nick Tella. They state that as a user, you “grant FaceApp a perpetual, irrevocable, non-exclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels now known or later developed, without compensation to you”.

Social media experts and journalists don’t deny that users who download the app are willingly handing over their data under these terms of use. However, government bodies and other institutions aim to strengthen regulations and ensure data protection is effectively enforced.

For its part, FaceApp has denied any accusations of selling or misusing user data. In a statement cited by TechCrunch, the company said that “99% of users don’t log in; therefore, we don’t have access to any data that could identify a person”. It also assured the public that it deletes most images from its servers within 48 hours of upload, that its research and development team is its only team based in Russia, and that its servers are in the U.S.

With everything going on in the world around privacy and the misuse of user data, we must ask ourselves: should we think twice before trusting apps like FaceApp?

Facebook to Pay $5 Billion USD in Fines

On Friday, July 12th, the FTC and Facebook finalized a settlement resolving last year’s Cambridge Analytica data misuse, with a fine of $5 billion USD. Unfortunately, concerns remain over whether Facebook will change its privacy policies or data practices after paying the fine. “None of the conditions in the settlement will impose strict limitations on Facebook’s ability to collect and share data with third parties,” according to the New York Times.

Although the FTC has approved the settlement, it still needs approval from the Justice Department, which rarely rejects agreements reached by the FTC.

How Google Can Solve its Privacy Problems

Google and the University of Chicago’s Medical Center have made headlines for the wrong reasons. According to a June 26th New York Times report, a lawsuit filed in the US District Court for the Northern District of Illinois alleged that a data-sharing partnership between the University of Chicago’s Medical Center and Google had “shared too much personal information” without appropriate consent. Though the data sets had ostensibly been anonymized, the potential for re-identification was too high, compromising the privacy rights of the individual named in the lawsuit.

The project was touted as a way to improve predictions in medicine and realize the utility of electronic health records through data science. Its coverage today instead focuses on risks to patients and invasions of privacy. Across industries like finance, retail, telecom, and more, the same potential for positive impact through data science exists, as does the potential for exposure-risk to consumers. The potential value created through data science is such that institutions must figure out how to address privacy concerns.

No one wants their medical records and sensitive information to be exposed. Yet, they do want research to progress and to benefit from innovation. That is the dilemma faced by individuals today. People are okay with their data being used in medical research, so long as their data is protected and cannot be used to re-identify them. So where did the University of Chicago go wrong in sharing data with Google — and was it a case of negligence, ignorance, or a lack of investment?

The lawsuit claims that the data shared between the two parties was still susceptible to re-identification through inference attacks and the mosaic effect. Though the data sets had been stripped of direct identifiers and anonymized, they still contained date stamps of when patients checked in and out of the hospital. Combined with other data Google held separately, like location data from phones and mapping apps, the university’s data could be used to re-identify individuals in the data set. Free-text medical notes from doctors, though de-identified in some fashion, were also included, further compounding the exposure of private information.

Inference attacks and mosaic effect methods combine information from different data sets to re-identify individuals. They are now well-documented realities that institutions cannot be excused for being ignorant of. Indirect identifiers must also be assessed for the risk of re-identification of an individual and included when considering privacy-protection. 
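
A toy illustration of such a linkage attack (all records and names are invented): joining an "anonymized" data set to a separately held data set on a shared indirect identifier, here a check-in timestamp, re-identifies the individuals:

```python
# Hospital records with direct identifiers stripped, but date stamps kept.
hospital = [
    {"patient": "anon-1", "check_in": "2019-06-03 09:15", "diagnosis": "flu"},
    {"patient": "anon-2", "check_in": "2019-06-03 14:40", "diagnosis": "asthma"},
]

# A second data set held by another party, e.g. phone location history.
locations = [
    {"name": "Alice", "seen_at_hospital": "2019-06-03 09:15"},
    {"name": "Bob", "seen_at_hospital": "2019-06-03 14:40"},
]

# Join on the shared indirect identifier to unmask the "anonymous" patients.
reidentified = {
    loc["name"]: rec["diagnosis"]
    for rec in hospital
    for loc in locations
    if loc["seen_at_hospital"] == rec["check_in"]
}
# reidentified now maps real names to supposedly private diagnoses.
```

This is why indirect identifiers like timestamps must be generalized or suppressed, not merely left in place after direct identifiers are removed.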

Significant advancements in data science have led to improvements in data privacy technologies and controls for data collaboration. Automated metadata classification and re-identification risk assessment and scoring are two processes that would have made an immediate difference in this case. Differential privacy and secure multiparty computation are two others.

Privacy automation systems encompassing these technologies are a reality today. Privacy management is often seen as additional overhead for data science projects. That is a mistake. Tactical use of data security tools like encryption and hashing to privacy-protect data sets is also not enough, as the victims of this case can attest.

As we saw with cybersecurity over the last decade, it took years of continued data theft and headline-making hacks before organizations implemented advanced security and intrusion detection systems. Such solutions are now seen as an essential component of enterprise infrastructure, with board-level commitment to keeping company data safe and the brand untarnished. Boards must reflect on the fallout of lawsuits like this one, in which customers’ identities are compromised and their trust damaged.

Today, data science projects without advanced, automated privacy-protection solutions should not pass internal privacy governance and data compliance. Nor should these projects use customer data, even anonymized data, until automated privacy risk assessment solutions can accurately reveal the level of re-identification risk (inclusive of inference attacks and the mosaic effect).

With the sensitivity around privacy in data science projects in today’s public discourse, any enterprise that does not invest in and implement advanced privacy management systems exposes itself as having no regard for the ethical use of customer data. The potential for harm is not a matter of if, but when.
