Why it's so hard to punish companies for data breaches

It is difficult for regulators to determine how and where firms like Facebook went wrong, as well as the financial impact of cybersecurity lapses.

WHAT happens to the companies that allow our personal data to be stolen? In most cases, nothing. Sometimes, there is a short-lived flurry of bad publicity, a brief dip in stock prices, a class-action lawsuit or a Federal Trade Commission (FTC) investigation that leads to a token settlement or fine. Facebook is unlikely to face any serious, long-term consequences as a result of a security breach it announced last month, which exposed the account data of 50 million users.

At first glance, the lack of consequences that companies face for data breaches might seem to be a clear problem and something that can be easily remedied through heavy regulation like the European Union's General Data Protection Regulation. However, the problem turns out to be more complicated than that. Two challenges, in particular, have hindered effective legal and regulatory responses to breaches: determining whether a company was negligent in its security practices, and figuring out how to calculate the monetary value of stolen personal information and the harm inflicted on the people whose data was breached.

The fact that your personal information was stolen from a company does not necessarily mean that the company did a poor job of securing your data and therefore deserves to be punished. The Facebook breach, for example, was made possible by three software vulnerabilities tied to user tools for privacy and for uploading birthday videos. These vulnerabilities might seem like problems that Facebook should have caught early on, but the truth is that every company has bugs like these in its software.

The crucial question for determining Facebook's culpability for this breach is whether it should have been able to catch and fix those particular vulnerabilities sooner. Did the company test the code rigorously before releasing it? Did it ignore any warning signs or outside notifications about the bugs? Did it take swift action when it first realised something was wrong?

Perhaps Facebook did absolutely everything right and was just unlucky. If that's the case, then it would be unfair and unproductive to penalise the company severely. On the other hand, if Facebook ignored several warning signs and failed to properly vet its new tools before releasing them, then it is entirely appropriate for the company to face a significant fine as an incentive to be more attentive to security in the future.

The difficulty lies in trying to determine where the line is between companies that do their due diligence and those that are negligent. In one of the few cases that directly tackled this issue, the United States Court of Appeals for the Third Circuit ruled in 2015 that the Wyndham Worldwide hotel chain had failed to provide its customers with reasonable security protection because it failed to use firewalls or encryption and did not require users to change default passwords. But for a company like Facebook, which certainly clears that very low bar, it is still not clear what level of security is high enough for it to fulfil its responsibility to its customers.

Beyond the challenge of deciding whether a company like Facebook was negligent, it is often difficult for judges and regulators to put a dollar value on how much harm a data breach has caused to the people affected. It's easier to assess the damages if the stolen data can be directly tied to financial losses, but often, stolen data leads to more personal humiliations, as in the case of the 2015 breach of the Ashley Madison website that revealed the identities of people seeking extramarital affairs, or Sony Pictures employees whose embarrassing emails were leaked in the aftermath of the company's 2014 breach.

We do not know how to put a price on these types of losses of privacy, and that makes it harder to use legal or regulatory remedies to punish the companies that are responsible. That, in turn, reduces the financial incentives for all companies to invest in securing user data. The class-action lawsuit brought against Ashley Madison, for instance, culminated last year in a settlement totalling US$11.2 million. And the FTC, which had initially sought a penalty of US$17.5 million for that breach, agreed to a US$1.6 million settlement. Meanwhile, the company reported strong growth in its user numbers following the breach.

After its 2017 breach exposed the data of 146 million people, the Equifax credit reporting bureau is also doing just fine. Just one year later, it is on track to post record profits. Yes, Britain fined Equifax 500,000 pounds (S$904,000) last month (hardly a staggering sum for a company that brings in upward of US$3 billion in annual revenue), but the United States Consumer Financial Protection Bureau decided this year to back off from its investigation of the breach.

It's unlikely that the United States government will take the lead in investigating what happened at Facebook and whether the breach warrants any consequences. It's possible that the European Union will use the increased penalties in its new General Data Protection Regulation to fine Facebook up to US$1.63 billion, but the actual fines will almost certainly be much less than that since regulators will be wary of chasing the company away. Not only would European Facebook users be upset if they could no longer use the site, but Facebook employs thousands of people in Ireland, the country whose Data Protection Commission is currently investigating the breach. A fine of more than US$1.5 billion would hurt Ireland as much as it hurts Facebook if it drove the company out of the country.

What should happen to these companies? Ideally, they would face a combination of consequences that included both fines and corrective security measures. The fines would need to be hefty enough to motivate greater investment in data security and cover their customers' losses - recognising not just lost money but also lost time, lost privacy and lost peace of mind - but not so hefty that the fines could never be enforced or would drive the company out of the country.

The corrective measures would be designed to ensure that breached companies ramp up their security and privacy practices. This could mean requiring outside audits of their security programmes or mandating that they adhere to recommended security standards published by the National Institute of Standards and Technology (an agency of the Department of Commerce) or the International Organization for Standardization. It could also involve asking them to provide security resources and assistance to the third parties whose security has also been undermined by the breach, as in the Facebook case, where apps such as Spotify and Yelp that allowed users to authenticate using their Facebook credentials were also affected by the breach.

We are caught between two extremes: a weak regulatory system in the United States that refuses to so much as investigate the Equifax breach and a fine-based scheme in Europe that is so harsh that regulators will never be able to impose the maximum allowable penalties. Neither of these systems comes close to striking the right balance of financial penalties mixed with corrective security measures. And until they do, companies will continue to escape serious consequences for their breaches. NYTIMES

  • The writer is an assistant professor at the Rochester Institute of Technology and the author of You'll See This Message When It Is Too Late: The Legal and Economic Aftermath of Cybersecurity Breaches. @josephinecwolff.
