The issues related to data breaches – their causes, impacts, and solutions – are vast. The data breach trends section shows that breaches can originate from outside attacks, whether initiated by hacktivists, state-sponsored hackers, or attackers motivated by financial gain; from inside attacks with their own set of motivations; or from accidental loss. The case study section highlights the reasons for and means of breaching a system, and the resulting impacts on the organisation, its customers, and others. We now focus on a set of issues that emerge repeatedly, and point the way toward recommendations.

First, the impact of data breaches can be extensive and broad-ranging. In the case of the Target breach, significant financial costs were imposed on Target, on the banks forced to replace compromised credit cards, and on customers having to address the resulting fraud. In the case of Ashley Madison, the costs extend far beyond the financial, as users’ personal affairs were exposed. In the case of the Office of Personnel Management, not only were employees’ and others’ private data exposed, but the breach made it possible to establish the identity of certain employees by using stolen biometric information, with unknowable consequences.

In the face of these financial and non-financial costs, it is puzzling to learn that many of these breaches exploited known vulnerabilities and were preventable. For some, patches were available but not applied. Others involved social engineering attacks on employees, again using known approaches that are possible to guard against.

Of course, not all breaches result from attacks, and not all attacks are preventable. Some attacks use zero-day exploits that were not known before they were employed. Other breaches result from an accidental disclosure of data, sometimes through the loss of a device containing sensitive data. While not preventable, such breaches are, given how common they are, at least foreseeable, and it is therefore possible to mitigate their impact.

The question here is ‘why?’ Why, given the cost of a breach, is more not done to address the preventable ones, and to lower the cost and impact of the foreseeable ones? This is where the economics of trust becomes relevant.

This section is organised as follows. First, it outlines the actions that could be taken to prevent the preventable attacks and to mitigate the non-preventable ones. It then turns to the economics of why such actions are not uniformly taken.

Many attacks are preventable

It is striking to learn that many, if not most, attacks could be prevented with up-to-date systems and employees trained in data security and in how to avoid social engineering attacks. One recent study of reported data breaches found that 93% were avoidable.1

Known Vulnerabilities

According to a Verizon report, 70% of outside attacks rely on known vulnerabilities, some of which date as far back as 1999.2 Further, the report shows that ten known vulnerabilities accounted for almost 97% of the security exploits in 2014, and 85% in 2015.3 While these must be patched, that still leaves a long tail of known vulnerabilities to address.

Another report highlighted a specific angle of the same problem: Symantec found that 78% of the websites it had scanned had known vulnerabilities. Further, 15% of these were critical, allowing malicious code that could result in a data breach and compromise visitors to the websites.4

As one prominent example of security challenges, many web attacks focus on third-party plugins. These include web browser plugins such as the Adobe Flash Player, which has been a significant source of attacks over the years, including a large proportion of zero-day exploits.5

Plugin issues are not restricted to browsers, but also affect websites. WordPress is the basis of 25% of global websites and allows anyone to write a plugin. These plugins increase the functionality of websites, for instance enabling easy entry of contact details, but may also be vulnerable to attacks such as SQL injection.
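To illustrate why poorly written plugin code is dangerous, the following minimal Python sketch (using an illustrative in-memory database, not any real WordPress plugin) shows how SQL built by string concatenation can be subverted, and how a parameterised query prevents it:

```python
import sqlite3

# In-memory database standing in for a website's backend (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

# UNSAFE: concatenating user input into the SQL string lets crafted input
# change the meaning of the query -- a classic SQL injection.
malicious = "' OR '1'='1"
unsafe_query = "SELECT email FROM users WHERE name = '" + malicious + "'"
leaked = conn.execute(unsafe_query).fetchall()  # the injected OR clause matches every row

# SAFE: a parameterised query keeps the input as data, not SQL.
safe = conn.execute("SELECT email FROM users WHERE name = ?", (malicious,)).fetchall()

print(len(leaked), len(safe))  # the unsafe query leaks rows; the safe one matches none
```

The fix is purely mechanical, which is part of why vulnerabilities of this kind are considered preventable.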

Software providers can enable third-party developers to add features to their software through plugins. For instance, web browsers enable plugins to be developed and installed by users to run audio, video, or offer other features such as changing the look and feel of the original software.

A common example of a plugin is Adobe Flash, which enables audio or video playback on web browsers. Developers of video content can make it available in Adobe Flash, and users will be prompted to download the Player plugin to view such content if they do not already have it.

The features that make plugins valuable also make them vulnerable. The ability to use plugins allows third-party developers to add functionality to the underlying software platform. Ready-made solutions such as Adobe Flash can help simplify the way that content is served, making it easier for providers to deliver content.

This helps to promote content availability. However, it also creates more targets for attacks because the user base is larger, and the plugins can be developed and installed separately from the underlying platform, which reduces the ability to screen the software and prevent attacks.6

While not all attacks on websites focus on plugins, they offer a good illustration of the challenges resulting from opening a platform to third-party software that may have security vulnerabilities.

Social Engineering

Social engineering is a common technique hackers use to gain entry to a closed system. Employees are tricked into giving up their passwords or directly introducing the infection themselves.

One popular practice is called phishing: an official-looking email directs users to log in to a fake site or includes a malware attachment.

There is evidence these phishing campaigns are quite effective, even against security companies. This technique was used to attack Target, via their refrigeration contractor.

According to Verizon, in one test 150,000 emails were sent out and within the first hour, 50% of users had opened them and clicked on phishing links, with the first click coming within 82 seconds.7

A more recent trend is known as spear-phishing, which is more targeted than a general phishing campaign. In this case, the emails target employees of a particular company, and often specific employees, providing more details than a typical phishing email, to increase the chances the attack is successful.

The attackers may even use social media to learn about the target employee or company to make the email look like it comes from someone who knows the target.

These campaigns can get even more targeted, using specific information about a company to deliver instructions purporting to come from, for example, a travelling CEO. Using a fake email address, the attackers request a money transfer as part of a transaction. As improbable as it sounds, in one case such a campaign led a company to transfer USD 46.7 million to overseas accounts.8

Phishing is not the only means of social engineering an attack. Some experiments have been conducted in which USB memory sticks were dropped in areas such as employee parking lots. Up to half were plugged in, the first in as little as six minutes.9 In these cases, the USB stick relayed back to the researchers that it was opened, but a malicious person could have infected the computer with malware.

The problem of social engineering is magnified by common work trends. With the increase in working from home, along with ‘bring your own device’ (BYOD) policies enabling employees to use their own PCs or mobile devices, which may not be sufficiently protected, a social engineering attack on an individual through his or her personal device could also compromise the employer’s system.

The human tendency to re-use passwords does not help. If someone uses the same password in their private and professional lives, a successful phishing attempt against a personal account could also compromise their corporate network, leading to a data breach.

A number of password shortcomings, including re-use, were revealed by the full release in 2016 of 167 million user accounts from LinkedIn, hacked four years earlier in a massive data breach. This leak had significant implications.

First, Mark Zuckerberg illustrated how it is common to re-use passwords. His password on LinkedIn was apparently ‘dadada’ and hackers used that to take over his Twitter and Pinterest accounts, mainly, it appears, to brag they had done so.10

Second, it may not always be necessary to use social engineering to learn a password, as the most common password turned out to be ‘123456’ followed by ‘linkedin’, and then, of course, ‘password’.11
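One routine defence against such trivially guessable passwords is to reject any candidate that appears on a denylist of common choices. A minimal sketch in Python, assuming a small illustrative denylist (real systems would check against large corpora of breached passwords):

```python
# A tiny denylist drawn from the common passwords mentioned above;
# production systems would use a much larger list from breach corpora.
COMMON_PASSWORDS = {"123456", "linkedin", "password", "dadada", "qwerty"}

def is_acceptable(password: str, min_length: int = 10) -> bool:
    """Reject passwords that are too short or appear on the denylist."""
    return len(password) >= min_length and password.lower() not in COMMON_PASSWORDS

print(is_acceptable("123456"))                 # False: on the denylist
print(is_acceptable("dadada"))                 # False: too short and denylisted
print(is_acceptable("correct-horse-battery"))  # True: long and not denylisted
```

Checks like this cost almost nothing to run at registration time, which again raises the question of why they are not universal.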

Not all attacks are preventable

It is not possible to protect against all cyber vulnerabilities. Some are unknown, or difficult to fix. Other breaches result from accidental loss or release of data. In all cases, however, actions can be taken to mitigate the impact of the outcome.12

Unknown vulnerabilities

While it is possible to protect against known security vulnerabilities, each known vulnerability was at some point unknown, and there would have been no way to prevent attacks exploiting it.13 Such attacks are called zero-day exploits.

According to Symantec, the number of zero-day exploits has been increasing in recent years, to 54 in 2015, up from 24 in 2014 and 14 in 2013. Of course, by definition, not all zero-day exploits are known today; some may be waiting for the right target or the right price.

There is a sophisticated black market for zero-day exploits, which can be sold to hackers, governments, and the companies who produced the software. Once the zero-day is used, it may become a ‘half-day’ exploit used on targets that have not yet patched the vulnerability.14

Market prices for a zero-day exploit depend on the target of the vulnerability but may be as high as USD 250,000, for example, for a recent Apple iOS vulnerability. There is also a ‘white market’ for the exploits, which may be offered by the original software developer, but typically the price is USD 10,000 or lower.15

Finally, some zero-day exploits are used by those developing them. The Hacking Team, a company selling commercial surveillance software to governments and other buyers, develops such exploits to use in their software. However, many of their zero-day exploits were released in a breach of the Hacking Team and were quickly included in exploit kits such as Angler, for broader usage.16

Insider actions

In addition to outside attacks, which according to most studies represent the largest group of attacks, employees also play a role in data breaches. Sometimes this is with malicious intent, in other cases it results from accidental disclosures or loss of devices with valuable data. Symantec provides a breakdown for 2015 in the following graph.

Everyone makes mistakes, and that can include coding a new website with bugs, losing a USB key, or hiring the wrong person, and some of these mistakes lead to data breaches. As discussed in the recommendations section, it seems safer to design technology around humans than to expect humans to design their actions around technology.

In some cases, even trying to do the right thing may not be enough. In one study, a security firm bought 200 second-hand hard drives off eBay and Craigslist and found that, in spite of attempts to delete data, 67% contained personal information that could be recovered, including social security numbers, and 11% had sensitive corporate data, including emails and sales projection data.17

In most cases, the data had been deleted, but in ways that could be recovered, such as putting it in the trash or recycle bin. In only 10% of the cases was the data permanently erased. While part of the issue is understanding the difference between ‘delete’ and ‘erase’, the tools need to make this simple for all users.
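The distinction between ‘delete’ and ‘erase’ can be sketched as follows. This illustrative Python routine overwrites a file’s contents before unlinking it; it is a simplification, since on SSDs and journalling filesystems overwriting in place does not guarantee the physical blocks are reached, which is why real tools use device-level erasure:

```python
import os
import tempfile

def secure_erase(path: str, passes: int = 1) -> None:
    """Overwrite a file with random bytes before unlinking it.

    Sketch of the 'erase, not just delete' idea: a plain delete only
    removes the directory entry, leaving the bytes recoverable.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace contents with noise
            f.flush()
            os.fsync(f.fileno())       # force the overwrite to disk
    os.remove(path)

# Demonstration with a temporary file holding mock sensitive data.
fd, path = tempfile.mkstemp()
os.write(fd, b"social security numbers...")
os.close(fd)
secure_erase(path)
print(os.path.exists(path))  # False: the file is gone, contents overwritten first
```

The point is not the specific routine, but that tooling should make the safe behaviour the default rather than expecting every user to know the difference.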

It happens to the best of them.

Further proving that breaches will happen even when security is a core business, a number of recent targets have themselves been in the cybersecurity business, yet still could not avoid a targeted attack.

RSA Security: In 2011, an employee of RSA’s parent company EMC clicked on a file entitled ‘2011 Recruitment plan.xls’ attached to a spear-phishing email, which used a zero-day exploit to install an infection through Adobe Flash. This enabled the hackers to gather information about RSA’s SecurID two-factor authentication product, presumably to be used on other targets.21

Hacking Team: Hacking Team is an Italian IT company that develops commercial surveillance – e.g. hacking – tools for governments, law enforcement, and commercial companies. It was the subject of a data breach in 2015 using a zero-day exploit, which leaked 400 gigabytes of data, including emails, a few zero-day exploits which Hacking Team had found for its own uses, and a list of clients. The clients included some repressive governments, leading the Italian government to revoke its license to sell outside of Europe without permission.22

Kaspersky Lab: The Russian Internet security company was hacked in 2015, using what the company called a sophisticated attack involving three zero-day exploits. The company claims that some data was taken, but nothing critical to its operations.23

The list goes on. Security and surveillance companies appear to present an attractive target for attackers: some attack seemingly for bragging rights, to prove they can break the most secure of systems, and others to use the security information gathered to attack their true target.

For our purposes, this reinforces the idea that full prevention is not possible, and there is no such thing as absolute security – a determined and skilled attacker, focused on a particular company, seems to be unstoppable. However, there are steps that can be taken to increase the cost and difficulty of successfully executing an attack, to increase the possibility of detecting an intrusion, to mitigate data breaches, and to recover faster.

Organizations can mitigate the impact of an attack

Prevention is important, to protect against opportunistic attacks like phishing exercises, and even against more targeted attacks, like spear-phishing. However, prevention cannot be the only plan, because a determined attacker will likely succeed.

Accepting that a breach is possible under the best of circumstances, and probable under the worst, steps can be taken to minimise the damage. The full playbook is lengthy and requires a broad and deep strategy including various technical tools, such as early detection tools, training, and a legal and communications plan.24

Here are two straightforward ways to mitigate the impact of a breach:

  • First, attackers cannot take data that does not exist.
  • Second, any data that is taken has no value if it cannot be read.

More detail on these principles can be found in the recommendations section.

The increasing number of devices and sensors gathering data, online activity generating input, and venture capital seeking the next big thing, are all matched in pace by the falling cost of data storage, creating a perfect storm of big data.

However, as cybersecurity expert Bruce Schneier has pointed out, such data can be a ‘toxic asset’.25 The cost of the data, in a breach, can far outweigh any benefits it may have reaped otherwise.

Of course, data gathering can be minimised, but some data collection remains essential. For the data they do keep, companies should reduce the impact of any loss through appropriate encryption – if it cannot be read, it cannot be used.

Many organisations are not routinely minimising the data they collect and encrypting what they have. These are such obvious protective measures that, without looking at the economic factors, it is hard to understand why they are not used more extensively.
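As an illustration of the minimisation principle, the sketch below replaces a raw identifier with a keyed hash before storage, so a stolen record does not expose the identifier itself. This is pseudonymisation rather than full encryption at rest (which would use a vetted cipher library), and all names and values here are hypothetical:

```python
import hashlib
import hmac
import os

# In practice the key would live in a key-management system, not in code.
SECRET_KEY = os.urandom(32)

def pseudonymise(identifier: str) -> str:
    """Replace a raw identifier with a keyed HMAC-SHA256 digest.

    Without the key, the stored value cannot be linked back to the
    original identifier, so a leaked record is far less useful.
    """
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# A stored record keeps the digest, never the raw email address.
record = {"customer": pseudonymise("alice@example.com"), "total": 42.50}
print("alice@example.com" in str(record))  # False: the raw email is never stored
```

The same keyed-hash output lets the organisation still match records for the same customer, while an attacker who takes the database but not the key gains little.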

Why are organisations not taking more steps to prevent breaches and mitigate costs?

The economics of data breaches and their impact on trust is at the heart of this report. This report highlights some of the costs of a breach, which can be quite high, and some of the causes. While not all breaches are preventable, many of them are, as discussed in the case studies section.

For instance, Target was hacked through a connection to a refrigeration contractor. One of the contractor’s employees fell prey to a phishing attack, which succeeded due to inadequate anti-virus software. The malware was used to access Target’s point-of-sale terminals to gather data, likely because of the use of weak or default passwords in one or more systems. Was the employee trained in the risks and dangers of phishing attacks? Why was a home version of an anti-virus program considered sufficient? Did Target have any way to vet the security of the refrigeration contractor’s system before connecting? Why were default passwords still in use?

Likewise, after a breach, could the impact be lowered? In the case of TalkTalk, at first, the CEO said she did not know if the customer data stolen had been encrypted. Then, admitting it was not encrypted, she argued TalkTalk had met all of their regulatory requirements. Why would the CEO of a major broadband provider, experiencing its third security event in succession, not know if its customer data was encrypted?

In the case of Ashley Madison, some members whose personal information was exposed had paid the company USD 19 to delete their records, which was either not done, or not done correctly. Charging to delete customer records is not a common practice, but perhaps understandable given the nature of Ashley Madison’s core service. But having offered the paid service of deleting records, why take the risk of not fully deleting them?

In the Target case, did the refrigeration contractor, having provided the initial breach point, bear any of the cost of the breach? Target itself did not bear all the costs. The banks spent at least USD 240 million replacing compromised credit cards, although they were able to recover some through lawsuits.26 The aftermath of data breaches also reveals some clues. Ashley Madison customers had no way to know that their records were not safe – could another service have competed by claiming they could have offered better data security?

Why, given the potential costs, were more efforts not taken to prevent or mitigate the risks of a data breach? In economic terms, we can explain this with two concepts that can be boiled down simply to costs and benefits. The cost of a breach is not entirely borne by the organisation breached, and the benefit of offering better data security is not high enough.


In all likelihood, the data collector who is breached does not bear all of the costs of the breach – the cost borne by others is an externality.

  • While the CEO of Ashley Madison had his own alleged extramarital affairs exposed from the breach, he might not account for the full impact of potential disclosure on others when he decides how much to spend on security.
  • While Target bore a significant cost after their breach, they did not bear the cost of replacing all of their customer’s credit cards, an externality borne by the banks.
  • An employee provided the information used to hack the AOL account of the CIA Director, whose emails were exposed and who had to take the time to deal with the breach.
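A stylised calculation, with entirely hypothetical figures, shows how this externality leads to under-investment: the firm weighs a security programme’s cost only against its own expected loss, not the loss borne by banks, customers, and others:

```python
# All figures are illustrative, not drawn from any real breach.
security_spend = 1_000_000    # cost of an additional security programme
risk_reduction = 0.10         # it cuts breach probability by 10 percentage points
private_cost = 8_000_000      # breach cost borne by the firm itself
external_cost = 12_000_000    # breach cost borne by banks, customers, others

# Expected benefit of the programme, from each perspective.
private_benefit = risk_reduction * private_cost                   # 800,000
social_benefit = risk_reduction * (private_cost + external_cost)  # 2,000,000

# The firm, seeing only its private benefit, skips the spend;
# society, counting the externality, would want it made.
print(private_benefit < security_spend)  # True: privately not worth it
print(social_benefit > security_spend)   # True: socially worthwhile
```

Liability rules and penalties work precisely by moving some of the external cost back into the firm’s private calculation.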

In countries where disclosure is not even required, the externalities are even greater, as the companies may not bear any reputational cost from the breach, further lowering the incentive to invest in cybersecurity.

Further, the weight of data breaches impacts future trust, both for those who were directly affected and for those who learn about them indirectly. This can lead to a reluctance to go online and, once online, a reluctance to use services requiring personal information, which in turn can limit the growth of the Internet economy. This impact on trust is an externality, and from an economic perspective, there is no reason for organisations to account for their impact on trust in the entire Internet when they take decisions on data breach prevention and mitigation. However, this is an impact which society cannot neglect.

Asymmetric information

Stakeholders have asymmetric information about the risks they may face, making it difficult to make rational decisions. In particular, it makes it difficult for organisations to benefit from taking the right steps to avoid data breaches. Target cannot check the anti-malware software of every one of its contractors; the CIA Director cannot know how well Verizon employees are trained to resist social engineering attacks. The issue is deeper than this: Ashley Madison cannot credibly signal that they have done the utmost to protect the data of their current customers, and that they have truly deleted the data of the former users who paid to have it deleted.

Issues of adverse selection and moral hazard arise from the asymmetric information. Consider the example of an online retailer, who is worried about being hacked, and wants to take actions to protect the company from a data breach.

Assume the retailer decided to invest a significant amount to protect their users’ information from hackers, as a means to compete with other online retailers who might be more vulnerable. How would they signal this credibly to users? They could point out they have not been hacked, but that does not mean they could not be hacked. If there is no way to signal it, there is no way to win more customers, and thus by adverse selection, the market would consist of retailers who have underinvested in security.

If the retailer is still worried about the risks of a data breach – not having invested the optimal amount in cybersecurity – the company might instead choose protection through cybersecurity insurance (an example of adverse selection: those most at risk are most likely to take out insurance). Now moral hazard can kick in: having the insurance means potentially investing even less in cybersecurity, because there is an even lower cost from a breach, which of course becomes more likely.

Of course, this is a stylized example, and there are no doubt many companies that recognize the full costs of a data breach, and invest wisely to prevent them. Regardless, this example raises some significant issues that must be addressed to increase security. In particular, the ways to credibly signal different attributes of security.

Economics 101

Externalities and asymmetric information are examples of market failure

Positive or negative externalities arise when a decision taken by one party provides a benefit or harm, to other parties, who have no voice in the decision. For instance, when a homeowner paints their house, they do it because it pleases them, even though it may make the neighbourhood more pleasant for other neighbours, and possibly even raise the value of their houses. On the other hand, if they paint their house in garish colours, it may have the opposite effect. Either way, the homeowner has no reason to take those effects into account – unless, of course, there are historical regulations or homeowner agreements governing the upkeep and colour of houses in the neighbourhood, to promote positive externalities and avoid negative ones.

Asymmetric information arises when one party to an agreement or exchange has more information than the other about the object of the exchange. The classic example is the used car market. The seller of the car knows more about its quality, and how it has been treated, than the buyer. It is difficult, however, for the owners of high-quality cars to convince buyers that they are high quality, so cars that are the same on paper (model, year, mileage driven), will sell for the same average price. As a result, high-quality cars are less likely to be sold, and the market is full of low quality ‘lemons’.27 While a used car dealer may be able to create a reputation for selling high-quality cars or provide a warranty to protect buyers, the individual seller of the used car may not have any reputation to uphold, and cannot credibly offer a personal warranty.

There are two particular outcomes of asymmetric information of interest here.

  • Adverse selection. Those with better information will be selective in how they participate in a market. In the used car market, without a means to signal if a used car is high-quality, only those with lower quality cars will sell, resulting in a market of lemons. In insurance markets, people understand their own risk better than the insurance company, which can also result in adverse selection, as those with higher risk may be more likely to take out insurance (and then, with a riskier pool of insured, premiums will rise accordingly).28
  • Moral hazard. Insurance may lead those with coverage to take less care because they do not bear the full cost of their actions. For instance, if one had a car insurance with no deductible, and no increase in premiums, then people would have less incentive to park their cars securely, or may even take more risk driving. This is known as moral hazard.

Car insurance has deductibles to address asymmetric information. First, with a deductible, owners bear some of the cost of their actions so there is less moral hazard. Second, some insurance companies offer different levels of premium and deductible to address adverse selection. Owners who know they have low risk will choose a low premium and a high deductible that they expect not to have to pay. Those with high risk will choose a higher premium and a lower deductible, that they know they may be likely to pay.
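The self-selection logic can be sketched with hypothetical numbers: offered a menu of contracts, each driver picks the one minimising their own expected annual cost, and the choice itself reveals their risk level:

```python
# A hypothetical menu of car-insurance contracts (all figures illustrative),
# showing how deductibles can separate low- and high-risk drivers.
contracts = {
    "low_premium":  {"premium": 300, "deductible": 1000},
    "high_premium": {"premium": 700, "deductible": 200},
}

def expected_cost(contract: dict, accident_prob: float) -> float:
    """Driver's expected annual cost: premium plus expected deductible payment."""
    return contract["premium"] + accident_prob * contract["deductible"]

def pick(accident_prob: float) -> str:
    """Each driver chooses the contract minimising their own expected cost."""
    return min(contracts, key=lambda name: expected_cost(contracts[name], accident_prob))

# Low-risk driver (5% accident chance): 300 + 0.05*1000 = 350 vs 700 + 0.05*200 = 710.
print(pick(0.05))  # low_premium
# High-risk driver (60% accident chance): 300 + 0.6*1000 = 900 vs 700 + 0.6*200 = 820.
print(pick(0.6))   # high_premium
```

The insurer never observes the risk directly; the deductible menu screens for it, which is the same screening idea that credible security signals would need to provide in the cybersecurity insurance market.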

While adverse selection can be addressed privately, such as by offering deductibles, in other cases the government may need to intervene. For instance, in healthcare, individuals know more about their own health history, genetic makeup, and daily activities than any insurance company could hope to find out (although, with cheap DNA tests, online histories, and fitness trackers, that could change). As a result of adverse selection, those with higher risks would be more likely to take out health insurance, raising the premiums. One of the many reasons for governments to provide healthcare (as in the United Kingdom) or to require everyone to have private insurance (as in Switzerland) is to create a broader and healthier pool to spread the risks and lower the premiums.

Similar issues arise with cybersecurity – the private market can help to find solutions to address asymmetric information, but governments may need to intervene in certain cases to help convey certain attributes of security.

The Attributes of Asymmetric Information

While the challenges in assessing the quality of a used car are easy to understand, even a new car purchase involves a lot of asymmetric information. There are many attributes involving different degrees of asymmetric information, and several ways to make sure the car meets those attributes. Buyers first need to decide the type of car to purchase. Even for a new car, there are concerns about its quality, its fuel efficiency, and what safety features it has. While some of these attributes are clear, others may never be.

So how do we decide?

The first thing many people choose is the type of car; some want a two-seat sports car, others a seven seat sport utility vehicle, and it is easy to identify which cars to consider based on those attributes. Other details are harder to find out – how the car drives, and how well it holds up over time. People can test drive the car to see how it handles, and the reputation of the car manufacturer may signal the quality. Finally, however, people cannot test the airbags, fuel efficiency, pollution levels, or the resistance of the car body in an accident. Here, people may need to rely on a third party, such as the government, to test and certify the car meets minimum standards.

In economic terms, there are three specific attributes of products or services, with respect to asymmetric information:

  • Search attributes, which one can identify in advance, such as the type of car.
  • Experience attributes, which only become apparent over time, such as the quality of the car.
  • Credence attributes, which one may never learn about, such as the quality of the airbag.

A number of models have emerged to assist us in assessing these attributes, which typically involve a third-party to help test, certify, or mandate one or more attributes.


Trusted third-party agents can test products and services against a number of attributes, and provide ratings for consumers before they purchase. For instance, Consumer Reports is a publication that rates a wide variety of products on a wide variety of attributes. For cars, it rates safety, reliability, and general consumer satisfaction with each model rated.29


For some attributes, it may not be necessary to provide a rating, but simply determine the product meets a certain baseline standard. For instance, UL (formerly Underwriters Laboratories) is a private company that can certify safety standards of products such as electrical products, often against their own benchmark.30 In automobiles, car manufacturers are allowed to self-certify certain attributes such as fuel economy and emissions in some countries, which has recently highlighted the need for third parties.31


For credence attributes, such as safety, a consumer or private third party agent may never be able to assess them. Governments may need to mandate safety standards. For instance, governments may be best placed to test-crash automobiles and ensure that they meet safety standards.

Because of asymmetric information, it is difficult for customers to assess the data security of organisations along various attributes. Ways for organisations to send credible signals of their security levels involving third-parties are discussed in the recommendations section.


In economics, we say there is a market failure when a market outcome is not efficient. A market outcome is efficient when no one could be made better off without harming someone else. One example of a market failure is monopoly power – when one company controls the market and can set prices higher than in competitive markets, then there will be potential customers who are willing to pay the cost, but not the inflated monopoly price. This excess demand is inefficient. In the case of a market failure, there is an argument that a third party could intervene. This is the role, for instance, of competition or antitrust authorities in governing market power.

When it is difficult for customers to distinguish the quality level between goods or services, asymmetric information poses a problem. In particular, the seller with high-quality items wants to distinguish themselves from lower-quality sellers. One solution is to send a signal to potential customers – to be credible, the signal must be one that only high-quality sellers can make. Branding is one type of signal – a company that invests in advertising its brand sends a signal that it knows it is high quality and will be able to recoup its investment. Banks attempt to send a similar signal by investing in expensive buildings. However, given the importance of banks in the economy, governments may support deposit insurance as the ultimate credible signal to customers that their deposits are secure.

The economics of data breaches highlights some key solutions.

First, organisations must be induced to internalise the negative externalities they cause other organisations and users, and society at large, to reduce the incentive to create them. In many cases, this can be monetary – just as taxes can reduce certain types of pollution, increasing the liability or penalty faced by the organisation responsible for allowing a breach to occur will no doubt lower the probability of one occurring. Just as some types of pollution are too toxic and must be outlawed, such as lead in paint or gasoline, there may be a need to impose certain data security practices outright.

Second, the way to address the problems of asymmetric information is to make information more symmetric. If organisations can credibly signal their cybersecurity levels to customers, then they will be more likely to invest in it, as their investment will be rewarded. This will also lead to a more vibrant cybersecurity insurance market, and reduce the extent of moral hazard, as companies with better practices will be rewarded with more favourable policies. In the end, customers will benefit because the organisations they interact with online will have the right incentives to increase data security.

These recommendations are addressed more fully in the next section.

Internet of Things

Looking forward, we can see similar economic issues playing out with emerging Internet of Things devices.

For instance, software companies typically avoid liability through their license conditions.32 As devices become more connected, they contain more software, and their makers could seek similar licenses. In the case of the connected Jeep hack, the company argued for the minimum level of liability, stating the hack was the act of a vandal, rather than a product defect that would raise its liability. This lack of liability could lead to significant externalities imposed by a broader range of devices including health devices, baby monitors, and a wide variety of sensors.

Likewise, someone shopping for a baby monitor, WiFi router, or connected car has no way to learn how well it has been protected from attackers. Makers thus have less incentive to invest in security, and may instead rush the device out to compete with others. Addressing security issues through patches is problematic when the patches themselves may be difficult to apply, as in the case of the Jeep, leading again to suboptimal security levels.

The potential issues go beyond data breaches. While a connected car may be hacked to reveal its location, the hack can also extend to personal safety, potentially at the cost of life and limb. We note the lessons of this report may extend forward to the Internet of Things, as well as more broadly to general security breaches.