Recommendations

The Data and Case Studies sections show that data breaches are a significant issue worldwide. Yet, despite growing awareness of the risk, they still happen, with a negative impact on user trust in the Internet. In seeking to understand the problem, the report highlights the underlying economic issues that may be hindering proper investment in, and adoption of, adequate data security measures.

This report highlights five recommendations for addressing the issues we have raised regarding the economics of data breaches. Each one helps to reinforce the others as part of a virtuous data security circle, as shown below.

The first recommendation is to put users, who are the ultimate victims of data breaches, at the centre of the solutions. As a way to kick-start this approach, our second recommendation is to increase transparency about the risk, incidence and impact of data breaches globally. This will help make data security a priority and create demand for better security tools and approaches to prevent and mitigate the problem.

To help increase the economic incentives for organisations to implement these tools, they should have increased accountability and bear more of the cost when a data breach occurs. At the same time, those organisations that have invested in better protection against data breaches should be able to provide credible security signals to the market, so that they can benefit from their increased security investments.

Underpinning these five recommendations are two important principles: data stewardship and collective responsibility.

We recognise these are medium- and long-term recommendations and that input from all relevant stakeholders is needed. As a starting point, we provide some suggestions on key points to begin the process of implementing them. We wish to start the dialogue and point the way, not to impose our own solutions.

The high-level principles underpinning the five recommendations are:

  • Data stewardship
    Organisations should regard themselves as custodians of their users’ data, protecting their data not only as a business necessity but also on behalf of the individuals themselves. This is consistent with the user-centric recommendation discussed below. Users would like organisations to view their personal information as more than a revenue source. Organisations should apply an ethical approach to data handling, and understand that they can do well by doing good – protecting users’ data should be a goal in its own right, which also protects the organisation.
  • Collective responsibility
    On the Internet, everyone is connected. One breach could lead to another – “your breach could be my breach”. Organisations share a collective responsibility with other stakeholders to secure the data ecosystem as a whole.1 For example: Vendors can help provide security solutions that make it easier to prevent breaches; Employees should protect their activities against hackers and accidental disclosure; Governments can help by creating an enabling environment for better security solutions; and other parties can play a critical role in providing independent standards and reviews at every stage of data security. Should one of these links not function, it could break the entire trust chain.
  • R1 – USER-CENTRIC
    Put users at the centre of solutions; and include the costs to both users and organisations when assessing the costs of data breaches.

The Internet Society has long advocated for a user-centric approach to Internet issues.2 A user-centric approach focuses on users and their needs.

In our work on this topic, we view users as the often overlooked subject of data breaches, even though they are ultimately the biggest victims.

Specifically, when there is a breach:

  • Users may not even be aware of a data breach, as many organisations do not notify them, in part because there are no disclosure requirements in many countries;
  • Even if they are aware, their options may be limited – once disclosed, the data cannot be recovered. Users may have trouble obtaining financial compensation or damages, especially if they cannot show direct harm. They may also be exposed to an extended risk of identity theft and other harm. And non-financial harms are difficult to remedy;
  • The impact of a breach on users is typically only studied as one of the costs to the organisation, in terms of compensating direct harms, credit protection, and impact on consumer loyalty, rather than in terms of the cost to users, and in turn to society.

This must change. The consideration of user impact should also extend to: time and costs spent on addressing fraud enabled by the data breach; non-financial harms; and future damage. Greater awareness of the full impact on users will help generate more user-focused approaches to data breaches.

More broadly, every breach has a ripple effect that spreads distrust from impacted users to all users. Less trust in the Internet means fewer benefits for all of us.

The primary goal of data breach solutions should be to protect users and their data. Data breach risk assessments must include risks to the users whose personal data is at stake. Economic incentives should enable users to choose services that have better data security.

  • R2 – TRANSPARENCY
    Increase transparency through data breach notifications and disclosure.

We advocate for increased study of the evolving risk of data breaches, starting with more transparency about the incidence, causes and impact of data breaches worldwide. Our goal is for this increased awareness to create demand for the kind of solutions we highlight in the subsequent recommendations.

Data breach notification requirements increase transparency about data breaches – what are likely targets, what security works and what does not, what data is taken, how the breaches are carried out. Indeed, much of this report itself is based on existing data breach disclosures.

Sharing information responsibly has a number of benefits – it could help organisations globally improve their data security, help policymakers improve policies and regulators pursue attackers, and help the data security industry produce better solutions. All this can help protect the data ecosystem as a whole.

Translating transparency into action is part of the responsibility we must collectively take on, so that everyone can make informed choices, help prevent data breaches, and mitigate the impact when they do occur.

If the market does not give organisations sufficient incentives to voluntarily disclose data breaches, leaving an information asymmetry in place, government intervention may be needed.

Organisations should warn users when a breach has occurred so that they can also take action to protect themselves. Data breach notification requirements help increase awareness and should be the norm, and are consistent with the user-centric approach this report advocates.

  • R3 – PRIORITISE SECURITY
    Data security must be a priority. Better tools and approaches should be made available. Organisations should be held to best practice standards when it comes to data security.

As seen in the Issues section, many of the tools to prevent data breaches and to mitigate their impact already exist. However, these tools are not always used by organisations responsible for handling user data. Given the cost of data breaches, why are some organisations not using the tools? In part it may be a lack of awareness, or in part a lack of economic incentives. However, even with the best of intentions, these tools are not always easy to use. Progress must be made on usable security – how to make it easier, or automatic, to use the tools that can prevent or mitigate data breaches.

Here is a roadmap of the tools and approaches we advocate in this section.

A breach of just one organisation could expose users’ data held by multiple organisations – “your breach could be my breach” – we must share the responsibility to secure users’ data.

Usable Security

Data security has a human element. For instance, employee password use is always a security issue: Is the password strong enough? Is it changed periodically? Is it unique? Is it memorised? Is the password requirement robust? Is the password safe from social engineering? Personal experience tells us the answer to at least one of these questions is often no. That may be enough to enable a breach.

This is why security practices, tools and approaches should be designed with humans in mind, resulting in usable security. Security should be built into the design of data-handling tools from the bottom-up, rather than adding it as an afterthought. This is the concept of security by design. And, to account for our human nature, we should be nudged, where possible, into implementing security tools.

It is not the goal of this report to provide a playbook for how to prevent data breaches – that requires a multi-faceted and deep approach far beyond the scope of this report. Data security is a field of its own and implementing a security architecture takes significant resources and training. Instead, this report identifies certain improvements that can prevent data breaches regardless of the overall security architecture.

Many of these concepts are easy to understand regardless of experience level – for instance, upgrading software and setting strong passwords. While these are practices organisations should use and support for employees, as part of our collective responsibility for security they are also practices individuals can apply on their personal devices and systems.

We will draw upon these principles throughout – to encourage organisations to take the more secure path, including providing tools for individuals – as employees, but also as users – to defend themselves from breaches.

  • Data security is a necessity, not a luxury.
  • Data security should be a priority for everyone – from users to business to government.
  • Data security needs to be usable if organisations are going to use it.
  • Data security needs to be part of the design of systems (security-by-design) and business practices.

Security by Design

Many of the tools to help prevent and mitigate data breaches already exist, yet they are not widely used – partly because the barriers to adoption are economic, and partly because the tools themselves are not optimal and may be hard to implement.

In particular, as users, we are all aware that human nature usually prevails – thus, it is much better to adapt technology to our needs than to expect us to adapt ourselves to technology.

Security by design generally means baking security into the technology from the beginning rather than trying to strap it on at the end after the shortcomings become clear.

We must recognise people do not always act in their own self-interest, as users or in organisations, and they may need to be ‘nudged’ to take a different approach.

Nudge Theory

Nudge theory draws on behavioural science and economics to influence decision-making among groups or individuals, in ways seen to be positive by the designers. The theory starts from the observation that humans do not always make rational decisions, and seeks to influence those decisions without making them obligatory.

The authors of the book that popularised the theory offered the following definition:

…To count as a mere nudge, the intervention must be easy and cheap to avoid. Nudges are not mandates. Putting fruit at eye level counts as a nudge. Banning junk food does not.

In our domain, with respect to password security, for instance, nudge theory might suggest giving users an easy option to change their password periodically, together with a reminder; going further than a nudge would mean requiring people to change their password periodically.
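To make the nudge/mandate distinction concrete, here is a minimal hypothetical sketch. The function name and the 90-day threshold are invented for illustration, not drawn from any standard:

```python
from datetime import date, timedelta

PASSWORD_MAX_AGE = timedelta(days=90)  # illustrative policy threshold, not a standard

def password_prompt(last_changed: date, today: date, mandate: bool = False) -> str:
    """Return the message shown at login for an ageing password.

    A nudge reminds the user and keeps the secure option one click away,
    while remaining easy and cheap to avoid; a mandate blocks access
    until the password is changed.
    """
    age = today - last_changed
    if age < PASSWORD_MAX_AGE:
        return "ok"
    if mandate:
        return "blocked: you must change your password to continue"
    # The nudge: a reminder plus an easy default, not an obligation
    return "reminder: your password is over 90 days old - change it now? [Yes] [Later]"

print(password_prompt(date(2017, 1, 1), date(2017, 6, 1)))
```

The design choice mirrors the fruit-at-eye-level example: the secure behaviour is made the most visible and convenient path, but the user retains the ability to decline.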

Security Architecture

We recognise security-by-design should start with a tailored security architecture. Making decisions about data security requires specialist expertise that organisations (especially smaller ones or start-ups) may not have, and the tools available may not always be simple to implement. Designing and maintaining such a security architecture entails up-front costs and may potentially delay speed to market.

A tailored security architecture should include security that is usable. In any well-designed security architecture there are users (employees in this case) that have to interact with the system. This is particularly relevant in smaller organisations without in-house IT security specialists: products and services that provide data solutions will need to take into account that their non-specialist users might need to be nudged or forced towards certain behaviour.

Unfortunately, economic drivers work against implementing such a security architecture. While these are similar to the economic drivers we discuss in the rest of this paper, the entire security architecture is more complex, making the analysis more complicated, and outside the scope of this report.

Prevention of data breaches

Known vulnerabilities

As noted earlier, many data breaches could have been prevented if known security vulnerabilities had been patched. It is, therefore, important to address this issue to help prevent future data breaches.

The global accessibility of the Internet makes it vulnerable to attacks, but also helps to provide access to the tools, such as software patches, to prevent breaches and other security problems.

For example, to increase software updating, particularly for critical security patches, first Microsoft and then Apple began enabling business and private consumers to choose automatic updates, or to make the updates automatic by default.3 Many software vendors have also started to schedule their updates at specific times so organisations can prepare their own update schedules around them (such as Microsoft Patch Tuesday). Here, at least at the individual device level, nudge theory seems to be working well.

However, at the organisation system level, software updating is more complex – it may require pre-testing, internal scheduling, steps to address legacy hardware that may not support the updated software, as well as potentially incorporating employee-owned devices. The software patch itself may introduce new vulnerabilities while repairing old ones, and may have unintended consequences across different hardware and software systems that need to be considered.

There is no magic bullet to address known vulnerabilities – existing IT systems cannot be replaced overnight, and introducing new systems will continue to introduce new challenges. Going forward, increased awareness of the risks of data breaches should result in the application of security by design principles from the ground up, both by vendors as well as the organisations implementing the vendors’ solutions.

Organisations should secure data against known security vulnerabilities and be prepared to react against new threats. To improve data security, the marketplace needs incentives to produce usable data security tools.

Social Engineering

As seen in the Case Studies section, many successful data breaches are initiated through social engineering, such as phishing attacks. Addressing these threats requires instilling an awareness of the risks and employees’ collective responsibility to help protect their organisation, while also providing them with suitable technology and training.

Employees should be taught how to avoid a phishing attack by understanding threats, including how to recognise a fraudulent email, not to click on unknown attachments, and how to report something that seems suspicious.

More deeply, employees should understand the risks of such attacks for their organisation. The case studies provided here are a good starting place, showing how an ISP employee can accidentally compromise the CIA Director’s email, and that social engineering and default passwords contributed to the Target breach.

Employees should understand the results of a data breach can be devastating, compromising users’ personal affairs (Ashley Madison), employee data (Office of Personnel Management) as well as salaries and embarrassing emails (Sony).4 Not to mention impacting the bottom line, with risks for compensation or further employment.

Technology (such as email spam filters and web filters) can help reduce the risk of social engineering attacks used to enable a data breach. Technology can also help protect systems from attacks using information obtained via a social engineering attack. For example, more advanced forms of authentication (e.g. two-factor) instead of simple passwords may prevent unauthorised access. These measures should be extended to employees who use their own smartphones or computers, also known as “bring your own devices”. Likewise, while it is important to train everyone not to plug in unknown devices that may transfer an infection, such as USB sticks, one technical solution could be to prevent them from running automatically when they are plugged in.5
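Stronger authentication need not be exotic. The time-based one-time passwords used by many two-factor systems follow open standards (RFC 4226 and RFC 6238) and fit in a few lines. The sketch below is a minimal illustration, not production code; the secret and expected value are the published RFC 6238 test vector:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """Time-based one-time password (RFC 6238) over HMAC-SHA1 (RFC 4226)."""
    counter = struct.pack(">Q", for_time // step)      # 30-second time window
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret, T = 59 s, 8 digits
print(totp(b"12345678901234567890", 59, digits=8))  # prints "94287082"
```

Because the code changes every 30 seconds and is derived from a shared secret, a password captured through phishing is not, by itself, enough to log in.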

Our view on one important issue is that passwords (both those used by employees and those that are stored by organisations) have demonstrated security challenges. We support the principle that the tools for authentication should be improved, to address the known human and technical deficiencies that have been shown time and again. Further, any stored passwords should be securely protected – salted and hashed, never kept in plaintext.
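As one illustration of protecting stored credentials, the sketch below uses salted, iterated key derivation (PBKDF2, available in Python's standard library). The parameter choices here are illustrative, not a recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 200_000) -> bytes:
    """Derive a salted hash; only the salt and digest are stored, never the password."""
    salt = os.urandom(16)                  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt + digest                   # store salt alongside the digest

def verify_password(password: str, stored: bytes, *, iterations: int = 200_000) -> bool:
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

record = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", record))  # prints True
print(verify_password("guess", record))                         # prints False
```

The per-password salt means two users with the same password produce different records, and the high iteration count deliberately slows down offline guessing if the stored records are ever breached.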


For a lighthearted view of a serious topic, see this clip from a TV show asking passersby about their passwords.

Clearly password security still has a few hurdles to clear!


One common way to strengthen authentication is using strong, unique passwords stored in a trusted password manager; another is two-factor authentication.6 While these are common methods, as the case studies show, they are still underutilised. Neither is 100% secure, and organisations and users need to assess the pros and cons of these and other ways to improve authentication and authorisation.

We discuss these as examples of the challenges of increasing security, not as the only, or best, solutions, for addressing social engineering attacks.

Organisations should apply trusted tools and best practices to prevent phishing and block embedded malware. They should also train employees to help avoid social engineering attacks. Vendors should develop security features that nudge people to choose the more secure option.7

Password Manager

Many of us have tens, if not hundreds, of online accounts including social networks, email, shopping, and work. Each account requires its own password for authentication along with a user name, and no one can remember hundreds of passwords and user names. One human response is to re-use passwords. This raises the impact of a data breach of stored passwords or a successful phishing attack leading to a data breach, because with just one password and user-id, the attackers may be able to get into many of the user’s accounts, including those of their employers.

One commonly discussed solution to this is a password manager, which creates and stores unique strong passwords, and then uses them to log into online services. However, the decision to use a password manager is not so simple.

While many experts argue in favour, others note that a single master password is still required to use the password manager, and if that is cracked then everything is exposed.8

Second, if one decides to use a password manager, one must decide which one to use. Some are built into web browsers, others are standalone; some use cloud storage, others local. Some may be more secure than others. The choice is not necessarily an easy one.9

Anyone who has read this far in the report will not be surprised to learn that password manager services are themselves a target of hackers. At least one was recently hacked – although the encryption of the passwords apparently was not violated, all users had to change their master passwords.10

Here, in a microcosm, we see the two main economic challenges we have been discussing.

Cracking a password manager could expose a user’s entire online life – including professional, health, financial and sexual – and inflict untold damages on the user. However, the terms and conditions of one representative password manager seeks to limit the developer’s liability to USD 100 per user. As a result, the significant potential costs for the users of a password manager are externalities for the developer.11

Further, there is asymmetric information – a user would have no way of knowing what security tools are used for the password manager, and how well they are implemented, making it difficult to choose the safest one.12

Mitigation in the event of a breach

Even if some data breaches can be prevented, there is no such thing as 100% security or a risk free environment. A zero-day exploit can be used to access a system, a mistake can disclose data, or a computer can be stolen or lost by an employee.

However, a breach can only take data that is stored, and if the stored data cannot be read, it cannot be used.

The approaches we discuss here to help mitigate data breaches are, first, data minimisation – not gathering or keeping more data than needed – and, second, encryption – making the data that are stored unreadable. These approaches should be part of broader business and technical practices respectively.13

The OECD Privacy Guidelines14 state:

There should be limits to the collection of personal data …

Personal data should be relevant to the purposes for which they are to be used …

This is a general statement of the need for data minimisation, recognising that data may need to be collected and stored, but should not be collected or stored if it is not needed.15

Data Minimisation

We have seen some cases where collectors kept extraneous data that significantly increased the cost of a breach. The Office of Personnel Management held data on former employees no longer working for the government, while Ashley Madison kept data users had actually paid to have deleted.

There are a number of competing forces at work.

There is the clear commercial incentive to gather data that can be monetised, now or later, and little cost for keeping the data given the falling cost of storage. In some cases, for users as well, there can be savings in time and convenience if, for instance, the collector stores credit card details to facilitate future purchases or enable long-term subscriptions.

On the other hand, there are downsides to casting a wide net for data, arising from both the intended and the unintended uses of the data.

In general, even for the intended uses of personal information, there are privacy concerns, as data sets grow and are combined with other data sets in ways that users may not be able to foresee or predict.

There are also many unintended uses of our personal data, that data minimisation would help avoid or mitigate. Aside from a data breach, our data may be subject to government surveillance or be subject to data misuse.16 As a result, there are good reasons for organisations to limit the amount of data collected even if it is never breached. At the same time, users should question when more data is requested than is needed for the specific purpose, such as a mobile flashlight app asking for permission to access location information.17

Organisations should take a clear and informed view of the risk of a data breach, and then consider if the value of each element of data might outweigh the additional cost if it is breached, and the corresponding impact on their users. For instance, in the US the social security number (SSN) is a key piece of information for identity fraud.18 If an organisation needs an identifier, the questions should include:

  • Is it necessary to use the SSN or other government-issued id number as an identifier?
  • If so, once identity has been established, does the government id number need to be saved, or could another identifier be used or created?
  • If the government id number must be used as the identifier, can it be partitioned from other personal information?

Such a review would broadly consider what data are relevant to collect and keep, how long such relevant data are to be stored, and when they are to be erased. It would also recognise the value of the data not just to the organisation, but also to the customer or employee, taking into account not just what benefit such data might provide from an intended use, but also what harm may come from an unintended disclosure or unintended use. This approach is part of data stewardship.
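To make the partitioning idea concrete, the hypothetical sketch below replaces a government id with a random token and keeps the token-to-id mapping in a separate store. The `TokenVault` name and in-memory mapping are invented for illustration; in practice the mapping would live in a separately secured system, so a breach of the customer database alone would expose tokens rather than SSNs:

```python
import secrets

class TokenVault:
    """Illustrative token store, kept apart from the main customer database.

    A breach of either store alone does not reveal which person a
    government id belongs to.
    """
    def __init__(self) -> None:
        self._token_to_id: dict[str, str] = {}

    def tokenize(self, gov_id: str) -> str:
        token = secrets.token_hex(16)      # random, carries no trace of the id
        self._token_to_id[token] = gov_id
        return token

    def resolve(self, token: str) -> str:
        return self._token_to_id[token]    # only the vault can reverse a token

vault = TokenVault()
# "078-05-1120" is a well-known sample SSN used here purely for illustration
customer_record = {"name": "A. User", "id": vault.tokenize("078-05-1120")}
print("078-05-1120" in customer_record.values())  # prints False
```

The design choice is the point: the identifier stored with everyday records is useless on its own, confining the sensitive value to one tightly controlled system.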

Where market forces for increased data gathering and retention are difficult to overcome, the principle of data minimisation may need to be incorporated in national privacy laws, together with guidance as to what practices are important.

Organisations should minimise the personal data they collect and store. Governments should encourage voluntary codes of conduct and other outcomes that favour data minimisation.

Encryption

The Internet Society believes encryption should be the norm for Internet communications and data.19 More specifically, organisations should use a level of encryption for which the time and cost to crack it, if cracking is possible at all, outweigh any benefit an attacker could gain from access.
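A back-of-envelope calculation illustrates the time-and-cost-to-crack point. Assuming a hypothetical attacker who can test a trillion keys per second (a generous figure), brute-forcing a modern 128-bit key remains hopeless, while the obsolete 56-bit DES keyspace falls in a matter of hours:

```python
# Rough brute-force arithmetic under an assumed guessing rate.
guesses_per_second = 10 ** 12            # assumed attacker capability
seconds_per_year = 60 * 60 * 24 * 365

def years_to_crack(key_bits: int) -> float:
    """Expected years to find a key by exhaustive search at the assumed rate."""
    expected_guesses = 2 ** (key_bits - 1)   # on average, half the keyspace
    return expected_guesses / guesses_per_second / seconds_per_year

for bits in (56, 128, 256):
    print(f"{bits}-bit key: ~{years_to_crack(bits):.1e} years")
```

Each additional key bit doubles the attacker's work, which is why the gap between a breakable and an unbreakable key length is so stark: at this rate a 128-bit key takes on the order of 10^18 years, far beyond any conceivable benefit to the attacker.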

Many of the case studies highlight the cost of a lack of encryption – Target, the Office of Personnel Management, and others had no encryption, while TalkTalk, Korea Pharmaceutical Information Center, and others used insufficient encryption. Further, encryption is not a static prospect – the Ashley Madison hack highlighted that, if encryption is improved, it must be applied retroactively to existing accounts, and not just to new accounts going forward.20

The economic reasons for limited or no encryption are two-fold – the cost of properly implementing strong encryption is perceived to be high, while the benefits are not perceived to be high enough. However, the calculation seems to be changing in recent years.

Encryption

Encryption involves encoding data so that only the intended parties can read them. The idea of protecting information is as old as the need to keep secrets, and has always generated a race between those generating the secrets, and those trying to steal them. This has continued in the computing age, where computing power has increased the power of encryption, but also the power of those trying to break encryption.

The general challenges of encryption are: ensuring the encrypted data cannot be cracked; keeping the keys accessible only to the right user(s); establishing the authenticity of end-points (for encryption of network transactions); implementing the encryption technology properly and free of bugs; and accomplishing all this without adding a level of cost, difficulty (remember usable security), or computing power that renders the encryption unusable or useless.

In the past several years there has been a marked increase in the use of encryption, such as WhatsApp for messages, and Apple for data stored on devices and their cloud service.21 Partly, this has been in response to reports of pervasive government surveillance of data, and partly in response to the risks of data and other security breaches. Regardless of the motivation for the encryption, the benefit in terms of mitigating the impact of the data breach is the same.

The particulars of encryption are of a technical nature, and certainly beyond the scope of this report. Our principles, however, are clear – implement security-by-design that nudges, or defaults, users towards adopting sufficient encryption in the most transparent way possible. Encryption must be designed around the users rather than expecting users to work around encryption. It must be easily available, affordable and easy to apply to Internet communications as well as Web browsing, and for all devices and cloud services.

Given the increase in employees working from home and while travelling, along with using one’s own devices at work, organisations have a strong interest in ensuring employees use trusted encryption technologies. Employees in turn must understand the potential risk to their employer and to their customers from failing to use encryption, to make sure they are not the weak link that is breached.

  • Organisations should encrypt the data they hold.
  • Encryption needs to be strong, easy to use and implement.

Economic Incentives

Of course, as user-friendly as tools might become, they still cost time and money to implement, which not all organisations are willing to spend.

There is a market failure that governs investment in cybersecurity. First, data breaches have externalities not accounted for by organisations, limiting the incentive to invest. Second, even where investments are made, as a result of asymmetric information, it is difficult to convey the resulting level of cybersecurity to the rest of the ecosystem.

Here we focus on how these market failures can be addressed through economic incentives, with respect to both costs and benefits.

INCREASED ACCOUNTABILITY

By imposing more of the externalities of the data breach on the organisation holding the data, the costs of a data breach will go up, leading organisations to increase efforts to prevent data breaches and mitigate their impact. In economic terms, the goal is for organisations to internalise the impact of a data breach.

SECURITY SIGNALS

By enabling organisations to signal they are less vulnerable, they will be able to better compete for business, increasing the rewards of investing in preventing a data breach. In economic terms, the goal is that organisations can credibly signal their level of cybersecurity.

Note that when a market failure exists, by definition a market solution is not available. Often government intervention is used to address the failure. However, that is not always needed as a non-government third party may also be able to help, or even self-regulation by the private players can solve some of the issues. The Internet Society supports a multi-stakeholder approach to Internet governance issues, and that holds here as well. While we will consider appropriate government interventions in our recommendations, it is neither the first place to start, nor the last resort. Rather, we will consider recommendations most likely to effectively address the issues.



Principles reinforcing economic incentives

Internalising externalities will increase the costs of a data breach to an organisation, and the corresponding incentives to avoid a breach. However, there are broader security and economic considerations that should also be considered, which we all have a collective responsibility to help to address. For instance, training employees to avoid social engineering not only helps to avoid a direct data breach, but also helps to protect against attacks on other organisations the employee may interact with, for which the employer has no direct responsibility or benefit in avoiding. This collective responsibility is a principle which is at the core of the Internet, and which should not be forgotten in the quest to prevent particular data breaches. It is the cornerstone for creating a virtuous data security circle.

While we believe diligent data stewardship is clearly in an organisation’s economic self-interest, given the cost of a data breach, we also believe each organisation has a broader social responsibility to make the Internet a safer place for everyone. For a corporation, for instance, preventing a data breach should become integral to corporate governance; broader efforts to make the Internet safer against data breaches should be part of corporate social responsibility. In the end, the more trust there is in the Internet, the greater the benefits of the Internet for all.

  • R4 – ACCOUNTABILITY
    Organisations should be accountable for their breaches.
    General rules regarding the assignment of liability and remediation of data breaches must be established up front.

As we have seen, the cost of a data breach may be borne by a variety of stakeholders. In the Target case, the banks bore a huge cost for replacing the exposed credit cards; in the Ashley Madison case, the cost was borne by the members, as well as, inevitably, their families; in the Sony case, a significant cost was borne by employees and their dependents; and in the Office of Personnel Management case, the data of not just present employees but also former and prospective employees, among others, was breached.

In economic terms, there is too little incentive to avoid imposing these externalities, precisely because the externalities are borne by others. Ensuring organisations account for those externalities, in turn, increases the incentive to avoid them. With increased awareness, and higher potential costs, we expect organisations will elevate data security correspondingly, to become a key element of governance.

A number of issues may arise with efforts to internalise the economic externalities surrounding data breaches.

Overall, to have the most impact on incentives, the full extent of the financial and non-financial impacts of data breaches needs to be better understood. General rules regarding the assignment of liability and remediation must be established up front, and understood by all stakeholders, so that they take the desired corrective action. In some cases, minimum standards for data handling may need to be mandated if not voluntarily adopted (such as data security and data minimisation provisions in law).

There are also practical issues, as a breach may involve a third party, like the refrigeration contractor whose system was used to infect Target. Organisations may use a variety of hardware and software vendors who could play a role in a breach. Even if blame can be determined, liability may be assessed separately, as when financial institutions bear the cost of replacing credit cards. Nor are these rules always set in stone: lawsuits may shift liability from one party to another in ways that cannot be foreseen up front.

Increased accountability imposes more of the externalities of data breaches on the organisations, which should cause them to increase efforts to prevent data breaches and mitigate their impact.

Given the significant challenges in broader liability issues, and in light of our goals, we focus on the liability for the impact of data breaches on users.

In general, users are at the heart of our mission to increase access to, and trust in, the Internet, goals that are even harder to reach when users bear the impact of data breaches but have no direct control over how their data is protected. More to the point, users are often the ultimate victims of data breaches, whether through identity theft, stolen credit card details, or exposed medical information, yet they are underrepresented in considerations of how to prevent and mitigate data breaches.

As discussed above, end users are currently “the missing link”. End users may not be told about breaches that affect them; may not be able to link a harm to a particular breach; and may have had to sign away their future liability claims in order to use the breached service. Further, where there is no direct customer relationship, they may have very limited recourse to recover money or to benefit from measures such as credit monitoring services when a breach occurs. Additionally, taking legal action against organisations can be very costly for users.

We address several of these issues.

First, as discussed above, our position is that breach disclosure should generally take place. Breach notification is a step that helps establish increased accountability. It has the benefit of ensuring those whose data was involved know their data was taken, and can take action to protect themselves (and seek restitution, covered below). It also has the side benefit of causing appropriate reputational harm to the organisation breached, which should increase the incentive to prevent breaches. Given these costs, breach disclosure is most likely to occur in countries where it is required, as we have seen for the US.

We note that required breach disclosure has its limits – the organisation may not know that it was breached, or may not understand whether, or what, it needs to disclose. The appropriate time to disclose may also be difficult to assess, and the thresholds for disclosure are hard to calibrate – too many notifications may leave users feeling helpless; too few, and they feel left out.22 Also, while breach disclosure may provide information that helps prevent future breaches of a similar nature, the disclosure should not provide information that enables further breaches, a real risk when known vulnerabilities remain unpatched. As countries gain more experience with notification rules, and more countries adopt them, we expect the right balance will be found over time.

Second, the terms and conditions of many online services seek to impose severe restrictions on liability and the ability to seek restitution. For instance, this from one password manager company (their caps):23

OUR TOTAL LIABILITY TO YOU FOR ALL CLAIMS ARISING FROM OR RELATED TO THE SITE OR THE SERVICES IS LIMITED, IN AGGREGATE, TO ONE HUNDRED DOLLARS (U.S. $100.00).

For example, a password manager may store hundreds of passwords, whose breach could inflict costs on users far greater than the maximum of USD 100 that is covered – a prime example of an externality that a company makes its users bear.24

There are also sometimes restrictions on the ability to join a class action lawsuit (where users in a similar situation would join into one suit), requiring instead individual binding arbitration.25

In other words, users may need to undertake a difficult and costly exercise to potentially recover a small amount of money or services such as credit monitoring.

There is no simple answer here – online service providers are free to offer these terms (subject to consumer protection laws), and users are of course free not to use these services if, having actually read the terms and conditions, they find them inadequate. Fixes could come from market forces that increase demand for fairer, more user-friendly terms, such as higher liability thresholds (driven by increased awareness), or, at the other end of the spectrum, from laws that disallow terms signing away users’ rights, such as the right to join a class action suit.

In the example of a retail chain such as Target, the customers were not even using an online service. They swiped their credit cards in a store, and only then did the data become accessible to be breached. In that case, a class action suit by customers against Target was settled, but such suits are often dismissed for lack of demonstrable financial harm. This suggests that users have few rights over data about them unless they can show a direct, quantifiable harm, rather than an intrinsic right to be protected from a breach.

It is not always clear what rights users have in the case of a data breach, and the deck today is stacked against them. Nonetheless, some countries are already strengthening and clarifying in their laws the extent of individuals’ rights in the event of a data breach. Even then, customers may have to prove immediate and direct harm to be compensated following a breach. This does not take into account that they may face a long-term risk of identity fraud, or the cost in time and money of guarding against it. Additionally, compensation may not cover non-financial harm.

This situation runs counter to a reasonable expectation that users’ rights and interests will be protected if their privacy is breached. In addition to actual damages, both in terms of time and money, the increased risk of identity fraud and other potential future harms resulting from a data breach should be borne by the organisations who were breached, and not, as today, by the victims of the breach.

  • R5 – SECURITY SIGNALS
    Increase incentives to invest in security by catalysing a market for trusted, independent assessment of data security measures.

There is a fundamental informational asymmetry that we all face as customers, users, employees, and even organisations when seeking to entrust our data to another party, and that data holders face when seeking to be rewarded for their level of security.

In the issues section, we talked about the challenges for the seller of a used car to convey the quality of the car, and how this can result in a ‘market for lemons’ in cars, as the bad cars effectively drive the good cars out of that market. We also saw that even for a new car, there are a number of attributes of interest to us, and several ways we go about making sure the car meets those attributes.

How does this apply in our situation? Return to the example of the password manager. Users can search on certain attributes such as whether it is a cloud service and they can experience the service to see whether it is easy to use through a trial. But, they cannot determine in advance the security of the service. That, unfortunately, they may only learn the hard way through a breach.

Consider also the case of Target and other companies when choosing contractors. They are clearly choosing these contractors based on criteria related to the service they are offering, such as refrigeration or vending services. To the extent security is even considered before allowing the contractor to connect to, or access, their system, it is difficult to assess every contractor’s systems and practices without great expense. As we have seen, many companies have difficulty keeping their own systems protected, much less assessing the security attributes of each and every contractor whose system may connect in some way.

As discussed, organisations must be able to send a credible signal enabling users, contractors, and employees, to assess their security against data breach, as well as other aspects of security, including the security of Internet of Things devices. As noted above, there are three key ways that this can be done – ratings, certification, or mandate.

In a final note on the Ashley Madison affair, in late August the privacy authorities of Canada and Australia released a report noting that the parent company confirmed the trustmarks on the Ashley Madison website had been fabricated by Ashley Madison.27 This reinforces the idea that it is not enough to provide a signal – the signal must be credible, and credibility is typically provided by an independent trusted third party.

Three Key Ways to Send a Credible Signal: ratings, certification and mandates.

Ratings. Consumer Reports has already begun to rate security software against a number of attributes, which can help users find the best software to protect their devices. While this is useful, it does not go directly to the question of data security. To our knowledge, no one has yet begun to do this for online services on behalf of consumers. Such ratings would be useful in deciding which online bank, medical service, or other sensitive service to entrust with the custody of one’s personal information. At the same time, such a service could provide useful information by rating data security terms and conditions, helping users choose those that provide the greatest protection in case of an attempted or actual breach. This would hopefully spur online service providers to compete on providing user-friendly terms and conditions regarding data security.

In another example of ratings based on security, a new independent third party company has begun to provide ratings of organisations’ security, which helps insurance companies to underwrite cyber insurance policies.28

Certification. There is some activity towards a certification process for data security. UL, which already certifies a wide variety of electronic devices, is now certifying aspects of financial cybersecurity, such as point of sale terminals, and is beginning to develop a certification standard for IoT devices.29 At the same time, Peiter Zatko, a renowned cyber security specialist, is setting up a Cyber Independent Testing Laboratory to certify the security of devices, as well as software and services. The results are meant to look something like a nutrition label – not simply a certification, but details about various attributes of security.30 Also, the APEC Cross Border Privacy Rules system requires certifying bodies (accountability agents) to certify an organisation’s security safeguards against the program requirements.31

Certification processes are largely to help customers (whether organisations or end users) determine which services to use, which is very welcome. Additionally, they provide greater transparency across the industry.

Another approach is to encourage the implementation of industry-recognised best practice standards that can be certified or self-certified. For example, the US National Institute of Standards and Technology (NIST), working with stakeholders, developed a Cybersecurity Framework based on a 2013 Presidential Executive Order.32 The framework represents industry best practices and is voluntary, not mandatory. The process of implementing it helps to increase an organisation’s security; although compliance is self-assessed, it can serve as a signal between organisations that they meet certain standards before creating a connected value chain.33 Full compliance so far is limited, but it appears to be a promising approach.

Mandates. Finally, where outside rating or certification is not sufficient, or where adequate voluntary standards are not fully adopted, a government mandate may be needed. This is particularly true where the market failure is significant – either high externalities, or extreme asymmetric information. Privacy and data protection laws usually contain minimum data security requirements. As noted above, there are examples where mandates are most suited to resolving a market failure. In this case, our principle would be to mandate an outcome relating to data security (such as stored data should not be readable by unauthorised parties), rather than a tool or approach to achieve the outcome (such as a type of encryption), to allow organisations to innovate and find the most efficient way to meet the required outcome.
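The distinction between mandating an outcome and mandating a tool can be made concrete with a minimal sketch. Assume a hypothetical mandated outcome that stored credentials remain unreadable even if the database is breached; the salted key derivation below (Python standard library, PBKDF2 chosen purely as one example) is just one interchangeable way an organisation might meet that outcome. All names and parameters are illustrative, not drawn from this report.

```python
# Illustrative sketch only: one possible way to meet an outcome-based
# mandate ("stored credentials must not be readable by unauthorised
# parties") without prescribing a specific tool.
import hashlib
import hmac
import os

ITERATIONS = 200_000  # work factor; tune to current hardware guidance

def protect(password: str) -> bytes:
    """Derive a salted hash so the plaintext never needs to be stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt + digest  # store the salt alongside the derived key

def verify(password: str, stored: bytes) -> bool:
    """Check a login attempt against the stored salt + digest."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

record = protect("correct horse battery staple")
assert verify("correct horse battery staple", record)
assert not verify("wrong guess", record)
```

Under an outcome-based mandate, an organisation could swap this scheme for any stronger or more efficient alternative, so long as a breached database still yields no readable credentials.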

Finally, at the Internet Society we believe that ‘permissionless innovation’ has been a key driver of the Internet, where anyone can develop a new service or application without prior approval from anyone. It is important to ensure any mandated requirements or certification processes do not conflict with this principle. Mandates in particular should be a last resort and be designed not to create a barrier to entry.34

Catalyse a vibrant market for trusted, independent assessment of data security measures so that organisations can credibly signal their level of data security. Credible security signals enable organisations to indicate that they are less vulnerable than competitors, and increase the incentive to invest in better data security.

Data breaches are a growing concern worldwide. To mitigate this problem and its economic impact, we propose a shift in the approach to data breaches, involving all stakeholders.

As users increasingly move their lives online, achieving the full benefits of the Internet worldwide depends on user trust, and that trust depends on how users’ data are protected from a breach. Each data breach creates a new group of users whose trust has been betrayed; that doubt spreads to their acquaintances through word of mouth, and more broadly through news reports, undermining user trust at large.

While users are the ultimate victims of data breaches, and it is their trust that is affected, users and their trust are not currently the main focus of approaches to data breaches. For example, organisations may gather and keep more user data than they need, and take fewer precautions than they could. In the aftermath of a breach, users may find their rights are limited. Meanwhile, studies of the costs of data breaches tend to focus on the costs to organisations, with users mainly factored in through the cost of business lost as a result of the breach.

The Internet Society proposes a user-focused approach to data breaches, in which organisations adopt a model of data stewardship to protect users’ data, while embracing their collective responsibility to help make the Internet safer. Organisations should also be more transparent about the incidence of data breaches and their impact. This will help make data security a priority and create demand for the security tools and approaches that can help to prevent and mitigate data breaches. To provide incentives to use these tools, organisations need to be more accountable than they are today, and to bear more of the costs of data breaches. In return, organisations should be given the ability to send a credible security signal to the market that they have taken additional steps to prevent data breaches.

Law Enforcement

It is also important, in closing, not to place the entire focus on the organisations that have been, or could be, breached, but also to focus on the attackers themselves. In addition to greater efforts to prevent and mitigate data breaches, every effort should be made to reduce the benefits attackers reap and to increase their risk of being caught. Law enforcement must have proportionate means and resources to catch and penalise attackers, and attackers must be made aware of the likelihood of being caught and of the penalties, to reduce the perceived benefits of a data breach. Given the absence of digital borders for attackers stealing or transmitting data, any such efforts must be international in nature to ensure maximum effectiveness.

An instructive parallel can be drawn with efforts to increase automobile safety over the past 50 years. As hard as it is to believe today, early cars did not have seat belts as standard, child safety seats only began to be introduced in the 1960s, and car companies fought airbag mandates. An early attempt by Ford, in the US, to compete on increased safety was perceived as a failure. These early tools provided passive safety, protecting passengers in case of a crash – in our terms, mitigation tools. Today these features are all standard, and car companies now invest and compete on safety, introducing a variety of active safety tools to avoid crashes, such as automatic braking – in our terms, prevention tools.

In between, a variety of familiar forces entered the market. First came increased awareness driven by third parties, notably Ralph Nader’s 1965 book Unsafe at Any Speed, which exposed the reluctance to add safety features. Then came mandates on features such as airbags that companies had resisted; third-party companies rating cars; and government agencies testing cars, including the famous Swedish ‘moose test’ of whether a car can swerve safely around a sudden obstacle. Together these measures have led to significant reductions in crash fatalities over time (normalised for the increased number of miles driven). Looking forward, many argue that new partially or fully autonomous cars will further increase safety by automatically avoiding accidents. This brings us back full circle to the topic of this report.

Autonomous cars will, of course, be controlled by a computer, with built-in communications to the owner and possibly to other vehicles for safety. As a result, the computer can be hacked remotely, as already seen with the Chrysler Jeep. This could lead to a significant breach of data about the location and activities of drivers, not to mention the possibility of one or more cars being hacked and taken over.

More broadly, many of our recommendations are valid for preventing or mitigating breaches of the full range of Internet of Things devices – not just breaches of the data their sensors gather, but also security breaches leading to personal or public safety risks, with autonomous cars a leading example. As such, we encourage the application of this report’s findings to the relevant issues arising from the Internet of Things.

Recommendations:

  1. Put users at the centre of solutions; and include the costs to both users and organisations when assessing the costs of data breaches.
  2. Increase transparency through data breach notifications and disclosure.
  3. Data security must be a priority. Better tools and approaches should be made available. Organisations should be held to best practice standards when it comes to data security.
  4. Organisations should be accountable for their breaches. General rules regarding the assignment of liability and remediation of data breaches must be established up front.
  5. Increase incentives to invest in security by catalysing a market for trusted, independent assessment of data security measures.

Underlying principles: data stewardship and collective responsibility.

In summary, our message to organisations is:

  • Personal data is precious and priceless – protect it!
  • Collect only what is absolutely necessary and encrypt what you keep
  • Destroy data when it is no longer in use
  • Restrict access to those who need to know
  • Signal the level of data security you provide
  • Be more transparent about data breach incidents
  • Be alert to breaches, prepare, notify and act immediately
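The checklist above can be illustrated with a minimal sketch, assuming a hypothetical service with an in-memory store: collect only the fields that are needed, restrict reads to need-to-know roles, and destroy data when it is no longer in use. Every name below is invented for illustration; real systems would apply the same principles with a real datastore and access-control layer.

```python
# Hypothetical stewardship sketch; field names, roles and the in-memory
# store are illustrative assumptions, not part of this report.
NEEDED_FIELDS = {"email", "country"}   # collect only what is necessary
READ_ROLES = {"support", "billing"}    # restrict access to need-to-know

store: dict = {}                       # stand-in for a real datastore

def minimise(submitted: dict) -> dict:
    """Keep only the fields the service actually needs."""
    return {k: v for k, v in submitted.items() if k in NEEDED_FIELDS}

def read_record(record: dict, role: str) -> dict:
    """Only approved roles may read stored personal data."""
    if role not in READ_ROLES:
        raise PermissionError(f"role {role!r} may not read personal data")
    return record

def destroy(user_id: str) -> None:
    """Delete data when it is no longer in use."""
    store.pop(user_id, None)

store["u1"] = minimise({"email": "a@example.org", "country": "NL",
                        "browsing_history": ["site1", "site2"]})
assert "browsing_history" not in store["u1"]  # never collected
destroy("u1")
assert "u1" not in store                      # destroyed when done
```

The design choice mirrors the report’s message: data that is never collected, or that has already been destroyed, cannot be exposed in a breach.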

While organisations holding data are central to efforts to combat data breaches, we believe collaborative multi-stakeholder efforts are necessary, and summarise our recommendations across five main stakeholders, as follows:

  • Organisations holding the data and subject to the data breaches
  • Users, whose data is the target of data breaches
  • Vendors of security equipment and solutions to help prevent and mitigate data breaches
  • Third party agents, who can help to study data breaches and review equipment and security standards
  • Government in the role of creating policy and laws that can address data breaches

We note again that, in addition to the specifics of preventing and mitigating data breaches, all of us, as stakeholders in the Internet, have a collective responsibility to make the Internet a safer place for everyone. Actions that prevent data breaches at one organisation may help prevent them at others, and together we can all work to restore and promote trust in the Internet. Further, as shown in the following diagram, these efforts should be user-centric, focused on protecting users’ privacy rights both in preventing a breach and in addressing the impact on users after any breach.