Building Trust 1 February 2017

Responsible Disclosure from a Collaborative Security Perspective

By Olaf Kolkman, Principal - Internet Technology, Policy, and Advocacy

I recently wrote about an agenda to mitigate the threats of insecure devices on the Internet of Things. One of the requirements expressed in that agenda is: “For every product sold, there is a way that security researchers can responsibly disclose vulnerabilities found”. In this post I want to reflect on responsible disclosure from the perspective of the Collaborative Security approach, look at how responsible security researchers are helping to make the Internet more secure, and explore how collaboration around disclosure helps to improve trust in the Internet.

Responsible Disclosure

Responsible disclosure can be seen as a social compact between those who find new vulnerabilities — often called “zero-day” vulnerabilities — and those who fix them: the discovering party discloses knowledge of the vulnerability to the fixing party, and keeps the vulnerability confidential from the general public until the fixing party has had the opportunity to roll out a fix to the market[1].

Security vulnerabilities in software and hardware are often stepping stones for gaining access to the systems that contain them; exploited vulnerabilities frequently result in the installation of malware. The ability of researchers to find and report vulnerabilities is becoming ever more important now that traditionally unconnected devices are being connected to the Internet: cameras, cars, light bulbs, TVs, wristwatches, alarm systems, door locks, toilet seats, toy dolls. When compromised, these devices can be used for attacks against the user of the device and against the core of the Internet. There have been a few stunning examples lately. Security research and testing are critical in protecting society from such harm[2].

Unfortunately, there are asymmetries in the responsible disclosure compact. Security researchers on the hunt for vulnerabilities often find themselves in areas where laws or regulations forbid or hinder tinkering with devices and software. They are at particular risk where copyrighted information is involved. For instance, in the USA the Digital Millennium Copyright Act (“DMCA”) prohibits circumvention of technological measures that control access to copyrighted works. As our colleagues at the EFF put it: “If you circumvent DRM locks for non-infringing fair uses or create the tools to do so you might be on the receiving end of a lawsuit”[3]. Among its resources the EFF has plenty of examples of intimidated bona fide security researchers.

Two clarifications on the above. First, when I talk about security researchers, I am not necessarily referring to academics or engineers who have ‘researcher’ as a job title; I mean those motivated by natural curiosity to tinker with devices and software: the non-criminal hackers. From personal experience I can confirm that the ability to tinker leads to deeper understanding and the ability to innovate. However, the difference between the tinkerer, the researcher, and the criminal hacker often lies in the intent, not in the activities performed during the research. Second, I am not arguing against any laws. Laws are needed to provide certainty and set boundaries. They are also necessary if those who engage in criminal behavior are to face the consequences. What I am talking about are the unintended side effects of these laws.

Collaborative Security

The collaborative security approach is a way of tackling security issues on the Internet[4].

The collaborative security perspective on responsible disclosure is that it is the collective responsibility of

  • producers of software and hardware
  • security researchers
  • law makers and law enforcers

to develop a shared expectation of mutual responsibilities.

Because law makers and enforcers may be involved, those mutual expectations often need to be coordinated at the national level. However, with a global market and a global medium like the Internet, security researchers may be located far from the origin of a product. It is therefore important that national approaches are compatible and that mutual expectations are communicated globally.

Here are some examples to make this less abstract.

Changing the law

Examples are emerging of laws and regulations with unwanted side effects on responsible disclosure being modified to minimize those side effects.

Here is a US-specific example related to the DMCA. In October 2016 the Copyright Office of the US Library of Congress (the agency responsible for DMCA implementation) announced some grounds for exemption[5] that will allow security researchers to research software and hardware devices. The exemption is temporary and is restricted to devices owned by the researcher. This is a step in the right direction even though, as reported in this Wired article[6], security researchers could “still be sued or prosecuted under the Computer Fraud and Abuse Act if, for instance, they’re determined to be gaining ‘unauthorized access’ to a computer they don’t own”.

Creating practices, guidelines and a responsible disclosure ecosystem

Governmental, business, technical, and civil society stakeholders can all help by sharing their expertise and experience.

For instance, the Global Forum on Cyber Expertise organized an expert meeting in Budapest in March 2016. The report of that meeting[7] identified a number of challenges in the responsible disclosure ecosystem, including:

  • complexity in the supply chain (that is, the manufacturer of a product may be multiple steps away from the producer of the software embedded in it)
  • management of sensitive information
  • (in)compatibility of legal approaches.

The report points to the Dutch “Responsible Disclosure Guideline”[8] (targeted at organizations and security researchers). The guideline is well worth a read, as it gives practical guidance that organizations and incident reporters can use

“… to facilitate responsible reporting and handling of vulnerabilities in information systems, software and other ICT products. Organisations can use the guideline to help them draft their own responsible disclosure policies. The security of information systems, software and other ICT products is principally the organisation’s responsibility. That said, however, incident reporters also have responsibilities, such as holding off on publication until the organisation has been able to remedy the problem”.

In practice, we see organizations taking responsibility for creating a responsible disclosure ecosystem in which the expectations are transparently stated. Network companies like Arbor Networks[9], platforms like Facebook[10], and retail companies like Walmart[11] publish their policies. Some of these may be inspiring examples of the instruments that an organization can put in place if it wants to organize its own responsible disclosure policy.
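One concrete instrument in this vein, offered here purely as an illustration rather than as part of any of the policies above, is a machine-readable security contact file published at a well-known URL; the security.txt convention (since formalized as RFC 9116) is one such format. A minimal sketch, with example.com and every address in it a placeholder:

```
# Served from https://example.com/.well-known/security.txt
Contact: mailto:security@example.com
Encryption: https://example.com/pgp-key.txt
Policy: https://example.com/responsible-disclosure
Acknowledgments: https://example.com/hall-of-fame
Expires: 2026-01-01T00:00:00.000Z
```

A researcher who finds a vulnerability then knows immediately where to send the report, how to encrypt it, and which policy governs its handling, which is precisely the kind of transparently stated expectation the compact needs.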

Then there are the commercial efforts to improve bug disclosure coordination. Examples are HackerOne and Bugcrowd, companies to which businesses can outsource the whole process of bug validation, interaction with researchers, and the implementation of bounty programs.

These examples, and the report from the Budapest meeting, all point to rewarding security researchers for their responsible disclosures. The reward does not always need to be monetary, but an ecosystem with carrots that lure researchers to come forward with their findings will be more effective than one in which researchers are under threat of lawsuit or criminal prosecution. A reasonable reward also helps prevent a vulnerability from being sold on the vast black market for software vulnerabilities.

While we are talking about the disclosure ecosystem, I want to give a shout-out to the GDI Foundation. In their Project366, the protagonists (Vincent Toms and Victor Gevers) spent their free time during all 366 days of 2016 finding no fewer than 690 severe security vulnerabilities, which they reported to 590 organizations in 71 countries. Here we have volunteering individuals with an exemplary track record of coordination and shared responsibility.

Covenants

Another approach to implementing a responsible disclosure compact has been proposed in the context of W3C work around the standardization of Encrypted Media Extensions in HTML5. The Open Source Initiative published a reference example of such a covenant, which in essence would bind the signatories not to file suit against security researchers. The introduction of the proposal in the W3C[12] context surfaced the complexity of achieving consensus between the various stakeholders. However, the idea of introducing such covenants to standardization bodies, or between the members of trade associations or other overarching coordination bodies, is an interesting one.

A Complicated Ecosystem

So far I have been describing a relatively straightforward environment: a producer of ICT products with vulnerabilities, security researchers identifying those vulnerabilities, and a government that strives for a generally healthy cyber security environment while retaining the ability to prosecute cyber criminals. In reality the ecosystem is more complicated.

For instance, governments may have an interest in not disclosing vulnerabilities they have found, in order to use them for law enforcement, national security, or intelligence purposes. The complication is that disclosing a bug serves the public need of keeping the global cyber ecosystem healthy and secure, while exploiting the bug serves the public need of being able to catch criminals, not necessarily those of the cyber type.

In the US, the Vulnerabilities Equities Process (VEP) is an attempt to provide some clarity around the roles and responsibilities in that ecosystem. Schwartz and Knake of the Belfer Center argue that the VEP is by no means a perfect instrument, but their recommendation that “Clear high-level criteria that informs [sic] disclosure or retention decisions should be subject to public debate and scrutiny” is a generally applicable principle.

As you can see, there are somewhat contradictory interests at play here: even when policies are in place that allow certain branches of government to hoard and use vulnerabilities, it remains the responsibility of those same governments to help develop the responsible disclosure compact between ICT producers and security researchers.

Thinking Global, Acting Local

The Collaborative Security aspects of responsible disclosure are about making sure that local action has appropriate global impact, and that there is collaboration between security researchers and producers of ICT products, all in the absence of formal contracts but in the context of a social compact.

As remedies against bad actors are developed, we must take care not to sweep in the responsible actors who are helping to make the ecosystem more secure. Governments should also invite those responsible actors to the table when developing policy, because that will yield a better result.

For security researchers, the social compact implies that they should act according to some well-understood principles, for instance those mentioned in the Dutch Responsible Disclosure Guideline:

The discloser must report the vulnerability as quickly as is reasonably possible, to minimise the risk of hostile actors finding it and taking advantage of it.

However, the discloser must do so in a manner that safeguards the confidentiality of the report so that others do not gain access to the information.

The discloser’s response must not be disproportionate.

[…]

The other side of the compact is a set of expectations of product and software developers: they should have appropriate mechanisms in place to deal with disclosures. In their IoT Trust Framework, the Online Trust Alliance recently recommended the following:

Establish and maintain processes and systems to receive, track and promptly respond to external vulnerabilities [sic] reports from third parties including but not limited to customers, consumers, academia and the research community. Remediate post product release design vulnerabilities and threats in a publicly responsible manner either through remote updates and/or through actionable consumer notifications, or other effective mechanism(s). Consider “bug bounty” programs, and crowdsourcing methods to help identify vulnerabilities that companies’ own internal security teams may not catch or identify.
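To make “receive, track and promptly respond” slightly more concrete, here is a minimal sketch, in Python, of the record-keeping such a process implies. Nothing in it comes from the OTA framework; the states, the field names, and the 90-day embargo window are all illustrative assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum

class Status(Enum):
    RECEIVED = "received"      # report acknowledged to the researcher
    TRIAGED = "triaged"        # severity assessed, owner assigned
    REMEDIATED = "remediated"  # fix shipped or mitigation published
    DISCLOSED = "disclosed"    # public advisory released

@dataclass
class VulnerabilityReport:
    reporter: str              # researcher contact, kept confidential
    summary: str
    received_on: date
    status: Status = Status.RECEIVED
    embargo_days: int = 90     # illustrative window; real policies negotiate this

    def disclosure_deadline(self) -> date:
        """Date by which the compact expects remediation or coordinated publication."""
        return self.received_on + timedelta(days=self.embargo_days)

    def is_overdue(self, today: date) -> bool:
        """True if the report is still open past its disclosure deadline."""
        open_states = (Status.RECEIVED, Status.TRIAGED)
        return self.status in open_states and today > self.disclosure_deadline()

# Track one incoming report and check it against the embargo clock.
report = VulnerabilityReport(
    reporter="researcher@example.org",  # hypothetical reporter
    summary="Authentication bypass in firmware update check",
    received_on=date(2017, 2, 1),
)
print(report.disclosure_deadline())         # 2017-05-02
print(report.is_overdue(date(2017, 6, 1)))  # True: past the window, still open
```

The point of even a toy tracker like this is the deadline: a process that records when a report arrived and when the clock runs out is what turns “promptly respond” from a good intention into something auditable.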

That is a very clear call to action, not only for IoT vendors but for all organizations that produce computing devices and services that connect to the Internet. Between the Dutch guideline and the OTA advice, we have a way forward.

 


[1] In informing the market about a vulnerability there is also an aspect of responsibility: for instance, rolling out the information in such a way that operators of critical pieces of infrastructure have a chance to apply a fix before the knowledge is disclosed to the larger community. That aspect is out of scope for this piece.

[2] In addition to vulnerability research, it is important to have independent research and review of the privacy properties of these systems. Wouldn’t we like to know what data the newly-bought doll for our 4-year-old is collecting about the dear child?

[4] For details of the collaborative security approach read: http://www.internetsociety.org/collaborativesecurity

[12] See https://www.w3.org/blog/2016/06/perspectives-on-security-research-consensus-and-w3c-process/ and references therein for some of the nuances about the status of this discussion.

Disclaimer: Viewpoints expressed in this post are those of the author and may or may not reflect official Internet Society positions.
