
I want to share a perspective on system security.

My phone as a system

The Apple/FBI case revolves around a phone. Think of your own phone now.

When I look at my own phone I have rather sensitive information on it:

  • my calendar, which could be used to find out when to rob my house;
  • pictures of my kids that I do not want others to see;
  • a password manager which could be used to impersonate me anywhere in the virtual world;
  • content I do not want my wife, my boss, and my friends to know about;
  • a banking app and a Paypal app that could be used to make transfers to another account.

I need to be able to trust that device. I am not alone in this: as one example, a report showed that every day 2,000 people in the UK realize that the phone they just lost holds something that could land them in some sort of trouble.

Let’s step away from phones: the arguments apply to all of our ICT devices, software, and services. Systems, for short.

From Encryption to System Security

Much of the focus of the debate so far has centered around law enforcement access to content, especially content that is stored on an encrypted phone. Yet, there is also a broader issue at stake: system security. Let’s take a device as an example of a system.

Device security is important because it protects against third parties gaining access to the device. Proper device security can prevent modification of the software or firmware that makes the device work, as well as access to personal data stored on, and communications performed with, the device. It is intended to prevent any unauthorized party from using any of the functions of the device. Device security is enabled by many tools: encryption, authentication mechanisms (e.g. the passcode), specific forms of digital rights management, self-updating software, physical security such as tamper-proof chips, self-destruction, etc.
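To make one of those tools concrete: a passcode is rarely used as a key directly. Instead it is stretched into a cryptographic key through a slow key-derivation function, so that guessing passcodes becomes expensive. The sketch below is purely illustrative, using Python's standard-library PBKDF2; it is not how any particular phone implements it, and the passcode, salt, and iteration count are assumptions for the example.

```python
import hashlib
import secrets

def derive_key(passcode: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Stretch a short passcode into a 256-bit key with PBKDF2-HMAC-SHA256.

    The high iteration count makes each brute-force guess slow, which is
    one reason device security does not rest on the passcode alone.
    """
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, iterations, dklen=32)

salt = secrets.token_bytes(16)   # stored alongside the data; need not be secret
key = derive_key("1234", salt)

assert key == derive_key("1234", salt)   # deterministic for the same inputs
assert key != derive_key("1235", salt)   # a different passcode yields a different key
```

Weakening any link in this chain, for example by capping the iteration count or leaking the derived key, weakens the security of everything the key protects.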

Encryption is one of the more fundamental tools in the security toolbox because it is ubiquitously used to protect all sorts of communication and data: in phones, teller machines, car keys, television boxes, DVD players, etc. Introducing vulnerabilities (back-doors) in encryption will negatively impact the ability to protect almost any ICT system. The Internet Society has taken a clear stance on the utility of encryption as a fundamental building block for security and for trust: it should become the norm for Internet communication, and governments should not undermine encryption and secure communication tools and technologies.

Private Sector Responsibility and Public Sector Help

It is important that technology companies assume a primary role and responsibility for the security of their products. They constantly close vulnerabilities in their products so that these cannot be exploited. They constantly reassess the threats and risks that impact the security and use of their products, and they take action based on those assessments.

When making those assessments they should take into account that devices connected to the Internet are part of the Internet, and vulnerabilities in devices or systems can have an impact anywhere. That applies not only to end-user devices like PCs, which when hacked can be used to perform a denial-of-service attack on distant services, but also to remote-controlled power breakers which, if compromised, can have large societal impact, as seen in Ukraine.

One of the main arguments in the encryption debate applies more generally: vulnerabilities will eventually be found, exploited and proliferate.

Any suitable vulnerability will be used in exploit kits, i.e. put to use to circumvent security mechanisms and to break and enter ICT systems. The defense against these vulnerabilities is mostly in the hands of the private sector. The public sector helps by greasing the wheels of fixing vulnerabilities through the support of CSIRTs, the facilitation of responsible disclosure, and funding security research. In other words, companies that produce devices and the software that runs on them should be rewarded for seeking out and removing security vulnerabilities, and enlisting the help of security researchers to identify any points of weakness. They should not be required to introduce vulnerabilities no matter how well-intentioned the motive.

Public Debate, Not Courtrooms

ICT systems are critical to the functioning of society and for the safety of people within that society. Technology companies should be empowered to keep up with state-of-the-art technical security mechanisms to protect those systems, and they may be held accountable if they ignore common security practices (the Internet of Things needs serious consideration here). Technical security mechanisms provide the baseline of trust and confidence for an important engine of our societies: ICTs in general and the Internet in particular. And yes, tools in that toolbox also provide hurdles to the job of law enforcement.

On the whole, the Internet Society believes that the balance weighs in favor of the availability of state-of-the-art security mechanisms. They are the strongest enablers for commerce and social interaction, and they help protect human rights. We are of the opinion that governments should use all their available tools and mechanisms to help improve ICT system security, not weaken it.

With that principle as a baseline, there is still a debate to be had around the tools available for law enforcement purposes. There is a tension between the pervasiveness of encryption, the available tools for ‘breaking and entering’, working towards the minimization of vulnerabilities, and preventing crime. This post is not the place to resolve that tension, and neither are courtrooms: it should be worked out through well-informed public debate.

Image credit: Lisa Risager on Flickr - CC BY NC ND

Disclaimer: Viewpoints expressed in this post are those of the author and may or may not reflect official Internet Society positions.


Back in the mainframe days we had B2-secure systems, one of which I used. They were easy to use, but in some ways very hard to administer, and fell out of use back around Solaris 7 or so. Today, a sysadmin can walk out of the CIA with a thumb drive full of secret information. Somewhere between the two extremes is a sweet spot, where I can ensure my banking information isn't gobbled up by a <expletive deleted> PC virus.