
Many people were shocked by the scale and pervasiveness of government monitoring and tampering with Internet communications. From a security point of view, many of these actions are technically well-known attacks – man-in-the-middle interception, zero-day exploits, trojans, key compromises, and so on. But the scale and capabilities of the spying agencies turn this into a qualitatively different threat with a significantly higher risk.

This understanding has already prompted protocol designers, software and hardware vendors, Internet service providers, and content providers to re-evaluate prevailing security and privacy threat models and to refocus on providing more effective security and confidentiality.

But what makes the scale so big that it changes the equation significantly?

The key aspect here is long-term, indiscriminate collection of all communication flows available at a particular point, which allows data flows to be correlated over a long period of time.

And not all collection points are equally valuable for that task.

The Internet was designed to avoid single points of failure – choke points – or to mitigate their impact. Originally, that meant resilience against failures at the IP layer. But as the Internet evolved, the concentration and centralization of certain functions at various layers of the Internet architecture have created new choke points and, consequently, new threats. Pervasive monitoring is one of them.

In our paper submitted to the W3C/IAB workshop on “Strengthening the Internet Against Pervasive Monitoring” (STRINT), we looked at the problem of pervasive monitoring from an architectural point of view. We identified some components of Internet infrastructure that provide attractive opportunities for wholesale monitoring and/or interception, and, therefore, represent architectural vulnerabilities.

Can their impact be mitigated? And how? Can the Internet evolve to reduce such vulnerabilities, and what would be the driving forces? What forces could prevent this from happening? We pondered these questions, too, and encourage you to read our paper, provide feedback in the comments below, and engage in the dialogue that will be coming up at IETF 89 in London.

Disclaimer: Viewpoints expressed in this post are those of the author and may or may not reflect official Internet Society positions.
