Improving Technical Security 18 February 2014

The Danger Of The New Internet Choke Points

By Andrei Robachevsky, Senior Director, Technology Programmes

Many people were shocked by the scale and pervasiveness of government monitoring and tampering with Internet communications. From a security point of view, many of these actions are technically familiar: man-in-the-middle attacks, zero-day exploits, trojans, key compromises, and so on. They are known attacks. But the scale and capabilities of the spying agencies turn them into a qualitatively different threat, with significantly higher risk.

This understanding has already prompted protocol designers, software and hardware vendors, Internet service providers, and content providers to re-evaluate prevailing security and privacy threat models and to refocus on providing more effective security and confidentiality.

But what is it about the scale that changes the equation so significantly?

The key aspect is long-term, indiscriminate collection of all communication flows available at a particular point, which allows data flows to be correlated over a long period of time.
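To see why such correlation is powerful even when payloads are encrypted, consider a minimal, purely hypothetical sketch: an observer at a single vantage point records only flow metadata (timestamp, source, destination) and, by grouping records over time, recovers who talks to whom and how often. All names and records below are invented for illustration; they are not from our paper.

```python
# Hypothetical illustration: correlating flow metadata collected at one
# choke point. No payload inspection is needed to reveal relationships.
from collections import Counter

# Flow records as they might be captured at a single vantage point:
# (timestamp, source, destination). All values are invented.
flows = [
    (1000, "alice.example", "mail.example"),
    (1005, "bob.example", "news.example"),
    (1060, "alice.example", "mail.example"),
    (1120, "alice.example", "mail.example"),
    (1130, "bob.example", "mail.example"),
]

# Correlation over time: count how often each (source, destination)
# pair recurs, exposing persistent communication relationships.
relationships = Counter((src, dst) for _, src, dst in flows)

for (src, dst), count in relationships.most_common():
    print(f"{src} -> {dst}: {count} flows")
```

The point of the sketch is that the observer never needs to break encryption: persistence of collection, not depth of inspection, is what makes a well-placed vantage point valuable.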

And not all collection points are equally valuable for that task.

The Internet was designed to avoid single points of failure – choke points – or to mitigate their impact. Originally, that meant resilience against failures at the IP layer. But as the Internet evolved, the concentration and centralization of certain functions at various layers of the Internet architecture have created new choke points and, consequently, new threats. Pervasive monitoring is one of them.

In our paper submitted to the W3C/IAB workshop on “Strengthening the Internet Against Pervasive Monitoring” (STRINT), we looked at the problem of pervasive monitoring from an architectural point of view. We identified some components of Internet infrastructure that provide attractive opportunities for wholesale monitoring and/or interception, and, therefore, represent architectural vulnerabilities.

Can their impact be mitigated? And how? Can the Internet evolve to reduce such vulnerabilities, and what would be the driving forces? What forces could prevent this from happening? We pondered these questions, too, and encourage you to read our paper, provide feedback in the comments below, and join the dialogue coming up at IETF 89 in London.

Disclaimer: Viewpoints expressed in this post are those of the author and may or may not reflect official Internet Society positions.
