Scattered across the globe, a number of networking issues are being noted that can loosely be classed as stemming from bandwidth-intensive activities. These are applications and services that generate high-bandwidth traffic, that send it along paths neither anticipated nor desired by the network architect and operator, and that thereby affect other network traffic. Because of the degradation in service this causes for other users, network providers often turn to bandwidth management in an attempt to curtail or restrict the excessive flow.
In the United States, one such situation emerged when network service provider Comcast and P2P file sharing service BitTorrent wound up at odds over whether Comcast’s bandwidth management activities were unduly restricting the BitTorrent service. The case was not unique, but it shed light on the situation and prompted a healthy discussion at the Real-time Applications and Infrastructure area’s peer-to-peer (P2P) Infrastructure Workshop, which was held in May 2008 (see page 1).
As the IETF considers the work it might undertake to address these types of issues, some of the questions on people’s minds are these: What is the basic problem? Is it an issue with a particular application technology (P2P)? Or is it a question of network providers’ inability or unwillingness to reprovision network segments?
The answer is not simple. From a technology perspective, the Internet is activity agnostic. Therefore, in principle, any popular application or service could run traffic in directions that have nothing to do with the way actual networks are built. In fact, more data-intensive (and network-topology-independent) applications and services are rolling out and causing these types of bandwidth issues. For example, virtual worlds such as Second Life, along with video- and audio-streaming applications, are also having noticeable impacts on networks. Nevertheless, some of the issues are exacerbated by the nature of P2P technology. By definition, peers and peer connectivity are not tied to network topology, so traffic does not necessarily follow expected network paths. Furthermore, P2P file-sharing services are designed to store and share large chunks of data, so they tend to use all available bandwidth when transferring data between peers.
These are not new problems. The Internet, in its global reach and local reality, has been dealing with pockets of exorbitant bandwidth demand since its inception. Typically, that demand has been treated as a network operations issue, not a technology issue. To take an example from the all-but-forgotten past, FTP transfers were at one time among the largest sources of traffic on expensive transoceanic links from Europe to the United States. Paying for a European Archie index of anonymous-FTP archives, so that precedence could be given to European sites mirroring the same files, made sense as a way of reducing that traffic demand (see www.savetz.com/articles/ibj_bunyip.php).
The best path forward appears to be a dual-pronged approach: (1) finding ways to allow applications, such as those built on P2P technology, to fine-tune their use of available network bandwidth and (2) identifying more-palatable options for managing bandwidth in the face of overwhelmed network links. These are the types of activities the IETF considered in two BoF sessions at IETF 72 in Dublin (see page 1).
As we engage in IETF activities on the topic, including BoF sessions and, eventually, working groups, it’s important to keep a few simple realities in mind. First, it’s not strictly about P2P technology itself. Second, the network impact might be local (as in, “My P2P participation is wrecking my neighbour’s VoIP [voice over Internet protocol]”) or transit related (such as spikes in peering costs due to unexpected load). Third, there are different classes of reasons why a network operator may have no reasonable incentive or ability to adjust the network to meet demand. These include significant, unmatched expenses (such as capital costs with no additional revenue to offset them) and dynamically changing bandwidth demands. As an example of the latter, network operators have little or no control over which peer becomes a supernode in a P2P network.
Note that none of this is to be confused with traffic unintended by any local customer (unwanted traffic, denial of service, etc.), which does not need accommodation so much as remediation.
In today’s Internet, dealing with these problems through traffic shaping, with its unintended consequences, or through tiered Internet access has the potential to chill openness and innovation.
The long view is that this sort of stretch in network demand is normal and, on a global level, healthy. The Internet was built to ship packets around; the model is about packet shipping, not about making and maintaining highly specialized connections. To address the current issues, we need to consider approaches that not only solve the immediate problem but also apply beyond any particular application technology, and that do not introduce so much architectural complexity as to limit aspirant applications.