
The ISP Column, June 2014 - A Final Word

This is to be the final ISP Column to be published by the Internet Society. I thought that I should take this opportunity to look back over the topics that have surfaced while writing this monthly column over the past 12 years, review what I've written about over this period, and perhaps briefly look to the future.

The column has covered many topics, including the economics of the Internet and the evolution of Internet service business models. We've covered trust and security, as well as vulnerabilities and incidents. We've looked at the issues of identity, and the evolution of the architectural model of the Internet protocol suite. We've looked at Internet governance, and the underlying issues relating to the evolving roles of the International Telecommunication Union (ITU), WSIS, ICANN and the US Government, as well as multi-stakeholderism. We've considered routing and addressing, network management, and the securing of network protocols. We've looked at broadband deployments and the issues with wireless systems. The Internet has managed to provide a rich field of topics, ranging from basic communications technologies, through the business models that exploit those technologies, to the social and political aspects of this technology's deployment. Communications is fundamental to the structure of any society, and the way we communicate, the ease and cost of communication, and the controls we place on what and how we communicate are ultimately reflected in the nature of our societies. And if there is just one thing above all else that the Internet has achieved in the past 12 years, it is a fundamental change in the way we communicate. I'd like to think that the ISP Column articles that I've written over this period have offered some insight into the nature of these changes in the communications model, as well as their implications at a societal level.

So let’s rummage in the closet of back issues of this column and look at some popular topics.

If there was one persistent theme over this period it would have to be the ongoing predictions of the exhaustion of the IPv4 address pool, and the associated topic of the transition of the Internet to IPv6. Well before the Internet became popular, and just as it was breaking out of the networking research community, it became apparent that the protocol was not going to make it. While the 32 bit address field was massively larger than that of any other protocol in use in the late 1970's, by the late 1980's it was apparent that the future of computers was no longer one of ever more exotic and ever more hideously expensive mainframes, but one of heading inexorably into a world of consumer electronics. And in this world the population of devices was readily conceived as one that numbered in the billions, if not in the hundreds of billions. Computers would be smaller, cheaper and ubiquitous, and the protocol that allowed them to communicate would need to encompass such a massive domain. So in the 1990's the Internet Engineering Task Force developed a new version of the Internet Protocol, IPv6, where the major change was an extension of the protocol's address fields from 32 bits to 128 bits. And then we stood back and waited for industry to leap upon this new technology and transform the Internet, amidst all of the euphoria and excitement of the boom of the Internet in the late 1990's. But with the inevitable bust came a more tempered view. By 2002 it looked like the new profile of Internet growth was going to be more deliberate and far slower than the explosive expansion of the late '90s. Initial predictions of IPv4 longevity stretched out to 2030, and while exhaustion was still inevitable, it was hard to instill a sense of urgency into the calls for IPv6 adoption. But of course the pace of recovery picked up, and then, with the excitement of the iPhone heralding the entry of the mobile world to the Internet in the middle of the decade, the Internet's pace of growth accelerated further. The predictions of IPv4 address exhaustion moved closer, and by 2007 it was clear that we had around four or five years left before we hit the wall. The central pool of unallocated addresses, operated by the IANA, was fully depleted by early 2011, and the pools of the Regional Internet Registries have been following: APNIC's in April 2011, the RIPE NCC's in September 2012, and most recently LACNIC's in June 2014. ARIN's pool will be exhausted by the end of this year, leaving only AFRINIC with a significant pool of as yet unallocated IPv4 addresses.
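
To put that change in address size into perspective, here's a trivial back-of-the-envelope comparison of the two address pools, sketched in Python purely for illustration:

    # Sizes of the IPv4 and IPv6 address pools.
    ipv4_pool = 2 ** 32      # 32-bit addresses: about 4.3 billion in total
    ipv6_pool = 2 ** 128     # 128-bit addresses: about 3.4 x 10^38

    print(f"IPv4: {ipv4_pool:,} addresses")        # 4,294,967,296
    print(f"IPv6: {ipv6_pool:.2e} addresses")      # roughly 3.40e+38
    print(f"ratio: {ipv6_pool // ipv4_pool:.1e}")  # about 7.9e+28 IPv6 addresses per IPv4 address

Even at a population of hundreds of billions of devices, the 128 bit pool is effectively inexhaustible for any address plan we can currently conceive of.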

Part of the fascination with the extended saga of IPv4 address exhaustion was that we had always assumed that the point when we ran out of further stocks of IP addresses would be the point when we really needed to have made some serious steps along the path of transition of the network to IPv6. But the future is nothing if not surprising, and instead of following the script drafted by the engineers, the Internet has careened along a rather different path. The once unthinkable has happened: instead of heading down a path of rapid deployment of IPv6, we are persisting with an IPv4 network. We are now operating a network where addresses are a critically scarce resource. Not only has this led to widespread deployment of technologies that share IP addresses across multiple conversations, but this scarcity has affected the manner in which addresses are distributed across the network. We are also witnessing an aftermarket in IPv4 addresses, which not only casts IP addresses as property, but also introduces a market-based pricing discipline into the acquisition of addresses. What happens to notions of a universal service social contract when pricing of the basic currency of the network's address plan means that not every consumer has the means to access all forms of network services? Not only is this being played out within national markets between wired and wireless services, business and consumer products, and premium and commodity retail services, but of course it is also being played out internationally, across the divides of the developed and developing world. It's not that access to the Internet is being denied to consumers, but in sharing addresses there are inevitable compromises in the associated model of communications security. It seems that the quality of that universal access to a common communications environment may well differ between those who can afford higher quality services that use dedicated addresses and those that have to share their address with others.

I've written about IPv6 many times in this column. Few may recall that 2003 was to be the year of IPv6. At the time it was evident that there was a certain lack of enthusiasm among ISPs to embark on IPv6 deployment, so the promotion of IPv6 headed off into over-selling the story and coating IPv6 with some rather enthusiastic mythology. Closer inspection of IPv6 revealed that elements of the specification and the associated address plan had been left for further study, and there were some holes in the IPv6 picture that needed further definition. But the topic that took up much of the time was the nature of the transition of the Internet from IPv4 to IPv6. This was not a simple substitution. You can't simply push IPv6 out across your network and then shut down IPv4. The protocols are not directly interoperable, so each service provider will need to continue to support IPv4 for as long as there are other parts of the Internet that continue to rely on IPv4. This implies that the so-called "Dual Stack" phase of this transition could potentially take some years. However, this approach assumed that the IPv4 network would remain viable throughout this protracted process, and the problem was that we didn't have enough IPv4 addresses. So the process became a two-fold problem of introducing various forms of address sharing technologies into the IPv4 network while at the same time also introducing IPv6. Furthermore, at this stage it became evident that there was a panoply of technical choices as to how this could be achieved, and rather than winnowing down these choices through a standardization process we've decided to follow all of these approaches simultaneously. Each service provider who is embarking on this path is probably using a customized approach that may not be followed by any other provider. We have seen in France the free.fr ISP roll out an IPv6 service based on tunneling IPv6 across their IPv4 access network, while T-Mobile in the US is rolling out its mobile 4G service in precisely the opposite manner, tunneling IPv4 across an IPv6 access network. This transition was always going to be a challenge, but in creating a rich environment of technical choices for ISPs we've managed to make this a particularly forbidding set of challenges. Perhaps it should not come as a surprise to see so many ISPs still waiting for a clear lead to follow with IPv6.
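
To make the dual stack behaviour concrete, here is a minimal sketch in Python of the connection logic a dual stack client applies: prefer IPv6 where the host offers it, and quietly fall back to IPv4 otherwise. The host name is a placeholder, and real implementations (the "Happy Eyeballs" approach) race the two protocols in parallel rather than trying them strictly in sequence:

    import socket

    def dual_stack_connect(host, port, timeout=2.0):
        """Try IPv6 first, then fall back to IPv4 (simplified dual stack logic)."""
        for family in (socket.AF_INET6, socket.AF_INET):
            try:
                candidates = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
            except socket.gaierror:
                continue  # the host has no addresses in this family
            for fam, socktype, proto, _name, sockaddr in candidates:
                s = socket.socket(fam, socktype, proto)
                s.settimeout(timeout)
                try:
                    s.connect(sockaddr)
                    return s  # first successful connection wins
                except OSError:
                    s.close()
        raise OSError(f"could not reach {host} over IPv6 or IPv4")

The point of the sketch is the asymmetry of the transition: the client carries the complexity of both protocols for as long as any service it wants to reach remains IPv4-only.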

We've also seen the inexorable rise of online attacks, which makes today's Internet an extremely toxic environment. We've seen the rise of botnets and scanners, the use of route hijacking, the corruption of certification authorities and the undermining of domain name certificates, and much more. At the same time we've become extremely reliant on a single software library to provide security, and the discovery earlier this year of Heartbleed, a buffer over-read in OpenSSL that leaks the contents of a server's memory, has been very disturbing. It's way too late to stop and reevaluate the basic framework of trust and integrity in today's Internet, as this juggernaut is just too big for that. Instead, we persist with what we have, with the unsettling feeling that all these passwords and lock icons on our browsers and such are just superficial paraphernalia intended to distract our attention from the underlying credibility gaps in the picture of online security. At the same time we are seeing the emergence of a new industry in assembling personal profiles of each and every online "subject", and more and more players are peering over our shoulders and attempting to sniff the fumes of our digital exhaust emissions in an effort to influence our future purchases. Not to be outdone, it seems that government agencies also indulge in similar exercises of tapping into the Internet, and while suspicions of such activity have been around for years, the recent Snowden disclosures have not only confirmed these suspicions, but revealed a larger and even more disturbing picture of deliberate compromise of our basic instruments of individual privacy and security.
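
The disturbing part of Heartbleed was how small the flaw was: the heartbeat code trusted an attacker-supplied payload length and copied that many bytes of memory into the reply. The following Python fragment is a loose simulation of that pattern, not OpenSSL's actual C code, included just to show the shape of the missing check:

    # Toy simulation of the Heartbleed pattern. In the real bug the copy
    # ran off the end of the heartbeat record into adjacent heap memory.
    payload = b"ping"                        # what the client actually sent
    adjacent_memory = b"session keys, passwords, cookies..."
    heap = payload + adjacent_memory         # stand-in for the process heap

    claimed_length = 40                      # attacker claims a 40-byte payload
    actual_length = len(payload)             # only 4 bytes were really sent

    # Vulnerable behaviour: trust the claimed length, with no bounds check.
    leaked_reply = heap[:claimed_length]     # echoes 36 bytes of "heap" contents

    # Patched behaviour (per RFC 6520): silently discard mismatched heartbeats.
    safe_reply = heap[:claimed_length] if claimed_length <= actual_length else None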

But it's not just the corruption of privacy and security that is the issue here. We've also seen our own protocols turned and used against us. These days the Domain Name System is not only a protocol for translating names into protocol-level addresses, but a means of launching devastatingly effective attacks. More recently, we've also seen the Network Time Protocol, NTP, being turned into one of the largest attacks so far seen on the Internet, with traffic loads of some 400Gbps being directed at the victim. Fixing this is going to be tough. The DNS attacks have made use of a pool of consumer premises equipment (CPE) that operates its DNS resolver promiscuously, and the population of these CPE open resolvers numbers around 30 million or so. At this scale nothing is going to be fixed in the near future! Our efforts to promote BCP38, an approach to ingress packet filtering intended to prevent the source address spoofing that these UDP-based attacks rely on, have so far been relatively inconsequential, so the vulnerability persists, and by the look of it massive DDoS attacks are not going away any time soon.
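
The arithmetic of these reflection attacks explains their potency. Taking round, commonly cited figures for the NTP "monlist" query (an amplification factor somewhere near 500, though the exact ratio varies from server to server), a modest amount of spoofed query traffic becomes an enormous flood at the victim:

    # Rough reflection/amplification arithmetic; figures are illustrative only.
    amplification = 500      # monlist responses are roughly 500x the request size
    attacker_mbps = 800      # spoofed query traffic the attacker can source

    victim_gbps = attacker_mbps * amplification / 1000
    print(f"~{victim_gbps:.0f} Gbps directed at the victim")  # ~400 Gbps

The attacker needs neither a botnet of that scale nor any great skill: the open reflectors and the absence of source address filtering do all the work.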

There have been other themes in the ISP Column over this period that have a far longer legacy than just the past decade. The tensions between carriage and content providers have been a major issue in this market, with perhaps a new twist in that it's now a three-way tension between carriage, content and the delivery device. A decade ago the content industry was searching for sustainable business models in the face of consumer resistance to pay-per-view and invasive banner ads, and there was a push from the content providers to impose a levy on access providers to fund the provision of content. The tables turned dramatically when the content providers turned to the advertising market and provided to the advertiser a very well informed view of the consumer. The search services started to track precisely what search terms were used by the consumer, in order to assemble a profile of the consumer. This profiling gathered momentum with email hosting and online document hosting services, refining these profiles into highly accurate pictures of each and every user of their services, which in turn allowed advertisers to bid up their payment per click based on their expectation that the ad is being delivered precisely to their chosen profile of receptive consumers for their product. As has been said many times, spam, whether it's email or online ads, is just a failure of information: if the advertiser truly understood the consumer and their current needs, these advertisements would become timely, helpful suggestions! The world of content services has moved inexorably into that realm, taking a large amount of revenue with it by tapping into advertising markets. The obvious losers in this evolving market appear to be the mainstream newspaper industry and broadcast television.

But the content industry is not having it all its own way, and others are also trying to wrangle a larger share of the revenue stream; one of the notable entrants in this space is the device vendor. Back in the days of the mainframe computer, it was not unusual to have the computer's software bundled into the cost of the hardware. It was also the case that the software was tightly bound to the particular environment of the hardware, so a change of hardware often implied a comprehensive change of software. With the evolution of hardware into smaller and cheaper units, through the desktop PC, the laptop and the pocket device, the costs of hardware shrank, and software became a separate commodity. This repricing of software led to the vast wealth in the 1980's and 90's of Microsoft, which was in essence a software house, while computers became undistinguished commodity platforms. Few hardware brands of the 1980's are still around, but one noted survivor is Apple. Their iPhone was a restricted device, where the only software apps that could be loaded onto the device were those sold through the Apple store. Any purchases made by these apps also involved the Apple store. And in every case Apple took a share of the action. Now not only was Apple receiving a payment from the sale of the hardware device, but it was also locking in a continuing revenue stream from the subsequent purchases of goods and services that were made by apps running on that device. Happy days indeed for Apple, which currently vies with Exxon as the world's most valuable publicly traded company.

And of course there are the content providers. These days the search engines, mail hosts and cloud service providers have been joined by the content streamers, and this is causing some new tensions to arise. With the advent of video streaming, the carriage providers are claiming that it is no longer possible to provide a consistent service for all forms of traffic, and that the concept of so-called "network neutrality" is unsustainable. The issue has headed to the US legal system, where it seems that, in a reprise of the AT&T anti-trust arrangements of the 1980's, we are once more seeing national telecommunications policy being determined by the judiciary. A convenient move by the FCC some years back that classified Internet Service Providers as value-added service providers, as distinct from common carriers, has meant that regulatory provisions that would apply to common carriers, including network neutrality with respect to content, don't appear to apply to ISPs in that country. It's a frighteningly ominous thought that we may be looking at the re-establishment of vertically bundled monopolies in this industry, and without clear constraints in place that impose limitations on both carriage and content providers, that may well be exactly what eventuates in the coming years.

You'd think that with all this happening there would also be a parallel story of continuing technical innovation in the world of communications protocols. Somewhat surprisingly, there is little that's fundamentally different in the basic protocols of the Internet of 2014 as compared to those of 2002. The almost ubiquitous deployment of NATs in today's network implies that the two widely used transport protocols, UDP and TCP, remain the only two viable transport protocols, and efforts to use other protocols can't break out of the lab, simply because NATs don't recognize them. But while there has been little in the way of further innovation in terms of protocol models to support novel communications services, what we used to be able to count on is shrinking. UDP is under pressure because of IPv4 address exhaustion, and arbitrary TCP ports are being blocked by over-zealous firewalls. And ICMP appears to be a lost cause, which bodes ill for IPv6 and its reliance on ICMP for handling packet fragmentation. It seems that if you want to create an end-to-end transport session in today's Internet, a conservative choice would be to use the secure socket layer used by the web, and limit yourself to communicating over TCP port 443. The technology picture in routing is similar, in so far as not much has changed: BGP still carries the load of the inter-domain routing task, and the IGP role is commonly handled by OSPF or IS-IS.
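
That conservative choice looks something like the following Python sketch: a TCP session to port 443 wrapped in TLS, which is about the only port and protocol combination that NATs and firewalls pass without question. The host name here is just a placeholder:

    import socket
    import ssl

    context = ssl.create_default_context()  # system trust store, sane defaults

    # TCP to port 443, then TLS on top: the path of least middlebox resistance.
    with socket.create_connection(("www.example.com", 443)) as tcp:
        with context.wrap_socket(tcp, server_hostname="www.example.com") as tls:
            print(tls.version())             # the negotiated protocol version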

The tension between virtual circuits and packets within the common carriage network persists, and we've seen the use of MPLS and various forms of network-based VPN technologies being touted over this period. In some ways this tension was always going to happen. The original Internet design is based on a simple architecture of switching elements, where each switching element performs a stateless forwarding decision based on the destination address contained in the packet header. But such simple networks do not necessarily "add value," and in an attempt to increase their revenue share it should be no surprise to see the carriage sector attempting to redefine the customers' requirements in such a way that only more complex, and necessarily more expensive, networks can meet them. If all you have is a hammer, then everything looks like a nail, and if all you have is a network, then everything looks like MPLS!

Where we've seen the greatest change is in the area of networked services. The last ten years have seen the emergence of social networking apps that have made use of the Internet incidental rather than a dedicated activity. Taken a photo? Instagram it. Heard something? Tweet it. Taken a cute cat photo? Please, please, just keep it to yourself!

What can we expect in the next dozen or so years?

It's easy to take a very pessimistic stance on this question. Will the industry re-establish itself into powerful bundled monopolies and resist further innovation and change, as we witnessed with the telephone companies in the latter half of the twentieth century? Is this period of disruption and change drawing to a close? Is IPv6 just too much of a change for this industry? Is net neutrality now over? Have the worst excesses of an aggressive copyright lobby managed to find strong political resonance? Will all online content disappear behind pay walls? Apple's iPhone is a good illustration of a rapacious business model lurking behind a wonder of modern technology and seductively beautiful design. Can we expect to see more of the same, where the underlying technology is hermetically sealed inside a pay-as-you-play Candy Crush exterior? Has the "free and open" Internet had its day in the sun?

However, it's also possible to be an optimist about the coming years. The world of 2002 was a dramatically different world from that of today in so many ways. A "smart phone" then was one that did WAP. Badly. The Internet did some things well, but for others it was of no use. Which bus should I catch? When is the next train coming? Is that the best price? Where are you? Where am I? All unanswerable questions 12 years ago. We've managed to perform some amazing feats in the intervening years. We've got powerful computers down to just three buttons, powered by a lightweight battery, and all fitting into a form factor the size of a pocket. And we're able to manufacture these devices for well under $100 per unit. And we've been able to connect these mobile devices to radio networks capable of delivering 10s or even 100's of megabits per second. In 2002 such feats were considered pretty much impossible. These days we see such feats of technology as being ordinary. What about the next dozen years? Smaller, faster, cheaper, certainly. But there is still an elusive goal out there which may be achievable. I don't really want a "smarter" device, or a "smarter" car, or a "smarter" app. I really want a "wise" device, a "wise" car, and a "wise" app. I would like to be informed about what I need to know at the moment when it's relevant to where I am. And nothing else. Is this too much to ask? Somehow I don't think so, and perhaps in the next few years that's exactly where this unique combination of information processing and communications will be heading.

 

This is the last ISP Column to be published on the Internet Society's web site. However, that's not the end of the ISP Column. You will be able to find more articles at http://www.potaroo.net/ispcol/
