By and large, says Tim Lee, the argument over network neutrality tends to be dominated by two groups: those from a technical background, who stress the importance of the Internet’s open, end-to-end architecture, and those from an economic background, who argue that non-neutrality makes business sense.
Lee, a frequent contributor to Ars Technica and Techdirt, recently wrote “The Durable Internet,” a paper published by the libertarian Cato Institute. In it, Lee argues that both sides miss a key point: the Internet’s open, end-to-end architecture is not likely to vanish, despite the fears of net neutrality proponents and the wishes of net neutrality opponents. For that reason, network neutrality legislation may not be necessary, or even desirable from an open-networks perspective.
In “The Durable Internet,” Lee addresses these concerns in plain English, but with enough technical detail to make the case to readers familiar with the makeup of the Internet and the technical issues involved.
We interviewed Lee by phone; you can find the podcast here, with a transcription below the fold.
NPD: First of all, thank you for talking with us. Second of all, could you tell me a little bit about who you are?
Tim Lee: Sure. My name is Tim Lee, and I’m currently a graduate student in computer science at Princeton. Princeton recently started a Center for Information Technology Policy that does interdisciplinary work between computer science and public policy, and network neutrality is a good example of the kind of issue we research here.
I’m also a scholar with the Cato Institute, for whom I wrote this paper. As for my background, I have an undergrad degree in computer science, and I’ve spent several years writing on public policy.
NPD: Could you tell me a little bit about the paper that you’ve written?
Lee: A big part of what I was trying to accomplish is this: the people who have been arguing against regulation tend to be lawyers and economists who don’t really understand how the Internet works, so they make arguments that just aren’t very credible to people with a more sophisticated understanding of the network. The kind of arguments I make in this paper are maybe not what you would expect from the lawyers and the economists, but they take seriously the concerns that more tech-savvy people have.
Yeah, so the debate over network neutrality has broken down into two camps. On one hand, you have the pro-network neutrality people, who for the most part believe that network neutrality is a good thing as a technical matter, but also that it would be a good idea to give the government, probably the FCC, authority to oversee the way the Internet is administered and to mandate that certain kinds of network neutrality rules be enforced. On the other hand, you have the anti-network neutrality folks, who are not only against regulation but think the whole concept of network neutrality is not necessarily a good idea. They want to see service providers do what they call “network management”: having routers and other hardware in the Internet backbone, or at last-mile ISPs, do various kinds of filtering, traffic shaping, and so forth.
What my paper does is take a middle position. I mostly agree with the network neutrality folks on the technical question: do we want Internet service providers screwing around with traffic? But on the regulatory side, I think there are good reasons to believe that even if you support network neutrality as a technical matter, it’s not necessarily a good idea to give the government the power to mandate it.
NPD: In this paper, you make a distinction between “the end-to-end principle” and “network neutrality.” Could you explain a little bit about that distinction?
Lee: Yeah, well, basically what I was trying to do was be a little more rigorous than I think some people have been in the debate. “Network neutrality” is a term that was invented by Tim Wu (http://en.wikipedia.org/wiki/Tim_Wu), a law professor at Columbia University, and different people who use the term seem to mean slightly different things by it. I think they’re all reasonable ways to use the term, but there’s not a lot of agreement about what it means. So I chose to use the term “end-to-end,” which is more often used by engineers. It’s the concept that the Internet’s purpose is to move bits from one end of the network to the other without really doing anything to them other than faithfully delivering them to their destination. For the most part, what network neutrality people are talking about is the end-to-end principle, but I think the end-to-end principle is a somewhat more precise concept.
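To make “doing nothing to the bits” concrete, here is a toy sketch (hypothetical names, not real router code) of a forwarder in the end-to-end spirit: the only field it ever reads is the destination address, so the payload, whether web, e-mail, or VoIP, cannot affect how it is handled.

```python
def forward(packet: dict, routing_table: dict) -> str:
    """A "dumb," end-to-end-style forwarder: it consults only the
    destination address. The payload is never inspected, so every
    application is treated identically. (A toy model for illustration.)"""
    return routing_table[packet["dst"]]

routes = {"203.0.113.7": "eth1", "198.51.100.2": "eth0"}
packet = {"dst": "203.0.113.7", "payload": b"could be anything"}
print(forward(packet, routes))  # "eth1", regardless of the payload
```

This is the property Lee credits for the Internet’s cheapness and interoperability: because the network commits to so little, a new application works without the network’s cooperation.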
NPD: So you think that the “dumb network,” the end-to-end principle, network neutrality as a technical concept, whatever you want to call it, has worked? That there really shouldn’t be, or really doesn’t need to be, any fundamental change to the network?
Lee: I think so. I mean, I really don’t want to say that there should be no change. What I would say is that, historically, if you look at the development of the Internet and of the other networks that were competing with it in the 70s, 80s, and early 90s, the thing that was distinctive about the Internet was that it did less with packets than other networks. It didn’t try to provide any sort of reliability guarantee. It didn’t try to distinguish high-priority from low-priority packets. What that meant was that it was very cheap to implement and very interoperable, and it put most of the decision-making power directly in the hands of end users.
For example, when Tim Berners-Lee (who is not related to me) first came up with the idea for the Web, he didn’t have to go to some sort of “Internet Inc.” and negotiate for the right to deploy this new application. He just had to give his client/server software to various people around the Internet, and they could immediately start using it, because it didn’t require any support from the network infrastructure itself. That basic quality of the Internet explains a lot of the innovation we’ve seen, and I think it’s important that it be preserved.
Now, I don’t think that means no changes are ever needed on the Internet. It might be that some sort of end-to-end-friendly prioritization scheme could be made to work, where end users control the priorities; that’s a technical question on which I don’t have a strong opinion. What I do have a strong opinion about is that control over the way packets are handled should remain in the hands of end users, so that you’re never in a situation where, before you can launch a new application or use the network in a new way, you have to negotiate with a bunch of different network owners and get their permission ahead of time.
NPD: Getting more specific – in the paper, you said that the physical ownership of Internet infrastructure does not translate into a practical ability to control its use. Could you explain a little bit about that?
Lee: Yeah, and this is probably the most complicated part of the paper. I look at several specific scenarios. The example I use to frame this is the Digg encryption key episode. There was an encryption key, the AACS key used to decrypt high-definition Blu-ray and HD DVD discs, that began to spread around the Internet, and the cartel that controls the encryption scheme started sending takedown letters to various Web sites. Among them was Digg. When somebody posted a story that revealed the value of the key (just a hex string that starts with “09 F9”), Digg would take it down. Digg’s users regarded this as censorship, and they reacted very strongly: they began posting thousands and thousands of copies of the story in different formats, then images and videos and audio files containing the key. The end result was that Digg’s management tried as hard as it could to get rid of the key, and the effort had the opposite of its intended effect. At one point the entire front page of Digg was filled with stories about the key. The moral of the story is that Digg had complete control over the Web servers, the code, the network connection, and all the software, but ultimately the users were in control. As long as Digg wanted to be the kind of Web site it was, it had to give end users a certain amount of control.
Now, that’s not a network neutrality issue per se, but I think there’s an analogous dynamic at work with the Internet itself. If an ISP wants to offer anything that resembles Internet service to the general public, it’s not going to be able to exert fine-grained control over the way the network is used, because the architecture just doesn’t provide network owners with any hooks for doing that.
Applications don’t tell ISPs what they’re doing, and ISPs don’t have any knobs or levers they can turn. So it’s easy to deploy a new application and very difficult to block one effectively. The balance of power, I argue, is really with the users.
NPD: What about a scenario like BitTorrent and Comcast, where Comcast was able to use Sandvine software to block BitTorrent, for a while anyway?
Lee: Right. Well, one thing I should say, just to be clear: I’m not claiming that it’s never possible for any ISP to interfere with any traffic. Clearly they can use firewalls, or more sophisticated deep packet inspection gear like Sandvine’s, and block particular categories of traffic. But I don’t think that’s the question. The question is: can they exert the kind of practical, long-term control over the network that network neutrality advocates worry would serve the ISPs’ business interests? In this case, can they actually keep BitTorrent off their network for a long period of time? I think the moral of the BitTorrent story is “probably not,” for two reasons.
First, you had a business, legal, and PR backlash against what Comcast did. There’s some debate over whether it was mostly pressure from the FCC, mostly bad PR, or mostly customers switching to competitors; we’re not going to be able to settle that one way or another, because it all happened at the same time and it all contributed. But the other thing, which is not commented on much, is that at the same time a lot of BitTorrent users started using what’s called header encryption, a variant of the BitTorrent wire protocol in which packets are encrypted in such a way that it’s not easy to detect and filter them. The result, I think, is that even if none of the other pressures had stopped Comcast, in the long run BitTorrent users would simply have all switched to the encrypted version of the protocol, and Comcast still would not have had any practical way to block BitTorrent. Within a couple of years they would have been back to square one, with a much harder blocking problem on their hands. So I have a lot of criticism for what Comcast did, and I definitely don’t think it’s an ideal situation, but it’s also not the end of the world.
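To make that cat-and-mouse concrete, here is a minimal sketch of the kind of signature matching a deep-packet-inspection box might perform, and why encrypting the stream defeats it. (The classifier below is hypothetical, not Sandvine’s actual method; the fixed handshake signature, however, is real.) A plaintext BitTorrent connection opens with the byte 19 followed by the string “BitTorrent protocol,” which is trivial to match; the header encryption Lee describes (Message Stream Encryption) wraps the handshake in a negotiated RC4 stream, leaving no fixed bytes to match.

```python
import os

# A plaintext BitTorrent handshake starts with a fixed signature:
# a length byte (19) followed by the protocol name.
BT_SIGNATURE = bytes([19]) + b"BitTorrent protocol"

def looks_like_bittorrent(first_bytes: bytes) -> bool:
    """Hypothetical DPI check: flag a flow whose opening bytes match
    the well-known plaintext handshake."""
    return first_bytes.startswith(BT_SIGNATURE)

# The plaintext handshake is trivially detected...
plain = BT_SIGNATURE + os.urandom(48)  # reserved bytes, info-hash, peer ID
print(looks_like_bittorrent(plain))      # True

# ...but an encrypted handshake is statistically indistinguishable
# from random bytes, so the signature check finds nothing.
encrypted = os.urandom(68)  # stands in for an MSE-obfuscated handshake
print(looks_like_bittorrent(encrypted))  # False
```

An ISP can escalate to behavioral heuristics, but each escalation is costlier and riskier for innocent traffic, which is the asymmetry Lee’s argument relies on.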
NPD: Well, here’s the thing that gets me. Market forces might discourage a company from acting against an open, end-to-end Internet. But what about all the times when companies just act stupidly and do things that are against their own self-interest? Just because something is against a company’s self-interest does not mean the company won’t do it.
Lee: Sure, and that’s a great point. Two things I would say about that. First, this is why we want policies that maximize competition as much as possible, because the best-case scenario is one in which the consumer has lots of choices and can switch away from a given ISP. We’re not where we’d like to be on that. Most customers have two options, and it’s conceivable that you’d have an area where both ISPs are doing things customers don’t like and there’s no third option.
But the other thing to remember is that when we’re comparing two options, a world with network neutrality regulation and one without, we have to be realistic about both. You’re right: in a world without network neutrality regulation, companies aren’t always going to make the right decision, or even the decision that’s in their own self-interest. But it’s also true that in a world with network neutrality regulation, the regulator is not always going to make the decision that’s best for the consumer, or the most rational one. So you have to compare the relative risks of companies screwing up versus the FCC screwing up. That’s really the point of the final section of my paper, where I look at the track record of past efforts to create regulatory authorities to police offending industries, and the track record is not very good. Once in a while you’ll find an example of government regulation that works very well, but there are also a lot of cases where it creates unnecessary red tape, restricts competition, and so forth. So I think that, on balance, you’re more likely to have success with a system that relies on the self-interest of network owners than with a system that relies on the FCC to resist the enormous lobbying pressure of special interests.
NPD: What I would like to know: you mentioned earlier that with the BitTorrent episode, encrypted headers were developed. But not everybody who uses BitTorrent is savvy enough to go for the encrypted headers. What about the technically unsavvy market? That’s an important market for VoIP companies like Vonage, and for companies offering online video that competes with the cable companies, like Hulu and Joost.
Lee: One of the good things about the Internet, and about computer technology in general, is that it’s fairly easy to encapsulate technical know-how in the form of user-friendly applications. Another example I look at in my paper is the perennial battle between the major instant messaging providers (Yahoo, AOL, and Microsoft) and the makers of third-party instant messaging applications. These providers don’t particularly want third parties developing software that lets users connect to their networks without running the providers’ own clients, so periodically, in the early part of this decade, they would make changes to the protocol to lock third-party clients out. What would happen is that the third-party developers would very quickly find workarounds and share among themselves the code needed to keep their software working, and ultimately the service providers just gave up and said, “Fine, we can’t stop this without inconveniencing our legitimate users, so we’re going to let it happen.”
The interesting thing is that taking advantage of those workarounds didn’t require any particular technical expertise. With most of these clients it was just a single download, or a single click, to install the new version. So there’s a small hurdle of knowing how to install a new application, but there are millions of users able to clear it.
Now, if you got into a situation where major ISPs were trying to block popular protocols, I think it would get even easier, because three of the biggest backers of network neutrality are Google, Microsoft, and Apple, all of whom have large installed bases of software and hardware that automatically pull updates from the Internet. It’s not hard to imagine that if somebody tried to block one of Microsoft’s applications, Microsoft would just have Windows Update push out a new version of the software with these kinds of workarounds built in. It’s hard to predict exactly how these things would play out.
But yes, at the very beginning of the process it might be that only the technically savvy users could use these workarounds. I think you would very quickly see the techniques packaged into user-friendly applications, though, so that people could benefit from them without any special computer knowledge.
NPD: Are there any limitations that ISPs could impose that can’t be circumvented by tech-savvy users?
Lee: That’s a good question. ISPs can obviously restrict very basic characteristics of a connection, so they can put a cap on total bandwidth or on throughput at any particular time. Probably the hardest case is latency. One of the things I talk a little about is ISP interference with Internet telephony applications. One thing an ISP can do is simply insert random amounts of jitter into its network. If you’re sending an e-mail, a half-second delay is not a big deal, but on a phone call, a half-second delay in transmitting the voice packets renders it almost unusable.
So in that case it’s possible, because ISPs don’t have to identify which traffic is VoIP; they just degrade everything. The problem is that I think this causes a fair amount of collateral damage. For one thing, some of the most lucrative customers for ISPs are hardcore gamers who like to play World of Warcraft or Counter-Strike or whatever, and they’re also very sensitive to latency. So that’s one of the problems ISPs would have.
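To put rough numbers on why uniform jitter ruins calls while barely touching e-mail, here is a minimal simulation (the buffer size and delay figures are typical illustrative values, not from Lee’s paper): VoIP sends small packets on a fixed schedule, and any packet delayed beyond the receiver’s jitter buffer is effectively lost.

```python
import random

random.seed(1)

# A typical VoIP jitter buffer absorbs a few tens of milliseconds of
# delay variation; anything arriving later is dropped from playback.
# (Illustrative figure, not drawn from the paper.)
JITTER_BUFFER_MS = 60

def dropout_rate(injected_jitter_ms: float, packets: int = 100_000) -> float:
    """Fraction of voice packets arriving too late when the network
    adds a random delay between 0 and injected_jitter_ms."""
    late = sum(
        1 for _ in range(packets)
        if random.uniform(0, injected_jitter_ms) > JITTER_BUFFER_MS
    )
    return late / packets

# Mild jitter is absorbed entirely; half a second of jitter destroys
# the call, while a bulk transfer like e-mail wouldn't notice either.
print(f"{dropout_rate(40):.1%}")   # 0.0%
print(f"{dropout_rate(500):.1%}")  # roughly 88%
```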
The other difficulty is that this would be a particularly hard area to regulate, because if an ISP’s average latency goes up, it’s not going to be obvious whether it’s something they did on purpose, something they could fix but are choosing not to, or just the way the network works. So I think that’s probably one of the stronger cases for regulation, but it would also be one of the places where it would be particularly hard to craft legislation that actually solves the problem.
NPD: What about bandwidth caps? Are bandwidth caps a violation of the end-to-end principle?
Lee: No, they’re not, because they’re neutral with respect to the kinds of packets being carried. The end-to-end principle says, “If you give us some packets, we will treat them the same regardless of what kind of packets they are.” A bandwidth cap just says, “We’ll only take this many packets, but we’ll treat them all the same.”
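One way to see the distinction Lee draws is that a cap can be enforced by counting bytes alone, with no inspection of what the packets contain. A minimal sketch (hypothetical class, purely illustrative):

```python
class NeutralBandwidthCap:
    """A content-agnostic monthly cap: whether a packet is carried
    depends only on how many bytes the subscriber has used so far,
    never on what the packet contains. (An illustration of the
    distinction Lee draws, not any ISP's implementation.)"""

    def __init__(self, monthly_cap_bytes: int):
        self.cap = monthly_cap_bytes
        self.used = 0

    def admit(self, packet: bytes) -> bool:
        # Only the packet's size is examined, never its payload.
        if self.used + len(packet) > self.cap:
            return False
        self.used += len(packet)
        return True

# BitTorrent, web, and VoIP packets are all subject to the same rule.
cap = NeutralBandwidthCap(monthly_cap_bytes=250 * 10**9)  # a 250GB cap
print(cap.admit(b"\x13BitTorrent protocol ..."))  # True
print(cap.admit(b"GET / HTTP/1.1\r\n..."))        # True
```

Discriminatory throttling, by contrast, has to look inside the traffic to decide what to slow down, which is exactly what the end-to-end principle forbids.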
If you look at recent proposals like Snowe-Dorgan, for example, which was the most prominent proposal in 2006, I don’t think anything in it would have applied to bandwidth caps. It’s an interesting question whether there are problems with bandwidth caps, but I don’t think the leading network neutrality proposals would have anything to say about those problems.
NPD: Well, do you think there would be problems with bandwidth caps? For example, say you have a monopoly ISP in an area; I know competition is important, but in many places we do have monopoly ISPs. If it institutes a bandwidth cap on its service, that could limit online video providers such as Apple, with its Apple TV, and Netflix, with its streaming on-demand service. And even if a tech-savvy user finds a way around the cap, you’re now talking about theft of services. So, basically, do you think there is reason for concern here?
Lee: Yeah, I think there’s some reason for concern there. As I said, I don’t think this is something the leading network neutrality proposals would deal with, so it’s somewhat of a separate debate. But the thing to remember is that when ISPs invest in new infrastructure, they’re going to want to get a return on it. Comcast just instituted a 250GB bandwidth cap, I believe. If the technology keeps improving, there will come a point at which that looks ludicrously low; Verizon or AT&T or somebody will have a 10TB cap, or something like it. It’s conceivable that Comcast will keep the cap at 250GB forever, but if it does, I think it will have increasing trouble getting consumers to sign up for premium services, because the cap will look tiny compared with what’s available in other parts of the country. But yeah, I definitely agree that it’s an area of concern, and if it becomes a problem, it may be something worth debating.
The other thing to keep in mind is that this is a long-term problem. Some of the scenarios in the network neutrality debate involve ISP interference that would cause harm right away, where existing applications would stop working. 250GB is enough for a reasonable amount of standard-definition video downloading, so what we’re talking about is damage to future applications that don’t exist yet. If it takes a couple of years to have a debate about the best way to deal with that, I don’t think it’s a big deal. So it’s something where we can wait and see how the market plays out, and if, a few years from now, some ISPs really are artificially capping the available bandwidth, then we can talk about whether some sort of regulatory solution is appropriate, once it’s clear there’s a problem.
NPD: In your opinion, what should users who are interested in preserving the end-to-end principle on the Internet be doing right now?
Lee: I think that not a lot is probably necessary. The main thing they need to do is be informed, savvy consumers. They should read publications that focus on these kinds of issues; there are lots of online news sites, including two I contribute to, a blog called Techdirt and a news site called Ars Technica, both of which cover these issues in some detail. Just being aware when these things happen, talking to their friends about it, and being ready to raise a stink if and when it becomes a problem is probably all they need to do. So far, for all the predictions of really serious problems, I think it’s remarkable how few problems with the end-to-end principle we’ve actually seen. Comcast interfered with one protocol for about a year, but Comcast is something like 10% of the market; the other 90% has seen very few problems.
So the vast majority of consumers have unfettered access to the Internet, and I don’t think there’s any reason to expect that to change. As long as consumers are paying attention and are ready to act when necessary, I don’t think it’s something we need to lose any sleep over.