On Friday, January 5, 2007, this blog published an article from an interview with Professor Christopher Yoo.
We have some important information regarding that article.
We originally conducted the interview with Professor Yoo on Wednesday, January 3, 2007, quickly edited a first draft of the transcript, and sent it to him for his review – to make sure that his words were accurately transcribed and that we were not doing damage to the substance or spirit of his words.
On January 4, 2007, Prof. Yoo sent us two e-mail messages pointing out flaws in the draft and making corrections – messages that never reached us. We do not know what happened to them. We have looked in our e-mail archives and our spam filters, and even checked whether the e-mails were somehow misrouted through one of our mailing list filters. The messages simply never arrived on our network.
On our end, there was only silence by the time our press deadline arrived on Friday. On Friday afternoon, January 5, 2007, we published the article as it stood. It was quickly picked up by Slashdot.org.
On Saturday, January 6, 2007, Prof. Yoo wrote back, telling us that he was very upset about the article’s publication and asking why we had not made the corrections he requested. On Sunday, January 7, 2007, I checked my e-mail remotely and found Prof. Yoo’s January 6 e-mail.
After an e-mail exchange and a phone conversation, I agreed that, though it was through no direct fault of our own, the article had done him a disservice, and I resolved to repair any inaccuracy or anything unfair to his words or image.
Because the article was linked on Slashdot, it is very unlikely that this correction will receive the same attention that the original article did. We are trying to remedy that, and have written to Rob Malda, editor of Slashdot, in the hope that Slashdot can help by placing this correction notice in a “Slashback” post.
The corrected article appears below. We have also promised Prof. Yoo a right of reply on the blog – if he wishes to make a post explaining the situation in his own words, he needs only to send an e-mail to either of the addresses we have provided him, and we will post his words exactly as they come to us.
– Brian Boyko
– Editor, Network Performance Daily.
(Corrected article appears below)
Prof. Christopher Yoo, Vanderbilt University School of Law
This article continues our series examining the issue of Network Neutrality.
Professor Christopher Yoo joined the faculty of the Vanderbilt University School of Law in 1999, and his research focuses primarily on how technological innovation and economic theories of imperfect competition are transforming the regulation of electronic communications.
In addition to clerking for Justice Anthony M. Kennedy and working at the law firm of Hogan & Hartson under the supervision of now-Chief Justice John G. Roberts, Jr., he has also published “Network Neutrality and the Economics of Congestion” [PDF] in Georgetown Law Journal, and “Beyond Network Neutrality” [PDF] in the Harvard Journal of Law and Technology.
We asked him to share his thoughts on Net Neutrality with us.
The Internet has undergone a rather amazing transformation over the last several years. The number of Internet users has exploded, and the number of possible connections has grown quadratically with the number of users: n users can form n(n-1)/2 distinct pairwise connections.
Furthermore, the variety of ways in which consumers use the Internet has exploded along with the number of users. The early Internet was dominated by applications, such as e-mail and web browsing, that did not require significant amounts of bandwidth and were not particularly sensitive to delay. In fact, delays of half a second were essentially unnoticeable. Over time, consumers have begun to turn to newer applications, such as streaming video, online gaming, and Internet telephony (also known as “voice over Internet Protocol,” or VoIP), which place much greater demands on the network. Many of these new-style applications employ sophisticated graphics that require much more bandwidth than did previous applications. Equally important, many of these new applications are significantly more sensitive to delay. In fact, a delay of as little as a third of a second can render VoIP unusable under the global standards for voice communications promulgated by the International Telecommunication Union.
As a result, network providers are trying to meet these new demands by experimenting with new ways to manage congestion and reduce delay. Enhancing network owners’ ability to manage traffic is essential if the full range of innovative content and applications that depend on guaranteed transmission speeds is to appear.
The problem is that the Internet of today is not well designed to manage the recent increases in congestion or to support these newer, more time-sensitive applications. Most Internet users and providers currently communicate through a set of nonproprietary protocols known as TCP/IP, which routes traffic on a “first come, first served” basis. Although such an approach was sufficient to support low-bandwidth, non-time-sensitive applications like e-mail and web browsing, it cannot provide the guaranteed transmission rates that streaming video and VoIP need to survive. The current network’s inability to meet these new demands has prompted a number of leading technologists to regard TCP/IP as a thirty-year-old technology that is rapidly becoming obsolete.
One obvious solution would be to give traffic associated with time-sensitive applications higher priority than traffic associated with non-time-sensitive applications. Doing so would allow the network to guarantee transmission speeds to the applications that need them even when network capacity is overloaded. Without priority routing, many innovative services that depend on guaranteed throughput rates could not exist. Indeed, companies developing applications that depend on guaranteed transmission rates have indicated that they would willingly pay more to guarantee faster service, in much the same way that people who absolutely, positively need to send a letter coast to coast overnight are more than willing to pay FedEx a premium to cover the additional costs of providing express service. Unfortunately, this is precisely the type of discrimination between types of applications and levels of service that network neutrality would condemn.
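(To make the contrast concrete, here is a minimal Python sketch of the difference between “first come, first served” queuing and priority scheduling. The packet names and the simple two-class priority scheme are invented for illustration; real routers implement far more sophisticated queuing disciplines.)

```python
import heapq
from collections import deque

# Hypothetical packets in arrival order: (name, is_time_sensitive)
packets = [("email-1", False), ("voip-1", True),
           ("web-1", False), ("voip-2", True)]

# First come, first served: packets depart in arrival order,
# so time-sensitive VoIP traffic waits behind bulk traffic.
fifo = deque(packets)
print("FIFO:    ", [fifo.popleft()[0] for _ in range(len(packets))])

# Priority routing: class 0 (time-sensitive) jumps ahead of class 1;
# the arrival index breaks ties, keeping order stable within a class.
heap = [(0 if sensitive else 1, i, name)
        for i, (name, sensitive) in enumerate(packets)]
heapq.heapify(heap)
print("Priority:", [heapq.heappop(heap)[2] for _ in range(len(packets))])
```

Under the FIFO discipline the VoIP packets wait behind the e-mail traffic; under the priority discipline they are dispatched first, which is exactly the guarantee that delay-sensitive services need.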
Another interesting innovation is the emergence of content-delivery networks like Akamai, which reportedly serves 15% of the world’s Internet traffic. Suppose that an end user in Los Angeles attempts to download a webpage from CNN.com. If CNN.com hosted the content itself, the request would have to travel thousands of miles to a server at CNN’s headquarters in Atlanta and back, passing through any number of points of congestion along the way. Akamai takes a different approach. Rather than storing Internet content at a single location, Akamai caches content at over 14,000 locations around the Internet. Storing content closer to consumers reduces transmission costs and delay by allowing the network to route requests for content to servers that are closer and less congested. In the process, it provides additional security against denial-of-service attacks and other types of malevolent activity that plague the Internet. The catch, from the standpoint of network neutrality, is that Akamai is a commercial service available only to those who are willing to pay for it. In other words, CNN.com may be able to serve customers more quickly than MSNBC.com so long as it is willing to pay Akamai for its services.
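(The routing idea can be sketched in a few lines of Python. The server names and latency figures below are invented for illustration; a real content-delivery network maps users to replicas through DNS and live network measurements, not a static table.)

```python
# Hypothetical round-trip latencies (ms) from a user in Los Angeles
# to the origin server and to two cached replicas of the same content.
latency_ms = {
    "atlanta-origin":   70,  # content hosted at headquarters
    "chicago-edge":     40,  # cached copy partway across the country
    "losangeles-edge":   5,  # cached copy near the user
}

def pick_replica(latencies):
    """Route the request to the lowest-latency copy of the content."""
    return min(latencies, key=latencies.get)

print(pick_replica(latency_ms))  # -> losangeles-edge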
These examples underscore how the best design for yesterday’s Internet may not always be the best design for today’s or tomorrow’s. The Internet must be allowed to evolve to meet the times. Well-intentioned efforts to preserve the benefits of the past threaten to prevent the Internet from adapting to shifts in technology and changes in what people want from the network. These examples also underscore the extent to which the term “network neutrality” represents something of a misnomer. Every network architecture inevitably favors some applications and disfavors others. As a result, mandating the use of any particular architecture would be anything but “neutral.”
Rather than locking the Internet into any particular architecture, Congress should instead embrace a principle that I call “network diversity,” which would allow network providers to experiment with different ways to meet the needs of today’s users. Allowing networks to experiment in this manner would provide valuable information about different ways to manage congestion while increasing the variety of services available to consumers.
Allowing broadband providers to use different protocols can also expand the number of dimensions along which networks compete with one another for business. Employing different protocols might permit smaller network players to survive by targeting sub-segments of the larger market, in much the same way that specialty stores do when confronted with competition from a low-cost, mass-market retailer. For example, network diversity might make it possible for three last-mile networks to coexist: one optimized for traditional Internet applications (such as e-mail and website access); a second designed to facilitate time-sensitive applications (such as streaming video and VoIP); and a third incorporating security features to facilitate e-commerce and to guard against viruses, spam, and other undesirable aspects of life on the Internet. By mandating that the entire Internet operate on a single protocol, network neutrality threatens to foreclose this outcome and instead force networks to compete on price and network size, considerations that reinforce the advantages already enjoyed by the largest players.
The last twelve months also demonstrate how imposing network neutrality could slow the deployment of new last-mile technologies. The Supreme Court’s June 2005 Brand X decision made clear that content and applications providers could no longer count on regulation to guarantee access to cable modem and DSL systems. When faced with the prospect of losing access to the existing network, companies such as Google, Microsoft, Earthlink, and Intel began to pour money into alternative last-mile technologies, such as wireless broadband and broadband over powerline (BPL), a shift demonstrated most dramatically by Google’s agreement to build a wireless broadband network in San Francisco for free. Guaranteeing content and applications providers access to the existing network would destroy their incentives to undertake these beneficial investments. In other words, access regulation threatens to deprive would-be builders of alternative last-mile networks of their natural strategic partners, thereby having the perverse effect of cementing the existing last-mile oligopoly into place.
The examples I have laid out demonstrate how deviating from network neutrality can actually benefit consumers and promote economic welfare. At the same time, network neutrality proponents point to the notorious Madison River case, in which a rural local telephone company blocked its customers from accessing VoIP, as evidence that deviations from network neutrality may actually harm consumers. In so doing, they overlook the fact that Madison River does not justify the kind of general nondiscrimination mandate favored by network neutrality proponents. Network owners like Madison River may have some incentive to block access to websites and applications that compete with services they already offer, but they have no incentive to block access to innovative services with which they do not compete, since doing so would simply lower the value of their network and thus the amount they can charge for access to it. In other words, a DSL provider that does not operate an auction site of its own has no plausible incentive to block access to eBay. At most, then, concerns about website blocking would support a limited regulatory intervention prohibiting vertically integrated network owners from blocking content and applications that compete directly with their own offerings. It would not support the type of blanket restrictions on discrimination associated with network neutrality.
It is thus quite plausible that there are circumstances in which deviations from network neutrality would be beneficial and others in which they would be harmful. How should our nation’s Internet policy react when confronted with this uncertainty? Fortunately, the Supreme Court’s antitrust jurisprudence, which embodies the conventional wisdom on competition policy, provides some useful guidance. The Court’s precedents suggest that if a practice would always be beneficial, the government should mandate it in all cases. If, on the other hand, the practice would always be pernicious, the government should prohibit it in all cases. When it is impossible to tell whether a practice would promote or hinder competition, the accepted response is to permit the practice to go forward until those challenging it can show actual harm to competition.
Supreme Court precedent would thus counsel against adopting regulations that would make ambiguous practices like access tiering categorically illegal. Instead, it would seem to favor taking the middle course embodied in my “network diversity” proposal, by allowing different networks to pursue different approaches unless and until they are shown to harm competition. Requiring a showing of actual, rather than hypothetical, harm provides the room for experimentation upon which technological and economic progress depends. It would guarantee that any intervention would be restricted to the precise scope of the threat to competition and would not spill over into activities that could not plausibly harm consumers. It would also exhibit appropriate humility about our ability to predict the technological future and would instead allow consumers’ choices to determine the shape of the Internet of tomorrow.