Saturday, September 26, 2009
NN Papers at TPRC
Nevertheless, some papers are more expressly directed at the NN debate. Here’s a list (and a one- or two-sentence summary by me).
The Evolution of Internet Congestion
- Steve Bauer, David Clark, William Lehr: Massachusetts Institute of Technology
(How does TCP manage bandwidth demands, and what alternatives are being developed? Calls for policies that continue to allow experimentation with alternative protocols.)
Congestion Pricing and Quality of Service Differentiation in Internet Traffic
- Guenter Knieps, Albert Ludwigs Universitat Freiburg
(Develops a pricing model for QoS tiers that accounts for the congestion externalities that higher-tier traffic imposes on lower tiers.)
Peer to Peer Edge Caches Should be Free
- Nicholas Weaver, ICSI
(Proposes deployment of freely-available P2P caches by local ISPs, which will decrease costs and congestion by keeping P2P traffic local. Develops an authentication mechanism to address ISP concerns about hosting ‘bad’ content.)
Invoking and Avoiding the First Amendment: How Internet Service Providers Leverage Their Status as Both Content Creators and Neutral Conduits
- Rob Frieden, Penn State
(ISPs seem to have qualities of both neutral conduits and speakers. ISPs’ exercise of traffic management in their role as conduits may cause them to lose the safe harbors that conduits generally enjoy. ISPs may respond by separating their operations.)
Free Speech and the Myth of the Internet as an Unintermediated Experience
- Christopher Yoo, University of Pennsylvania
(Free speech has historically been furthered by granting editorial discretion. The exercise of similar discretion by intermediaries is inevitable, and helps free speech – so NN regulation doesn’t help speech.)
How to Determine Whether a Traffic Management Practice is Reasonable
- Scott Jordan, Arijit Ghosh: University of California, Irvine
(Provides an analytic structure for determining whether a traffic management practice is reasonable. The framework could allow ex ante guidelines/decisions to be made, instead of relegating decisions to case-by-case ex post analysis.)
Friday, September 25, 2009
TPRC
Wednesday, September 23, 2009
Examples of Discriminatory Protocols
Consider a few examples of ways that a protocol might discriminate. Many forwarding protocols tend to favor applications that generate large packets over applications that generate small ones. Other forwarding protocols favor applications that generate traffic in steady streams over applications that generate traffic in bursts, even if the total amount of traffic is the same. Almost every protocol provides different service based on round-trip times (and hence distance). There are protocols that mitigate or eliminate some of these effects. It will be interesting to see whether the FCC can craft principles nuanced enough to strike the right balance.
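To put the round-trip-time point in concrete terms, here is a minimal Python sketch (my own illustration, not drawn from any particular paper) using the well-known rule of thumb that steady-state TCP throughput is roughly proportional to MSS / (RTT × √loss). Two flows seeing the same loss rate but different RTTs get very different shares of capacity:

from math import sqrt

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    # Rough steady-state TCP throughput (constant factor of ~1.2 omitted).
    return (mss_bytes * 8) / (rtt_s * sqrt(loss_rate))

# Hypothetical numbers: identical packet size and loss rate, different distances.
nearby = tcp_throughput_bps(1460, 0.020, 0.001)   # 20 ms round trip
distant = tcp_throughput_bps(1460, 0.200, 0.001)  # 200 ms round trip
print(round(nearby / 1e6, 1), "Mbps vs.", round(distant / 1e6, 1), "Mbps")
# The distant flow gets roughly one-tenth the throughput, purely because of RTT.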
One might regard these forms of discrimination as weak or indirect. Even more interesting are routing policies that explicitly discriminate on the basis of source. To quote the four examples used in a leading textbook on computer networking (a rough sketch of how such policies might be expressed in code follows the list):
· Never put Iraq on a route starting at the Pentagon.
· Do not transit the United States to get from British Columbia to Ontario.
· Only transit Albania if there is no alternative to the destination.
· Traffic starting or ending at IBM should not transit Microsoft.
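To give a flavor of how such policies might be expressed, here is a small Python sketch of a route filter implementing two of the policies above. Everything in it is hypothetical: the AS numbers come from the private-use range, and real-world policies are written in router configuration languages or RPSL rather than Python.

# Hypothetical autonomous system numbers (private-use range), for illustration only.
PENTAGON_AS, IRAQ_AS = 64500, 64501
IBM_AS, MICROSOFT_AS = 64502, 64503

def route_is_acceptable(source_as, dest_as, as_path):
    # "Never put Iraq on a route starting at the Pentagon."
    if source_as == PENTAGON_AS and IRAQ_AS in as_path:
        return False
    # "Traffic starting or ending at IBM should not transit Microsoft."
    if IBM_AS in (source_as, dest_as) and MICROSOFT_AS in as_path:
        return False
    return True

def best_acceptable_route(source_as, dest_as, candidate_paths):
    # The chosen route is the shortest *acceptable* one, which may be longer
    # (more autonomous systems) than the shortest route overall.
    ok = [p for p in candidate_paths if route_is_acceptable(source_as, dest_as, p)]
    return min(ok, key=len) if ok else None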
Whenever these policies are invoked, they will necessarily force certain traffic to pass through more hops (or, in the case of BGP, more autonomous systems) or otherwise deviate from whatever the routing protocol is trying to optimize. From one perspective, this constitutes degradation on the basis of source or destination. And this isn’t even getting into the Type of Service field already embedded in the IP layer, or efforts like IntServ, DiffServ, and MPLS that propose alternative means of implementing quality of service.
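As a reminder that the Type of Service byte is already there to be used, the short Python sketch below marks a UDP socket’s traffic with a DiffServ code point via the standard IP_TOS socket option (available on Linux; the Expedited Forwarding value and the destination address are illustrative only). Whether any router along the path honors the marking is a separate question.

import socket

EF_DSCP = 46               # DiffServ "Expedited Forwarding" code point
TOS_BYTE = EF_DSCP << 2    # DSCP occupies the upper six bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)
sock.sendto(b"latency-sensitive payload", ("192.0.2.1", 5004))  # documentation address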
Bear in mind that many routing policies attempt to improve network performance by prioritizing on the basis of application. Some users who cannot get this functionality from the network are purchasing overlay networks that perform the same functions, in ways that deviate even further from the seamless web of the Internet that middleware is already making less seamless all the time.
The Chairman’s speech did embrace the case-by-case approach that Jim, Phil Weiser, I, and others have been advocating (although we differ in some important ways on the details). To work, the FCC must give industry actors enough advance guidance that innovation and investment are not chilled while we wait for enough cases to accumulate to make clear what is permissible and what is impermissible. Otherwise the case-by-case approach is destined to become another iteration of what Jeremy Bentham called “dog law” (that is, you house-train your dog by waiting until it pees on the carpet and then walloping it while it stares at you in confusion, until the doggy “accidents” happen enough times for it to figure out what is going on).
Tuesday, September 22, 2009
Protocol Complexity and Nondiscrimination Standards
Wireless vs. Wireline: Technical Differences
Beyond market structure, I am currently working on a book that discusses many of the technical differences between the wireless and wireline worlds. One nice illustration is AIMD (additive increase/multiplicative decrease), which is the Internet’s primary mechanism for managing congestion and can be understood most easily in terms of how a host should respond to packet loss. Packet loss may occur for two reasons: packet degradation (through link failure or corruption) or congestion. If the problem is degradation, the optimal response is for the host to resend the packet immediately; slowing down would simply reduce network performance without providing any corresponding benefit. If the problem is congestion, failure to slow down is a recipe for a repeat of the congestion collapses that plagued the network in the late 1980s. Instead, the host should sharply (multiplicatively) cut the rate at which it is introducing packets into the network.
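For readers who want to see the mechanism spelled out, here is a bare-bones sketch of the AIMD idea in Python. It is a caricature of TCP’s congestion-window logic for illustration, not a faithful rendering of any RFC.

class AIMDSender:
    # Toy additive-increase/multiplicative-decrease congestion window.
    def __init__(self):
        self.cwnd = 1.0                        # congestion window, in segments

    def on_ack(self):
        self.cwnd += 1.0 / self.cwnd           # additive increase: ~1 segment per round trip

    def on_loss(self):
        self.cwnd = max(1.0, self.cwnd / 2)    # multiplicative decrease: halve on loss

sender = AIMDSender()
for _ in range(100):
    sender.on_ack()    # the window creeps up roughly linearly...
sender.on_loss()       # ...and is cut in half the moment a loss is detected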
Van Jacobson realized that because wireline networks are so reliable, packet loss could be treated as a fairly dependable sign of congestion (rather than degradation) and therefore as a signal that the host should slow down. Every TCP implementation is now required to incorporate this inference.
The problem is that this inference is not valid for wireless. Wireless networks drop and degrade packets for reasons other than congestion far more frequently than wireline networks do. Because wireless bandwidth is so limited, slowing down needlessly can be disastrous. As a result, engineers are now working on alternative forms of explicit congestion notification customized for wireless networks, instead of the implicit approach taken by Jacobson. Some of these deviate from the semantics of TCP/IP. All of them deviate from the end-to-end argument.
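A toy Python sketch of the difference in loss response, assuming the network can explicitly mark congestion (in the spirit of ECN). It is a conceptual illustration of the design choice, not the behavior of any particular protocol:

def respond_to_missing_segment(cwnd, congestion_signaled):
    # Returns the new congestion window; the segment is retransmitted either way.
    if congestion_signaled:
        return max(1.0, cwnd / 2)   # congestion: back off multiplicatively
    return cwnd                     # radio corruption: resend without slowing down

# Classic loss-based TCP treats every missing segment as congestion and halves the
# window in both cases; on a lossy wireless link that needlessly throttles the sender.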
There are other technical differences as well that I may explore in later posts, but the one I discuss here illustrates my broader point: many changes to the network architecture may simply represent the network’s natural response to the growing heterogeneity of transmission media. It also suggests that simply mandating adherence to the status quo, or applying one-size-fits-all solutions across all technologies, would impose significant costs in terms of both network performance and expense.
The Structure of the Argument (and Qs about Wireless)
Monday, September 21, 2009
Surprisingly Big News
It's often said that law is a prediction of what judges will do. Similarly, regulation is a prediction of what the FCC or Congress will do.
My sense has been, for a while, that the FCC or Congress would punish any serious NN violation. The rulemaking announced today reinforces that sense.
Of course, and perhaps we'll discuss this, the big deal is wireless or mobile NN, which I'd love to start discussing in earnest.