Saturday, September 26, 2009

NN Papers at TPRC

The TPRC program includes a huge number of papers that bear on the net neutrality debate. That is because, to some extent, net neutrality stands at the intersection of competition, technology, economics, the First Amendment, and other issues. If you aren't familiar with TPRC, which is one of the best telecom conferences in the country every year, take a look at the program. (Disclosure: I'm on the program committee, and will chair it for next year's conference.)

Nevertheless, some papers are more expressly directed at the NN debate. Here’s a list (and a one- or two-sentence summary by me).

The Evolution of Internet Congestion
- Steve Bauer, David Clark, William Lehr: Massachusetts Institute of Technology
(Examines how TCP manages bandwidth demands and what alternatives are being developed; calls for policies that continue to allow experimentation with alternative protocols.)

Congestion Pricing and Quality of Service Differentiation in Internet Traffic
- Guenter Knieps, Albert Ludwigs Universitat Freiburg
(Develops a pricing model for QoS service tiers that accounts for the congestion externalities that higher-quality tiers impose on lower tiers.)

Peer to Peer Edge Caches Should be Free
- Nicholas Weaver, ICSI
(Proposes deployment of freely-available P2P caches by local ISPs, which will decrease costs and congestion by keeping P2P traffic local. Develops an authentication mechanism to address ISP concerns about hosting ‘bad’ content.)

Invoking and Avoiding the First Amendment: How Internet Service Providers Leverage Their Status as Both Content Creators and Neutral Conduits
- Rob Frieden, Penn State
(ISPs seem to have qualities of both neutral conduits and speakers. ISPs' exercise of traffic management as conduits may cause them to lose the safe harbors that conduits generally enjoy. ISPs may respond by separating their operations.)

Free Speech and the Myth of the Internet as an Unintermediated Experience
- Christopher Yoo, University of Pennsylvania
(Free speech has historically been furthered by granting editorial discretion. The exercise of similar discretion by intermediaries is inevitable, and helps free speech – so NN regulation doesn’t help speech.)

How to Determine Whether a Traffic Management Practice is Reasonable
- Scott Jordan, Arijit Ghosh: University of California, Irvine
(Provides an analytic structure for determining whether a traffic management practice is reasonable. The framework could allow ex ante guidelines/decisions to be made, instead of relegating decisions to case-by-case ex post analysis.)

Friday, September 25, 2009

TPRC

We are off to the Telecommunications Policy Research Conference this weekend, where there are a number of net neutrality papers on the program -- and sure to be discussion of the FCC's coming rulemaking. I'll keep you posted from there.

Wednesday, September 23, 2009

Examples of Discriminatory Protocols

Consider a few examples of ways that a protocol might discriminate. Many forwarding protocols tend to favor applications that generate large packets over applications that generate small ones. Other forwarding protocols favor applications that generate traffic in steady streams over applications that generate traffic in bursts, even if the total amount of traffic is the same. Almost every protocol provides different service based on round-trip times (and hence distance). There are protocols that mitigate or eliminate some of these effects. It will be interesting to see whether the FCC can craft principles nuanced enough to strike the right balance.
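To put rough numbers on the round-trip-time point, here is a small back-of-the-envelope sketch using the Mathis et al. approximation for steady-state TCP throughput (roughly proportional to packet size, and inversely proportional to round-trip time and the square root of the loss rate). The packet size, loss rate, and round-trip times below are illustrative assumptions, not measurements of any real network.

```python
# Back-of-the-envelope sketch: the Mathis et al. approximation says
# steady-state TCP throughput is about (MSS / RTT) * (C / sqrt(p)),
# with C roughly 1.22. Two flows with identical loss rates therefore
# get very different service purely because of distance.
from math import sqrt

def tcp_throughput_bps(mss_bytes, rtt_seconds, loss_rate, c=1.22):
    """Approximate steady-state TCP throughput in bits per second."""
    return (mss_bytes * 8 / rtt_seconds) * (c / sqrt(loss_rate))

mss = 1460      # typical Ethernet payload, in bytes (illustrative)
loss = 0.001    # the same 0.1% loss rate for both flows (illustrative)

for label, rtt in [("nearby server, 20 ms RTT", 0.020),
                   ("distant server, 200 ms RTT", 0.200)]:
    print(f"{label}: ~{tcp_throughput_bps(mss, rtt, loss) / 1e6:.1f} Mbps")
```

With identical loss rates, the flow facing ten times the round-trip time gets roughly one-tenth the throughput, which is precisely the kind of distance-based service difference described above.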

One might regard these forms of discrimination as weak or indirect. Even more interesting are routing policies that explicitly discriminate on the basis of source. To quote the four examples used in a leading textbook on computer networking:

· Never put Iraq on a route starting at the Pentagon.
· Do not transit the United States to get from British Columbia to Ontario.
· Only transit Albania if there is no alternative to the destination.
· Traffic starting or ending at IBM should not transit Microsoft.

Whenever these policies are invoked, they will necessarily force certain traffic to pass through more hops (or, in the case of BGP, more autonomous systems) or otherwise deviate from whatever the routing protocol is trying to optimize. From one perspective, this would constitute degradation on the basis of source or destination. And this isn't even getting into the Type of Service flag already embedded in the IP layer or efforts like IntServ, DiffServ, and MPLS that propose alternative means for implementing quality of service.
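To make the textbook policies above concrete, here is a minimal, purely illustrative sketch (in Python) of a path filter that implements rules like "never transit this network" and "only transit that network as a last resort." The AS numbers and the policy are invented for the example; real policies are written in router configuration languages, and real BGP route selection weighs many attributes beyond path length.

```python
# Purely illustrative sketch of source/path-based routing policy of the
# kind quoted above. The AS numbers are from the private-use range and
# the policy itself is made up for this example.

AVOID_TRANSIT = {64512}   # hypothetical "never transit this AS"
LAST_RESORT = {64513}     # hypothetical "only transit if no alternative"

def select_route(candidate_paths):
    """Pick an AS path from the candidates, applying the policy above."""
    # Drop any path that transits a forbidden AS (endpoints excluded).
    allowed = [p for p in candidate_paths if not AVOID_TRANSIT & set(p[1:-1])]
    # Prefer paths that avoid the last-resort AS, if any exist.
    preferred = [p for p in allowed if not LAST_RESORT & set(p[1:-1])]
    pool = preferred or allowed or candidate_paths
    return min(pool, key=len)   # among what's left, prefer the shortest path

paths = [
    [64496, 64512, 64499],                 # shortest, but transits the forbidden AS
    [64496, 64513, 64500, 64499],          # transits the last-resort AS
    [64496, 64501, 64502, 64503, 64499],   # longest, but policy-clean
]
print(select_route(paths))   # the five-AS path wins despite being longest
```

The policy-compliant route is the longest one, which is exactly the sort of deviation from whatever the protocol would otherwise optimize that the paragraph above describes.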

Bear in mind that many routing policies attempt to improve network performance by prioritizing on the basis of application. Some users who cannot get this functionality out of the network are purchasing overlay networks that perform the same functions, in ways that represent even larger deviations from the supposedly seamless web of the Internet, which middleware is making ever less seamless all the time.

The Chairman's speech did embrace the case-by-case approach that Jim, Phil Weiser, I, and others have been advocating (although we differ in some important ways on the details). To work, the approach requires that the FCC give industry actors enough advance guidance that innovation and investment are not chilled while we wait for enough cases to accumulate to clarify what is permissible and what is impermissible. Otherwise the case-by-case approach is destined to become another iteration of what Jeremy Bentham called "dog law" (that is, you housetrain your dog by waiting until it pees on the carpet and then walloping it while it stares at you in confusion, until the doggy "accidents" happen enough times for it to figure out what is going on).

Tuesday, September 22, 2009

Protocol Complexity and Nondiscrimination Standards

I take Christopher's point about developing protocol complexity. My question is how it relates to the administrability of nondiscrimination rules.

On the one hand, a source-based nondiscrimination rule should be able to tolerate a fair bit of protocol diversity, and wouldn't be triggered by deviation from the end-to-end rule. A carrier could use different protocols for different applications; it just couldn't use different protocols (or different implementations) based on who was running a particular application. Of course, that's not what the FCC has proposed. Nevertheless, even the FCC's applications-based nondiscrimination rule could tolerate protocol diversity so long as the process is sensitive to the technical advantages of different protocols. Analytically, a rule of reason like the one sketched in the speech (and in my earlier post) has no trouble with it.

On the other hand, there are the inevitable institutional problems. What sort of process will the FCC employ to decide particular cases? The process in the Comcast matter was not, shall we say, a model of deep technical inquiry. Will the FCC be able to establish a process insulated from companies lobbying for the advantage that could come from getting some protocols ruled in and others ruled out? Some of the suggestions for FCC reform do seem to be taking root at the agency, but a commitment to adjudication before ALJs with technical experts would require a big change from anything the agency has done to date. I share Phil Weiser's view that self-regulatory strategies can operate effectively when backstopped (lightly) by an agency.

In all events, I did not take from the speech an exclusion of protocol diversity but rather an argument that the FCC wants to supervise it to determine pro- and anticompetitive implementations. Over the years, my problem with this has been that the need for such supervision did not seem great enough (because generally carriers have incentives towards openness) to justify the inevitable regulatory costs. But, if that balance were different, a set of sensitive institutions should be able to apply the standards to cases.

Wireless vs. Wireline: Technical Differences

Wireless raises a whole host of interesting issues. As Jim points out, the market structure of the wireless industry is very different.

Beyond market structure, I am currently working on a book that discusses many of the technical differences between the wireless and wireline worlds. One nice illustration is AIMD, the Internet's primary mechanism for managing congestion, which can be understood most easily in terms of how a host should respond to packet loss. Packet loss may occur for two reasons: packet degradation (through link failure or corruption) or congestion. If the problem is degradation, the optimal response is for the host to resend the packet immediately; slowing down would simply reduce network performance without providing any corresponding benefit. If the problem is congestion, failure to slow down is a recipe for a repeat of the congestion collapses that plagued the network in the late 1980s. Instead, the host should multiplicatively cut the rate at which it is introducing packets into the network.
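For readers who want to see the mechanics, here is a toy sketch of the additive-increase/multiplicative-decrease idea. It is deliberately simplified: the loss pattern is hard-coded, the units are abstract "segments per round trip," and real TCP implementations add slow start, fast retransmit, and other refinements.

```python
# Toy sketch of AIMD: grow the congestion window by one segment per
# loss-free round trip (additive increase), halve it on each loss
# event (multiplicative decrease). The loss pattern is hard-coded
# purely for illustration.

def aimd(rounds_with_loss, total_rounds, initial_cwnd=10.0):
    cwnd = initial_cwnd
    history = []
    for rtt in range(total_rounds):
        if rtt in rounds_with_loss:
            cwnd = max(1.0, cwnd / 2)   # back off sharply on loss
        else:
            cwnd += 1.0                 # probe gently for more bandwidth
        history.append(cwnd)
    return history

# Two loss events produce the familiar sawtooth pattern.
print(aimd(rounds_with_loss={5, 12}, total_rounds=16))
```

The result is the familiar sawtooth: gentle probing upward, and a sharp retreat whenever the network signals trouble.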

Van Jacobson realized that because wireline networks are so reliable, packet loss could be treated as a fairly dependable sign of congestion (rather than degradation) and therefore as a signal that the host should slow down. This inference is now a required part of every TCP implementation.

The problem is that this inference is not valid for wireless. Wireless networks drop and degrade packets for reasons other than congestion much more frequently than wireline networks. Because wireless bandwidth is so limited, slowing down needlessly can be disastrous. As a result, engineers are now working on alternative forms of explicit congestion notification customized for wireless networks instead of the implicit approach taken by Jacobson. Some of these deviate from the semantics of TCP/IP. All of them deviate from the end-to-end argument.
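As a stylized illustration of that distinction (not a description of any particular protocol or proposal), a sender that could tell explicit congestion signals apart from corruption losses might react along these lines:

```python
# Stylized sketch only: a hypothetical sender that backs off when the
# network explicitly signals congestion, but retransmits at full speed
# when it can attribute a loss to wireless-link corruption. Classic TCP
# cannot make this distinction and must treat every loss as congestion.

def react(event, cwnd):
    """Return the new congestion window after a network event."""
    if event == "ecn_mark":            # explicit congestion signal
        return max(1.0, cwnd / 2)      # back off, as with a loss in wireline TCP
    if event == "corruption_loss":     # loss attributed to the radio link
        return cwnd                    # retransmit, but do not slow down
    if event == "unattributed_loss":   # what classic TCP sees: assume congestion
        return max(1.0, cwnd / 2)
    return cwnd + 1.0                  # an uneventful round trip: additive increase

cwnd = 20.0
for event in ["ack", "corruption_loss", "ack", "ecn_mark", "ack"]:
    cwnd = react(event, cwnd)
    print(event, cwnd)
```

The hard part, of course, is that a classic TCP sender never sees the "corruption_loss" case as such; telling it which losses are corruption requires help from inside the network, which is why these approaches deviate from the end-to-end argument.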

There are other technical differences as well that I may explore in later posts, but the one I discuss here illustrates my broader point that many changes to the network architecture may simply represent the network's natural response to the growing heterogeneity of transmission media. It also suggests that simply mandating adherence to the status quo, or applying one-size-fits-all solutions across all technologies, would carry significant penalties in network performance and cost.

The Structure of the Argument (and Qs about Wireless)

The argument made in the speech interestingly tracks an antitrust-like foreclosure story. I say "interestingly" because not all arguments for network neutrality base the case on foreclosure that injures consumers. (Of course, that's the best argument, and one we know how to deal with, as I argued here.)

Thus, the speech says (1) that the broadband market is concentrated (p. 3), (2) that broadband providers have economic incentives to engage in foreclosure (p. 3), (3) that discrimination is anticompetitive and hurts consumers (throughout), and (4) that procompetitive/pro-consumer justifications for discrimination (while they exist) do not outweigh the costs. I find the speech too grudging on (4), but recognize that this argument is built as an antitrust argument.

But, to begin to engage with Tim on wireless, the argument in that market seems to me to be more difficult, for two reasons. First, I think it is more difficult to make argument (1) -- that the market is concentrated in a way that suggests foreclosure strategies are rational. (The speech is clearly referring to wireline.) We have four nationwide carriers seemingly engaged in serious competition, and more spectrum coming to market. Second, I think the balance on (4) might be harder to strike -- because of bandwidth and other technical limitations, the benefits of more active management might be greater.

There is also an important policy argument above this framework: to the extent the wireline market is concentrated, the FCC might not have any policy levers (other than competition regulation) to deal with that. It cannot change the economics of density that are so important, or the cost of digging trenches (or at least not much). But, in wireless, the FCC (or, more accurately, the government as a whole) does have an additional policy lever -- getting more spectrum into the market so that any concentrated market structure evaporates (or at least relevantly evaporates). Faulhaber and Farber have offered the view that this could change market structure, and it seems preferable to regulation. UPDATE: Of course, if wireline and wireless broadband are in the same market (are effective substitutes), then releasing more spectrum is a policy lever that could be used if wireline is concentrated.

For those looking for references, here are a couple: Tim's great "Wireless Carterfone" paper and a very good response by Marius Schwartz and Federico Mini.

Monday, September 21, 2009

Surprisingly Big News

My reaction is less to JG's speech and more to the reaction to JG's speech.

It's often said that law is a prediction of what judges will do. Similarly, regulation is a prediction of what the FCC or Congress will do.

My sense has been, for a while, that the FCC or Congress would punish any serious NN violation. The rulemaking announced today reinforces that sense.

Of course, the big deal is wireless or mobile NN, which I'd love to start discussing in earnest; perhaps we'll take that up here.