Saturday, September 26, 2009

NN Papers at TPRC

The TPRC has a huge number of papers that bear on the net neutrality debate. That is because, to some extent, net neutrality stands at the intersection of competition, technology, economics, the First Amendment, and other issues. If you aren’t familiar with TPRC, take a look at the program; it is one of the best telecom conferences in the country every year. (Disclosure: I’m on the program committee, and will chair it for next year’s conference.)

Nevertheless, some papers are more expressly directed at the NN debate. Here’s a list (and a one- or two-sentence summary by me).

The Evolution of Internet Congestion
- Steve Bauer, David Clark, William Lehr: Massachusetts Institute of Technology
(Explains how TCP manages bandwidth demands and surveys the alternatives being developed. Calls for policies that continue to allow experimentation with alternative protocols.)

Congestion Pricing and Quality of Service Differentiation in Internet Traffic
- Guenter Knieps, Albert Ludwigs Universitat Freiburg
(Develops a pricing model for QoS tiers that accounts for the congestion externalities that higher-tier traffic imposes on lower tiers.)

Peer to Peer Edge Caches Should be Free
- Nicholas Weaver, ICSI
(Proposes deployment of freely-available P2P caches by local ISPs, which will decrease costs and congestion by keeping P2P traffic local. Develops an authentication mechanism to address ISP concerns about hosting ‘bad’ content.)

Invoking and Avoiding the First Amendment: How Internet Service Providers Leverage Their Status as Both Content Creators and Neutral Conduits
- Rob Frieden, Penn State
(ISPs seem to have qualities of both neutral conduits and speakers. An ISP’s exercise of traffic management while acting as a conduit may cause it to lose the safe harbors that conduits generally enjoy. ISPs may respond by separating their operations.)

Free Speech and the Myth of the Internet as an Unintermediated Experience
- Christopher Yoo, University of Pennsylvania
(Free speech has historically been furthered by granting editorial discretion. The exercise of similar discretion by intermediaries is inevitable, and helps free speech – so NN regulation doesn’t help speech.)

How to Determine Whether a Traffic Management Practice is Reasonable
- Scott Jordan, Arijit Ghosh: University of California, Irvine
(Provides an analytic structure for determining whether a traffic management practice is reasonable. The framework could allow ex ante guidelines/decisions to be made, instead of relegating decisions to case-by-case ex post analysis.)

Friday, September 25, 2009

TPRC

We are off to the Telecommunications Policy Research Conference this weekend, where there are a number of net neutrality papers on the program -- and sure to be discussion of the FCC's coming rulemaking. I'll keep you posted from there.

Wednesday, September 23, 2009

Examples of Discriminatory Protocols

Consider a few examples of ways that a protocol might discriminate. Many forwarding protocols tend to favor applications that generate large packets over applications that generate small ones. Other forwarding protocols favor applications that generate traffic in steady streams over applications that generate traffic in bursts even if the total amount of traffic is the same. Almost every protocol provides different service based on roundtrip times (and hence distance). There are protocols that mitigate or eliminate some of these effects. It will be interesting to see whether the FCC can craft principles nuanced enough to strike the right balance.
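To put a rough number on the round-trip-time point, here is a minimal sketch using the well-known Mathis et al. approximation of steady-state TCP throughput. The packet size, loss rate, and round-trip times are hypothetical choices of mine, and real TCP stacks deviate from this model in many ways.

```python
import math

def tcp_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Rough Mathis et al. steady-state model: throughput ~ MSS / (RTT * sqrt(p)).

    Ignores a small constant factor and everything modern stacks add, but it
    captures the qualitative point: at the same loss rate, a flow with ten
    times the round-trip time gets roughly a tenth of the bandwidth.
    """
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))

# Hypothetical flows sharing a link with 0.1% packet loss:
near = tcp_throughput_bps(mss_bytes=1460, rtt_s=0.020, loss_rate=0.001)  # 20 ms RTT
far = tcp_throughput_bps(mss_bytes=1460, rtt_s=0.200, loss_rate=0.001)   # 200 ms RTT
print(f"near flow: {near / 1e6:.1f} Mbps; far flow: {far / 1e6:.1f} Mbps")
```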

One might regard these forms of discrimination as weak or indirect. Even more interesting are routing policies that explicitly discriminate on the basis of source. To quote the four examples used in a leading textbook on computer networking:

· Never put Iraq on a route starting at the Pentagon.
· Do not transit the United States to get from British Columbia to Ontario.
· Only transit Albania if there is no alternative to the destination.
· Traffic starting or ending at IBM should not transit Microsoft.

Whenever these policies are invoked, they will necessarily force certain traffic to pass through more hops (or, in the case of BGP, more autonomous systems) or otherwise deviate from whatever the router’s protocol is trying to optimize. From one perspective, this would constitute degradation on the basis of source or destination. And this isn’t even getting into the Type of Service flag already embedded in the IP layer or efforts like IntServ, DiffServ, and MPLS that propose alternative means of implementing quality of service.
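To see how source- or transit-based policies translate into the extra hops just described, here is a toy sketch of policy-constrained route selection. The AS numbers and the single "forbidden transit" rule are hypothetical, and real BGP policy lives in router configuration rather than application code, but the effect on path length is the same.

```python
def pick_route(as_paths: list[list[int]], forbidden_transit: set[int]) -> list[int]:
    """Toy BGP-style policy: discard any candidate AS path that transits a
    forbidden autonomous system, then prefer the shortest survivor.

    "Transit" means appearing in the middle of the path; the origin and the
    destination AS themselves are not filtered.
    """
    allowed = [p for p in as_paths if not (set(p[1:-1]) & forbidden_transit)]
    if not allowed:
        raise ValueError("no policy-compliant route")
    return min(allowed, key=len)

# Hypothetical AS numbers: 64500 = our network, 64666 = an AS we refuse to transit.
candidates = [
    [64500, 64666, 64900],          # shorter, but transits the forbidden AS
    [64500, 64700, 64800, 64900],   # longer, policy-compliant
]
print(pick_route(candidates, forbidden_transit={64666}))  # -> the longer path
```

The policy-compliant choice is a hop longer than the best unconstrained path, which is precisely the source-based degradation described above.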

Bear in mind that many routing policies attempt to improve network performance by prioritizing on the basis of application. Some users who cannot get this functionality from the network are purchasing overlay networks that perform the same functions, in ways that deviate even further from the seamless web of the Internet that middleware is already making less seamless all the time.

The Chairman’s speech did embrace the case-by-case approach that Jim, Phil Weiser, I, and others have been advocating (although we differ in some important ways on the details). To work, the FCC must give industry actors enough advance guidance that innovation and investment are not chilled while we wait for enough cases to accumulate to show what is permissible and impermissible. Otherwise the case-by-case approach is destined to become another iteration of what Jeremy Bentham called “dog law” (that is, you house-train your dog by waiting until it pees on the carpet and then walloping it while it stares at you in confusion, until the doggy “accidents” happen enough times for it to figure out what is going on).

Tuesday, September 22, 2009

Protocol Complexity and Nondiscrimination Standards

I take Christopher's point about developing protocol complexity. My question is how it relates to the administrability of nondiscrimination rules.

On the one hand, a source-based nondiscrimination rule should be able to tolerate a fair bit of protocol diversity, and wouldn't be triggered by deviation from the end-to-end rule. A carrier could use different protocols for different applications; it just couldn't use different protocols (or different implementations) based on who was running a particular application. Of course, that's not what the FCC has proposed. Nevertheless, even the FCC's applications-based nondiscrimination rule could tolerate protocol diversity, so long as the process is sensitive to the technical advantages of different protocols. Analytically, a rule of reason like that sketched in the speech (and my earlier post) has no trouble with it.

On the other hand, there are the inevitable institutional problems. What sort of process will the FCC employ to decide particular cases? The process in the Comcast matter was not, shall we say, a model of deep technical inquiry. Will the FCC be able to establish a process not influenced by company lobbying for the advantage that could come from getting some protocols ruled in and others ruled out? Some of the suggestions for FCC reform do seem to be taking root at the agency, but a commitment to adjudication under ALJs with technical experts would require a big change from anything the agency has done to date. I share Phil Weiser's view that self-regulatory strategies can operate effectively when backstopped (lightly) by an agency.

In all events, I did not take from the speech an exclusion of protocol diversity, but rather an argument that the FCC wants to supervise it so as to sort pro-competitive from anticompetitive implementations. Over the years, my problem with this has been that the need for such supervision did not seem great enough (because carriers generally have incentives toward openness) to justify the inevitable regulatory costs. But if that balance were different, a set of sensitive institutions should be able to apply the standards to cases.

Wireless vs. Wireline: Technical Differences

Wireless raises a whole host of interesting issues. As Jim points out, the market structure of the wireless industry is very different.

Beyond market structure, I am currently working on a book that discusses many of the technical differences between the wireless and wireline worlds. One nice illustration is AIMD, which is the Internet’s primary mechanism for managing congestion and can be understood most easily in terms of how a host should respond to packet loss. Packet loss may occur for two reasons: packet degradation (through link failure or corruption) or congestion. If the problem is degradation, the optimal response is for the host to resend the packet immediately. Slowing down would simply reduce network performance without providing any corresponding benefits. If the problem is congestion, however, failure to slow down is a recipe for a repeat of the congestion collapses that plagued the network in the late 1980s. Instead, the host should multiplicatively cut the rate at which it is introducing packets into the network.
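A minimal sketch of the additive-increase/multiplicative-decrease behavior just described may help; the window sizes here are illustrative, and the one-segment increase and halving factor are the conventional textbook constants, not drawn from any particular TCP implementation.

```python
def aimd_step(cwnd: float, loss_detected: bool) -> float:
    """One round-trip of additive-increase / multiplicative-decrease.

    No loss: probe for spare capacity, adding one segment per round trip.
    Loss (read as congestion): cut the window in half; repeated cuts are
    what back the sender off exponentially under sustained congestion.
    """
    if loss_detected:
        return max(1.0, cwnd / 2)
    return cwnd + 1.0

cwnd = 10.0
for rtt, loss in enumerate([False, False, False, True, False], start=1):
    cwnd = aimd_step(cwnd, loss)
    print(f"RTT {rtt}: cwnd = {cwnd:.1f} segments")
# -> 11.0, 12.0, 13.0, then halved to 6.5, then 7.5
```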

Van Jacobson realized that because wireline networks are so reliable, packet loss could be taken as a fairly reliable sign of congestion (rather than degradation) and should be taken as a signal that the host should slow down. This inference is now required to be incorporated into every TCP implementation.

The problem is that this inference is not valid for wireless. Wireless networks drop and degrade packets for reasons other than congestion much more frequently than wireline networks do. And because wireless bandwidth is so limited, slowing down needlessly can be disastrous. As a result, engineers are now working on alternative forms of explicit congestion notification customized for wireless networks, in place of the implicit approach taken by Jacobson. Some of these deviate from the semantics of TCP/IP. All of them deviate from the end-to-end argument.
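As a rough illustration of the difference, here is a hypothetical sketch contrasting the implicit inference with an explicit congestion signal; the function, its flags, and the "lossy link" heuristic are simplifying assumptions of mine, not the state machine of any actual proposal.

```python
def sender_reaction(packet_lost: bool, ecn_marked: bool, lossy_link: bool) -> str:
    """Contrast the implicit loss-as-congestion inference with an explicit signal.

    Implicit (classic TCP): any loss is read as congestion, so the sender
    slows down even when a wireless link merely corrupted the packet.
    Explicit (ECN-style): routers mark packets when congested, so a loss
    with no mark on a known-lossy link can be treated as corruption and
    the packet resent at full speed.
    """
    if ecn_marked:
        return "explicit congestion signal: reduce sending rate"
    if packet_lost and lossy_link:
        return "loss but no congestion mark on lossy link: resend immediately"
    if packet_lost:
        return "loss on reliable link: infer congestion, reduce rate"
    return "all clear: keep probing for bandwidth"

print(sender_reaction(packet_lost=True, ecn_marked=False, lossy_link=False))
print(sender_reaction(packet_lost=True, ecn_marked=False, lossy_link=True))
```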

There are other technical differences as well that I may explore in later posts, but the one I discuss here illustrates my broader point that many changes to the network architecture may simply represent the network’s natural response to the growing heterogeneity of transmission media. It also suggests that simply mandating adherence to the status quo, or applying one-size-fits-all solutions across all technologies, would carry significant penalties in both network performance and cost.

The Structure of the Argument (and Qs about Wireless)

The argument made in the speech interestingly tracks an antitrust-like, foreclosure story. I say "interestingly" because not all arguments for network neutrality base the case on foreclosure that injures consumers. (Of course, that's the best argument, and one we know how to deal with, as I argued here.)

Thus, the speech says (1) that the broadband market is concentrated (p. 3), (2) that broadband providers have economic incentives to engage in foreclosure (p. 3), (3) that discrimination is anticompetitive and hurts consumers (throughout), and (4) that procompetitive/pro-consumer justifications for discrimination (while they exist) do not outweigh the costs. I find the speech too grudging on (4), but recognize that this argument is built as an antitrust argument.

But, to begin to engage with Tim on wireless, the argument in that market seems to me to be more difficult, for two reasons. First, I think it is more difficult to make argument (1) -- that the market is concentrated in a way that suggests foreclosure strategies are rational. (The speech is clearly referring to wireline.) We have four nationwide carriers seemingly engaged in serious competition, and more spectrum coming to market. Second, I think the balance on (4) might be harder to make -- that because of bandwidth and other technical limitations, the benefits of more active management might be greater.

There is also an important policy argument above this framework: to the extent the wireline market is concentrated, the FCC might not have any other policy levers (other than competition regulation) to deal with that. It cannot change the economics of density that are so important, or the cost of digging trenches (or at least not much). But, in wireless, the FCC (or, more accurately the government as a whole) does have an additional policy lever -- and that is getting more spectrum into the market so that any concentrated market structure evaporates (or at least relevantly evaporates). Faulhaber and Farber have offered the view that this could change market structure, and it seems preferable to regulation. UPDATE: Of course, if wireline and wireless broadband are in the same market (are effective substitutes), then releasing more spectrum is a policy lever that could be used if wireline is concentrated.

For those looking for references, here are a couple: Tim's great "Wireless Carterfone" paper and a very good response by Marius Schwartz and Federico Mini.

Monday, September 21, 2009

Surprisingly Big News

My reaction is less to JG's speech and more to the reaction to JG's speech.

It's often said that law is a prediction of what judges will do. Similarly, regulation is a prediction of what the FCC or Congress would do.

My sense has been, for a while, that the FCC or Congress would punish any serious NN violation. The rulemaking announced today fortifies that sense.

Of course, and perhaps we'll discuss this, the big deal is wireless or mobile NN, which I'd love to start discussing in earnest.

Nondiscrimination: Not as Easy as It Seems at First Blush

The highlight of Chairman Genachowski’s speech launching the FCC’s new network neutrality initiative is the proposal to add two new principles to the four initial principles announced in the FCC’s 2005 Internet Policy Statement.

The latter of the two new principles is the less controversial. It states that providers of broadband Internet access must be transparent about their network management practices. Clear disclosure is a principle with which essentially everyone agrees. Even the most ardent supporter of deregulation recognizes that markets cannot work unless consumers have clear information about the services they are buying. To say that it is less controversial is not to say that it will be easy. Network providers have long cautioned that too much disclosure about their precise network management practices may simply provide hackers with a roadmap for short-circuiting safeguards designed to prevent a small group of users from overloading the system. I suspect that the details of exactly how much disclosure is required will prove more difficult to work out than many expect.

The other new principle should make Tim happy, as it is one for which he has been advocating for a long time. It states that broadband providers cannot discriminate against particular content or applications.

Although principles such as nondiscrimination appear simple and possess a broad superficial appeal, a closer analysis that bears in mind the history of previous attempts to enforce nondiscrimination suggests that it will prove extremely difficult to define and enforce.

As an initial matter, scholars and policymakers universally accept that charging different prices is not discriminatory when those prices reflect real differences in cost. A classic example is transportation costs. If customer A is located farther from the manufacturer than customer B, we would expect the total delivered prices for these customers to vary, with customer A also having to pay the difference in shipping costs. Similarly, we expect the price paid by a customer who buys in truckload quantities to reflect lower per-unit transportation costs than the price paid by a customer who buys smaller amounts.

Congestion represents a more subtle source of cost differences than transportation. Consider the congestion costs that arise in a restaurant. If an additional customer arrives at a restaurant that is empty or nearly empty, the impact is negligible. The restaurant has plenty of extra tables and staff to accommodate the additional load. The situation is quite different if the customer arrives at a time when the restaurant is already quite crowded. The customer may well have to wait for a table to open up. In addition, the customers who are already in the restaurant may have to wait longer for their server to check on their table, put up with greater proximity and noise from other diners, wait in a line for the restroom, and endure other costs. In short, the arrival of the customer imposes congestion costs on everyone in the restaurant.

Restaurants have an easy solution to this problem: they offer early bird discounts, thereby inducing customers to dine at times when they would create fewer congestion costs. Although it may not be as intuitive as transportation costs, this is a real cost-based differential that nondiscrimination regimes should find unproblematic.
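A stylized numeric version of the restaurant example, with hypothetical numbers of my own choosing, shows why the early-bird differential is cost-based rather than discriminatory.

```python
def delivered_price(base_price: float, congestion_cost_per_diner: float,
                    diners_present: int) -> float:
    """Stylized congestion pricing: the marginal diner's price reflects the
    waiting and crowding costs imposed on everyone already seated.

    Hypothetical numbers; the point is only that an off-peak ("early bird")
    price can be lower because the marginal cost is genuinely lower.
    """
    return base_price + congestion_cost_per_diner * diners_present

print(delivered_price(20.0, 0.25, diners_present=4))   # off-peak: $21.00
print(delivered_price(20.0, 0.25, diners_present=80))  # peak: $40.00
```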

In addition, scholars and policymakers also recognize that charging different prices is not discriminatory when the products that the customers receive are of different quality. This too is quite intuitive. Anyone who receives a superior quality meal expects to pay more for it.

Both of these considerations already render the nondiscrimination analysis quite murky. As anyone who has studied the history of telephone regulation knows, determining how much a service costs has long proven incredibly problematic, requiring difficult assessments of changes in production technologies and business conditions (including, classically, the choice between historical and replacement cost) and the analytically thorny problem of allocating the cost of assets that are shared by more than one service. Moreover, determining which prices represent valid cost differentials is particularly difficult when production technologies vary from firm to firm and when industries are undergoing rapid technological change. In addition, as the Supreme Court recognized in Trinko, the ways in which the quality of access to Internet service can vary are myriad, which will further complicate any attempt to enforce a nondiscrimination mandate.

The restaurant example also suggests another type of discrimination that I suspect the FCC will have trouble addressing. Many restaurants offer senior citizen discounts. These discounts are not cost based. Senior citizens don’t cost any less to serve than other diners eating at the same time. Nor are these discounts quality based. The senior citizens are ordering off the same menu and receiving the same food as other diners. The real difference is that senior citizens as a group are typically more price sensitive than other types of diners. Therefore, rather than charging everyone the same price, which would lead many senior citizens not to dine at all, restaurants instead charge a lower price to the class of customers they know to be more price sensitive. This is based not on a difference in cost, but rather on a difference in the elasticity of different groups of customers’ demand. Those familiar with regulated industries will recognize this as the logic underlying Ramsey pricing. If executed properly, this form of discrimination can permit more people to enjoy the benefits of the Internet and allow more last-mile competitors to survive than would forcing every network to offer only a single class of service.
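For the curious, here is a minimal sketch of the inverse-elasticity logic behind Ramsey pricing; the costs, elasticities, and the Ramsey constant are hypothetical numbers chosen only to make the senior-discount point.

```python
def ramsey_price(marginal_cost: float, elasticity: float, ramsey_k: float) -> float:
    """Inverse-elasticity rule: (p - mc) / p = k / elasticity, solved for p.

    The markup over marginal cost shrinks as demand becomes more elastic
    (price-sensitive), which is the logic behind charging the price-sensitive
    group -- the senior citizens above -- less, despite identical costs.
    """
    if ramsey_k >= elasticity:
        raise ValueError("markup rule requires k < elasticity")
    return marginal_cost / (1 - ramsey_k / elasticity)

# Hypothetical numbers: the same $10 marginal cost for both groups.
regular = ramsey_price(marginal_cost=10.0, elasticity=1.5, ramsey_k=0.5)  # $15.00
seniors = ramsey_price(marginal_cost=10.0, elasticity=3.0, ramsey_k=0.5)  # $12.00
print(f"regular diners: ${regular:.2f}, seniors: ${seniors:.2f}")
```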

There is much to like in the Chairman’s speech. It recognizes that important innovation occurs in the core of the network as well as at the edge. It accepts the propriety of curbing the bandwidth consumed by heavy users during times of congestion. It acknowledges that any nondiscrimination rule should not preclude filtering out spam or illegal content. Most importantly, it notes that managed services can play an important role that benefits consumers.

That said, the devil will be in the details. In particular, it remains to be seen how the FCC will solve the problem of defining what constitutes justifiable and unjustifiable discrimination in an industry in which cost and quality vary widely. In addition, the FCC will have to craft rules that preserve the ability of network providers to engage in demand-side discrimination of the type suggested by the senior citizen discounts offered by restaurants. The extent to which the FCC’s proposed rules take these considerations into account will go a long way toward determining whether they will ultimately benefit consumers or harm them.

First Reaction - The Nondiscrimination Rule

To some extent, perhaps because these ideas have been circulating for so long, the speech has the feel that not much is new. (Although, of course, the idea of the FCC's actively regulating the Internet is something new. We can debate later whether this constitutes "active regulation.")

But let me start by sussing out the articulated nondiscrimination rule. One of the difficult questions in the network neutrality debate is setting out the nondiscrimination rule. The speech says two things: “broadband providers cannot discriminate against particular Internet content or applications”; and “Nor can they disfavor an Internet service just because it competes with a similar service offered by that broadband provider.” (p. 5) The first nondiscrimination rule goes beyond some weaker versions of nondiscrimination, such as a rule that forbids only “source” discrimination but allows the carrier to manage its network based on the traffic’s application. I and others (Christopher among them) have said that some reasonable network management might be application-based, taking into account that some applications need to run in real time (VoIP, video conferencing, gaming) and some can tolerate greater degrees of delay (web page serving, email, ftp). A source-based nondiscrimination rule would simply say that the carrier cannot pick and choose the sources of the application being managed (delayed, degraded, whatever you prefer). So, if the carrier manages “video” traffic or “peer-to-peer” traffic, then it must manage all such traffic in exactly the same way no matter where it comes from. This sort of nondiscrimination rule would handle the problem (which the speech says that it sees, p. 3) of the carrier choosing its own or affiliated content over unaffiliated content. It would not, however (and this is the argument for net neutrality), allow entirely new applications to take advantage of the quality of service being provided to an old application.
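As a concrete, and entirely hypothetical, sketch of the distinction: under a source-based rule, the management logic may consult the application class but never the source. The application labels and the assumption that traffic can be classified this cleanly are mine, not the speech's.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str  # originating network or affiliate
    app: str  # application class, e.g. "video", "p2p", "voip"

# Per-application handling is permitted under a source-based rule...
APP_POLICY = {"voip": "priority", "video": "priority", "p2p": "background"}

def manage(pkt: Packet) -> str:
    """Source-blind, application-based management: the decision may depend on
    pkt.app but must never consult pkt.src. Looking at the source -- say,
    favoring an affiliate's video over a rival's -- is what the rule forbids.
    """
    return APP_POLICY.get(pkt.app, "best-effort")

# Same application, different sources, identical treatment:
print(manage(Packet(src="affiliate.example", app="video")))  # priority
print(manage(Packet(src="rival.example", app="video")))      # priority
```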

But a rule of this sort also limits, to a mirror-image degree, the level of network management allowed to the carrier.

The Chairman's Speech

The text is on the FCC's Website.

Brookings had a live webcast of the speech; their event is still ongoing. I think the video will be archived there.

Our first reactions later today/tomorrow morning.

About Us, This Blog, and (A Little) About Network Neutrality

According to reports, this morning FCC Chairman Julius Genachowski is going to give a speech calling for the FCC to adopt rules describing network neutrality. Of course, the FCC already has an Internet Policy Statement, but the proposal is to include an express nondiscrimination principle (sometimes called the Fifth Freedom, because the current policy statement has four and does not specifically mention nondiscrimination). And, again according to reports, Chairman Genachowski will propose that the FCC adopt rules to codify these principles.

We write on telecommunications and Internet law. Tim Wu coined the term “Network Neutrality” and has written extensively in favor of net neutrality principles. He is a Professor at Columbia Law School. Christopher Yoo has prolifically challenged the case for Network Neutrality regulation, arguing that network differentiation may be beneficial and generally that the case for regulation has not been made. He is a Professor at the University of Pennsylvania Law School and Director of its Center for Technology, Innovation, and Competition. Tim and Christopher have a great dialogue that could serve anyone looking for a primer on the issues. Jim Speta wrote a series of articles when the debate was framed as “open access” to cable modems, taking the side opposite the pioneering work of Larry Lessig and Mark Lemley (without of course knowing initially that he was doing so), and more recently has written on institutional questions such as the FCC’s regulatory power over the Internet and the preferred institutions for controlling Internet market power.

This blog will follow and comment on the FCC’s rulemaking proceeding. We will offer initial observations on the Chairman’s speech (when the text is made available). In the coming weeks, before the speech becomes an FCC Notice of Proposed Rulemaking, we will post a series of “backgrounders” on issues central to the coming proceeding – such as the FCC’s legal authority, technical aspects of net neutrality, the economics of net neutrality, and so on. Comments are open (although moderated); suggestions are welcome.

One big caveat, however: We don’t intend to blog on everything net neutrality related, including all of the work being done by companies, advocacy groups, and think tanks. That universe is too big. Pointers and suggestions, again, are welcome. But, depending on the FCC’s speed (or lack thereof), time may occasionally pass without this blog growing. So, subscribe or check back frequently. Thanks.