Tuesday, September 22, 2009

Wireless vs. Wireline: Technical Differences

Wireless raises a whole host of interesting issues. As Jim points out, the market structure of the wireless industry is very different.

Beyond market structure, I am currently working on a book that discusses many of the technical differences between the wireless and wireline worlds. One nice illustration is AIMD (additive increase/multiplicative decrease), the Internet's primary mechanism for managing congestion, which can be understood most easily in terms of how a host should respond to packet loss. Packets may be lost for two reasons: degradation (through link failure or corruption) or congestion. If the problem is degradation, the optimal response is for the host to resend the packet immediately; slowing down would simply reduce network performance without providing any corresponding benefit. If the problem is congestion, however, failure to slow down is a recipe for a repeat of the congestion collapses that plagued the network in the late 1980s. Instead, the host should multiplicatively cut the rate at which it introduces packets into the network, with TCP halving its sending window.
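To make the mechanics concrete, here is a minimal sketch of the AIMD window dynamics in Python. The increment, decrease factor, and loss pattern are illustrative assumptions, not drawn from any real TCP stack:

```python
# A minimal sketch of AIMD congestion control, assuming a simplified
# model in which each round trip either succeeds or signals a loss.

def aimd_step(cwnd: float, loss_detected: bool,
              increase: float = 1.0, decrease_factor: float = 0.5) -> float:
    """Return the next congestion window after one round trip.

    Additive increase: grow the window by a fixed increment when all
    packets are acknowledged. Multiplicative decrease: cut the window
    by a constant factor (TCP halves it) when a loss is taken as a
    congestion signal.
    """
    if loss_detected:
        return max(1.0, cwnd * decrease_factor)  # back off sharply
    return cwnd + increase                       # probe for more bandwidth


if __name__ == "__main__":
    cwnd = 1.0
    # Hypothetical loss pattern: a loss on every eighth round trip.
    for rtt in range(1, 25):
        loss = (rtt % 8 == 0)
        cwnd = aimd_step(cwnd, loss)
        print(f"RTT {rtt:2d}: loss={loss!s:5}  cwnd={cwnd:5.1f}")
```

Running this produces the familiar sawtooth: the window climbs linearly until a loss, then drops by half and begins climbing again.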

Van Jacobson realized that because wireline networks are so reliable, packet loss could be treated as a fairly reliable sign of congestion (rather than degradation) and thus as a signal that the host should slow down. This inference is now a mandatory part of every conforming TCP implementation.

The problem is that this inference is not valid for wireless. Wireless networks drop and degrade packets for reasons other than congestion much more frequently than wireline networks. Because wireless bandwidth is so limited, slowing down needlessly can be disastrous. As a result, engineers are now working on alternative forms of explicit congestion notification customized for wireless networks instead of the implicit approach taken by Jacobson. Some of these deviate from the semantics of TCP/IP. All of them deviate from the end-to-end argument.
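The difference is easy to see in a rough sketch of the sender's decision logic. The event model and handling policy below are simplifying assumptions for illustration, not any specific wireless proposal:

```python
# A minimal sketch contrasting loss-based inference with explicit
# congestion notification (ECN). The Event type and the policy in
# react() are illustrative assumptions.

from enum import Enum, auto

class Event(Enum):
    ACK = auto()        # packet delivered normally
    ECN_MARK = auto()   # router explicitly flagged congestion
    LOSS = auto()       # packet never acknowledged

def react(cwnd: float, event: Event, loss_means_congestion: bool) -> float:
    """Update the congestion window for one event.

    On a wireline network (loss_means_congestion=True), a loss is
    treated like an explicit congestion signal. On a lossy wireless
    link, a loss is assumed to be corruption: retransmit at full rate
    and rely on explicit marks alone to signal genuine congestion.
    """
    if event is Event.ECN_MARK:
        return max(1.0, cwnd / 2)   # explicit signal: back off
    if event is Event.LOSS and loss_means_congestion:
        return max(1.0, cwnd / 2)   # Jacobson's inference
    if event is Event.LOSS:
        return cwnd                 # assumed corruption: just resend
    return cwnd + 1.0               # additive increase on an ACK


if __name__ == "__main__":
    for medium, infer in (("wireline", True), ("wireless", False)):
        cwnd = 8.0
        for ev in (Event.ACK, Event.LOSS, Event.ECN_MARK):
            cwnd = react(cwnd, ev, loss_means_congestion=infer)
            print(f"{medium:8s} {ev.name:8s} cwnd={cwnd:4.1f}")
```

The wireless branch keeps sending at full rate after a bare loss, which is exactly the behavior that Jacobson's inference forbids, and exactly why it requires the network, rather than the endpoints alone, to say when congestion is occurring.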

There are other technical differences as well that I may explore in later posts, but the one I discuss here illustrates my broader point that many changes to the network architecture may simply represent the network's natural response to the growing heterogeneity of transmission media. It also suggests that simply mandating adherence to the status quo or applying one-size-fits-all solutions across all technologies would impose significant penalties in both network performance and cost.
