Open Spectrum Resource Page

This web page exists to link all of the online resources I want to share that relate to Open Spectrum ideas. I am particularly interested in the intersection of architecture, information theory, technology, economics, and policy. Please contact me with any others that you find interesting. – David P. Reed

My writings, presentations, and talks

Why spectrum is not property – an early, short rant on the case against treating spectrum as property, based on the idea that cooperative wireless networks create more value.

How wireless networks scale: the illusion of spectrum scarcity (slides and abstract) – talk I gave at ISART 2002 as part of a panel discussion on spectrum management issues raised by advanced radio technologies. [ISART agenda recovered from Wayback Machine at www.archive.org]

The future of spectrum management: Private property rights or open standards? – a debate between Gerry Faulhaber (former chief economist, FCC) and me, focusing on whether a market based on property rights or an open spectrum approach is appropriate for the evolution of spectrum management. The video of our discussion and my slides for it are on the site.

I spoke at the FCC Technological Advisory Council in DC on 4/26/2002 on this topic – the RealVideo version of the presentation is available – my talk starts about 48 minutes into the meeting. The slides for my presentation are here.

Also, I joined Larry Lessig on a panel at the O’Reilly Emerging Technology Conference in San Jose on 5/16/2002. The whole conference was pretty interesting. Dan Gillmor wrote a nice column about Open Spectrum in the San Jose Mercury News that captured my points.

On 7/8/2002 I filed comments as part of the FCC Spectrum Policy review being conducted by the FCC Spectrum Policy Task Force. A slightly revised version is available in MS Word and HTML format.

Yochai Benkler and I were invited to a panel at the Cato Institute on “Telecom Policy after the Broadband Meltdown” on 11/14/02, where we joined Gerry Faulhaber, Tom Hazlett and Rudy Baca in a discussion of Open Spectrum, Secondary Markets in Spectrum, and other approaches to wireless policy. A RealVideo of the jousting match is here.

On 12/4, I participated in “An Evening with David Reed”, giving a talk entitled “Bits aren’t Bites” at the MIT Wireless Forum. Tim Shepard joined me. Several people liked the slides, so I put them online here.

Larry Lessig put on a great conference called Spectrum Policy: Property or Commons? this spring (2003). I gave the overview on radio technology and spectrum, and a number of others debated the two approaches to spectrum policy, followed by a moot court focused on whether Coase (see below) would choose a model based on property rights in spectrum or giving rights to devices.

David Weinberger interviewed me for a 2003 Salon article titled The Myth of Interference. We discuss why radio “interference” doesn’t destroy the information being transmitted. This upends the legal and Constitutional basis for treating a spectrum band as an exclusive property right in regulation. Instead, the legal construct of interference justifies creating an artificially scarce “economic input” called spectrum rights. Given what we now know about information theory and electromagnetic wave physics, the information transmission capacity of radio systems can scale as the number of transceivers in a fixed region increases, without degradation.

Viral Communications Research at the Media Lab

Andy Lippman and I co-lead a research project at the MIT Media Lab called Viral Communications. The key idea is to investigate network architectures that need little or no infrastructure, but which grow and adapt to meet the needs of users in their natural environment. These architectures are both technically efficient and economically desirable. We wrote a white paper about the concept.

Communications Futures Program at MIT

With Charles Fine and David D. Clark at MIT, Andy and I have created a cross-MIT program called the Communications Futures Program. Viral Communications work is partly supported by this program. CFP is focused on the broad future of communications, mapping the territory ahead of us in terms of technological, business, and policy architectures, on the assumption that all of the traditional “stovepipe” boundaries have been erased. We are also cooperating in this with parts of Cambridge University via the Cambridge-MIT Institute.

Other resources: Policy

R. H. Coase wrote two very interesting articles in The Journal of Law and Economics, called “The Federal Communications Commission” (Oct. 1959) and “The Interdepartment Radio Advisory Committee” (Oct. 1962). These articles focus on the two key agencies in the United States that manage spectrum allocation policy. He argues that because the set of frequencies is limited, the proper method for allocation is to create a market in frequencies. I am a great admirer of Coase’s clear systems thinking, especially his work on transaction cost economics, for which he won the Nobel. The problem with his argument is that his understanding of information theory and communications is pre-Shannon – when we begin measuring the utility of the spectrum in terms of its information capacity and options to connect, rather than the number of frequency channels, the scarcity argument does not apply. However, these are excellent articles, which everyone involved in the debate must read – I hope to find online versions.
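To make the contrast concrete: Shannon's formula C = B·log2(1 + S/N) measures a channel by its information capacity, not by counting named frequency slots. A minimal sketch of that point (the bandwidth and SNR numbers are illustrative only, not taken from any of the articles above):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity in bits/s: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# One 1 MHz channel at 30 dB SNR (S/N = 1000):
c_one = shannon_capacity(1e6, 1000)

# The same 1 MHz carved into 10 exclusive 100 kHz "property" slices,
# each still at 30 dB SNR, carries exactly the same total capacity:
c_split = 10 * shannon_capacity(1e5, 1000)

# Capacity is a function of bandwidth and signal-to-noise ratio, not of
# how many licensable "channels" a regulator chooses to define.
print(round(c_one), round(c_split))
```

The point of the toy: the scarce thing is not a countable stock of channels but a capacity that depends on physics and system architecture.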

Larry Lessig and Yochai Benkler wrote Will technology make CBS unconstitutional? in The New Republic. Here they argue that if spectrum’s information capacity is seen to be unlimited, the US First Amendment would conflict with our current spectrum allocation process.

Yochai Benkler wrote a comprehensive argument for a “commons” spectrum policy in 1997, called Overcoming Agoraphobia (abstract). Though the technology and approaches that lead to scalable capacity had yet to be articulated clearly, I really like this article. I wish I had seen it when it came out.

Kevin Werbach wrote An Open Letter to the FCC on Spectrum Policy and a Release 1.0 article called Open Spectrum: The Paradise of the Commons (abstract) in the fall of 2001. Kevin’s very interested in this issue, and since he’s a former FCC staffer, he’s got an excellent perspective.

The Information Law Institute at NYU (headed by Yochai Benkler) held a workshop last year on Developing a new spectrum policy, which I participated in.

Dewayne Hendricks’ web site http://dandin.com has a lot of useful insights in this area. In particular, his interest in “software radio” allows for the dynamic adaptation of radios to local conditions, which makes the architectures above easily implementable. There was a fun article in Wired that captures Dewayne’s views called Broadband Cowboy. More seriously, he gave a talk at Pacificon (an ARRL conference) that captured his views of SDR.

Paul Baran gave a great talk at the 1994 NGN Symposium that covers some of these issues and their implications for policy.

FYI, the current intellectual trend influencing the FCC is based on the thinking of Thomas Hazlett of the American Enterprise Institute. It’s worth reading his The wireless craze, the unlimited bandwidth myth, the spectrum auction faux pas, and the punchline to Ronald Coase’s “Big Joke” – An essay on airwave allocation policy, which, unlike Coase’s writing, has the polemic style typical of conservative think tanks, but is being taken seriously nonetheless. The technical foundation of his position has the same weakness as Coase’s position above, but he adds attacks on claims that new architectures might create unlimited capacity (spread spectrum, SDR, and UWB in particular). Whether this school of thought is open to including technological innovation in its thinking is not clear.

Other resources: Technology

Tim Shepard’s thesis – Decentralized Channel Management in Scalable Multihop Spread-Spectrum Packet Radio Networks (and the more concise paper based on it) – demonstrates that one can build a practical network whose capacity increases the more stations you add. For N stations, total capacity grows as the square root of N. The key idea is to build a network of cooperative repeaters.

P. Gupta and P. R. Kumar wrote a good paper on The Capacity of Wireless Networks that demonstrates the same kind of capacity increase with a different architectural approach: they show that the class of architectures they consider follows a scaling law in which capacity grows with the square root of the number of active stations in a region.

Another paper by Gupta and Kumar, Internets in the Sky, covers 3D spatial distribution of user stations and comes up with a law that says capacity grows with N^(2/3), where N is the number of stations. The Interplanetary Internet should exploit this.
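For intuition, the planar sqrt(N) law and the 3D N^(2/3) law can be compared numerically. A toy sketch of my own (the constants are arbitrary; only the growth rates matter):

```python
def capacity_2d(n: int) -> float:
    """Planar cooperative-repeater networks (Shepard, Gupta-Kumar):
    total transport capacity grows as sqrt(N)."""
    return n ** 0.5

def capacity_3d(n: int) -> float:
    """3D station distributions ("Internets in the Sky"):
    total transport capacity grows as N^(2/3)."""
    return n ** (2 / 3)

for n in (100, 10_000, 1_000_000):
    print(f"N={n:>9,}  2D ~ {capacity_2d(n):,.0f}  3D ~ {capacity_3d(n):,.0f}")
# Total capacity rises as stations are added in both geometries, so each
# new station adds value to the whole network; per-station capacity still
# shrinks, which is the gap the linear-in-N results below close.
```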

Fans of 802.11 should realize that 802.11 does not, in practice, scale very well at all. Gupta, Gray, and Kumar published an empirical paper called An experimental scaling law for ad hoc networks that showed this in a real-world experiment. The problem is in the MAC protocol, which is not adaptive and requires that all stations be able to hear each other (so they can’t be repeaters or benefit from multiuser diversity).

On the other hand, Aggelos Bletsas, Andy Lippman, and I are working on RF-level scalable networks, including some work using OFDM-based techniques with RF-layer relaying that might fit into 802.11a at some point. One of the neat things about the technique is that it scales well without increasing end-to-end latency. See Collaborative (Viral) Wireless Communication Networks.

Foschini and Gans of AT&T Labs have been working on the fundamental capacity limits when multiple antennas are used to send and receive information. Their BLAST project, using space-time coding, and several papers, notably Limits of Wireless Communications in a Fading Environment when using Multiple Antennas, are extremely interesting in suggesting a way to achieve capacity scaling that is linear in N. To summarize simply, these results demonstrate that multipath fading helps, rather than degrades, system capacity. (The physical and information theory basis that underlies BLAST and MIMO techniques is explained in a nice Physics Today article called Communications in a Disordered World, which is based on work reported in Science by the authors.) See also Communications through a diffusive medium: Coherence and Capacity.
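To see why linear-in-N scaling is plausible, here is a toy model of my own (not BLAST itself): in an idealized rich-scattering link with N antennas at each end, multipath creates roughly N parallel spatial sub-channels, and the receive array gain roughly offsets splitting the transmit power N ways, so each sub-channel keeps its SNR:

```python
import math

def mimo_capacity_ideal(n_streams: int, per_stream_snr: float) -> float:
    """Idealized N x N MIMO link: N parallel spatial sub-channels, each
    at the same SNR, giving roughly
        C = N * log2(1 + SNR)  bits/s/Hz  -- linear in N.
    Real channels approach this best case when scattering is rich."""
    return n_streams * math.log2(1 + per_stream_snr)

for n in (1, 2, 4, 8):
    print(n, round(mimo_capacity_ideal(n, 100.0), 2))  # 20 dB SNR
# A single-antenna link at the same SNR and bandwidth is stuck near
# log2(101) bits/s/Hz; eight antenna pairs give eight times that,
# with total transmit power and bandwidth held fixed.
```

This is why multipath "clutter" helps: without scattering there would be only one usable spatial sub-channel, no matter how many antennas you add.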

Towards an information theory of large networks: an achievable rate region is a new result from Gupta and Kumar that demonstrates that there are cases where transport capacity can scale linearly in N. They show this for networks that work by combining ideas from BLAST and cooperative repeater networks.

Matthias Grossglauser and David Tse describe another fascinating and counterintuitive result about capacity scaling, namely Mobility increases the capacity of ad hoc wireless networks (ps file of IEEE Trans. on Networking submission; also as pdf from CiteSeer). They show that mobile wireless nodes that cooperatively repeat traffic have transport capacity that scales linearly with N. If you had asked most cellular designers, you’d have gotten the unanimous opinion that mobility hurts capacity. But mobility is a kind of diversity, just like multipath.

Greg Wornell and Nicholas Laneman did some very interesting theoretical work on distributed space-time protocols and cooperative diversity. See Laneman and Wornell, Distributed Space-Time Coded Protocols for Exploiting Cooperative Diversity in Wireless Networks, and Laneman, Tse, and Wornell, Cooperative Diversity in Wireless Networks: Efficient Protocols and Outage Behavior.

Minsky, Zippel, and Trachtenberg recently showed a constructive way to minimize the communication complexity of reconciling the information sets at independent nodes. Combining their result, which shows that the messages exchanged are bounded by the difference in information content between the nodes, with cooperative algorithms suggests that the capacity of a net could scale linearly with N, or perhaps even superlinearly.
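A toy illustration of the bound (this only measures the two costs; their actual protocol achieves the bound with characteristic-polynomial evaluations, which I don't reproduce here): reconciling two nearly identical sets should cost communication proportional to the symmetric difference, not to the total size.

```python
def naive_cost(a: set, b: set) -> int:
    """Naive reconciliation: one side ships its entire set, ~|A| items."""
    return len(a)

def difference_cost(a: set, b: set) -> int:
    """Information actually needed: ~|A symmetric-difference B| items."""
    return len(a ^ b)

host_a = set(range(1_000_000))              # a million items
host_b = (host_a - {3, 17}) | {2_000_001}   # an almost-identical replica

print(naive_cost(host_a, host_b))       # a million items shipped
print(difference_cost(host_a, host_b))  # only the 3 differing items matter
```

In a cooperative wireless network, nearby repeaters hold highly overlapping information sets, which is why a difference-bounded exchange is such a large win.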

The idea of “digital fountains” also creates opportunities for dramatic scaling efficiencies in cooperative wireless networks that multicast content. See A Digital Fountain Approach to Reliable Distribution of Bulk Data by Byers, Luby, and Rege for the basic idea combining layered coding and tornado codes. Applying this to scalable repeater networks would dramatically reduce the need for “clear channel” radio.
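To show the fountain idea in miniature, here is a rateless sketch of my own devising (a minimal LT-style XOR code, not the tornado codes of the paper): the sender emits an endless stream of random XOR combinations of the source blocks, and any receiver simply collects packets, from any mix of repeaters, until a peeling decoder recovers everything.

```python
import random

def encode_packet(blocks, rng):
    """XOR together a small random subset of the source blocks."""
    degree = rng.choice([1, 1, 2, 2, 2, 3, 4])  # ad-hoc degree distribution
    idxs = rng.sample(range(len(blocks)), min(degree, len(blocks)))
    value = 0
    for i in idxs:
        value ^= blocks[i]
    return set(idxs), value

def peel_decode(k, packets):
    """Repeatedly resolve packets with one unknown block, substituting
    recovered blocks back until all k are known (or progress stalls)."""
    recovered = {}
    progress = True
    while progress:
        progress = False
        for idxs, v in packets:
            unknown = idxs - recovered.keys()
            if len(unknown) == 1:
                (i,) = unknown
                for j in idxs - unknown:
                    v ^= recovered[j]       # strip already-known blocks
                recovered[i] = v
                progress = True
    return recovered if len(recovered) == k else None

rng = random.Random(42)
source = [rng.randrange(256) for _ in range(8)]  # 8 source blocks
received = []
decoded = None
while decoded is None:          # keep catching packets until decodable
    received.append(encode_packet(source, rng))
    decoded = peel_decode(len(source), received)
# `decoded` maps block index -> original value; the receiver needed
# "enough" packets, not any particular ones -- that is the fountain.
```

Because no particular packet is required, cooperative repeaters can all spray encoded packets without coordinating, which is exactly the property that reduces the need for “clear channel” radio.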

A fascinating area related to scalable spectrum capacity is found in modern optics. I just finished (May 24) reading a neat article in IEEE Proceedings entitled Synthesis of Three-Dimensional Light Fields and Applications, by Piestun and Shamir (Proc IEEE 90, 2, Feb 2002, pp. 222-244). I quote from their abstract:

The possibility of synthesizing light fields satisfying given requirements within a three-dimensional (3-D) space domain was proposed and demonstrated during recent years. In this paper, we present fundamental physical properties characterizing 3-D fields and propose analytical and numerical procedures to synthesize them. These methods solve the proper wave equation under 3-D constraints.

In other words, rather than thinking of optics as “beams of light”, these techniques synthesize an arbitrarily shaped energy field using a simple collection of elements. Doing this with RF wavelengths rather than light is analogous, and it demonstrates the basic intuition that electromagnetic field-based signaling (network radio) is best thought of as a manipulation of a 4-D field (in space and time) so that the right signals come out at the right places.

Other electromagnetic-wave techniques related to quantum optics, such as Orbital Angular Momentum (OAM), are also very interesting. It seems that multiple co-channel signals can coexist independently by using the OAM dimensionality of propagating 3D waves. I briefly wrote about this in Some thoughts on Orbital Angular Momentum (OAM) for future radio.