The New York Times doesn't get to the heart of conference Wi-Fi problems: I can't tell you how frustrated I am by this rather facile article on the problems of thousands of people all trying to connect at once to a Wi-Fi network (or networks) at dense public venues, such as keynote addresses at technology conferences. As someone who has spent a decade writing in depth about Wi-Fi, often for mainstream audiences, I find the Times piece disappointing: it spreads myths and casts no new light. It also ignores a couple of key factors important in 2010. (Let's not even get into the fact that the picture with this article makes Steve Jobs look as if he's about to have an emetic event onstage.)
We have to go nine paragraphs into the article before we get to the "nut" paragraph, the one that states the reason it's being written at all. First, we wade through anecdotes of specific conferences, and quotes from tech smarty guy Jason Calacanis, who does not advertise himself as a Wi-Fi guru:
The problem is that Wi-Fi was never intended for large halls and thousands of people, many of them bristling with an arsenal of laptops, iPhones and iPads.
That's not quite true, although it's not completely incorrect. Even the first Wi-Fi flavor, 802.11b, was designed to be aggregated into "infrastructure" networks in which client devices could roam among many access points sharing the same network name (an Extended Service Set). The 802.11g spec clearly recognized that wireless networks could be used by dense crowds. And 802.11n, one could argue, specifically deals with heavy usage by allowing multiple antennas to "beamform" or steer signals directly to clients, and to "hear" more clearly by using multiple antennas to sift through competing signals.
(More technically, 802.11g split a network signal into many subchannels, any of which can be garbled while the rest get through; 802.11n multiplies the number of unique data streams that can be sent at once, as well as taking advantage of 802.11g's subchannel approach.)
Two grafs later, the reporter shifts to backhaul and wiring, noting that infrastructure in hotels may contribute to the problem. Then, in the next paragraph, he finally gets to the heart of it:
Companies that install Wi-Fi networks sometimes have only a day to set up their equipment in a hall and then test it. They must plan not only for the number of attendees, but also the size and shape of the room, along with how Wi-Fi signals reflect from walls and are absorbed by the audience.
This is true. Not all companies that install conference Wi-Fi know how to build such networks well, but many do; they are hampered by constraints of time, equipment, and venue. However, many firms repeatedly install Wi-Fi networks in the same locations, so you would think that they would be able to learn from this, either in setting expectations or improving networks. (Please also read MuniWireless's post from a year ago on this topic, which includes an interview with Tim Požar about conference Wi-Fi. Tim was the troubleshooter brought in by TechCrunch in the 2008 conference Wi-Fail cited in the NY Times article.)
What's not mentioned until the penultimate paragraph (and then in a backhanded way) is the rise of 5 GHz networking. It's a gaping hole in this article, even though it's on the edge of being too techie to mention—except that the writer goes into a parenthetical about 2.4 GHz. Most laptops and some mobile devices can use 802.11n over 5 GHz. In the United States, there are 23 clear 802.11n 5 GHz 20 MHz-wide channels, 8 to 12 of which are commonly available in base station hardware. (The other 11 can be used, but require signal sensing that monitors for relatively unlikely military use in the vicinity. This sensing registers a lot of false positives, which makes the channels less usable.)
If you're one of tens of millions of people with a dual-band 802.11n router, you're using 5 GHz in your home or office. You might know (or have found out) that 5 GHz signals, because they are higher up the spectrum, don't travel as far. They attenuate more rapidly, which means that the signals become lost in noise faster than 2.4 GHz signals do. In a convention hall, however, with line of sight to most access points, distance is less of an issue. 802.11n also contends well with signal bouncing, exploiting multiple paths through space, which lets it work better than earlier Wi-Fi flavors.
Thus, any conference Wi-Fi service firm that's not sticking in a sizable proportion of 5 GHz capable base stations, preset to nonoverlapping channels across the keynote auditorium or conference hall, is starting out at a deficit. Client devices that can use 5 GHz will preferentially switch to it if there's a strong enough signal. (Base stations currently don't have a spec that lets them tell clients to switch channels.)
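To see why those extra channels matter, here's a back-of-envelope sketch in Python; the per-channel throughput figure is my assumption for illustration, and the channel counts are the ones cited above:

```python
# Rough aggregate-capacity comparison. PER_CHANNEL_MBPS is an assumed
# real-world throughput for one 20 MHz channel; channel counts reflect
# 3 mostly nonoverlapping channels in 2.4 GHz versus the 8 to 12
# commonly available in 5 GHz base station hardware.
PER_CHANNEL_MBPS = 22.5

channels = {"2.4 GHz": 3, "5 GHz (common gear)": 12}
for band, count in channels.items():
    aggregate = count * PER_CHANNEL_MBPS
    print(f"{band}: {count} nonoverlapping channels, about {aggregate:.0f} Mbps total")
```

Whatever the exact per-channel number, quadrupling the count of nonoverlapping channels roughly quadruples the hall's aggregate capacity, which is the whole argument for 5 GHz in one line of arithmetic.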
There will be plenty of congestion in 2.4 GHz's three mostly nonoverlapping channels, because most smartphones can only use that band. (I'm not sure if any smartphone has 5 GHz built in yet, only tablets and slates, like the Samsung Galaxy Tab and Apple iPad.) Older laptops will also use that band. And then there's the MiFi, mentioned only in passing despite being another key potential problem in convention keynote Wi-Fi mishigas.
The MiFi—for those who haven't heard of it—is the most popular cellular router on the market: it connects to a cellular network for Internet access and operates as a Wi-Fi router, which lets a MiFi owner connect from any device with Wi-Fi. It's a neat bypass. Sprint, T-Mobile, and Verizon also offer certain phone models that can act as portable hotspots in the same fashion.
All of these cell routers and mobile hotspot phones use 2.4 GHz, and create unique networks. The more unique Wi-Fi networks in the same area, the more trouble, because Wi-Fi uses different strategies to avoid conflicting with networks on the same and adjacent channels. This reduces overall throughput.
But it shouldn't be that big an effect, even with the hundreds in use at tech events, like the ones this year that Apple and Google had trouble with. The MiFi uses relatively low power, its backhaul is low-bandwidth relative to 802.11g (about 1 to 2 Mbps of cell backhaul against 20 to 25 Mbps of real Wi-Fi throughput), and the 802.11 specs actually do a fairly smart job of sorting things out.
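The claim can be checked with the numbers above; this is a hedged back-of-envelope sketch using midpoints of the cited ranges, not a measurement:

```python
# How much airtime does one MiFi relaying a saturated cell link consume
# on a Wi-Fi channel? Midpoints of the cited ranges are my simplification.
cell_backhaul_mbps = 1.5      # midpoint of the 1-2 Mbps cell backhaul cited
wifi_throughput_mbps = 22.5   # midpoint of the 20-25 Mbps real Wi-Fi throughput
airtime_share = cell_backhaul_mbps / wifi_throughput_mbps
print(f"One saturated MiFi uses roughly {airtime_share:.0%} of a channel's airtime")
```

That works out to under 7 percent per device at full tilt, and most MiFis sit far below their backhaul ceiling most of the time, which supports the point that the specs sort the contention out reasonably well.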
One final problem: DHCP. This sounds even more obscure, and I was reminded of it re-reading the MuniWireless article from last year. As Tim Požar noted, some wireless service providers don't correctly configure the server that hands out temporary IP addresses to wireless devices. I've seen this many, many times. Some outfits rely on the Wi-Fi access points to hand out addresses, a terrible idea; most of those can hand out a maximum of 253 addresses, if that many. An access point might be able to handle several hundred connections, but simply can't give out addresses.
In a correctly configured network, access points pass through DHCP assignment from a central server, but those servers can be misconfigured to limit to 253 addresses or fewer, too. A simple change could allow over 65,000 addresses from one server. (Technically, one strategy is to widen the subnet mask to grow the pool from a /24 to a /16 on a private address range.)
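The subnet arithmetic is easy to check with Python's standard ipaddress module; the address ranges below are illustrative, not from any particular vendor's setup:

```python
import ipaddress

# Compare DHCP pool sizes for a /24 versus a /16 on a private range.
for cidr in ("10.0.0.0/24", "10.0.0.0/16"):
    net = ipaddress.ip_network(cidr)
    usable = net.num_addresses - 2  # subtract network and broadcast addresses
    print(f"{cidr}: {usable} usable host addresses")
```

A /24 tops out at 254 usable addresses (253 once the gateway takes one); a /16 yields 65,534, far more than any keynote crowd could exhaust.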
What's most likely the problem is tech companies and conferences cheaping out. I don't mean spending very little, but less than what would solve the problem. I'm sure the firms that unwire events come in with bids that are as cheap as they can make them to be the low bidder, or have the conference organizer or sponsoring company ask, "How can we knock this price down?"
With the level of Wi-Fi use we're seeing, it's not impossible to build a good network for thousands of people in a small space. It may just cost more than anyone wants to spend. The line item in the budget for Wi-Fi needs to be connected up with the expected return on good publicity.