Entire site and all contents except otherwise noted © Copyright 2001-2011 by Glenn Fleishman. Some images ©2006 Jupiterimages Corporation. All rights reserved. Please contact us for reprint rights. Linking is, of course, free and encouraged.
2010 seems to be the year that Wi-Fi became part of the air we breathe: This blog is an unbelievable 9 years, 8 months old. And it's almost unnecessary. Don't cry for me: I have plenty of other writing to occupy my time. But I'd tie the drop in volume of posts here, and the declining traffic to this site over the last three years, to the fact that Wi-Fi generally works well, is built into nearly everything, and is available in most public places, with service free—or bundled (in the US, Canada, and parts of Europe and Asia) into most smartphone mobile service plans.
When I started writing this blog, 802.11b was the only standard in wide use, the Wi-Fi Alliance had the wonky name of Wireless Ethernet Compatibility Alliance (WECA), and an 802.11b base station cost at least $300. That bought you a whopping 10 Mbps Ethernet port, used to push a few Mbps over the air. No laptops came with Wi-Fi built in (Apple was selling an add-on internal card for some laptops and desktops), and you had to mess with driver installation and tweaking.
Here's what I wrote on 9 April 2001:
The proliferation of public space wireless access may transform how people work. It will provide an almost seamless high-speed link between office, home, and road—from home to airport to in flight to airport to hotel to conference center.
Is this good? Will it make folks happier and more efficient? Probably not. But it's a reality that I want to track.
Now, of course, a base station can be under $50 and perform 50 times better, gigabit Ethernet is the rule (with a few exceptions), and you'd be hard pressed to buy smartphones, handhelds, slates, netbooks, and laptops without Wi-Fi soldered right in.
That's a good thing. I've spent an inordinate amount of time in the last 9+ years writing about stuff that didn't work, instead of things that did. I documented products that failed, standards that were released before being fully baked, incompatible approaches that could ruin Wi-Fi, and the near-complete collapse of privately funded municipal wireless networks.
The New York Times doesn't get to the heart of conference Wi-Fi problems: I can't tell you how frustrated I am about this rather facile article on problems with thousands of people all trying to connect at once to a Wi-Fi network (or networks) at dense public venues, such as keynote addresses at technology conferences. As someone who has spent a decade writing in depth about Wi-Fi, often for mainstream audiences, the Times piece disappoints me as it spreads myths and doesn't cast new light. It also ignores a couple key factors important in 2010. (Let's not even get into the fact that the picture with this article makes Steve Jobs look as if he's about to have an emetic event onstage.)
We have to go nine paragraphs into the article before we get to the "nut" paragraph, the one that states the reason it's being written at all. First, we wade through anecdotes of specific conferences, and quotes from tech smarty guy Jason Calacanis, who does not advertise himself as a Wi-Fi guru:
The problem is that Wi-Fi was never intended for large halls and thousands of people, many of them bristling with an arsenal of laptops, iPhones and iPads.
That's not quite true, although it's not completely incorrect. Even the first Wi-Fi flavor, 802.11b, was designed to be aggregated into "infrastructure" networks, in which client devices can roam among many access points sharing the same network name (an Extended Service Set). The 802.11g spec clearly recognized that wireless networks could be used by dense crowds. And 802.11n, one could argue, specifically deals with heavy usage by allowing multiple antennas to "beamform," or steer signals directly to clients, and to "hear" more clearly by using multiple antennas to sift through competing signals.
(More technically, 802.11g split a network signal into many subchannels, any of which can be garbled and the rest get through; 802.11n multiplies the number of unique data streams that can be sent at once, as well as taking advantage of 802.11g's subchannel approach.)
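As a toy illustration of how those multipliers stack up (round numbers of my own, not the actual 802.11 modulation and coding tables):

```python
# Toy peak-rate sketch using assumed round numbers, not the real
# 802.11 PHY rate tables.
def peak_rate_mbps(base_rate=54, streams=1, width_mhz=20):
    """Scale a single-stream 20 MHz rate by spatial streams and channel width."""
    return base_rate * streams * (width_mhz / 20)

print(peak_rate_mbps())                         # an 802.11g-style single stream
print(peak_rate_mbps(streams=2, width_mhz=40))  # a two-stream, 40 MHz 802.11n setup
```

The real 802.11n rate tables add further tweaks (shorter guard intervals, more subcarriers per channel), so actual peak rates run somewhat higher than this simple multiplication suggests.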
Two grafs later, the reporter shifts to backhaul and wiring, noting that infrastructure in hotels may contribute. Then, in the next paragraph, the article finally gets to the heart of the problem:
Companies that install Wi-Fi networks sometimes have only a day to set up their equipment in a hall and then test it. They must plan not only for the number of attendees, but also the size and shape of the room, along with how Wi-Fi signals reflect from walls and are absorbed by the audience.
This is true. Not all companies that install conference Wi-Fi know how to build such networks well, but many do; they are hampered by constraints of time, equipment, and venue issues. However, many firms repeatedly install Wi-Fi networks in the same locations, so you would think that they would be able to learn from this, either in setting expectations or improving networks. (Please also read MuniWireless's post from a year ago on this topic, which includes an interview with Tim Požar about conference Wi-Fi. Tim was the troubleshooter brought in by TechCrunch in the 2008 conference Wi-Fail cited in the NY Times article.)
What's not mentioned until the penultimate paragraph (and then in a backhanded way) is the rise of 5 GHz networking. It's a gaping hole in this article, even though it's on the edge of being too techie to mention—except that the writer goes into a parenthetical about 2.4 GHz. Most laptops and some mobile devices can use 802.11n over 5 GHz. In the United States, there are 23 clear 802.11n 5 GHz 20 MHz-wide channels, 8 to 12 of which are commonly available in base station hardware. (The other 11 can be used, but require signal sensing that monitors for relatively unlikely military use in the vicinity. This sensing recognizes a lot of false positives, which makes the channels less usable.)
If you're one of tens of millions of people with a dual-band 802.11n router, you're using 5 GHz in your home or office. You might know (or have found out) that 5 GHz signals, because they are higher up the spectrum, don't travel as far. They attenuate more rapidly, which means the signal becomes lost in noise faster than a 2.4 GHz one. In a convention hall, however, with line of sight to most access points, distance is less of an issue. 802.11n also copes well with signals bouncing around a room, using the multiple paths through space to its advantage, which lets it work better than earlier Wi-Fi flavors.
Thus, any conference Wi-Fi service firm that's not sticking in a sizable proportion of 5 GHz capable base stations, preset to nonoverlapping channels across the keynote auditorium or conference hall, is starting out at a deficit. Client devices that can use 5 GHz will preferentially switch to it if there's a strong enough signal. (Base stations currently don't have a spec that lets them tell clients to switch channels.)
There will be plenty of congestion in 2.4 GHz's three mostly nonoverlapping channels, because most smartphones can only use that band. (I'm not sure if any smartphone has 5 GHz built in yet; only tablets and slates, like the Samsung Galaxy Tab and Apple iPad, do.) Older laptops will also use that band. And then there's the MiFi, mentioned only in passing despite being another key potential problem in convention keynote Wi-Fi mishigas.
The MiFi—for those who haven't heard of it—is a cellular router, the most popular on the market, that connects both to a cellular network for Internet access and operates as a Wi-Fi router. This allows a MiFi owner to connect from any device with Wi-Fi. It's a neat bypass. Sprint, T-Mobile, and Verizon also offer certain phone models that can act as portable hotspots in the same fashion.
All of these cell routers and mobile hotspot phones use 2.4 GHz, and create unique networks. The more unique Wi-Fi networks in the same area, the more trouble, because Wi-Fi uses different strategies to avoid conflicting with networks on the same and adjacent channels. This reduces overall throughput.
But it shouldn't be that big an effect, even with the hundreds in use at tech events, like the ones this year that Apple and Google had trouble with. The MiFi uses relatively low power, the backhaul is relatively low-bandwidth compared to the 802.11g standard (about 1 to 2 Mbps of cell backhaul compared to 20 to 25 Mbps of real Wi-Fi throughput), and the 802.11 specs actually do a fairly smart job of sorting things out.
One final problem: DHCP. This sounds even more obscure, and I was reminded of it re-reading the MuniWireless article from last year. As Tim Požar noted, some wireless service providers don't correctly configure the server that hands out temporary IP addresses to wireless devices. I've seen this many, many times. Some outfits rely on the Wi-Fi access points to hand out addresses, a terrible idea; most of those can offer a maximum of 253 addresses, if that many. An access point might be able to handle several hundred connections, but simply can't give out that many addresses.
In a correctly configured network, access points pass through DHCP assignments from a central server, but those servers can be misconfigured to limit the pool to 253 addresses or fewer, too. A simple change could allow tens of thousands of addresses from one server. (Technically, you'd modify the subnet mask to widen the pool from a /24 to a /16 on a private address range, as one strategy.)
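The arithmetic behind that fix can be checked with Python's standard ipaddress module (the address ranges here are generic private blocks, not any particular vendor's defaults):

```python
import ipaddress

# Usable host addresses in each pool: the total, minus the network and
# broadcast addresses. (A DHCP server typically also reserves one for
# the router itself, which is where 253 comes from on a /24.)
for cidr in ("192.168.0.0/24", "10.0.0.0/16"):
    net = ipaddress.ip_network(cidr)
    print(f"{cidr}: {net.num_addresses - 2} usable host addresses")
```

A /24 yields 254 usable addresses; widening to a /16 yields 65,534, far more than any single venue would need.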
What's most likely the problem is tech companies and conferences cheaping out. I don't mean spending very little, but less than what would solve the problem. I'm sure the firms that unwire events come in with bids that are as cheap as they can make them to be the low bidder, or have the conference organizer or sponsoring company ask, "How can we knock this price down?"
With the level of Wi-Fi use we're seeing, it's not impossible to build a good network for thousands of people in a small space. It may just cost more than anyone wants to spend. The line item in the budget for Wi-Fi needs to be weighed against the expected return in good publicity.
Canada's licenseholder for air-to-ground in-flight Internet has set mid-2011 launch date: The service was supposed to be ready in late 2010, but SkySurf Canada Communications is now targeting mid-2011. Because of Canadian spectrum rules, US provider Aircell, which operates its Gogo Inflight Internet service on over 1,000 aircraft while they pass over the continental US and Alaska, couldn't bid on Canadian service. Instead, it's partnered with SkySurf.
Washington Dulles and Reagan National will drop fees for Wi-Fi access in the spring: Contractual details remain to be worked out, this report in the Washington Examiner says. Dulles and National add to the growing list of major US airports that have dropped fees, starting with Denver as the largest.
Carl Bialik, the Wall Street Journal's Numbers Guy columnist, talks to the sources behind the incendiary Wi-Fi radiation kills trees reports: Thank you, Carl, for finding the sources, and revealing how nuts some of the information is. I was troubled that a single report could ricochet around the world with no real statistically valid or peer-reviewed published information behind it. But it's even worse than that.
Niek van 't Wout, the green space chief in the Dutch city of Alphen aan den Rijn, checked a small number of the town's trees, found "abnormalities" in 70 percent, and extrapolated this, with no additional research, to all of Europe. There appear to have been no lab tests or pathology, no attempt to determine the cause, and no broader survey even within the city. Bialik dug up a published email in which van 't Wout speculated in 2007 that electromagnetic fields were responsible, before he had a single shred of evidence.
The study of trees in a controlled environment was also commissioned by the city, and was independent of the tree survey. The testing regime hasn't been released (under what conditions were plants and trees kept?), nor does there appear to have been any control group: trees and plants in the same environment with shielding to block EMF. The exposed vegetative material had six Wi-Fi access points running nearby, a level of exposure almost no tree in the field would receive. As with all EMF, signal strength decreases with the inverse square of the distance from the transmitter with a standard omnidirectional antenna; the formula is a bit different for a directional antenna, but then there's less exposure in the vicinity, too. (I wrote a critique of what was revealed of the study for BoingBoing.)
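A quick sketch of that inverse-square falloff (an idealized free-space model with an omnidirectional antenna; the distances are arbitrary):

```python
# Inverse-square falloff for an idealized omnidirectional antenna:
# received power relative to the level at a reference distance d0.
def relative_power(d, d0=1.0):
    return (d0 / d) ** 2

for d in (1, 2, 4, 8):
    print(f"{d}x reference distance: {relative_power(d):.4f} of reference power")
```

Doubling the distance cuts received power to a quarter, which is why an access point a few feet from a plant delivers a radically different dose than one across a park.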
Bialik has one paragraph I'll quibble with:
His town did fund an experiment seeking to investigate whether Wi-Fi signals might harm trees. The experiment used Wi-Fi routers not because these were suspected as the major culprits — cellphone network signals generally are stronger — but because experimenters aren’t allowed to use cellular network transmitters, and besides it is difficult to find an environment without any cellular wireless signal as a control. It also isn’t clear why trees would be suffering only recently, while cellphone networks have existed for decades.
This must have been stated by van 't Wout or another interview subject, as it's all wrong. First, Wi-Fi access points would be farther away and at vastly lower power than cellular base stations, and thus vastly less likely to be the "culprit." Second, researchers may test cellular signals in Europe: I have read dozens of studies in which cell transmitters were used in clinical settings in Sweden, Britain, Germany, and elsewhere. I'm sure there's red tape, and it may simply have been cost prohibitive.
Finally, you can find an environment without EMF: a shielded room. Since the plants were being tested indoors, two rooms could have been shielded: one for controls, and one for exposure only to signals generated within the room. Again, the expense may have been too high.
It seems quite clear that there was an agenda at work and little science involved.
Carrier-grade operations are supposed to be carrier grade: In its enthusiasm to have LTE operating in multiple markets before year's end, Verizon Wireless let a few gears slip. That's unfortunate, because now it has created the impression that the service isn't ready for prime time. Reports of performance on an unloaded network have been quite excellent.
The problem? Computerworld reports that a handoff from 3G to LTE can take up to two minutes. A spokesperson told the reporter, "Hand-offs can take up to a couple minutes, but that was expected and a fix is in the works."
If it were simply an inherent problem, that's one thing. But it's clear this can be fixed in software and is considered a bug. That makes it far less acceptable. In the olden days, products weren't shipped broadly until bugs that would frustrate your early-adopting, high-paying customers were worked out. Bragging rights were more important here.
The Minneapolis city-wide Wi-Fi network is the only successful example of its kind for that scale of network: The next largest networks are far smaller or represent just part of a city. Even better, the Star Tribune reports that US Internet's operations are profitable four years into operation with 20,000 customers. The paper reports a $1.2m annual profit.
But why is it profitable? Because the city of Minneapolis agreed to pay $12.5m over ten years for services—services the city is hardly taking advantage of yet, even though departments are billed internally for them as part of their budgets. The city also prepaid some of these funds. This meant US Internet never ran out of necessary capital, as all its competitors more or less did, but the firm also didn't make new technology choices. It started with BelAir Networks gear, and it continues to use that vendor's equipment.
The failure to use prepaid services sounds much worse than it is. Having a viable additional broadband choice in a duopoly market, as well as one that's far cheaper than 3G cell for roaming within the city, has likely saved citizens millions of dollars over four years. Wherever there's even a little broadband competition, cable and telephone companies drop prices, offer better services, or extend "introductory" offers you can renew by threatening to switch. It's hard to threaten if there's no second or third choice.
US Internet also pays into a fund to bridge the digital divide ($563K so far), and provides free Wi-Fi at 44 community centers.
As is usual with such efforts, the applications have followed the installation, and it's likely first-generation pilot projects failed to take off, caught between early deployments of technology that wasn't ready and the economic collapse, which put some companies out of business or into retrenchment.
The city is starting to gear up, and within the 10-year contract, unused fees paid in previous years can be rolled over.
I've read the bill and I still don't understand this: I don't quite understand why senators Snowe and Warner find it necessary to allot money ($15m) and force installation of Wi-Fi networks in federal buildings, starting with facilities run by the General Services Administration (GSA). The bill talks about offloading use from cell networks to Wi-Fi, but Warner's statement about the benefits is sort of insane:
"By starting with the nearly 9,000 federal buildings owned or operated by the General Services Administration, we will be able to provide appreciable improvement in wireless coverage for consumers while also reducing some of the pressure on existing wireless broadband networks."
The bill doesn't call for any free access, only the neutral host systems typical of the cellular industry, in which one firm operates a base station in an airport or other publicly accessible building and charges a cost-recovery rate to other operators.
I wonder if carriers and providers have been unable to install Wi-Fi networks in federal buildings, and this is an override to GSA policies? There's clearly a constituency here that I'm missing.
AT&T's CTO has a blog post indirectly critiquing Verizon Wireless's early LTE launch: I pretty much agree entirely with this John Donovan post. Verizon's commitment to CDMA left it without a reasonable path to future higher speeds in 3G because Qualcomm's EVDO path wasn't compelling enough, and Verizon clearly wanted the worldwide advantage of converging on GSM.
That leaves Verizon stuck at about 3 Mbps downstream with EVDO Rev. A. Verizon Wireless clearly and testably has the most robust and most thorough 2G and 3G network coverage in the US. That's still an advantage and will remain one on the voice side and for a large number of users for whom consistency is more important than speed.
But its early launch of LTE is driven by a need to have a higher speed number to push to businesses and consumers while AT&T and T-Mobile complete rolling out HSPA 7.2 and HSPA+ (21 Mbps), respectively. These evolutionary 3G HSPA flavors provide most of the advantage of first-generation LTE, including somewhat reduced latency, while preserving full backwards compatibility all the way down to GSM rates.
AT&T's CTO is pushing the message that moving from LTE speeds to EVDO Rev. A rates will be jarring to customers in terms of what's possible. I agree. The difference is so huge that they are effectively different networks—this is a similar problem Clearwire and Sprint have with 3G/4G converged service plans.
However, Donovan doesn't mention the three other advantages of LTE: capacity, coverage, and latency. Higher bandwidth doesn't just mean that everyone gets greater speed; rather, it means that there's more potential to serve simultaneous users at greater speeds. That's often just as important as peak data rates. Coverage is a factor, because the 700 MHz networks can reach further and penetrate indoors better than 850, 1700, 1900, and 2100 MHz networks.
And latency is huge: lower latency makes networks appear faster because the time for each initial connection for every transaction is reduced. LTE promises very low latency, and HSPA delivers a decent part of that. Reduced latency equates to better video streaming, crisper phone calls, and more responsive Web browsing.
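The effect of latency on perceived speed can be sketched with a rough fetch-time model (all figures here are illustrative assumptions of mine, not measured carrier numbers):

```python
# Rough fetch-time model: connection round trips plus raw transfer time.
# Figures are illustrative assumptions, not measured carrier data.
def fetch_time_ms(size_kb, bandwidth_mbps, latency_ms, round_trips=3):
    """Time to fetch a resource: handshake round trips plus transfer."""
    transfer_ms = size_kb * 8 / bandwidth_mbps  # kilobits / (Mbit/s) = ms
    return round_trips * latency_ms + transfer_ms

# A 50 KB web resource over a 3G-like link vs. a lower-latency 4G-like link:
print(fetch_time_ms(50, 3, 120))   # ~3 Mbps at 120 ms latency
print(fetch_time_ms(50, 10, 40))   # ~10 Mbps at 40 ms latency
```

For small transfers like web page resources, the round-trip term dominates: cutting latency by two-thirds shrinks the total far more than tripling bandwidth does.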
AT&T will benefit from the coverage and capacity issues, based on customer complaints, more than Verizon. But an early LTE deployment focused on speed doesn't provide the full picture of LTE's potential, and it hides the gap Verizon will have for at least three years, if not longer, between current 3G speeds and its LTE promise.
Update: Clearwire's chief commercial officer weighs in with a swipe at Verizon's LTE pricing.
The 5–12 Mbps downstream 4G service will launch 5 December 2010 in 38 US markets and 60 airports: Verizon is still engaged in ridiculous pricing. The service will cost $50 per month for 5 GB or $80 per month for 10 GB of data transfer. Given that the cost per bit should be enormously cheaper for Verizon Wireless, and that they should be pricing this competitively with wired broadband carriers in the same market, that's absurd.
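The per-gigabyte math on those two tiers works out like this (prices and caps from the announcement):

```python
# Cost per gigabyte for the two announced Verizon LTE tiers.
plans = {"5 GB": (50, 5), "10 GB": (80, 10)}
for name, (price_usd, gb) in plans.items():
    print(f"{name} plan: ${price_usd / gb:.2f}/GB")
```

At $8 to $10 per gigabyte, a single standard-definition movie rental could blow through a meaningful chunk of the monthly cap.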
Clearwire's hybrid Sprint 3G/Clear 4G pricing makes much more sense: unlimited usage on the 4G Clear network, and the same 5 GB limit on Sprint's home 3G EVDO network.
Carriers and ISPs continue to try to retain the same limits even as services get faster. Comcast has the same 250 GB monthly usage cap on its cable service whether you're at 15 Mbps or 100 Mbps.
LTE is required to serve next-generation mobile devices whose streaming media, low-latency, and heavy interactive use strain under CDMA 3G speeds today, although AT&T's and T-Mobile's moves into faster HSPA rates alleviate that in part. But LTE will also become an alternative in some markets to fixed broadband, if Verizon offers sensible pricing.
You can check on which markets are covered at Verizon Wireless's 4G coverage map. I'm hoping to get review gear to test, as Seattle is a launch market.
T-Mobile customers get substantially improved airport access, plus ferries: A new agreement between Boingo Wireless and T-Mobile gives T-Mobile's subscribers a lot more access in transit. T-Mobile adds 53 Boingo Wireless airport locations; Boingo is the largest North American Wi-Fi airport operator.
T-Mobile users can now also surf on the Washington State Ferry system at no additional cost. For the tens of thousands of daily ferry commuters (WSF handles over 50 percent of the country's daily ferry trips), T-Mobile just became a lot more attractive.
Boingo gets a little bit in exchange: its subscribers can use T-Mobile's airline club lounge and hotel locations. T-Mobile–operated airports were previously included in roaming.