Entire site and all contents except otherwise noted © Copyright 2001-2010 by Glenn Fleishman. Some images ©2006 Jupiterimages Corporation. All rights reserved. Please contact us for reprint rights. Linking is, of course, free and encouraged.
Japanese researchers develop improved version of last year's WPA with TKIP exploit: (PDF) The researchers build on the work of Erik Tews and Martin Beck, in which those two German grad students figured out how to falsify short packets when the TKIP method of encryption was employed. Their method didn't crack a TKIP key, but relied on a weakness of TKIP's backwards compatibility with the thoroughly broken WEP security. For a thorough rundown of the Beck and Tews approach, see my Ars Technica article, Battered but Not Broken, and an article on this site, Don't Panic over WPA Flaw, But Do Pay Attention (both from Nov-2008).
I've had a chance to absorb the paper, A Practical Message Falsification Attack on WPA, by Toshihiro Ohigashi and Masakatu Morii, and I'm not convinced of its efficacy as an attack vector, but it's darned clever. It's been reported by some news sources as "WPA broken in under a minute!" In fact, it requires a lot of pieces to be in the right places, and doesn't allow recovery of a WPA encryption passphrase.
The following gets reasonably technical, but I'll give you the conclusion upfront: if you have any concerns about network integrity, move to AES-CCMP, which requires WPA2 Personal for home and small office networks or WPA2 Enterprise for larger networks. Using AES-CCMP requires that all network equipment be from 2003 or later, more or less. Earlier equipment, if still in use, should either be upgraded to newer Wi-Fi adapters, switched to Ethernet only, or retired.
Now the technical bits.
In brief, Beck and Tews rely on a weakness from WEP that lets them substitute bytes in very short, well-known payloads, such as ARP (address resolution protocol) messages by testing changes in the checksum to first solve for the existing bytes, and then sending a falsified packet. Their method relies on 802.11e (Quality of Service), because that protocol establishes separate queues that can duplicate the sequence used in the initialization vector (IV) that's part of the cryptographic process. Clients (or stations) reject lower-numbered IVs than the current point in the sequence.
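Checksum testing works at all because TKIP inherited WEP's CRC-32 integrity check value (ICV), and CRC-32 is linear over XOR: the checksum of a modified message can be predicted from the original checksum and the modification alone. A minimal Python sketch of the property, using made-up payload bytes rather than real traffic:

```python
import binascii

def crc32(data: bytes) -> int:
    """CRC-32 as used by the WEP/TKIP integrity check value (ICV)."""
    return binascii.crc32(data) & 0xFFFFFFFF

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# A short, mostly known payload -- a stand-in for an ARP message body.
msg = bytes.fromhex("0001080006040001")
# Flip the low bit of the final byte: the kind of controlled change an
# attacker makes while solving for an unknown address byte.
delta = bytes(len(msg) - 1) + b"\x01"
zeros = bytes(len(msg))

# CRC-32 is affine over XOR, so the new checksum is predictable without
# ever knowing the full message contents.
assert crc32(xor(msg, delta)) == crc32(msg) ^ crc32(delta) ^ crc32(zeros)
```

Because the checksum of a bit-flipped message is computable this way, an attacker can alter a protected packet and fix up the ICV so the change isn't detected; TKIP's MIC (Michael) was bolted on precisely because the ICV alone can't be trusted.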
Ohigashi and Morii use a physical man-in-the-middle (MitM) as part of their solution. Instead of relying on QoS, the Japanese academics employ a directional antenna that lets them intercept and reuse an IV: the station only receives the falsified packet, and thus doesn't receive an out-of-sequence number. This most likely requires a directional antenna that can overpower the broadcast of the access point for a given client; it might also work with a distant omnidirectional access point if the attacker had a more powerful omni. The attacker typically acts as a signal repeater; most data is relayed with no changes, as in the classic MitM approach.
The 802.11 security protocols combined with a secure EAP flavor (such as PEAP) only defend against an MitM attack in which a malicious party is attempting to establish encrypted connections masquerading as a client to an access point and an access point to a client. With third-party certificates, that's impossible. However, a station that relays packets without being part of the encryption chain should work perfectly well.
The Ohigashi/Morii approach has other refinements, such as monitoring the network for periods of low usage and then switching from pure repeater mode into a key-recovery mode, in which the malicious party attempts to recover the encrypted checksum while blocking communication between the station and access point. That blackout eliminates the incremental IV problem, because the intermediate IVs are never sent, and thus the QoS queues aren't needed.
They reduce the time necessary for a crack by making additional assumptions about ARP packets that let them solve for the checksum about 37 percent of the time, and check whether they've recovered the key. This reduces the time for what they call a communication blackout--no AP to client transmissions--to about a minute. If they fail to recover the checksum key, they don't send a falsified packet, and thus don't start triggering the checksum key reset.
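The reason ARP packets are so useful here is simple stream-cipher arithmetic: in RC4, which TKIP retains from WEP, ciphertext is just plaintext XORed with a keystream, so correctly guessing the plaintext of a short, almost entirely predictable packet hands the attacker the keystream for that IV. A toy Python illustration; the values are invented for demonstration, not captured traffic:

```python
def recover_keystream(ciphertext: bytes, known_plaintext: bytes) -> bytes:
    """For a stream cipher, ciphertext = plaintext XOR keystream, so a
    fully guessed plaintext reveals the keystream for that IV."""
    return bytes(c ^ p for c, p in zip(ciphertext, known_plaintext))

# Hypothetical values: ARP header bytes are fixed and the addresses in a
# small network are guessable, so the "known" plaintext is realistic.
keystream = bytes(range(10))
plaintext = b"\x00\x01\x08\x00\x06\x04\x00\x01\xaa\xbb"
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))

assert recover_keystream(ciphertext, plaintext) == keystream
```

A recovered keystream can then be reused to encrypt a forged packet of the same length under that IV, which is exactly why keeping IVs from advancing matters so much to the attack.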
By reducing the time necessary for an attack to succeed on average and eliminating the requirement that QoS be enabled, the researchers have made this process less academic and far more real. But it's important to remember what the attack does and doesn't enable:
ARP forgery could allow an attacker to convince a client to use it as a gateway and perform DNS resolution through addresses that the attacker provides. Poisoning DNS would allow redirection, phishing, and some forms of interception.
However, the primary issue with this attack is that it requires close proximity and the right circumstances to intercept and relay communications. That makes it hard to generalize, and hard to apply in more than a limited fashion. We'll see how this continued hammering on TKIP continues, and whether further weaknesses enable an even simpler or faster approach.
Apple unleashes Mac OS X 10.6 Snow Leopard reviewers: While the next release of Mac OS X doesn't appear until Friday, Apple ended its embargo for reviewers and others who gained early access to the operating system update.
There are three notable changes related to Wi-Fi, none of them terribly significant.
First, the AirPort menu--the place from which you select networks--now shows signal strength for each nearby network. That's useful if you have choices or are troubleshooting coverage. Hold down the Option key before clicking the AirPort menu and Apple now reveals more information than the same option in Leopard, including channel, band, transmit rate, and the obscure MCS Index (an entry that defines encoding choices in use).
Second, the sleep mode in Snow Leopard is integrated with Bonjour discovery, the network protocol Apple uses to advertise services available on a computer. For Mac models released in 2009 (as far as I can tell), you can wake a computer remotely by connecting to a service like file sharing over Wi-Fi or Ethernet--so long as the Mac was connected to an AirPort Extreme Base Station or Time Capsule before it went to sleep. The base station acts as a proxy while the Mac is sleeping. (Macs from 2008 and before appear to only have the option to be woken over Ethernet.) You can read about all the details in an article I wrote this morning for the Mac publication TidBITS, AirPort Menu Improves in Snow Leopard.
Third and finally, you can have your current location and time set in the world via Wi-Fi. The Date & Time preference pane's Time Zone view has an option to set the time zone via the current location. This requires an active network connection, and almost certainly uses Skyhook Wireless data, as Apple already relies on that firm for iPhone Wi-Fi lookups.
Snow Leopard has a $30 price tag for those with Leopard installed; $170 for a bundle of Snow Leopard, iWork '09, and iLife '09 for Tiger users. However, the $30 updater will work with Tiger, too; it violates the user agreement, but Apple uses the honor system for enforcement since it already collected its profit margin when it sold you the computer.
The WSJ writes of the low rate of adoption, interest in femtocells: I've long been a bear about femtocells, short-range indoor base stations designed to extend cellular networks to the home or small office, allowing the use of unmodified mobile handsets. Femtocells seem to be a way for carriers to bring their business into your home, instead of you gaining more control over your calling.
T-Mobile steered a different course years ago, signing on to the unlicensed mobile access (UMA) standard, which allows a handset to negotiate seamless during-a-call handoffs between a mobile network and a Wi-Fi network. T-Mobile had to introduce new handsets that include UMA software and Wi-Fi radios; the firm now has 10 such models which are priced like models without UMA.
Femtocells require no handset updates. A customer obtains the base station, plugs it into their broadband connection (just as with UMA, the carrier doesn't pay for the call backhaul), and then unwinds up to 30 feet of GPS antenna cable. Femtocells have to have a precise location, both to use the correct licensed frequencies for that area and to assist in meeting E911 call location requirements. (In fact, femtocells may help carriers meet those obligations well enough to offset worse performance elsewhere.)
But where T-Mobile paired UMA with a cheap, unmetered calling plan--now costing just $10 per month for 1 or more lines--Sprint's femtocell costs $100 plus $5 per month, with a further $10 per month for a single unmetered line. Verizon charges $250 with no monthly fee nor calling discounts. (AT&T's femtocell is still in testing in limited markets, and may have an unmetered plan associated.)
Further, T-Mobile counts all calls that originate on a Wi-Fi network under its unmetered plan, and allows you to use any qualified hotspot: any one for which you have access or a password, or that's part of its large aggregated HotSpot roaming footprint. If you receive a call or place a call over Wi-Fi, you can walk away onto the cell network and not have minutes apply. For Sprint, minutes are unmetered only when at the femtocell, and, as noted, Verizon doesn't engage in that at all.
Because T-Mobile relies on Wi-Fi for data, the speed that your handset can access the Internet is only limited by your broadband connection and the quality of the Wi-Fi network. Verizon and Sprint are shipping 2G-only femtocells, which means that handsets with 3G but no Wi-Fi would be severely cramped. AT&T will offer 3G service with its femtocell--but 3G drains a battery far faster than Wi-Fi does on a mobile device. You'll need to keep your iPhone or other phone plugged in to use it effectively in your home as a landline replacement. (AT&T's devices should be able to switch to Wi-Fi for data while making 3G calls, however.)
The cost of femtocells, now that we're a good year into real worldwide availability, is still far too high relative to both their utility and the scale of deployment needed. Yes, prices can drop with volume, but Om Malik points out that with only 20m femtocells predicted to be sold worldwide in 2012 (and 800K worldwide this year), the amount of investment in femtocell makers far outstrips the potential for revenue. That's a recipe for consolidation and closure.
Femtocells benefit a carrier by giving customers coverage where they otherwise could not get it, and by offloading cell tower usage to a device that the customer has paid for or leases. Some reports have suggested that carriers should give away femtocells, because the reduction in infrastructure buildout from heavy in-home use would be far cheaper than the cost of the femtocells.
Honestly, given the costs, limitations, and complexity, I'd rather simply use Skype on my iPhone over my home Wi-Fi network rather than a femtocell.
Southwest Airlines will install Row 44's in-flight Internet service on all planes: The airline has been testing the Ku-band satellite-backed Internet service for several months. It will continue to test prices this year, and start deploying next year on all its planes. Southwest has a fairly uniform fleet, making it possible to roll out with just a couple of certifications.
Satellite access has been perceived as more expensive than ground-based service, both for gear and operating expenses, but Row 44 has consistently said that it has used a combination of off-the-shelf items, technology that's improved since Boeing's Connexion service days, and techniques to eke out the most efficient use of spectrum to make the service affordable.
Alaska Airlines has also tested Row 44's equipment, and earlier this year started an advertising campaign that included a statement that implied Wi-Fi access would be widely available, but the airline hasn't yet stated its plans publicly.
The competing provider, Aircell, just celebrated the one-year anniversary of the first equipped planes in the air yesterday; that was a pilot project with American Airlines. The commercial launch on Virgin America was last December, and Aircell has passed the 500-plane mark between Virgin, AirTran, Delta, and American. As many as 1,000 planes will have Aircell's Gogo service installed this year, and as many as another 1,000 next year.
With Southwest's commitment, it's likely that between 50 and 60 percent of all mainline (non-regional) aircraft routes will have Internet coverage by the end of 2010.
Slate notices tired theme of WSJ's Wi-Fi cafe squatters article: Jack Shafer, media critic, compares the WSJ's summer 2009 story on how cafe owners are tired of people nursing a cup of coffee for 8 hours while bogarting Wi-Fi to my 2005 New York Times piece. I wrote a non-trend trend piece back in 2005, looking at why some cafe owners were turning off or restricting Wi-Fi, but also noting contrary trends, which have proven true. (I wrote a bit more about this on 5-August-2009 when the Journal article first appeared.)
Meanwhile, QSR Magazine, the trade journal for fast-food or "quick-service" restaurants, chimed in with a short report that Wi-Fi brings in bodies to buy stuff. Right on.
San Francisco bus stops will generate juice, Wi-Fi signals: Popular Mechanics covers a prototype covered bus stop that (when all are deployed) would generate 43,000 kWh per year--the equivalent of a few thousand dollars' worth of non-renewable power, though renewable power often commands a much higher rate. I'm not clear whether the city can get that higher rate for feeding the meter backwards, or if it's only available to private citizens. The shelters also use less power for lights, and will include Wi-Fi access points. The plan is to roll out 360 shelters by 2013 at $30K a pop. Clear Channel Outdoor will pay for deployment and keep ad revenue.
College uses WiMax for network coverage: Northern Michigan University will hand out laptops--included in tuition since 2000--to students with WiMax cards for network coverage this fall. The intent is to provide secure and high-speed service over the hilly terrain of the school, and to students and staff off campus. This is the first move of its kind I've heard of, and it'll be fascinating to check in with them in a few months.
Technology Review tutors us in white space spectrum: There's a lot of interest in using the guard bands, or empty space, between adjacent channels so long as it doesn't interfere with legitimate licensed uses. This could actually be Wi-Fi on steroids, allowing higher power levels and wider channels. There are a number of hurdles yet to overcome to make "White Fi" practical.
Meraki releases survey of device use on its networks: Meraki has observed over 200,000 unique devices on its collection of customer and self-run networks in 2009, and says that Apple equipment use (laptops, iPod touch, and iPhone) grew year-over-year by 221 percent, while the total number of devices it observed grew just 41 percent (from 150,000 unique devices in 2008). Apple equipment represents 32 percent of all devices seen by the networks, up from 14 percent in 2008. The company uses a software-as-a-service (SaaS) centralized backend for its customers' administration, allowing it to track these kinds of statistics; it looked at usage over a 24-hour period in June 2008 and June 2009 across 10,000 access points.
The Wi-Fi Alliance explains four optional 802.11n elements for future certification: The Wi-Fi trade group has spent the last 10 years holding together the notion that every device with Wi-Fi on the label should work with every other at the greatest point of agreement. This has continued in spite of new elements and enhancements to the 802.11 family of standards, including 802.11n.
The recent news that the IEEE had approved 802.11n within the 802.11 Working Group, and ratification was likely a few months away, led the Wi-Fi Alliance to explain its roadmap for adding more steps to the certification process. When the Wi-Fi group certifies a device, it runs it through tests that are supposed to ensure that the equipment responds in a standard manner. (The group also does plugfests in which equipment makers bring lots of gear together outside of lab conditions.)
When the word hit, the alliance identified four optional areas of certification that it would add. I knew about some of these areas, but I spoke with the group today to clarify what this meant for both equipment makers and end users. The Wi-Fi Alliance said it would offer tests for coexistence in 2.4 GHz, space-time block coding, transmission of aggregated MPDUs, and three spatial streams. Scratching your head? After 8 years of covering Wi-Fi, I admit I was in that position over a couple of those.
Let's go through them with the help of Greg Ennis, the alliance's Technical Director, who--along with Kelly Davis-Felner, the group's marketing director--was kind enough to lead me through it.
Coexistence. I first wrote about 802.11n coexistence mechanisms in depth back in Feb. 2007, when I interviewed Atheros's CTO Bill McFarland when the Draft 2.0 approval was imminent (see "How Draft N Makes Nice with Neighbors; 5 GHz Averts Tragedy of the Commons," 16-Feb-2007).
Coexistence has to do with the use of double-wide channels--40 MHz instead of the roughly 20 MHz regular channels--in both 2.4 and 5 GHz bands. The 5 GHz band isn't a problem, because 20 MHz channels don't overlap; Wi-Fi selectable channels in 5 GHz are staggered by intervals of 4 band channels (5 MHz each), such as 36, 40, 44, and 48. In 2.4 GHz, channels are staggered only by a single 5 MHz band channel, meaning that the use of 40 MHz will nearly always conflict with other existing networks.
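The arithmetic is easy to check. The sketch below uses the standard channel-to-frequency formulas (2407 + 5 × channel MHz in 2.4 GHz, 5000 + 5 × channel MHz in 5 GHz) and simplifies each channel to its nominal width:

```python
def span_24ghz(channel: int, width_mhz: float = 20) -> tuple[float, float]:
    """Frequency span (MHz) occupied by a 2.4 GHz Wi-Fi channel.
    Centers sit at 2407 + 5 * channel MHz, only 5 MHz apart."""
    center = 2407 + 5 * channel
    return (center - width_mhz / 2, center + width_mhz / 2)

def span_5ghz(channel: int, width_mhz: float = 20) -> tuple[float, float]:
    """Same for 5 GHz, where selectable channels are 20 MHz apart."""
    center = 5000 + 5 * channel
    return (center - width_mhz / 2, center + width_mhz / 2)

def overlaps(a: tuple[float, float], b: tuple[float, float]) -> bool:
    return a[0] < b[1] and b[0] < a[1]

# A 40 MHz transmission centered on channel 6 collides with networks on
# both of the other "non-overlapping" 2.4 GHz channels, 1 and 11:
wide6 = span_24ghz(6, 40)
assert overlaps(wide6, span_24ghz(1)) and overlaps(wide6, span_24ghz(11))

# In 5 GHz, adjacent selectable channels (e.g. 36 and 40) don't overlap:
assert not overlaps(span_5ghz(36), span_5ghz(40))
```

This is why a wide channel in 2.4 GHz nearly always steps on a neighbor, while 5 GHz has room to pair channels cleanly.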
Ennis said that 2.4 GHz coexistence terms weren't fully settled until recently, even though manufacturers have built in some methods of using 40 MHz in 2.4 GHz. The Wi-Fi Alliance discouraged the practice; Apple, for one, doesn't allow its gear to use wide channels in 2.4 GHz.
In the new testing regime, "not everybody is required to support 40 MHz operation--but if they do support 40 MHz operation, they must go through the testing that we've defined," Ennis said.
The mechanisms that require an access point to back off to 20 MHz channels are so broad and severe that it's unlikely you could use a wide channel in any environment in which other Wi-Fi networks operate. Still, Ennis says, it may be of use in enterprise situations, or with future gear that's all 802.11n with these modes enabled, which could be more respectful of each other automatically.
Space-time block coding. This term makes my head hurt every time I read it. I go off to the Web and read up on the principle, and it's above my pay grade. All wireless communication has to allot slots in some fashion--through contention or scheduling--for bits to go through. That's the basis of all wireless standards.
What STBC does is extend that beyond time into the domain of space. An access point can, through some complicated encoding, send different information simultaneously over multiple spatial streams, so that receivers (stations, in Wi-Fi parlance) with single-spatial-stream radios can each decode their own data separately but at the same time.
The utility of this complicated feature is that we're likely to start seeing lots of single-stream N devices, as I've written about in the past year. (See, for instance, "Does the iPhone Need 802.11n?", 26-March-2009.)
Chipmakers are most likely now delivering quantities of these lower-powered, cheaper 802.11n chips that can't offer two streams--and thus double the bandwidth--as laptop and desktop 802.11n modules can. With STBC, an access point can use the full available 802.11n bandwidth by splitting it spatially between two devices, instead of halving its bandwidth by speaking solely to a single-stream device.
Ennis noted that STBC also improves the signal-to-noise ratio, which makes faster rates and farther distances possible. "I think this is going to be a popular optional feature," he said.
Aggregation of MPDUs (MAC Protocol Data Units). While it sounds obscure, this is yet another way by which 802.11n can eke out improved speeds. For long sequences of data, MPDU aggregation lets a Wi-Fi system create one long frame, reducing the overhead required to send each packet separately. (Every packet has origin and destination information, a preamble, and other data that adds overhead.)
For video, for instance, Ennis says that this kind of aggregation can improve throughput, although probably not by double-digit percentages. "It's not as dramatic an improvement as say using more spatial streams, or using 40 MHz channels," he said.
Currently, the Wi-Fi Alliance tests aggregation only when a manufacturer's access point sends these aggregated frames; it checks that a station can properly receive such frames, a capability already expected under earlier 802.11n drafts. The new optional certification tests aggregated frames sent by both stations and access points. (If the feature is included, it must be tested.)
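A back-of-the-envelope model shows why aggregation helps, but not dramatically. The overhead figures below are illustrative guesses, not values from the 802.11n spec:

```python
def airtime_us(n_frames: int, aggregated: bool,
               payload_bytes: int = 1500, phy_rate_mbps: float = 300) -> float:
    """Rough airtime model (assumed, round-number overheads): each separate
    transmission pays fixed preamble/ACK/backoff overhead, while an A-MPDU
    pays it once, plus a 4-byte delimiter per aggregated subframe."""
    per_ppdu_overhead_us = 100.0   # preamble + SIFS + ACK + backoff (assumed)
    mac_header_bytes = 30

    def payload_time_us(nbytes: int) -> float:
        return nbytes * 8 / phy_rate_mbps  # bits per microsecond == Mbps

    if aggregated:
        body = n_frames * (mac_header_bytes + payload_bytes + 4)
        return per_ppdu_overhead_us + payload_time_us(body)
    return n_frames * (per_ppdu_overhead_us +
                       payload_time_us(mac_header_bytes + payload_bytes))

# Aggregating 32 frames into one transmission skips 31 rounds of
# fixed overhead; with these numbers, airtime drops by roughly two-thirds.
assert airtime_us(32, aggregated=True) < airtime_us(32, aggregated=False)
```

As Ennis notes, the gain is real but modest next to extra spatial streams or wide channels, because the per-frame overhead is a fixed cost while the payload time scales with the data rate either way.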
Three spatial streams. This last one is quite simple. The Wi-Fi Alliance can now test for devices that send three streams of data across space, up from two. Ultimately, we should see devices that can handle four, with a maximum raw data rate of 600 Mbps using wide channels in 5 GHz.
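That 600 Mbps figure falls straight out of the OFDM arithmetic: data subcarriers times bits per subcarrier times coding rate gives bits per symbol per stream; divide by the symbol time and multiply by the number of streams. A quick check in Python:

```python
def phy_rate_mbps(spatial_streams: int, data_subcarriers: int,
                  bits_per_subcarrier: int, coding_rate: float,
                  symbol_us: float) -> float:
    """802.11n PHY data rate: bits carried per OFDM symbol per stream,
    divided by symbol duration (us), times the number of spatial streams."""
    bits_per_symbol = data_subcarriers * bits_per_subcarrier * coding_rate
    return spatial_streams * bits_per_symbol / symbol_us

# Top 802.11n rate: 4 streams, a 40 MHz channel (108 data subcarriers),
# 64-QAM (6 bits per subcarrier), rate-5/6 coding, and 3.6 us symbols
# with the short guard interval: 4 x 150 = 600 Mbps.
assert abs(phy_rate_mbps(4, 108, 6, 5/6, 3.6) - 600.0) < 1e-6

# Sanity check: 1 stream, 20 MHz (52 subcarriers), long guard interval
# gives the familiar single-stream ceiling of 65 Mbps.
assert abs(phy_rate_mbps(1, 52, 6, 5/6, 4.0) - 65.0) < 1e-6
```

The same formula explains why three streams top out at 450 Mbps: each additional stream adds a flat 150 Mbps under those best-case channel conditions.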
Those are the technical bits. I asked Kelly Davis-Felner, marketing director, how all the above plus other specifications already available and other elements coming down the pipe would be presented to buyers. The a/b/g/draft n labeling can only go so far. She said that's her primary focus right now, and there should be more news on that front soon.
This story ties unemployed folks to longer and more frequent squatting in cafes: The Wall Street Journal reporter writes,
Amid the economic downturn, there are fewer places in New York to plug in computers. As idle workers fill coffee-shop tables -- nursing a single cup, if that, and surfing the Web for hours -- and as shop owners struggle to stay in business, a decade-old love affair between coffee shops and laptop-wielding customers is fading.
Oddly, I believe I wrote this same story with the same concerns at the top of the market in 2005, when cafe owners had, well, already seen the love affair dim. Taking a hint from a Seattle cafe that turned off Wi-Fi on the weekends, Victrola in Capitol Hill, I wrote in the New York Times four years ago:
...there was also a disadvantage [to offering free Wi-Fi], staff members said: the cafe filled with laptop users each weekend, often one to a table meant for four. Some would sit for six to eight hours purchasing a single drink, or nothing at all.
(I also wrote about Victrola in more detail on this blog.)
This conflict between squatter and cafe owner has been true since Wi-Fi started to become heavily used as it became a standard feature in laptops or available through a cheap add-on card back in 2002 to 2003. Cafes that had attached an AirPort router to a DSL connection suddenly found themselves a bit at sea.
I have heard repeatedly (as the WSJ article notes) that there are folks shameless or entitled enough to bring in their own food or coffee, or purchase nothing, and then complain when asked to make a purchase or leave.
There's nothing new here, but it's interesting to see an old trend get hooked to the latest problem that brings people into "third places," away from home and work--especially given that they may have no work.