Entire site and all contents except otherwise noted © Copyright 2001-2010 by Glenn Fleishman. Some images ©2006 Jupiterimages Corporation. All rights reserved. Please contact us for reprint rights. Linking is, of course, free and encouraged.
So you might have heard about this thing Apple released on Saturday: I've had one for a day, and while it's marvelous--certainly the best computing device ever produced of its size or nature--there's nothing under the hood to do with networking that's worth reporting on. The iPad handles 802.11a/b/g/n with 2.4 and 5 GHz support for the appropriate standards.
The flavor that adds 3G and a GPS receiver is due "in late April," according to Apple. With the no-contract deal Apple snagged for 3G use with AT&T, I'm curious to see what non-US carriers agree to as 3G iPads are launched in other countries.
The Wi-Fi Alliance noted that 10 mobile phones have certified 802.11n built in: I've been waiting a long, long time for 802.11n to appear in mobile phones, and the time has finally come. What the Wi-Fi Alliance didn't mention is that six of the 10 phones are made by Samsung and the other four by LG (you can search by protocol and device type on the alliance's site). That's no knock against the milestone; it just means this isn't a sudden industry-wide movement (10 major phone makers each with an 802.11n phone) but part of an ongoing trend to make mobile devices faster and more efficient on Wi-Fi networks.
The Wi-Fi Alliance also noted that over 500 handset models have some form of Wi-Fi certification, and that 141m Wi-Fi handsets shipped in 2009 (out of 580m Wi-Fi devices shipped that year). The alliance quotes ABI Research's prediction that 90 percent of smartphones will include Wi-Fi by 2014, with a total of 500m Wi-Fi handsets shipping that year.
That's too conservative, is my take. ABI Research knows of what it speaks, but I recall several years ago when predictions were that 75 to 90 percent of laptops would have Wi-Fi built in by some year (2007? 2008?). In fact, the number was well over 95 percent; only a few bizarre outlying devices lacked Wi-Fi. The 90-percent figure for smartphones will likely be hit sooner in the U.S., Europe, and parts of Asia; if inclusion lags, it will be because China won't allow it in smartphones, not because manufacturers and carriers aren't keen to have it built in.
Apple's 3G iPad models will come with two unique aspects: unlocked hardware and no-contract service: It's not surprising that Apple will have Wi-Fi-only and Wi-Fi-plus-3G variants of its new iPad mobile device. What's surprising is that Apple finally got its demands met about how consumers will control the relationship with cellular carriers.
The iPad will come with a micro SIM, a new tiny form factor for SIM in mobile devices that's not yet in real use, as far as I can tell. (I had never heard of it before today, even though it's a settled 3GPP format.) Steve Jobs said it will be simple to swap out SIMs from other carriers, so that the US version of the 3G iPad will "just work" in most cases outside the US. It won't be until June or July that Apple has carrier relationships for direct sales and data plans other than in America.
The unlocked iPad will be coupled with two data plan options from AT&T, neither of which requires a contract or (as far as I know so far) any cancellation penalty. AT&T has some services now that you can turn on or off on demand, such as navigation.
The 250 MB/mo. plan is $15 per month; the unlimited plan is $30 per month. While you might scoff at 250 MB, the iPad will have the same limitations as the iPhone in terms of downloading and storing stuff over the Internet, so outside of purchasing movies, the biggest 3G drain will be streaming video. Because the iPhone OS doesn't support Flash, streaming video must all be in embedded H.264 format or accessed via the YouTube app or other applications.
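As a sanity check on that 250 MB cap, here's a back-of-envelope sketch; the per-minute consumption figures are my own illustrative assumptions, not Apple's or AT&T's numbers.

```python
# Rough look at how far a 250 MB/month 3G allowance stretches.
# The per-minute figures below are illustrative assumptions.

MB = 1_000_000  # decimal megabytes, the convention carriers bill in

activities = {
    "email and light browsing": 0.5 * MB,  # bytes per minute, assumed
    "web browsing with images": 1.5 * MB,  # assumed
    "streaming H.264 video":    4.0 * MB,  # assumed
}

cap = 250 * MB
for name, per_minute in activities.items():
    print(f"{name}: about {cap / per_minute:.0f} minutes per month")
```

Even at the assumed video rate, that's roughly an hour of streaming a month--plenty for "your mother's plan" users, tight for anyone else.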
I'm calling the 250 MB/mo plan "your mother's plan," because it's most likely to appeal to people who won't be heavy 3G users, and will mostly use the device over Wi-Fi at home or at hotspots. However, they will want the flexibility of having 3G available whenever they carry the device with them.
The iPad still is slated to have the disappointing pairing of UMTS for upload (384 Kbps) with HSDPA for download (ostensibly HSPA 7.2 as with the iPhone 3GS); this detail is noted on the Tech Specs page for the iPad. The iPad will likely be a heavier-uploading device, especially given that there's a camera connection kit (USB or SD card reader) that will let you suck photos directly into the iPad. These will sync with iPhoto when you return to a Mac (or through other means specified in iTunes on a Mac or under Windows), but uploading photos during a trip will certainly be desirable, and limited over 3G networks to the paltry 384 Kbps rate.
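To put that 384 Kbps uplink in perspective, a quick calculation; the 3 MB photo size is an assumption for illustration.

```python
# Time to push a photo upstream at the iPad 3G's 384 Kbps UMTS
# uplink. The 3 MB file size is an assumed typical camera JPEG.

def upload_seconds(size_bytes: int, rate_kbps: float) -> float:
    """Seconds to move size_bytes at rate_kbps (kilobits per second)."""
    return (size_bytes * 8) / (rate_kbps * 1000)

photo = 3_000_000  # a 3 MB camera JPEG, assumed

print(f"Over 384 Kbps UMTS: {upload_seconds(photo, 384):.0f} s per photo")
print(f"Over a 1 Mbps home uplink (assumed): {upload_seconds(photo, 1000):.0f} s")
```

A vacation's worth of photos at a minute apiece adds up fast.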
I should note, of course, that the iPad will have 802.11n support, but it's unknown to me yet whether this will be a single-stream radio, which would use less juice and thus be more sensible in a device intended to have a long battery life, or a two-stream 802.11n adapter, which will drain it faster. Apple uses USB for syncing large amounts of content, and doesn't provide over-the-air sync for anything directly. (You can use its MobileMe service to sync calendars and contacts.)
That means that the gating factor on most networks will be the Internet connection, not the wireless LAN. Having a 50 Mbps or so top rate with 802.11n single stream won't really be a clog on the iPad's abilities.
Gizmodo spurs sudden buzz about 802.11n in the next iPhone: Gizmodo spots a job listing from Apple for an iPhone engineer who needs 802.11a/b/g/n implementation knowledge, and leaps to the notion that the next model of the iPhone may include 802.11n.
I haven't written about this, despite what seem like thousands of posts at various blogs, because my reaction was twofold.
First: Well, duh!
Second: You don't hire a handheld engineer today with 802.11 experience that doesn't include 802.11n. It's contemporary technology.
I didn't get this buzz when I wrote in March 2009 about why and when the iPhone might get 802.11n. I thought Apple might put single-stream N into the iPhone 3GS, which it did not. It will clearly arrive in the next iPhone model as the chips are ready to go.
Single-stream N doesn't magically make the iPhone's data transfers much faster, though it will speed them a bit. Its real benefits are improved range, eliminated coverage holes, and better behavior alongside neighboring 802.11n devices.
Gizmodo noted in September that the latest iPod touch revision included Broadcom's 802.11n chip that also had an FM receiver built in.
Apple's October revision to its AirPort Extreme Base Station and Time Capsule lets these units run at up to 450 Mbps, 50 percent faster: I knew that Apple had put a 3x3 antenna array into these devices, which could be argued was intended to improve speed-over-range, a common reason to add antennas. (Read the background in my 20 October 2009 article, "Apple Slipstreams 3x3 into Wi-Fi Base Stations.")
But I had heard that Apple had built three streams in, making these devices capable of a raw 450 Mbps operation, or 50 percent faster than nearly everything else on the market. Each stream in 802.11n can carry more data than 802.11g--more than twice as much, in fact, when wide (40 MHz) channels are used. Apple makes wide channels available only in 5 GHz, which isn't unusual; 2.4 GHz is quite crowded and full of competing uses. Thus 450 Mbps is the raw rate in 5 GHz, 225 Mbps in 2.4 GHz.
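Those figures fall straight out of the 802.11n rate arithmetic. Here's a sketch of the per-stream calculation, using the standard top-rate parameters (64-QAM at rate-5/6 coding with the 400 ns short guard interval); the raw rate then scales linearly with stream count.

```python
# Per-stream 802.11n raw rate:
#   data subcarriers x bits per subcarrier x coding rate / symbol time
# Constants below are the standard 802.11n highest-MCS PHY values.

def raw_rate_mbps(streams: int, wide_channel: bool) -> float:
    subcarriers = 108 if wide_channel else 52  # data subcarriers per OFDM symbol
    bits = 6            # 64-QAM carries 6 bits per subcarrier
    coding = 5 / 6      # highest 802.11n convolutional coding rate
    symbol_us = 3.6     # symbol time with the 400 ns short guard interval
    return streams * subcarriers * bits * coding / symbol_us  # Mbps

print(f"3 streams, 40 MHz: {raw_rate_mbps(3, True):.0f} Mbps raw")
print(f"2 streams, 40 MHz: {raw_rate_mbps(2, True):.0f} Mbps raw")
```

Each 40 MHz stream contributes 150 Mbps raw, which is where the 300 Mbps (two-stream) and 450 Mbps (three-stream) figures come from.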
How do I know this when Apple won't reply to calls and emails on the topic? I was tipped to it by several wireless industry folks who wanted to remain anonymous--nobody likes to offend Apple--but didn't have a definitive label until today.
I regularly check the Wi-Fi Alliance's site for new certifications from Apple, because more information often appears there than the details Apple provides. Sure enough, based on a query from a colleague tonight, I did a search and found that on 3 December 2009, Apple's two Wi-Fi gateways had been labeled with three-stream support for transmit and receive in 2.4 GHz and 5 GHz.
[Update: The inestimable Tim Higgins of SmallNetBuilder wrote in to note that he had discovered, disassembled, and documented all this a month ago! I'm sorry to have missed it. Read his article in which he also found that Apple had connected a couple of antennas incorrectly. He confirms that it's a Marvell radio; Marvell started shipping three-stream chips nearly two years ago.]
This aggregate of 675 Mbps of raw data (something closer to 400 Mbps of TCP/IP and over 300 Mbps of net throughput) between the two simultaneously available bands is still theoretical: Apple's and most manufacturers' Wi-Fi adapters still sport only 2x2 antenna arrays with two-stream configurations. (Apple updated its AirPort Extreme certification on 30 November to reach full 802.11n compatibility, too, but only for two streams.)
To answer a question that's already been asked: gear from before October with 2x2 arrays cannot be updated to 3x3; it's a hardware change, as is the jump from two streams to three streams.
In the short run, Apple base station buyers will get the benefit of greater range and greater speed over short ranges than with older gear. In the long run, though, there's a big speed boost to come.
Atheros announced its 2010 family of three-stream, high-data-rate and rate-over-range chips: The AR9300 XSPAN lineup has a three-stream, 3x3 format for up to 450 Mbps raw (300 Mbps TCP/IP) 802.11n traffic. But speed is critical only at close distances: the chips have been designed to keep data rates high as devices move further and further from an access point.
Pen Li, senior product marketing manager at Atheros, explained that the company's goal with what it's calling SST3 technology is to "maintain signal reliability across the entire link." To that end, it's employing four features.
At short ranges, maximum likelihood demodulation (MLD) employs a massive amount of calculation to figure out the best of a matrix of potential encoding systems to use. Li said this could effectively increase antenna gain by 6 dB over the current technique. "Up to this point, the industry has been using this sub-optimal scheme called zero forcing." That was because the necessary CPU cycles weren't available in earlier generations. Atheros says this extends higher rates (up to 200 Mbps of TCP/IP throughput) 100 percent further than current tech.
At medium distances, where maximum speeds can't be maintained, higher rates can still be ensured with transmit beamforming, a well-known technique of varying signal strength to steer a beam to a receiver based on its understood location. However, Li says Atheros takes this a step further by beamforming on each subcarrier of an OFDM signal. (OFDM breaks a channel into many subchannels, each of which sends data much more slowly than a monolithic channel would. This allows better signal reconstruction, and allows interference on individual subchannels without degrading the others. It's fundamental to 802.11g, 802.11n, and, in a slightly modified form, WiMax.)
This transmit beamforming boost keeps rates higher--at around the 100 Mbps TCP/IP data rate--50 percent further.
For the longest distances, Atheros will use maximal ratio combining (MRC), which uses some magic to pull signals from different paths, relying on a certain amount of redundancy, to push range by 20 percent further than current systems. MRC in a more limited form was used in Atheros's SST technology. With a 3x3 antenna matrix, it can be used to greater advantage.
Across all three methods, Atheros will use low density parity check (LDPC), a binary forward error correction with very low overhead to reduce error rates. Forward error correction encodes additional data to allow a receiver to fix errant bits without asking for a packet retransmission.
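To illustrate the forward-error-correction principle, here's a toy Hamming(7,4) code in pure Python. It is emphatically not LDPC--which uses much larger, sparser parity-check matrices to get its low overhead--but it shows the core idea: parity bits let a receiver repair a flipped bit without asking for a retransmission.

```python
# Forward error correction in miniature: Hamming(7,4) adds three
# parity bits to four data bits, letting the receiver locate and
# fix any single flipped bit. LDPC rests on the same parity-check
# principle, just at a vastly larger and more efficient scale.

def hamming_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1,p2,d1,p3,d2,d3,d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4   # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4   # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the bad bit
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = hamming_encode([1, 0, 1, 1])
word[4] ^= 1                         # simulate a bit flipped in transit
assert hamming_decode(word) == [1, 0, 1, 1]
```

The cost here is steep (three parity bits per four data bits); LDPC's trick is getting the same self-healing property with far less overhead.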
Atheros is pushing this chip line-up as its flagship brand, with suggestions for applications for home and mobile computing (better range), media (set-top boxes, gaming, multiple HD streams), and business (better performance in dense environments or less expensive deployments with fewer APs).
The chips are slated to be sampled in the first quarter of 2010. Atheros didn't offer guidance about when its OEM partners would have products available based on the designs, but it's likely by the end of 2010 at least consumer devices would appear.
While three-stream devices are already on the market, there's only one piece of client hardware for laptops, meaning that only range and reliability can be improved with a three-stream device, not throughput.
Apple offered a quiet update note to its two main base station models today with a big boost in speed and coverage: The company put in a note on the data page (see "Even faster performance") and mentioned in passing to media who were briefed that its AirPort Extreme and Time Capsule base stations would see a boost of up to 50 percent in data throughput and an increase in range of up to 25 percent over the immediately preceding models.
How? 3x3. Engadget found the FCC documents that support that statement before the announcement today, although the writer didn't explain what this means.
In the MIMO (multiple in, multiple out) antenna system that's used in 802.11n, designers have lots of choices in how to build in range and resiliency, and those choices have increased as silicon and antennas have become cheaper.
Most consumer 802.11n access points use a 2x2 MIMO array, which is two receiving and two transmitting antennas. Each antenna pair is typically handled by a separate radio chain. Each radio chain can transmit unique data for higher data rates, or the same data as other radio chains to increase redundancy, and thus provide better reception at lower rates.
These radio chains use spatial multiplexing, which allows a kind of "body English" in which varying the power fed through each antenna steers a beam so that it travels a unique path through space, using reflections off objects as one of the characteristics that form the beam. Multiple receiving antennas decode these individual streams and reassemble the data into what was sent in the first place.
In 802.11n, each spatial stream in the highest-rate mode can act like a separate full-speed connection. Since roughly 75 Mbps is the raw rate for 20 MHz channels and 150 Mbps for "wide" 40 MHz channels, a two-stream device maxes out at 300 Mbps of raw throughput.
Nearly all 802.11n base stations sold to date use 2x2 arrays coupled with two spatial streams; some also offer 2x3 arrays for redundancy with just two streams. However, chipmakers have been planning for some time on getting 3x3 arrays with three spatial streams into the market with a raw 450 Mbps rate. Apple may be the first consumer access point maker to bite, although there are definitely other 450 Mbps APs on the market. (See next paragraph for update.)
[Update! An informed commenter--see below--notes that there's only a single AP that does three streams. So Apple isn't slipping in higher bandwidth here, just better signal diversity and performance.]
The additional transmit and receive antennas improve how far signals can travel to a client, and how sensitively an access point can pick up distant transmissions. This accounts for Apple's statement on improved range. It also provides improved bandwidth further from the base station; the data rate doesn't drop off nearly as fast as with 2x2. The "up to 50 percent" figure relates to a range of distances, not close up to the base station.
The Wi-Fi Alliance just approved a testing regime for devices with three spatial streams, and all the major Wi-Fi chipmakers were involved in that testing. Our informed commenter says it'll be until late 2010 before we see a large number of 3-stream devices; other opinions?
Wi-Fi Direct is both parallel to and complementary to Bluetooth. Discuss: Today's announcement of Wi-Fi Direct, a peer-to-peer Wi-Fi transfer method, might seem to be firing across Bluetooth's bow. But it isn't quite. Intel's My WiFi is a much more direct threat, and even then may not materialize in quite the way that's being predicted. (Read my coverage, "Wi-Fi Alliance Peers into the Future with Ad Hoc Replacement.")
To review, Bluetooth is a PAN (personal area networking) technology in which devices under the control of the same person or computer communicate over short ranges and relatively low speeds. Bluetooth can create peer-to-peer connections or piconet networks, which comprise a host and up to seven clients. In a very standard configuration, a cell phone might use Bluetooth to communicate with a laptop, sharing its 3G mobile broadband connection, while at the same time a Bluetooth earpiece is paired with the phone to handle audio.
Bluetooth requires a pairing process, in which devices authenticate to each other and agree through a handshake (with optional encryption) to talk to one another. The SIG, device makers, and desktop and mobile OS developers have done a great job of simplifying this process down to typically entering a PIN--one of several options with the current security system, Secure Simple Pairing--instead of having 20 to 25 steps as it used to be.
Bluetooth's current release (2.1+EDR [Enhanced Data Rate]) encompasses a wireless spec for 3 Mbps data transfer (raw) using the 2.4 GHz band. The spec also includes application-layer elements, which are called profiles, and which define a large array of end-to-end tasks, like printing, file transfer, or acting as a modem. This allows any manufacturer to make a Bluetooth keyboard that speaks the HID (human interface device) profile, is tested and certified as such, and can talk to any other Bluetooth device that supports the HID profile.
The Bluetooth SIG, which maintains and develops the spec, isn't tied to its physical medium. It's tried to partner with other specs in process to extend itself, notably tying its cart at one point to both major ultrawideband (UWB) encodings, and then picking WiMedia, which was the "winner" in UWB. WiMedia disbanded, but handed off the Bluetooth component to the SIG; there may still be life in it. (Originally, Intel et al. wanted to stick one UWB radio in computers and devices, but have many different protocols run over that radio, such as Bluetooth, TCP/IP, Wireless USB, and video. UWB is currently shipping only as an instantiation of Wireless USB.)
While UWB fiddled and burned, however, the SIG worked on Bluetooth 3.0+HS (High Speed), which incorporates a high-speed transfer mode that allows a Bluetooth device to coordinate with a peer switching to use 802.11 for a bulk transfer, useful for large files or high-speed video streaming. The session is still within the structure of a Bluetooth PAN, and the use of 802.11 is entirely under the control of the Bluetooth session. The devices don't suddenly become ad hoc nodes or soft access points. Note the use of 802.11: this is a particular use of that protocol outside of any current Wi-Fi spec.
Wi-Fi Direct is an outgrowth of the interest by Intel and others in reducing the number of radio technologies and the level of complexity in devices, which can correspondingly reduce battery usage, while also developing a spec that's to their liking. Intel has a board seat on the Wi-Fi Alliance and the Bluetooth SIG, but still enjoys charting its own course.
Wi-Fi Direct is a peer-to-peer technology, at least the way it's being described initially. Wi-Fi devices that have services to offer (like printing, file sharing, etc.) can advertise those in a way that other equipped devices can access directly. This new method offers the speed and security of an infrastructure Wi-Fi network with an access point at the center without the overhead of joining such a network or making such networks public to allow access to specific resources. That is, someone can print to your printer without you giving them a key to your network. Wi-Fi Direct is built on top of 802.11n, so it can work in both 2.4 and 5 GHz, too.
The simplicity of Wi-Fi Direct is supposed to aid in devices without keyboards or easy data entry methods, much as Wi-Fi Protected Setup (WPS) was supposed to offer a one-click secure connection. With a peer-to-peer approach, a camcorder could hook up with a laptop to transfer data directly without you needing to enter a WPA2 Personal passphrase or even connect at all to an existing Wi-Fi network.
Beyond speed and security, Wi-Fi Direct will allow an adapter to be scanning and accessing peers while also maintaining a full infrastructure connection to a network. It's this feature that allows devices to ostensibly cut the Bluetooth "cord," although I'm still dubious about that as a general element, as I'll explain.
The My WiFi technology that Intel developed (apparently at least in part with Ozmo Devices) emphasizes more of the PAN aspect, talking about having eight devices associated with a laptop, for instance.
So, to return to the question at the outset: is Wi-Fi Direct a competitor to Bluetooth?
Bluetooth and Wi-Fi Direct definitely compete head to head on trying to make the simplest network connection between two devices for a variety of straightforward purposes.
However, Wi-Fi Direct won't be backward compatible with the hundreds of millions of devices on the market that already have Bluetooth 1.x or 2.x. Bluetooth's later flavors (2.x and 3.x) are backward compatible with those older devices.
And while Wi-Fi with a PAN mode could reduce circuit counts, most Wi-Fi chips that are being sold in the mobile market, and I believe in the desktop/laptop market, are integrated Bluetooth/Wi-Fi modules that often throw in other radios and circuitry as well.
Wi-Fi may eventually be appropriate to build into keyboards, mice, wireless headsets, earpieces, and other low-battery peripherals, but that's not really the case today. Bluetooth dominates there in hundreds of millions of installed devices.
Bluetooth's profiles also seem like an advantage to me. Kelly Davis-Felner, the Wi-Fi Alliance's marketing director, said that Wi-Fi Direct would not have application or task overlays, but would be focused on the networking and communication level, as with other Wi-Fi certifications.
Which means that if I connect my mobile phone with my computer to transfer music over, I still need an application on both sides that handles the file transfer. With Bluetooth, the profiles still need an interface on top, but a universally supported file-transfer method already exists. I can use a Bluetooth program under Windows and on the Mac and within various mobile phones to transfer files today.
If I want a method that synchronizes stored files and handles it automatically, then OS makers or third-party developers still do have to build an application on top of that. But with Bluetooth, they can rely on leveraging a well-supported mechanism. It's asymmetric, in that a desktop OS program for syncing MP3 files or photos doesn't require a corresponding program to be installed on a mobile phone that allows access to its storage via the Bluetooth profile.
Now, of course, I'm being a little disingenuous about profiles, because Wi-Fi Direct will create an IP-based network between the two parties, allowing existing service discovery methods to work just as they do over a wireless LAN today--including Apple's Bonjour and whatever the current name of Microsoft's technology is. But none of these methods are supported across gadgets (like cameras), mobile operating systems, and desktop/laptop operating system platforms. That's going to be the challenge for Wi-Fi Direct.
In the end, I certainly see Wi-Fi Direct as provoking additional industry efforts to figure out precisely what's useful about PANs and sell those capabilities to consumers as solutions for frustration or a way to accomplish tasks they're unaware they need to accomplish.
The best thing about Wi-Fi Direct is that it enables a secure, high-speed ad hoc mode that will actually work among different devices, something that's long been needed.
One of the most interesting aspects of Wi-Fi Direct is that it could be used with Bluetooth, since many manufacturers participate actively in the Bluetooth SIG and Wi-Fi Alliance. Beyond Bluetooth 3.0+HS, there could be a convergence path for hand-in-hand networking, playing to each standard's strengths.
The Wi-Fi Alliance announced this morning that it has started certifying fully compliant 802.11n devices, along with new optional elements: The group, which tests 802.11 gear for interoperability, is graduating from the Draft N trademark and testing to plain old N, with updates to logos and processes.
As noted in my earlier article, "The Fine Points of Optional Wi-Fi 802.11n Certification," 2009-08-07, the Wi-Fi Alliance added four additional optional certifications for a third spatial stream, better 2.4 GHz coexistence, space-time block coding, and packet aggregation. A few other tweaks are also added, described in that article.
The biggest change we'll see from the completion of the 802.11n standard and this certification update is three-stream N, which will allow raw data rates of 450 Mbps, along with the potential to simultaneously address three mobile devices at one time that are using single-stream 802.11n. This is likely to have much less impact in the home than in the enterprise, of course. Four-stream, 600 Mbps devices are still in the future.
The alliance only releases new certification programs after testing with reference gear from major chipmakers, this time involving Atheros, Broadcom, Intel, Marvell, and Ralink. The companies involved all shot out press releases today describing their involvement, and how cool all this new gear will be.
It's likely that as the result of certifying new gear, older devices will see minor firmware updates as tweaks are made. The space-time block coding changes conceivably can be rolled into older devices, as well as some of the packet-aggregation updates. Both improve throughput depending on network conditions.
The Wi-Fi Alliance explains four optional 802.11n elements for future certification: The Wi-Fi trade group has over the last 10 years kept together the notion that every device with Wi-Fi on the label should work at the greatest point of agreement with one another. This has continued in spite of new elements and enhancements to the 802.11 family of standards, including 802.11n.
The recent news that the IEEE had approved 802.11n within the 802.11 Working Group, and ratification was likely a few months away, led the Wi-Fi Alliance to explain its roadmap for adding more steps to the certification process. When the Wi-Fi group certifies a device, it runs it through tests that are supposed to ensure that the equipment responds in a standard manner. (The group also does plugfests in which equipment makers bring lots of gear together outside of lab conditions.)
When the word hit, the alliance identified four optional areas of certification that it would add. I knew about some of these areas, but I spoke with the group today to clarify what this meant for both equipment makers and end users. The Wi-Fi Alliance said it would offer tests for coexistence in 2.4 GHz, space-time block coding, transmit MPDU, and three spatial streams. Scratching your head? After 8 years of covering Wi-Fi, I admit I was in that position over a couple of those.
Let's go through them with the help of Greg Ennis, the alliance's Technical Director, who--along with Kelly Davis-Felner, the group's marketing director--was kind enough to lead me through it.
Coexistence. I first wrote about 802.11n coexistence mechanisms in depth back in Feb. 2007, when I interviewed Atheros's CTO Bill McFarland when the Draft 2.0 approval was imminent (see "How Draft N Makes Nice with Neighbors; 5 GHz Averts Tragedy of the Commons," 16-Feb-2007).
Coexistence has to do with the use of double-wide channels--40 MHz instead of the roughly 20 MHz regular channels--in both 2.4 and 5 GHz bands. The 5 GHz band isn't a problem, because 20 MHz channels don't overlap; Wi-Fi selectable channels in 5 GHz are staggered by intervals of 4 band channels (5 MHz each), such as 36, 40, 44, and 48. In 2.4 GHz, channels are staggered only by a single 5 MHz band channel, meaning that the use of 40 MHz will nearly always conflict with other existing networks.
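The arithmetic behind that is easy to check. Here's a sketch; note it simplifies a 40 MHz channel to a single nominal center frequency and width, glossing over the real primary/secondary channel structure of 802.11n.

```python
# Why 40 MHz channels are trouble in 2.4 GHz but not 5 GHz:
# 2.4 GHz channel centers sit only 5 MHz apart, while selectable
# 5 GHz centers are 20 MHz apart. (Simplified: a 40 MHz channel is
# modeled here as one block of spectrum around a center frequency.)

def center_mhz(channel: int) -> int:
    """Center frequency of a Wi-Fi channel (2.4 GHz chs 1-13, 5 GHz chs 36+)."""
    if channel <= 13:
        return 2407 + 5 * channel
    return 5000 + 5 * channel

def overlap(ch_a: int, width_a: int, ch_b: int, width_b: int) -> bool:
    """True if the two channels' occupied spectrum overlaps."""
    return abs(center_mhz(ch_a) - center_mhz(ch_b)) < (width_a + width_b) / 2

# A 40 MHz block around 2.4 GHz channel 3 collides with a neighbor's
# 20 MHz network on channel 6 ...
print(overlap(3, 40, 6, 20))
# ... while adjacent 20 MHz channels in 5 GHz stay clear of each other.
print(overlap(36, 20, 40, 20))
```

With only three non-overlapping 20 MHz channels available in 2.4 GHz to begin with, a 40 MHz channel almost inevitably lands on someone.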
Ennis said that 2.4 GHz coexistence terms weren't fully settled until recently, even though manufacturers have built in some methods of using 40 MHz in 2.4 GHz. The Wi-Fi Alliance discouraged the practice; Apple, for one, doesn't allow its gear to use wide channels in 2.4 GHz.
In the new testing regime, "not everybody is required to support 40 MHz operation--but if they do support 40 MHz operation, they must go through the testing that we've defined," Ennis said.
The mechanisms that require an access point to back off to 20 MHz channels are so broad and severe that it's unlikely you could use a wide channel in any environment in which other Wi-Fi networks operate. Still, Ennis says, it may be of use in enterprise situations, or with future gear that's all 802.11n with these modes enabled, which can be more respectful of each other automatically.
Space-time block coding. This term makes my head hurt every time I read it. I go off to the Web and read up on the principle, and it's above my paygrade. All wireless communication has to allot slots in some fashion--through contention or scheduling--for bits to go through. That's the basis of all wireless standards.
What STBC does is extend that beyond time into the domain of space. An access point can, through some complicated encoding, send different information simultaneously using multiple spatial streams so that receivers (stations in Wi-Fi parlance) that have single-spatial stream receivers can separately but at the same time decode their unique package.
The utility of this complicated feature is that we're likely to start seeing lots of single-stream N devices, as I've written about in the past year. (See, for instance, "Does the iPhone Need 802.11n?", 26-March-2009.)
Chipmakers are most likely now delivering quantities of these lower-powered, cheaper 802.11n chips that can't offer two streams--and thus double the bandwidth--as laptop and desktop 802.11n modules can. With STBC, an access point can utilize the full available 802.11n bandwidth by splitting it spatially between two devices instead of halving bandwidth by speaking to a single-stream device solely.
Ennis noted that STBC also improves the signal-to-noise ratio, which makes faster rates and farther distances possible. "I think this is going to be a popular optional feature," he said.
Aggregation MPDUs (MAC Protocol Data Units). While sounding obscure, this is yet another way by which 802.11n can eke out improved speeds. For long sequences of data, aggregation MPDUs lets a Wi-Fi system create a long frame, reducing all the overhead required to send a packet. (Every packet has origin and destination information, a preamble, and other data that adds overhead.)
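A crude model shows why amortizing that overhead helps. This sketch charges one fixed per-burst overhead measured in bytes, which is an assumption for simplicity; real 802.11n overhead is airtime-based (preambles, interframe spacing, acknowledgments).

```python
# Back-of-envelope efficiency gain from frame aggregation: fixed
# per-transmission overhead is amortized across many payloads.
# The 100-byte overhead figure is an assumed round number.

def efficiency(payload_bytes: int, frames_per_burst: int,
               overhead_bytes: int = 100) -> float:
    """Fraction of transmitted bytes that are useful payload,
    charging one fixed overhead per burst."""
    useful = payload_bytes * frames_per_burst
    return useful / (useful + overhead_bytes)

print(f"one 1500-byte frame per burst: {efficiency(1500, 1):.1%}")
print(f"ten frames aggregated:         {efficiency(1500, 10):.1%}")
```

The gain is real but modest, a few percentage points, which squares with Ennis's caveat below about the improvement not being dramatic.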
For video, for instance, Ennis says that this kind of aggregation can improve throughput, although probably not by double-digit percentages. "It's not as dramatic an improvement as say using more spatial streams, or using 40 MHz channels," he said.
Currently, the Wi-Fi Alliance tests aggregation only if a manufacturer's access point sends these aggregated frames; it checks that a station can properly receive such frames, which can be interpreted under earlier 802.11n drafts. The new optional certification tests for aggregated frames sent by both stations and access points. (If included, it must be tested.)
Three spatial streams. This last one is quite simple. The Wi-Fi Alliance can now test for devices that send three streams of data across space, up from two. Ultimately, we should see devices that can handle four, with a maximum raw data rate of 600 Mbps using wide channels in 5 GHz.
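Those headline numbers fall straight out of the PHY arithmetic. A sketch, using the top 802.11n modulation (64-QAM, rate-5/6 coding) over a 40 MHz channel's 108 data subcarriers with the 3.6-microsecond short-guard-interval symbol:

```python
def dot11n_raw_mbps(spatial_streams, data_subcarriers=108,
                    bits_per_subcarrier=6, coding_rate=5 / 6,
                    symbol_time_us=3.6):
    """Raw 802.11n PHY rate in Mbps: each OFDM symbol carries
    subcarriers * bits * coding_rate data bits per spatial stream."""
    bits_per_symbol = data_subcarriers * bits_per_subcarrier * coding_rate
    return spatial_streams * bits_per_symbol / symbol_time_us

# One stream yields 150 Mbps; two, three, and four streams scale
# linearly to 300, 450, and 600 Mbps.
```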
Those are the technical bits. I asked Kelly Davis-Felner, marketing director, how all the above plus other specifications already available and other elements coming down the pipe would be presented to buyers. The a/b/g/draft n labeling can only go so far. She said that's her primary focus right now, and there should be more news on that front soon.
The 802.11n spec celebrates its seventh anniversary without ratification: The gears at the IEEE grind but slowly, and 802.11n is still not actually a ratified and published standard even though it's been built (in "draft" form) into tens of millions of devices, and has a certification standard (Draft N, natch) at the Wi-Fi Alliance. (The alliance is separate from the IEEE, developing standards for testing interoperability of commercially produced devices using the IEEE standards as the basis.)
Wi-Fi guru Matthew Gast, author of 802.11 Wireless Networks: The Definitive Guide (foreword by yours truly), writes on his marvelously named blog that 802.11n has moved up a few rungs of the IEEE hierarchical process towards shedding its draft label.
The 802.11n spec was developed in a process that started with the High Throughput Study Group, which was turned into Task Group N within the 802.11 Working Group, which specializes in wireless LAN protocols. Matthew writes that the working group has now passed the spec upwards to higher-level groups, starting with the IEEE review committee, which meets 11-September-2009. Matthew notes that's exactly 7 years after the first meeting of the high-throughput group.
In practical terms, this is all institutional process, rather than anything that will result in changes. As far as I can tell, there have been no substantive changes to 802.11n in years, and the less-important changes occur on the driver side, Matthew said via email. It's also important to note that no device has appeared that implements all the optional parts of 802.11n, and some monkeying around has occurred in those areas.
The draft label should come off in September.
Qualcomm says it has 600 Mbps 802.11n 4x4 solution: The cellular and GPS chipmaking giant finally releases new Wi-Fi gear, blowing the roof off with a 600 Mbps (raw), 4-radio, dual-band, 4x4 antenna array--the N-Stream Wireless LAN WCN1320. The chip will sample this month, allowing manufacturing partners to start designing products around it. A production date hasn't been announced.
Ever since the grand compromise was made that allowed Task Group N in the IEEE 802.11 Working Group to move forward, the option of having four radios and a 4x4 antenna array for a raw data rate of about 600 Mbps has existed. However, the cost of such a device would be so high that until 2-radio (raw 300 Mbps) 802.11n was in wide use--especially in enterprises--the 4-radio flavor didn't seem to be something the market would demand and pay a huge premium for.
1 1/2 to 2 1/2 years into the N revolution, depending on how you count, the time must be ripe. Qualcomm is advertising this product as a way to carry multiple HD streams across a house. The chip integrates an application processor, which lets a set-top box or other equipment maker offload some processing to the chip instead of burdening the host.
Qualcomm acquired the pioneering MIMO Wi-Fi firm Airgo a couple of years ago, and this is the first standalone Wi-Fi product that's emerged from the firm since then.
The company isn't the first to announce a 4-radio 802.11n solution. The startup firm Quantenna announced a 4-radio, 4x4 antenna chip that can be configured in pairs for what the company says is an aggregate of 1 Gbps across 2.4 and 5 GHz. As far as I can tell, Quantenna is sampling, but no products are yet shipping.
The Wireless Gigabit Alliance (WiGig) brings together 17 tech firms for 60 GHz streaming video, LAN standards: The 60 GHz unlicensed band, available for use in various forms worldwide, can carry Gbps of data, but there hasn't been unity about how to proceed. The new WiGig group will focus on streaming video (SiBeam is the leader in this band already), wireless LAN (the IEEE already has a 60 GHz working group underway), and docking/synchronization--a replacement for UWB, which hasn't lit up the market yet, but is at least available right now.
Multi-Gbps wireless LAN networking would be a hoot in the home, especially as we push data to networked storage devices and move ever-larger video and photo files around, but the standard's real potential is in providing for lossless high-definition streaming alongside these other purposes.
The group has been working together for a year, and chose this moment to make its public debut. A standard is due out in fourth quarter, with testing to follow. WiGig intends to bring its work to the IEEE group on 60 GHz wireless LAN (802.11ad), and many WiGig members are also Wi-Fi Alliance members and IEEE participants. It's possible that 802.11ad will look a lot or entirely like WiGig. WiGig will also create a testing plan and carry out certification.
Bill McFarland, chief technical officer at Atheros, said in an interview today that it's clear consumers will wind up moving increasingly more data around the home. "People will end up with large files and high data rate streams. They're going to want to be able to move it flexibly," he said. Rather than have multiple chips dedicated to different purposes, WiGig is trying to unite it all under one banner.
McFarland noted that 60 GHz has a big advantage: 7 GHz of spectrum available in the U.S. and much of the world. "This very broad piece of bandwidth that we can use without licenses, without paying, and it allows us to use it in kind of big chunks, where we can get to very high data rates"--multiple gigabits per second.
The high data rates allow uncompressed HD video--roughly 3 Gbps--which avoids the current expense, possible image degradation, and latency of adding H.264 chips or other compression hardware between the transmitter and receiver.
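That "roughly 3 Gbps" figure is straightforward arithmetic on an uncompressed 1080p stream at 60 frames per second and 24 bits per pixel:

```python
def uncompressed_gbps(width=1920, height=1080, bits_per_pixel=24, fps=60):
    """Raw bit rate of an uncompressed video stream, in Gbps."""
    return width * height * bits_per_pixel * fps / 1e9

# 1920 x 1080 pixels x 24 bits x 60 frames/sec is about 2.99 Gbps,
# comfortably inside a multi-Gbps 60 GHz link but far beyond 802.11n.
```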
The WiGig group isn't intending its standard as a Wi-Fi competitor; 60 GHz attenuates rapidly and doesn't penetrate objects well. This limits it to mostly in-room purposes. Wi-Fi in 802.11n can work well throughout a house. The idea of tri-band (2.4/5/60 GHz) chips seems like a reasonable path to take.
I asked McFarland how this 60 GHz effort would avoid the pitfalls of ultrawideband's rocky 7-year road to potential oblivion. He noted that there's no other spectrum available that enables multiple Gbps, and that by bringing together a set of companies involved through the development and marketing chain they can avoid the strife that delayed and may have doomed UWB.
When UWB was initially proposed, the FCC hadn't approved it. Ultimately, regulators worldwide allow UWB, but some have highly restricted the spectrum range, which reduces the number of simultaneous networks and devices, and requires more flexibility in product design. The 60 GHz effort starts with worldwide regulation already in place.
Ultimately, UWB took so long from design to market that "the data rates that UWB offered were not significantly higher than what could be achieved using 11n technology, so there was no strong, compulsive drive" to put UWB in hardware. (UWB started to make noise when Wi-Fi's highest rate was 11 Mbps, remember.)
Mark Grodzinsky, the marketing vice president of startup Wilocity, a firm that will develop chips and reference designs (and someone who was deeply involved in reconciling 802.11n into a viable standard), said of the 13 firms on the board of directors, "This is a group of companies that really knows how to do this and has done it before very well." Combined, they sell billions of wireless chips each year.
The intent with WiGig is to offer several key differentiators that can't be matched by anything available today, and that aren't likely to be matched by any other technology on the drawing board. These include the high speed, but also the notion of multiple applications sharing a single radio. (This is how Bluetooth has managed to thrive, and it was one of the intents of the WiMedia Alliance for UWB.)
Grodzinsky described wireless docking and wireless display as two capabilities that are highly limited with any technology today. If you have a device capable of eSATA, gigabit Ethernet, and multiple USB streams, but the dock connection is 480 Mbps USB 2.0 or even wireless USB, performance is highly throttled down. A wireless display isn't really possible.
WiGig was also conceived with handheld devices front and center: characteristics that keep power use low are part of the spec from the get-go. Grodzinsky said, for instance, that error-correction schemes kick in only when errors need to be corrected; other wireless technologies burn cycles fixing errors that don't exist.
WiGig's board of directors includes major chipmakers in the wireless space (Atheros, Broadcom, Intel, and Marvell), handset firms (LG, Nokia, and Samsung), PC-focused companies (Dell, Intel, and Microsoft), and consumer electronics manufacturers (NEC, Panasonic, and Samsung). Note there's some overlap among those firms' markets, too. Notably absent is Apple, which rarely joins standards groups at their inception, but is often an early adopter and later board member. Sony is also missing from this list. (Four other firms are "contributors" and not on the board, including more chipmakers.)
SiBeam is also not on the list, although its backers Panasonic and Samsung are. SiBeam is part of the WirelessHD Consortium, which is backed by six firms in the WiGig group, plus Sony and Toshiba. There will have to be a merger or some kind of close association between WirelessHD and WiGig because no TV set or computer will have two sets of chips, and WirelessHD doesn't have a data-transfer focus.
In-Stat says UWB will disappear by 2013: EE Times writes about In-Stat's latest report on ultrawideband, in which the analysis firm says the short-range technology, best suited for personal area networking (PAN), will fade from consumer electronics by 2012 and PCs by 2013. In-Stat believes that Wi-Fi will win out, with newer wireless solutions gradually phasing in, such as the 60 GHz SiBeam approach.
Most of the UWB startups, including all those devoted to video streaming over UWB, have folded or halted normal operations; just Alereon, Staccato, and Wisair remain. (Sigma Designs remains in business and offers RF and coax UWB flavors for home networking, but isn't focused solely on UWB, nor did it develop a specific video streaming technology, although it works with Fujitsu on one approach.)
Stephen Wood, the long-time head of the now-dissolving WiMedia Alliance (a trade group devoted to UWB standards), spent some time convincing me in March (as I reported in this Ars Technica article) that UWB had a future because separate trade groups were still interested in pursuing UWB as a fundamental part of their evolution.
Wood's multi-pronged argument is that the cost of UWB chips and integration is finally dropping to the widespread adoption point; the USB Implementers Forum is committed to UWB for its Certified Wireless USB flavor; and that only relatively recently were worldwide regulatory standards put in place that could spur the use of UWB on a truly worldwide basis.
Thus it seems to me that the real question about UWB is whether manufacturers who are members of the USB forum, a few of which already ship a limited set of UWB-enabled laptops, get gung-ho about the technology and start embedding it in large swaths of products when the price hits the critical $5 threshold.
For that to happen, printer and digital camera makers, along with mobile handset developers, would also need to get religion. All the desktop and laptop PCs in the world could come with UWB "free" (the cost hidden in the overall price), but without peripherals it makes little sense.
With many entry-level printers and nearly all portable gadgets--smartphones or otherwise--having Wi-Fi built in, I have a hard time seeing where UWB gets a foothold. Further, with the coming wave of faster, battery-saving single-stream 802.11n devices hitting the market this year, and the Bluetooth SIG having released its 3.0 spec with an 802.11 data-transfer mode for large files, it's just hard to see where UWB can fit in.
Leaks reported from some reasonably accurate sources say that 802.11n might be built into the next model of iPhone, along with chips to support the 7.2 Mbps HSPA flavor to which AT&T is currently upgrading its 3G network.
Could it be? Sure. But is it useful? Not so much yet.
802.11n was developed as a range and speed booster, employing multiple antennas and two or more radios to work over greater distances (sending a stronger signal, having better receiver sensitivity) and at greater speeds (improved encoding, multiple spatial paths, double-wide channels).
That's fine for laptops, desktops, and routers, but it's hard to cram that much radio technology into a battery-powered mobile device without making the time between charges unusably brief.
Meanwhile, chipmakers keep shipping hundreds of millions of commodity 802.11g chips, which they make no real money from, and which they have no interest in improving processes for.
That's where single-stream 802.11n comes in, using only a single radio and a single antenna. It may seem odd to cut out most of the advantages of the standard--still lurching its way to a 2010 ratification, by the way--but single stream still offers quite a lot.
Apple released simultaneous 2.4/5 GHz base station upgrades today to its two full-featured products: Apple's AirPort Extreme is in its sixth revision with the same name, by my count, and its third featuring 802.11n. The latest release moves from either 2.4 or 5 GHz 802.11n networking to supporting both bands at the same time. This is a hardware update, requiring a new unit, as Apple added a radio to the mix.
Apple also updated its Time Capsule model, an Extreme base station that comes with an internal 500 GB or 1 TB hard drive for backups and networked file sharing. Prices are the same as the previous one-band-at-a-time models: $179 for Extreme; $299 for a 500 GB Time Capsule; and a ludicrous $499 for a 1 TB Time Capsule.
Both units have four gigabit Ethernet ports, configured as 1 WAN and 3 switched LAN ports or as a 4-port switched LAN in bridged mode. Both models have a USB port that allows a printer or hard drive to be attached, or, using a USB hub, multiple devices of each kind.
Apple ships configuration software for both Mac OS X and Windows, and the system is fully compatible with Windows.
The company told me it incorporated small changes to its Mac OS X adapter hardware that allow a Mac to select the faster network if both the 2.4 and 5 GHz networks are named identically. This change doesn't break Wi-Fi interoperability, but goes beyond the process adapters use today to select among available networks. The 2.4 and 5 GHz networks may also be named uniquely, of course.
New features were added as well. Guest networking is a neat addition, using a virtual SSID and virtual LAN to create a network for guests that's got a separate encryption key and cannot see the traffic of the main network nor its Ethernet portion.
Apple also added remote secure file-sharing access to any internal or external drive on either base station model using its Back to My Mac service, which debuted in Leopard. The service requires MobileMe, a $100/year email and file-storage subscription service. Mac OS X 10.5 Leopard can initiate a strongly encrypted tunneled connection with any other Leopard system or, now, new base station that's configured with the same MobileMe account.
Those interested in deeper detail should consult my Macworld article.
Meraki has decided they're a grown-up company, after all: Meraki started out as the little guy, with tiny $50 nodes that would self-organize into a mesh Wi-Fi network. Even as the company grew out of its origins, it still focused largely on indoor applications, with outdoor uses as an adjunct. Its new MR58, a 5 GHz triple-radio 802.11n ruggedized, weatherproof outdoor node, changes that entirely.
The $1,499 (list) unit, which meshes with all the existing gear and includes the license for Meraki's required software-as-a-service hosted management system, can go omnidirectional or directional on each of the three radios as separate systems. The company sees the unit as being a way to link locations (they claim 1 to 20 km with appropriate directional antennas), and provide front-end access in public places.
The company sells into several markets, including hotels and motels, apartment buildings, academic campuses, and hotzones. It doesn't emphasize corporate customers, although Meraki added WPA2 Enterprise authentication in a recent back-end update.
The gear Meraki is selling could be used for cities--the company has some such installations in towns--but the design is intended to extend Meraki's existing ecosystem from tiny indoor wall warts up to outdoor AC and solar-powered single-radio models.
The MR58 is 802.3af Power over Ethernet compliant, sucking down 8 watts at most, the firm's founder told me. This means a single Ethernet run to a roof can power the MR58; no AC outlets required.
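A quick sanity check on that power budget: 802.3af sources 15.4 W at the switch, of which roughly 12.95 W survives worst-case cable loss to reach the powered device, so an 8-watt draw leaves comfortable headroom.

```python
def poe_headroom_w(device_draw_w, delivered_w=12.95):
    """802.3af headroom: ~12.95 W of the 15.4 W sourced at the switch
    reaches the powered device after worst-case cable loss."""
    return delivered_w - device_draw_w

# The MR58's stated 8 W maximum leaves nearly 5 W of margin.
```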
You can get the full scoop in my coverage at Ars Technica.
Buffalo Technology has had an injunction lifted in its ongoing patent litigation with Australia's CSIRO technology agency: Buffalo had been unable to sell Wi-Fi equipment in the U.S. since a permanent injunction was put in place in June 2007 following its 2006 loss in a lawsuit. CSIRO has a patent that it argues covers aspects of OFDM in 802.11a/g. CSIRO sued Buffalo after the Japanese equipment maker declined to pay royalties.
The injunction prevented Buffalo from selling gear that it offers in Japan and elsewhere in the world during the huge expansion of Draft N sales. This likely cost tens of millions of dollars in revenue, if not more. Buffalo was formerly mentioned in the same breath as D-Link, Linksys, and NetGear. (Linksys, as a division of Cisco, already pays CSIRO license fees: Cisco agreed to honor CSIRO's patent assertion because of its purchase of an Australian firm a few years ago.)
Buffalo can now sell Wi-Fi gear in the U.S. due to winning a narrow appeal in October that sent the case back to a lower court to resolve an issue. The company could still be liable for damages and other fees if the lower court finds for CSIRO and higher courts agree.
Orthogonal Frequency Division Multiplexing divides a single Wi-Fi channel into a large number of narrower subchannels (subcarriers), improving performance in reflective environments and adding robustness against interference. It's also used in WiMAX, LTE, and other standards. This could mean CSIRO would eventually pursue makers of other technology as well.
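For concreteness, here's the subdivision in 802.11a/g: a 64-point FFT carves the 20 MHz channel into 312.5 kHz subcarriers, 48 of which carry data (with 4 pilots and the rest left as guard band).

```python
def subcarrier_spacing_khz(channel_mhz=20, fft_size=64):
    """OFDM subcarrier spacing: channel width divided by FFT size."""
    return channel_mhz * 1000 / fft_size

# 20 MHz / 64 = 312.5 kHz per subcarrier in 802.11a/g
```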
CSIRO has never given any sign of asking for predatory royalty rates, but several firms have countersued, including Intel, Dell, and Microsoft. Those cases are still in litigation, as far as I can tell.
The folks at Quantenna made a splash with their "1 Gbps" Wi-Fi announcement today: Venture-backed chipmaker Quantenna says it has a tiny chip that should make it easier and cheaper to push high-throughput Wi-Fi around a home using wall-outlet adapters. The company claims 450 Mbps of throughput from the highest-end Draft N standard (600 Mbps raw), and says it has a 1 Gbps wireless offering that uses multiple bands and channels to achieve that throughput. There's not enough detail to know how proprietary that is, or if it's a form of channel bonding.
Quantenna announced three chipsets and a reference design: simultaneous dual band at raw rates up to 1 Gbps, 5 GHz at up to 600 Mbps, and 2.4 GHz at up to 450 Mbps. The reference design is for a compact wall outlet Wi-Fi extender.
The company said it's using a proprietary version of the 802.11s mesh protocol to allow devices to interact with each other. Quantenna's focus appears to be on spreading signals across a house, such as for streaming high-definition video, where lots of bandwidth will be needed as telcos, satellite operators, and cable firms deliver HDTV into homes today and plan much more in the future. Storing HD and then being able to have multiple live streams sent among devices is apparently the dream of those involved in home entertainment.
You can be clever about pushing HD around a home (like Ruckus) or brute-force it by flooding an area with high throughput (like Quantenna); brute force isn't a bad strategy, and it's an interesting one. The fact is that there are already market solutions that don't require 450 Mbps of net throughput. The segment Quantenna is looking at seems too well developed and too small for it to capture a sizeable chunk when products based on its design are released in mid-2009. And as a startup, its ability to sign deals with firms that sometimes take 1 to 2 years to negotiate and sign makes me wonder; its investors might be brokering those deals to make them conclude faster.
Small, integrated chips make a big splash because they reduce the battery drain on mobile devices, allow their use in handhelds, and can dramatically drop the cost of manufacture, both through a reduced bill of materials and reduced assembly costs. Quantenna told several sources that it expects to charge $20 for a single-band chipset and $40 for a dual-band chipset in quantity. For chipmakers these days, "quantity" can mean from 100,000 to 1m units before the price drop happens. (It used to mean much more, but efficiencies in smaller lots of chipmaking have apparently improved.)
I've followed chip announcements in the Wi-Fi space for years, and small startups with unique offerings tend either to get swallowed up in short order (Airgo into Qualcomm) or to disappear (the very promising Engim). Atheros, Broadcom, Qualcomm, Texas Instruments, Marvell, CSR, and a few others own the market, and that's just how it is. Chipmakers in this industry segment need millions and then tens of millions of unit sales to recover their R&D costs while sinking money into future R&D for the inevitable next generation.
(Airgo, I might note, was sucked into Qualcomm and sunk without a trace, although it's likely their patents were part of what was of interest; their approach to building MIMO systems was probably integrated into other product lines and multi-standard chips.)
My review of the Linksys WRT610N at Macworld: The router works quite well at handling Wi-Fi and other functions, but is terrible at working with Mac OS X, one of the advertised features of the product. The WRT610N is a revised design of the previous simultaneous dual-band (2.4/5 GHz) Draft N WRT600N model which had far worse problems.
Linksys addressed many of my concerns with that previous device. The 610N can mount a drive and share it via SMB and FTP, have two full-speed connections running over both bands without skipping a beat, and supports several methods of getting the one-click WPS (Wi-Fi Protected Setup) to work. Read the review for all the details, but I can't recommend this router to Mac users with any needs beyond basic networking; I'm perfectly happy to give it a full thumbs-up for Windows XP and Vista users, however.
WPS is a particular mess, by the way. Linksys has four somewhat distinct methods of using WPS to enable a password-free encrypted connection between a client and a base station: a button on the front that, when pressed, turns on WPS; and three modes (one of them similar to that button) accessible via their Web configuration software. One option is to get the base station to create a short PIN that's then entered on the client system as an out-of-band confirmation that there's no man in the middle.
Apple, by contrast, has a single way of joining a WPS-offering base station: it displays the network's name in bold. Select the network, and Mac OS X displays a key code that needs to be entered on the base station. But the WRT610N can't handle that option. If you put the WRT610N into a mode in which Apple can spot the device as offering a WPS handshake, you can't enter the code into the Linksys router!
This shows there are still rough edges in the WPS protocol when two of the highest-selling makers of Wi-Fi gear can manage not to mesh their respective options. (Apple declined to comment for my Macworld story; Linksys confirmed the lack of compatibility, but laid the burden on Apple's doorstep.)