Entire site and all contents except otherwise noted © Copyright 2001-2010 by Glenn Fleishman. Some images ©2006 Jupiterimages Corporation. All rights reserved. Please contact us for reprint rights. Linking is, of course, free and encouraged.
Quasi-sock puppetry: An extremely detailed financial analysis of municipally owned networks was released by the Pacific Research Institute, a think tank with strong industry ties. The report is called Wi-Fi Waste: The Disaster of Municipal Communications Networks, although Wi-Fi is not a component of the vast majority of the networks it examines. And it counts only Wi-Fi networks that are municipally owned--three!--a fraction of the wireless networks being deployed.
The report's other specious premise is that Wi-Fi and telecom are interchangeable. Most of the Wi-Fi networks being delivered will have small voice components initially, and won't replace residential or business telecom at all. IPTV is a component of most municipal fiber networks, and all incumbent fiber networks, but it's barely a consideration for early Wi-Fi networks due to the vast mismatch in available bandwidth.
It would take weeks to look through their assumptions and analysis on how these largely fiber-optic or fiber-coax-hybrid networks are huge money sinks. But their statement that the systems have cost taxpayers $840m over 20 years is tricky: some of these municipal entities are utilities that are required to invest continuously in infrastructure, and the money put into networking didn't come from taxpayers--it came from ratepayers or even from the electricity or water markets when surplus was sold--and wouldn't have been returned to taxpayers if unspent.
The report uses data from no later than 2004, and that's tricky: even though a number of these networks have been in operation in some form for years, the first few years of each network's operation involve extremely expensive buildout that's paid back over many years. So a four-year window of operation could show a disaster. And I'm dubious of any analysis of Tacoma, Wash., that shows it as a financial failure--that reveals something about the assumptions used to analyze the network's revenue. It's also tricky to look back to 2004, before incumbents had deployed any real fiber and before broadband had hit its home tipping point, and expect to extrapolate in a straight line from there into 2007.
The hobbyhorse of broadband over powerline (BPL) is trotted out as a "new" technology that needs encouragement. All anti-regulation think tanks are pro-BPL as a third pipe to the home, allowing yet another incumbent monopoly (electrical utilities) a broadband option. It's unclear whether BPL will ever be a success, though as I wrote in The Economist in December, changes in US and European spectrum regulation may wind up being a factor in promoting rollouts in certain markets. (Texas is a rare example: TXU Electric Delivery handles just the delivery of power, not retail billing or power plant operations, and its broad-scale adoption of BPL is still in the early stages of deployment and relates largely to its interest in having a smarter power grid. The broadband part is extra.)
However, the report authors should be commended for exposing their entire set of collected research, including a long appendix showing the yearly data they used for operating cash flow, interest, capital expense, and other factors to produce their "cumulative free cash flow" number, which is their determination of success. I'm hoping a group with the resources necessary can look through the assumptions and recalculate the results. Update: Per a note from Becca Daggett in the comments, I don't mean to imply that the report's numbers are an accurate reflection of generally accepted accounting principles. Rather, because the appendix includes all the numbers they used, their process can be reverse engineered and compared against publicly available information from the utilities and municipalities in question.
PRI receives significant funding from industries that they write reports about, according to SourceWatch, and seems to be the last hard-line group opposed to any municipal involvement in broadband except in keeping their filthy hands off it.
Security researcher David Maynor publicly confirmed during a presentation at Black Hat today that he and Jon Ellch had a native Mac OS X Wi-Fi exploit last summer: This is the first time Maynor has confirmed the fact. At the end of a presentation about weaknesses in wireless device drivers and techniques by which those weaknesses can be revealed and exploited, Maynor spoke extensively about his experience in working with Apple's product security, engineers, and PR group last summer. I was sent a copy of the presentation, which Maynor will post on his own site. He left SecureWorks, his employer at the time of last year's dust-up, and has his own security consulting firm now, Errata Security.
In brief, the issue at stake has been whether Maynor and Ellch fabricated an exploit, or whether Apple lied about said exploit; or whether a more baroque explanation could be made. The exploit would have allowed a cracker in radio proximity of a Mac running 10.4.6 and earlier, and possibly 10.4.7 (minus certain patches), to gain control of the computer. You can read a relatively concise history I wrote after Apple provided Wi-Fi patches in Sept. 2006 that they claimed were in response to, but not as a result of, work done by Maynor and Ellch. (That is, internal work was done at Apple because of concerns, not because of details supplied by the two, Apple said.)
Maynor said he will release code that will cause a Mac OS X 10.4.6 system to crash, but not to "own" it, or take root control. He'll show the control part in a future presentation. This would ostensibly be the code that he and Ellch had created last summer. I would like to assert that Maynor is speaking the truth here based on facts I have been asked not to disclose.
Why should we care? Because this was a significant weakness in an operating system used by millions of people. How a company responds to reports of security problems is significant, as the failure to recognize and repair these problems leaves users vulnerable without their being aware of it. Researchers who follow responsible disclosure should receive recognition as encouragement to continue reporting these flaws so they can be fixed before the bad guys--typically organized crime using flaws to extract money or con people--take advantage of them.
In the presentation, he shows the emails he sent and describes conversations he had with Apple. Maynor had hoped to put the issue to rest over whether Apple had received material from him: he says and shows that he sent scripts and instructions on replication. I have no reason to believe his screen captures of email are false, nor the responses from Apple he reproduces.
However, and this is going to kill Maynor, what he is able to show from his own email account--he said he cannot show emails sent or received at his SecureWorks account--shows only that Apple received what he sent. It does not show that Apple deemed what he sent "useful." Apple spokesperson Anuj Nayar said at the time that SecureWorks--not mentioning Maynor or Ellch, who was not an employee of that firm, by name--provided no "information to allow us to identify a specific problem."
A later response from Apple's Lynn Fox to queries from George Ou seems to be directly contradicted by what Maynor said today. But there are lots of inconsistencies. In a first-hand account of the talk at News.com, Joris Evers says that Maynor said during his presentation that he sent Apple "code and...packet captures." Fox says that they didn't receive anything related to OS X; Maynor's email shows how he told them how to construct an "FC3" (Fedora Core 3) system that would run the attacks he suggested revealed problems. It's rather confusing.
I have a query into Apple about Maynor's presentation.
It's possible that internal miscommunication within Apple meant that one part of the company received the details directly from Maynor and thought it was useful (read: engineers), while another part was dealing with SecureWorks, which may have provided or not provided a different set of information. SecureWorks was in the middle of a merger with a related firm at that point. It's clear that SecureWorks prevented information from being released to the public.
Maynor makes clear that he and Ellch never intended to show a native exploit, but only a third-party one. They never confirmed the existence of a native Wi-Fi driver exploit until Maynor's talk today. He didn't respond to John Gruber's hack-a-new-Mac challenge because he didn't want to confirm the existence of the flaw. And, he said, SecureWorks succumbed to pressure by Apple--what pressure is unclear, as there should have been no reasonable legal basis--to not speak at Toorcon about the Apple situation last September. SecureWorks and Apple later released a statement that the two firms would work together along with CERT, but no further information has ever been forthcoming.
Brian Krebs of Security Fix at the Washington Post said he saw a native exploit the night before last summer's presentation that led off all this nonsense, but Maynor and Ellch apparently didn't intend to have that information released. It's a reason why they never confirmed Krebs' transcript or account to Krebs or anyone else. It's also unclear whether Krebs saw the actual native exploit, or something akin to it; he apparently did see an exploit without a third-party driver involved. Krebs has been left hanging over this issue this whole time, too, although it hasn't seemed to bother him. News.com reports that Maynor said during the presentation that "I screwed up a little bit" in regards to the confusion about what had been demonstrated.
Maynor wrote in his presentation that he lost patience with Apple after he agreed to provide them information and Apple's PR arm then released statements broadly denying the existence of flaws--denials that went beyond what Maynor and Ellch had shown and stated was the issue. In an email from Fox in Apple PR that Maynor reproduces in his talk, he notes that she asked him to put a statement on SecureWorks' site that would describe more specifically what was demonstrated at Black Hat and to Krebs. What Fox asked for was too broad and possibly inaccurate: she wanted him to deny that any MacBook exploit was possible, and to state that Krebs had not seen any native exploit. Given the confusion around what Krebs saw, I'm not clear whether Fox would have known precisely what Krebs saw, either.
What SecureWorks published was something different--see this archived Security Focus mailing list item, as SecureWorks has pulled the page from their site during a redesign (scroll down to find it). SecureWorks' statement, based on what Maynor now confirms, is accurate.
At the end of this presentation, Maynor notes in passing that even after he received the mail he disliked from Fox, he provided information to Apple about a significant Bluetooth vulnerability, one that hasn't yet been fixed. Maynor said this experience has led him to have no interest in providing information to Apple again about security flaws. That's to all our detriment.
Muniwireless.com notes the trend of cities negotiating contracts for broadband wireless without putting out bids: Napa, Calif., is the most recent of several examples in which a city simply entered negotiations with a single provider--AT&T, in this case--that will offer the service. Carol Ellison writes that AT&T's rapid entry into the metro-scale Wi-Fi market is just one prong of the company's municipal package. They can offer everything in one bucket. AT&T told Ellison that they are obligated to wholesale access on the wireless network to other companies, although we've seen in the past what a requirement means to incumbent wireline operators.
Ellison notes that this is a great strategy for AT&T and its shareholders, but what about cities and residents? She thinks that an open process with competitive elements better suits taxpayers.
The great breadbasket of Canada looks to bring wireless to major municipalities: The middle part of the country is known for agriculture, and has a population under 1m, although food ranks far below oil these days in revenue. The province aims to improve connectivity for its residents and for business travelers by adding free Wi-Fi to the four largest cities. Because of the relatively low population and large amount of business conducted, it's a relatively low-cost win--Cdn$1.3m to install 250 nodes across the selected locations, plus Cdn$339K per year in ongoing costs.
The Pew Internet and American Life Project talks to Americans about their use of the Internet on an ongoing basis: Their most recent finding is that 34 percent of those surveyed have used the Internet wirelessly at home, work, or elsewhere. Their free report shows that 72 percent of wirelessly connected users check email every day, compared to 63 percent of home broadband and 54 percent of all Internet users. They also found 80 percent of wireless users have broadband at home, which makes a lot of sense.
Seventy-five percent of wireless users have accessed the Internet from at least two of the three areas surveyed--home, work, and some other place--which also correlates to anecdotally observed behavior. The report is based on under 800 respondents, with just a few percentage points of expected error.
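That "few percentage points" figure checks out with the standard margin-of-error formula for a simple random sample (this is my own back-of-the-envelope check, not a calculation from the Pew report):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error at 95% confidence for a simple random sample of size n,
    using the worst-case proportion p=0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# Pew's sample size: roughly 800 respondents
moe = margin_of_error(800)
print(f"{moe * 100:.1f} percentage points")  # → 3.5 percentage points
```

So a figure like "34 percent" really means somewhere in the range of about 30 to 38 percent.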
Ruckus Wireless offers the MediaFlex HS, a $200 device aimed at helping hotspots shape traffic, net revenue: The device uses the multiple-antenna beam-forming system found in other access points and bridges sold by the company, but offers the ability to create multiple virtual networks (SSIDs), each of which has unique properties. This could allow a hotspot to offer a lower-tiered free service, and a higher-tiered premium service, with the latter providing video and VoIP support. Ruckus isn't offering bandwidth throttling, but rather employing quality of service in such a way that one virtual network could have its video and voice packets assigned full priority over another.
Ruckus correctly identifies hotspot operators' need for something more than commodity gear, which lacks flexibility such as multiple SSIDs to restrict network access for different purposes, while the much more expensive enterprise APs are overkill and hard to manage.
The municipal broadband scene is seeing a turning of the tide: Oh, yes, Master Shallow, I have heard the chimes at midnight, and it seems that an old way of doing business may be passing away, as incumbents refocus their efforts and state bills are poised to reverse incumbent-benefiting restrictions on municipalities and utilities. Reports in from all over.
Washington State considers bill to allow public utility districts to offer telecom services: The bill would cut the line, "Nothing in this subsection shall be construed to authorize public utility districts to provide telecommunications services to end users" among other enabling changes.
Home of a long-running feud over fiber, Lafayette will start work: The Louisiana town has wanted to roll out its own city-owned fiber-optic network for years and years, and was fought on several fronts by incumbent operators and others. The Lafayette Utilities System received a 7-0 state Supreme Court ruling in its favor to allow it to sell bonds to finance the project. Cox says it already has a state-of-the-art fiber installation and will invest another $500m in the region. In most cases, incumbents rarely upgrade facilities until the threat of municipal competition is invoked. BellSouth also fought the effort. Neither offers fiber-to-the-home (FTTH), which is what Lafayette will build. The utility is rapidly getting its house in order to sell bonds and start building. Some service could start in 18 months.
Pennsylvania bill would re-enable municipal broadband: Rep. Mike Sturla (Dem.) has introduced a bill that would overturn the contentiously approved law that nearly scuttled Philadelphia's planned wireless network. The law requires municipalities that hadn't started work by a certain point to request a waiver from the incumbent telecom provider. The law has been construed to allow public-private partnerships, however, in which cities aren't the owners of the networks being built. Still, Rep. Sturla says it's unfair that some municipal wireless networks can be built due to timing, while other cities have their hands tied.
A very strange story out of Alaska: A police officer seized the laptop of a 21-year-old who was parked in his car using a library's free Wi-Fi after the library had closed. The police had warned him off parking in private neighborhoods and using unsecured networks, and had told him to leave the area outside the library the day before they seized his computer.
The article is short on details. He wasn't arrested, but his computer was seized. The basis for that seizure isn't disclosed--what crime was actually committed? Trespassing, perhaps, as he was parked in a place he had already been told to leave? The computer isn't being examined by police; rather, the library's director will be looking into the matter. The fellow in question seems only mildly irritated, and neither he nor the police are sure whether he'll be taken to court over the matter.
The hilarious librarian Jessamyn West notes on her blog that there are a number of other unanswered questions, such as why the library needed a professional to install a "timer," when they could just hit the off switch if they didn't want it used after hours.
This reminds me quite a bit of the quite (not Very) Rev. AKMA (A.K.M. Adam) being asked back in Aug. 2004--by a police officer with a sketchy idea of what actual law might be involved--to stop using the Nantucket (Mass.) Atheneum's Wi-Fi while he was sitting outside the facility.
I read through the Alaska State Troopers' recent watch reports, and found no mention of this. Anchorage police don't publish a blotter, more's the pity.
The rush to put Wi-Fi on trains has slowed in a couple of places in the US: Down in Florida, the Tri-Rail system won't choose among three potential vendors to put Internet access via Wi-Fi on board its trains. Rather, the South Florida Regional Transportation Authority has reached out to three counties (Palm Beach, Broward, and Miami-Dade) to integrate all their efforts at large-scale Wi-Fi together.
In the San Francisco Bay Area, I wrote a few days ago about a trial of Nomad Digital's gear on the Capitol Corridor train line. While four firms were selected last year to test their respective approaches on the CC line, Nomad is the first to carry out a test, in that case using a train borrowed from Caltrain (which runs in the southwest SF Bay). Caltrain tested Nomad's service back in July 2006; vendors were expected to test their approaches during 2006 on the CC line. There's no word on when Capitol Corridor Joint Powers Authority, which operates the CC line, will proceed with additional tests or a formal RFP.
This slowdown in US train-Fi rollouts doesn't necessarily bode well or ill for other, much larger deployments in the UK, Sweden, and The Netherlands.
The LA Times writes that Southern California Edison will allow some of its streetlights to be used in Santa Ana by EarthLink for a test network: The utility has been holding up routine requests for pole access based on concerns that no other utility in the US appears to share. Reporter James Granelli notes that Edison didn't reject requests to put radios on poles and streetlights; it just hasn't acted on them. I'm stunned that none of the cities have filed complaints with the FCC. Edison controls 613,000 streetlights across 183 cities; Los Angeles and Anaheim are outside Edison's control as they own their own electrical operations.
In Edison's defense, their streetlight system has an unusual design in which there's no direct power feed as there is in other cities, and there's a concern about automatic switching equipment receiving interference. Of course, in the end, that's Edison's problem: They can't deny access for this reason under the Telecom Act. They would more likely be required to perform a massive network upgrade.
EarthLink has been testing a variety of gear and frequencies to alleviate these fears.
City in Illinois has Wi-Fi naysayers over health: All you need to know about this article is the following. Naperville Wi-Fi opponent states, "In a town in Sweden, there were so many hospital calls when the WiMax system was activated that the entire country has since eliminated all Wi-Fi systems." Swedish embassy contacted by reporter: Not true; Sweden is expanding WiMax. Opponent: "That’s what I was told."
I received some rather angry email the last time I suggested that EMF used in Wi-Fi doesn't cause ill health effects. I think I've come up with a formulation to explain my position: Love the afflicted, hate their reasoning. I've been trying to make clear that I don't think people--such as this executive profiled in this Daily Mail article--are making this stuff up. Rather, the preponderance of evidence would suggest that ill health that manifests itself so strongly probably has a cause other than EMF because of the prevalence of EMF.
I captured the coffee: Fon has an interesting idea to spread the reach of the for-fee part of its network. Convince folks near Starbucks outlets to install Fon routers--which Fon will provide at no cost--and thus provide a $2 day-rate alternative to T-Mobile's $10 price tag. (T-Mobile charges $10 for 24 hours' access across their network.) Those who operate their network on a for-fee basis with Fon are called "Bills" in Fon's system.
Of course, T-Mobile has T-1 lines to each of its locations--a strange precondition that dates back to MobileStar's initial arrangement with Starbucks--which provide high-quality 1.5 Mbps service in each direction, backed by a service-level agreement. Comparable DSL and cable service provided by a Fonero is likely to be lower on the upload side, but could be as high or higher for downloads.
However, as noted before on this site and elsewhere, most US ISPs don't permit sharing a network connection with users outside the household, Speakeasy Networks being the sole national ISP that allows sharing any broadband connection without special conditions. And La Fonera, the flagship router distributed by Fon, may have a good antenna, but it's unlikely to provide the same strength of service within a Starbucks that the T-Mobile signal provides.
Still, it's an interesting shot over the bow, and any encroachment by Fon on the relatively high day rates charged by T-Mobile and some other networks could offer some welcome relief. On the other hand, as TechDirt notes--from which I found this story--there's plenty of free coffeeshop Wi-Fi for those not inclined to frequent El Starbuckos already.
My review of the new AirPort Extreme Base Station is up at Macworld: This lengthy review, aided by several colleagues at the magazine, covers a lot of the basics for home users. I gave the unit 4 1/2 mice for how well it lives up to its potential and how well it works. I was able to see consistently high speeds in testing, in excess of 90 Mbps in a single direction over 802.11n to Ethernet (flooding packets from N to Ethernet), and about 50 Mbps when flooding from N to N via the base station. My conclusion is that the device really needs gigabit Ethernet to achieve its full potential.
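The gigabit conclusion follows from simple arithmetic, sketched below. The 90 Mbps figure is from my testing; the 300 Mbps draft-N signaling rate and the rough one-half usable-throughput fraction are assumptions for illustration, not measurements:

```python
# Back-of-the-envelope look at why a 100 Mbps (Fast Ethernet) port caps
# draft-N performance.
fast_ethernet = 100       # Mbps: the wired ports on the reviewed base station
measured_n_to_eth = 90    # Mbps: measured flooding from 802.11n to Ethernet
draft_n_phy = 300         # Mbps: nominal two-stream draft-N signaling rate (assumed)
usable_fraction = 0.5     # rough assumption: about half the PHY rate survives
                          # as real throughput after protocol overhead

potential_n = draft_n_phy * usable_fraction  # ~150 Mbps of possible throughput
print(measured_n_to_eth / fast_ethernet)  # 0.9: the wired port is nearly saturated
print(potential_n > fast_ethernet)        # True: only gigabit reveals full N speed
```

In other words, the wireless side can likely move more data than a 100 Mbps wired port can accept, so the Ethernet port, not the radio, becomes the bottleneck.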
You'll note that the AirPort Extreme is what I was referring to in a post a few days ago in which I described how I developed a new testing methodology for Wi-Fi gateways. The Extreme has a minor flaw--one that won't bite many people--in its ability to pass traffic at full Ethernet speeds across its WAN port when network address translation (NAT) is engaged. Apple said they are looking into the problem, which is software based. A source unconnected with Apple provided convincing proof that the AirPort Extreme uses NetBSD as its embedded operating system, and that the network stack in that OS could be at fault. But it could be trivial to fix, too. (Update: Not to be obscure about NetBSD: the Acknowledgements.pdf file found on the CD-ROM that ships with the AirPort Extreme provides full copyright and acknowledgment credit for included software, as required by a host of GPL and other licenses. NetBSD is thoroughly acknowledged there; the DHCP software is credited to ISC.)
I'll be writing more soon about particular aspects of the base station, but for now, I'd like to direct you to the technical discussion about the Extreme's use of IPv6, the next-generation Internet routing protocol that's been "next generation" for something like eight or nine years now. IPv6 support is found throughout Mac OS X and is fully supported in the Extreme base station--so fully, Ars Technica's Iljitsch van Beijnum reports, that by default every Mac OS X computer that connects to a new Extreme gateway will be fully reachable through tunneled IPv6 from the rest of the Internet.
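To see why tunneled IPv6 makes machines behind the gateway globally reachable, consider the common 6to4 mechanism (my assumption here; van Beijnum's report should be consulted for the Extreme's exact method): the gateway's single public IPv4 address deterministically yields an entire routable IPv6 /48, with room for every device behind it. A minimal sketch:

```python
import ipaddress

def sixtofour_prefix(public_v4: str) -> ipaddress.IPv6Network:
    """Derive the 2002::/48 prefix a 6to4 gateway obtains from its
    public IPv4 address (per the 6to4 scheme, RFC 3056)."""
    v4 = ipaddress.IPv4Address(public_v4)
    hi, lo = int(v4) >> 16, int(v4) & 0xFFFF  # split the 32-bit address in two
    return ipaddress.IPv6Network(f"2002:{hi:x}:{lo:x}::/48")

# Hypothetical WAN address from the IPv4 documentation range:
print(sixtofour_prefix("192.0.2.1"))  # 2002:c000:201::/48
```

Every host behind that gateway can then get a globally unique address within the /48--which is precisely why a default of full IPv6 reachability, with no firewalling, raises eyebrows.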
The soon-to-be-approved Draft 2.0 for 802.11n will protect 2.4 GHz legacy networks well, but is tuned for 5 GHz: A large part of the horsetrading involved in the IEEE Task Group N's work between Jan. 2006 and Jan. 2007 centered on making sure that Draft N didn't beat up its older relatives, 802.11a, b, and g. The constraints placed on 2.4 GHz will naturally steer new network deployments into 5 GHz, a better band for video streaming around the home as well.
I spoke with Atheros, Broadcom, and Metalink recently, along with the Wi-Fi Alliance, to look at how the current Draft N will protect legacy networks, and where N will take the consumer markets. (Last month, I wrote lengthily about some of the technical issues that make 5 GHz appealing, but may also constrain it.)
St. Petersburg, Fla., picks the Atlanta firm to build out a network covering 60 square miles: EarthLink chalks up another win. Everyone needs a superlative, so St. Pete becomes the first of the 10 largest cities in Florida to sign up for a citywide wireless network. Part of the deal has EarthLink locating "its Gulf regional distribution office" in the city.
The upcoming test areas for Wireless Silicon Valley will include use of the 5.9 GHz automotive band: The project has emphasized public safety and personal access, but it was clear from the get-go that every form of wireless will get a workout, with Cisco and IBM having the opportunity to build systems that they could then sell worldwide. The reserved 5.9 GHz band will allow automotive telemetry, so that cars can provide real-time information to centralized systems, warn drivers of problems, and provide traffic information that's highly localized.
RFID tags were supposed to be cheap and easy to use by now for logistics: But even with Walmart requiring top suppliers to use the radio tags, and equipping several warehouses for scanning, the effort is still nascent. Suppliers don't want to criticize Walmart, but it's clear that there's no return on investment due to a lack of full integration of RFID into existing software systems for handling inventory, shipping, and tracking, and to the continued high per-tag cost. RFID tags used in this fashion are disposable.
As with Bluetooth, the hype preceded the utility. With a bazillion Bluetooth devices on the market, automotive integration, and audio use, you can't find anyone now declaring Bluetooth dead, as was the case even a year or two ago. (Bluetooth still needs to evolve, of course.) Likewise, it's not that RFID has failed, but rather that Walmart's efforts have outstripped the pieces necessary to provide a real return on investment for either the retail giant or its suppliers.
Privacy advocates ask more of new Wi-Fi networks than of incumbents: Greg Richardson is the usually poised head of Civitium, the firm that has written many of the requests for proposals (RFPs) used by municipalities around the US and internationally to solicit bids from companies to build, typically at the companies' expense, metropolitan-scale Wi-Fi and wireless networks. San Francisco has made him lose his cool over what he describes as "the far-left viewpoints being expressed by the ACLU (on electronic consumer privacy) and ILSR (on public ownership)" (that's the Institute for Local Self-Reliance).
Richardson points out that the far-right opinions initially dominated the debate over whether cities should get into the business of building broadband networks. These "far-right" opinions were largely, but not entirely, issued by organizations that either didn't reveal their funding but are known to be sock puppets of incumbent interests, or those that did disclose their funding, which included incumbents.
In San Francisco, the left-leaning elements have come into play in a way that Richardson thinks is outrageously unfair. In this case, however, there's no allegation of filthy lucre; it's pure ideology. The opposite of a sock puppet, which appears to have no witty name.
Civitium built the San Francisco RFP, and has apparently been involved in the negotiations, so Richardson is in a position to know what horse trading took place. He argues on the privacy side that not only does EarthLink's agreement offer more protection than any similar agreement anywhere else, but that the incumbents in SF aren't subject to anything like the protections that EarthLink will offer. (There are differences, of course, in what you can track on a Wi-Fi network versus a home broadband connection, but cellular data networks have much in common in terms of privacy given up.)
Richardson says the EarthLink agreement with SF allows additional privacy standards to be applied, but only on a non-discriminatory basis against "all similarly situated providers of broadband service." So why just pick on the new guy? Why not make AT&T and others toe the line, too? "What was the Board doing about electronic consumer privacy in San Francisco before the EarthLink agreement was delivered to them?" Richardson asks.
On public ownership, Richardson paints the folks at ILSR as, more or less, socialists, wanting public ownership and the public realization of potential profits. The conservative side of the debate has often focused on whether public bodies should bear the risk and, when successful, reap the reward of public ownership of facilities typically handled in a marketplace. The liberal side usually expects that public ownership produces more egalitarian access to any given facility (quality schools, broadband, bus service, etc.), and that so-called profit represents market inefficiency for a basic service that should, in fact, be conserved for taxpayers (in the form of less taxation) or government (in the form of greater resources to achieve more).
I don't pretend to be an economist, but I do know that I would prefer that cities hire companies that have the expertise to build networks of this scale, and avoid as much risk in these early stages as possible. One example ILSR widely cites is St. Louis Park, Minn., which hired private firms to build a public network that will be fully owned by the city. Or St. Cloud, Fla., which has a free network built and owned by the city. For smaller towns, especially those that might lag in broadband adoption, city-owned networks have less risk and lower overall cost and complexity. There are good cases to be made for cities of a few tens of thousands to take the matter into their own hands, because they're increasingly unlikely to find a firm with substantial experience willing to take the risk for a small population of users.
Public ownership is usually recommended when there's a large civic benefit that can be quantified and understood, and where private partners are either unavailable or unwilling to participate in sharing risk. The fiber-optic project that San Francisco's Board of Supervisors promotes is an interesting case in point, and reminds me of the big "risk" that Tacoma Power took in building their Click Network. I cite Click all the time, because while the network showed up in many sock-puppet and non-funded reports arguing against muni ownership, it's actually a big success.
Tacoma Power needed to upgrade their electrical grid, and could have justified building out a fiber-optic network practically without any resale of broadband. In the mid-1990s, when the plan was hatched, it took 18 months to get a new phone line in Tacoma, and the incumbent telco and cable operator had no plans for upgrades. The power utility also expected deregulation would force them into financial straits. Thus, the perfect storm: They needed a new, competitive business; the city needed better telecom and data infrastructure; and the grid needed "smarts."
Finding that perfect storm for Wi-Fi in today's climate is much more difficult.
I can't argue that EarthLink's deal with any city is perfect, nor that any of the major Wi-Fi operators' deals are. The question is always: what is a city giving up by entering a long-term arrangement that is essentially a franchise, in that the likelihood of competing networks with similar characteristics becomes practically zero once a municipally authorized network is built?
Richardson's rant is a good read, and I'll be curious as to the fallout.
My parting remark? New York has been trying to put in automated, self-cleaning toilets for well over a decade, with firms vying for the opportunity to install these rest facilities, with lucrative advertising paying the cost. Wrangling over fine details, massive changes in plans, and special interest groups have led to one result: No toilets. In the meantime, other "hotspot" public toilets have sprung up, paid for by business improvement districts, private enterprise, or specific transportation authorities. Sound familiar?
A little news on the Capitol Corridor rail line Wi-Fi project in California: The rail line runs from Sacramento to San Jose, and they want Wi-Fi on board. Although tests were originally planned for last fall, it looks like the first was run Feb. 3 (I was invited down to watch, but couldn't make it). Nomad Digital, which has a WiMax installation in the UK in partnership with T-Mobile on the Brighton train line, demonstrated its technology using a train car borrowed from Caltrain. Caltrain ran a test a few months ago with Nomad and others; it runs from San Francisco south to San Jose (on the southern peninsula of the bay).
The Nomad system could take two years to deploy, once a contract is awarded. There's no deadline on that being issued. Capitol Corridor's efforts are being examined by many other agencies in the state, and a contract for their line could be used as a model for expedited deployments elsewhere.
The sensational headline on the news story "Amtrak Wi-Fi service could prevent deaths" refers to the relatively high number of people killed on Caltrain tracks, 17 of them last year with 9 believed to be suicides. A video monitoring system that points at crossings could give engineers enough time to slow down and avoid collisions, intentional or otherwise. Collisions can also cause derailments, of course, so there are more than a few lives that could be saved in a major accident.
Caltrain is looking into using video monitoring to automate train control, which would allow them to exceed a limit of which I was unaware: Human-operated trains can't go faster than 79 mph.
Alereon's CEO posts love letter to ultrawideband: The FCC approval for the use of UWB technology in the US was approved five years ago on this day of love. Alereon head Eric Broockman gives a few noogies to Freescale--which he says is out of the UWB business altogether, something rumored for months now--and then blows kisses to the crowd. UWB in the form of Wireless USB is imminent. Yes, I know it's been "imminent" for anywhere from two to four years. But there are actual working chips, actual working prototypes, and products lurching toward market.
Anticipation, an-tic-i-pay-ay-shun, it's making me way-ay-ay-eight (SF Examiner, SF Chronicle): San Francisco becomes a case study in how not to build a city-wide Wi-Fi network. Despite announcing interest in metro-scale Wi-Fi at the same time as Philadelphia, San Francisco is now about a year into the process of moving from a winning bidder to a fully approved agreement. Phila. took many months, too, but they were the earliest, biggest city, and in the end had an agreement that all parties seemed satisfied with, and with little rancor as the process came to a close. SF, on the other hand, has seen dissent from the start, beginning with the redaction of bids made during early RFI and RFP stages, and proceeding now to the interest by the Board of Supervisors in a city-owned network, and in starting up a citywide fiber-to-the-home plan.
In the latest turn, the Board postponed to March 20 what was essentially a no-confidence vote in the plan, which had been scheduled for today. This vote was independent of approving the actual contract, which should now happen simultaneously. SF has 180 days to act from when the city's Public Utilities Commission approves its part of the agreement, which is less controversial and expected in a few days. After 180 days, either the city or EarthLink can walk away from the agreement.
The reason for this post will become clear in a few days when a review I've written is posted on another site: It's an occupational hazard for someone who writes Wi-Fi Networking News to, you know, work with Wi-Fi. Which means that I'm regularly put in the position of being sent a loaner router for review--or, when the company is recalcitrant, buying a unit at retail--and trying to put it through its paces. I'm no Tim Higgins, but I've been working with networked hardware and Ethernet since the late 1980s, so I sort of know what parameters to test.
But I've been stymied in a few instances in testing, and I finally isolated the factor. My network is a bit weird.
Let me break it down. All Wi-Fi gateways designed for homes and small offices have at least two Ethernet ports: one is the wide area network (WAN) port that connects the router to a larger network, such as an office network, or to the local area network (LAN) port on a DSL or cable or other broadband modem.
The other one or more Ethernet ports are LAN ports. Most gateways now--since Apple finally caved in to the trend--have three or four Ethernet ports in a switched configuration. An Ethernet switch can dedicate full capacity in each direction for each combination of ports. So you should get full Ethernet speed between any two connected devices in both directions.
Now here's where a problem comes up. If you pass traffic at Ethernet speeds between the LAN side of a Wi-Fi gateway (either a directly connected Ethernet device or a wirelessly connected adapter) and the WAN side, you typically see a slowdown. Why? Because either the hardware or software can't keep up with pushing traffic between LAN and WAN.
In the case of hardware, it's when the gateway is designed to have a physically separate WAN port and the bridge between the built-in LAN and Wi-Fi service and the separate WAN port can't route at full Ethernet speeds. With software, this problem occurs when you have Network Address Translation (NAT) turned on, and the translation from a public IP address to the private addresses is unable to keep up with the network demands.
Now most people don't get bitten by this particular flaw, which I've now found in multiple gateways, for three reasons.
First, home users typically don't have a WAN that's faster than the restricted throughput on the WAN port. That may change as fiber pushes out.
Second, most larger networks, such as those in businesses and college campuses, don't use NAT on the gateway. They might even use dedicated access points, which have no DHCP or NAT, and which have a single WAN Ethernet plug. If they do use a gateway, they turn off NAT, so when the WAN limitation is in software, they don't see the throttling; the full Ethernet speed can be achieved. In my "weird" network, I have static IPs on the larger network, and in testing I typically plug my larger network into the WAN port and use NAT on the LAN side.
Third, only recently have we seen Wi-Fi that exceeds 20 to 40 Mbps, typically the restricted LAN-to-WAN speed. And we've only recently seen gigabit Ethernet (1000 Mbps) added as a feature to high-end Wi-Fi gateways. So even on networks which are configured in my weird way, the limitation wasn't visible.
Thus, it's unlikely that most users and offices will get bit by this glitch.
My new testing regime, to avoid hours of effort I spent recently to figure out this problem, is as follows:
Situation A: DHCP/NAT turned on. Test intra-router connections: wireless to LAN and back, LAN to LAN on the Ethernet switch. Extra-router connections: wireless to WAN, LAN to WAN, and back again.
Situation B: DHCP/NAT turned off. Retest same parameters.
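That matrix is easy to enumerate in a few lines. This is my own sketch, not anyone's actual test harness; the path labels and the `test_matrix` helper are invented just to show how the runs multiply out:

```python
from itertools import product

# Traffic paths to exercise; the first two stay on the LAN side of the
# gateway, the last two cross the LAN/WAN boundary where the bottleneck hides.
PATHS = [
    ("wireless", "LAN"),   # intra-router
    ("LAN", "LAN"),        # intra-router, across the Ethernet switch
    ("wireless", "WAN"),   # extra-router
    ("LAN", "WAN"),        # extra-router
]

def test_matrix():
    """Enumerate every run: each path, both directions, NAT on and off."""
    cases = []
    for nat, (a, b) in product(("NAT on", "NAT off"), PATHS):
        cases.append((nat, a, b))   # measure throughput a -> b
        cases.append((nat, b, a))   # and the reverse direction
    return cases

for nat, src, dst in test_matrix():
    # In a real run, you'd invoke a throughput tool between hosts on the
    # src and dst segments here and record the measured rate.
    print(f"{nat}: {src} -> {dst}")
```

Sixteen runs total, which is why isolating this problem by hand takes hours if you don't plan the matrix up front.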
It'll become clear in a couple days why I posted this analysis. I advise all manufacturers to revisit their testing regime and make sure that they are testing the "weird" case of throughput from wireless to WAN and LAN to WAN. You might find something interesting.
Los Angeles announces city-wide Wi-Fi network plan: The plan is in its infancy, with step one listed as "hire expert to figure out how to get network built." Or, rather, in the actual press release, "hire a technology expert who will join a city team to structure a proposal to attract and engage the private sector." Or what I said. The goal would be build-out starting in 2008, completion in 2009. The city occupies 470 square miles and has 4m residents. The release includes mentions of business, municipal, and public safety purposes, in addition to basic Internet access for residents. Bragging rights are mentioned--largest in terms of area plus residents. Houston's deal with EarthLink, announced today, would provide a larger area (600 sq mi), but only 2m residents covered. [link via MuniWireless]
As expected, Palo Alto joins San Carlos as the second of two test sites for the vast network project: The city council approved a four-month test in Palo Alto.
The biggest city network picks its vendor: Houston will work with EarthLink to unwire its 600 square miles. While larger projects are underway or in planning stages already, this will be the largest city network under development anywhere in the world. Larger projects have typically been county-wide proposals that typically include much less dense coverage (or necessity for it) across large areas. Wireless Silicon Valley will span 1,500 square miles, but requires individual agreement by 41 municipalities, and will have sparser coverage over parts of its range based on population density. Houston, on the other hand, is dense as baked clay.
The plan is for the network to be complete by spring 2009. The city council could consider the contract as early as this month. In other cities, there have been large gaps between picking a vendor and finalizing a contract with the executive branch, and then months or longer to have a city council or other governing board approve the contract. Then utilities get involved in providing pole access and rights of way. In this case, the Houston Chronicle reports that the contract is already in hand, which must have been negotiated quietly (to reduce pressure and expectations) over the last several months since bidding was narrowed to two firms, one of them a local outfit put together for this project.
The article ends with two asserted statements that I take issue with.
"Networks in other cities have been criticized for spotty coverage and weak signals. Because wireless beams often can't transmit signals through buildings, customers accessing networks indoors sometimes need transmitters inside their residence or business.": The criticism is true, but the "sometimes" is not. I am hearing increasingly that the majority of residential users will need bridges to receive service unless they are adjacent to a node. The addition of 802.11n to laptops and desktops could obviate bridges in some cases; I'll be curious if that's the case.
Secondly, "Another drawback to citywide networks is that they can provide cover for online criminals, because there's no way to track their activity to specific locations." As I noted yesterday in critiquing a Washington Post article, the geographically tied nature of access point usage means that law enforcement, armed with subpoenas, will in fact be able to gather much more information about where illegal activities were conducted.
The metro-scale, multi-radio-node equipment maker gets an investment from a Mobile WiMax pioneer: Interesting news. All the companies that make gear for metro-scale markets like to talk about being radio agnostic, but the radios are generally 802.11a (5 GHz) and b/g (2.4 GHz). Strix is no exception. But they (and BelAir) have always talked about how they have room in their chassis to slip in wireless cards using other technology.
The investment amount by Samsung Ventures isn't disclosed; their portfolio is $400m strong. The company's venture arm is putting money into Wi-Fi and WiMax firms. The VC site doesn't have a link on its Portfolio Companies text, however.
I have an interest in utility poles: It's well documented on this site that I have a small obsession--an attachment, let me pun--to utility poles. In case after case around the country, we see that access to or the lack of access to poles has led to significant delays in rolling out Wi-Fi and WiMax networks. It's one of those things that people in the utility industry knew about and probably started laughing when service providers talked about how easy it would be to mount thousands of pieces of hardware all over creation using light and power poles.
In fact, it's very very hard. Which is why this case of DQE Communications Network Services at the FCC (PDF) against North Pittsburgh Telephone Company (NPTC) is so interesting. The FCC found in DQE's favor when NPTC, a local exchange carrier, denied pole access. NPTC said in response to a 2005 request for pole access that DQE wasn't a "telecommunications carrier"--despite DQE's specific authorization in Pennsylvania as such--and thus DQE wouldn't get the protection of the 1996 law that requires nondiscriminatory pole access.
The FCC found, in short, that a telecommunications carrier can engage in services not covered by that definition without losing its rights. And because the Penn. Public Utility Commission granted specific authorization, that's prima facie enough for the FCC that DQE is a telecom carrier. Interestingly, the FCC makes the case in its order that because DQE is governed by tariffs that it agreed to with the Penn. PUC, that even its pure data offerings constitute the kind of telecom service that's protected by regulation--they offer their data services "indifferently and 'indiscriminately' for a fee," which meets the FCC's interpretation of that definition.
What this means, deciphered a bit, is that if you're willing to put yourself under tariffs and rules and offer the right kind of base service--be regulated as a telecommunications carrier--you also get the benefits of regulation that work in your favor. The downside is having less control over the rates you charge; information services can price however they want, and incumbents have fought hard to move their services from the telecom pile to the information pile for that reason.
I don't think companies will rush out to their PUC, however: DQE contacted NPTC in July 2005, and filed its complaint with the FCC in Sept. 2005. This order took 16 months from then to appear.
This appears to be a news article, but it's got a strong slant towards monitoring: The article presents a sensational opening! Child porn being downloaded! A warrant! Pounding on a door! But it's only an elderly lady, and she's not even stealing music. No, we get the specious fact that the police are powerless to apprehend a villain because the elderly lady is fiendishly operating an unprotected Wi-Fi access point. "Perhaps one of those neighbors, authorities said, was stealthily uploading photographs of nude children. Doing so essentially rendered him or her untraceable," the reporter writes. Not so much: Traditional police work coupled with an exact geographic location should have provided enough clues.
But apparently open Wi-Fi is a huge danger because this reporter hasn't heard of anonymizer and randomizer services that allow any public terminal--Wi-Fi, college campus, or even, gasp, a workplace--to be used in a fairly untraceable manner. In fact, probably more untraceably than hopping on a free or open Wi-Fi location and abusing it, assuming security through obscurity.
This quote is rather telling in the same manner: "Unsecured networks are a treasure trove for neighbors," said John Sheehan, program manager of the CyberTipline at the National Center for Missing and Exploited Children. "Those looking to access illegal content obviously feel they have anonymity" and can get away with it.
Again, if neighbors are gaining access, then it's easier to trap those neighbors through monitoring. They may "feel they have anonymity," but they have quite a bit less because of proximity.
And here's a nice piece of opinion right smack in the middle of the article: "Open wireless signals are akin to leaving your front door wide open all day -- and returning home to find that someone has stolen your belongings and left a mess that needs cleaning."
No, it's more like having an endless pot of coffee that you're willing to let anyone pour a cup from, even though you're paying for the electricity. The coffee is essentially free, because you're paying a fixed amount for unlimited java. The "front door wide open" argument applies when you intend to close your network and fail to; don't understand that your network can be closed (and if you did, would prefer to); or you're an incumbent Internet service provider arguing that bits aren't bits: one set of bits must be paid for differently than another, and sharing access is theft.
(I don't think that every ISP must allow sharing, and I do agree in following the terms of service agreed to. But given that the terms for broadband are essentially coercive due to the effect in the US of having no nondiscriminatory access by competitive providers for wireline access to homes, that means that the market hasn't decided that not allowing sharing is a reasonable way of doing business.)
The reporter also notes, "Closing cases is more difficult if the IP address originated from a wireless signal because it often leads back to the owner of the network instead of the criminal." That's only true if you're thinking about abusers as working from single locations. The writer notes that with an increasing number of open access points, the problem will get worse. Again, that assumes that you have thousands of people roaming across extremely wide areas to gain access. It's more likely that people act within a relatively small distance from their home, making a pattern of abuse easier to track down: "Hey, that guy downloaded child porn here, here, and here, so he probably lives about here."
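That "he probably lives about here" reasoning is simple enough to sketch. The coordinates and the plain averaging below are invented for illustration, not any tool law enforcement actually uses; real work would weight observations by signal strength and frequency of use:

```python
# Given the coordinates of access points where the same abuser was
# observed, average them to get a starting search area.

def rough_center(points):
    """Centroid of (lat, lon) observations -- a crude estimate of where
    the abuser lives, reasonable over a neighborhood-sized area."""
    lats = [p[0] for p in points]
    lons = [p[1] for p in points]
    return (sum(lats) / len(lats), sum(lons) / len(lons))

# Hypothetical sightings a few blocks apart:
sightings = [(47.610, -122.330), (47.614, -122.326), (47.606, -122.334)]
lat, lon = rough_center(sightings)
print(round(lat, 3), round(lon, 3))
```

The point isn't precision; it's that each additional sighting shrinks the search area, which is the opposite of the "untraceable" claim.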
The interesting point here is not that this story is biased towards a particularly naive view of law enforcement or the idea that there must be millions of people engaged in illegal activity over open Wi-Fi networks; nor that open Wi-Fi networks are de facto bad and/or unintentional; nor that this story has been told better, with greater balance, in other publications over the last couple of years.
No, what's interesting is that more and more home networks are being locked down. In informal and formal surveys by myself and companies involved in monitoring this sort of activity, an increasing number of home networks are locked down with strong WPA security, making them more or less impenetrable to even determined access. (WPA isn't perfect, but it takes a fair amount of effort to break a weak WPA passphrase--too much effort for a casual "anonymous user.")
So I would have cast this story as--how will law enforcement adapt when people can connect from anywhere at high speed and get on and off networks fast? How will individuals and companies who want to share access, whether from home or in a coffeeshop or even at an entire airport (Las Vegas, Phoenix, etc.) cope with law-enforcement demands? What's the rate of change for securing home networks against easy access? Those questions would probably have interesting answers.
Long Island's wireless plan has experienced critic: Craig Plunkett, a long-time Wi-Fi and wireless network operator in the Long Island and greater New York area, criticizes coverage in the local paper. He notes that while the county says that "we're not paying for it," meaning the network that will be bid out, the wireless consulting firm Civitium produced the bid documents, which unfortunately included traces of a previous proposal for Chicago. Plunkett's point isn't to complain about Civitium, but rather the lack of transparency in the plan's development, which includes creation of a local development corporation and other plans that weren't debated before the proposal was released. Plunkett runs one of many local firms that might have an interest in bidding on the project (in some part or combination), and he objects to a lack of consultation of homegrown expertise.
St. Louis plan proceeds: St. Louis votes for AT&T without putting out an RFP. AT&T will first build a two-square-mile test network--from downtown to St. Louis University--and then extend to another 60 square miles over three years. The network will be free for 20 hours of use each month.
Yuma Wi-Fi firm would rather nobody know about the free service: The folks at Kite, a division of MobilePro, say that April will feature free Internet access, but they're not publicizing it (too late) before the public rollout on May 1. No devices have yet been installed. Just for confusion's sake, MobilePro is the parent firm of Kite, which "powers" WAZMetro (Wireless Access Zones). At one point, Kite was NeoReach--in fact, NeoReach is still noted on the WAZMetro site's privacy page.
The Federal Trade Commission is looking into how broadband speed advertising should be enforced: The San Francisco Chronicle says that a two-day workshop by the FTC will look into the "up to" rates and pricing that broadband firms promote. While the price is fixed, the minimum speed isn't typically stated, and no speed at all is promised. The reporter tries to track down whether any regulators track the "up to" rate and complaints, and finds that the California Public Utilities Commission lacks jurisdiction (broadband is nationally regulated), and the FCC said it doesn't regulate ISPs. So the ball is in the FTC's court, probably in terms of advertising and delivery. [via BroadbandReports.com]
The high-def streaming media adapter gains good network security: Since I regularly criticize consumer electronics and handheld devices that lack full Wi-Fi security stacks, I should also point out when that changes. The Mvix USA MX-760HD is a kitchen sink full of audio and video streaming options that work with high-definition up to 1080p. It can even hijack a video DVD in a computer's drive and play it using an encrypted stream (and a licensed process, the company says).
But they didn't have WPA, although an update was promised. Now the $300 device can be upgraded at no cost to both home and enterprise WPA (WPA Personal and WPA Enterprise, which is WPA over 802.1X). Good going!
Boingo launches $8/month worldwide VoIP over Wi-Fi plan: The firm had soft-launched this offering before, providing it as an option for Belkin's Skype phone and offering details to the press. It's now formally out there as Boingo Mobile--flat-rate, non-metered, worldwide Wi-Fi phone access. Boingo's Internet access service is a flat $22 per month for US locations, but has metered rates at many non-US hotspots.
Boingo Mobile can be downloaded as a software add-on for Windows Mobile 5-based smartphones and PocketPC handhelds. The company expects operators and handset makers to offer integration, too.
At the 3GSM World Congress in Barcelona, Spain, Boingo's technology was shown running on the Symbian OS, the smartphone platform that powers 70 percent of such phones worldwide (and almost none in the US). They will be demonstrating the service on a Nokia S60--70m of this series of phone are already on the market.
Also at the conference, Boingo announced an operator-focused server platform that allows remote setup of the Wi-Fi side of handsets without the user having to make any changes themselves. This allows operators to customize the phone's service plan, or remotely enable service when an existing user with a capable handset wants to turn on Wi-Fi access.
The Wi-Fi Alliance says that nearly 100 handsets are certified: The group has certified 82 dual-mode handsets and 10 Wi-Fi-only phones. The idea of certifying voice handsets that incorporate Wi-Fi allows the alliance to ensure both interoperability and better performance. Frank Hanzlik, the alliance's executive director, said in an interview that this testing helps the manufacturer produce devices that function better in difficult RF environments, as well as align the phone's function relative to Wi-Fi gateways. The alliance has also been working closely with the CTIA, the cell industry's trade group.
Hanzlik said that he has been working to raise awareness of the WMM (Wireless Multimedia) extensions that allow voice packets to achieve priority across a network, WPA2 security, and the special WMM Power Save mode, which can extend battery life by 25 to 40 percent on a handset through better management of unnecessary communications with a gateway. Hanzlik expects over time to see WMM and WMM Power Save in more gateways. WMM Power Save could be a simple upgrade for most routers, as it requires no changes in the radio. Incompatible power save modes can actually waste power, and the alliance would like all makers to move towards their certified version.
For large-scale hotspot networks, moving to WMM Power Save could dramatically improve the experience of mobile users making Wi-Fi calls. "When you look at these very, very large operators like T-Mobile here in the US, or some of the folks in the Wireless Broadband Alliance [a worldwide consortium of hotspot operators], we're trying to get the word out to these folks" to upgrade their networks or plan to include WMM Power Save from the beginning.
Crazy Apple Rumors Site suggests that 802.11 is your problem: "Look, it’s one thing to be 802.11n. It’s another thing to be an 802.11n enabler."
The city of San Carlos will be the first "concept city" for the Wireless Silicon Valley Project: The 1,500-square-mile effort will begin with a couple of square miles, one of them in San Carlos. The test will last 120 days from commencement, which wasn't announced. The 41 entities in the area to be served are still working on a model agreement; this test phase gets work started before that agreement is finished and then separately executed by many, many executives and councils.
NTT DoCoMo said they hit nearly 5 Gbps between a transmitter and receiver, with the receiver moving at 10 km/h: A year ago, they hit 2.5 Gbps. The new device doubles MIMO antennas from 6 to 12 and improves signal processing. The same 100 MHz of spectrum was used. The company will release details at next week's 3GSM World Congress in Barcelona, Spain. 4G had been slated to launch as early as 2010, but a Super 3G flavor of WCDMA will precede it with 100 Mbps speeds by 2010.
Linux, *BSD, and other Unix variants have lagged in Wi-Fi support due to chip vendors' stated concerns about access to the low-level radio functions on their chips: But a meeting last month in London, the Linux Wireless Summit, apparently has helped move development along. DesktopLinux.com reports that the meeting included Linux kernel developers, and representatives from Broadcom, Devicescape, Intel, MontaVista, and Nokia. The summit is part of an effort to standardize parts of Linux for reduced maintenance and complexity, as well as greater functionality.
The summit's organizer is quoted and paraphrased as stating that the FCC will only certify Wi-Fi devices that have a closed-source component for handling low-level radio settings, such as frequency choice and power levels. I don't know that there's actual evidence for this, and would love to see some. That would be an extra-regulatory step for the FCC, as there is no defined requirement for releasing radios that cannot be modified; the onus is typically on the purchaser, who must keep modified hardware within regulatory limits and who suffers the penalties for failing to conform.
For instance, worldwide 802.11a equipment can use the 4.9 GHz band in some countries; it's limited to public safety purposes in the US and military uses elsewhere. Using 4.9 GHz in some parts of the world could get you thrown into jail for a long, long time.
It's interesting that these considerations are now being made openly. A couple of years ago, I was provided with some of this reasoning from sources I won't identify, but told that the concerns about the FCC and other regulators couldn't be discussed publicly.
You can read some of this history in a January 2005 post that starts off discussing an Economist article criticizing Atheros and Broadcom.
Skyhook Wireless breaks into the big time with Sirf deal: Skyhook has a constantly updated database of coordinate-tied Wi-Fi signals that allow it to produce a GPS-like set of results based on a scan of the vicinity from a laptop or handheld. Sirf supplies GPS chips to most of the location-enabled devices--from TomTom, Garmin, Magellan, to name a few--and cell phones in the world. Sirf will integrate Skyhook's system so that mobile devices that use GPS and have Wi-Fi radios can add those results to the mix.
Skyhook's Wireless Position System (WPS--yes, another technology called WPS) will be available as an integrated option to carriers that want to provide location-based services like directions and nearest-business services to phone handset and handheld users.
Skyhook chief Ted Morgan said in an interview that this deal provides full legitimacy to their technology approach. "It's the leading GPS chip company saying yes, there are important areas where GPS doesn't work great, and Skyhook is the answer to it," he said.
"It's taken a couple of years for us to win over the GPS world. They've got 15 to 20 years of experience building this system, and to come along with an entirely new model is always treated with some level of suspicion," he noted.
The deal rides on the fact that dual-radio GSM and Wi-Fi phones are already common and likely to become more so. "We think the market for Wi-Fi hybrid cell phones is going to be fairly healthy," Morgan said, citing the "dozen phones" in the works for BT, which has rolled out a converged calling service, and T-Mobile's US entry into the converged market.
Morgan said that Skyhook remains a complement to satellite-based location placement, but that WPS can overcome some of the irritants that today's users of devices with GPS have to deal with. Specifically, Morgan said, GPS devices require as long as a couple of minutes when initially fired up to obtain good satellite fixes.
Skyhook's system requires a few seconds. Skyhook doesn't associate with access points, so signals far too weak to be used for a network connection are still viable data points that correspond to its database of 15 million APs. "If you just want to look for where the nearest ATM machine is, you're not going to stare at your phone for two minutes," Morgan said.
The Skyhook system can assist GPS receivers, too, even by providing a general geographical location, which, in turn, allows the GPS receiver to know which GPS satellites should be receivable and where they are located in the sky. Cell phone networks offer a similar sort of assisted GPS using cell-tower locations, but Morgan said that assistance typically works only on the cell operator's home network, whereas the Wi-Fi option would work anywhere Skyhook has coverage.
Morgan explained that GPS systems can be optimized to take a variety of information to produce better results, and that this works especially well when entirely different technology is employed. Sirf's system will be able to integrate WPS coordinates with GPS coordinates for better accuracy and more quickly than GPS alone.
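Skyhook's actual algorithm and database are proprietary, but the general idea behind Wi-Fi positioning--look up the coordinates of each access point heard in a scan, then estimate position as a signal-strength-weighted centroid of those known locations--can be sketched in a few lines (all BSSIDs and coordinates below are made-up illustrations, not Skyhook data):

```python
# Toy Wi-Fi positioning sketch: estimate a device's position as the
# signal-strength-weighted centroid of access points it can hear,
# looked up in a database of known AP coordinates. Hypothetical data;
# Skyhook's real algorithm and database format are proprietary.

# Known AP locations: BSSID -> (latitude, longitude)
AP_DB = {
    "00:11:22:33:44:55": (47.6097, -122.3331),
    "66:77:88:99:aa:bb": (47.6102, -122.3320),
    "cc:dd:ee:ff:00:11": (47.6090, -122.3340),
}

def estimate_position(scan):
    """scan: list of (bssid, rssi_dbm) pairs from a Wi-Fi scan."""
    total_w = lat = lon = 0.0
    for bssid, rssi in scan:
        if bssid not in AP_DB:
            continue  # AP not in the coverage database
        # Convert RSSI (roughly -90..-30 dBm) to a positive weight:
        # stronger signals pull the estimate harder.
        w = max(rssi + 100, 1)
        ap_lat, ap_lon = AP_DB[bssid]
        lat += w * ap_lat
        lon += w * ap_lon
        total_w += w
    if total_w == 0:
        return None  # no known APs heard
    return (lat / total_w, lon / total_w)

fix = estimate_position([("00:11:22:33:44:55", -40),
                         ("66:77:88:99:aa:bb", -70)])
```

A hybrid system along the lines Morgan describes would then fuse an estimate like this with raw GPS fixes, weighting each source by its expected accuracy.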
The company covers about 70 percent of the US population using 200 full-time route drivers. The firm is expanding coverage into Europe and is "working on Asia." Morgan said, "We're going to expand the coverage according to those deals."
Skyhook's software provides a loop back to the firm's servers, so that scans of access points in the vicinity of a device are added to the database of locations. With GPS and WPS in one system, and with the potential of tens of millions of mobile devices deployed, Skyhook could obtain a vast amount of new information that further improves the accuracy and extent of their coverage.
Equipment with GPS and WPS will take a little time to reach market. "You won't see the major device makers until '08," Morgan said, but Wi-Fi-only phones could have the technology as early as the second half of 2007.
FCC Commissioner McDowell strongly encourages rural telcos to adopt wireless broadband: An increasing number of reports show that the universal service fund (USF) that derives its fees from urbanized telephone use to subsidize rural telephony is off the tracks, and likely to change significantly in structure. McDowell said to the National Telecommunications Cooperative Association's annual meeting that free markets failed for rural American telephony, and the USF was once an effective mechanism for fixing what was broken. With USF drawing from fewer people--VoIP providers get some exemptions, for instance--rural providers need to adapt.
He urged the providers to participate in the upcoming 700 MHz auction. It's very sweet spectrum--it penetrates well and goes long distances--and licenses in rural areas are likely to be available at reasonable prices, but the infrastructure to build out won't be cheap.
AirDefense likes to rub it in: At the current RSA Conference, an event devoted to understanding and improving secure communications and systems, AirDefense found 623 Wi-Fi enabled notebooks and mobile phones on day one, of which 56 percent were configured to automatically log on to commonly named Wi-Fi networks. They found seven rogue networks, two of which masqueraded as the official conference network, and one even had a forged security certificate--which must mean a server-side certificate that handles 802.1X authentication.
Update: AirDefense shot me an updated note about this. On day two of the conference, they detected 847 networks, 481 of which (57 percent) were open to evil twins. On day two, they also saw a spike in DoS (denial of service) attacks--85 of them. This included using CTS (clear to send), which forces other stations to hold off transmissions; deauthentication, forcing clients to reconnect; and jamming.
Later update: On Feb. 9, AirDefense released more information, noting that 1,137 of the 2,017 wireless devices they spotted at the conference could have been compromised. Many clients leaked information that would have allowed later off-line password cracking or network replay for access on their home corporate networks. AirDefense also noted 10 percent of laptops had unpatched software or disabled firewalls.
The Wi-Fi Alliance's head Frank Hanzlik recently made a tour of India: The reason? The country is poised for vast growth, according to a report released coincident with his visit. Annual sales of US$42m today will grow to $744m in 2012, or 61 percent per year on average. This excludes embedded devices and laptops. Hotspot access will grow as well, and WiMax trials are expected to turn into full-scale deployments, with mobile WiMax potentially playing an additional role. The 60-page report is available at no cost.
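The report's growth figure is easy to sanity-check: compounding from $42m in 2006 to $744m in 2012 (assuming a six-year window) does work out to roughly 61 percent annually.

```python
# Sanity-check the report's growth claim: $42m growing to $744m over
# an assumed six-year span implies a compound annual growth rate
# close to the 61 percent cited.
start, end, years = 42.0, 744.0, 6
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")
```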
Immunity releases $3,600 portable Wi-Fi penetration tester, Silica: The product couples a Nokia 770 Internet Tablet with a custom version of Immunity's CANVAS software. Other devices will be supported in the future. The software allows automatic detection of Wi-Fi networks, automatic connection, and automatic penetration. This is intended to be a white-hat product used to improve security at companies--or demonstrate flaws. The software can be enabled via a stylus, and then actively work while located in, say, a pocket. CANVAS has "hundreds of exploits," this article states, and Silica could, for instance, compromise a computer, and leave it set to be remotely controlled. The company vets purchasers.
There's a call among some San Francisco groups to yank Wi-Fi in favor of fiber: Public Net San Francisco wants the EarthLink deal to be canceled in favor of a publicly owned fiber network that would reach every home. SFLan's Ralf Muehlen said, "300 kilobits per second is so 1997; it'll be utterly ridiculous in 2023, which is how long Earthlink's monopoly will last." (SFLan is a high-profile, long-running project to share Internet access wirelessly; it's got a lot of high-profile participants, too.) The San Francisco Bay Guardian goes into some of the details of a fiber plan.
Esme Vos of MuniWireless.com argues for coexistence: Wi-Fi fills in while fiber is being built. Build fiber now as the first step in ensuring competition--and offerings like those she sees in her town of Amsterdam, The Netherlands.
Finally, the ACLU critiques the privacy aspects of the EarthLink deal. The organization wants more limits and less ambiguity about what information is collected. "The ACLU said a municipal Wi-Fi network should let users opt in or out of any service that collects data on what they look at or search for on the Internet, or their e-mail messages. There are no provisions for that in the paid or free service terms, it said."
Good interview with The Cloud's Owen Geddes, its biz-dev director: The Cloud is one of the world's largest hotspot networks, and started out with the notion of being a reseller to aggregators. Geddes obviously has an interest in the device-driven Wi-Fi market, as opposed to pure laptop Wi-Fi. He notes that Western Europe could move from 80 to 90 percent laptop Wi-Fi usage last year to 70 percent consumer electronics in 2008. The Cloud just introduced what's unfortunately rare--flat rate Wi-Fi, costing £12 per month. They're adding rates for devices ranging from £5 to under £10 per month depending on the device.
Geddes also mentions something that I've heard about from some US hotspot operators: the fact that music download services can't per se expect a free ride from Wi-Fi locations because of the bandwidth consumed. He casts it the opposite way: a 99p download price with no Wi-Fi fee. But that also means that the Wi-Fi hotspot shares in that download revenue.
The eminently sensible Dr. Bill Koslosky passes on the news that people with pacemakers and implantable defibrillators have nothing to fear from Wi-Fi networks: With hospitals increasingly deploying WLANs for mobile communications and data access, as well as less significantly for patient and visitor use, it's worth a worry. The good doctor notes that a session at the American Heart Association's annual meeting included results from a German study--which had no commercial funding--showing that even at the maximum signal output rates and closest distances, no "programming or telemetry functions" showed evidence of interference.
However, certain "noncritical pacemaker programming functions" could have problems at the highest output levels and closest positions--1 watt at 10 centimeters' spacing. It's unlikely that a full watt would ever be broadcast at a person from closer than 10 to 50 feet, since such power levels are most likely used with omnidirectional antennas mounted on roofs and streetlights. Still, the conservative recommendation was to avoid putting Wi-Fi access points near outpatient pacemaker clinics.
EarthLink lost $24.8m in its fourth quarter, but startup mobile operator Helio is the cause: The company saw revenue up 5% ($328.2m), with a critical 35% increase in broadband revenue, with just a 15.8% drop in dial-up ($145.2m). EarthLink earned $5m on $1.3b in 2006, down from $143m in earnings in 2005 on a slight revenue increase.
Dial-up revenue is a cash cow for the company. EarthLink's founder recently told me that the cost of providing dial-up is extremely modest even compared to the low rates now charged. While declines are inevitable, it's interesting to see a relatively small drop. The AT&T acquisition of BellSouth should push a lot of cheap DSL into territories only served by dial-up or expensive DSL/cable options, given the FCC agreement that AT&T signed, and that could accelerate dial-up's decline.
Helio was the driver of this big loss. The mobile virtual network operator (MVNO), a cell operator that doesn't own its infrastructure but purchases access from "real" operators, is a joint venture with SK Telecom, and had a net loss of $191.6m that quarter, with EarthLink booking half that loss. Helio is acquiring customers at a much slower pace than hoped, and will see losses for the foreseeable future, perhaps turning cash-flow positive by 2009.
They had 70,000 subscribers signed up as of Dec. 31, 2006, and 100,000 anticipated by mid-2007; the division expects 200,000 to 250,000 by the end of 2007.
Each Helio user, however, contributes an average of $100 per month in revenue versus $40 to $50 for major cell operators. While Helio has to purchase its time and data from other operators, it isn't directly saddled with the challenge of building and upgrading those systems.
EarthLink's metro-scale Wi-Fi business wasn't broken out as a separate expense, and they could not have received much revenue yet since few networks were operational even in part in 2006.
Fon is giving away 10,000 La Fonera routers to US residents for its birthday: The Spanish-based firm is building a hybrid of grassroots and operator-backed Wi-Fi locations and hotspots worldwide. The router is generally pretty cheap--about US$30 and €30--but this removes any barrier.
As usual, I do complain about Martin Varsavsky's characterization of their network as "on our way to becoming the largest Wi-Fi network [in] America" with 18,000 "Foneras ordered," because a Fon node isn't equivalent to most public hotspots. Some Fon nodes are identical to hotspots and "worth" as much in an apples-to-apples "largest Wi-Fi network" comparison. Others aren't. But I don't think Fon should be so hung up on being large; they should be concerned with being dense, and dense in the right areas.
Fon nodes in Seattle, for instance, are largely in residential neighborhoods, apparently in people's homes. In dense areas, that means that neighbors, especially in apartment buildings, will be able to use those Fon locations. But Fon doesn't encourage regular use, purposely pricing its day charge at about US$2, varying by locality. Fon wants its users to buy broadband lines from operators, and hopes that casual use and other forms of use (not fully explained yet, but VoIP and UMA are certainly components) will increase broadband lines rather than replace them. There has to be a backhaul, although Fon is backhaul agnostic.
For Fon locations that are in more heavily trafficked or traditional hotspot locations, Fon does offer the advantage of community (existing users seeking out the location) and simple revenue collection (for hotspots run in the Bill mode). Fon is certainly among the simplest and cheapest ways to set up a hotspot, with a low barrier to entry, although competitors like LessNetworks and now Whisher have alternatives, neither of which offers the potential for revenue collection (yet).
The terms of the giveaway limit routers to one per Fon user or shipping address (US residents and addresses only) until the 10,000 are depleted or March 31, 2007, whichever comes first. You can register and immediately obtain a router. The routers must be activated as Fon locations, but there's no stated penalty if you don't activate one; the terms ask that you pass the router on if you're not going to use it.
The city of Xi'an, capital of Shaanxi province, China, will get 1,000 Wi-Fi hotspots: Although the press release describes provider Along as turning Xi'an into a "Wi-Fi city," it's not a city-wide Wi-Fi network so much as an extensive network of hotspots. The city covers nearly 900 square miles, and 1,000 access points doesn't provide enough density for residential use, of course. The release says 7m people live there; Encyclopedia Britannica cites a 2003 estimate of city population at 2.7m. The company will add a number of hotspots at universities and "tertiary institutions," by which I have no idea what's meant.
Peplink last week announced its updated Pepwave Surf Series of wireless bridges for metro-scale networks: The new Pepwave Surf keeps parity with Ruckus Wireless's latest CPE (customer premises equipment) bridge, intended to bring signals of large-scale networks into the home. The rebranded Pepwave Surf now offers virtual SSIDs, which allows devices in a home to connect to the bridge, while it connects to the metro-scale network. Peplink said via email that their device dynamically adjusts power to use less signal strength on the home network. Peplink uses an omnidirectional antenna as opposed to Ruckus's MIMO approach.
Peplink also added an external set of "signal bars," green LEDs that show the strength of the Internet-connected network, making it easier to move the bridge to the optimum receiving position.
On the provider side, the new Surf models can be remotely accessed and tested by tech support and management tools to check on whether the bridge is active and functioning correctly, including collecting low-level signal information.
The two models with the "home access point" feature--virtual SSIDs--are the Surf AP 200 and AP 400 (200 mW and 400 mW, respectively); they retail for the same prices as under their former names: $189 and $289.
Wireless Silicon Valley may have two tests up soon: Palo Alto and San Carlos could have agreements in place to set up pilot projects within a few days, the Palo Alto Daily reports. Palo Alto and San Carlos would each host a one-square-mile testbed. It would take about four months to build a network of 35 to 40 access points in each location. This is a big first step in the project, as part of what's at issue is a model agreement that the 41 individual municipal entities in the project need to approve separately; that agreement is apparently fairly far along, this article reports. An IBM spokesperson estimated the network's cost at over $100m, which is somewhat higher than earlier estimates, but may reflect the partners'--Azulstar, Cisco, IBM, and Seakay--interest in testing more technology and systems than are strictly necessary, since they're footing the bill.
Toledo's conditions may limit bidders: Toledo may delay its bidding process to obtain more bids. At least one potential bidder, 20/20 Communications, won't participate because of specific demands the city has built into the RFP. An incumbent fiber and cable TV operator, Buckeye Express, may have too much of a natural advantage for backhaul. Buckeye is owned by the same firm that owns the local newspaper.
No RFP for St. Louis deal with AT&T? Esme Vos at Muniwireless.com says, "Show me the RFP." There was no public tender process, she notes. The mayor of St. Louis's blog simply notes he asked the board of aldermen to introduce and vote on a bill enabling the deal.
Meraki's inexpensive mesh-routing nodes are a hit: About 15,000 users are connecting to their $50 nodes ($100 for outdoor units) in 25 countries during their testing phase. The company didn't note how many routers were shipped, but it's likely between 1,000 and 2,000 based on their descriptions of density. Meraki's devices cost a tiny fraction of what metro-scale mesh networking equipment costs, and that's partly because they have fewer features and reach much shorter distances, requiring denser installations. But the point isn't necessarily blanketing a city, but rather putting a signal over a neighborhood, a village, or an apartment building. Meraki is a bit like powerline networking: Covering connected areas without a lot of infrastructure. You can listen to an archived podcast interview I conducted in Oct. 2006 with Sanjit Biswas and Hans Robertson, two of the co-founders.
Meraki received very positive coverage in Randall Stross's Digital Domain column yesterday in the New York Times, in which he looks at how the next billion people will be connected to the Internet. Stross compares the efforts of city-wide networks to outdated methods of lighting cities by using huge arc-lights high in the air. He cites a practical test in Portland, Ore., where 400 apartments were served by 100 Meraki routers, which works out to $13 a household for installation, and, Stross writes, about $1 per month per household for Internet access. Because this rollout was in association with a non-profit, that group is obviously dealing with the labor costs of installation, tech support, and maintenance. (Even with a very low failure rate, 100 routers might see a 1% failure per month or higher, especially in high-use conditions.)
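Stross's per-household figure follows straightforwardly from the hardware numbers, assuming all-indoor $50 units (any $100 outdoor units in the mix, plus installation labor, would push the real figure higher):

```python
# Back-of-envelope check on the Portland rollout: 100 Meraki routers
# spread across 400 apartments. Assumes the $50 indoor unit price;
# outdoor units ($100) and labor would raise the per-household cost.
routers, apartments, unit_cost = 100, 400, 50
per_household = routers * unit_cost / apartments
print(per_household)  # 12.5, close to the $13 cited
```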
Finally, Meraki announced $5m in first-round funding led by Sequoia; other investors weren't noted. GigaOm mentions that Google and Sequoia invested in Fon; Sequoia in Ruckus Wireless; and Benchmark Capital in Whisher.
Computerworld reports that in testing at airports, they found honeypots intended to lure unsuspecting users: I'm a bit lagging on this story, reported two weeks ago, but it's still relevant. The "Free Wi-Fi" scam involves password snatchers setting up fake Wi-Fi networks in public places, like airports, that use "free" in their network name. Connecting to these locations puts your machine at risk. Further, for Windows users, your laptop might connect in the future to other identically named locations without asking if you want to connect. The attacker can snarf unprotected passwords and unencrypted email, as well as infect your computer.
Computerworld cites security firm Authentium as having found dozens of "free," ad-hoc wireless networks of this sort at airports across the U.S. The firm told Computerworld that in multiple visits to O'Hare, they found over 20 ad-hoc networks advertising free service each time, and saw "fake or misleading" MAC addresses, the numbers designed to identify each Wi-Fi or Ethernet adapter uniquely.
The article offers specific advice on how to avoid this problem. The most prominent in my mind? Use a VPN. Several firms offer VPN-for-hire for travelers who don't work for companies that offer or require VPN use on the road. Try JiWire's Hotspot Helper (Windows only, $25/year) or WiTopia.net's personalVPN (Windows/Mac, $40/year), for instance.
(Disclosure: I have a very small stake in JiWire.)
Over at Macworld.com, I write about how AT&T may have a great futurebomb in the iPhone: It's all about keeping DSL and mobile customers from switching to other providers, reducing the number of bills, and cutting costs, all while increasing fees by providing incrementally better services. The iPhone could be the first tool down that path from any major cell operator.
Atheros designs Bluetooth chip aimed at PCs: Most Bluetooth chips used in computers are repurposed from mobile applications, Atheros claims. Their new product is more efficiently designed with a lower cost of goods and integrated flash memory.
Also features the Solid Gold Dancers: Broadcom said that they will offer a single chip with Bluetooth, Wi-Fi, and FM radios on board. The chip uses a 65-nanometer (nm) CMOS process, which means its circuits are tightly packed using the most common manufacturing techniques. Size has a relationship to power requirements. The Wi-Fi is a/b/g; the Bluetooth 2.0+EDR with 2.1 upgrades possible.
Update: CSR on Feb. 7 also announced a Wi-Fi, Bluetooth, and FM converged chip platform. The company released specific throughput figures, rare in the industry, noting that Wi-Fi by itself could achieve 23 Mbps in their chip designs, and Wi-Fi and Bluetooth together using "collision detection logic" would drop Wi-Fi down to 18 Mbps of net throughput.
On Feb. 7, Texas Instruments also announced a triple-threat, this time with 802.11n.
Reuters reports that Fon could move from grassroots to mainstream: Fon, so far, has built its tens of thousands of nodes mostly through individuals who obtain a router from them or flash an existing device with new firmware, and set up shop. Although some ISPs allow and some tolerate sharing a connection via Fon, only a few actively encourage it. This could change, Reuters reports, if a deal with BT goes through.
Under the deal, which BT and Fon wouldn't comment on for Reuters, BT would allow its millions of broadband users to share their networks with Fon, and BT's Fusion mobile callers--who can call over Wi-Fi or cell using UMA (unlicensed mobile access)--could access Fon's nodes to place calls. Fon claims 250,000 Foneros, but a smaller number of active nodes.
The Fusion plan would benefit from BT-broadband-backed Wi-Fi nodes because BT can separate VoIP packets on their side of the broadband connection, providing a higher-quality service than a company like Vonage, which must push VoIP packets over the broadband connection out to the Internet, over an unpredictable route.
The article claims that BT could push software to its routers to enable Fon, but I imagine that's an oversimplification--unless most BT broadband users also received a Wi-Fi router from BT, and it's a router that they can insert Fon software into.
Three months of free Wi-Fi from The Cloud for UK-based Windows Vista purchasers: Microsoft had already set up such a deal with T-Mobile USA for US purchasers. The Cloud has 7,500 UK locations. The arrangement lasts until April 30 regardless of purchase date.
Posted by Glenn Fleishman at 9:29 AM