Sascha Meinrath of CUWiN offers his follow-up on previous posts about mesh networking's scalability and utility: Continuing a conversation that began back here, and continued here, open-source and worldwide community mesh networking developer Sascha Meinrath replies to and elaborates on those posts.
Chari is right on the mark with his clarifications on network performance degradation rates. The case I made purposefully oversimplified the throughput degradation rate. However, in real-world deployments, the actual throughput of a network probably degrades at somewhere between 1/n and (1/2)^n, where n is the number of hops. Think of these two equations as two limits of the probable degradation rate; as anyone graphing these functions can see, they map an increasingly wide range of probable degradation rates as the number of hops increases, representing an increasingly large "unknown". The point is that exact throughput degradation rates are nearly impossible to pin down because the variables that need to be taken into account differ by locale. As anyone who has done numerous real-world implementations will attest, bizarre confluences of factors can sometimes cause unanticipated outcomes and disruptions.
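To make the two bounds concrete, here is a minimal sketch (not from any deployment data; the starting link rate is a hypothetical 802.11g figure) comparing 1/n and (1/2)^n degradation as hop count grows:

```python
# Illustrative only: the two limiting cases for per-hop throughput
# degradation described above. Base throughput and hop counts are
# hypothetical numbers, not measurements.

def optimistic_bound(base_throughput: float, hops: int) -> float:
    """Throughput degrading as 1/n, where n is the hop count."""
    return base_throughput / hops

def pessimistic_bound(base_throughput: float, hops: int) -> float:
    """Throughput degrading as (1/2)^n, where n is the hop count."""
    return base_throughput * (0.5 ** hops)

base = 54.0  # nominal 802.11g link rate in Mbps (hypothetical starting point)
for n in range(1, 6):
    lo = pessimistic_bound(base, n)
    hi = optimistic_bound(base, n)
    print(f"{n} hops: somewhere between {lo:.2f} and {hi:.2f} Mbps")
```

Note that the *ratio* between the two bounds, 2^n / n, grows with every added hop, which is the widening "unknown" described above.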
One of the major problems facing wireless deployers is that almost all research has been conducted either via computer simulations or in highly contrived "in-vivo" deployments (often within science buildings or even within single laboratories). This research provides extremely useful guidelines for anticipating problems, but often fails to capture the complexity of deployments in the community. A closer-to-life example of "real-world" usage is MIT's roofnet project, whose deployment is being used to help prove the ETX route prioritization metric that is being integrated into CUWiN's software. However, this network is utilized mainly by computer science students, who are not exactly representative of the population at large.
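For readers unfamiliar with ETX (Expected Transmission Count): a link's ETX is 1 / (df × dr), where df and dr are the measured forward and reverse packet-delivery ratios, and a route's ETX is the sum over its links; routes with lower totals are preferred. A minimal sketch, using made-up delivery ratios:

```python
# Sketch of the ETX metric mentioned above. The delivery ratios here
# are invented illustrative numbers, not roofnet measurements.

def link_etx(forward_ratio: float, reverse_ratio: float) -> float:
    """Expected number of transmissions (including retries) needed for
    one successful data/ACK exchange over a single link."""
    return 1.0 / (forward_ratio * reverse_ratio)

def route_etx(links) -> float:
    """Total expected transmissions along a multi-hop route, given
    (forward_ratio, reverse_ratio) pairs for each link."""
    return sum(link_etx(df, dr) for df, dr in links)

# A lossy one-hop route vs. a clean two-hop route (hypothetical ratios):
one_hop = [(0.5, 0.5)]               # ETX = 4.0
two_hop = [(0.9, 0.9), (0.9, 0.9)]   # ETX is roughly 2.47
best = min([one_hop, two_hop], key=route_etx)
```

The example shows why ETX suits lossy mesh links: a clean two-hop route can beat a lossy one-hop route, something a simple hop-count metric would get backwards.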
Nitin Vaidya's work has made tremendous strides in our understanding of ad-hoc and multi-hop networks (which Chari does well to point out); but what is really needed is a truly community-based network (with all the attendant messiness) that can be utilized to explore the real-world limits of wireless networks. It is with this goal in mind that Nitin, David Young (CUWiN's technical lead), and I co-wrote an NSF grant proposal entitled, "Engineering Community Wireless Networks" earlier this year. For companies and entrepreneurs working on wireless networking solutions, the possibility of gaining real-world data is extremely valuable. Likewise, for those of us working on Community Wireless Networking solutions, these data will provide an opportunity for better understanding the constraints for deploying robust networks and create more precise parameters for degradation rates.
Chari was also right on the mark in pointing out that scalability and performance are not necessarily directly linked:
"Scalability speaks to how large of a network you can build. Scale is unrelated to the number of hops. A mesh network with a small number of nodes but few wired backhaul points and/or an in-line topographic layout may have a large number of hops. Conversely, a mesh network with a large number of nodes but many wired backhaul points and/or a lattice-style topographic layout may have a small number of hops throughout."
However, I would argue that they are still significantly correlated. In either case, an ideal networking system would be able to handle any of the topographies Chari alludes to as well as allow for multihoming (the use of bandwidth from multiple internet connection points for a single download or upload). Multihoming, however, brings us back to the same problem of scalability and hops being highly correlated. Most importantly, as Chari states, "The real limit to scalability in most mesh networks is routing overhead." Both the TBRPF and OLSR protocols share the problem of not scaling to extremely large networks. They're built with a cell-phone, tower-based mesh topography in mind (one need look no further than the cute OLSR MPR flooding demo to see this). A-HSLS will scale to thousands of nodes arranged in a truly non-hierarchical fashion -- it's the difference between two protocols that are most useful to major telecoms (TBRPF and OLSR) and one that is most useful for Community Wireless Networking purposes (A-HSLS).
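The multihoming idea described above can be sketched simply: split a single download's byte ranges across several internet gateways in proportion to each gateway's available bandwidth. The gateway names and bandwidth figures below are invented for illustration; a real implementation would also handle gateway failure and reassembly.

```python
# Hypothetical sketch of multihoming a single download across multiple
# internet connection points. Gateway names and bandwidths are made up.

def split_ranges(total_bytes: int, gateways):
    """Assign contiguous byte ranges to gateways, proportional to each
    gateway's bandwidth; the last gateway takes any remainder."""
    total_bw = sum(bw for _, bw in gateways)
    ranges, start = {}, 0
    for i, (name, bw) in enumerate(gateways):
        if i == len(gateways) - 1:
            end = total_bytes  # last gateway absorbs rounding remainder
        else:
            end = start + total_bytes * bw // total_bw
        ranges[name] = (start, end)
        start = end
    return ranges

gateways = [("gw-north", 3_000_000), ("gw-south", 1_000_000)]
plan = split_ranges(10_000_000, gateways)
# gw-north is assigned roughly 3/4 of the bytes, gw-south the rest
```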
As regards the 500mW limit I propose as a proactive solution in my manuscript, it is important to remember that one can today legally transmit at up to 1W; and with an exemption (which is pretty much a rubber-stamp process) and an amplifier, one can go up to 10W. The problem is not what power level one can transmit at with today's wireless card technology, but what will be rolled out in the future. I'm especially concerned about the WiMax technologies being proposed that allow for higher transmit powers within the same frequencies as today's Wi-Fi systems -- see this link.
The take-home message is simply that there are multiple uncertainties within wireless technologies -- not that these unknowns should be viewed as a barrier, but that there are very few universally correct answers. Whether one is looking at throughput degradation or "the best" routing protocol, wireless is a nascent technology with an incredibly diverse set of possible implementations. It is impossible for those of us working with wireless technologies to fully understand all of the different available options, much less have answers to all the questions that are asked of us. But in debating the pros and cons of different solutions, I am hopeful that we'll increase our collective understanding of wireless technologies and create additional opportunities for making the right choice for particular implementations.