wifi for 600, alex

Hi list,

I just read over: http://www.nanog.org/mtg-0302/ppt/joel.pdf because I am on the PyCon ( http://us.pycon.org ) team, and last year the hotel-supplied wifi for the 600 attendees was a disaster (they probably were not expecting every single one to have and use a laptop the whole time). Joel's pdf was for a conference in 2002 or 2003, so I am hoping much has changed. But from what I have found, I think my hopes may be just a dream.

Does anyone have any advice or URLs of more recent case studies in supplying wifi for 600 laptop-wielding geeks?

Thanks,

Carl K

ps, Google is one of our sponsors, and I see that they want to be the next intertube or something - maybe we can get them to do it :)

carl,

tony kapela (email me for his email address, or you may find it in the
nanog mailing list archives) has engineered the most successful
wireless access at nanog in recent years. he did a lightning talk
about some of the challenges at nanog STL
(http://www.nanog.org/mtg-0610/lightning.html) that was detailed and
useful. it details (down to the config sample) how to deploy a
usable 802.11 network for a bunch of geeks requiring usable ssh, a
mixture of 802.11b/g, and still running bittorrent. it's fabulously
good stuff and worthy of more attention than it got at the lightning
talk level.

i have been utterly unsuccessful at recruiting a general-session talk
from tony for nanog toronto, but one might hope that public praise
combined with public shaming may work some wonders for nanog in may.
:) in any case, he is a fantastic resource for this kind of
wireless data engineering. look at the video of the lightning talk,
read the preso.

t.

The IETF in Vancouver was a disaster (the floors were transparent to RF), but Jim Martin and Joel
Jaeggli and company have done an excellent job and the 802.11 service has been quite good since.

And the IETF is 1200 people all of whom use laptops all the time.

Marshall

Carl Karsten wrote:

Hi list,

I just read over: http://www.nanog.org/mtg-0302/ppt/joel.pdf because I
am on the PyCon ( http://us.pycon.org ) team, and last year the
hotel-supplied wifi for the 600 attendees was a disaster (they probably
were not expecting every single one to have and use a laptop the whole
time). Joel's pdf was for a conference in 2002 or 2003, so I am hoping
much has changed. But from what I have found, I think my hopes may be
just a dream.

Let me just say that my experience since then has been informative, but
that I haven't really had time to condense that into another preso. We
were working on an IETF hosting document that would have covered some of
the experiences, but that morphed into a requirements document which is
less useful for someone looking for pointers.

In our recent endeavors we've spent a lot more time focused on ap
density and less on rf engineering and ap tuning, and we have generally
avoided disaster...

An observation I would make is that the number of mac addresses per
person at the tech-heavy meetings has climbed substantially over 1 (not
to 2 yet), so it's not so much that everyone brings a laptop... it's that
everyone brings a laptop, a pda and a phone, or two laptops. In a year
or two we'll be engineering around 2 radios per person; in five years,
who knows.

We did the wireless network at LCA '06. Due to abuse at LCA '05, we required everyone to register their MAC address to their registration code before we let them onto the network. This means we have a nice database of MACs <-> people.

We saw:
199 people with 1 MAC address registered
102 people with 2 MAC addresses registered
9 people with 3 MAC addresses registered
5 people with 4 MAC addresses registered
1 person with 6 MAC addresses registered

We did have a lot of problems with devices that didn't have a web browser (so their owners had to ask us to add their MACs manually; there were 11 such people who aren't counted above). Mostly VoIP phones, but it's amazing how many people have random bits of hardware that will do wifi!

This is perhaps biased, as there was also wired ethernet available to some people in their rooms (about 50 rooms IIRC), so some of those 102 people would have a MAC for their wireless and a separate MAC for their wired access.
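For what it's worth, a quick back-of-the-envelope over those numbers (a sketch; it assumes the 11 manually-added people count as one MAC each) lands between 1 and 2 MACs per person, matching the observation earlier in the thread:

    # Rough MACs-per-person from the LCA '06 counts above.
    # Assumption: the 11 browserless folks count as one MAC each.
    counts = {1: 199 + 11, 2: 102, 3: 9, 4: 5, 6: 1}

    people = sum(counts.values())                 # 327 people
    macs = sum(n * c for n, c in counts.items())  # 467 MACs
    print(f"{macs} MACs / {people} people = {macs / people:.2f} each")
    # ~1.43 MACs per person: over 1, not yet 2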

We also ran soft APs on Soekris boxes running Linux, so we could hook into the AP at a fairly low level. We firewalled all DHCP replies inside the AP so it wouldn't forward any DHCP replies received from the wireless to another client on the AP or onto the physical L2 [1].

As an experiment we firewalled *all* ARP inside the APs, so ARP spoofing was impossible. ARP queries were snooped, an omapi query was sent to the DHCP server asking who owned the lease, and an ARP reply was unicast back to the original requester [2]. This reduced the amount of multicast/broadcast (which wireless sends at the basic rate) on the network, as well as preventing people from stealing IPs and ARP spoofing.
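A minimal sketch of that ARP-proxy idea (not the actual code we ran; it assumes scapy, lookup_lease_mac() stands in for the omapi query to the DHCP server, and "ath0" is a hypothetical wireless interface name):

    # Sketch of the snoop-and-unicast ARP proxy described above.
    from scapy.all import ARP, Ether, sendp, sniff

    def lookup_lease_mac(ip):
        """Hypothetical stand-in for the omapi query asking the DHCP
        server which MAC holds the lease on `ip` (None if unleased)."""
        return None  # wire this up to your DHCP server

    def handle_arp(pkt):
        if ARP in pkt and pkt[ARP].op == 1:       # who-has (request)
            owner = lookup_lease_mac(pkt[ARP].pdst)
            if owner is None:
                return                            # no lease: stay silent
            reply = Ether(dst=pkt[ARP].hwsrc) / ARP(
                op=2,                             # is-at (reply)
                hwsrc=owner, psrc=pkt[ARP].pdst,
                hwdst=pkt[ARP].hwsrc, pdst=pkt[ARP].psrc)
            sendp(reply, iface="ath0", verbose=False)  # unicast reply

    # sniff(filter="arp", prn=handle_arp, iface="ath0")  # run on the AP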

To stop people from spoofing someone else's MAC, we also had lists of which AP each MAC was associated with; if a MAC was associated with more than one AP, we could easily blacklist it and visit people in the area with a baseball bat.
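The multiple-association check itself is trivial once you can pull the association tables off the APs; a sketch, where associations() is a hypothetical feed of (ap, mac) pairs polled from each AP:

    # Sketch: flag any MAC seen associated on more than one AP at once.
    # associations() is hypothetical -- poll your APs (SNMP, ssh, ...)
    # and yield (ap_name, mac) pairs.
    from collections import defaultdict

    def find_cloned_macs(associations):
        seen = defaultdict(set)
        for ap, mac in associations:
            seen[mac].add(ap)
        return {mac: aps for mac, aps in seen.items() if len(aps) > 1}

    # for mac, aps in find_cloned_macs(poll_all_aps()).items():
    #     blacklist(mac)  # then fetch the baseball bat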

We didn't see much abuse (and didn't have people complain about abuse, so I guess it's not just that they hid it from us). I think that's mostly because people knew that we had IP <-> MAC <-> name mappings, and abusers knew they could easily be tracked down.

One of the more interesting things was that during the daytime we were a net importer of traffic as people did their usual web surfing, but at about 10pm we suddenly became a net exporter as people started uploading all their photos to flickr.

Carl Karsten wrote:

Hi list,

I just read over: http://www.nanog.org/mtg-0302/ppt/joel.pdf because I am on the PyCon ( http://us.pycon.org ) team, and last year the hotel-supplied wifi for the 600 attendees was a disaster (they probably were not expecting every single one to have and use a laptop the whole time). Joel's pdf was for a conference in 2002 or 2003, so I am hoping much has changed. But from what I have found, I think my hopes may be just a dream.

Does anyone have any advice or URLs of more recent case studies in supplying wifi for 600 laptop-wielding geeks?

How was the wifi at the recent nanog meeting?

I have heard of some success stories 2nd hand. One 'trick' was to have "separate networks", which I think meant unique SSIDs. But like I said, 2nd hand info, so about all I can say is supposedly 'something' was done.

Carl K

Carl Karsten wrote:

Hi list,
I just read over: http://www.nanog.org/mtg-0302/ppt/joel.pdf because I am on the PyCon ( http://us.pycon.org ) team, and last year the hotel-supplied wifi for the 600 attendees was a disaster (they probably were not expecting every single one to have and use a laptop the whole time). Joel's pdf was for a conference in 2002 or 2003, so I am hoping much has changed. But from what I have found, I think my hopes may be just a dream.
Does anyone have any advice or URLs of more recent case studies in supplying wifi for 600 laptop-wielding geeks?

How was the wifi at the recent nanog meeting?

I thought it was quite good. I also think that the IETF wireless has gotten its act together recently as well;
I suspect that Joel Jaeggli has had something to do with this.

Regards
Marshall

Perhaps wandering off topic, but I was throwing around the idea of
writing a pseudo-SIP server for registration of this kind of device -
assuming that you have a second device that does have a web browser,
you register with that and then request to add a SIP device; it provides
you with a one-off SIP "number" to call to provide the MAC address.

  Bill

There are a few fairly easy things to do.

1. Don't do what most hotel networks do and think that simply sticking
lots of $50 Linksys routers into various rooms randomly does the
trick. Use good, commercial-grade APs that can handle 150+
simultaneous associations and don't roll over and die when they get
traffic.

2. Plan the network and the number of APs based on session capacity,
signal coverage, etc., so that you don't have several dozen people
associating to the same AP at the same time when they could easily find
other APs... I guess a laptop will latch onto the AP that has the
strongest signal first. (A back-of-the-envelope sizing sketch follows
this list.)

3. Keep an eye on the conference network stats, netflow, etc., so that
"bandwidth hogs" get routed elsewhere; isolate infected laptops
(happens all the time, to people who routinely log in to production
routers with 'enable' - telnetting to them sometimes...); block p2p
ports anyway (yeah, at netops meetings too; you'll be surprised at how
many people seem to think free fat pipes are a great way to update
their collection of pr0n videos).

3a. Keep in mind that when you're in a hotel and have an open wireless
network, with the SSID displayed prominently all over the place on
notice boards, you'll get a lot of other guests mooching onto your
network as well. Budget for that too.

4. Isolate the wireless network from the main conference network /
backbone so that critical stuff (streaming content for workshops and
other presentations, the rego system, etc.) gets bandwidth allocated to
it just fine, without it being eaten up by hungry laptops.

5. Oh yes, get a fat enough pipe to start with. A lot of hotel
wireless is just fast VDSL or maybe a T1, with random Linksys boxes
scattered around the place.
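To put rough numbers on points 1 and 2, here's the kind of back-of-the-envelope sizing I'd start from (a sketch; every input is an assumption to replace with your own figures):

    # Crude AP-count floor for a conference, per points 1-2 above.
    # Every input here is an assumption; substitute your own numbers.
    import math

    attendees = 600
    radios_per_person = 1.5   # laptops plus phones/PDAs, per this thread
    assoc_limit = 150         # what a good commercial-grade AP claims
    comfort = 0.5             # don't run APs at their rated ceiling

    radios = attendees * radios_per_person
    aps = math.ceil(radios / (assoc_limit * comfort))
    print(f"{radios:.0f} radios -> at least {aps} APs just for capacity")
    # Coverage, channel re-use, and walls push the real number higher;
    # capacity only sets the floor.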

--srs

The oft-overlooked 802.11a is great for this purpose when there isn't
enough wiring infrastructure to drop an RJ45 in all the necessary
conference rooms. Whereas 802.11[bgn] has only three (or four,
depending on who you quote) mostly non-overlapping channels -- even
fewer when MIMO is in use -- 802.11a has eight *completely*
non-overlapping standard channels. In nice open conference hall space
with at most two walls in the way, the rated shorter range of 11a is
actually not so noticeable because of the lack of radio noise.

2.4GHz is soooooo last decade. ;)

(The 802.11[bgn] density where I live is so high that I resorted to
installing 802.11a throughout my house. Zero contention for airwaves
and I can actually get close to rated speed for data transmission.)

That is a really nice list. Is there a wiki somewhere I could post this to?

Carl K

Suresh Ramasubramanian wrote:

http://nanog.cluepon.net/ !

From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On
Behalf Of Suresh Ramasubramanian
Sent: Wednesday, February 14, 2007 6:25 PM
To: Marshall Eubanks
Cc: Carl Karsten; NANOG
Subject: Re: wifi for 600, alex

[snip]

2. Plan the network and the number of APs based on session capacity,
signal coverage, etc., so that you don't have several dozen
people associating to the same AP at the same time, when
they could easily find other APs... I guess a laptop will
latch onto the AP that has the strongest signal first.

Speaking from experiences at Nanog and abroad, this has proven difficult
(more like impossible) to achieve to the degree of success engineers
would expect. In an ideal world, client hardware makers would all
implement sane, rational, and scalable 'scanning' processes in their
products. However, we find this to be one market where the hardware is
far from ideal and there's little market pressure to change or improve
it. On many occasions I've detected client hardware which simply picks
the first 'good' response from an AP on a particular SSID to associate
with, and doesn't consider anything it detects afterward! If the first
"Good" response came from an AP on channel 4, it went there!

Also incredibly annoying and troubling are cards that implement 'near
continuous' scanning, once or say twice per second, or cards that are
programmed to do so whenever 'signal quality' falls below a static
threshold. A mobile station would likely see very clean hand-over
between APs, and I'm sure the resulting user experience would be great.
However, this behavior is horrible when there are 200 stations all
within radio distance of each other... you've just created a storm of
~400 frames/sec across _all_ channels, 1 on up! Remember, the scan
sequence is fast - dwell time on each channel listening for a
probe_response is on the order of a few milliseconds. If a card emits 22
frames per second across 11 channels, those 2 frames/sec per channel
become a deafening roar of worthless frames. It's obvious that the CA
part of CSMA/CA doesn't scale to 200 stations when we consider these
sorts of issues.
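The arithmetic is worth writing down; a toy version of the numbers above (all assumed):

    # Toy numbers for the probe-request storm described above.
    stations = 200
    scans_per_sec = 2   # 'near continuous' scanning cards
    channels = 11       # b/g channels scanned, 1 on up

    per_channel = stations * scans_per_sec   # every channel hears this
    total = per_channel * channels
    print(f"~{per_channel} probe frames/sec on each channel")
    print(f"~{total} probe frames/sec in the air overall")
    # ~400/sec per channel, ~4400/sec total -- all management frames,
    # all burning airtime before anyone moves a byte of data.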

I can think back to Nanogs years ago where folks tended to have junky
Prism II radios which did this (type of scanning). Nanog 29 in
particular was quite rife with junky Prism II user hardware. A lot of
the laptops were "Sager" or something silvery-plastic-generic from far
overseas.

In my selfish, ideal world, a "wifi" network would behave more like a
CDMA system does. Unfortunately, wifi devices were not designed with
these goals in mind. If they had been, the hardware would be horribly
expensive, no critical mass of users would have adopted the technology,
and it wouldn't be ubiquitous or cheap today. The good news is that
because it's gotten ubiquitous and popular, companies have added in some
of the missing niceties to aid in scaling the deployments.

We now see 'controller based' systems from Cisco and Meru which have
implemented some of the core principles at work in larger mobile
networks. One of the important features gained with this centralized
controller concept is coordinated, directed association from AP to AP.
The controller can know the short-scale and long-scale loading of each
AP, the success/failure of delivering frames to each associated client,
and a wealth of other useful tidbits. Armed with these clues, a
centralized device can prove useful by directing specifically active
stations to less loaded (but still RF-ideal) APs.

True, the CCX (Cisco client extensions) support on some devices can
permit stuff like this to be shared with the clients (i.e. CCX exposes
AP loading data in the beacon frames, and can tell the client how to
limit its TX power) in the hopes that this can be used in the 'hybrid'
AP selection logic of the station card. What stinks for us is that very
few (generally fewer than 10% at Nanog) of the clients *support* CCX.
What's even more maddening is that about 35 to 40% of the MAC addresses
associated at the last Nanog could support CCX, but it's simply not
enabled for the SSID profile! Here we have one potential solution to
some of the troubles in scaling wireless networks that depends entirely
on the user doing the right thing. Failure, all around.

This gets back to the point of #2 here, in that only "some" of the
better-logic'd client hardware will play by the rules (or even do the
right thing). In a lot of these cases it's better to expertly control
where a client _can_ associate with a centralized authority (i.e. a
controller with data from all active APs). We simply cannot depend on
the user doing the right thing, especially when the 'right thing' is
buried and obscured by software and/or hardware vendors.

3. Keep an eye on the conference network stats, netflow, etc.,
so that "bandwidth hogs" get routed elsewhere; isolate
infected laptops (happens all the time, to people who
routinely log in to production routers with 'enable' -
telnetting to them sometimes...); block p2p ports anyway (yeah,
at netops meetings too; you'll be surprised at how many
people seem to think free fat pipes are a great way to update
their collection of pr0n videos).

I would add that DSCP & CoS maps on the APs can be used to great effect
here. What I've done at Nanog meetings is to watch what's going on
(recently with some application-aware netflow tracking) and continually
morph and adapt the policy map/class map on the router to set specific
DSCP bits on "p2p" or "hoggish" users' traffic. These packets from the
"hog" can then be mapped in the AP (by virtue of their specific DSCP
coloring) to a CoS queue which has lower delivery priority than other,
non-hoggish traffic. This way, the p2p/hog/etc. bits act as a 'null fill'
and use the available space on the air, but they cannot overtake or
crowd out the queued data from higher-priority applications.
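Conceptually, the AP-side half of that reduces to a tiny mapping; here's a sketch of the idea only (the DSCP values and queue names are illustrative assumptions, not anyone's shipping config):

    # Conceptual DSCP -> delivery-queue mapping, per the text above.
    # DSCP values and queue names are illustrative assumptions.
    HOG = 8               # e.g. CS1, set by the router on hog/p2p traffic
    PREFERRED = {46, 24}  # e.g. EF/CS3, set on ssh, DNS, TCP-initial

    def cos_queue(dscp: int) -> str:
        """Pick the over-the-air queue for a packet's DSCP marking."""
        if dscp == HOG:
            return "background"  # 'null fill': sent only when air is free
        if dscp in PREFERRED:
            return "priority"    # first crack at the radio
        return "best-effort"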

You'd have to ask other Nanog meeting attendees, but I think it's fairly
safe to say that the way we treat certain SSH, DNS, and TCP-initial
packets over the wireless network has yielded happy, content, and
less-lagged-n-frustrated users.

> > How was the wifi at the recent Nanog meeting?

I liked it! (heh)

> > I have heard of some success stories 2nd hand. One 'trick' was to
> > have "separate networks", which I think meant unique SSIDs. But
> > like I said, 2nd hand info, so about all I can say is supposedly
> > 'something' was done.

The quick & dirty formula for Nanog wireless is as follows:

A) ssid for b/g (typically "Nanog" or "nanog-arin")

B) ssid for a-only (typically "nanoga" or "Nanog-a")

C) 1/4/8/11 channel plan for the 11b/g side, and tight groupings of
5.1/5.2 GHz and 5.7 GHz 11a. Many 11a client cards do not cleanly
scan/roam from 5.3 -> 5.7 or from 5.7 -> 5.3, so this apparent 'panacea
of 8 channels' really isn't one. For each 'dwelling area' where users are
expected to sit, try to stick with 5.3 channels + reuse or 5.7 channels
+ reuse. If you have to, go ahead and mix them, but ensure that
'roaming' from this area to other areas features APs along the way
which are in the same band, lest a 'hard handoff' cause the card to
rescan both bands.

D) Ensure 1/4/8/11 channel re-use considers the "full" RF footprint. An
active, busy user at the 'edge' of an AP's coverage area might as well
be considered another AP on the same channel, because the net effect is
another RF source which grows the footprint. Basically, the
effective "RF load" radius is twice as wide as the AP's own effective
coverage if the user's transmitter power and receiver sensitivity are the
same (or nearly the same) as the AP's. This is perhaps the most subtle
and ignored part of wifi planning.

E) Help 'stupid' hardware do the right thing by ensuring an 'ideal'
channel is 'most attractive' to clients at every point in space where
capacity is the goal.

In areas where you are able to receive three APs on channel 1, ensure
that you 'attract' the stupid hardware to a better AP that isn't
co-channel with the others. A situation might be three APs on channel 1
being heard at -72 dBm to -65 dBm by a client. You should place another
AP on channel 8 or 11 nearby and ensure its received level is
approximately 10 dB higher than the APs on channel 1. This will tend to
attract even the dumbest of stupid hardware.

Why do you want to attract this client to a channel other than 1
so badly? Because if you are at a point in space that can receive three
APs' worth of beacons on the same channel, those three APs can all hear
your transmissions. Every transmission your hardware makes means you've
consumed airtime on not one, not two, but three APs at once.

An especially bad situation, which you should strive to avoid, is one in
which you are able to hear APs on channels 1, 4, 8, and 11 with levels
of, say, -75 to -70 dBm. There is no clear winner here (a 5 dB
difference wouldn't be large enough in many drivers to base the decision
on alone). In this case, ensure that co-channel APs are heard at
least 20 dB down from the strongest APs, so that users landing on
1/4/8/11 don't consume airtime from co-channel APs and so that the other
adjacent APs' transmissions fall below the clients' 'clear channel'
assessment threshold.
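Since you can't fix the clients, you plan around them. A crude model of the 'dumbest' selection logic (strongest beacon wins; the levels below are illustrative) shows why the +10 dB placement works:

    # Model of the dumbest client: associate to the strongest beacon.
    # Levels are illustrative, mirroring the scenario above.
    beacons = [
        ("ap-1a", 1, -72),  # three co-channel APs on channel 1...
        ("ap-1b", 1, -68),
        ("ap-1c", 1, -65),
        ("ap-8",  8, -55),  # ...and a rescue AP on 8, ~10 dB louder
    ]

    name, channel, dbm = max(beacons, key=lambda b: b[2])
    print(f"dumb client picks {name} (channel {channel}, {dbm} dBm)")
    # Without ap-8, every frame this client sent would burn airtime
    # on all three channel-1 APs that can hear it.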

F) Use DSCP, CoS, and .11e EDCF support to allow the wireless devices to
treat different types of packets over the air appropriately. As
mentioned weeks ago by Todd and others (but apparently missed or ignored
by the thread) there's a PDF and a video up covering some of the results
of this concept.

Check it out at:

http://www.nanog.org/mtg-0610/presenter-pdfs/kapela.pdf

...and http://www.nanog.org/mtg-0610/real/delivered.ram

Perhaps this wasn't quick (little that's complicated is), but it's dirty
all the same! Try these ideas out at your next wireless event!

-Tk

Speaking from experiences at Nanog and abroad, this has proven difficult
(more like impossible) to achieve to the degree of success engineers
would expect. In an ideal world, client hardware makers would all
implement sane, rational, and scalable 'scanning' processes in their
products. However, we find this to be one market where the hardware is
far from ideal and there's little market pressure to change or improve
it. On many occasions I've detected client hardware which simply picks
the first 'good' response from an AP on a particular SSID to associate
with, and doesn't consider anything it detects afterward! If the first
"Good" response came from an AP on channel 4, it went there!

That is exactly how nearly all devices today function; the exceptions are few. There's a bit more needed to truly establish what is a good association and what isn't, from performance characteristics to functionality.

There are things underway that can mitigate some of this, neighbor lists for example.

Also incredibly annoying and troubling are cards that implement 'near
continuous' scanning, once or say twice per second, or cards that are
programmed to do so whenever 'signal quality' falls below a static
threshold. A mobile station would likely see very clean hand-over
between APs, and I'm sure the resulting user experience would be great.

There's actually a lot more to clean hand-overs between APs. For starters, you need to know what's around, find them(!) (i.e., which channel), reestablish any security associations, and take care of IP mobility (at least at scale).

However, this behavior is horrible when there are 200 stations all
within radio distance of each other... you've just created a storm of
~400 frames/sec across _all_ channels, 1 on up! Remember, the scan
sequence is fast - dwell time on each channel listening for a
probe_response is on the order of a few milliseconds. If a card emits 22
frames per second across 11 channels, those 2 frames/sec per channel
become a deafening roar of worthless frames. It's obvious that the CA
part of CSMA/CA doesn't scale to 200 stations when we consider these
sorts of issues.

High AP density and the relatively high beacon rate can cause the same problem, for example. There's a tradeoff between mobility and beacon density, too: in the current model you need to hear a sufficient number of beacons to make decisions.
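Beacon load alone is easy to underestimate; a rough airtime figure (a sketch, every input assumed):

    # Rough beacon airtime on one channel; all inputs are assumptions.
    aps_hearable = 8        # co-channel APs audible from one seat
    beacons_per_sec = 10    # the usual ~100 ms beacon interval
    beacon_bytes = 250      # beacon frame with its IEs, roughly
    basic_rate = 1_000_000  # bits/sec; beacons go out at a basic rate

    airtime = aps_hearable * beacons_per_sec * beacon_bytes * 8 / basic_rate
    print(f"~{airtime:.0%} of each second spent on beacons alone")
    # ~16% here, before a single probe request or data frame.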

In my selfish, ideal world, a "wifi" network would behave more like a
CDMA system does. Unfortunately, wifi devices were not designed with
these goals in mind. If they had been, the hardware would be horribly
expensive, no critical mass of users would have adopted the technology,
and it wouldn't be ubiquitous or cheap today. The good news is that
because it's gotten ubiquitous and popular, companies have added in some
of the missing niceties to aid in scaling the deployments.

Hmm. I think it would be good to frame which parts of a "CDMA system" (whatever that actually refers to ;)) you mean by that.

We now see 'controller based' systems from Cisco and Meru which have
implemented some of the core principles at work in larger mobile
networks.

And which have similar scaling challenges with small cell sizes and mobility. In fact, you could argue the model is particularly challenged in that case.

One of the important features gained with this centralized
controller concept is coordinated, directed association from AP to AP.
The controller can know the short-scale and long-scale loading of each
AP, the success/failure of delivering frames to each associated client,
and a wealth of other useful tidbits. Armed with these clues, a
centralized device can prove useful by directing specifically active
stations to less loaded (but still RF-ideal) APs.

So goes the theory at small scale, yes. And I would contend that "RF-ideal" is something you will only find inside of an RF tent.

3. Keep an eye on the conference network stats, netflow, etc.,
so that "bandwidth hogs" get routed elsewhere; isolate
infected laptops (happens all the time, to people who
routinely log in to production routers with 'enable' -
telnetting to them sometimes...); block p2p ports anyway (yeah,
at netops meetings too; you'll be surprised at how many
people seem to think free fat pipes are a great way to update
their collection of pr0n videos).

I would add that DSCP & CoS maps on the APs can be used to great effect
here.

I don't agree. QoS mechanisms in a cooperative, unlicensed band have their limitations; they don't amount to anything like scheduled access. And scheduled access in WiFi is of limited availability in chipsets today, not to mention incompatible with non-scheduled access.

Best regards,
Christian

There are things underway that can mitigate some of this,
neighbor lists for example.

For the sake of the list's topic centrism, I was avoiding getting into
points like that. :) Which brings me to the part about:

Hmm. I think it would be good to frame which parts of a "CDMA
system" (whatever that actually refers to ;)) you mean by that.

Well, neighbor lists, for one. That is, if a client device is continually
informing something like a "BSC" of what it perceives to be the 'hearable
topology,' we can then implement far more useful logic in the BSC to
better direct the underlying activities. Second is network-assisted
handovers and handoffs (even in the absence of policy knobs such as
neighbor lists). Perhaps third would be more related to the way the PCF
shim can be used to schedule uplink and downlink activity in each BTS by a
rather "well informed" BSC. Perhaps even more useful would be support
like handup/handdown for moving clients (when possible) from .11g to
.11a, just like CDMA BSCs would do to direct a mobile station between a
classic IS-95 BTS and an IS-2000 BTS.

Anyway, I don't mean to stray too far off topic, but indeed there are
many 'good' things already designed (some decades ago) and understood
within the wireless community which would do well to appear in .11 at
some point. Hopefully my comment makes more sense now! :)

There's actually a lot more to clean hand-overs between APs.
For starters, you need to know what's around, find them(!)
(i.e., which channel), reestablish any security associations,
and take care of IP mobility (at least at scale).

Indeed. IAPP and things like it were designed to assist with or deal with
carry-over of authentication after all the layer-2 and layer-1 things are
accounted for. Who even interoperates with IAPP today?

And which have similar scaling challenges with small cell
sizes and mobility. In fact, you could argue the model is
particularly challenged in that case.

Some aspects are improved even in small, dense environments. Some of the
interesting work that Meru does is to aggregate & schedule back-to-back
.11 frames for things like RTP delivery. Meru, for example, also
globally schedules & coordinates delivery across all APs for specific
management messages. But even still, you cannot create capacity where
there is none, so if there's simply no free RF, we're hosed.

So goes the theory at small scale, yes. And I would contend
that "RF-ideal" is something you will only find inside of an RF tent.

I should have said 'comparatively equal' to whatever shade of grey is
available... :)

I don't agree. QoS mechanisms in a cooperative,
unlicensed band have their limitations; they don't amount
to anything like scheduled access. And scheduled access

I see your point there. In the case of .11e and EDCF, significant
improvement can be had even if only one half of the path has the
support. In our case, yeah, we only control the downlink to the
mobile station. I'm not sure I'd even want clients using
"self-medicated" EDCF, so the uplink prioritization/scheduling issue
looms large without a great solution.

in WiFi is of limited availability in chipsets today, not to
mention incompatible with non-scheduled access.

Check out EDCF. It's not changing any fundamental part other than the
radio's behavior during CCA backoff, and any client can benefit from it.
Also, I explain how it works briefly in the lightning talk video.

-Tk

[..]

Anyway, I don't mean to stray too far off topic, but indeed there are
many 'good' things already designed (some decades ago) and understood
within the wireless community which would do well to appear in .11 at
some point. Hopefully my comment makes more sense now! :)

Yes, that is true. There are also mechanisms which have to be invented completely from scratch because the architectural model is different (decisions being made at the edge rather than by an "omniscient" controller). Integration with other modes of mobile communication is one such example.

It's an interesting problem to have, but it also makes the standard very challenging, as there is amendment after amendment, with lots of old non-compliant devices around from before the time a given feature was invented.

[..]

in WiFi is of limited availability in chipsets today, not to
mention incompatible with non-scheduled access.

Check out EDCF. It's not changing any fundamental part other than the
radio's behavior during CCA backoff, and any client can benefit from it.
Also, I explain how it works briefly in the lightning talk video.

Maybe I really need to start thinking about creating a proposal for a talk at NANOG for service provider issues in Wi-Fi, such as those we live every day in my (mostly) day job. Hmm.

Best regards,
Christian

Another mobile-land feature 802.11 could do with: dynamic TX power management. All the cellular systems have the ability to dial down the transmitter power the nearer to the BTS/Node B you get. This is not just good for batteries, but also good for radio, as S/N has diminishing returns to transmitter power. WLAN, though, shouts as loud next to the AP as it does on the other side of the street, which is Not Good for a system that operates in unlicensed spectrum.

UMTS, for example, has a peak TX wattage an order of magnitude greater than WLAN's, but thanks to the power management, in a picocell environment comparable to a WLAN the mean TX wattage is lower by a factor of 10.

Please don’t forget that 802.11 uses the CSMA/CA protocol. All nodes,
including the AP and all the clients should hear each others’
transmissions so that they can decide when to transmit (when the medium is
idle).

Yes. But so long as they can all interfere with each other, you’re still going to pay a cost in informational overhead to sort it out at a higher protocol layer, and you’re still going to have the “electronic warfare in a phone box” problem at places like NANOG meetings. 3GSM is the same - even the presence of ~10,000 RF engineers doesn’t prevent the dozens of contending networks…

Essentially, this is a problem that perhaps shouldn’t be fixed. Having an open-slather RF design and sorting it out in meta means that WLAN is quick, cheap, and hackable. Trust me, you don’t want to think about radio spectrum licensing. On the other hand, that particular “sufficiently advanced technology is indistinguishable from magic” quality about it causes problems.

Intentionally limiting the clients’ TX powers to the minimum needed to
communicate with the AP makes RTS/CTS almost obligatory, which may be
considered a bad thing. Once again, in the ideal situation all nodes hear
each other, at least from the CSMA/CA’s point of view.

Regards,
Andras

I’m not sure that’s ideal in my point of view, in so far as we’re talking about a point-to-multipoint network rather than a mesh. And why would anyone ever want to use more power/create more entropy than necessary?

This argument sailed around in the early days of WiMAX, when people were talking about running it in unlicensed 5.8GHz spectrum and “finally getting away from the telcos and the government”, until they realised that it’s not “big wi-fi” and isn’t designed to cope with contending networks…

Alex

Alexander,

as you might imagine, conceptually there is no disagreement whatsoever here ;) And, in fact, that already exists on some platforms, but it's somewhat limited at the moment due to a lack of standards-body support at this time. But I'm hopeful that we're closer to meaningful improvements. This is just as important for managing the available spectrum as it is for device power efficiency.

Best regards,
Christian

It shouldn't be that difficult, because one device that does manage
its power output shouldn't affect anyone else who doesn't.