[MLB-WIRELESS] [TECH] Dipole antennas, and melbwireless structure

Ben Anderson a_neb at optushome.com.au
Tue Mar 19 05:50:14 EST 2002


>   Ben> Firstoff, thanks for your comments...
>
> it's just such a relief to have technical items to comment upon ;-)

Yes indeed.  I'm hoping it will herald a new era of melb-wireless discussion
*grin*


>   Ben> I agree.  Fine step.  But that's all it is.  I have serious
>   Ben> doubts you can make this scale technically, financially,
>   Ben> socially, etc.  I'm looking "long term" -- ie what I ultimately
>   Ben> think it should look like to "just work" -- ie ubiquitous
>   Ben> public network.
>
> i wonder if this isn't the core of a lot of the technical
> disagreements that we see here:  some people are aiming at getting
> something running tomorrow, that will work until the end of the year,
> and others are thinking australia-wide (at least) with a planning
> horizon of 3 years plus.

I know it is; I spotted this ages ago...  I keep pointing it out, but you're
the first who seems to have picked up on it.


> perhaps it's worth having two streams of design (and maybe two "working
> groups"), one for each timeframe?

Sounds reasonable, though some sort of cross-communication would be very
useful to both parties, so that the designs converge.  The 'now' solution
can provide actual difficulties, bug reports, etc., so the 'future' team can
include them in the problem domain covered by the design.  At least it'd
stop people responding to my scalability ideas with 'should I use ipv4 or
ipv6 (even ipv5!!??!)' or 'omni vs directional', which have absolutely nothing
to do with the concepts I'm talking about </rant, I'm feeling better now ;)>


>   >> i guess it's possible that we'll reach a stage where the density
>   >> of nodes means that directional antennae are no longer required,
>   >> but i suspect this is several years off ...
>
>   Ben> the "omni antenna for local... and a couple of directional
>   Ben> antennae..." solution is unlikely to be able to deal with
>   Ben> scenario -- ever.
>
> i agree.  it's not designed to.  however, it will work today and will
> give us a testbed for developing the routing protocols, security
> systems, etc required to support millions of nodes.

As long as it doesn't get locked down, consolidated, and finalised by
whatever 'committee' gets elected...  It must be able to change to deal with
the future.


> of course, you will have "wasted" two directional antennae and two
> 802.11b cards, but at today's prices that looks like about $70 ;-)

Heh, unless the spectrum gets banned, it'll be useful for something (though
by that time, you'll probably have to eat a few new cans of Pringles ;)


> in 3 year's time, we'd probably be using different hardware anyway.

Perhaps -- look at the lead time they had with 802.11a.  If we're on a
broadcast-based network, the routing technology will still be the same...
Even on a fixed network.  The 'mojo'-based routing mechanism should still
work, and scale.  It shouldn't matter if a node on the network design I'm
suggesting has both an 802.11a and an 802.11b card; the packet will get
routed through whichever interface is closer, and cheaper...

>   Ben> Packets with the most mojo being 'spent' on this hop (for
>   Ben> example, as one metric that could be used -- lots more analysis
>   Ben> and simulation would be very useful on different schemes for
>   Ben> load-ballencing) get ordered at the front of the queue.
>
> so you're thinking that a packet will contain a per-hop expenditure
> specification?  (presumably some sort of policy, unless you're
> thinking source-routing) ?  that seems complex/big?

Or perhaps just a flag that says "pay any mojo necessary to get to the
front of the queue" -- because it's the low-latency stuff that'll be in
demand.  If people are willing to wait a long time, then the stuff is
basically guaranteed to get through eventually *grin*.
I was thinking broadcast everywhere towards the physical direction of the
destination node to discover the network path in between (cached discoveries
would be good), then the best path is determined and sent back in an ack,
with the source then source-routing the packets until something flags the
source to change...  I know I'm being a little vague; hopefully people will
propose other problems so I'll have to do some dancing with the concepts and
ideas to get it all working ;)
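
To make that discovery idea a bit less vague, here's a toy sketch in Python.  The graph, node names, and hop-count metric are all made up for illustration -- a real implementation would scope the flood geographically, cache results, and cost hops in mojo rather than just counting them:

```python
from collections import deque

def discover_paths(links, src, dst):
    """Flood probes outward from src, each probe recording the path
    it took.  `links` maps node -> set of neighbour nodes."""
    paths, queue = [], deque([[src]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            paths.append(path)
            continue
        for nxt in links[node]:
            if nxt not in path:          # loop-free probes only
                queue.append(path + [nxt])
    return paths

def best_path(paths):
    # Cheapest metric here is plain hop count; a per-hop mojo cost
    # would slot in as the key instead.
    return min(paths, key=len)

def source_route(path, payload):
    # The source embeds the whole hop list in the packet, as returned
    # by the discovery ack, until something flags it to re-discover.
    return {"route": path, "payload": payload}

links = {
    "A": {"B", "C"}, "B": {"A", "D"},
    "C": {"A", "D"}, "D": {"B", "C"},
}
route = best_path(discover_paths(links, "A", "D"))
packet = source_route(route, b"hello")
```

Obviously flooding the whole network like this is exactly what doesn't scale; the point of the geographic addressing below is to bound how far the probes travel.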

>   Ben> And the addressing scheme, the only realistic one that scales
>   Ben> well that I've come up with is one based on physical location
>   Ben> (for example, gps co-ordinates could be used).
>
> in order to scale, an addressing scheme needs either some form of
> summarisation (ie. hierarchical containment) or, an ability to rely on
> an external mechanism to determine a forwarding path without needing a
> lookup table for every node/address.  geographical addressing is one
> way (are there others?) to get the latter.

I've not been able to think of anything...  Up until recently I was still
trying to do it by discovery of the network, and I kept running into big
scalability issues.  The geographic thing just seems "right" -- though if
there's another option, I'd love to hear it.  Anyone?
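
The core trick geographic addressing buys you is that forwarding needs no lookup table at all: each node just hands the packet to whichever neighbour is physically closest to the destination coordinates.  A toy sketch (flat x/y coordinates and neighbour names are invented; real GPS coordinates and void-recovery are left out):

```python
import math

def dist(a, b):
    """Straight-line distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_next_hop(my_pos, neighbours, dst_pos):
    """Pick the neighbour making the most geographic progress.

    Returns None when no neighbour is closer to the destination than
    we are -- the "void" case a real protocol has to recover from.
    """
    best = min(neighbours, key=lambda n: dist(neighbours[n], dst_pos))
    if dist(neighbours[best], dst_pos) < dist(my_pos, dst_pos):
        return best
    return None

# Hypothetical node positions, destination off to the east:
neighbours = {"north": (0.0, 1.0), "east": (1.0, 0.0)}
hop = greedy_next_hop((0.0, 0.0), neighbours, (5.0, 0.0))
```

Note the state per node is just its neighbours' positions -- it doesn't grow with the size of the network, which is the whole scalability argument.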


>   >> what does "premium" service actually mean?  extra volume?  higher
>
>   >> are packets charged to their originating node or the previous
>   >> forwarder?  how do you prevent spoofing of packet "owner"?  does
>   >> this penalise "good" services like proxies?
>
>   Ben> I believe that every packet needs to be encrypted.  And using
>   Ben> digital signatures, it makes it very difficult to spoof
>   Ben> packets.
>
> these packets are beginning to sound quite large: mojo, spending
> policy, signature ...  is this a replacement for IP?  or a shim
> between IP and 802 frames?

Replacement for IP seems the 'nicest' place to put it in the OSI layers,
with some state also being extracted from upper layers for use in the
networking layer.  (ie layer-3 switching type technology included)
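
On the signing-every-packet point: the reason it stops owner-spoofing is that the "owner" field gets bound to the payload by a tag only the owner could have produced.  A minimal sketch using HMAC -- note this is a stand-in, since HMAC needs a shared secret, whereas the design really wants public-key signatures that any forwarder can verify; key and node names are invented:

```python
import hashlib, hmac

def sign_packet(secret, owner, payload):
    """Bind the owner field to the payload so intermediate nodes
    can't re-label a packet and charge the mojo to someone else."""
    tag = hmac.new(secret, owner + payload, hashlib.sha256).digest()
    return {"owner": owner, "payload": payload, "tag": tag}

def verify_packet(secret, pkt):
    expect = hmac.new(secret, pkt["owner"] + pkt["payload"],
                      hashlib.sha256).digest()
    return hmac.compare_digest(expect, pkt["tag"])

key = b"node-secret"
pkt = sign_packet(key, b"node-42", b"hello")
forged = dict(pkt, owner=b"node-99")   # spoof the owner field
```

The per-packet tag is also part of why these packets "sound quite large" -- 32 bytes of SHA-256 tag here, more for a real signature.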


>   Ben> And proxy's should be rewarded with mojo, as it's a beneficial
>   Ben> service to the infrastructure of the network.
>
> so i can earn mojo external to the routing protocol also.  ok.

I think it's something to consider; though it's extra work to make it
happen, it's really only a number++ type deal...   Doing simulations to
figure out how much each service should be worth, to keep the network
scaling, is going to be very important as we get closer to "go" time
(assuming it happens).


>   >>  sounds good!  let me know when i can get one for under A$500 ;-)
>
>   Ben> Theoretically (!) they're already available for somewhere
>   Ben> around that price...  No software mind you, just a pci board
>   Ben> with lotsa nuts on it... they've been made for years...  I'm
>   Ben> trying to get demos/samples ATM for another job.  I will let
>   Ben> you know when I've got them in my hands, ready to start
>   Ben> developing on.  Do you have knowledge of verilog or vhdl?
>
> no.  i recall (i think?) a bunch of people in adelaide working with
> large FPGAs on a PCI card, and dynamically programming the array to
> offload expensive computation -- is this the sort of thing you're
> thinking of?

Hell yes.  I want one, yesterday :)  Though today's FPGAs have **lots**
more gates in them than yesterday's ;)

> is this still viable/optimal in the face of commodity multi-GHz CPUs?

Hell yes.   A Pentium 200 can encrypt DES at about 10Mbit/s.  But that's
all it can do -- that's 100% CPU.  The slowest speed grade of Altera's Flex
chips can, in 450 logic cells, do DES at 125Mbit/s.  That's the slowest,
and it leaves space on the chip to do other stuff in parallel, and leaves
your processor free for rendering Half-Life fast ;)
It's cool for any signal-processing type stuff... throw out those MMX and
SIMD instructions -- with an FPGA you reconfigure the whole pipeline of
operations, start feeding it data, and get fully processed data out the
other end as fast as you can shove data in...
I know I'd prefer to use a 2 dollar FPGA (1997 tech) to do 125Mbit
encryption than a 2GHz P4 (2001 tech) ;)  -- take a 2001 tech FPGA, and
watch it do the same job at 10 times that speed (or more!!)
Commodity CPUs are reducing the advantage of FPGAs, though FPGA technology
is fairly well keeping pace with the developments in CPUs...
Pretty much anything MMX or SSE is useful for (ie SIMD stuff), an FPGA can
do a damn lot faster.  A _lot_ faster.  (Yes, I know I'm generalising, and
it's not always true... but generally...  blah)
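
For the curious, the quoted 125Mbit/s figure is consistent with a pretty modest clock.  A back-of-envelope calculation (the 16-cycles-per-block and ~31MHz numbers below are my assumptions for an unpipelined 16-round DES core, not from an Altera datasheet):

```python
def des_throughput_mbps(clock_mhz, cycles_per_block, block_bits=64):
    """Rough DES throughput: one 64-bit block every N clock cycles."""
    return clock_mhz * block_bits / cycles_per_block

# An unpipelined core doing one of DES's 16 rounds per cycle at
# ~31 MHz lands right on the quoted figure:
fpga = des_throughput_mbps(31.25, 16)        # ~125 Mbit/s
# Fully pipelined (one block out per clock), the same clock gives 16x:
pipelined = des_throughput_mbps(31.25, 1)    # ~2 Gbit/s
```

Which is the point about reconfiguring the whole pipeline: the same clock buys you wildly different throughput depending on how much of the algorithm you unroll into parallel hardware.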

That's why you haven't seen a win802.11 card yet.  (Winmodems use the CPU --
the signal processing task is fairly simple, so it can be done on the CPU.
Most people still buy real modems, cos it sucks wasting a pile of
'expensive' CPU time when a cheap DSP can do the same job faster and
cheaper, and leave your expensive processor for "real" work -- like
rendering Half-Life _fast_ ;)
I don't think anyone would use their entire 2GHz P4 to run a software
802.11 card when an 802.11 card can be had so cheap...  Yes, it makes sense
to use FPGAs for this, and yes, FPGAs are what the device manufacturers
actually use to prototype the silicon -- and sometimes even to ship it, if
the expected production run doesn't recover tooling costs over FPGA costs.
FPGAs rock!

Cheers,
Ben.



--
To unsubscribe, send mail to minordomo at wireless.org.au with a subject of 'unsubscribe melbwireless'  
Archive at: http://www.wireless.org.au/cgi-bin/minorweb.pl?A=LIST&L=melbwireless
IRC at: au.austnet.org #melb-wireless


