[MLB-WIRELESS] Applications on the melb-wireless network

Ben Anderson a_neb at optushome.com.au
Tue Mar 19 18:02:20 EST 2002


----- Original Message -----
From: "Toliman" <toliman at ihug.com.au>
To: "Ben Anderson" <a_neb at optushome.com.au>
Sent: Tuesday, March 19, 2002 7:53 AM
Subject: Re: [MLB-WIRELESS] Applications on the melb-wireless network


> > Alternately, one thing I've seen done before is a whole neighbourhood
> > get sold on the idea of high speed network access, and they ran fibre
> > to **every home** and installed gigabit hardware everywhere, and the
> > whole cost of the project was very comparable per house to broadband
> > internet access for a year (they co-operatively got bandwidth, which
> > was included in that price).  Perhaps we should consider whether
> > organising ourselves and spreading the word, and getting whole
> > neighbourhoods to run fibre around is a better, more future-proof way
> > of providing a UPN.
>
> i saw this city of which you speak. and all of the hundreds of miles of
> optical cabling in the area was routed into the guy's garage. everybody
> signed up, paid for by whatever sponsored organisation they had at the
> time, the ISP was running either in the garage, or down the road from
> where their house was. the distinctions between that scenario and an
> australian gigabit ethernet utopian private network, were sponsorship
> for hardware. bandwidth. accessibility. infrastructure. none of which
> are planned or implemented here in australia in sufficient quality or
> quantity to be of any use. we don't even have the capability to lay
> cable or wire residences like those guys could. if you can hunt down the
> URL, i believe it might even be in an archive of Wired magazine from
> 1998-1999.
>
> the 'fiber neighbourhood' idea never panned out for a simple reason. it
> was impossible to implement due to the fact that the world doesn't
> resemble utah (flat, even roads). or new york (extremely dense
> residential area, w/ old cable junctions). or silicon valley (the place
> DOES have fiber up the ying yang, along with earthquake-proofed streets
> and buildings). the geographical locations of millions of americans and
> australians would make fibre cabling one street a logistics nightmare,
> let alone 300 to 800 million users around the world.
>
> if you read bill gates' manifesto, the first one, called "the road
> ahead" way back in 1995, he planned something along the lines of
> low-orbit satellites delivering voice, video and data across a wide
> blanket of 2-way satellite services, entirely roaming and mobile. the
> idea was to have ~56 or so satellites at very low orbit levels so that
> microwave could reach the 2-way satellite dishes and give people what
> they get on cable, even while driving or at the office. then hook up
> fibre to neighbourhood nodes which would serve video and cable content
> 'on tap', as well as negotiate neighbourhood connections between nodes
> to share traffic. it's a utopian ideal. he also predicted micropayments,
> which are pretty much only in advertising, but 7 years on, most all of
> it is coming. slowly. the tablet pc. the microcash payments. the
> low-orbit satellite networks/wireless technology, etc. there's more but
> it's all gatesian. never read business at the speed of thought, but
> there's no prophesying involved from what i hear.

Ok, quick nasty calculation time...  Let's say $250 per switched port,
and $250 per card for the to-the-desktop 100Mbit fibre.  $16 per metre of
24-core fibre and 50m between houses (there's probably a better way of
doing this; call it $1200 for the 24 houses).  Another $2.50/m for 20m of
single-core to each house ($50/house).  That's (24*250 + 24*250 + 1200 +
50*24) = $14400 for 24 houses, or $600/house.
I'm investigating pricing for bandwidth on the Southern Cross network to
include in this calculation...  So far though, $600 for 100Mbit into a
gigabit backbone is looking a lot better price/performance than 11Mbit
wireless at $125/card into an 11Mbit backbone, wouldn't you agree?
And hybrid fibre-coax would be cheaper still, given that the equipment
already exists in a lot of places...  Perhaps hybrid fibre-twisted-pair
would be even cheaper, though a 'switch up a pole' every 24 houses could
be a logistical nightmare...
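
In case anyone wants to poke at the numbers, here's the same sum as a
quick Python sketch -- every price in it is my rough guess from above,
not a supplier quote:

# Rough per-house cost model for the guerilla fibre idea.  All figures
# are the hand-waved guesses from the paragraph above.
HOUSES = 24          # houses per shared 24-core backbone run
SWITCH_PORT = 250    # $ per switched 100Mbit port
NIC = 250            # $ per fibre card at the desktop
BACKBONE = 1200      # $ of 24-core fibre for the whole run (hand-waved)
DROP = 2.50 * 20     # $ of single-core fibre for the 20m drop per house

total = HOUSES * SWITCH_PORT + HOUSES * NIC + BACKBONE + HOUSES * DROP
print(f"total: ${total:.0f}, per house: ${total / HOUSES:.0f}")
# -> total: $14400, per house: $600

Bump HOUSES up or down, or argue with the $1200 backbone figure, and the
per-house cost moves around pretty quickly...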

> regardless of scaling a gigabit connection for 50,000 users in one suburb,
> which can be done easily enough. (yes truly), you would still need to hook
> customers up with technology easier to use than the common microwave oven.
> then connect fiber to their houses or lease them expensive wireless
> equipment. and manage unruly users on the network.

I'm not sure what you're trying to say here....  Scaling to gigabit cabled,
or wireless gigabit?
Please elaborate on this 'easy scaling' to gigabit for 50k users in a
suburb.  It sounds very attractive ;)


> it's more than a low-level network issue, it's a perception issue. not
> one person in australia has the resources to lay fiber optic cable to
> the front door of every household in one street block, let alone
> millions of houses without fiber optic in them. or the marketing to sell
> and advertise 802.11a/b devices to householders in one street and
> support staff to educate the users on how to utilize the hardware. the
> end-user is king in the market for communications dominance, and you
> have to pander to their wants and needs. which usually is the bad part
> of being a telecomms company.

If we go guerilla fibre to the desktop, rough calculations suggest we can
install the network for about the cost of a year's cable internet
access...
Yes, I do realise that this calculation relies on high-density sign-up --
which is doable, though it means a fairly impressive 'neighbourhood
co-operation' and doorknocking type scheme to get everyone onboard...
Finance could be looked at, for those with "$600 upfront" issues...

> and re: mojo. i'll get to that later.

Excellent :)


> > With a 'backbone' based network, you can forget pretty much all of
> > those except messaging, and chat, for a network of say 100 nodes+
> > (think about 10mbit ethernet, where half the bandwidth is used
> > already, and it's half-duplex, with a chain of backbone nodes (still
> > 10mbit, effectively 5) all trying to use the same, if not similar
> > slabs of bandwidth in the air...  It's going to fall over very
> > quickly...  And with 100 nodes, assuming 100% efficiency, in peak
> > times, the network's back to under the speed of a modem.  Now try and
> > scale the network to a thousand nodes.
> >
> actually, you can scale a large percentage of users in the way 802.11b
> network channels work. the majority of times, traffic can be held back
> or slowed down, and traffic gets off at certain stops along its journey.
> it's not exactly the same train that has to pass by stop b,c,d,e,f,g to
> get to stop z, it can negotiate across other links, move to other lines
> and other nodes, as long as it gets there in the same sequence at the
> end. video/voice is still adequate at a 30ms delay time to catch up or
> pre-buffer compressed H.263 signals which can handle signal degradation,
> lost packet order and still maintain quality. 802.11b networks also have
> a fairly high bandwidth per node, as long as the nodes don't conflict
> with each other too much. moving 5 AP nodes closer together might drop
> the whole network in that area to 1mbps, so there is a danger of
> over-access there.

'held back or slowed down' -- implies we need large buffers on the
backbone...  And that limits the usability of the network...  If ping
times go to 30 seconds, the usefulness drops until the only application
that's really useful is shifting bulk data around -- exactly the type of
traffic that's likely to _cause_ the congestion.  So effectively, games,
IP phone, IRC, chatting, all get DoS'd by people trading mp3s.  If we
don't mind doing nothing but trading mp3s, then fine, just do a blind
queue.  Or is there another solution that you're proposing to this
problem that I'm not reading into your response properly?
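
To put some toy numbers on the 'blind queue' problem, here's a little
Python sketch comparing FIFO against a two-class priority queue -- the
packet sizes and link speed are illustrative assumptions, not
measurements off real 802.11b gear:

# One interactive packet arrives behind 200 queued bulk packets.
import heapq

LINK_BYTES_PER_SEC = 125_000                 # ~1Mbit/s effective backbone

queue = [(1, i, 1500) for i in range(200)]   # (priority, seq, bytes): bulk
queue.append((0, 200, 200))                  # interactive packet, high pri
heapq.heapify(queue)

# blind FIFO: the interactive packet waits behind all 200 bulk packets
fifo_wait = 200 * 1500 / LINK_BYTES_PER_SEC

# priority queue: lowest priority value pops (transmits) first
priority, seq, size = heapq.heappop(queue)
prio_wait = size / LINK_BYTES_PER_SEC

print(f"FIFO wait: {fifo_wait * 1000:.0f}ms, "
      f"priority wait: {prio_wait * 1000:.1f}ms")
# -> FIFO wait: 2400ms, priority wait: 1.6ms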

> the other problem of scalability is that of the 90/10 rule. 90% of the
> network is used by less than 10% of the users; inversely, 90% of the
> users use less than 10% of the network. welcome to scalability. it's not
> how you manage an all-out broadcast war, it's how you manage for
> everyday, the worst-possible day, and the 'i love you' bug/Code Red
> days.

If the broadcast zone is an _entire city_, a broadcast war could use all
the bandwidth at choke-points across the network.  Managing 'Code Red'
type days shouldn't be an issue with mojo either -- if the user has
'mojo' on the packet, then whoever's transferring the data for them gets
compensated in 'priority traffic rights' for carrying the bug.  As the
user doing the flooding runs out of mojo, they lose priority access.  So
people who still have mojo left will keep low-latency access to the
network.
If you have other solutions, please present them.
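
For concreteness, a minimal sketch of the accounting I have in mind --
the names and starting balances are hypothetical, and a real design would
obviously need tamper-proof balances rather than a dict:

mojo = {"flooder": 10, "gamer": 50}           # starting balances

def forward(sender, forwarder, cost=1):
    """Forward one packet; returns True if it travels with priority."""
    if mojo.get(sender, 0) >= cost:
        mojo[sender] -= cost                   # sender pays for priority
        mojo[forwarder] = mojo.get(forwarder, 0) + cost  # forwarder earns
        return True
    return False                               # out of mojo: best-effort

# a Code Red style flood burns the flooder's balance quickly...
for _ in range(15):
    priority = forward("flooder", "relay-node")
print(mojo["flooder"], priority)   # -> 0 False: flood demoted
print(mojo["gamer"])               # -> 50: legitimate users keep priority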


> all have unique network activity and can be managed in different ways.
> even if everyone was on a 1mbit network, a network admin could scale the
> impact of each of these events without touching a single machine,
> through planning.

Please elaborate on this.

> for starters, full-duplex 1mbit internet can handle 50 ppl. with a bit
> of occasional discomfort. if everyone on the mesh starts doom 3 and it
> uses something like 900kbit/sec, then the network is going to react.
> badly. but if i'm sending hawthorn-4 an email while bri-2 sends his 2gb
> wedding dvd to joe in mornington-7, the network should give and take
> where it needs to.

If you have a user trying to transfer their home movies to their grandma
across the city, and there's no differentiation between that traffic and
your latency-sensitive traffic, then it'll all get queued together, and
the latency will increase without bound as the bandwidth comes under more
and more demand, until the routers run out of buffer space and drop
packets (i.e. infinite latency).  The "solution" you just described
doesn't sound like it solves the scalability problem to me.  And dropped
packets cost a lot, since that traffic must then use **even more
bandwidth** to be transferred through the network again, all the way from
the originator.
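
Toy numbers again, all assumed (1Mbit link, 10% sustained overload, 500kB
of router buffer):

LINK_BPS = 1_000_000       # 1Mbit/s effective link
OFFERED_BPS = 1_100_000    # sustained 10% overload
BUFFER_BITS = 4_000_000    # 500kB of router buffer

backlog = 0
for t in range(1, 3600):
    backlog += OFFERED_BPS - LINK_BPS   # +100kbit of backlog every second
    if backlog > BUFFER_BITS:
        print(f"t={t}s: buffer full, packets drop; "
              f"delay hit {backlog / LINK_BPS:.1f}s")
        break
# -> t=41s: buffer full, packets drop; delay hit 4.1s

And once drops start, the retransmissions add to the offered load, which
only makes it worse...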

> auto-magically.

Uh huh...  Automagically stops working...

> tcp/ip has a knack for doing that kind of thing. and we're not dealing
> with radio or half-duplex networks. but i get your point, that 11mbps is
> deceptive. 500kbps will be the absolute most one could expect in
> realistic traffic loads on a wireless network. maybe more at night.

That's not so much my core point.  Regardless of the total bandwidth we
have available, we need a way to carry low-latency traffic on the network
without simply banning any high bandwidth applications.  And it should be
done in some sort of 'fair' way.  This is why I proposed the concept of
'mojo' (still looking for another name...).


> > So to give you a general picture, the packet would be broadcast, and
> > then each cell in the rough direction of the destination cell would
> > rebroadcast the packet, and so on until it converges on the
> > destination node's GPS co-ordinates.
>
> so, in your utopian-private-network, for every single client in an area
> where the node loses contact and re-establishes itself, the node would
> have to ... send data across all the routes to a 'central authority' for
> a defined location, change it, and that location would then either be
> stored and accessed in a DNS type fashion, or broadcast to all routing
> nodes. i called on your DNS analogy here.

(The U stands for ubiquitous, not utopian...  utopia is an unreachable
ideal and can't happen...  ubiquity is possible, utopia realistically
isn't.)
When a client *moves*, they'd have to retransmit their location back to
some fixed 'home' node for that device.  Whether it's DNS-like or not, I
haven't decided yet.  I haven't found an alternative that solves the
scalability problem while still allowing nodes to roam on the network.
If you can see one, please pipe up :)
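
Something like this, as a very rough sketch -- the registry layout and
names are invented, and the real thing would need authenticated updates
so nobody can hijack your location:

home_registry = {}   # device name -> (lat, lon) of last reported position

def register(device, lat, lon):
    """Called (via any nearby node) whenever the device moves."""
    home_registry[device] = (lat, lon)

def locate(device):
    """Called by a sender before geographic routing begins."""
    return home_registry.get(device)

register("bens-laptop", -37.82, 144.96)   # boots up at home
register("bens-laptop", -37.75, 145.00)   # ...then roams across town
print(locate("bens-laptop"))              # -> (-37.75, 145.0): latest fix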


> for starters, what distinct advantage does 'discovery broadcast'
> traffic have over RIP or metrics in OSPF, especially if 'primary
> services' nodes are out-of-reach?  does the client and router just hold
> for 5 minutes while it decides to allocate you a temporary encrypted
> login/address or does it just 'hang' and get back to you when the link
> comes back on.  important details when dealing with an unstable network
> interface.

I think you're misinterpreting what I'm proposing.
The network is discovered by transferring data in the physical direction
the data needs to travel, extracting state information out of the
transport layer for use in routing decisions in the network layer.
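
In other words, greedy geographic forwarding, roughly like this sketch
(topology and co-ordinates invented for illustration -- a real mesh also
needs a recovery strategy for the 'dead end' case, where no neighbour is
closer):

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# node -> (x, y) position in km, and who each node can hear
positions = {"A": (0, 0), "B": (1, 0.2), "C": (2, -0.1), "D": (3, 0)}
neighbours = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}

def route(src, dst):
    path, here = [src], src
    while here != dst:
        # greedy step: hand to the neighbour closest to the destination
        nxt = min(neighbours[here],
                  key=lambda n: dist(positions[n], positions[dst]))
        if dist(positions[nxt], positions[dst]) >= dist(positions[here],
                                                        positions[dst]):
            raise RuntimeError("dead end: no neighbour is closer")
        path.append(nxt)
        here = nxt
    return path

print(route("A", "D"))   # -> ['A', 'B', 'C', 'D']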


> the other question is, at what rate does roaming across nodes cost you
> mojo?  the mojo thing spooks me in the wireless aspect, because you
> assign gratuity for good behaviour, but in the real world, obligatory
> good behaviour is about as far below realistic as telstra is about data
> pricing.  any altruistic system can be cheated, and with mojo,
> non-participation is probably one of the first problems to address.

Roaming is no different to being at a fixed location.  Using highly
congested links costs you more mojo; less congested, less.
Please, if you can see a way to cheat the mojo design I've proposed so
far, speak up, and I'll see if I can incorporate protection against it
into the design.  Now is the time to solve these issues :)
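
The congestion pricing could be as simple as this sketch -- the exact
curve is a strawman I've made up for illustration, not a settled part of
the design:

def link_cost(utilisation, base=1):
    """Mojo cost of one packet over a link at utilisation 0 <= u < 1."""
    if not 0.0 <= utilisation < 1.0:
        raise ValueError("utilisation must be in [0, 1)")
    # price climbs steeply as the link approaches saturation
    return max(base, round(base / (1.0 - utilisation)))

for u in (0.1, 0.5, 0.9, 0.99):
    print(f"utilisation {u:.0%}: {link_cost(u)} mojo")
# -> 1, 2, 10, 100: idle links are nearly free, saturated links are dear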


> > Broadcasting the entire network topology to the entire network when
> > there's routing changes...  sheesh, the mind boggles..  We've only got
> > 11meg...
> > It's not about the routing info that needs encrypting....  it's the
> > payload.  It should be a default.
> [snip]
> >
> > If you use GPS as the routing decision metric, mobile units would need
> > to be able to know where they are, and then also send info back to a
> > known 'home node' or a 'dns server' - just some repository where the
> > actual current gps co-ordinates of that device can be located.  This
> > could be a good reason to encrypt that particular data :)
> >
> [snip]
> > Having it as default unencrypted is a problem in that there is a lot
> > of data that's being passed around between nodes, and we don't have
> > 'carrier licences' to protect us from the law...  if someone sends
> > kiddieporn through your node, it's probable that you'll be considered
> > personally responsible for that data.  I'm not a lawyer, so I'm not
> > 100%, but I'm pretty sure that's how it works....  if everything's
> > unencrypted by default, when something encrypted hits, with x and y's
> > gps locations, one can use that info to extract a lot of information.
> > If however, the packet is encrypted by default, and the network sends
> > a lot of traffic for others around, it's going to be very difficult to
> > figure out if x is talking to y, or if they are just passing on
> > someone else's data...  It's been tested somewhere, I forget where...
> > but encrypted kiddie porn is not kiddie porn without the key to unlock
> > it.  So you can pass it around all you want until someone has a key
> > for it, and then it's illegal...  Again, I'm not 100%, and don't want
> > to test the theory, which is why I think we need a default encryption
> > library...  Hell, it's only 11mbits, even 500Mbytes/sec isn't that
> > hard to encrypt in real-time these days...
> >
> > I don't want tunnels.  I want the network layer to deal with all this.
> > Since the bandwidth is broadcast, you can't stop kiddies using a
> > different routing scheme and dropping the throughput on 'our' public
> > network enormously.  And on what metric are you going to base this QoS
> > -- how are you going to decide who gets priority?
>
> 1. routing traffic is nothing. absolutely nothing, not even a meg a day.
> per-second updates are useless, but every 5 minutes a packet travels
> from the nodes to the central area to make sure they are alive, and
> that's about it.

I think you're talking about the amount of routing traffic on fixed
networks with basically no changes.  Wireless is a whole different kettle
of fish.  Imagine a single node moving through the network: each time it
can see a different router, that change has to be broadcast *over the
entire scope of the network*, or the part of the network that didn't hear
the update *can no longer see that device*.  Now instead of a single
moving device, think about joe-average driving to work with his laptop --
a large scale migration...  there's going to be lots of large routing
changes, really fast.  I.e. we're now talking a **LOT** more than a meg a
day in routing traffic.
Of course, if you have a mystical magical solution to deal with this in
less than a meg a day, please tell us!!
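
A quick back-of-envelope, where every input is an assumption you're
welcome to argue with:

COMMUTERS = 1_000        # laptops on the move in the morning peak
HANDOVERS_PER_HR = 60    # each sees a new router about once a minute
ROUTERS = 500            # flood scope: every router hears every change
UPDATE_BYTES = 100       # one small link-state update per change

per_hour = COMMUTERS * HANDOVERS_PER_HR * ROUTERS * UPDATE_BYTES
print(f"{per_hour / 1e6:.0f} MB/hour of routing updates")  # -> 3000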


> 2. don't bring kiddie porn into a serious discussion, even if it's for
> a good reason or about illegal stuff. it is a base argument, and it
> devalues the whole area ethically and socially. if the shit hit the fan
> legally, anyone on the network could be arrested for possession and
> broadcast, encrypted or not. there is no precedent for publishers of
> content over a public carrier license like there is with ISP's. making
> this unilaterally about child porn is monstrosizing the situation, in
> which any other material could be used that's slightly illegal. for
> instance if DeCSS is transmitted, the MPAA could sue us all for
> broadcast of illegal and copyright material. i just don't know the
> ramifications of broadcast law and data regulations. it was one of the
> things that killed digital TV and data-casting: responsibility of
> content. and some other issues i've long since forgot.

Hmmm, another example...  DeCSS can be encoded as a prime number.
Numbers are not illegal, and it doesn't make sense to make them illegal.
When data is encrypted, it becomes just a number without the key.  I'm
pretty sure there's been a test case on this, which means the liability
can be effectively dissipated.  Or at the very least, assuming it still
is illegal, the nodes are protected by the security of the encryption --
nobody knows what the data is unless they have a key.  It's going to be
difficult to prosecute someone over a number when it's
difficult/impossible to prove what its contents decrypt to.


> 3. routing is easy. hardware routers and AP's can be configured to drop
> data from the hardware address level quickly. if it's not on the same
> SSID, it's not routed. it's not even recognised. if WEP or an equivalent
> software encryption was formed to send packets in an 802.1q VLAN type
> environment, then it would be harder just to 'log in', but for the sake
> of troubleshooting, 802.1q tunnels are far, far easier to implement in
> windows and linux. if a kiddie can get onto the VLAN, they still have to
> route traffic through a secure LAN router or plain AP, with speed
> restrictions on encrypted and non-encrypted traffic. it would reduce
> real node availability if hardware AP's could not route encrypted
> traffic though.

Umm, I'm talking about encryption at the network layer of the OSI model,
not the application layer or transport layer, which you seem to be
referring to.


> there is a reason for high-priority traffic on any network, and that is
> for practical traffic needs, such as routing, dns, and some entry-level
> services. of course, QoS is a stop-gap for network abuse by larger
> authorities and smaller authorities. it doesn't prevent people from
> launching attacks, it prevents people from making mistakes with shared
> bandwidth. priority is, as always, shared. it's a concept that few
> promote, but basically everyone's access gets shafted in order to help
> out the collective.

QoS by 'mojo' payments stops 'attacks', as all mojo is accounted for:
run out of mojo and you lose priority.  Priority traffic then talks over
the top of any DoS packets.  A DoS would only limit the amount of 'free'
bandwidth -- so leechers who provide nothing could potentially have all
access cut off by an effective DoS attack.  We could make it a three
layer priority system, but I don't think the effort is worthwhile;
non-priority traffic 'deserves' to get dropped IMO ;)

> in the realistic case of network utilisation, the worst case scenario
> comes into play, so only a 1mbps channel becomes the effective backbone
> of the network. any apps or routes would be using that to route traffic
> between nodes, though individual nodes would be able to route traffic
> differently with a higher metric. given enough planning this could be
> raised to an overall 2mbps and the distance between nodes shortened,
> but as you say, with 500 users sending data on a network that can only
> support 50kb/sec, then there's issues.

Regardless of the total bandwidth of the network, deciding which traffic to
route, and which to drop is still something that has to be done.


> it sounds to me as if you are an experienced electronics engineer,
> because you have the knowledge of how these things route, however you
> have not fully grokked the concept of using or abusing such a network.
> and trust me, once it becomes more than paper and words, abuse is a much
> more realistic outcome than standard usage.

Give me examples of abuse, and I'll design defenses into the initial
specification.


> > How are you proposing to manage said network -- it sounds like
> > there's a lot of manual configuration necessary for your solution
> > (and I'm not confident it will scale either).  I'm trying to keep
> > everything automatic...
>
> having an encrypted 'cloud' of information, rather than a 'mesh' of
> clients is less practical. it would be purely chaotic to organise, which
> sounds like fun from a user/owner perspective, since cryptography and
> access controls plus inherent privacy would keep users and locations
> private. but all it takes is one user with (legal) access to a node and
> a powerful enough machine to break the encryption to search out for GPS
> signals in the cloud to locate data streams. your "kiddie porn"
> user/server would then be located down to the square meter. anyone near
> a computer in that square meter would be guilty of broadcasting that
> service, instant guilt by association, no lawyer would refute it or
> defend against it. the encryption 'cloud' idea is not only a privacy
> minefield, GPS is fraught with complications for security. add that to
> the legal haze surrounding broadcasting regulations, and you see that
> mixing GPS and wireless networks is plainly hazardous. of course, GPS is
> highly optional, unless you want to restrict the clients who would have
> access.

Powerful enough machine == all the computers in the world for hundreds
of years (including machines from the future that follow Moore's law),
which means the network will work for now.  Yes, the encryption will need
to improve along with the speed of computers to maintain the safety of
the network's nodes.  High speed, high security encryption is not beyond
cheap technology these days: a Pentium 200 can encrypt DES at around
10Mbit/sec.
A node holding kiddie porn in **no way guarantees** that the nodes around
it are guilty of broadcasting it.  The data could have come on CD, on a
wired network, on a roaming wireless node...  And being encrypted, unless
someone is sitting there taking a copy of the network layer data, it's
going to be basically impossible to prove beyond any kind of reasonable
doubt.
I realise GPS has privacy issues, but I'm at a loss to find an
alternative technology that both allows the network to scale and protects
privacy effectively.
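
If anyone wants to sanity-check the throughput claim on their own
hardware, a sketch along these lines will do it (assumes the
pyca/cryptography Python package is installed; I've used AES rather than
DES, since DES is obsolete):

import os, time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(32), os.urandom(16)
encryptor = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()

payload = os.urandom(1_000_000)          # 1MB of pretend packet data
start = time.perf_counter()
for _ in range(100):                     # push 100MB through the cipher
    encryptor.update(payload)
elapsed = time.perf_counter() - start

print(f"~{100 * 8 / elapsed:.0f} Mbit/s "
      f"-- compare against the 11Mbit air interface")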

> if you did implement compulsory GPS, and you didn't provide every user
> with GPS location devices, users would learn how to fudge erroneous GPS
> figures to gain access, or even, higher mojo. is this possible? sure is.
> if the routing of packets was informally destined to whoever is able to
> route the packets, a false GPS location could route traffic around your
> GPS node-tree and cause the node-tree to lose coherence, as well as
> interfere with mojo for legitimate users.

Fudging erroneous GPS co-ordinates simply **will not work**, as it means
that packets to that node will go to the wrong physical location -- i.e.
the node 'faking' would never get to hear a response.  And if they did
have another node on the network listening at the 'faked' physical
location, then either they're 'paying' for premium access with mojo
(through the entire route they have to traverse to bounce from the
'faked' address back to their correct physical address), or their access
will be reduced as the network runs out of bandwidth and drops their
packets.
I don't understand how this is going to interfere with mojo for
legitimate users at all.
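
Tacking onto the greedy-forwarding sketch from earlier, here's the
faker's problem in miniature -- positions and radio range are made-up
numbers:

import math

real_position = (0.0, 0.0)    # where the faker's radio actually is
advertised = (10.0, 0.0)      # the co-ordinates it lies about
RADIO_RANGE_KM = 0.5

# replies are greedily forwarded toward the *advertised* position, so the
# faker only hears them if its real radio is within range of that spot
hears_reply = math.dist(real_position, advertised) <= RADIO_RANGE_KM
print(hears_reply)   # -> False: fake co-ordinates, no responses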


> > Thanks for your opinion...  I'm not sure I've understood fully what
> > you've been talking about...  Respond to my responses, I'm just
> > trying to figure out which parts of this idea I have to do more
> > design work on...  It seems to work in my head, and on the
> > simulations I've done...  it seems to all work fine...  But I've
> > probably left some things out of consideration, and hopefully this
> > peer-review will sort out if i'm just smoking good crack, or actually
> > on to something...
>
> keep working on it, let me know how you get the snags and the details
> kinked out.

Give me more snags and details.  That was the point of putting it out there.


> put up a /wifi & mojo/mofi-jowi/mofo-fiji/fiji-mofo/ wikiboard page with a
> quick teaser paragraph on mojo-currency and mojo-credits for other users.

Yep, good idea...  I should figure out how the wiki thing works ;)

> also put up what you have so far for the fiji-mofo :), in terms of
> network availability, routing, encryption, and how data flows from
> point A to point D in the network if points B and/or C go down
> temporarily. and what effect that has on mojo.

Perhaps once I get a bit more time...

Ben.


--
To unsubscribe, send mail to minordomo at wireless.org.au with a subject of 'unsubscribe melbwireless'  
Archive at: http://www.wireless.org.au/cgi-bin/minorweb.pl?A=LIST&L=melbwireless
IRC at: au.austnet.org #melb-wireless


