[MLB-WIRELESS] Re: mojo -was- structure

bede bede at gisbornehotel.com
Wed Mar 20 09:10:33 EST 2002


While it's an interesting idea, it's flawed and is going to be too awkward
to implement and police.

That's mainly because there's no default gateway or firewall for all
traffic to pass through, so my personal link to node X wouldn't end up
getting me credits in the mojo system.

On top of that, no one is running the same setup, so whatever mojo client
program ends up having to be written is going to run into trouble, what
with some people running APs and others *nix boxes with wireless NICs...

The other flaw is: why would members decide that your traffic passing
over their hardware is more important than theirs?

This is a free system, so any QoS you get is whatever is available at
the time.


As far as realtime video/sound goes, for video conferencing NetMeeting
runs pretty well on a 64 kbit/s link, and voice-only is much lower again.
Frankly, 200 kbit/s is some kind of fat-ass application that at the
moment shouldn't be running on this system unless you set up or work out
a dedicated link across town.
I don't feel running 30 fps video conferencing is a good use of the
system at the moment, unless you're planning on doing phone sex, where
high quality video is possibly a plus.
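
To put rough numbers on that -- a back-of-the-envelope sketch, assuming
roughly 4 Mbit/s of usable throughput on a shared 802.11b channel, which
is an assumption rather than a measurement of our links:

    # Rough capacity arithmetic for one shared 802.11b channel.
    # ASSUMPTION: ~4 Mbit/s usable out of the nominal 11 Mbit/s once
    # protocol overhead and collisions are taken into account.
    usable_kbit = 4000

    for name, per_stream_kbit in [("NetMeeting video", 64),
                                  ("voice only", 20),
                                  ("fat 200 kbit/s video", 200)]:
        print(name, "-> roughly", usable_kbit // per_stream_kbit,
              "concurrent streams")

Which is presumably roughly where the 20-user figure in Ben's mail below
comes from.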

Bede



Ben Anderson wrote:
> 
> > >   Ben> As far as I see it, we can either have a 'mojo' like system, or
> > >   Ben> have a "test" that people have to take before they get to use
> > >   Ben> the network to guarantee that the people are altruistic enough
> > >   Ben> to donate to the system when they don't have to.
> >
> > but if we are only building a network for people who are "altruistic
> > enough", how altruistic is that?
> 
> It's not, and that's the point.  I think it's implicitly self-defeating.  It
> won't scale relying on altruism.
> 
> > >consequently, the owner of the busy node will accumulate lots of mojo,
> > >and will be able to afford to put his/her packets at the head of the
> > >queue, thus ensuring reduced latency.
> >
> > queuing should be by type of traffic, not by the "worth" of the sender.
> 
> So if everyone starts making video phone calls at 200 kbit/s... the
> latency goes up, and the absolute maximum is 20 users (realistically
> more like 12-15).  Even at more respectable voice bandwidths of 20 kbit/s,
> there's only room for a realistic 120-150 users before the whole network is shot.
> And traffic type is easy to circumvent.  The low latency stuff is needed by
> *lots* of services: chatting, games, voice, video conferencing, etc, etc...
> there's lots that need interactive performance.  The only way to get
> reasonable interactive performance is to have a guaranteed QoS metric
> (as in ATM-style controls) or to ensure that there is enough bandwidth
> that the network never becomes overloaded.
> As the network gets larger, latency in general will increase.  More nodes in
> an area, more hops, more collisions, etc.
> And how then do you propose to stop the 20 interactive users
> videoconferencing from using all the interactive, low latency bandwidth and
> effectively denying the network any high bandwidth, high latency services?
> And how are you going to guarantee that what says it's a game protocol, or
> chat protocol, is actually that, and not someone tunnelling mp3s through it?
> The DoS possibilities in traffic class queuing are just mind-boggling.
> If you have a solution for these issues within class based queuing, please
> tell us -- I'd prefer it not to be based on a 'mojo' 'payment'-style
> structure too.  I just haven't found anything else yet that even gets close
> to scaling.
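
To make that contrast concrete -- a purely hypothetical sketch, not
anyone's actual client code -- a class-based queue has to trust whatever
class a packet claims to be, while a mojo-style queue charges the sender
for priority, so mislabelling traffic costs mojo instead of jumping the
queue for free:

    # Hypothetical sketch of two ways a relay node could order packets.
    def class_based_order(packets):
        # Lower claimed class = higher priority; the label is taken on faith.
        return sorted(packets, key=lambda p: p["claimed_class"])

    def mojo_order(packets, balances):
        # Priority is what the sender can actually pay per byte carried.
        def bid(p):
            affordable = min(p["offered_mojo"], balances.get(p["sender"], 0))
            return -affordable / p["size_bytes"]
        return sorted(packets, key=bid)

    packets = [
        # mp3 data marked as "voice" (class 0), offering no mojo
        {"sender": "mallory", "claimed_class": 0, "offered_mojo": 0,
         "size_bytes": 1500},
        # a genuine voice packet whose sender offers 1 mojo
        {"sender": "alice", "claimed_class": 0, "offered_mojo": 1,
         "size_bytes": 200},
    ]
    balances = {"alice": 50, "mallory": 0}

    print([p["sender"] for p in class_based_order(packets)])
    # -> ['mallory', 'alice']: the mislabelled mp3 rides the voice class free
    print([p["sender"] for p in mojo_order(packets, balances)])
    # -> ['alice', 'mallory']: alice goes first because she actually pays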
> 
> > the above is actually a _dis_incentive for that person to invest in
> > extra bandwidth in their region
> 
> Reasonable point.  But...
> 
> If there's a bottleneck, and the value of mojo in that region goes up,
> people in that region will be inspired to capitalise on that by trying to
> shift some of the bandwidth away from the overloaded section of the network.
> Competition theory.
> 
> More mojo will be paid for low latency traffic, and so an overloaded node
> should actually make less than a high bandwidth, low latency node.
> Similarly, a low latency, low bandwidth node is more valuable to some
> traffic than a high bandwidth, high latency node.  And class based queuing
> very rarely takes this into account multiple hops later.  The best metric is
> made on a router-to-router basis, not a complete route-path basis.
> 
> There are a lot of bandwidth/latency/mojo-worth tradeoffs that could
> potentially be made, and that's why I'm proposing simulation to try and
> discover a tradeoff that is both fair and scales well, while limiting
> "mojoless" access as little as possible.
> 
> Ben.
> 

--
To unsubscribe, send mail to minordomo at wireless.org.au with a subject of 'unsubscribe melbwireless'  
Archive at: http://www.wireless.org.au/cgi-bin/minorweb.pl?A=LIST&L=melbwireless
IRC at: au.austnet.org #melb-wireless


