
Improve network efficiency by finding peers "near" you


teedog


I recently came across this very interesting plugin for Azureus being developed at Northwestern University. It is called Ono.

http://www.aqualab.cs.northwestern.edu/projects/Ono.html

As far as I understand it, the plugin chooses clients that are near you (in the network sense) when possible, since closer peers probably result in better performance. It does this by using existing data from content distribution networks. Here's the official description:

For most P2P applications, the decision regarding which peer to download from is generally arbitrary. When most peers offer good download performance, the random solution works well. However, if most peers are in a different part of the world from you, your downloads can really suffer.

The Ono plugin avoids this by proactively finding peers that are close to you (in a networking sense). These peers generally offer better response time, which can lead to significantly improved performance. We identify those peers that are near you by reusing network measurements from content distribution networks (CDNs), i.e. without performing extensive path measurement or probing.

Some FAQs:

# Does this really improve download performance? In our experiments, the simple approach of using nearby peers does in fact improve download performance, even when the peer to which you connect has saturated its upload bandwidth capacity. Don't tell anyone, but we suspect that this has to do with the way requests are serviced in Azureus, although it can also apply to other BitTorrent clients.

# I thought Azureus was already doing network positioning. Why use Ono? Well, as Ledlie et al. have shown, Azureus network coordinates are, to put it mildly, terribly inaccurate. In our own independent measurements, we found that only 10% of the network coordinates had less than 10% error. More than 60% had errors of 100% or more!

# Why not just use class C subnets, AS numbers or measurement-based techniques for figuring out peer locality? While heuristic-based approaches such as class C subnets and AS numbers also scale well, the position information gained through them is not terribly useful and does not take into account dynamic network conditions. Ono, on the other hand, finds peers that are near one another by relying on preexisting infrastructure (CDNs) that performs extensive Internet measurements. Results from our early experience with this technique show that CDN-based clustering of peers is quite effective in practice.
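To give a rough idea of how the CDN trick could work, here is a quick Python sketch of my own understanding (not the actual plugin code; the Akamai hostname and the 0.15 threshold are just placeholders): each client counts which CDN edge servers it gets redirected to, and two peers whose "ratio maps" look similar are treated as near each other.

import socket
from collections import Counter
from math import sqrt

# Hypothetical CDN-hosted name used only for illustration.
CDN_HOSTNAME = "a1105.g.akamai.net"

def observe_cdn_redirection(counts: Counter, hostname: str = CDN_HOSTNAME) -> None:
    """Resolve the CDN name once and count the /24 of each edge server returned."""
    try:
        _, _, addresses = socket.gethostbyname_ex(hostname)
    except socket.gaierror:
        return
    for ip in addresses:
        prefix = ".".join(ip.split(".")[:3])  # group edge servers by /24
        counts[prefix] += 1

def ratio_map(counts: Counter) -> dict:
    """Turn raw counts into the fraction of lookups that hit each edge cluster."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()} if total else {}

def cosine_similarity(a: dict, b: dict) -> float:
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def is_near(my_map: dict, peer_map: dict, threshold: float = 0.15) -> bool:
    """Peers exchange ratio maps; similar maps suggest the same CDN cluster serves both."""
    return cosine_similarity(my_map, peer_map) >= threshold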

Any chance uTorrent might implement a similar system?


Well, peer proximity may not increase speeds directly for ONE torrent client, but it WILL increase ISP tolerance of the BitTorrent protocol on non-cable networks.

I live in Denmark, and all in-country traffic is free of network boundary charges for all the Danish ISPs.

Neighbouring-country and European traffic does come with a per-MB charge between the ISPs/networks, but it is cheaper than the per-MB charges between ISPs/network operators for data crossing the Atlantic.

So while peer proximity may not change speeds directly, it may have an indirect effect: ISPs would have less economic incentive to limit heavy bandwidth users, allowing more ISPs to permit full BitTorrent traffic speeds. That would indirectly allow better "local" speeds, which in the end would show up as an increase in total BitTorrent protocol traffic.


Even the good cable networks now typically have more local bandwidth than internet bandwidth, sometimes even to the point that every cable modem could be going at full rated speed (to each other at least) without any problems.

The problem with torrents is rarity and timing. What are the ODDS that your neighbors are trying to download the same torrents... and at the same time as you? It might only happen on torrents with seeds+peers in the tens of thousands, or ones so huge that they're left running for days.

ISPs are unwilling to adapt their infrastructure to an internet that's trying to escape from the past's text-based webpages to interactive graphics and sound. The bandwidth demands will only go up as we go from tiny window-sized blurry pictures to full-screen streaming hi-def video+sound.

If ISPs think BitTorrent is bad, they're REALLY going to hate real-time streamed "web TV".


Timing is problematic, yes, but when something gets released, the demand for that particular torrent within a very short time slice is very high, so the chance that you and your neighbour (or someone else in your town/city) are downloading the same thing increases. And these "internal" networks tend to be very big (big ISPs, high user counts), so there's a chance...


I support this request, but only for downloading a torrent! It should still be possible for others to connect from anywhere and not get excluded through this.

The only real advantage this option has is to restrict the traffic to a certain area. It does not guarantee high speeds! You may end up with many close but slow clients, just as you can otherwise find a fast seed on the other side of the world. The TTL (time to live) of a network packet only matters for latency, and P2P is not so much about latency as it is about throughput...

If one wants to limit the traffic to a specific network, then using network masks would be of better use. If you want to find the fastest clients, you should simply go with those that respond fastest as well as those that transfer a lot. Any other method will rather cut you off or cause extra work before a transfer can begin.
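For what it's worth, the network mask check itself would be trivial; a rough Python sketch (the ranges are documentation examples, not real ones) that prefers nearby peers without excluding anyone:

import ipaddress

# Example ranges only (documentation prefixes); a real setup would use the
# user's own network masks.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def peer_is_local(peer_ip: str) -> bool:
    """True if the peer falls inside one of the configured network masks."""
    addr = ipaddress.ip_address(peer_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

def prefer_peers(candidates):
    """Put local peers first instead of excluding anyone, as argued above."""
    return sorted(candidates, key=lambda ip: not peer_is_local(ip))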


  • 1 year later...

Hello,

Finding nearby peers was my idea as well. Recently I came to a better (for me) one: setting different speed limits for different IP address ranges.

Please think about the following approach:

Traffic engineering with at least two traffic classes, each having individual UL/DL speed settings and an individual ipfilter.dat-like IP network/address list file. It should be possible to configure the IP address match logic so that include/exclude is possible (two files for each filter?). IP QoS (DSCP) support and settings for each class would be nice to have too, but this is not a priority for me at the moment.
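To make it more concrete, here is a rough Python sketch of what I have in mind; the file name, limits and one-network-per-line format are just examples:

import ipaddress
from dataclasses import dataclass, field

def load_networks(path):
    """Read one network per line (CIDR notation) from an ipfilter.dat-like file."""
    nets = []
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()  # allow comments
            if line:
                nets.append(ipaddress.ip_network(line, strict=False))
    return nets

@dataclass
class TrafficClass:
    name: str
    ul_limit_kbps: int
    dl_limit_kbps: int
    include: list = field(default_factory=list)  # networks this class applies to
    exclude: list = field(default_factory=list)  # carve-outs inside the include list

    def matches(self, peer_ip):
        addr = ipaddress.ip_address(peer_ip)
        if any(addr in net for net in self.exclude):
            return False
        return any(addr in net for net in self.include)

# Example wiring: a "local" class fed from a (hypothetical) list file and a
# catch-all "international" class; the limits are made-up numbers.
CLASSES = [
    TrafficClass("local", ul_limit_kbps=2000, dl_limit_kbps=20000,
                 include=load_networks("local_ranges.txt")),
    TrafficClass("international", ul_limit_kbps=20, dl_limit_kbps=80,
                 include=[ipaddress.ip_network("0.0.0.0/0")]),
]

def class_for(peer_ip):
    """First matching class wins, so the catch-all must come last."""
    return next(c for c in CLASSES if c.matches(peer_ip))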

Regards,

Ogden


The problem with most methods of finding semi-local peers is, once again, that ISPs often have IP ranges all over the place. Without documentation of these complex inner networks, file-sharing programs can only guess... perhaps based on ping times and techniques like Ono, which check which Akamai servers are nearest to them.

Ping times alone are not a good judge of what crosses "unwanted" peering boundaries.

And I doubt the Akamai servers will appreciate millions of BitTorrent clients pinging them to see which is "closest". :P


Hello,

I agree that a more or less working automatic discovery logic is nearly impossible to make, except perhaps by statistically gathering a list of fast peers, and local does not always mean fast or vice versa. Fast-peer-biased selection was suggested here before and, as far as I know, rejected. Anyway, fast peers are outside the scope of this discussion.

For instance, I do not need to discover local IP subnets, I already know them! The national backbone operator publishes a list. Other users can try something like this: http://www.ipaddresslocation.org/ip_ranges/get_ranges.php . There is a lot of BGP info available online too, but that is the advanced way.

If we assume that the list of local addresses is known and the user can write an ipfilter.dat-like file, then he only needs the ability to set individual UL/DL bandwidth settings for local and non-local peers. Personally I badly need such functionality. Who else thinks it would be nice to have?

Here is why I badly need to separate local peer traffic speed: my ISP has two uplinks, 1 Gbps to the national backbone and something like 10 Mbps international. My internet service is "international" speed limited/shaped to 100 kbps and nation-wide (local) speed of 20 Mbps, everything full duplex. My problem is that torrent activity usually clogs the 100 kbps international limiter, so to have a good browsing/whatever experience I have to limit torrent DL/UL speeds to something like 80/20 kbps. As a result I practically don't use my national 20 Mbps connection and effectively have 100 kbps "broadband" internet...

The reason why I can't use two uTorrents with different UL/DL bandwidth settings: the local tracker also contains international peers.
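In code terms, all I am asking for is two independent limiters; a rough sketch (the 20 Mbps / 100 kbps figures are from my connection described above, the range is a placeholder):

import time
import ipaddress

class TokenBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def try_send(self, nbytes):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False  # caller should retry this peer later

# Figures from my connection: ~20 Mbps national, ~100 kbps international (shaped).
LOCAL_BUCKET = TokenBucket(rate_bytes_per_s=20_000_000 / 8, burst_bytes=256_000)
INTL_BUCKET = TokenBucket(rate_bytes_per_s=100_000 / 8, burst_bytes=16_000)

# Placeholder; in reality this would be the published national range list.
NATIONAL_RANGES = [ipaddress.ip_network("192.0.2.0/24")]

def bucket_for(peer_ip):
    addr = ipaddress.ip_address(peer_ip)
    is_local = any(addr in net for net in NATIONAL_RANGES)
    return LOCAL_BUCKET if is_local else INTL_BUCKET

def send_block(peer_ip, payload, sock):
    """Send a piece block only if that peer's class still has budget."""
    if bucket_for(peer_ip).try_send(len(payload)):
        sock.sendall(payload)
        return True
    return False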

Regards,

Ogden


ogden said: "The reason why I can't use two uTorrents with different UL/DL bandwidth settings: the local tracker also contains international peers."

I don't see how that alone stops you. The two uTorrents can be connected to each other through localhost (127.0.0.1) and can share data, though they cannot download to the same location at once; that causes a file access violation in Windows.

The creation of specially crafted ipfilter.dat files will be the hard part... as currently you only have a whitelist, but uTorrent only works with blacklists.
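Inverting the whitelist into a blacklist is mostly mechanical though; a rough sketch (placeholder ranges, and double-check the exact ipfilter.dat line format your uTorrent build expects before relying on anything like this):

import ipaddress

IPV4_MAX = int(ipaddress.IPv4Address("255.255.255.255"))

def whitelist_to_blacklist(allowed_cidrs):
    """Return (start, end) integer pairs covering all IPv4 space NOT in the whitelist."""
    allowed = sorted((ipaddress.ip_network(c) for c in allowed_cidrs),
                     key=lambda n: int(n.network_address))
    blocked = []
    cursor = 0  # next unexamined address, as an integer
    for net in allowed:
        start = int(net.network_address)
        end = int(net.broadcast_address)
        if start > cursor:
            blocked.append((cursor, start - 1))  # gap before this allowed range
        cursor = max(cursor, end + 1)
    if cursor <= IPV4_MAX:
        blocked.append((cursor, IPV4_MAX))
    return blocked

def write_ipfilter(blocked, path="ipfilter.dat"):
    # One blocked range per line; some clients also want ", level , name" fields.
    with open(path, "w") as f:
        for start, end in blocked:
            f.write(f"{ipaddress.IPv4Address(start)} - {ipaddress.IPv4Address(end)}\n")

# Example: allow only two placeholder local ranges, block everything else.
write_ipfilter(whitelist_to_blacklist(["192.0.2.0/24", "198.51.100.0/24"]))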


Switeck,

Thank you for the suggestion. It's kind of like brain surgery through the ***hole :D Anyway, I will try to set it up and report my results & findings. What kind of data is exchanged between the two uTorrents? Finished parts too? Sorry that I am asking here, but where can I find info about interconnecting uTorrents?

Ogden


