
Figuring out nearest neighbors...



Looks like there's a plug-in for Azureus that piggybacks on the technique Akamai uses, to determine whether a peer is your neighbor:


Kind of neat.

You have to trust that Akamai is correct, but it seems like an easy way to test whether your peer is a neighbor without actually performing a traceroute.


You are right, one has to trust the CDN.

I think that Ono is a predecessor of P4P, and the problem with that (as discussed on other forums) is that content providers and ISPs are not very interested in private P2P, as there is no money to be earned.


> not very interested in private p2p as there is no money to earn

Traditional content distributors are trying to force the same financial model onto P2P technology. They can try to make it work, but I don't think they can do business as usual. A new business environment means you have to figure out how to make money differently.

I think distribution companies should:

1) embed advertisement within the content

-- product placement (e.g. Burger King in the new Iron Man movie).

-- mini-ads at intervals while the show is playing

-- this makes it much harder to trim out the advertisement, since the line between advertisement and content is blurred. The only issue I can see with embedded advertising is that if an ad turns out to be controversial, you can't remove it after the fact.

2) transfer content at high speeds

-- you beat the scene releasers to the punch and you can distribute your version of the content

-- why would you want to bother with a scene release when the content owner can seed at high speeds?

3) distribute movies & TV shows via newspaper by including a burned DVD in the Sunday paper. Newspapers would actually see a jump in subscriptions, while distribution companies could charge advertisers for the greater reach.


"2) transfer content at high speeds"

that's what I meant about what P4P may be used for: this traffic-shaping mechanism could later be used to favor commercial content delivery over private P2P traffic.

I just wanted to point out that, at the network level, it might be used to de-democratize the net in the future. (You might want to read the specs for P4P; I can't find the link, but you will find it.)

And it might sneak into the P2P community via a tool like Ono, which at first glance has only advantages for the P2P user...


> de-democratize the net

That's a separate issue.

My impression of P4P is that it keeps P2P traffic in-house as much as possible: use "close" peers before ones further away. Peers within the "same" network could share pieces faster, retrieving them from each other or from an ISP cache server. This reduces unnecessary traffic on the backbone and international links. It would interest me to see whether caching the top 100 TPB torrents reduces the traffic leaving and entering an ISP.


"That's a separate issue."

Why is that? (You probably haven't read the specs)

"... share pieces faster and ....from a ISP cache server..."

that is not true: one or two peers on broadband connections, no matter where in the world they are, give the download and the swarm a speed kick that 30 peers each uploading at 20 kB/s (with all the overhead) can't match.

ISP cache servers? Haven't seen those lately :-)

You are right that it would be a pleasure and a win (due to border-traffic reduction) for ISPs to have it that way.

I understand and respect your position, and if you want, try it (but please read the disclaimer for the plug-in).

I just wanted to share some concerns, which I did :-)

Have a nice day


>>> de-democratize the net

> Why is that? (You probably haven't read the specs)

Maybe I am misunderstanding your point. You are referring to content providers possibly milking deals with ISPs to prioritize their packets over other types of traffic? I guess I personally don't see all packets as equal. I don't support packet forgery, but even on my router I have QoS set to give BitTorrent traffic the lowest priority, with certain ports like HTTP and the ports I use for work at top priority.
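For the curious, that kind of home-router QoS policy can be sketched with Linux `tc` roughly like this. The interface name, rates, and the BitTorrent port are all assumptions for illustration, not a recommended config:

```shell
# HTB tree: unclassified traffic falls into the default class 1:20
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1:  classid 1:1  htb rate 1mbit
# High priority: HTTP / work ports
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 800kbit ceil 1mbit prio 0
# Default traffic
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 100kbit ceil 1mbit prio 5
# BitTorrent: lowest priority, can still borrow idle bandwidth (ceil)
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 100kbit ceil 1mbit prio 7
# Classify by port (6881 as an assumed BitTorrent port)
tc filter add dev eth0 parent 1: protocol ip u32 match ip dport 80 0xffff flowid 1:10
tc filter add dev eth0 parent 1: protocol ip u32 match ip sport 6881 0xffff flowid 1:30
```

With HTB, the low-priority class still gets its guaranteed rate and can borrow up to the ceiling when the link is idle, so BitTorrent isn't starved, just deprioritized.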

> with 30 peers uploading with 20kB (with all the overhead)

Yeah, the ISP equipment would have to remove the upload cap for traffic to local IPs. Not sure if that's even possible, since I'm not familiar with DSL modems and copper wiring. Others on this forum have stated that having a local peer doesn't necessarily mean increased speeds. ISPs want to use their bandwidth efficiently, while BitTorrent users want good performance. I would support using local peers if it meant at least equal or better performance than using a remote peer. If the upload cap is bound by DSL hardware or wiring, the cache server would sure come in handy! :)


  • 2 months later...

I don't know why this hasn't been made for uTorrent yet... I'm betting ludde would have done it. It's time P2P moved a single step towards working with ISPs, and I don't mean getting into bed with the big players, as is happening in P4P. Mark my words, P4P is going to be dragged out for ages before any implementation... and the requirement for an iTracker pretty much centralizes the network and makes ISPs complicit in copyright infringement. Anyway, Ono is out there, it's free and open source... and since it's written in Java, uTorrent would need its own C++ port.

Now, I understand that it doesn't increase speed, but it does reduce transit costs for ISPs while maintaining the status quo. If it is cheaper for ISPs, then they can halt their roll-out of DPI machines and throttlers. P4P sounds like a gimmick, and BitTorrent Inc. should get out as soon as possible, as ISPs just love to use it as an excuse to continue their throttling schemes, claiming innovation is still happening even at the cost of killing the BT protocol (e.g. Bell Canada in its latest filing to the CRTC).

I'm bringing this topic back because when I searched for Ono, one topic was trashed for "not improving speed", and this one was left unanswered and swept under the rug. This reduces ISPs' costs, and it levels the playing field by taking some of the burden that CDN networks (i.e. BitTorrent Inc.) place on the carriers. Since it's at the application level, it is easier to integrate, maintain, and upgrade... it does not need ISP involvement, leaving them out of illicit activities. At the end of the day, P4P will be branded as a way of not transporting illegal files while at the same time improving P2P. Let's be honest: that argument is about as good as raising the budget to fight AIDS in third-world countries by increasing the money going to abstinence programs rather than retrovirals and condoms.

My 2cents,


> EACH time this is brought up that it's not feasible...

Perhaps you are confusing previous discussions of "Nearest Neighbor" with Ono. The issue previously was determining what a "nearest neighbor" even was. But with Ono, you have a third-party CDN (Akamai) that has been providing distributed content delivery as a service for many years.

Using the key assumption that two computers directed to the same CDN server are likely close to each other, Ono allows P2P users to quickly identify nearby peers.

This seems very easy to implement to me. Communicate with your peer: oh, we resolve to the same Akamai server? Then we are neighbors. Let's connect! Some BitTorrent users pay for out-of-network bandwidth, so a feature like this would give them access to much more bandwidth per month. Maybe we'll see a feature like this adopted if more users are affected by monthly bandwidth limits (like cell phone companies do with "minutes").
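The idea can be sketched in a few lines: resolve some CDN-hosted hostnames, exchange the resulting edge-server IPs with a peer, and compare the overlap. The hostnames, the overlap threshold, and the exchange mechanism below are all placeholders for illustration, not Ono's actual internals:

```python
import socket

# Placeholder CDN-hosted names (Ono probes names served by Akamai;
# these are NOT the plugin's real probe set).
CDN_NAMES = ["e1.example-cdn.net", "e2.example-cdn.net"]

def cdn_fingerprint(names):
    """Resolve each CDN name and collect the edge-server IPs we are
    directed to. Peers behind nearby resolvers tend to be sent to
    overlapping sets of edge servers."""
    ips = set()
    for name in names:
        try:
            for info in socket.getaddrinfo(name, 80):
                ips.add(info[4][0])
        except socket.gaierror:
            pass  # name unresolvable from here; skip it
    return ips

def likely_neighbors(mine, theirs, threshold=0.5):
    """Jaccard overlap of two fingerprints; above the threshold,
    treat the peer as 'near'."""
    if not mine or not theirs:
        return False
    overlap = len(mine & theirs) / len(mine | theirs)
    return overlap >= threshold
```

In practice the two peers would swap their fingerprints over the existing BitTorrent connection and only then decide whether to prefer each other.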


You are quite right in that assumption... it still sounds like "passing the buck", as it were. Looking at previous design choices and implementations aimed at making things easier for the end user, though, it doesn't sound like they'd include any sort of service offloading. Imagine the havoc when (inevitably) Akamai goes down? (Remember when Google was down for several HOURS... some people felt like Chicken Little :P)

Certainly IF more and more ISPs follow suit, AND if more developers of related file-transfer and/or peer-to-peer projects feel compelled, it's possible they can come up with something even more robust... it's all about cost-benefit analysis. I'd say relying on an external source for data is still too sketchy for something that will undoubtedly be used by millions. I trust in the developers... and when that fails, I look to their helpers and support staff. They haven't made any bad moves thus far in the development cycle(s) since the acquisition. They've worked with other developers before and will likely do so again. Perhaps this type of problem can be addressed adequately at that time.


Restating the obvious is what I do best, so here goes...

We're rehashing this thread:


Why wouldn't "we" connect even if we're not neighbors?

...At least till we both near our connection max per torrent?

Too many torrents have few if ANY seeds/peers on them.

If you want to download them at all, you don't have a choice of local or not.

Webseed support is already implemented in uTorrent v1.8.

Webseeds are almost the OPPOSITE of local peers -- they should only be called on as a last resort for pieces of the torrent, yet there's not really any support in uTorrent yet to conserve their bandwidth. If a webseed is available, it gets constant attention from any uTorrent peer with spare download bandwidth that finds it. In short, its upload will likely stay maxed out till it's taken down.

Initial seeding also has utilization issues. Peers using sequential downloading will often ignore it. Other peers will download a piece and, for whatever reason, not share it with other peers for a long time, if ever. And then there are the peers that don't even report their percentage complete or their available-for-sharing pieces... so the initial seeder has to guess what they have.

And all that won't matter if BitTorrent works with fewer and fewer ISPs, or only at severely restricted (read: dial-up!) speeds. Defeating the various crippling measures ISPs are using has tied up considerable energies of BitTorrent designers. If those crippling measures also affect local peers... spending extra bandwidth to find local peers while becoming more visible to traffic-monitoring equipment seems a double waste of time. :(

We don't need symmetric download/upload connections, or even fast connections, for BitTorrent to work just fine as it is. ISPs' file-sharing bandwidth problems are compounded when they give customers more bandwidth. Given the chance, uTorrent will max out the upload side of any connection short of a 1 gigabit/second line with just a single torrent of 100+ peers. If anything, local peers might only allow it to max out sooner. :P

Many of the ISPs' bandwidth problems are in the last mile, especially for cable. Local peers don't buy them any more last-mile bandwidth. Max-range ADSL lines are rate-limited by the laws of physics plus the transmission method; getting any more speed into or out of them is like squeezing blood from a stone. WANs with incredible internal speeds but low internet speeds are really the only place where local peers shine. So I'd think a first-step goal would be identifying whether you're on such a WAN.


The argument, though, is always about the last mile... the ISPs have certainly failed at sustaining healthy network growth. But their claim was that protocols such as BitTorrent put too much stress on their networks, as connections predominantly go external even when local traffic is available. For example: I am downloading a file where two other connections are available (hypothetically I can only connect to one). Connection A is local (same ISP, or an ISP peering partner)... and connection B is a continent away. Currently BT connects to whichever, without considering which might be cheaper for the ISP. A setup like Ono at least gives that criterion a chance: connection A should be preferred when both can offer the same.

The thing is, with BT the speed is there, but it does not do much to play fair... ISPs are not dumb pipes; think about it. They sell based on the 95th percentile, and in the end BT's cavalier use of resources will destroy it, as we can slowly see the mood shifting. Now the greatest opportunity for the protocol is presenting itself: the chance to make it smarter and fairer... speed is not the game anymore, it's the reward for smart networks. Most major cities have ISP peering points... Akamai is situated at these points, so it just makes sense that if I'm close to Akamai server X and so is my connected peer, then we might be on the same ISP, or on different ISPs that peer locally, reducing transit costs. If transit costs are reduced, ISPs can save money and look the other way on BT traffic rather than using it as the scapegoat for all their woes. I understand the last mile is the issue, especially in the cable sector, but at least encouraging local connections is a step forward and reduces stress on border gateways.

Now to debunk the three major concerns.

1) It doesn't speed you up?

It does or it doesn't, but it certainly keeps the status quo. Put simply, it won't slow you down if it is implemented with the old peer-selection algorithm as a fallback. And this shouldn't be implemented with a speed booster in mind, but rather as the last piece of the BT protocol equation... the fairness element. The goal is to reduce the burden on ISPs, remember, not to go faster than fast.

2) There wouldn't be enough local peers on a file, especially a rare file.

True, and especially true if the ISP in question is small and does not peer with other local ISPs. You could be the only peer from Malaysia in a 30-peer torrent, leaving you isolated. But that would only be the case if BT had a rigid implementation of a nearest-neighbor algorithm. I'm not all too familiar with how rigid Ono is, but I know how flexible an implementation could be. The way I envision it is a set of ranked virtual layers: Layer 1 is preferred connections; Layer 2 is average connections, falling within the average connection closeness; and Layer 3 is poor connections, falling too far below the average. With all peers sorted into layers, connections cascade down the layers based on the current peer-selection algorithm, always starting with attempts to connect to Layer 1. One way of doing it might be through DHT: ping said CDNs, or even Google (Google peers with many local internet exchanges), for a position point, then follow the above concept to connect with peers with similar points of origin; DHT could be used to exchange these position points.
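The layered cascade described above can be sketched as follows. The `Peer` class, the closeness score, and the layer thresholds are invented for illustration; the closeness metric itself would come from something external like a CDN fingerprint:

```python
from dataclasses import dataclass

@dataclass
class Peer:
    addr: str
    closeness: float  # 0.0 = far away, 1.0 = very close

def layer_of(peer, near=0.66, far=0.33):
    """Bucket a peer into one of the three virtual layers."""
    if peer.closeness >= near:
        return 1  # preferred: likely same ISP or a local peering partner
    if peer.closeness >= far:
        return 2  # average closeness
    return 3      # distant: only used as a fallback

def pick_peers(peers, want):
    """Cascade down the layers 1 -> 2 -> 3, filling connection slots
    from the closest layer first, as the post proposes."""
    chosen = []
    for layer in (1, 2, 3):
        candidates = sorted((p for p in peers if layer_of(p) == layer),
                            key=lambda p: -p.closeness)
        for p in candidates:
            if len(chosen) == want:
                return chosen
            chosen.append(p)
    return chosen
```

If Layer 1 runs dry (the lone-peer-in-Malaysia case), the loop simply falls through to Layers 2 and 3, which is exactly the non-rigid behavior being argued for.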

3) What happens if the CDNs go down?

Firstly, BT is resilient, and this wouldn't be a single-point-of-failure type of system... if it fails, the fallback is always the current peer selection, sans the nearest neighbor.

This way ISPs are motivated to build out their aggregation networks (the last mile) as opposed to their backbone (external IP transit). It would also encourage them to peer locally, improving latency in other applications such as games.

Lastly, I didn't intend to troll; I actually wanted to spark debate... too often ideas are killed before people understand the concept. One-worders like "not speedy" malign the topic and throw not-well-thought-out input into the mix. BT is ever evolving, and every idea deserves input... and then P4P comes into the mix. Every ISP is touting P4P as game-changing and there for the masses... hmmm, there doesn't seem to be a client I can use to download a torrent over P4P, so what is the hype about? This is one of those initiatives that will remain forever on paper, and I don't think all parties are serious... when is the ETA? Simple question, simple answer needed. Since BitTorrent Inc. is involved on paper, could they provide more detail than fluff... when can we, the users, use it in the wild... and what level of involvement are you guys at? Is this commercialized P2P?... P4P just sounds too glitzy and made-for-TV for my liking.


In case anyone is not clear on the concept of "peering" - please read this wiki article on peering.

MoJo, your points make sense to me. All things being equal, I'd want to prioritize connecting to a neighbor (as defined by the Akamai CDN) over another IP. It does no harm to my speeds, and I see it as an olive branch to the ISPs. Somebody has to take the first step... BitTorrent development seems reactive in this area, and it'd be great to see some proactivity here!


I think finding local peers is possible, although probably not with 100% reliability...after all, some of that is already implemented with "Local Peer Discovery".

So the question becomes: what to do ONCE both external and local peers are found?

Obviously local peers should be preferred.

Upload slots would have to be revamped, with at least one dedicated to local peer(s)... and any extra upload slots going to local AND external peers, probably at an automatically reduced rate (low vs. medium vs. high priority, just inside each torrent swarm).

Maximum connections per torrent and the global max connections should probably be rethought and reduced. Speed Guide (CTRL+G) gives wildly optimistic numbers in that regard. xx/768k, for instance, is probably getting more common, so more and more people will potentially have 80-450 connections at once -- with uTorrent trying to extract some "value" out of each of them. Even my "conservative" uTorrent settings (2nd link in my signature) really aren't very conservative for 1+ megabit/second upload connections!

A possible solution to the global max connections needing to be almost (max active torrents) x (max connections per torrent) would be creating a minimum connections per torrent... thus reserving anywhere from 1 to maybe 20 connections per torrent (configurable, of course). That way, busy torrents couldn't hog all the global connections and starve out the other torrents.

Then global connections max could be reduced to:

(max active torrents) x (min connections per torrent) + (max active connections per torrent - min connections per torrent)
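Worked through with made-up example settings (the numbers below are purely illustrative, not recommendations), the formula stays far below the naive (max active torrents) x (max connections per torrent) product:

```python
# Hypothetical settings: 5 active torrents, a reserved minimum of 10
# connections each, a per-torrent maximum of 100 connections.
max_active_torrents = 5
min_conns_per_torrent = 10
max_conns_per_torrent = 100

# Proposed global cap: reserve the minimum for every torrent, plus
# headroom for one torrent to reach its per-torrent maximum.
global_max = (max_active_torrents * min_conns_per_torrent
              + (max_conns_per_torrent - min_conns_per_torrent))
print(global_max)  # 140

# versus the naive product the post is trying to avoid:
print(max_active_torrents * max_conns_per_torrent)  # 500
```

So in this example the busy torrent can still use its full 100 connections while every other torrent keeps its reserved 10.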

Once max connections per torrent is reached, no half-open connections should be made for that torrent... and "inactive" peers/seeds should be disconnected, especially external ones. Or, for regular seeding, no half-open connections should be made for a torrent once MIN connections per torrent has been reached! This is because seeding really doesn't need more than about double the upload-slot max in peers to work efficiently; any extras beyond the upload-slot max just ensure that each peer is active and will download at a reasonable rate. Once again, local peers should be preferred, but not to the point that ALL external peers are neglected.

A serious danger in these ideas is fragmentation of a torrent swarm into tiny domains that are unaware of each other. Each peer/seed would have fewer connections, and on top of that would not make new connections unless considerably below max connections, while spending most of its bandwidth on local peers. New peers would have a harder time getting a foot in the door, as existing peers/seeds would have little incentive to upload to an external peer. If too many peers/seeds are firewalled, this fails much more easily than current BitTorrent swarms do. Peer Exchange and DHT would vastly reduce the chances of failure, but they're not allowed on private torrents... which probably won't have significant numbers of local peers anyway.


There's NO connection between the programmers of uTorrent and the P4P crew to my knowledge.

There is almost NO chance of any P4P features getting incorporated into uTorrent... certainly not in the next 6 months at least.

The Local Peer Discovery mechanism in uTorrent may still be in flux, so I'd expect at least minor improvements to LPD over time. However, local peers and external peers won't be treated as either/or as far as bandwidth allocation goes... each will be downloaded from and uploaded to at whatever speed Windows networking cares to manage. If local peers really DO have a larger pipe with lower latency to you, then they will still end up heavily preferred that way.



This topic is now archived and is closed to further replies.
