Archived

This topic is now archived and is closed to further replies.

dapkor

Improving Traffic Locality via Biased Neighbor Selection

Recommended Posts

Your ISP is probably forbidden from caching popular content if that content is a copyright violation. Caching might save them money on bandwidth, but it would vastly increase their legal liability.

If they made their local network(s) more identifiable, giving p2p programs some way of recognizing truly local peers, there would be no such legal liability, and the bandwidth savings might still be substantial for popular torrents.

But working from outside the local-network black box makes determining its contents difficult indeed. A one-time test of whether any peers/seeds can download/upload faster than normal limits is the simplest way to identify local peers/seeds... but it's guaranteed to cause bandwidth spikes and packet loss.

----------------
> Your ISP is probably forbidden from caching popular content if that content is a copyright violation. Caching might save them money on bandwidth, but it would vastly increase their legal liability.

Actually, I don't quite see how that would be any more illegal than transparent web proxies proxying some illegal content. As long as the ISP does not know about it, it's not illegal in most jurisdictions (even the US, with the safe-harbor clause in the DMCA).

So it's more a design question... e.g. a peer cache could store data on a per-piece basis and not know about the infohash of a torrent at all; that way it's hard for anyone to tell whether the cached content is legal or not.
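A minimal sketch of that design, with an invented class and API (no real cache works exactly like this): the store is keyed only by the SHA-1 of each piece's bytes, so it never sees an infohash.

```python
import hashlib

class PieceCache:
    """Hypothetical ISP-side cache keyed by the SHA-1 of each piece's
    contents. The cache never learns which torrent (infohash) a piece
    belongs to, so it cannot tell legal content from infringing content."""

    def __init__(self):
        self._store = {}  # hex piece hash -> raw piece bytes

    def put(self, piece: bytes) -> str:
        key = hashlib.sha1(piece).hexdigest()
        self._store[key] = piece
        return key

    def get(self, piece_hash: str):
        return self._store.get(piece_hash)

cache = PieceCache()
key = cache.put(b"raw piece data from some swarm")
assert cache.get(key) == b"raw piece data from some swarm"
```

Since BitTorrent already verifies every piece against a SHA-1 hash, peers could plausibly address cached pieces by a hash they already know.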

----------------

Ogden:

I'd LOVE to explain this to you and everyone. It's a story begging to be told.

Chapter 1: P4P: Let's Reinvent the Wheel

- Your instincts are correct. All things being equal, the closer the peer, the better the performance.

- The reason for the above is reduced latency: the lower the latency, the higher the possible throughput.

- Latency knows nothing about ISPs

- The more work a router has to do before routing a packet, the higher the latency

- The more routers in the path, the higher the latency.

- IMPORTANT: BitTorrent pairs up good uploaders with matching good uploaders. <-- but some ISPs broke this feature ....

- ISPs have been increasing the number of routers within their local metropolitan loops in order to do packet inspection

- Additional and "smart" devices inserted between end points add latency = effectively increased distance = reduced performance --- Soooooo

- **** KEY POINT 1 **** Because of all the in-network crap being added lately by ISPs, BitTorrent's Choke and Optimistic Unchoke protocol began to prefer off-net peers because they had lower latency than on-net peers

- **** KEY POINT 2 **** Not understanding all of the above resulted in the recent P4P efforts to keep traffic on-net. If they just canned all of the extra "smart" and packet-inspecting, QoS-providing devices, the percentage of on-net traffic would increase simply due to BitTorrent's bias for good reciprocators
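The latency point in Chapter 1 can be made concrete with the standard back-of-the-envelope bound: a single TCP connection can have at most one window of data in flight per round trip, so throughput is capped at roughly window/RTT. The window size and RTT values below are illustrative, not measurements.

```python
def max_tcp_throughput_kBps(window_bytes: int, rtt_ms: float) -> float:
    """Rough single-connection ceiling: one window per round trip."""
    return (window_bytes / (rtt_ms / 1000.0)) / 1024.0

# With a 64 KiB window, an on-net peer whose path crosses extra
# "smart" devices (40 ms RTT) is capped lower than an off-net peer
# reached over a cleaner 20 ms path -- so the unchoke algorithm
# ends up preferring the off-net peer.
print(max_tcp_throughput_kBps(65536, 40))  # 1600.0 KB/s
print(max_tcp_throughput_kBps(65536, 20))  # 3200.0 KB/s
```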

Chapter 2: Not all things are equal

- Peers are notoriously misconfigured -- each upload slot should deliver ~5 KB/s (some say 3, some say 7)

- Most Cable Internet subscribers should not run more than 2 torrents simultaneously due to their small uplink rates, but they do anyway for several reasons

- Private trackers have created additional incentives to divert upload capacity away from download tasks by stressing a less-than-useful 1:1 sharing expectation

- Windows XP and earlier windows clients were tuned for much lower throughput rates

- IO intensive activity in the foreground causes background tasks to slow

- *** KEY POINT: The closest peers may not be the best performing peers due to these and other problems ***
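The slot-tuning point above can be sketched as a rule of thumb (the ~5 KB/s per slot figure is this thread's estimate, not an official µTorrent constant):

```python
def suggested_upload_slots(upload_kBps: float, per_slot_kBps: float = 5.0) -> int:
    """Divide upload capacity so each slot gets ~5 KB/s (estimates in
    the thread range from 3 to 7); always keep at least one slot."""
    return max(1, int(upload_kBps // per_slot_kBps))

print(suggested_upload_slots(40))  # 8 -- a healthy cable uplink
print(suggested_upload_slots(12))  # 2 -- why slow uplinks shouldn't run many torrents
```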

SO, in a perfect world, where all the local peers shared the same gateway with no "smart" routers in between, every peer had a correctly tuned TCP/IP stack and a correctly tuned BitTorrent client, and CPUs were relatively free of load...

THEN

uTorrent and other BitTorrent applications would likely match the performance advertised by the P4PWG. (I say advertised because it's marketing so far -- their tests are not reproducible).

PS: Chapter 3 -- Cable TV Internet providers share the same ASN -- but are deployed per metropolitan area. Neighbor selection works by looking up the ASN for the ISP. For P4P to work for cable, each metro area would have to get its own ASN, because each backbone provider connects in each metro area. E.g. Comcast doesn't meet Level 3 at just one gateway -- it meets it in most of its metro areas. By ASN, a cable Internet provider looks like one ISP. By routing, it's numerous small metropolitan ones.
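The metro-area problem can be seen in a toy example. The peers, prefixes, and metro labels below are made up (AS 7922 and AS 3356 are Comcast's and Level 3's real ASNs, used here only for illustration): grouping by ASN lumps every Comcast metro together, even though their traffic never shares a metropolitan loop.

```python
from collections import defaultdict

# (ip, asn, metro) -- illustrative data, not real measurements
peers = [
    ("24.16.1.10", 7922, "Seattle"),
    ("24.16.2.20", 7922, "Seattle"),
    ("68.32.5.30", 7922, "Miami"),
    ("4.2.2.1",    3356, "Denver"),
]

by_asn = defaultdict(list)
for ip, asn, metro in peers:
    by_asn[asn].append(metro)

# One ASN, two metros: "local by ASN" is not "local by routing".
print(by_asn[7922])  # ['Seattle', 'Seattle', 'Miami']
```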

----------------
> The reason for the above is reduced latency: the lower the latency, the higher the possible throughput.

A common misconception -- let me counter it with a famous network-engineering quote:

"Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway." - Andrew S. Tanenbaum

----------------

Yes, it would be wild to have such a service. This would and should be the solution, but hell -- too many files. So let's just work on local peers at the start, with help from the ISP to shape traffic so it passes through fewer nodes.

Traffic is like driving a car: real fast, but to go 1000 miles you need more money than to go 10 miles.

So can anyone tell me what BitTorrent is up to now with P4P?

----------------
> Windows XP and earlier windows clients were tuned for much lower throughput rates

It's actually Windows Vista that's the current disaster when trying to max out Gigabit Ethernet cards... at least while almost anything else is going on. :lol:

The earlier versions of Windows were optimized for high-speed (100 mbps), low-latency (0-2 ms) LAN connections, not 1-10 mbps, 100+ ms internet connections. This may seem to say the same thing as the quote, but it's predominantly the higher latency rather than the max bandwidth that's the killer. Earlier Windows versions could do dial-up just fine even with its crappy-high latency, because dial-up was so dog slow at best anyway. ISDN wasn't a problem either, because it often had surprisingly good latency (for low-bandwidth loads) for something as slow as it is.

The long round-trip delays between data and ACK packets prevented fully utilizing "fast" internet connections on Win 9x/ME. Back in the day, my Win 98SE OS (unoptimized!) could only download around 80-120 KB/sec on a cable line rated for ~3 mbps of bandwidth. My end couldn't respond quickly enough to keep the other end sending at full speed -- a pipelining issue.

But to make the most of local peers would require even Win XP to be tweaked for higher-latency, high-bandwidth connections. Customers' modems may be running specialized first-line-of-defense packet-shaping/QoS software, which certainly doesn't do latency any favors. This is also partially why some modems act as mini-routers/firewalls. Also mentioned here:

http://forum.utorrent.com/viewtopic.php?pid=248296#p248296

Then there are ISPs (Rogers Cable in Canada, I mean YOU!) that use lots of line repeaters to stretch their lines to maximum length and maximize customers per node... lines that have both higher latency due to the extra length and extremely high contention ratios. There are other limitations in the networking technologies ISPs use. Cable is limited to a max bandwidth per trunk line. ADSL has low individual upload speeds. There may even be packets-per-second issues with cable due to its daisy-chain-style network, which limits it to one "speaker" at a time.

It is the high contention ratio (customers per mbps of real internet bandwidth) that makes local peers so desirable. ISPs have created this disaster mostly because of the extremely expensive internet lines (T-1s and faster) THEY have to lease from the telephone companies. The larger an ISP can make its local network, the more bandwidth it can route internally without passing through the limited internet gateways. But since ISPs make their internal networks opaque (almost by necessity) and are rather secretive about their methods... designing file-sharing software to take advantage of this "free" bandwidth is going to prove difficult. One size won't fit all... ADSL strategies may not work as well with cable... or wireless LANs... or satellite... or [God forbid!] dial-up!

----------------

At this rate, it'll be faster and possibly cheaper to trade bandwidth using disc media via postal mail service.

----------------

well, my 2 cents are in here ...

http://forum.utorrent.com/viewtopic.php?pid=317192#p317192

We have a problem here with P2P download capping: it is applied to all P2P traffic to overseas locations. Would it be possible to implement in µTorrent something like a reverse IP filter, so that the IP ranges in it are preferred over others? It would contain, say, the IP ranges of the ISP, and µTorrent would try to communicate with those peers first. Something of this sort was already done for detected "local" peers (local to your LAN), with a special speed limit for them. In fact those are also "local" in the sense that they are within your ISP's network.

Another possibility (and maybe a better one, since it can be more automated): have this list also include country code(s), or even just a check-box saying "prefer same-country peers".

How about it?
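A sketch of what such a reverse IP filter could look like inside a client. The range and function names are invented for illustration; µTorrent has no such option.

```python
import ipaddress

# Ranges the user wants preferred -- e.g. their ISP's block (made up here).
PREFERRED = [ipaddress.ip_network("66.66.0.0/16")]

def is_preferred(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in PREFERRED)

def order_peers(peers):
    """Stable sort: ISP-local peers first, everyone else after,
    without dropping anybody (unlike a blocking ipfilter.dat)."""
    return sorted(peers, key=lambda ip: not is_preferred(ip))

print(order_peers(["8.8.8.8", "66.66.1.2", "91.198.0.1", "66.66.9.9"]))
# ['66.66.1.2', '66.66.9.9', '8.8.8.8', '91.198.0.1']
```

Preferring rather than blocking keeps the swarm intact: remote peers are still reachable when no local ones have the pieces you need.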

----------------

@ hermanm, lol :D See Firon's analogy heh

Even with Comcast's affirmation (albeit in PR only thus far) that they will find alternatives to current traffic-interruption techniques, the problem remains: if one software vendor wants to supply this without a KNOWN standard for identifying these peers, it would have to be implemented per ISP. In the grand scheme of things that is infeasible.

The reverse ipfilter can work in your circumstance, but how many people do you think would really be aware enough to take advantage of it? Surely those in Australia, New Zealand, and Malaysia -- which have horrid connection costs and connectivity issues outside their local area -- are more apt to know about their "local subnets", and the ISPs there may even be willing to come up with a "standard" for determining this... but the fact is that ISPs in general have not reached the tipping point where it makes better sense to work with those facilitating the bandwidth. I'm sure that if Comcast's desire to get help from BitTorrent, Inc. pans out, it may spur other ISPs into forming a committee to help this along, but as of right now I will wait until I see it.

----------------

I was trying to reply to one poster who mentioned XP as being initially bad with µTorrent speeds. I think I've read that it can be fixed... perhaps there are registry tweaks (.reg files) for it. Is that true, and if so, where are they typically found?

More to the point: I've been running Win98SE for a very long time and like it. µTorrent runs as fast as I could possibly want with no special settings. I have plenty of computers and wonder if it's just plain simpler overall to run my torrent downloads/uploads on a Win98SE computer instead. See the comment below about excess power use. Could someone comment on this?

Note for those who, like me, are concerned about excessive power usage: I suppose a laptop or another selected low-power computer could be used for µTorrent downloads/uploads. It does not take a lot of processing power to run µTorrent. I don't know the minimum configuration, but I would think that maybe an AMD K6-500 CPU running Win98SE with 512 MB of RAM, or maybe even 256 MB, might work OK. This setup with a laptop should be under 60 watts total. Networking would give plenty of disk space, and external drives make moving data easy now.

If there is a thread on this I'd like to move this comment there. If not, well, I suppose we can still comment on what I said, eh?

Thank you.

Andy


----------------

It's better to use Windows 2000 instead of 98. 9x has serious limitations in the number of connections it can make (among other things).

----------------

well, it's ~450 max as I remember, and the µTorrent Speed Guide advises less than that in most cases... so leave 98 alone... ;)

ps: btw, did you convince alus to try to improve the new RSS implementation/UI? ...

----------------

I was going to request this feature but found this topic. It's probably not worth changing the default behavior now, but I believe this could become quite a popular option. It's not as simple as LPD, but we always have both basic and advanced settings..

----------------

I am trying to learn something in this topic, but the one person who seems to know everything is posting in such a clipped and truncated manner that I cannot learn half as much from that source as I have from everyone else posting here.

example: (from jewelisheaven)

"If you do not get it, and reading the arguments and thoughts against it by experts in this area don't change your mind, you will feel as though you are hitting your head against a brick wall by continuing to push the issue. If you wish to become a purveyor of improving this technology feel free to reference the documentation BT@Theory.org and Official repository and come up with your own code and submit it."

I assume the experts mentioned are him/herself, because everyone else was just bouncing ideas and observations around. I would like to learn more, instead of essentially being told "No, it can't happen, you stupid idiot." No offense.

----------------

The "experts" he refers to would more likely be The8472 and DreadWingKnight (and probably a few others). It's not that it can't happen -- it's that preferring local peers won't necessarily improve speeds as purported.

----------------

I never heard a no. I heard a "not until ISPs decide to stop crippling traffic and instead help facilitate (they don't even need to PROVIDE a solution) solutions which would actually be applicable to more than one ISP at a time". Man, that sounds like some WG :/ The obvious problem is that many network operators specifically don't want to do this because it requires "giving up sensitive information", when in fact they're shooting themselves in the foot.

I reference experts because they've spent a good deal of time working on these specific problems... http://wiki.depthstrike.com/ and http://www.azureuswiki.com/index.php/User:The8472 have information on/from the two people Ultima mentions. Anyone who shoots their mouth off without thinking, or without actually knowing what they're talking about, deserves little respect or thought. @atlantisisdead, you want to learn? Great! Read and learn a little every day about what you're passionate about. No offense taken. ;)

----------------

You will probably have to wait until µTorrent evolves into a multi-NIC/multi-WAN client, and that's going to be LOTS of code changes away. Although you're probably connecting through the same NIC or WAN connection either way, each subnet you want different speeds on has to be treated as a different network, programming-wise. Even the issue of upload slots PER torrent could get crazy -- local connections could sit idle while all upload slots are tied up on the more numerous internet IPs! Also, are internet speeds part of your local speeds or separate? (If you had 20 mbps local and 1 mbps internet, does running at 1 mbps internet mean you only effectively have 19 mbps local?)

You could try running multiple µTorrent clients with elaborate ipfilter.dat files so you can treat each subnet differently, but that will be sub-optimal, as many trackers refuse to talk to multiple clients on the same IP (but different ports). You'd also have to make sure your trackers AREN'T blocked by ipfilter.dat!

Also, if you're not firewalled, then even with correct ipfilter.dat lists you'll be "hit" by many duplicate connection attempts on ALL your running clients. If firewalled, you won't see them at the client level, but they'll still be hitting the firewall.

What you ask is nowhere near as simple as it first sounds. Even a 2-tier fast/slow setup with a "LAN" speed (where you can define the "LAN" ranges) and an internet speed would be a big improvement over what we have. Even with what's already in µTorrent concerning local peer discovery, that's going to be a code project and a half.
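The 2-tier setup could be sketched like this. The ranges and limits are invented; this is not µTorrent's actual rate limiter, just the shape of the feature.

```python
import ipaddress

# User-defined "LAN" ranges: the real LAN plus the ISP's block (made up).
LAN_RANGES = [ipaddress.ip_network("10.0.0.0/8"),
              ipaddress.ip_network("85.30.0.0/16")]
LAN_CAP_KBPS = 2000   # fast tier
WAN_CAP_KBPS = 50     # slow tier for everything else

def upload_cap_kBps(peer_ip: str) -> int:
    """Pick the per-peer upload cap by which tier the peer falls in."""
    addr = ipaddress.ip_address(peer_ip)
    if any(addr in net for net in LAN_RANGES):
        return LAN_CAP_KBPS
    return WAN_CAP_KBPS

print(upload_cap_kBps("85.30.4.7"))    # 2000 -- treated as "LAN"
print(upload_cap_kBps("203.0.113.9"))  # 50 -- ordinary internet peer
```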

----------------
> Locality traffic advantages (from the PopkinPasko presentation, linked on the Wikipedia P4P page): [...]
>
> - Decrease incentives for ISPs to "manage" P2P traffic

Not always true.

> Cross-ISP traffic = costly, intra-ISP traffic = CHEAP like HELL

Not always true.

Some ISPs are quite open about their problems. DOCSIS network providers would prefer transit traffic over intra-AS traffic; other providers are the opposite. If you want to help ISPs, then what needs to be exposed is *policy*. This is distinct from locality. Policy allows ISPs to expose things like "please use this AS [because it is a private peering point over which we negotiated great terms]". Clearly your ISP would like to exploit (though not always reveal) beneficial terms. If you are cynical you might say even at the expense of user performance, but sometimes there is no performance cost in adhering to policy.

Performance and policy should be separate. Locality is sometimes right, sometimes wrong because it is not the right feature for either. When optimizing performance, the metric of interest to BitTorrent is throughput.

To the extent delay correlates with throughput, locality can be exploited, and it seems Ono has been somewhat successful at this. With RSS feeds, the same peers will appear from one swarm to the next, so longer-term history for these peers might be exploited. Without RSS feeds, rates for peers within an AS or IP prefix might correlate. These are research problems. Exposing policy is a standards problem.

What ISPs are willing to reveal, and how they reveal it, is an avenue that BitTorrent, Inc. is taking up in the IETF. If the community has proposals regarding either policy or performance optimizations, please submit them as BEPs to bittorrent.org.

--Dave

----------------

Why is it that we can't connect to peers based on internet connection speed? With my 20MB connection, I should be connecting with other 20MB connections; those with 56k connections would connect with other 56k peers. When we pay for our connections we expect the level of bandwidth we pay for. Can't everyone get all the bandwidth they pay for? Or is there really not enough of it to go around?

----------------

Because BitTorrent doesn't work that way, and doesn't need to. If you upload more, more users will recognize that you're doing your part in contributing, and in return they should prefer to upload to you rather than to peers that don't give them as much data.

That's why uploading is important in BitTorrent.

----------------
> With my 20MB connection I should be connecting with other 20MB connections.

Firstly, you don't have a 20-megaBYTE/second upload... or even a 20-megabit/second upload. You probably don't have more than 2.5 megabits/second of upload; it wouldn't surprise me if you barely have 0.5 megabits/second. By your own criteria, your connection wouldn't be "fit" to connect to 20MB connections.
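The bits-vs-bytes arithmetic behind that reply, spelled out (protocol overhead ignored):

```python
def advertised_mbps_to_MBps(mbps: float) -> float:
    """ISPs advertise megaBITS per second; clients display
    megaBYTES per second. 8 bits per byte, overhead ignored."""
    return mbps / 8.0

print(advertised_mbps_to_MBps(20))  # 2.5 -- a "20MB" line peaks at 2.5 MB/s down
print(advertised_mbps_to_MBps(2))   # 0.25 -- and the uplink is far smaller still
```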

You're quite likely also using horrible settings in uTorrent and getting mediocre results. :P

----------------

Well, you're probably right. I am a new torrent downloader, but I have worked the net thoroughly since '92.

I remember going to Usenet and piecing movie bits together. I had the latest and greatest processors, with ADSL and 3 computers on a home network at our retail location.

I owned and managed the 6th car-audio website online in the whole world, and after 4 years it had ranked in the top 1/4 of the top 1% of all e-commerce websites.

I am hoping to be a sponge for all this new info... why does configuring the firewall seem to be such a difficult thing to do?

Is there a place to rate the second and third download and/or upload speeds?

----------------

Hi, I just read this topic.

Often I download torrents with 1000+ peers, and later I realize that I'm not the only one from my country.

In Serbia there is one major internet provider, a cable provider. While I pay for internet bandwidth, I get "local" bandwidth for free. So if it were possible to choose local peers instead of all peers (or some of them), it would give me double the speed.

I'm not sure about other countries, but here (6M people; I don't know how many users) it works this way, and it will only grow, since this provider has a good network.

Since I can find out my provider's IP range, is it possible, simply put, to somehow tell µTorrent to choose peers from the 66.66.*.* IP range? Or is it going to be possible?

----------------

You can always block all other IP ranges via ipfilter.dat if you want. A feature to force µTorrent to connect only to peers in a certain IP range has been requested before, and given the past decisions made on peer preferencing, I tend to doubt it'll be implemented.
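For the crude blocking approach, an ipfilter.dat along these lines would leave only the 66.66.*.* range reachable. If I have the format right, each line is a blocked start-end range; the range itself is just the hypothetical ISP block from the question, so double-check both against your client's documentation before relying on this:

```
0.0.0.0 - 66.65.255.255
66.67.0.0 - 255.255.255.255
```

Note that blocking shrinks the swarm you can draw from, which is why a preference list would be the friendlier design.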

----------------