
Torrent Community: The Speed Problem


janzam


My fellow torrenters.

I'm not an expert in the torrent world, but I do have a Master's degree in Computing & Information Systems.

Many of you think there must be some forbidden technical know-how behind all these speed problems. I, on the other hand, believe it's fairly simple to understand.

Nowadays, the number of torrent users has grown into the millions, literally. Therefore, many people are going to be downloading the same file at the same time (peers, or leechers), especially new, popular releases such as TV shows, movies, etc.

Naturally, the more leechers competing for the same seeds, the smaller your share of the swarm's upload capacity is going to be.

Hypothetically speaking: if you are downloading Lost.S03E11.HDTV.XViD-NoTV.avi, for example, which has 15351 seeders but 39463 leechers, the ratio of downloaders to uploaders is drastically disproportionate.

Therefore, if you are trying to download a recently added (but popular) file, be patient, and do not worry about your pathetic speed of merely 10 kB/s. There is absolutely nothing wrong with your network connection settings, believe me.
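Just to sanity-check that with some arithmetic (a rough sketch -- the average per-seed upload rate below is purely an assumed figure, and it ignores leecher-to-leecher uploads, which would raise the real number somewhat):

```python
# Back-of-the-envelope share of swarm bandwidth per leecher.
# ASSUMPTION: the 40 kB/s average per-seed upload rate is invented
# for illustration; real rates vary wildly.
seeders = 15351
leechers = 39463
avg_seed_upload_kBps = 40.0

swarm_capacity = seeders * avg_seed_upload_kBps   # aggregate seed upload
per_leecher = swarm_capacity / leechers           # naive even split
print(f"{leechers / seeders:.2f} leechers per seeder")
print(f"Naive per-leecher share: {per_leecher:.1f} kB/s")
# -> ~15 kB/s: a "pathetic" 10 kB/s is roughly what the swarm math
#    predicts, not a sign of broken settings.
```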

In fact, you can verify that by downloading a file released a week or two ago, which should have fewer leechers per seed, so your speed should reach 300 kB/s AT LEAST!!!

Patience is a virtue, guys.

Give me feedback on what you think of my theory, even if you think it's complete bollocks!!! :)


The seed/peer counts reported by trackers are often inflated by "seen" IPs that have already logged off for good, and most of the rest are firewalled -- so you can't connect to them and can only HOPE they attempt to connect to you, if you aren't firewalled yourself!

Many of the seeds/peers are on slow ISPs that often throttle BitTorrent traffic further. They may be running multiple torrents, splitting their limited upload and download bandwidth even further. They may foolishly think setting their upload speed max to "unlimited" and upload slots to 10+ is actually better/kinder to other people. But to whoever they're uploading...it means each peer only receives <0.2 kB/sec...which can take a short eternity to deliver a complete 1-4 MB torrent piece.
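A quick sketch of that arithmetic (every number here is assumed for illustration -- actual line capacities and settings vary):

```python
# Why "unlimited" upload max + lots of slots backfires.
upload_capacity_kBps = 20.0    # kB/s actually usable -- assumed; an
                               # overloaded line often delivers even less
active_torrents = 5            # assumed
upload_slots_per_torrent = 10  # assumed "generous" setting

slots = active_torrents * upload_slots_per_torrent
per_slot = upload_capacity_kBps / slots   # kB/s each peer receives
piece_kB = 2 * 1024                       # a 2 MB torrent piece
print(f"Each peer gets ~{per_slot:.2f} kB/s")
print(f"One piece takes ~{piece_kB / per_slot / 60:.0f} minutes to arrive")
# -> ~0.4 kB/s per peer and over an hour per piece: slower for everyone
#    than serving fewer peers at a useful rate.
```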

Of the many who do try to configure their BitTorrent client for better results, almost all of them only care about download speed...and lower their upload speed thinking that will increase their download speed. Not only is this disastrous if everyone does it, it generally won't even give THEM better download speeds. The 6 kB/sec minimum upload speed needed to avoid the "Download Limited" message in µTorrent is probably one of the most commonly used upload settings among µTorrent users as a result...and it was one of the most bitterly argued-about "features" in µTorrent! (Really, getting 3-6 times your upload speed back in download speed is HARDLY what I'd call limited!)

Or they get confused by the difference between bandwidth measurements (often in kilobits/sec or megabits/sec) and file/torrent transfer speeds (which are in kiloBYTES/sec)...and set an upload speed max roughly 8 times faster than their line can really handle (and overload constantly), or 1/8th of what they could do.

Or they hear their connection's speed is "1 megabit/sec"...which really only describes their download bandwidth max, and assume it applies to their upload bandwidth too...when even the best ISPs only give about 1/4 as much upload as download. On Comcast, I currently have at least 6 megabits/sec download but only 384 kilobits/sec upload -- a ratio of 16:1!
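Here's a minimal conversion sketch using those Comcast figures (the helper function name is just for illustration):

```python
# Converting advertised line speeds (bits) to transfer speeds (bytes).
def mbit_to_kBps(mbit_per_s: float) -> float:
    """Megabits/sec -> kiloBYTES/sec (1 byte = 8 bits)."""
    return mbit_per_s * 1000 / 8

down_kBps = mbit_to_kBps(6.0)     # 6 megabit/s download
up_kBps = mbit_to_kBps(0.384)     # 384 kilobit/s upload
print(f"Download: {down_kBps:.0f} kB/s, upload: {up_kBps:.0f} kB/s")
print(f"Down:up ratio = {down_kBps / up_kBps:.0f}:1")
# -> 750 kB/s down vs 48 kB/s up. Mistaking kilobits for kilobytes when
#    setting an upload cap makes it 8x too high -- hence the overloading.
```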

Lots of people who get THIS far and have everything correct seem to think MORE connections are better...so why not hundreds of connections at once? Some bandwidth is needed just to STAY connected to an otherwise idle ip-to-ip connection. It's not much, but with hundreds of connections at once...it adds up! What's more, that many connections often crash, overload, or subtly slow down networking software and hardware. It's nearly the most common problem posted here: certain networking software or hardware simply CAN'T handle what everyone else takes for granted.
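To put rough numbers on the "it adds up" part, here's a sketch. The connection count, piece-completion rate, and per-packet header overhead are assumed/simplified, though the 4-byte keep-alive and 9-byte HAVE messages are real BitTorrent protocol sizes:

```python
# Rough idle-overhead estimate for many open connections.
connections = 400             # assumed "more is better" setting
keepalive_bytes = 4 + 40      # 4-byte keep-alive + ~40 bytes TCP/IP headers
keepalive_interval_s = 120    # keep-alives are typically sent every ~2 min
have_bytes = 9 + 40           # HAVE message + headers, sent to EVERY peer
pieces_per_min = 2            # assumed: you complete 2 pieces/minute

idle = connections * keepalive_bytes / keepalive_interval_s   # bytes/s
have = connections * have_bytes * pieces_per_min / 60         # bytes/s
print(f"Keep-alives: {idle:.0f} B/s, HAVE broadcasts: {have:.0f} B/s")
# -> under 1 kB/s total, which is the point: the bandwidth is survivable,
#    but 400 simultaneous TCP connections is what chokes cheap routers
#    and fragile network stacks.
```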

There's also the very poorly understood half-open connection max. People don't know what it is, so they raise that value in µTorrent from its default of 8 to as high as 1000! Microsoft lowered Win XP's half-open connection max from over 100 to only 10 with Win XP Service Pack 2. And Windows Vista Home edition supposedly sets the value to only 5!
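What the setting actually gates is the rate of NEW outgoing connection attempts, not established connections -- a sketch, with the connect timeout an assumed worst case:

```python
# Half-open cap -> ceiling on outgoing connection attempt rate.
half_open_max = 8        # µTorrent default
connect_timeout_s = 10   # assumed worst case: attempts to dead/firewalled
                         # peers sit "half-open" until they time out

worst_case_attempts_per_s = half_open_max / connect_timeout_s
print(f"Worst case: {worst_case_attempts_per_s:.1f} attempts/sec")
# -> even 0.8 attempts/sec reaches a swarm's worth of peers within
#    minutes, so raising the cap to 1000 mostly just slams into the OS
#    limit (10 on XP SP2) and gets the excess attempts queued.
```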

Few people even know what MTU is, let alone how severely it affects file-sharing programs if it's not set as high as possible...typically 1500, though for some A/DSL lines it must be as low as 1400.
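A small sketch of the packet-efficiency side (the 40-byte header figure is the no-options TCP/IPv4 minimum):

```python
# How MTU affects useful throughput: every packet carries ~40 bytes of
# TCP/IP headers regardless of size, so smaller packets waste more line.
HEADERS = 40  # 20 bytes IPv4 + 20 bytes TCP, minimum

for mtu in (1500, 1400, 576):
    payload = mtu - HEADERS
    print(f"MTU {mtu}: {payload / mtu:.1%} of each packet is payload")
# The real killer is a MISMATCHED setting, though: if the line's true MTU
# is 1400 but you send 1500-byte packets, every packet gets fragmented
# (or silently dropped on a broken path-MTU setup), roughly doubling
# packet counts or stalling transfers outright.
```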

Lastly, even among those who do everything above correctly, there are hit-and-run BitTorrent users. They download as FAST as they can, and the very moment they finish the file they stop that torrent...having uploaded maybe only 5-20% as much as they downloaded. This "slack" has to be picked up by someone, because if you're downloading faster than you're uploading...chances are, someone else *ISN'T*! Seeds are downloading at 0 kB/sec, for instance; but it's hard to convince some people to seed at all...let alone for hours on end after their download finishes!
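The "slack" argument is just conservation of bytes -- every byte downloaded was uploaded by someone, so the swarm-wide average share ratio is exactly 1. A sketch with an assumed hit-and-run ratio:

```python
# Who picks up the slack when leechers hit and run.
leechers = 100            # assumed swarm size
file_MB = 700             # assumed file size
hit_and_run_ratio = 0.2   # assumed: leechers leave after uploading 20%

downloaded = leechers * file_MB                  # MB the swarm consumes
uploaded_by_leechers = downloaded * hit_and_run_ratio
shortfall = downloaded - uploaded_by_leechers    # must come from seeds
print(f"Seeds must supply {shortfall:.0f} of {downloaded} MB "
      f"({shortfall / downloaded:.0%})")
# -> at ratio 0.2, seeds carry 80% of the load; if they leave too,
#    download speeds collapse to the leechers' collective upload rate.
```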

Then there are hacked and poorly written BitTorrent clients out there that try to get better download speeds for themselves by breaking the BitTorrent protocol rules. These generally prove disastrous for small torrent swarms, and cause a BIG download speed reduction for others even in larger swarms (with lots of seeds+peers).

Oh, and I forgot to mention the corporations dedicated to destroying file-sharing...they poison torrents and release fake ones. Their IPs are QUITE numerous and they are VERY aggressive about it!

The biggest wonder about file sharing is not that you can occasionally get good download speeds...it's that it ever works at all!


  • 1 month later...

Yeah, so here are some improvements that could be made to help rectify things:

Instrumentation.

Particularly (for remote hosts overall and updated per configured interval):

Connect min delay / intv [+ weighted avg (last 64 (or configurable x) intvs)]

Read min delay / intv [+ weighted avg (last 64 (or configurable x) intvs)]

Write min delay / intv [+ weighted avg (last 64 (or configurable x) intvs)]

Connect / intv [+ last interval (+ total)]

Read / intv [+ last interval (+ total)]

Write / intv [+ last interval (+ total)]

Connect failure / intv [+ last interval (+ total)]

Read failure / intv [+ last interval (+ total)]

Write failure / intv [+ last interval (+ total)]

BT protocol overhead vs payload data read [+ total] (this will indicate churn)

BT protocol overhead vs payload data written [+ total] (this will indicate churn)

BT lowest down rate per peer [+ highest] [+ amount that fall within the mean avg]

BT lowest up rate per peer [+ highest] [+ amount that fall within the mean avg]

BT peer starvation avg (some kind of average based off of how often you're choking)

BT peer starved avg (some kind of average based off of how often you're getting choked)

BT snub count [+ total]

All of this instrumentation data is not computationally or even memory intensive to keep -- even if you kept it for 1024 peers. Most of the delay data could be retrieved from WFMO() (WaitForMultipleObjects) and stored within each socket/connection/peer/etc. object (like any sane app would). We're talking ~64 bytes per entry to store these stats.

Instrumentation is cheap to collect; the CPU cost is almost non-existent. 64 KB or so of additional memory, plus the minuscule CPU time spent updating those stats and computing averages, is complete chump change compared to the time it takes to keep answering people's speed issues and chasing random slowdowns -- when the answer could be handed straight to the user as data they can see for themselves: "Downloads going slow? Check your starvation rates and average read/write delays." Without data and instrumentation, any amount of "here is generally what you should use" is a complete generalization, and just as much voodoo as Billy turning max user uploads up to 65535.
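For concreteness, here's a sketch of the shape such a per-peer record could take. All field and method names are made up; an exponentially weighted moving average stands in for the "weighted avg over the last 64 intervals", and in C this would be a small fixed-size struct updated inline in the socket code:

```python
# Sketch of a per-peer instrumentation record (illustrative names only).
from dataclasses import dataclass

ALPHA = 1 / 64  # EWMA weight: the average effectively spans ~64 samples

@dataclass
class PeerStats:
    connect_min_ms: float = float("inf")
    read_min_ms: float = float("inf")
    write_min_ms: float = float("inf")
    read_avg_ms: float = 0.0   # EWMA; warms up over the first ~64 samples
    write_avg_ms: float = 0.0
    reads: int = 0
    writes: int = 0
    read_failures: int = 0
    snubs: int = 0

    def record_read(self, delay_ms: float, ok: bool = True) -> None:
        """Fold one read completion into the stats -- O(1), no history kept."""
        self.reads += 1
        if not ok:
            self.read_failures += 1
            return
        self.read_min_ms = min(self.read_min_ms, delay_ms)
        self.read_avg_ms += ALPHA * (delay_ms - self.read_avg_ms)

# Usage: one PeerStats per connection; UI totals are just a sum over
# live records whenever the stats tab is open.
stats = PeerStats()
for d in (12.0, 9.5, 250.0, 11.0):   # assumed sample delays in ms
    stats.record_read(d)
print(f"min={stats.read_min_ms} ms, avg={stats.read_avg_ms:.1f} ms")
```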

Ideally all stats would be tracked within the peer/seed/foo object, and all totals derived while the user is on a stats tab. That gives the most granularity, since it reaches the peer level, but also summarized data that's usable by an average user. It doesn't take a rocket scientist to understand that if connect() and write() are taking 2+ seconds on average to complete, there's a problem, Houston. That's the simple kind of data average users would need to care about.

But for the hardcore users it's good to provide it at the peer level too, especially since it's already being collected.

