
Upload speed problem: "the net allows it, uTorrent can't keep up?"


sk-lt



And how about the other clients? Does he experience bad speeds (150 kB/s)? Has he ever experienced such speeds before?

EDIT: Not bad (we'll never reach speeds of 8 MB/s in Bulgaria :P - "we" means average people :P)

EDIT2: I'm uploading at 300-500 kB/s to a µTorrent 1.3 user now (omg, what's going on?)

Total upload speed: 600-800 kB/s - so the network can support it :)

Haven't tested µTorrent - µTorrent with myself yet :P


(17:12:09) (ludde) i've added some optimizations to the uploading code

(17:12:13) (ludde) maybe it will upload faster now

(17:16:31) (ludde) (for single connections)

(17:19:37) (ludde) i'll test it against azureus soon

It's super that THE boss has taken some measures. But, as I wrote, Azureus is one of the clients that don't cause too many problems. But oh well... I can complain further :D

Thank u!


It seems that µTorrent - µTorrent is the best uploading environment (can't be sure, but I have never reached upload speeds of more than 150 kB/s to other clients; µTorrent - µTorrent reached 800 kB/s :)). 300-1000 kB/s is not a bad speed either. I'm not aiming too high (8 MB/s), I just want to download a film in less time than it takes to watch it :P We'll see... we believe in ludde :P

BTW ludde, try to test it against other clients too.


(17:40:18) (ludde3) yay

(17:40:27) (ludde3) now it uploads at 7.5MB/s instead of 4MB/s

(17:46:21) (ludde3) tweaked some settings

(17:46:10) (ludde3) 8MB/s

:P

Err... So are these "settings" in the code, or what? Am I able to change something myself, or should I just wait for a new release?


Ludde, please consider increasing the pending-pipeline-requests factor. My "feel" is that in higher-latency situations (I think you're testing on low latency?), utorrent is getting starved out due to being non-pipelined. The TCP ack delay is killing the efficiency in high-volume, high-latency situations.

I've seen situations where I'm in tit-for-tat (TFT) with another client (Mainline 4.x maybe) in the 100 KiB/s neighborhood, and *my* pending reqs are usually 0-2 while the other side has 12-50 reqs. I've never been able to maintain download parity with these peers, yet the peer DL rate for the other guy is something insane like 700 KiB/s, so I'm pretty sure he'd give me more if I pipelined the reqs better so he could saturate the TCP pipe properly.

Since the only penalty for being too aggressive with queued requests is a slight reduction in protocol efficiency (well, that and the stupid BC bug), I implore you to lean in that direction instead of having it strangle my flow.
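For what it's worth, here's a minimal sketch of the arithmetic behind this (Python, purely illustrative - none of it is µTorrent's actual code, and the rate, RTT, and margin values are assumptions for the example): the request queue has to cover one full round trip's worth of data, or the sender idles between blocks.

```python
import math

BLOCK_SIZE = 16 * 1024  # standard BitTorrent request size: 16 KiB


def required_pending_requests(target_rate, rtt, margin=1.25):
    """Requests that must stay in flight to keep one peer link saturated.

    The peer can only send as fast as we hand it work, so the queue must
    cover one full round trip of data, plus a margin for jitter.
    """
    bytes_in_flight = target_rate * rtt  # bandwidth-delay product
    return max(1, math.ceil(bytes_in_flight * margin / BLOCK_SIZE))


# 700 KiB/s from a peer 200 ms away needs ~11 outstanding requests,
# while a queue stuck at 2 requests caps that same link at
# 2 * 16 KiB / 0.2 s = 160 KiB/s, no matter how fast the peer is.
print(required_pending_requests(700 * 1024, 0.2))  # -> 11
```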


As much as I love µTorrent, I'll have to agree with AllWeasel on his request. I haven't found a way to explain this behaviour without sounding like a leech (which I am not; stats are there to prove it if requested), but AllWeasel managed to break the proposal down in a clear way. His comments perfectly reflect what I've also been experiencing, and it may be a significant factor in the numerous "my download speeds are slow!!!" threads... :/


I forgot to mention another empirical observation that may help you. I had noticed an odd tendency for utorrent to incur a rather severe penalty in DL rate when *approaching* the physical capacity of the upload BW. Unlike other BT clients, where I could see the ISP throttling back the upload rate in response to exceeding the ISP (or "physical") limit, with utorrent I'd have to run considerably lower UL caps to attain stable and saturated DL rates. The UL wouldn't saturate and wouldn't ISP-throttle, but the DL rate would just crash by a third or more and wobble all over the place. It smelled of a PID control loop that isn't tuned properly and keeps bouncing from rail to rail.

Other clients can use significantly higher UL caps and still not avalanche the DL rate. Even when they were over-uploading and delaying the acks, it was a fairly smooth albeit non-linear transition. I've found "the sweet spot" where a 5 KiB increase in UL cap would plummet utorrent's DL by nearly 100 KiB/s, then immediately recover when the setting was restored. It just occurred to me that an under-fed pipeline of reqs (*from* utorrent to other peers) would absolutely behave this way. Also, I can't *ever* remember seeing the occasional 500-700 KiB/s DL rate (that my 9 Mb/s cable modem can deliver) after switching to utorrent. Granted, it was a "best case" deal, but still, 450+ was pretty common and I haven't even seen *that* since I switched to utorrent. I'm not whining about DL though. I'm simply asking to adjust the behavior to maximize the EFFICIENCY of whatever the peers can negotiate.

A couple of days ago, I installed cFosSpeed (thanks Firon) and noticed all the benefits you would expect from acks having higher priority and low latency. :) Nonetheless, utorrent was still doing the avalanched-DL thing, and I verified this last night by switching to another client for a little while (same torrents). I was able to get much higher DL rates on one popular torrent (2K+ peers) *and* I was able to add nearly 20 more KiB/s to my upload cap before the other BT client began to get upload-starved. Even when starved, it wasn't the avalanche effect; it was a lot smoother than that. It was, well, what you expect BT to feel like when it can't send all the BT system messages it needs to for a given DL rate. Again, I think this points to a req queue that is too small to properly saturate the TCP snorkel.

Another reason to consider this issue seriously is that when the incoming pipeline is not saturated, utorrent is artificially limiting a good TFT peer and making them appear less giving. This pushes the peer preference toward smaller-BW (but lower-latency) peers, which totally defeats the purpose of using TFT to alter the peer preference. It also cheats the other peer (and us) out of a fair chance to achieve Pareto efficiency. The faster they try to go, the more starved they appear, so we cut back, which causes them to cut back, yada yada yada.
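To make that feedback spiral concrete, here's a hypothetical toy model (this is not the real choker logic of any client; the starting offer, pipeline cap, RTT, and the 50/50 rate-matching rule are all invented for illustration) of a TFT peer matching what it *sees* us taking, while our capped pipeline limits what we can take:

```python
def simulate(rounds=6, pipeline_cap_kib=32, rtt=0.2):
    """Toy model: peer B offers bandwidth, we take what our pipeline
    allows, and B drifts toward matching our apparent rate (tit-for-tat)."""
    offer = 700.0  # KiB/s that peer B is initially willing to send us
    for r in range(rounds):
        # With e.g. 2 x 16 KiB requests in flight and a 0.2 s RTT, we can
        # never receive more than pipeline_cap / rtt, whatever B offers.
        received = min(offer, pipeline_cap_kib / rtt)
        offer = 0.5 * offer + 0.5 * received  # B scales back toward our rate
        print(f"round {r}: we received {received:5.1f} KiB/s, B now offers {offer:5.1f}")

simulate()  # the offer decays from 700 toward the 160 KiB/s pipeline ceiling
```

However crude, the shape matches the complaint: the ceiling is ours, but the peer reads it as stinginess and cuts back.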

When transferring data over TCP, like BitTorrent does, it is very important to always have several requests pending at once, to avoid a delay between pieces being sent, which is disastrous for transfer rates.

Honestly, in average usage, I don't see utorrent posting "several" reqs; I see a "couple" or a "few", and by far most often zero reqs pending on active peers. Granted, it's semantics, and purely subjective, but I nearly *always* see higher req queues on all the other peers. IOW, 0|3, 0|2, 2|7 is the kind of thing I see - another hint that utorrent may be starving the queue a bit too much.

I remember reading about a Linux-based custom BT client that lived in a proprietary WAN or LAN environment. The author was complaining about the disk bandwidth and all the memory-mapping, zero-copy tricks he had to employ to keep the network upload rate saturated. As a side note, he mentioned that in the early days of testing, he was shocked to see how dramatic an effect the proper req queue size had on attaining high data rates. After he implemented an intelligent algorithm to adjust the queue size dynamically, he immediately hit the wall with the disk bottleneck - and was shocked at the increased data flow rate, of course. :)

I don't mean to pound on you; I just thought these experiences might be helpful to explain seemingly unrelated issues that might be caused by req queue starvation - which should be pretty easy to fix.


I forgot to mention another empirical observation that may help you. I had noticed an odd tendency for utorrent to incur a rather severe penalty in DL rate when *approaching* the physical capacity of the upload BW. Unlike other BT clients, where I could see the ISP throttling back the upload rate in response to exceeding the ISP (or "physical") limit, with utorrent I'd have to run considerably lower UL caps to attain stable and saturated DL rates. The UL wouldn't saturate and wouldn't ISP-throttle, but the DL rate would just crash by a third or more and wobble all over the place. It smelled of a PID control loop that isn't tuned properly and keeps bouncing from rail to rail.

That's exactly what I've also experienced (and I'm sure others have as well), unless I set my upload at an even lower threshold. Some have coined it the "yoyo effect", and it certainly looks like that when viewed from the Speed tab.

Other clients can use significantly higher UL caps and still not avalanche the DL rate. Even when they were over-uploading and delaying the acks, it was a fairly smooth albeit non-linear transition. I've found "the sweet spot" where a 5 KiB increase in UL cap would plummet utorrent's DL by nearly 100 KiB/s, then immediately recover when the setting was restored. It just occurred to me that an under-fed pipeline of reqs (*from* utorrent to other peers) would absolutely behave this way. Also, I can't *ever* remember seeing the occasional 500-700 KiB/s DL rate (that my 9 Mb/s cable modem can deliver) after switching to utorrent. Granted, it was a "best case" deal, but still, 450+ was pretty common and I haven't even seen *that* since I switched to utorrent. I'm not whining about DL though. I'm simply asking to adjust the behavior to maximize the EFFICIENCY of whatever the peers can negotiate.

I concur. :) There's nothing wrong in trying to achieve greater download speeds, as long as it's done in an honest, orderly, non-BitVomit-like, "within-the-BitTorrent-protocol" manner.

A couple of days ago, I installed cFosSpeed (thanks Firon) and noticed all the benefits you would expect from acks having higher priority and low latency. :) Nonetheless, utorrent was still doing the avalanched-DL thing, and I verified this last night by switching to another client for a little while (same torrents). I was able to get much higher DL rates on one popular torrent (2K+ peers) *and* I was able to add nearly 20 more KiB/s to my upload cap before the other BT client began to get upload-starved. Even when starved, it wasn't the avalanche effect; it was a lot smoother than that. It was, well, what you expect BT to feel like when it can't send all the BT system messages it needs to for a given DL rate. Again, I think this points to a req queue that is too small to properly saturate the TCP snorkel.

Interesting observation. Well, IMO this further proves the point that the request queue is perhaps a bit too conservative in nature and should be addressed by Ludde sometime.

Another reason to consider this issue seriously is that when the incoming pipeline is not saturated, utorrent is artificially limiting a good TFT peer and making them appear less giving. This pushes the peer preference toward smaller-BW (but lower-latency) peers, which totally defeats the purpose of using TFT to alter the peer preference. It also cheats the other peer (and us) out of a fair chance to achieve Pareto efficiency. The faster they try to go, the more starved they appear, so we cut back, which causes them to cut back, yada yada yada.

Indeed, and thus I also believe this issue is very important to many people who've been having download speed problems. Once fixed, it could eliminate a lot of headaches. :)

When transferring data over TCP, like BitTorrent does, it is very important to always have several requests pending at once, to avoid a delay between pieces being sent, which is disastrous for transfer rates.

Honestly, in average usage, I don't see utorrent posting "several" reqs; I see a "couple" or a "few", and by far most often zero reqs pending on active peers. Granted, it's semantics, and purely subjective, but I nearly *always* see higher req queues on all the other peers. IOW, 0|3, 0|2, 2|7 is the kind of thing I see - another hint that utorrent may be starving the queue a bit too much.

I remember reading about a Linux-based custom BT client that lived in a proprietary WAN or LAN environment. The author was complaining about the disk bandwidth and all the memory-mapping, zero-copy tricks he had to employ to keep the network upload rate saturated. As a side note, he mentioned that in the early days of testing, he was shocked to see how dramatic an effect the proper req queue size had on attaining high data rates. After he implemented an intelligent algorithm to adjust the queue size dynamically, he immediately hit the wall with the disk bottleneck - and was shocked at the increased data flow rate, of course. :)

This is certainly something to keep in mind if/when the request queue gets resolved, otherwise we could open up another can of worms on the forums... :P

I don't mean to pound on you; I just thought these experiences might be helpful to explain seemingly unrelated issues that might be caused by req queue starvation - which should be pretty easy to fix.

Me neither, but I hope Ludde will have the time to look at it closely and make the necessary changes to his client. It's all for the common good. :)


Although the topic discussed here is quite sophisticated for me ;-), I've a few things to post here. I'm still having the BitComet download speed problem, mainly in small swarms where most (or all) seeders/peers are BitComet. The second thing is that, after reading your posts, I noticed my reqs are also very low - not going over 2. Maybe this will be the way to go?


I'm still having the BitComet download speed problem, mainly in small swarms where most (or all) seeders/peers are BitComet. The second thing is that, after reading your posts, I noticed my reqs are also very low - not going over 2. Maybe this will be the way to go?

BitComet is the tool of choice for leechers, so it should come as no surprise that many of those BC clients don't give up much. There isn't anything utorrent does to discriminate against *any* client, so a BitComet-specific rate problem isn't really a utorrent problem. In cases where BC is ill-behaved and causes utorrent to alter its normal behavior (usually by cheating non-BC clients) - well, that probably *is* a utorrent issue. :)

On the second part of your comments: having a req count of 1 or 2 or even zero is *not* a bad thing in all cases. The "correct" req count is relative to the combined factors of data rate, Internet packet delay, turnaround delay (for both clients), TCP error rate, Internet QoS, choker status of both clients, the UL or DL caps imposed by either client, the ISP (or even the physical transport medium), and a few oddball items that aren't usually significant.

Calculating the necessary req queue for any given moment is often done as a PID loop or an adaptive/predictive algorithm with similar behavior. It's usually very complex to fully implement unless you just cheat a bit and use empirical observation to determine a few constants for you (a rough sketch of that "cheat a bit" approach follows the carrot analogy below).

Imagine that you own a store and it takes two days to get supplies from the coast inland to your little town. If you don't want to run out of carrots, you have to keep enough carrots in the store to cover today's sales *and* tomorrow's sales *and* the next day's sales *and* a little extra for days when carrots are more popular (or the delivery is late).

You sell 50 carrots today, so you order 50 carrots to be delivered two days from now. The next day you sell 25, so you order 25 to be delivered two days from then. Now somebody publishes a new salad recipe at the town meeting and you sell 75 carrots today. In addition, you hear people talking about that great new salad recipe, so you know tomorrow's carrot sales will be above normal. This is a "prediction" on your part. So, instead of ordering 75 carrots to cover today's sales (which would leave you short of stock tomorrow or the next day), you order 150 instead.

Sure enough, the salad craze is booming and you sell 85 carrots today, 95 the next day, and 105 the day after that. So each day you "predict" what the increase in sales will be, or else you risk running out of carrots before the delivery truck gets there in two days. This goes along for some time until you sell 75, 70, 65, 60, 55, 50, and all the while you're accumulating more and more carrots that are getting pretty old.

Now you have to start "predicting" the slowing sales and order less than you're actually selling - until you hit the ideal point: just enough carrots to nearly sell out today, few enough that they don't get too old to sell, and more carrots already ordered so that you always have some on the shelf.

Now add wildcard things like the truck breaking down, holidays, union strikes, or multiple suppliers. Things get complicated in a hurry. The computer tries to do the same kind of thing. It tries to predict what factors will conspire to delay the data packets and pre-orders enough pieces to cover those cases - without ordering too much, because un-ordering a pre-order is a PITA and uses even more bandwidth to fix the mistake of ordering too much.

Several of the things in the "prediction" equation are sensitive to the flow rate, which is constantly changing. In addition, having to un-order something can cause a previous prediction to be wrong simply because of the extra BW needed to un-order it. So you have to predict that those things will sometimes alter your predictions.

In the grocery store analogy, we're suggesting that utorrent is letting the shelves go bare sometimes, and that means a drop in the *maximum* carrot sales that were possible that day. We're suggesting that whatever prediction utorrent is making, it might be better to add another 10 or 20 percent to each order (or in some other way keep a bit more reserve available), generally speaking. This sort of thing is often a tweak-and-try situation.
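Since we're in tweak-and-try territory anyway, here's a hypothetical sketch (Python; every constant is an invented placeholder, not a µTorrent value) of the carrot logic in code: smooth the observed "sales" (data rate) and "delivery delay" (request-to-block time), size the order book from their product, and pad it with the 10-20 percent reserve suggested above.

```python
BLOCK_SIZE = 16 * 1024  # 16 KiB per request, the usual BitTorrent block


class RequestQueueSizer:
    """Adaptive req-queue sizing via smoothed rate/delay estimates."""

    def __init__(self, margin=1.2, alpha=0.125):
        self.margin = margin  # the "extra carrots" reserve factor
        self.alpha = alpha    # smoothing weight, as in TCP's SRTT estimator
        self.rate = 0.0       # smoothed download rate from this peer, bytes/s
        self.delay = 0.5      # smoothed request->block delay, seconds

    def on_block(self, nbytes, interval, request_delay):
        # Exponential smoothing, so one late "delivery truck" doesn't
        # collapse the queue (the rail-to-rail bouncing described earlier).
        self.rate += self.alpha * (nbytes / max(interval, 1e-6) - self.rate)
        self.delay += self.alpha * (request_delay - self.delay)

    def target_queue(self):
        # Orders in flight must cover rate * delay (the carrots consumed
        # while the truck is on the road), plus the safety margin.
        in_flight = self.rate * self.delay
        return max(2, int(in_flight * self.margin / BLOCK_SIZE) + 1)
```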


Yeah, build 404 is released: http://utorrent.com/download/beta/utorrent-1.4.1-beta-build-404.txt

From what I see now (30 seconds after I started it), it behaves better... uploading to a BitComet client at higher speeds (300-400 kB/s). I will run some tests to see if it can reach even higher speeds.

TEST 1

Seeding with µTorrent, leeching with BitComet.

Started slowly (uploaded at about 300 kB/s for 20 seconds), then gained speed and reached a maximum of 1.5 MB/s.

// I've done that test twice to see if the result is the same... It is.

TEST 2

Seeding with BitSpirit, leeching with BitComet.

Reached a maximum speed of 2.5 MB/s (lol, it's double compared to TEST 1... not a small difference).

// Did this test just for comparison.

TEST 3

Seeding with µTorrent, leeching with BitSpirit.

Reached a maximum speed of 7 MB/s (omg, that's cool).

TEST 4

Leeching with µTorrent, seeding with BitSpirit.

Reached a maximum speed of 3.5 MB/s.

TEST 5

Leeching with µTorrent, seeding with BitComet.

Reached a maximum speed of 5 MB/s.

Conclusion:

A lot better speeds now, ludde... good work. I noticed that when I upload to other clients (not BitComet), my Reqs column shows 0|150, changes to 0|200, and so on. When I upload to BitComet it is 0|15, then it disappears for some time and reappears as 0|10. Maybe the low upload speed to BitComet is due to that?

I think that release should cut the speed-problem complaints in half :)


Apparently build 405 has done something to significantly improve my DL throughput. I'm guessing that the req queue was increased, but it feels like that wasn't the only internal tweak. After using it for only an hour or so, I've seen peak DL rates over 580 KiB/s, which had been quite rare before. Utorrent is also much quicker to ramp up the DL rate and appears generally more responsive to changing peer conditions. It still feels a bit slow to keep the max peers connected, but it never gets far enough down to worry about.

In any case, thank you very much for listening and being so responsive to the users. :)

Oh, and I wanted to add that I'm still constantly blown away by how fast utorrent gets a torrent going when it's first launched.

Edit:

Last night, I grabbed some large public torrents which had marginal swarms. I had constant DL rates over 300 KiB/s and short runs over 780 KiB/s. This is absolutely better sustained DL than the previous utorrent build. Thanks again. :)


Yep, I can upload at full power now. But it feels like it needs more tuning though... The UL rate doesn't jump to max from the start; it needs some time (with BitComet). Azureus and BitTornado are doing just fine leeching :)

Thank u again!


I set my cache to 4000 and it works fine... higher speeds need more disk activity.

However, there is no cache when uploading... peers request different pieces all the time, so there is no way for it to cache.

There is, actually.

diskio.read_cache_size

I prefer to have a minimum 5 MB read cache (and the utorrent process gets 5 MB bigger), but this way my HD won't be torn to pieces.
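For anyone wondering why a read cache helps even though "peers request different pieces all the time": each individual peer still walks through a piece block by block, so readahead converts many 16 KiB disk hits into one sequential read. A toy sketch of the idea (this is not µTorrent's implementation; the readahead depth and the read_disk callback are assumptions for the example):

```python
from collections import OrderedDict

BLOCK = 16 * 1024  # one BitTorrent request
READAHEAD = 4      # blocks pulled per disk visit (assumed, tunable)


class ReadCache:
    """LRU block cache with readahead for serving upload requests."""

    def __init__(self, capacity=5 * 1024 * 1024, read_disk=None):
        self.capacity = capacity    # e.g. the 5 MB mentioned above
        self.read_disk = read_disk  # callable: (piece, offset, n) -> bytes
        self.blocks = OrderedDict() # (piece, offset) -> bytes, LRU order

    def get_block(self, piece, offset):
        key = (piece, offset)
        if key in self.blocks:
            self.blocks.move_to_end(key)  # refresh LRU position
            return self.blocks[key]
        # Miss: fetch this block plus the next few in one sequential read,
        # betting the same peer asks for them next.
        data = self.read_disk(piece, offset, BLOCK * READAHEAD)
        for i in range(READAHEAD):
            chunk = data[i * BLOCK:(i + 1) * BLOCK]
            if chunk:  # may come up short at the end of a piece/file
                self.blocks[(piece, offset + i * BLOCK)] = chunk
        while sum(len(v) for v in self.blocks.values()) > self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        return self.blocks.get(key, b"")
```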


Archived

This topic is now archived and is closed to further replies.

