Archived

This topic is now archived and is closed to further replies.

rafi

V2.01 uTP issues/bugs [2] (Packet size distribution - uTP vs TCP)

Recommended Posts

updated results for build 18408 below here: http://forum.utorrent.com/viewtopic.php?pid=463418#p463418

This issue is not specific to 2.01; it applies to 1.83 and above. Since 2.01 is not widely distributed yet, I'll post this anyway, hoping that 2.01 will be fixed so as to reduce the uTP performance overhead.

1. Two downloads of the same file were done, one using uTP only, the other TCP only.

2. All data was sampled with Wireshark (and CommView), filtered to only the local uT port.

3. Upload limit was ~20K, and no download limit was set on my peer's side (a 2.5M/250K DSL connection).

The bottom lines:

1. For the same amount of downloaded data (~4.38MB in this test/sample), uTP used 13130 packets and TCP only 8499 (uTP used 55% more packets than TCP!)

2. The reason for that is that uTP uses much smaller packets than TCP (which uses the maximum of 1506B per packet) (see also the results related to the resulting overhead in test 1.A in this thread)

3. Also, if we want to achieve the same download rate, uTP's PPS (packets per second) will be higher than TCP's. Weaker SOHO routers might suffer from that effect. ISPs will not like it much either ... :(

It is recommended to use the maximum possible packet size for uTP, so as to minimize both the PPS and the total # of packets. I guess controlling speed can be done by delaying packets.
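The arithmetic behind those bottom lines can be checked directly. A quick sketch (packet counts and the ~4.38MB figure are from the captures above; the per-packet header sizes are my assumption - 20B IPv4 + 8B UDP + 20B uTP vs. 20B IPv4 + 20B TCP - not measured values):

```python
# Rough check of the packet-count comparison from the capture.
downloaded = 4.38 * 1024 * 1024   # bytes of payload, same in both runs

utp_packets = 13130
tcp_packets = 8499

extra = utp_packets / tcp_packets - 1.0
print(f"uTP used {extra:.1%} more packets than TCP")   # ~54.5%

# Assumed fixed header cost per packet (not from the capture):
UTP_HDR = 48   # 20B IPv4 + 8B UDP + 20B uTP
TCP_HDR = 40   # 20B IPv4 + 20B TCP

print(f"uTP header overhead: {utp_packets * UTP_HDR / downloaded:.1%}")
print(f"TCP header overhead: {tcp_packets * TCP_HDR / downloaded:.1%}")
```

More packets means the fixed per-packet header cost is paid more often, which is where the extra overhead comes from.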

UTP:

statsutppacketsizes.png

udptotalsallnotes.png

TCP:

statstcppacketsizes.png

tcptotalsallnotes.png


It is not recommended to use the maximum packet size at all times. There are a lot of reasons why, but it's only a good idea if you have sufficiently high upload.


I don't fully agree, but OK: more and more people have high upload, so make it the largest possible for them. I will try it if a proper setting makes it definable, and promise to let you know how it goes ... :) I'm sure that if all those people use the highest packet size, peers with less UL power will benefit from it (using them as seeds) as well.


It should be noted that since about 1995 or so, routers have been able to handle high-PPS, low-bandwidth situations just fine.

If you're using a BSD box running on a Pentium Pro, this might be an issue.

It is recommended to use maximum possible packet size for uTP , so to minimize both the PPS and the total # of packets. I guess controlling speed can be done by delaying packets.

This is *not* recommended. Larger packet sizes mean higher delay and lower ability to react to that delay. We specifically started with maximum packet sizes, and moved the packet size down until we were able to control delay accurately.

We are working on our ability to ramp up the packet size if the line rate is sufficiently high. Some of these improvements are going in 2.0.1.
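The delay-controlled approach being described works roughly like LEDBAT: measure queuing delay and grow or shrink the congestion window in proportion to how far the measurement is from the target. A minimal sketch of that idea (the gain constant, MSS value, and update rule follow the LEDBAT approach in spirit; the actual µTorrent internals are not shown in this thread):

```python
TARGET_DELAY_MS = 100.0   # the target delay mentioned in this thread
GAIN = 1.0                # assumed proportional gain
MSS = 1402                # assumed full uTP payload size

def update_cwnd(cwnd: float, queuing_delay_ms: float, bytes_acked: int) -> float:
    """One LEDBAT-style window update: grow when measured queuing
    delay is below the target, shrink when it is above it."""
    off_target = (TARGET_DELAY_MS - queuing_delay_ms) / TARGET_DELAY_MS
    cwnd += GAIN * off_target * bytes_acked * MSS / cwnd
    return max(cwnd, MSS)   # never drop below one packet

# Low measured delay -> window grows; high delay -> it shrinks:
w = 10 * MSS
print(update_cwnd(w, 20.0, MSS) > w)    # True
print(update_cwnd(w, 180.0, MSS) < w)   # True
```

If packets are so large that a single send adds more delay than the whole target, the `off_target` signal saturates and the controller can no longer settle on a window size, which is the oscillation problem described below.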

It should be noted...

noted ... :)

We are working on our ability...

I am aware of that. I thought more stats info (and overhead data) could be of help. No logic whatsoever can determine the *available* line rate (the connection's maximum upload or download). The user can specify that manually, though. So, if possible - let him do that. For now, as it is, any attempt to determine line capacity based on speed or latency is not a valid one.

No logic whatsoever can determine the *available* line rate (the connection's maximum upload or download). The user can specify that manually, though. So, if possible - let him do that. For now, as it is, any decision based on speed or latency is not a valid one.

The reason to use small packets is to not incur delay significantly higher than the target delay. If the target delay is 50 ms (it's currently 100 ms, but we're likely to move this down in the future), and it takes 200 ms of serialization delay to send one MTU, it's a really bad idea to send MTU-sized packets. If you do, your delay measurements will either be 0 or 200, and you're shooting for 50, so you'll oscillate and not find any suitable cwnd size.

Currently we determine the packet size we use based on the send rate, and the send rate is assumed to reflect the delay we see.

If we consistently see delays that are significantly lower than the target, this seems like a perfectly reasonable basis for packet size logic. I'm not sure why you think latency wouldn't be a valid basis.
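The serialization-delay argument is easy to quantify: at the upload rates from these tests, one full-MTU packet occupies the link for a large fraction of (or more than) the delay target. A quick check, using sizes and rates that appear in this thread:

```python
def serialization_delay_ms(packet_bytes: int, rate_bytes_per_s: float) -> float:
    """Time one packet spends being put on the wire at a given send rate."""
    return packet_bytes / rate_bytes_per_s * 1000.0

# A ~1500-byte MTU packet at the ~20 KB/s upload cap from the test:
print(serialization_delay_ms(1500, 20_000))   # 75.0 ms -- already above a 50 ms target

# The ~215-byte packets observed later with a 10 KB/s limit:
print(serialization_delay_ms(215, 10_000))    # 21.5 ms -- comfortably below the target
```

So with a low upload cap, a single full-sized packet's serialization time alone swamps the delay signal the controller is trying to measure, which is why the packet size is reduced at low rates.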

...our ability to ramp up the packet size if the line rate is sufficiently high

Great, I'm all for it. "Line rate" is NOT equal to 'uT rate'. It's the maximum possible *connection rate* a user gets from the ISP, and you just cannot "tell" what it is. It has nothing to do with any artificial "set limit" the user might have set in uT. He just has to set it manually if you really need it in uT.

As a 2.5M peer, I can DL at full speed even if I have a 200 msec target delay. My RCV-Window is set up for it (XP). I expect seeds to provide me with full sized packets to minimize overhead.

If the target delay is 50 ms ... and it takes 200 ms of serialization delay to send one MTU...

In the other (send) direction, TCP sends 1506 bytes here (my MTU is 1492), and uT limits the UL speed just fine to ~20K. uTP should do it too, or at least try to... I'm sorry to hear that there were technical difficulties in doing so, and can only hope they can be solved without crippling the data with extra overhead. I guess, for a good-quality line, sending several UDP packets with only one ack could speed up your logic a bit. UDP is completely application dependent, and you can set/enhance the "standard" here, being the first, or use this "turbo" mode only with uT peers...

Well, just put in a minimum packet size as a setting/parameter, and we'll be happy to report the results on a test-build or a next beta.

Edit:

Déjà vu? ... try to make more packets concurrently 'in flight' :)

http://forum.utorrent.com/viewtopic.php?pid=409054#p409054


This packet size distribution and the higher packet send rate made small ISPs really angry... Weak and cheap (and very widespread) home routers (DIR-300) fail too... Wi-Fi dies instantly...


@eliot: did you try it out on your DIR-300 router/Wi-Fi, and did it fail? Did you use a "worst case scenario" with uTP-only mode? I'm sure the devs are working on optimizing it as much as possible.

Oh, and for all the posters above, an interesting read - UDT: http://udt.sourceforge.net/

This is an interesting implementation of a UDP-based data transfer protocol that also has configurable congestion control (CCC). The point I was interested in is the way they implement the CCC: as far as I understood it, they control only the delays between packets, and do not change the sizes.
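Rate control by delaying packets rather than shrinking them amounts to pacing: keep every packet full-sized and space the sends so that payload/interval equals the cap. A minimal sketch of the idea (function and variable names are mine for illustration, not UDT's or µTorrent's API):

```python
import time

def pacing_interval_s(packet_bytes: int, rate_bytes_per_s: float) -> float:
    """Gap to leave between full-sized packets so the average
    send rate matches the cap (speed control via inter-packet delay)."""
    return packet_bytes / rate_bytes_per_s

def paced_send(payloads, rate_bytes_per_s, send):
    """Send full-sized payloads, sleeping between sends to hold the rate."""
    for p in payloads:
        send(p)
        time.sleep(pacing_interval_s(len(p), rate_bytes_per_s))

# e.g. full 1402-byte uTP payloads at a 20 KB/s cap:
gap = pacing_interval_s(1402, 20_000)
print(f"one packet every {gap * 1000:.1f} ms")   # ~70.1 ms
```

The trade-off against small packets is the one discussed above: with pacing, the overhead stays low, but each (larger) packet contributes more serialization delay to the delay measurement.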


Yes, I believe in uTP so much... I use bt.transp_disposition=10... :)

And I have no problems here with my 4M/4M ethernet link...

30% of packets of length >1280

40% (SYN) length<79

When link is fully utilized... With 65 uTP peers connected...

I have no routers, no wifi... But I have angry friends in IT... :)


Update: Overload test results for build 18408

In this build, a new setting I suggested was introduced: net.uTP_dynamic_packet_size. Below are updated uTP results with this setting set to FALSE.

My conclusions:

A. When the new setting is set to FALSE (using large, MTU-sized packets), we can see a big improvement - reduced overhead!

B. There is still an issue when the seeder is limiting his upload. In this case, this setting does NOT work! The immediate result is small packets and a LARGE induced overhead! The smaller the limit, the higher the overhead.

C. I recommend the following defaults (related to uTP):

* set net.uTP_dynamic_packet_size = FALSE

* net.utp_initial_packet_size = 8

* bt.tcp_rate_control = false (optional, not uTP related)

Only then will all seeders be affected, improving the overhead for all the other downloaders.

D. I also see a required change: the speed limiters (mainly the upload limiter) need to be modified so that speed control does not cause packets to be smaller than specified with net.uTP_dynamic_packet_size.
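Conclusions B and D follow from simple arithmetic: the per-packet header cost is fixed, so shrinking the payload inflates the relative overhead. A sketch of that relation (the 48-byte header figure is my assumption for IPv4 + UDP + uTP; the 215B packet size is from the 10K-limit test above):

```python
HEADER_BYTES = 48   # assumed: 20B IPv4 + 8B UDP + 20B uTP

def overhead(payload_bytes: int) -> float:
    """Header overhead as a fraction of goodput for one packet size."""
    return HEADER_BYTES / payload_bytes

# Full-MTU payloads vs. the 215-byte packets seen with a 10K upload limit:
print(f"{overhead(1402):.1%}")                 # ~3.4%
print(f"{overhead(215 - HEADER_BYTES):.1%}")   # ~28.7%
```

This is consistent in direction with the measured caps test below (9% overhead without the limiter vs. 22% with it): once the limiter forces small packets, most of every datagram is headers.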

Note:

A strange issue I noticed: uTorrent sends very large packets over a loopback connection (up to 16KB and even 32KB). Those were fragmented (on XP/SP2) and no harm was done.

A positive side-effect - was a reduced number of Acks from the peer and less overhead ... :)

A negative side-effect can be - bad influence on congestion :(

So, maybe this is something to think about - how to somehow use it with a real NIC ...

Here are the tests themselves:

Relevant settings changed for all tests:

net.utp_initial_packet_size = 8 (=> MTU size)

net.utp_dynamic_packet_size = false (=>MTU size)

bt.transp_disposition = 10 (=>uTP)

bt.tcp_rate_control = false

1. With the same test as above, in the first post - no real change. Let's see why:

net.uTP_dynamic_packet_size = FALSE

With a UL limit set at the seed - still small packets; without a UL limit at the seed - only large packets!

stillsmallpacketsulon.th.png nosmallpacketsuloff.th.png

This implies that with an active UL speed limiter (which is the recommended setting) the new setting has no effect! :(

Also, in a similar test but with a 10K upload limit, ALL packets were 215B in size:

201audpul10kdefault.th.png

2. The effect on transfer caps - for a small 11.1M file

net.uTP_dynamic_packet_size = FALSE

With the UL limit ON - 13.5M transferred (22% overhead!); when it's OFF - 12.1M (9% overhead)

capsul20k.th.png capsuloffsmall.th.png

3. Testing with an external speed limiter only.

In this test, NetLimiter was used to simulate a network load condition with a 20K limit. The following shows how the overhead changes when net.uTP_dynamic_packet_size is set to FALSE or TRUE:

TRUE: small packets, large overhead; FALSE: larger packets, small overhead

201audpunl20kdlnetlimitd.th.png 201audpunl20kdlnetlimitj.th.png


Yes. Sorry, I forgot to mention it...

changes were:

net.utp_initial_packet_size = 8 (=> MTU size)

net.utp_dynamic_packet_size = false

bt.transp_disposition = 10 (for the peer)

I will edit it in, thanks
