Torrent-friendly ISPs frown at uTorrent and uTP


tim716


Case in point: 1 packet that's 1500 bytes in size versus 2 packets that are 750 bytes each. Both are 1500 bytes "long", yet the 2nd one (2 packets) has a delay only half as long if another packet can be placed between them.

But if there are a bunch of packets in the shaper queue or in the adapter/modem buffer, and some of them are small uTorrent packets (e.g., the minimal 150-byte size) with a 100 ms target delay, while a VoIP call or game uses packets of ~1 KB, then without prioritization those big packets will see delays of 600-700 ms, and such a service will be practically unusable. So uTP will simply flood the connection - it consumes all available bandwidth, and other services get starved...
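
Rough arithmetic shows why (a minimal C sketch of the queueing delay; every number here is an assumption, not a measurement):

/* Illustration only: all numbers are assumptions, not measurements.
 * Whatever mix of packets sits in the shaper/modem buffer, a newly
 * arriving VoIP/game packet must wait for every queued byte to
 * serialize onto the wire first. */
#include <stdio.h>

int main(void)
{
    const double uplink_bps   = 512000.0; /* assumed 512 kbit/s uplink    */
    const double queued_bytes = 42000.0;  /* assumed ~42 KB in the buffer */

    double wait_ms = queued_bytes * 8.0 / uplink_bps * 1000.0;
    printf("new packet waits ~%.0f ms\n", wait_ms); /* ~656 ms */
    return 0;
}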

And there is no way to prioritize the non-uTP kinds of traffic, because there is no signature by which uTP can be separated into its own traffic class.


Decreasing packet rate is not the equivalent of decreasing packet size.

Case in point: 1 packet that's 1500 bytes in size versus 2 packets that are 750 bytes each. Both are 1500 bytes "long", yet the 2nd one (2 packets) has a delay only half as long if another packet can be placed between them.

This makes no sense to me. I think I need to understand exactly what you mean by the word *delay*.

If you want to limit delay to 50 ms, then that's 1/20th of a second. 1500 byte packets 20 times a second is a minimum of 240 kilobits/second (~30 KB/sec) upload rate.

Okay, this is interesting. It seems that I have truly misunderstood what was meant by uTP delay. You are using the term *delay* to represent the time quanta between sending packets? In my mind, I was understanding *delay* to mean the delay between the sending of a packet and its delivery to the destination, making the term *delay* essentially equivalent to *latency*: something to be measured rather than something to be timed.

Except the uTorrent devs wanted a delay of 50ms for 3 packets in a row, so that needs a minimum upload speed of 3 times that -- or 720 kilobits/second (~90 KB/sec). These are usable sustained upload rates, not peak bursts. The line probably needs closer to 1 megabit/second upload speed before it should consistently use max-MTU 1500 byte packet sizes.
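
For what it's worth, here is the arithmetic behind both figures as a tiny C sketch (my own illustration, not client code):

/* Minimum sustained upload rate needed to push `packets` full-size
 * packets out within `delay_ms` milliseconds.
 * Note: bits per millisecond == kilobits per second. */
#include <stdio.h>

static double min_rate_kbps(int packet_bytes, int packets, double delay_ms)
{
    return packet_bytes * packets * 8.0 / delay_ms;
}

int main(void)
{
    printf("1 packet in 50 ms:  %.0f kbit/s\n", min_rate_kbps(1500, 1, 50.0));
    /* -> 240 kbit/s (~30 KB/s) */
    printf("3 packets in 50 ms: %.0f kbit/s\n", min_rate_kbps(1500, 3, 50.0));
    /* -> 720 kbit/s (~90 KB/s) */
    return 0;
}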

I fail to understand why arbitrary packet rates need to be chosen, or why packet size needs to change at all, when uTP congestion control is *supposed* to be about measuring changes to latency and adjusting bandwidth based upon the results. This can be accomplished by simply locking the packet size to the MTU and changing packet send rates based upon latency metrics.

Except even to do that assumes a line that's otherwise nearly idle. If there's other internet traffic, or others sharing the same connection, then small uTP packets are sometimes going to be used even on a much faster line...such as while downloading at close to max linespeed.

This is just my opinion/guesses.

I am not a uTorrent/BitTorrent developer. I am only a uTorrent moderator. (I'm a big reason why you don't see this forum overrun with spam.) Likewise, rafi is not a uTorrent/BitTorrent developer...at least not any more than tim716 is.

I see. I have actually considered grabbing the source for the uTP protocol and modifying it to act the way I just described, but I don't have a torrent framework to link the uTP protocol to, so I wouldn't really have any interesting real-world tests. Do you know if there is some implementation of bittorrent that is available in source that uses the publicly available uTP source that was recently released? Some little command line bittorrent tool floating around inside the uTorrent labs or something?


Do you know if there is some implementation of bittorrent that is available in source that uses the publicly available uTP source that was recently released? Some little command line bittorrent tool floating around inside the uTorrent labs or something?

KTorrent 4.0 is the 1st client after µT to support µTP. And it's a BT client under GNU GPL.

http://ktorrent.org/?q=node/42


KTorrent 4.0 is the 1st client after µT to support µTP. And it's a BT client under GNU GPL.

http://ktorrent.org/?q=node/42

Yes, but the packages are not yet available through ports or any of the other package managers for OS X, and I don't feel like screwing around with getting a Qt build environment that KTorrent respects. I suppose I'll just have to wait a while.

I went and read the spec and it is definitely true that uTP is monitoring latency and allowing that to guide the bandwidth decisions. There is a lot of handwaving about this latency coming from the send buffer of the modem, but in practice, it really could mean any type of congestion. There is very little discussion about why packets are kept small, something like one paragraph. It does not seem to be a requirement of the protocol, but rather, an implementation choice of the current developers, one that should be rather easily fixed.


It does not seem to be a requirement of the protocol, but rather, an implementation choice of the current developers, one that should be rather easily fixed.

Correct, and uT devs also confirmed that in another discussion (related to Transmission).

In my view, the latency is related to your physical distance from the peers. In my example it's about 100 ms to Europe and 200 ms to the US (for the smallest packet). Nothing will change that. Sending a 1500-byte packet adds about +20 ms, a +10-20% effect, on top of that. Yes, it is also affected by the current congestion on your connection, but it is still only about a +15% difference.

So for me:

1. The actual uT upload speed is artificially set by the user and bears no direct relation to the connection speed or the connection's congestion. So no conclusions about the real congestion status can be drawn from uTP time measurements (under those conditions).

2. With TCP, uT still uses larger packets. So what is the sense in uTP using smaller ones at the same time?

3. uTP with smaller packets was never tested in the 2.x releases and compared against larger ones (on a relatively low upload speed limit or connection). No one has verified its 'blessed' effect on the connection's total congestion.

Therefore, the bottom line (as I see it): the hugely increased overhead/PPS caused by using small packets is not worth this ~15% decrease in latency and the as-yet-untested effect (if any) on congestion. Better to implement uTP with MTU-sized packets, at least when the real line congestion status is unclear/unmeasurable.
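
To put numbers on the overhead/PPS claim, here is a small sketch (I'm assuming roughly 48 bytes of IP + UDP + µTP headers per packet and treating the quoted sizes as payload; both are assumptions, not measured values):

/* Assumed per-packet overhead: ~20 bytes IP + 8 UDP + ~20 µTP header. */
#include <stdio.h>

#define HEADER_BYTES 48

static void show(int payload_bytes, double rate_kbps)
{
    double pps = rate_kbps * 1000.0 / 8.0 / payload_bytes; /* packets/sec */
    double overhead_pct = 100.0 * HEADER_BYTES / (payload_bytes + HEADER_BYTES);
    printf("%4d-byte payload @ %.0f kbit/s: %5.0f PPS, %4.1f%% overhead\n",
           payload_bytes, rate_kbps, pps, overhead_pct);
}

int main(void)
{
    show(150, 720.0);  /* small uTP packets: ~600 PPS, ~24% overhead */
    show(1380, 720.0); /* near-MTU payload:   ~65 PPS,  ~3% overhead */
    return 0;
}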

C:\WINDOWS>ping -l 1450 yahoo.com

Pinging yahoo.com [209.191.122.70] with 1450 bytes of data:

Reply from 209.191.122.70: bytes=1450 time=240ms TTL=49
Reply from 209.191.122.70: bytes=1450 time=239ms TTL=49
Reply from 209.191.122.70: bytes=1450 time=238ms TTL=49
Reply from 209.191.122.70: bytes=1450 time=238ms TTL=49

Ping statistics for 209.191.122.70:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 238ms, Maximum = 240ms, Average = 238ms

C:\WINDOWS>ping -l 190 yahoo.com

Pinging yahoo.com [209.191.122.70] with 190 bytes of data:

Reply from 209.191.122.70: bytes=190 time=215ms TTL=49
Reply from 209.191.122.70: bytes=190 time=213ms TTL=49
Reply from 209.191.122.70: bytes=190 time=214ms TTL=49
Reply from 209.191.122.70: bytes=190 time=215ms TTL=49

Ping statistics for 209.191.122.70:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 213ms, Maximum = 215ms, Average = 214ms

C:\WINDOWS>ping -l 1450 google.com

Pinging google.com [74.125.39.106] with 1450 bytes of data:

Reply from 74.125.39.106: bytes=64 (sent 1450) time=103ms TTL=53
Reply from 74.125.39.106: bytes=64 (sent 1450) time=101ms TTL=53
Reply from 74.125.39.106: bytes=64 (sent 1450) time=102ms TTL=53
Reply from 74.125.39.106: bytes=64 (sent 1450) time=101ms TTL=53

Ping statistics for 74.125.39.106:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 101ms, Maximum = 103ms, Average = 101ms

C:\WINDOWS>ping -l 190 google.com

Pinging google.com [74.125.39.106] with 190 bytes of data:

Reply from 74.125.39.106: bytes=64 (sent 190) time=79ms TTL=53
Reply from 74.125.39.106: bytes=64 (sent 190) time=80ms TTL=53
Reply from 74.125.39.106: bytes=64 (sent 190) time=80ms TTL=53
Reply from 74.125.39.106: bytes=64 (sent 190) time=80ms TTL=53

Ping statistics for 74.125.39.106:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 79ms, Maximum = 80ms, Average = 79ms

C:\WINDOWS>


In my view, the latency is related to your physical distance from the peers.

Only when the channel isn't saturated - in that condition the added time is negligible, especially on fat channels. If the channel is really saturated, the difference can be quite large, and may be proportional to packet size (when the shaper becomes the bottleneck).

Also, congestion control that tries to hold a fixed delay, e.g. 50 ms, isn't good for clients that have an Ethernet connection to the internet or to some IX - I have country-wide pings of 6-20 ms, and pings to some other countries of up to 300 ms. IMHO it would be better to detect congestion by measuring the difference between the RTTs of long and short packets.
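
As a sketch of that idea (illustrative only), take the yahoo.com numbers from the pings above: if the extra RTT of a big packet over a small one is pure serialization time along the path, the gap yields a rough rate estimate, and a saturated shaper would make that gap grow:

/* Packet-pair style estimate from the ping data quoted earlier.
 * This lumps together the serialization delay of every hop, so it is
 * only a rough signal - but it needs no fixed delay target. */
#include <stdio.h>

int main(void)
{
    double size_big = 1450.0, size_small = 190.0; /* bytes   */
    double rtt_big  = 0.238,  rtt_small  = 0.214; /* seconds */

    double rate_bps = (size_big - size_small) * 8.0 / (rtt_big - rtt_small);
    printf("cumulative serialization rate: ~%.0f kbit/s\n", rate_bps / 1000.0);
    /* (1450-190)*8 / 0.024 s = ~420 kbit/s */
    return 0;
}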


The gross latency isn't really interesting. The uTP specification uses the term *delay* to describe the difference between the most recent latency and the two-minute sliding minimum latency.

Here is how this is calculated:

Packet 1 is sent out and timestamped at t_1.

Packet 1 is received, and its timestamp (t_1) is subtracted from the current time of the recipient to produce a time differential (dt_1).

The time differential (dt_1) is returned to the sender.

At the sender, the base_delay is calculated by taking the smallest differential encountered in the last two minutes - let's call this dt_min

base_delay = dt_min = minimum( dt_2minutes_ago, ... , dt_n )

Every time we receive a packet from our counterparty, we receive a calculation of latency (dt_n) for the last packet that they received.

The *delay* of that packet is then calculated by computing the delay difference.

delay_n = ( dt_n - dt_min )

What this delay_n variable represents, then, is the difference between the latency of sending the last packet and the shortest latency that we have seen in any packet that we have sent out over the last two minutes.
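
In code, the bookkeeping might look something like this (a minimal sketch of the calculation just described; it collapses the two-minute sliding minimum into a single expiring value, which a real implementation would track per interval; this is not the actual uTP source):

#include <stdint.h>

#define BASE_WINDOW_USEC 120000000u /* two-minute sliding window */

static uint32_t dt_min;      /* base_delay: smallest differential seen */
static uint32_t dt_min_time; /* when dt_min was last (re)captured      */

/* Called for every dt_n echoed back by the other side; returns delay_n. */
uint32_t update_delay(uint32_t dt_n, uint32_t now_usec)
{
    /* Expire the old minimum after two minutes so clock drift and route
     * changes don't pin base_delay forever. */
    if (now_usec - dt_min_time > BASE_WINDOW_USEC || dt_n < dt_min) {
        dt_min = dt_n;
        dt_min_time = now_usec;
    }
    return dt_n - dt_min; /* delay_n = dt_n - dt_min */
}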

In the uTP specification, then, they set a low water mark for the *tolerance* of the protocol. This low water mark (or target delay) is given as 100ms in the specification. It appears, although it is not completely clear, that if delay_n comes in below this low water mark, nothing is changed. If, however, a packet shows a delay greater than this "somewhat arbitrary" low water mark, then the allowable in-flight window is shrunk, decreasing the number of bytes allowed to be in transit at any given time. There is no point in going into the details of how the bandwidth is adjusted here; the point is that once a delay is determined to surpass this low water mark, the send bandwidth is throttled down.
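
The adjustment itself is roughly a linear controller around that target. Something in this spirit (my paraphrase, with illustrative names and constants; the real client also scales the gain by bytes acked, among other things):

/* Scale the congestion window by how far delay_n is from the target.
 * off_target is +1 with no queuing delay, 0 at the target, and negative
 * beyond it - so the window shrinks once delay exceeds 100 ms. */
#define TARGET_DELAY_USEC 100000.0
#define MAX_WINDOW_GROWTH 3000.0 /* assumed max bytes added per round trip */

double adjust_window(double cur_window_bytes, double delay_n_usec)
{
    double off_target = (TARGET_DELAY_USEC - delay_n_usec) / TARGET_DELAY_USEC;
    double next = cur_window_bytes + MAX_WINDOW_GROWTH * off_target;
    return next > 0.0 ? next : 0.0;
}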

In no way does any of this require modifying packet size. In fact, modifying packet size will change the latency of an otherwise perfectly ideal connection, making many of the variables less meaningful (such as base_delay and the low water mark of 100ms).

It seems to me that using the MTU as the packet size and setting the low-water mark by doing a statistical analysis of *NORMAL FLUCTUATIONS OF LATENCY* would be the most rational way to implement this protocol. Some connections, such as noisy wifi connections, may vary more than 100ms in uncongested conditions. The correct low-water mark (target delay) for such connections should be determined based upon analysis of latency statistics.

Even when we have an appropriate target_delay (low water mark) for a network, doing straight magnitude comparisons is somewhat naive, if we really are interested in congestion. There is much better information in examining the acceleration of the delay than the magnitude of delay. In congestive situations, the delay will be increasing. In non-congestive situations, it will be somewhat randomly fluctuating. In situations where congestion is vanishing, the delay will be decreasing.

If we used an acceleration approach, then the algorithm might look something a bit more like the following:

We still calculate the delay in the same way, so for each packet, we expect to receive from the recipient a dt_n, allowing us to calculate:

delay_n = ( dt_n - dt_min )

But, instead of comparing this directly to target_delay=100ms, we instead first check on acceleration:

delay_velocity_n = delay_n - delay_(n-1)

delay_acceleration_n = delay_velocity_n - delay_velocity_(n-1)

/// Some C pseudocode follows

#include <math.h>
#include <sys/time.h>

// First, let's define the halflife of the statistical fluctuation to be
// 120,000,000 microseconds (2 minutes)
#define HALFLIFE 120000000.0

// n is the current packet, n-1 the previous packet, etc. The two variables
// below are per-connection state (instance variables or struct members of
// some sort); both can start at 0.
static double statistical_maximum_fluctuation;
static double maxtime;

extern void throttle_packet_rate_down(void); // defined elsewhere

// Get the current time in microseconds
static double curtime_now(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1e6 + tv.tv_usec;
}

// Called once per acknowledged packet with delay_n, delay_velocity_n and the
// last two accelerations, computed as defined above.
void on_delay_sample(double delay_n, double delay_velocity_n,
                     double delay_accel_n, double delay_accel_prev)
{
    double curtime = curtime_now();

    // halflife decay of the statistical fluctuation creates our comparison
    // threshold
    double threshold_fluctuation = statistical_maximum_fluctuation
                                 * pow(0.5, (curtime - maxtime) / HALFLIFE);

    // If acceleration of delay occurs over four packets, we have a trend...
    // Regardless of delay, we should throttle.
    if (delay_accel_n > 0 && delay_accel_prev > 0) {
        throttle_packet_rate_down();
    }
    // Only update our statistical fluctuation on delay downticks, and only
    // when it is greater than our decayed value (this is what the decay is
    // for: to allow new, lower ceilings to replace high ones over time).
    else if (delay_velocity_n < 0 && delay_n > threshold_fluctuation) {
        statistical_maximum_fluctuation = delay_n;
        maxtime = curtime;
    }
    // We didn't spot an acceleration trend, but the delay exceeded our
    // non-decayed ceiling, so throttle. This is what the existing code does,
    // but with a captured statistical value instead of the arbitrary
    // delay_target of 100ms.
    else if (delay_n > statistical_maximum_fluctuation) {
        throttle_packet_rate_down();
    }
    // else: leave everything the way it is
}

EDIT TO ADD: A couple of optimizations.

(A) Out-of-order packets and all data that is carried by them (and their predecessors) may be safely ignored as indicators of congestion (so long as the out-of-order packet has a delay less than the statistical_maximum_fluctuation). If we receive an out-of-order packet, it means that the delay is decreasing (a more recent packet has gone round trip faster than a previous packet), so we can attribute the out-of-orderness to something other than congestion.

(B) Geesh... I had a real nice optimization in my head and forgot what it was.... I'll put it here when I remember what I was thinking.

This bit of pseudocode, above, will most likely eliminate the sawtooth patterns of the current uTP protocol as it attempts to find the target delay, because that target is arbitrary and artificial, with no real meaning, and the magnitude-based delay conditions create a strange-attractor race condition, pretty much guaranteeing that bandwidth will cycle endlessly around this mystical target, even under fairly normal operating conditions.

By using acceleration and captured statistical fluctuations to make determinations regarding bandwidth, we take the arbitrariness out of the algorithm. We get rid of the target delay and the modulations in packet size, and replace them with more robust math.

I am a believer in the utility of this protocol, but I think that a bit of statistical analysis and perhaps a touch of physics should be used alongside full packet sizes, rather than choosing arbitrary constants to optimize around, with artificial thresholds that we hope (pray?) will model every network environment we might encounter.


@NiTr0: Sure. My full quote is actually:

... it is also affected by the current congestion on your connection

It's <40 ms in-country for me too. The +20 ms IS proportional to the packet size in the non-congested case as well, and I believe the ~+15% latency difference will hold in a congested case too. But that's what tests are for. Let the devs check it while they finally test this release for any effective congestion relief...

@CorpusCallosum/Nitro:

It makes no sense to me to try and formulate anything (or, for that matter, re-size anything) when the application itself is capping/limiting the upload speed (as most uT users do, per its setup guide). When the actual rate is *at that limit*, there is no correlation between the real congestion of the line, the upload rate, and the measured delays (even delta-delays) between different-sized packets.

Only if you set an unlimited upload speed might it make some sense. And I wouldn't like uT to have an auto-unlimited uTP rate. At least not yet... :P


I just did a big edit to my previous post, and threw in some code. Curious about your thoughts.

It makes no sense to me to try and formulate anything (or, for that matter, re-size anything) when the application itself is capping/limiting the upload speed (as most uT users do, per its setup guide). When the actual rate is *at that limit*, there is no correlation between the real congestion of the line, the upload rate, and the measured delays (even delta-delays) between different-sized packets.

Yes, that is right. If packet size is being played with, then the delay calculations will be wrong.

Hence, packet size needs to be constant.

Furthermore, an arbitrary target delay is a foolish concept. One needs to calculate some variant of delay fluctuation and use that as a threshold for throttling. My pseudocode does this in a very simplified way, above.


I just did a big edit to my previous post, and threw in some code. Curious about your thoughts.
Switeck wrote:

...Likewise, rafi is not a uTorrent/BitTorrent developer...at least not any more than tim716 is.

He is right. You better wait for a dev to respond ... :)


But if there are bunch of packets in shaper queue or in adapter/modem buffer

uTorrent uses a few tricks with uTP monitoring/sendrates/packetsizes to prevent filling the modem buffer, so those resulting conditions shouldn't happen.

Yes, I am using the term *delay* to represent the time quanta between sending packets. Think of it as the 1st hop latency, since uTorrent doesn't care about/can't control the latency of an unrelated individual ip-to-ip connection end-to-end. uTorrent tries to keep your line responsive by not allowing the modem buffer to fill up.

The large 3-packet window was probably chosen firstly because of granularity issues with single packets. It averages out background noise and increases the measuring scale large enough that meaningful time differences can be observed.


uTorrent uses a few tricks with uTP monitoring/sendrates/packetsizes to prevent filling the modem buffer, so those resulting conditions shouldn't happen.

If the modem buffer holds more than 100ms worth of data (and they all will), then this protocol will never fill it. No tricks are needed. The protocol sees the deltas between packet latencies and refuses to send data when that latency rises to more than 100ms above the measured minimum. There is absolutely no reason (or excuse) for lowering packet sizes.

Yes, I am using the term *delay* to represent the time quanta between sending packets.

This is not the way this word is being used in the protocol specification, so I think it's dangerous to use it that way on the forum. Delay between sending packets controls packet-rate and bandwidth. The "delay" that is measured and used to make decisions in the protocol is the difference between the latency of consecutively sent packets. These are very different concepts and completely unrelated to one another.

Think of it as the 1st hop latency, since uTorrent doesn't care about/can't control the latency of an unrelated individual ip-to-ip connection end-to-end. uTorrent tries to keep your line responsive by not allowing the modem buffer to fill up.

Well, interestingly enough, the protocol will optimize its use of buffers all along its path, not just between the computer and the modem. I think that is one of the most intriguing things about the protocol -- BUT, and this is a big but, the traffic shapers along that path don't understand uTP (yet), so it is unclear exactly how they will react to uTP's throttling system. I suspect that in many cases we are going to see a bit of "dog chasing tail" as the shapers attempt to shape traffic that is shaping itself around the behavior of the shapers. At least until this becomes mainstream.

The large 3-packet window was probably chosen firstly because of granularity issues with single packets. It averages out background noise and increases the measuring scale large enough that meaningful time differences can be observed.

I have seen reference to this before. Is the 100ms supposed to be the target delay for one packet, the average of three packets, or the cumulative delay target for three packets?

I have pulled the source code for uTP and am digging through it. How different is this source from what is in uTorrent today?


Hey guys, we have a new build of uTorrent out now. 2.0.3 has a lot of packet-size tweaks that should make it much more likely to use a packet size close to the MTU and reduce the PPS. It also increases the minimum packet size from 150 to 300 bytes (since 150 is just ludicrously low), to improve the worst possible case when the client goes to very small packet sizes. Still, it should only rarely go to small packets now, unlike before.

Could you please test it to see if the PPS has been reduced with this version? Our testing shows that it did.

If you're from one of the Russian ISPs mentioned here, we can autoupdate all your users to the 2.0.3 beta if you give us your IP ranges.

