
Torrent-friendly ISPs frown at uTorrent and uTP


tim716


So it isn't strange that people preferred to block the source now instead of waiting for the developers to do it. Why nobody reported this bug earlier, I don't know.

I would just imagine that these things would happen in parallel; we've had a number of releases since February.

About congestion control, I have a question: why does uTorrent start to decrease the packet size instead of increasing the interval between packets when congestion is detected?

As I explained in my earlier post, the packet size is a function of the aggregate upload rate, which is only indirectly dependent on congestion. The congestion (or rather the one-way delays we measure) directly controls the congestion window, and the number of packets in flight will decrease immediately when there is a spike in delay. This means that packets are sent less frequently. As a secondary effect, on a much larger time scale, the upload rate decreases and the packet size is dropped. The former happens on RTT timescales (say 200 ms); the latter happens at 10-second intervals.

The reason the packet size is decreased at low transfer rates is that low transfer rates (at the target delay of 100 ms) indicate that the serialization delay of packets is significant. If we used full MTU-sized packets at all transfer rates, the congestion mechanism would break down for slow connections, since a single packet could induce a delay close to the target just by being sent. This is the fundamental reason why we alter the packet sizes.
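To put some example numbers on that (just an illustration with made-up link speeds; only the 100 ms target is real):

```cpp
#include <cstdio>

int main() {
    const double target_delay_s = 0.100;                 // the 100 ms target from the post
    const int packet_sizes[] = {300, 700, 1400};         // bytes, example sizes
    const double uplink_rates[] = {4000.0, 16000.0, 64000.0}; // bytes/sec, example uplinks

    for (double rate : uplink_rates) {
        for (int size : packet_sizes) {
            double serialization_s = size / rate;        // time to push one packet onto the wire
            std::printf("uplink %6.0f B/s, packet %4d B: %6.1f ms%s\n",
                        rate, size, serialization_s * 1000.0,
                        serialization_s > target_delay_s
                            ? "  <-- exceeds the target by itself" : "");
        }
    }
    return 0;
}
```

On a 4000 B/s uplink, a single 1400-byte packet serializes in 350 ms, well past the 100 ms target before any queueing even happens.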

This will kill SOHO routers (the sequence is simple: the router becomes 'saturated' by long packets; uTorrent starts to decrease the packet size, which raises the packet rate, and the router dies; all peers disconnect, the router comes back to life, uTorrent restarts its connection sequence, and the situation repeats), and it will cause saw-like traffic and load for ISPs.

I see that this is a problem, and I don't think there is a trivial solution to it. I would very much welcome suggestions on how it could be solved, or how it could be hacked in the near-term while coming up with a long-term solution.

IMHO it would be good to decrease the packet size only when there are high losses of long packets (for example, heavy losses with 1400-byte packets but much lower losses at 700 bytes at the same bps), and to control the flow rate by changing the interval between packets. 'Congestion' means not only a bytes-per-second link overflow, but also a packets-per-second overflow of the hardware, which is exactly what this congestion control algorithm will kill.

The send rate is controlled through a congestion window. uTP's steady state is no loss, which means packet sizes would essentially never change, or only change rarely, when the network was congested relatively badly. It also would not solve the problem of not inducing unreasonable delays on slow networks. Another potential route would be to adjust the target delay depending on the connection, but that's very flaky and wouldn't interact well across sockets or instances of uTorrent on the same network (if they were to disagree on which target to use).

Also, since you mentioned laboratory testing: have you checked how uTP loads a software router compared to TCP?

No, we have not run any such tests. To a very large degree, we rely on reports from our users, including you :)

ps. I would much rather discuss these solutions over email, as this forum interface is quite clunky.


>I'm not sure what you call those posters (ISPs/SOHO-router-owners/low-bandwidth-subscribers/me...) reporting the issues here for ages. I guess you simply do not have the time to read those *requests* and/or Firon keeps you in the dark. But blaming the users for not asking you for help? That's just unfair :(

Well, I guess I am guilty of not following the forums very carefully. It seems like there are better ways of reporting issues this serious, though. Like finding an email address on one of BitTorrent's sites (which is probably way harder than it should be).

Your implementation has issues, and you guys/arvid just do nothing to resolve them. All in the name of congestion control. At least give a link to a test that demonstrates this with 2.0.2!

Unfortunately we have to prioritize what we work on. I'm not sure which issue you refer to, but I'm guessing it is the one you reported about the packet size depending on the upload rate even when there was an upload rate limit. I have punted on this for 2 reasons (I believe I've told you this in private, but I might as well repeat it in the forum):

1. It seems to be somewhat of an edge case, since uTP (in theory) doesn't really need to be rate limited. Now, I'm sure we can improve the client in many ways to make this whole thing work better. If your rate limit reflects your capacity, this shouldn't be a problem anyway.

2. There is no easy solution to this problem. How can we estimate the serialization delay if we're not allowed to saturate the link? It's easy to say we should just use MTU sized packets at all times, but maybe people with slow uplinks are under-represented in these forums, because that's the trade-off.

> 1. Double the default packet size

You doubled the lousy 200-byte size? Great. But not good enough, as you can see. You still halved the optimal size, the MTU. Optimal at least speed-, overhead-, and PPS-wise.

Note that I'm talking about the default packet size, not the packet size. If you have a decent upload rate, you'll be at 1200-byte packets in 10 seconds and at MTU-sized packets in 20.

Now, are you saying the ratio for certain packet sizes is too steep? Should the packet sizes be bumped at lower rates?

If so, what do you think it should be? Currently it depends on the number of packets you can fit in a single target delay.

If you can fit more than 4 packets in one target delay, the packet size is doubled. If you can fit less than one packet per target delay, the packet size is halved.

The number of packets you can fit per target delay is calculated like this:

global_upload_rate (B/s) / packet_size (B) * target_delay (s)

Being able to send less than one packet per target delay (i.e. 100 ms) is very likely to screw with the delay measurements, and make them very unreliable and noisy. It's been quite a while since we ran tests with a very slow uplink; it might be useful to run another test with the current uTP code.
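In rough C++-like pseudocode, the rule looks something like this (an illustrative sketch, not the shipped code; the floor, ceiling, and starting size here are assumptions):

```cpp
#include <algorithm>
#include <cstdio>

const int    kMinPacketSize = 150;   // assumed floor, bytes
const int    kMaxPacketSize = 1400;  // roughly MTU minus headers, assumed
const double kTargetDelay   = 0.1;   // seconds (the 100 ms target)

int adjust_packet_size(int packet_size, double global_upload_rate /* B/s */) {
    // packets that fit in a single target delay, per the formula above
    double packets_per_target = global_upload_rate / packet_size * kTargetDelay;
    if (packets_per_target > 4.0)
        packet_size *= 2;            // plenty of headroom: grow toward MTU
    else if (packets_per_target < 1.0)
        packet_size /= 2;            // too slow: shrink to keep delay samples usable
    return std::clamp(packet_size, kMinPacketSize, kMaxPacketSize);
}

int main() {
    int size = 300;                               // assumed starting size
    for (int step = 1; step <= 4; ++step) {       // re-evaluated every 10 s, per the above
        size = adjust_packet_size(size, 50000.0); // e.g. 50 kB/s aggregate upload
        std::printf("after %2d s: %d byte packets\n", step * 10, size);
    }
    return 0;
}
```

With a 50 kB/s aggregate upload rate this steps 300 -> 600 -> 1200 -> 1400 bytes, which is the kind of ramp-up I described above.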

> 2. Make the packet size depend on the aggregate upload rate

Great too. It just doesn't work. I hope you did look at the link in my previous post? Here it is again:

http://forum.utorrent.com/viewtopic.php?id=76330

Interesting. There might be an issue in that currently only uTP traffic counts. It should probably take all traffic into account, including TCP. I'll fix this.

>made the packet sizes much higher for most connections. They still stay proportional to your transfer rate.

Sorry, not good enough. People with low upload bandwidth (with or without SOHO routers) do not need small UDP packets just because of difficulties with your UDP speed limiter, especially when TCP still uses MTU-sized packets. If it's a problem for you as you say, and my suggestion (which Greg thinks you do implement) is not a valid one, just do not limit UDP at all. And make that the default behavior (at least at low speeds), not an option.

I'm not quite following you here. Isn't it quite obvious that if it takes 100 ms just to serialize a packet in the modem, the packet size needs to be smaller or the target delay needs to be higher? If not, all of your measurements will be noise. Again, this might be very rare. My only piece of data is a number of test runs early in the development of uTP, using a slow DSL line: we found that for packet sizes much larger than 300 bytes, the noise in our delay measurements was significant, and it decreased the effectiveness of the congestion controller.

About not obeying the non-dynamic size setting http://forum.utorrent.com/viewtopic.php?pid=463418#p463418: again, because of speed limiter issues? Is this another bug that you've missed?

What is the bug here? The sending side determines the packet size. It's not possible to have a setting to ask the sender to use larger packet sizes.

Did you have a look at all at the issue of not pacing packets? Using 16K packets?

Pacing packets is very hard without sacrificing performance (throughput or CPU). I spent about a week or so testing a few different tricks, and there is some packet pacing code in there, but essentially it's not very effective; I couldn't get it to not impact throughput while it was active.

I don't get your link. Your screenshot there shows that TCP can send 16kB packets. I don't get how that's relevant. Are you showing that TCP has proper PMTU discovery and uTP doesn't? (which is correct).

The bottom line here is - PLEASE do maintain uTP and fix it. People here will gladly help you test it. No more options are required. You might not like it, but there is a need for MTU-sized packets as the default ("8") to resolve most of the issues in this thread and many others.

I suppose we could change the current mechanism to screw over people with slow connections in favor of overhead and ISPs. That might be a reasonable thing to do.

You have to set a goal for yourself of getting at least the same performance out of uTP as with TCP, in all aspects/connections (overhead, PPS, time to download, and so on).

You forgot the goal of minimizing added buffering delay, which is a competing goal with overhead in this context.

If you do not get it done soon, people will just vote against uTorrent/uTP and stop using it. And ISPs will vote against it by just banning uTP :(

Or maybe somebody will just fix it, since the code is now open.


All of these problems with congestion and excessive packet rates were already worked through in the '80s and early '90s, during TCP adoption. When the uTorrent developers ask us 'to prove' the problems with uTP, I must immediately, and without further ado, point them to the works of Van Jacobson (congestion control algorithms). To make a UDP-based protocol usable right now, it is necessary to implement some functions that already exist in TCP, especially congestion and flow control. Otherwise it will not work as a transport protocol, and it will cause disruption to network equipment.


If you think that you can start a war, and that the war can be won by you

Yes, I think we can grab a few well-paying clients from crappy ISPs who are still running SOHO-grade gear in their backbones.

We won't filter anything (our technological policies DIRECTLY prevent tech staff from doing this). We won't sue clients for the traffic they PAID for. You can do anything you like of course :)

P.S. Yes, I am on the staff of one of the Russian providers. And yes, I clearly understand ALL the complications of small UDP packets. But I still think the "filtering" solution is improper on both technical and political grounds.


If we used full MTU-sized packets at all transfer rates, the congestion mechanism would break down for slow connections, since a single packet could induce a delay close to the target just by being sent.

How do you actually measure delay, if the time between packet sends can affect the result? IMHO, if the clock skew between the hosts is measured at negotiation, the delay can be calculated invariantly from the packet timestamp and the measured clock skew.

Also, do you use global congestion control instead of per-session (per-host) congestion control? In the case of global control, trouble for one client could cause a chain reaction in the network...

I see that this is a problem, and I don't think there is a trivial solution to it. I would very much welcome suggestions on how it could be solved, or how it could be hacked in the near-term while coming up with a long-term solution.

One of the hacks I already proposed: make TCP's priority higher than uTP's if the speeds are equal during a test measurement. But this could theoretically cause UDP to be preferred during channel congestion, so it would be necessary to detect that situation and throttle uTP if TCP looks clamped (the speed of all TCP connections falls while the uTP speed rises)...

ps. I would much rather discuss these solutions over email, as this forum interface is quite clunky.

E-mail can be easier for communication, but it lacks one of the main forum features: multi-sided discussion. And I'm really not specialized in protocol development, so a multi-sided discussion may bring more effective ideas.

We won't filter anything (our technological policies DIRECTLY prevent tech staff from doing this). We won't sue clients for the traffic they PAID for.

If you provide guaranteed-speed (usually corporate) channels, you're right. If you provide multiplexed channels, the other users pay for the traffic consumed by a high-bandwidth client who tries to use his channel at 100%.

Also, it looks like you don't block outgoing SMTP or NetBIOS requests to/from the internet. That makes your network a perfect place for spambots and worms. Also, you don't filter spam on your mail server; etc...

Do you think that users will agree to pay more (due to additional hardware upgrades/maintenance, etc.) for features that are really not useful to them, or that even cause headaches for them? :)


>arvid: ...there are better ways of reporting issues this serious though... email ...

I know your email address. You not responding to my last 5 emails makes one think it is NOT that important in your eyes. OR that you forgot mine, OR your junk-mail filter caps my mail traffic for better congestion ...:P

>... packet size depending on the upload rate even when there was an upload rate limit.

>1.... If your rate limit reflects your capacity, this shouldn't be a problem anyway.

>2. ...There is no easy solution to this problem.

As you say, there is no solution. People will set the limit as they wish. You don't know the connection bandwidth (unless you ask them to specify it in the settings). This is one more good reason not to screw things up even more with small packets. MTU size is just fine.

>Note that I'm talking about the default packet size, not the packet size. If you have a decent upload rate...

> Are you saying the ratio for certain packet sizes is too steep? Should the packet sizes be bumped at lower rates?

I'm not sure what you mean by steep, but yes, they should be bumped.

>If so, what do you think it should be?

Always - MTU size. Just like in TCP.

> global_upload_rate (B/s) / packet_size (B) * target_delay (s)

> It's been quite a while since we ran tests with a very slow uplink; it might be useful to run another test with the current uTP code.

Not running tests on such a serious matter, and basing your 2.x code on theories? Until you do, why don't you trust the people here who do the tests/measurements the best they can?

>(2. Make the packet size depend on the aggregate upload rate)

>Interesting. .... I'll fix this.

Yeah, do that. Thanks.

>> just because difficulties with your UDP speed limiter. ...just do not limit UDP at all. And do it as default behavior (at least at low speeds) , and not as an option.

>I'm not quite following you here... ... the effectiveness of the congestion controller.

I've shown you like a thousand times that you produce large packets at the SAME (low) upload rate when there is no speed limiter, and tiny packets with the uT speed limiter on. You've claimed it's hard to pace large packets and still control speed. I've suggested/posted a way to do it.

Just compromise, and use MTU-sized packets at all times. I'm sure congestion will be OK with that, as it is with the large TCP packets that are there as well. At least you will get rid of all the other serious side-effect issues.

>>About not obeying the non-dynamic size setting ...

>What is the bug here? The sending side determines the packet size. It's not possible to have a setting to ask the sender to use larger packet sizes.

You didn't get it. This setting is NOT being obeyed on the sender side. That is a bug. But the main issue is not just to fix it, but to set the default to MTU/"8" and make it obey that.

>>Did you have a look at all at the issue of not pacing packets ?

>Pacing packets is very hard without sacrificing performance (throughput or CPU). I spent about a week or so testing a few different tricks, and there is some packet pacing code in there.

I've referred to that already (this is an implementation issue, and I've suggested a way to do it; I didn't see any comments of yours on that, so I can guess it's doable, as Greg said).

>>Using 16K packets?

>I don't get your link. Your screenshot there shows that TCP can send 16kB packets. I don't get how that's relevant. Are you showing that TCP has proper PMTU discovery and uTP doesn't? (which is correct).

You are right, it's not directly relevant. Just that in that thread Greg said he wanted the capture file, since it's probably a bug.

>I suppose we could change the current mechanism to screw over people with slow connections in favor of overhead and ISPs. That might be a reasonable thing to do.

Not sure I understood that. But yes, screw my congestion, in favor of performance. I still believe the congestion will be fine too (and I promise to email you the test results).

>You forgot the goal of minimizing added buffering delay, which is a competing goal with overhead in this context.

No, I didn't forget. I just gave up on it (plus it was never tested/proven anyhow) and lowered its priority to 0... It's called compromise, and being practical.

>Or maybe somebody will just fix it, since the code is now open.

Maybe. I still hope you will fix it. uT being a prototype/demo for that code does not encourage using it, though.


All of these problems with congestion and excessive packet rates were already worked through in the '80s and early '90s, during TCP adoption. When the uTorrent developers ask us 'to prove' the problems with uTP, I must immediately, and without further ado, point them to the works of Van Jacobson (congestion control algorithms). To make a UDP-based protocol usable right now, it is necessary to implement some functions that already exist in TCP, especially congestion and flow control. Otherwise it will not work as a transport protocol, and it will cause disruption to network equipment.

If you are under the impression that uTP doesn't have congestion control, please re-read the LEDBAT paper.


How do you actually measure delay, if the time between packet sends can affect the result? IMHO, if the clock skew between the hosts is measured at negotiation, the delay can be calculated invariantly from the packet timestamp and the measured clock skew.

Also, do you use global congestion control instead of per-session (per-host) congestion control? In the case of global control, trouble for one client could cause a chain reaction in the network...

We're not negotiating clock skew or synchronizing clocks. The mechanism is explained in the LEDBAT paper. It's pretty clever. Think of it as measuring the RTT, but without taking the return-path time into account.
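Roughly, the receiver side looks something like this (a minimal sketch of the idea, not our actual code; it ignores clock drift and timestamp wraparound, which a real implementation has to handle):

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>

// Each packet carries the sender's local timestamp; the receiver computes
// raw = its_own_clock - that_timestamp. The unknown clock offset between the
// two machines is constant, so subtracting the smallest raw sample ever seen
// (the "base delay") cancels it, leaving just the queueing delay.
struct DelayEstimator {
    uint32_t base_delay_us = UINT32_MAX;      // running minimum of raw samples

    uint32_t queueing_delay_us(uint32_t raw_sample_us) {
        base_delay_us = std::min(base_delay_us, raw_sample_us);
        return raw_sample_us - base_delay_us; // the clock offset cancels here
    }
};

int main() {
    DelayEstimator est;
    // fake raw samples: a constant 1,000,000 us clock offset plus a growing queue
    const uint32_t samples[] = {1000000, 1003000, 1012000, 1050000, 1002000};
    for (uint32_t s : samples)
        std::printf("raw %7u us -> queueing delay %5u us\n",
                    s, est.queueing_delay_us(s));
    return 0;
}
```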

The congestion control is per socket, but some state is shared. For instance, the packet size depends on the global transfer rate.

One of the hacks I already proposed: make TCP's priority higher than uTP's if the speeds are equal during a test measurement. But this could theoretically cause UDP to be preferred during channel congestion, so it would be necessary to detect that situation and throttle uTP if TCP looks clamped (the speed of all TCP connections falls while the uTP speed rises)...

The main purpose of uTP is not to increase transfer speeds; it's to fix the problem of delay. Preferring TCP would entirely defeat the purpose of uTP. That said, the fact that uTP doesn't add significant delay also means that utilization (and speed) can be increased.


As you say, there is no solution. People will set the limit as they wish. You don't know the connection bandwidth (unless you ask them to specify it in the settings). This is one more good reason not to screw things up even more with small packets. MTU size is just fine.

I'm not sure I understand your reasoning here. The statement: "MTU size is just fine" implies "it doesn't matter if uTP works on slow connections". I'm not sure I would agree with that.

Btw, for 2.0.3 I have a fix for this issue. The main reason I've been putting this off is that I had underestimated how many people really set a very low upload rate limit.

Not running tests on such a serious matter, and basing your 2.x code on theories?

Who said anything about theories? I just said that the tests were run a long time ago, and were poorly documented.

Until you do, why don't you trust the people here who do the tests/measurements the best they can?

I run tests like that all the time. I was specifically talking about testing a DSL or cable line that can only upload at very low speeds. The main metric of the test would be the delay measurement histogram, to see how much it spreads out.

Don't worry, I will re-run these tests to make sure the fixed packet-size logic still covers that case.

I've shown you like a thousand times that you produce large packets at the SAME (low) upload rate when there is no speed limiter, and tiny packets with the uT speed limiter on. You've claimed it's hard to pace large packets and still control speed. I've suggested/posted a way to do it.

Are you referring to your suggestion of pacing packets? I've posted a reply in that thread. It's not as easy as it might sound.

Just compromise, and use MTU-sized packets at all times. I'm sure congestion will be OK with that, as it is with the large TCP packets that are there as well. At least you will get rid of all the other serious side-effect issues.

That might be a reasonable solution. While working out this issue, it would make uTP break for people with slow links, but make it work for people with PPS restrictions.

>>About not obeying the non-dynamic size setting ...

>What is the bug here? The sending side determines the packet size. It's not possible to have a setting to ask the sender to use larger packet sizes.

You didn't get it. This setting is NOT being obeyed on the sender side. That is a bug. But the main issue is not just to fix it, but to set the default to MTU/"8" and make it obey that.

I just found a bug yesterday where the default packet size would always start at 1200 bytes. This has been in since 2.0.1, IIRC. Have you seen that as well?

Not sure I understood that. But yes, screw my congestion, in favor of performance. I still believe the congestion will be fine too (and I promise to email you the test results).

It's not your connection, and it's not congestion. The failure mode of running uTP over a very slow connection is that every single packet will induce a delay that makes the congestion controller back off. It's like being in time-out mode constantly, sending one packet per second. The trade-off is between different kinds of links; it affects different people.

>You forgot the goal of minimizing added buffering delay, which is a competing goal with overhead in this context.

No, I didn't forget. I just gave up on it (plus it was never tested/proven anyhow) and lowered its priority to 0... It's called compromise, and being practical.

Would you like me to post links to papers with studies on how LEDBAT works, showing that it works?


arvid wrote:

I'm not sure I understand your reasoning here. The statement: "MTU size is just fine" implies ...

It's me doubting how effective uTP is with congestion, mainly on slow connections or with an upload limit set. My reasoning was that a low upload limit does NOT necessarily mean you have a slow connection. A 20K UL limit on a 1M UL connection does not justify small packets. So uT had better not guess at all, and use the MTU size in such cases.

And now that you've set the calc_overhead default to false again in 2.0.2, it is even more common that people will set the UL limit low w/o affecting their download. I've sent Firon my suggestion on how to auto-adjust the UL limit if/when you decide to revert back to TRUE. I hope he passed it on to you.

Just compromise, and use MTU-sized packets...

That might be a reasonable solution.

That was my reasoning/compromise too - using MTU-sized packets as the default.

You said - it was broken and you've fixed it - great, it's about time... the above compromise will make sure of it... ;)

tests were run a long time ago, and were poorly documented

I'm sure you know that bugs have a way of being born with new releases. Regression tests should be run on new code, and 2.x is new.

I've shown you like a thousand times that you produce large packets...

Are you referring to your suggestion of pacing packets?

Not at all. I am pointing out (again) that there is an issue/bug where you behave differently (different packet size/overhead) at the SAME upload speed, depending on whether there is a uT internal UL limit or an external upload limit, both resulting in the same total speed.

Would you like me to post links to papers with studies on how LEDBAT works, showing that it works?

No. No papers. I would like you to demonstrate how well 2.0.2 and its uTP handle congestion and co-exist (give up?) with other running apps (like the test I showed, with a downloading application). And this also on a low UL bandwidth connection.

Don't worry, I will re-run...

I always worry... it's in my nature. Especially when it has to do with uTorrent/BitTorrent inc... ;)


We're not negotiating clock skew or synchronizing clocks. The mechanism is explained in the LEDBAT paper. It's pretty clever. Think of it as measuring the RTT, but without taking the return-path time into account.

Sorry for my bad English, I used the wrong term; I meant clock offset. But I still don't understand how an increased interval between packet sends can affect the measured trip time, if the packet carries a timestamp of when it was actually handed by uTorrent to the network API layer, and the receiver knows the clock offset between its local clock and the sender's clock.

The main purpose of uTP is not to increase transfer speeds; it's to fix the problem of delay. Preferring TCP would entirely defeat the purpose of uTP. That said, the fact that uTP doesn't add significant delay also means that utilization (and speed) can be increased.

There is also another side. Providers that offer non-guaranteed channels run traffic-prioritizing shapers on their borders (otherwise there is a big speed loss even at a small channel overcommit; a ~10-15% overcommit will produce really bad results on Speedtest, like 2-4 Mbit). They classify traffic to offer acceptable link quality for surfing/VoIP/online games even when their outbound channel is really saturated; these classes of traffic typically consume minor bandwidth compared to p2p, and typically use a single connection instead of p2p's hundreds of connections. So if, at saturation, the global speed were limited on a per-connection or per-'link' basis (a 'link' meaning a sender-receiver IP pair) without prioritizing, then due to p2p's nature the p2p users would consume all the bandwidth (for example, a quite possible situation: 200 'links' at 64 kbit each add up to ~12.8 Mbit), and the other users would have a headache (can you imagine, for example, a Skype video chat on 64 kbps?).

And, due to the nature of generic UDP and uTP traffic, it will be hard (or even impossible) to split the traffic into different classes using standard signature matching.

I (and other ISPs) have also noticed strange things with UDP shapers and small packets, but I haven't tested this deeply, so I can't assert that these troubles are really present.


If you are under the impression that uTP doesn't have congestion control, please re-read the LEDBAT paper.

I have found and read the following paper on LEDBAT: http://www.pam2010.ethz.ch/papers/full-length/4.pdf . It confirms that LEDBAT decreases the UDP segment size when it detects higher delays, unlike TCP, which always transmits segments of the maximum size.

What were the reasons for varying the segment size? Why did the congestion control community not implement this feature in TCP 20 years ago? I suspect the uTP devs did it with good intentions, maybe to change the transmit window size more smoothly than TCP does. However, the road to hell is paved with good intentions, especially when you do something without a proper scientific method. I think that decreasing the packet size is generally a bad idea, because it leads to inefficient use of bandwidth, and a router's performance is determined by the packet rate, not the bandwidth.

It would be better to disable the segment-size-changing feature in the next releases of uTP, at least until the developers prove the efficiency of their algorithms and discuss their results with people from the congestion control community. I also suggest they test uTP not only on slow DSL links, but also on 100 Mbit Ethernet links with a traffic shaper in the middle.


missed that one -

arvid wrote:

I just found a bug yesterday where the default packet size would always start at 1200 bytes. This has been in since 2.0.1, IIRC. Have you seen that as well?

Nope, I have it always set to "8"/MTU size (which should be the default!) and non-dynamic, and it starts up at 1480 as far as I can tell (until it starts dropping in size as it comes near the speed limit).

BTW, you should also look at the number of ~60-70 B packets, which looks very high and disproportionate to the number of data packets.


The DIR-300/NRU uses a powerful Ralink RT305x SoC (MIPS at 380 MHz), and I don't know what 'fast router' means to you if this one is 'slow'.

How "fast" a router is matters little if it's not stable due to bad firmware. Linksys and D-Link are both very guilty in this regard.

http://forum.utorrent.com/viewtopic.php?pid=460036#p460036

Maybe with 3rd party firmware, such routers run fine/great. But we get lots of reports of routers having problems...and most people don't even know about 3rd party router firmware.

This shows just how sorry many routers are:

http://www.smallnetbuilder.com/component/option,com_chart/Itemid,189/chart,124/

...They're having anywhere from minor to major errors before reaching the 200 connection limit of the test!


If you provide multiplexed channels, the other users pay for the traffic consumed by a high-bandwidth client who tries to use his channel at 100%.

We do not oversell that much; we constantly keep a hand on the pulse and adjust the backbone as necessary. This is the way to go. It may seem difficult at first, but it's the only way to keep the customer, and it becomes easier when more customers are there: the rate of traffic per user falls with the total number of users.

Also, PPS is almost completely a non-issue for us; we will add hardware where and when necessary (I have read about uTP problems on the forums, but we have never had negative anomalies in the network due to uTP). Yes, the PPS rate slowly grew by a factor of about 1.10-1.20, but that was not even worth looking into, because we are still below 70% hardware utilization. When it hits 70%, we will expand as necessary. If our clients want it that way (the uTP way, I mean)... well, they've paid for it. Just business.

Also, it looks like you don't block outgoing SMTP or NetBIOS requests to/from the internet

Completely true. We are not blocking clients; they have the right to do anything they want on the network, but they also bear full responsibility for what they do.

That makes your network a perfect place for spambots and worms

We provide Internet connectivity, and we do that just fine. We do not provide security or spam protection; that's outside the scope of our business. There are plenty of security and antispam companies that offer their own solutions.

Also, you don't filter spam on your mail server; etc

We filter incoming spam. As for outgoing: we don't provide any SMTP server for our clients; they just use remote SMTPs as they want to. You know, taking calls like "I can't connect to my corporate mail from home, you are blocking me, you fuckers" is funny only when you don't have many customers. After that it becomes annoying.

Do you think that users will agree to pay more (due to additional hardware upgrades/maintenance, etc.) for features that are really not useful to them, or that even cause headaches for them?

The headache is protocol filtering, for sure. For not filtering, we won't make them pay more. No filtering = less hardware load = more hardware and load predictability = fewer problems for us.


2.0.3 will have some tweaks to the behavior of dynamic packet sizing, and while we'll do our own testing internally, we'd like it if one of you guys at these ISPs could see whether it behaves any better than the previous version. I'll let you know once we have a beta.


This shows just how sorry many routers are:

...They're having anywhere from minor to major errors before reaching the 200 connection limit of the test!

These results were achieved with the first firmware releases, and I don't see the DIR-300 (especially the 'fresh' DIR-300/NRU) here.

Also, I haven't noticed any trouble with routers in small offices, even when there are some torrent leechers behind the SOHO router.

We do not oversell that much; we constantly keep a hand on the pulse and adjust the backbone as necessary. This is the way to go. It may seem difficult at first, but it's the only way to keep the customer, and it becomes easier when more customers are there: the rate of traffic per user falls with the total number of users.

The typical multiplexing ratio for providers is 4-8 (depending on the region and the users' knowledge/interests), and at these rates the channel isn't at its peak most of the time. And most users choose medium tariffs, whose prices are comparable to guaranteed channels, while the torrent maniacs choose expensive tariffs, which usually cost about twice as much but are up to 5-10-20 times faster.

So I don't think you will sell 10-20 Mbit channels at prices comparable to the price of your uplink; the oversellers will defeat you :) Otherwise, generic users at medium speeds will be paying for the fat channels of the enthusiasts.

A network that consists only of torrent maniacs who use their channel at 100%, 24/7, will die quickly ;)

Completely true. We are not blocking clients; they have the right to do anything they want on the network, but they also bear full responsibility for what they do.

As a result, your clients can lose the ability to send anything to many SMTP servers, because your IPs end up in the 'blacklists' of services like SpamCop... And they must run additional firewall software to avoid infection by a worm with a new 0-day exploit against NetBIOS...

As for outgoing: we don't provide any SMTP server for our clients; they just use remote SMTPs as they want to.

Non-anonymous SMTP relays really can't be used by spambots, and in practice they can't be a source of spam.

You know, taking calls like "I can't connect to my corporate mail from home, you are blocking me, you fuckers" is funny only when you don't have many customers. After that it becomes annoying.

You may be amazed, but there are enough methods to let the 'good' SMTP connections through (at least 99% of them) while blocking 99-100% of outgoing spam. Two of them: connection rate limiting, and a transparent SMTP proxy that blocks unauthenticated SMTP (i.e. server-to-server SMTP) and allows authenticated SMTP (99-100% of all client-to-server SMTP transactions).

No filtering = less hardware load = more hardware and load predictability = fewer problems for us.

Did you really predict February's +30-50% load from the arrival of uTP/uTorrent 2.0? Can you separate uTP traffic from other UDP in your border shaper, to guarantee gaming/VoIP/surfing quality during the weekend evening peak? Or do you prefer to pay for an additional 50-100 Mbit of outbound channel for that purpose?


Note that I'm talking about the default packet size, not the packet size. If you have a decent upload rate, you'll be at 1200-byte packets in 10 seconds and at MTU-sized packets in 20.

But what if I don't have a decent upload rate? :P

http://i5.fastpic.ru/big/2010/0530/72/2f12c5b30db8d8e4204f4de591a88972.png

If you can fit more than 4 packets in one target delay, the packet size is doubled. If you can fit less than one packet per target delay, the packet size is halved.

Maybe all the problems are because of this?

How does the mechanism for enlarging the packet size work if the upload channel is used to its maximum? ...Or doesn't it work? :)

If it doesn't work, then new connections will have outgoing packets of only 600 bytes (or less).

Also, PPS is almost completely a non-issue for us; we will add hardware where and when necessary (I have read about uTP problems on the forums, but we have never had negative anomalies in the network due to uTP). Yes, the PPS rate slowly grew by a factor of about 1.10-1.20, but that was not even worth looking into, because we are still below 70% hardware utilization. When it hits 70%, we will expand as necessary. If our clients want it that way (the uTP way, I mean)... well, they've paid for it. Just business.

What is your ISP's name? I want to be your client :)


Note that I'm talking about the default packet size, not the packet size. If you have a decent upload rate, you'll be at 1200-byte packets in 10 seconds and at MTU-sized packets in 20.

Now, are you saying the ratio for certain packet sizes is too steep? Should the packet sizes be bumped at lower rates?

If so, what do you think it should be? Currently it depends on the number of packets you can fit in a single target delay.

If you can fit more than 4 packets in one target delay, the packet size is doubled. If you can fit less than one packet per target delay, the packet size is halved.

The number of packets you can fit per target delay is calculated like this:

global_upload_rate (B/s) / packet_size (B) * target_delay (s)

Being able to send less than one packet per target delay (i.e. 100 ms) is very likely to screw with the delay measurements, and make them very unreliable and noisy. It's been quite a while since we ran tests with a very slow uplink; it might be useful to run another test with the current uTP code.

Okay, so it sounds like decreasing the packet size is advantageous for measuring delay, but once it is measured, it shouldn't be necessary to keep the packets small. Detecting that the delay has changed can be done regardless of packet size, once the delay has been measured. If the delay fluctuates out of tolerance, let the algorithm play with the packet size again to re-measure the delay. If it stays within tolerance, keep the packet size up and just estimate the delay based upon the ratio of the packet size used for the measurement to the larger size used for data transfers at quiescence.

Am I missing something here? Are you experiencing discontinuities in throughput or delay when packet-size is increased, before reaching the MTU? I can't see why that would happen. Obviously, once the delay is measured with a smaller packet-size, you would have to throttle the packet rate proportional to the increased size of the packets that will be used for bulk transfer, but that should be easy to calculate.

Essentially, what I am asking is, "If you must use smaller packets to measure delay, can't you restrict their usage to measuring and use MTU sized packets the rest of the time?"
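To make the idea concrete, here's a minimal sketch of the scheme I'm proposing (my own illustration, not anything uTP actually does; every name and threshold in it is made up):

```cpp
#include <cmath>
#include <cstdio>

// Sketch of the proposal: probe the delay with small packets, then switch to
// MTU-sized packets, and fall back to probing only when the observed delay
// drifts out of tolerance.
struct ProposedSizer {
    bool probing = true;
    double baseline_delay_ms = 0.0;               // delay learned while probing
    static constexpr double kToleranceMs = 15.0;  // made-up drift tolerance
    static constexpr int kProbeSize = 300;        // small measurement packets
    static constexpr int kBulkSize  = 1452;       // MTU minus IP/UDP headers

    int next_packet_size(double observed_delay_ms) {
        if (!probing &&
            std::fabs(observed_delay_ms - baseline_delay_ms) > kToleranceMs)
            probing = true;                        // delay drifted: re-measure
        if (probing) {
            baseline_delay_ms = observed_delay_ms; // refresh the baseline
            probing = false;                       // one sample suffices in this toy
            return kProbeSize;
        }
        return kBulkSize;                          // full-size bulk packets
    }
};

int main() {
    ProposedSizer sizer;
    const double delays_ms[] = {20, 22, 24, 55, 57};  // fake per-packet delay samples
    for (double d : delays_ms)
        std::printf("delay %4.0f ms -> next packet %d bytes\n",
                    d, sizer.next_packet_size(d));
    return 0;
}
```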


These results were achieved with the first firmware releases, and I don't see the DIR-300 (especially the 'fresh' DIR-300/NRU) here.

Also, I haven't noticed any trouble with routers in small offices, even when there are some torrent leechers behind the SOHO router.

The DIR-300 wasn't on the chart; it was too old. Almost nothing they tested was using first firmware releases. I pointed to the chart because of "similar, but NEWER" routers that also failed miserably. On the chart there was a D-Link Wireless N Router (DIR-615 [b2])...I thought it was close enough, but I guess not.

From here:

http://www.dd-wrt.com/wiki/index.php/Supported_Devices

Under DIR-300 it says "clone of DIR600 rev B1".

The DIR-615 D1 and D2 also use Ralink chipsets. Sadly, even the tested DIR-615 model wasn't the one using Ralink components. One has to wonder why D-Link would drop that chipset after only a revision or two. Cisco Linksys WRT160N v2 to v3 is another example. Maybe it was really good but just a bit too expensive? It really pisses me off that knowing the brand and name of a router isn't enough to tell me what chipset or how much RAM it has! (The Linksys WRT54G has to be the most notorious for this.)

The point is, we often see posts from people who either remove or replace their routers, and that alone seems to eliminate their uTorrent problems. It's already been hinted here that for ISPs, consumers using bad routers that load, overload, and fall over, only to repeat the cycle minutes later, are worse than a good router that maintains a high but steady load.

That upload is over 2 megabit/second and should be using MTU sized packets in <1 minute.

Decreasing the uTP packet size would allow a time-sensitive data burst (by an outside program or even networking lag further up the line) to slip other packets in between these smaller uTP packets without incurring >50 ms of delay. This breaks down below 0.5 megabit/sec upload, where even a 1500 byte packet takes ~22.9ms to send, so 3 in a row would take ~68.7ms, which already exceeds the desired 50ms "window" that uTorrent devs want to stay in or under.

I see no advantage in small uTP packets if uTorrent is also constantly sending max MTU sized TCP packets at the same time.

"Typical multiplexing ratio for providers is 4-8 (depending of region and users' knowledge/interests) - and at these rates channel isn't in peak for most time."

If that means a 4-8:1 contention ratio, that's far nicer than here! (USA, on Comcast.) For the ISPs I've done research on (Comcast especially, and a few ADSL services in the UK and Canada), the contention ratios were typically over 30:1...in some areas over 100:1. They claim they must throttle consumer services because of "bandwidth hogs", but at those ratios they are being pretty stingy. Higher contention ratios become even more severe with cable because of the tiny channel sizes. A 10:1 ratio is worse on a shared 10 mbit/sec channel than on a 100 or 1000 mbit/sec channel, simply because it takes far fewer customers using much of their connection to totally overload the shared bandwidth. An ADSL2+ DSLAM might have 100-500 customers on it, all sharing a 100 mbit/sec fiber link (or T-3 or OC-3 line) to the rest of the ISP. Once again, their combined max download (assuming 5-24 mbit/sec per ADSL line) dwarfs the total capacity of the fiber link.


Am I missing something here? Are you experiencing discontinuities in throughput or delay when packet-size is increased, before reaching the MTU? I can't see why that would happen. Obviously, once the delay is measured with a smaller packet-size, you would have to throttle the packet rate proportional to the increased size of the packets that will be used for bulk transfer, but that should be easy to calculate.

The congestion control relies on being able to have updated delay measurements for every packet.


You just don't know what you're talking about if you're talking about "crappy Russian ISPs". These ISPs were trying to be polite to torrent traffic and were strongly disappointed by the uTP bugs. Look at these pictures: http://www.netindex.com/quality/

Russia, and St. Petersburg within Russia, took the first places in QUALITY. And if you think you can provide the best quality by feeding hungry trolls, you are mistaken.


Am I missing something here? Are you experiencing discontinuities in throughput or delay when packet-size is increased, before reaching the MTU? I can't see why that would happen. Obviously, once the delay is measured with a smaller packet-size, you would have to throttle the packet rate proportional to the increased size of the packets that will be used for bulk transfer, but that should be easy to calculate.

The congestion control relies on being able to have updated delay measurements for every packet.

Yes, I know. What I am saying is that you can still get an accurate measurement of delay with MTU-sized packets. If the delay exceeds the uTP threshold, demanding that uTP throttle back, you use the standard uTP tactic of decreasing the packet size until it settles on a new rate, then bump the packet size back up and decrease the packet rate.

Always bump the packet size back up and decrease the packet rate whenever uTP "settles" on a new throughput target. Any change in delay will be just as apparent with MTU-sized packets as it is with <MTU-sized packets, but the overhead is reduced and the protocol will play nicer with the infrastructure.

I do see the utility in adjusting the packet size during delay measurement. But I should also point out that uTP could modify the packet rate instead of the packet size and achieve identical goals. Decreasing the packet rate should be equivalent to decreasing the packet size in attempting to lower delay. Increasing the packet rate until delay increases should act exactly the same as increasing the packet size until delay increases.

So why play with packet size at all, if packet rate is exactly as useful for determining delay, but has less overhead, optimizes throughput better, and plays more nicely with the infrastructure?


Decreasing packet rate is not the equivalent of decreasing packet size.

Case in point: 1 packet that's 1500 bytes in size versus 2 packets that are 750 bytes each. Both are 1500 bytes "long", yet the second one (2 packets) causes a delay only half as long if another packet can be slipped between them.

If you want to limit delay to 50 ms, then that's 1/20th of a second. 1500-byte packets 20 times a second is a minimum of 240 kilobits/second (~30 KB/sec) upload rate. Except the uTorrent devs wanted a delay of 50 ms for 3 packets in a row, so that needs a minimum upload speed of 3 times that -- 720 kilobits/second (~90 KB/sec). These are usable sustained upload rates, not peak bursts. The whole link probably needs closer to 1 megabit/second upload speed before it should consistently use max-MTU 1500-byte packet sizes.
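A quick sanity check of that arithmetic (just a throwaway calculation, assuming a 1500-byte MTU and the 50 ms budget):

```cpp
#include <cstdio>

int main() {
    const double packet_bits = 1500.0 * 8;  // one full-MTU packet
    const double window_s    = 0.050;       // the 50 ms delay budget
    for (int burst = 1; burst <= 3; ++burst) {
        // minimum sustained rate so 'burst' packets serialize inside the window
        double min_rate_bps = burst * packet_bits / window_s;
        std::printf("%d MTU packet(s) in 50 ms needs >= %3.0f kbit/s (~%2.0f KB/s)\n",
                    burst, min_rate_bps / 1000.0, min_rate_bps / 8000.0);
    }
    return 0;
}
```

That prints 240, 480, and 720 kbit/s for 1, 2, and 3 packets, matching the numbers above.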

Even that assumes a line that's otherwise nearly idle. If there's other internet traffic, or others sharing the same connection, then small uTP packets will sometimes be used even on a much faster line...such as while downloading at close to max linespeed.

This is just my opinion/guesses. :P

I am not a uTorrent/BitTorrent developer. I am only a uTorrent moderator. (I'm a big reason why you don't see this forum overrun with spam.) Likewise, rafi is not a uTorrent/BitTorrent developer...at least not any more than tim716 is. :P


Archived

This topic is now archived and is closed to further replies.

