
About µTP: Congestion Control


serveh


Hi guys,

I'm implementing the congestion control mechanism used in µTP (LEDBAT) and I encountered some problems.

As you know, LEDBAT uses delay as its sign of congestion, as opposed to the packet loss used in TCP congestion control. So if the one-way delay exceeds a TARGET, we treat it as congestion and decrease our sending rate (congestion window). Here is what the LEDBAT specification says:

off_target = TARGET - queuing_delay + random_input()
cwnd += GAIN * off_target / cwnd
# flight_size() is the amount of currently not acked data.
max_allowed_cwnd = ALLOWED_INCREASE + TETHER * flight_size()
cwnd = min(cwnd, max_allowed_cwnd)

And here are the values the spec suggests for the parameters:

TARGET = 100;
GAIN = 1.0;
ALLOWED_INCREASE = 1.0;
TETHER = 1.5;

Considering these values, if flight_size becomes 0 at some point, max_allowed_cwnd becomes 1.0, so whatever the calculated cwnd is, the final cwnd will be 1.0 (the minimum window a TCP flow can have while it exists!). Even worse, once cwnd becomes 1.0, it stays 1.0 forever: cwnd = 1.0 means we send just one packet, so on receiving its ack, flight_size is 0 again and max_allowed_cwnd is 1.0 again!
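Here is a small sketch reproducing the trap (my own illustration, not the spec's code: it assumes cwnd is counted in whole packets and queuing_delay stays below TARGET, and it omits random_input()):

```python
# Old-draft LEDBAT update with the spec's suggested parameters.
TARGET = 100.0            # target queuing delay (ms)
GAIN = 1.0
ALLOWED_INCREASE = 1.0
TETHER = 1.5

def update_cwnd(cwnd, queuing_delay, flight_size):
    off_target = TARGET - queuing_delay          # random_input() omitted
    cwnd += GAIN * off_target / cwnd             # computed increase
    max_allowed_cwnd = ALLOWED_INCREASE + TETHER * flight_size
    return min(cwnd, max_allowed_cwnd)           # clamp

# Start with a healthy window, then let all in-flight data get acked once:
cwnd = 10.0
cwnd = update_cwnd(cwnd, queuing_delay=10.0, flight_size=0)
print(cwnd)  # 1.0 -- clamped, no matter how large the computed increase was

# From now on only one packet can be in flight, so every later update
# also sees flight_size == 0 and re-clamps the window to 1.0:
for _ in range(50):
    cwnd = update_cwnd(cwnd, queuing_delay=10.0, flight_size=0)
print(cwnd)  # still 1.0
```

So under this reading the window never recovers once the pipe drains, which is exactly the problem.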

Please tell me what I'm not considering.


Yes, cwnd should be increased by the formula you mentioned, but since max_allowed_cwnd remains 1.0 and we take the minimum of the two, the new cwnd stays 1.0.

However, I found out that there was a bug in the old spec which is fixed in the new one, so cwnd does not remain 1.0 and can increase.
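For comparison, the later LEDBAT text (RFC 6817) keeps cwnd in bytes, normalizes off_target by TARGET, and clamps with max_allowed_cwnd = flightsize + ALLOWED_INCREASE * MSS, where flightsize is sampled while data is still outstanding. A hedged sketch along those lines (the MSS value and the concrete numbers are my own illustration):

```python
MSS = 1500                # bytes; illustrative value
TARGET = 100.0            # target queuing delay (ms)
GAIN = 1.0
ALLOWED_INCREASE = 1.0

def update_cwnd(cwnd, queuing_delay, bytes_newly_acked, flight_size):
    off_target = (TARGET - queuing_delay) / TARGET   # normalized off-target
    cwnd += GAIN * off_target * bytes_newly_acked * MSS / cwnd
    max_allowed_cwnd = flight_size + ALLOWED_INCREASE * MSS
    return min(cwnd, max_allowed_cwnd)

# Even starting from a one-MSS window, with that packet still counted as
# in flight the clamp is MSS + MSS = 2 * MSS, so the window can grow:
cwnd = float(MSS)
cwnd = update_cwnd(cwnd, queuing_delay=10.0,
                   bytes_newly_acked=MSS, flight_size=MSS)
print(cwnd > MSS)  # True
```

The key difference from the old draft is that the clamp is relative to the current flightsize plus one MSS of headroom, so a drained pipe no longer pins the window at its floor.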

Thank you for your help.


Archived

This topic is now archived and is closed to further replies.
