
Possible bad packet fragility? (last block per piece)


Bosmon


Posted

I am having problems uploading to one particular peer at the moment - if this is relevant, they are using Azureus 2306. They are quite happily chugging along through one "big piece" in increments of 16k, and then all of a sudden they make a request for the last block of a different piece, which is then rejected.

[06:31:41] 67.169.187.188 : [Azureus/2306 ]: Got Request: 457:65536->16384

[06:31:41] 67.169.187.188 : [Azureus/2306 ]: Sending Piece 457:49152->16384

[06:31:45] 67.169.187.188 : [Azureus/2306 ]: Sending Piece 457:65536->16384

[06:31:47] 67.169.187.188 : [Azureus/2306 ]: Got Request: 457:81920->16384

[06:31:51] 67.169.187.188 : [Azureus/2306 ]: Sending Piece 457:81920->16384

[06:31:53] 67.169.187.188 : [Azureus/2306 ]: Got Request: 457:98304->16384

[06:31:56] 67.169.187.188 : [Azureus/2306 ]: Sending Piece 457:98304->16384

[06:31:57] 67.169.187.188 : [Azureus/2306 ]: Got Bad Request in SS mode: 70:507904->16384

[06:31:57] 67.169.187.188 : [Azureus/2306 ]: Disconnect: Bad packet

This happens time after time.
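For reference, the index:begin->length triples in the log map directly onto the peer-wire "request" message from the base protocol: <len=13><id=6><piece index><begin offset><block length>, all integers 4-byte big-endian. A minimal, purely illustrative Python sketch of encoding and decoding one:

import struct

REQUEST_ID = 6  # peer-wire message id for "request"

def build_request(index: int, begin: int, length: int) -> bytes:
    # Encode a request such as the logged 457:65536->16384
    return struct.pack(">IBIII", 13, REQUEST_ID, index, begin, length)

def parse_request(payload: bytes):
    # Decode the 12-byte payload that follows the message id byte
    return struct.unpack(">III", payload)

print(parse_request(build_request(457, 65536, 16384)[5:]))  # (457, 65536, 16384)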

I happen to know for a fact that piece 70 is one I have advertised - when I was failing to send that one earlier, the "bad leap" was to piece 136:

[06:25:03] 67.169.187.188 : [Azureus/2306 ]: Sending Piece 70:491520->16384

[06:25:05] 67.169.187.188 : [Azureus/2306 ]: Got Request: 70:507904->16384

[06:25:09] 67.169.187.188 : [Azureus/2306 ]: Sending Piece 70:507904->16384

[06:25:11] 67.169.187.188 : [Azureus/2306 ]: Got Bad Request in SS mode: 136:507904->16384

[06:25:11] 67.169.187.188 : [Azureus/2306 ]: Disconnect: Bad packet

I'm not sure at whose end the problem is, but it is killing performance at the moment - could it be looked into?

FYI, this torrent is 1200 x 512K pieces.
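Working those numbers through (a quick illustrative Python calculation, assuming the 512 KiB piece size and 16 KiB request size shown in the log):

PIECE_SIZE = 512 * 1024   # 512 KiB pieces, per the torrent described above
BLOCK_SIZE = 16 * 1024    # 16 KiB request granularity seen in the log

blocks_per_piece = PIECE_SIZE // BLOCK_SIZE              # 32
last_block_offset = (blocks_per_piece - 1) * BLOCK_SIZE

print(blocks_per_piece, last_block_offset)               # 32 507904

So 507904 is exactly the start of the last 16 KiB block of a piece, matching both rejected requests above (70:507904 and 136:507904).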

Cheers,

Boz.

Posted

No no - as you can see, I clearly have the piece, since I advertised it in the prior session. I have even previously started to send it! In any case, I am super-seeding, so I have the whole file.

As far as I can see, this problem only occurs in super-seed mode, so I suspect a bug specific to that mode. I think uTorrent should be prepared to send *any* piece of the file in this mode rather than disconnect, since a client may well not be operating with the semantics that I am a new client after a reconnect, and may be working from a previously advertised piece-set. Anyway, I presume I'm right in saying that once a piece has been advertised, even in SS mode it cannot be "gone back on" - I cannot subsequently claim not to have it!
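To make that concrete, here is a minimal, purely illustrative Python sketch of the more tolerant check being argued for (the SuperSeedPeer class and its names are invented for the example - this is not uTorrent's actual code): remember every piece ever advertised to the peer and ignore, rather than disconnect on, requests outside the current offer.

class SuperSeedPeer:
    def __init__(self):
        self.currently_offered = set()   # pieces offered in the current super-seed round
        self.ever_advertised = set()     # every piece ever HAVE'd to this peer

    def advertise(self, index):
        self.currently_offered.add(index)
        self.ever_advertised.add(index)

    def on_request(self, index, begin, length):
        if index in self.currently_offered:
            return "send"      # the normal super-seed path
        if index in self.ever_advertised:
            return "send"      # e.g. piece 70 above: advertised earlier, so honour it
        return "ignore"        # unknown piece: drop the request, keep the connection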

Boz.

Posted

Well, I would have thought a client could be forgiven for relying on this invariant - after all, there is no way to "unadvertise" a piece. Anyway, given that the installed Az client base is enormous and the developers are as stubborn as hell, how about being generous in this instance :P With this problem, a SS session with an Az client typically lasts around 30 seconds...

I seem to see other problems with the uTorrent super-seed implementation anyway - for example I have now sent 150% of this file in SS mode and STILL have not managed to finish it. With other clients I typically find excess margins of closer to 5-10%.

Cheers,

Boz.

Posted

It's actually easy to unadvertise a piece, by not advertising the piece again when you make a completely new connection to the peer. :P
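For context, that works because each new connection starts from a fresh bitfield: a super-seeder typically opens with an all-zero bitfield and then doles pieces out one at a time with HAVE messages, so nothing advertised on an earlier connection carries over. A minimal illustrative Python sketch of those two standard peer-wire messages:

import struct

def empty_bitfield(num_pieces: int) -> bytes:
    # bitfield: <len=1+X><id=5><bitfield bytes>; all-zero bits = "I have nothing yet"
    payload = bytes((num_pieces + 7) // 8)
    return struct.pack(">IB", 1 + len(payload), 5) + payload

def have(index: int) -> bytes:
    # have: <len=5><id=4><piece index>
    return struct.pack(">IBI", 5, 4, index)

# On a reconnect only the new HAVEs count, so piece 70 from the earlier
# session is no longer advertised unless have(70) is sent again.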

And the 150% thing is strange - probably someone disconnected. I've super-seeded and had excellent results; it only took maybe 110%.

Posted

Back to the original poster's post... it's one thing to unadvertise a piece, but it's another to start disconnecting clients who make requests for pieces you previously advertised. Isn't there some "not found/unavailable" message µTorrent could return instead?

As-is, if the original poster is correct, µTorrent in super-seed mode is simply acting as a hostile client towards Azureus.
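For what it's worth, such a message does exist: the Fast Extension (BEP 6) defines a "reject request" message carrying the same index/begin/length triple as the refused request, which would let a super-seeder decline a block without dropping the connection - though it only applies if both peers advertised Fast Extension support in the handshake. A minimal illustrative Python sketch:

import struct

REJECT_REQUEST_ID = 16   # Fast Extension (BEP 6) message id for "reject request"

def reject_request(index: int, begin: int, length: int) -> bytes:
    # Same wire layout as a request: <len=13><id=16><index><begin><length>
    return struct.pack(">IBIII", 13, REJECT_REQUEST_ID, index, begin, length)

# e.g. instead of "Disconnect: Bad packet" for 70:507904->16384:
#   send(reject_request(70, 507904, 16384))   # send() is hypothetical here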

Posted

Is this the reason why lately the last packet isn't transferred and you basically get stuck at 99.99% of the file, unable to finish until some random later time? Azureus 2.3.0.6 is widely used, so I'm guessing this bug might be more widespread now.

