Ban ip when hash fails X times / ratio check


status

i downloaded µtorrent about a week ago and love it, apart from these 2 things...

it wastes too much bandwidth on peers that just keep on sending fake pieces

right now, for example, i'm downloading a movie that's roughly:

81mb downloaded

108mb uploaded

41mb wasted (83 hash fails)

basically a third of what i downloaded is fake... seeing that those are 83 hash fails, this could have been prevented

if those ips were banned after 2 or 3 hash fails

the ratio check i'm talking about is for people that don't upload... i think we should have a "fixed" option

where we can choose the lowest ratio another ip can go to...

what's the point of me uploading 20mb to someone when he sends me 2 or 3mb? we should have the option to choose this ratio and the speed for these people...

hope to see this in a future version..

thank you

basically a third of what i downloaded is fake... seeing that those are 83 hash fails, this could have been prevented

if those ips were banned after 2 or 3 hash fails

This is not a bad idea, and actually the client already does an auto-ban, but perhaps not this granular...

what's the point of me uploading 20mb to someone when he sends me 2 or 3mb? we should have the option to choose this ratio and the speed for these people...

I have to disagree with this one. Adding this type of feature would have the effect on BitTorrent that most other networks have run into: over time, as new clients get written, sharing slowly approaches zero. It is enough to just have upload limits.

status, to answer your question, the point of uploading 20mb to someone is to actually share the file. There will always be a disparity between your upload and download, partly because you have different pieces than the other user. Look at your ratio, don't bother with your individual peer uploads. No need to micromanage ;) Besides, I'm sure if you look at your own downloads closely, your client has done this numerous times. One of my downloads has taken 300mb down and only 218k up to one particular user. But I don't feel bad about this, because I seed (read: share!) extensively.

More options is usually good, but in this case, it can lead to a further deterioration of the entire point of BitTorrent: to share files.

Cheers,

-Infi

basically a third of what i downloaded is fake... seeing that those are 83 hash fails, this could have been prevented

if those ips were banned after 2 or 3 hash fails

2 or 3 hash fails is way too little;

it should be more like 15-20 hash fails.

One problem: maybe 6 people have contributed 16k slices to the piece. Who caused the hash to fail? (That's one reason why I advocate a 256k max piece size.) If one person contributed the whole piece, 3 different hash fails in a row nets a 1-day ban, or even a listing in ipfilter. The client should not attempt the same piece from the same peer immediately, in case it's a NAT problem. A good hash resets the counter.
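The single-contributor rule above can be sketched in a few lines. This is only an illustration of the logic, not µTorrent internals; the names (`HashFailTracker`, `BAN_THRESHOLD`) are made up for the example.

```python
# Hypothetical sketch: 3 consecutive hash fails from a peer that
# supplied whole pieces earns a ban; a good hash resets the counter.
BAN_THRESHOLD = 3

class HashFailTracker:
    def __init__(self):
        self.consecutive_fails = {}  # peer ip -> consecutive fail count
        self.banned = set()

    def record_good_hash(self, ip):
        # A valid piece resets the counter, as suggested above.
        self.consecutive_fails[ip] = 0

    def record_hash_fail(self, ip):
        self.consecutive_fails[ip] = self.consecutive_fails.get(ip, 0) + 1
        if self.consecutive_fails[ip] >= BAN_THRESHOLD:
            self.banned.add(ip)

    def is_banned(self, ip):
        return ip in self.banned
```

The reset on a good hash is what keeps a peer with occasional NAT or line trouble from being banned for a one-off corrupt piece.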

It'd be nice if a client could report a hash fail to the peer, so it could recheck the piece, but the possibilities for abuse and DoS are endless. ^.~

I think there was a thread discussing this recently.

@Infi: I understand status' statement not as "don't share" but rather as "share with the honest clients", thus singling out leecher clients without the need for ID bans etc.

In general I'm all for that feature, since the idea of p2p is sharing, and if nobody shares there's nothing left to download either.

So I'd rather upload to someone who later on shares my upload with someone else.

The only problem I see is the definition of such a filter (?)

You would most likely catch a lot of "innocent" people in these filters.

As an example: there's a guy who uploads, say, at 30kb/sec to 4 peers... so not exactly what I would call a leecher... one client gets like 15kb/sec, one gets 8+, the third gets another 6+, which leaves me as #4 with a trickle between .1 and 1

So he would share like there was no tomorrow, but my filter would still treat him as a potential leecher... and that's why I am against these countermeasures as long as scenarios like this can't be resolved

-DG

something similar to this has been discussed previously, but the hash fails you get suggest either that your torrent is poisoned by some government faction, or that you're using some hardware (e.g. a router, switch or NIC) that's messing up your packet handling and thereby causing the above-mentioned hash fails.

infidel... what i'm really saying isn't stop sharing..

it's to share faster with the right people...

as you see, most clients (even µtorrent) have limits on simultaneous uploads, let's say 5 or 6 upload slots at the same time in µtorrent's case... even more if the full bandwidth isn't being used by those 5 or 6 (if they're on 56k connections or something...)

the point of bittorrent is to spread the rarest pieces of that torrent as a priority and then spread the file as fast as you can... choosing the fastest people... that's why you get faster download speeds if you upload more

about the ban, µtorrent would need to keep track of the people it downloads from

for example:

a piece is 256kb and it downloaded this piece from user1, user2 and user3, and the hash is 100% valid

then it downloaded another piece from user40, user12 and user50 and the hash failed (one of these 3 is sending you crap)... you can check who it is one by one...

in my theory, download another piece with each one of these users plus 2 valid ones that you already downloaded from before

for example user1, user2 and USER12, where USER12 is the only one being tested for hash fails... in case the hash fails...

test the same user again but with another 2 valid users that you downloaded previous pieces from with the hash OK

after 2 or 3 hash fails, if you keep getting hash fails, ban this user because he is the one with problems

this would use almost no extra cpu, seeing that it would still download regularly... just a bit more memory, since it would need to keep a fixed list of active users in memory, like... 10 good ones? and all the bad ones in another list to be checked

or instead of checking user by user, a percentage could also be used... since µtorrent already saves how much it uploads/downloads to each client, it could also save how many hash fails/OKs it gets with each one... though i'd prefer the first one, because then no one else can be banned by mistake
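The "detective" procedure described above can be sketched as follows. Everything here is hypothetical: no real client exposes an API like `download_piece`, and the function names are invented for the example. The idea is just to pair one suspect with peers that have already delivered valid pieces, and blame the suspect only if pieces keep failing across different trusted pairings.

```python
# Sketch of isolating a bad peer by pairing suspects with known-good
# peers, as proposed in the post above.  Purely illustrative.
import random

def isolate_bad_peers(suspects, trusted, download_piece, max_trials=3):
    """Return the suspects whose test pieces repeatedly fail hashing.

    download_piece(peers) stands in for the client's transfer machinery:
    it fetches one piece using exactly the given peers and returns True
    if the piece hashed OK.
    """
    confirmed_bad = []
    for suspect in suspects:
        fails = 0
        for _ in range(max_trials):
            helpers = random.sample(trusted, 2)  # 2 peers with prior valid pieces
            if download_piece(helpers + [suspect]):
                break  # a valid piece clears the suspect
            fails += 1
        if fails == max_trials:
            confirmed_bad.append(suspect)
    return confirmed_bad
```

The cost is real bandwidth: every trial piece fetched with a suspect may be wasted, which is presumably one reason a shipping client would prefer simple per-peer counters.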

Firon hasn't popped into this thread yet, but I'm pretty sure that he'll tell us all that the feature is already in µT. As for the excess wasted bandwidth, I think you'll find that it wasn't all from the one peer. :/

One problem: maybe 6 people have contributed 16k slices to the piece. Who caused the hash to fail? (That's one reason why I advocate a 256k max piece size.)

16k piece size = bad, not part of the BT spec, and not supported by all clients.

If your pieces are too small, it takes way longer to hash, and the .torrent is bigger: a 256k-piece .torrent is almost 4 times the size of a 1MB-piece .torrent. It's not worth the time to hash it. Also, many sites won't accept .torrents over a certain size limit, so you need to raise the piece size to limit the size of the .torrent.
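The "almost 4 times" figure checks out: a .torrent stores one 20-byte SHA-1 hash per piece, so quartering the piece size quadruples the hash payload. A quick back-of-the-envelope calculation (the 700 MB file size is just an example, and the small fixed overhead of tracker URL, file names, etc. is ignored):

```python
# One 20-byte SHA-1 hash per piece in the .torrent's "pieces" field.
def hash_payload_bytes(file_size, piece_size):
    pieces = -(-file_size // piece_size)  # ceiling division
    return pieces * 20

FILE = 700 * 1024 * 1024                        # e.g. a 700 MB file
small = hash_payload_bytes(FILE, 256 * 1024)    # 256 KB pieces
large = hash_payload_bytes(FILE, 1024 * 1024)   # 1 MB pieces
print(small, large, small / large)  # 56000 14000 4.0
```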

16k piece size = bad, not part of the BT spec, and not supported by all clients.

Actually, there's piece sizes and chunk sizes -- and they're not the same thing!

BitTorrent clients aren't supposed to request more than 16 KB at a time, but the larger hashed unit can be 4 MB or more. It takes many of the small units to fill up one hashed unit, and unfortunately that means tracing the bad seed/peer is all the harder.
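For what it's worth, in official BitTorrent-spec terminology the hashed unit is the "piece" and the 16 KB request unit is the "block", which is the reverse of the wording in the post above; the arithmetic is the same either way. A quick calculation of how many 16 KB requests go into one hashed piece, which is what makes the culprit hard to trace:

```python
# Standard maximum request (block) size is 16 KB.
BLOCK = 16 * 1024

def blocks_per_piece(piece_size):
    return piece_size // BLOCK

for piece in (256 * 1024, 1024 * 1024, 4 * 1024 * 1024):
    print(piece // 1024, "KB piece ->", blocks_per_piece(piece), "blocks")
# 256 KB piece -> 16 blocks
# 1024 KB piece -> 64 blocks
# 4096 KB piece -> 256 blocks
```

So a 4 MB piece could, in the worst case, have had 256 different peers contributing to a single failed hash.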

Yeah, I was trying to use 'slice' to differentiate it from piece/chunk, which are often used interchangeably for the larger unit. I don't mind larger .torrents, since it means less data lost to any corruption and hash fails, but that's IMHO.

Does anything actually support 16k piece size? I thought the minimum was 32k. (I bet loading a 32k-piece Fedora 4 torrent would be a decent stress test for a client...)

Somewhat more on topic: it's certainly possible to play detective and track peers to try to narrow it down to one bad peer; I mostly worry that it could put stress on small swarms or even lead to hard-to-track bugs. Then again, a bad peer is more likely to cause swarm problems anyway. If you have to go through this vetting process often (especially if you're on the receiving end) it could put a real crimp in download speed, but I don't think poisoned torrents are that widespread yet...

It could probably be scrambled too, but at least it would offer some protection.

  • 1 month later...

Well, one thing I do when I get enough hash fails from one client is ban his ass.

(µTorrent + PeerGuardian = banninating goodness)

I won't tolerate people sharing complete bullshit, and neither should µTorrent, so I'd propose a torrent-life ban for peers sharing bad data.

actually, there are times it would help to either disable it (per torrent?) or increase the number of allowed hash fails

for example, i had a torrent with only one seeder, and he had some trouble uploading, so there were hash fails and utorrent kept banning him/the only seeder, and when the ban expired, same thing again. it was a real pain to download even though he was on a fat pipe. i wouldn't mind even 50% wasted bandwidth in that case, i have plenty :)

and afaik the only way to reset a temp ban was to quit utorrent (pita), because stopping/restarting the torrent did nothing to the temp ban.

there's a difference between "some broken pieces" and "only broken pieces". current temp ban limits on hash fails are too strict in some cases; that's why it would help if they were customizable

i noticed that in the current betas it's occurring way more often than it used to before

isn't that a bit tight for people with fast connections, especially if it's not "fails per time interval"?

it's getting kind of ridiculous seeing 6 hash fails and then 10 clients getting banned just because they happened to contribute to those pieces

it practically cripples the torrent because it leaves only 1-2 slow-ass peers; at that rate i'd rather have the banning completely disabled :)

uT has no way to identify which of the peers that contributed to the piece caused the corruption, so it increments the counter for them all.

That's the way things are, and there isn't really that much that can be done.
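The blame-everyone behaviour described above is simple to sketch, and the sketch also shows why the earlier complaints about innocent peers are valid: every contributor's counter climbs together. Names and the threshold value are illustrative only, not µTorrent's actual internals.

```python
# Sketch: on a failed hash, bump the fail count of every peer that
# supplied a block of the piece, since the real culprit is unknown.
def record_piece_failure(fail_counts, contributors, ban_list, threshold=5):
    """Increment every contributor's fail count; ban those at threshold."""
    for ip in contributors:
        fail_counts[ip] = fail_counts.get(ip, 0) + 1
        if fail_counts[ip] >= threshold:
            ban_list.add(ip)
```

If one poisoner keeps corrupting pieces that honest peers also contributed to, the honest peers hit the threshold at the same time the poisoner does, which is exactly the mass-ban scenario complained about a few posts up.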

Right now it's a "Blame the Azureus devs for wanting their client to have encryption first to make it look like it's an Azureus feature and everyone else is adopting it."

  • 3 months later...

Hoping this thread is not dead, I will add to it.

I also encounter many hash fails, and like someone else mentioned, it is not because of someone intentionally trying to corrupt the torrent, as sometimes it occurs on a fresh torrent where I cannot get good data from the only seed, but can get good data from another peer who received his data from that same seed. It therefore appears the data is being corrupted in transit between me and the seed, but not between another peer and the seed, nor between me and the other peer.

Another thing is that many pieces get queued, well over 50 in most cases, and if just one peer is sending corrupt data, intentionally or not, over 50 pieces, 1MB, 2MB, or 4MB in size depending on the torrent, are eventually going to fail the hash, and at the same time cause other peers who were involved in completing those pieces to show an increase in their hash-fail count and possibly be banned erroneously.

It would be helpful in my opinion, though I am ignorant of how the protocol actually works, if each peer would account for a full piece instead of many peers contributing to the same piece. I understand that the whole process is one of allowing many to share in the distribution of a large file, but as there are many pieces, why doesn't each peer give a full piece instead of jumping around and giving a few blocks each of several pieces? Previously, using Azureus, I have watched how this works: a peer who possesses an entire piece I need gives but a few blocks, then goes to another piece I need and does the same, but never finishes giving one complete piece. This made me suspicious of the peer, but it appears that many and perhaps all peers do the same thing. If a peer has a complete piece, it should be given completely by that one peer; then, if it fails the hash, request it from a different peer, and peg only one peer with having caused the hash fail. Then, should that peer accumulate 5 hash fails, it would be obvious that he/she should be banned, and no others.

The problem with queuing so many pieces is that it allows a peer, intentionally or not, to create a basis for many hash fails prior to being banned, which can continue long after the ban, taking out many other peers as well who have not actually provided any corrupt data at all.

This appears to be a problem that many others besides myself encounter, and it can and should be solved.
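The single-source idea suggested above can be sketched as follows: assign each needed piece wholly to one peer, so a failed hash points at exactly one suspect. All names here are hypothetical; no real client works this way.

```python
# Sketch: request each piece in full from exactly one peer, making
# blame for a failed hash unambiguous.  Illustrative only.
def assign_pieces(pieces_needed, peers):
    """Round-robin whole pieces to peers; returns {piece_index: peer}."""
    return {p: peers[i % len(peers)] for i, p in enumerate(pieces_needed)}

def blame_for_failure(assignment, piece_index):
    # Only one peer supplied the piece, so the culprit is known.
    return assignment[piece_index]
```

The trade-off, and presumably why BitTorrent downloads blocks from many peers instead, is that each piece then arrives no faster than its single assigned peer can upload, and a stalled peer stalls the whole piece.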

I might add that I also don't subscribe to the theory that all programs have bugs. Only poorly thought-out or poorly written programs have bugs. The problem today is that we overly complicate things trying to be protective, and in the end create something beyond the ability of its creator to comprehend completely. Not trying to get down on Microsoft, but in years of working with computers running Unix, or unix, whichever you prefer, I encountered very few problems that were not quickly solvable.

Sorry for getting off track, but corrupt data is a problem for many of us, and it creates wasted bandwidth in addition to wasted time, so why not gather some good ideas and work diligently towards a solution? If not, I guess we can try to find a way to increase bandwidth and speed over the internet to make the problem less important.

Archived

This topic is now archived and is closed to further replies.
