
Good settings for 600/200mbit?


Recommended Posts

Hi guys,

I have a gigabit connection which, in reality, can support about 600mbit down and 200-250 up.

I've been trying to tweak utorrent to max out my upload speed. Download is not an issue - I can easily get 20-40MBytes/s out of it. However, when it comes to seeding, this is the issue:

Theoretically at least 20MBytes/s or more shouldn't be an issue, and it isn't, as long as I have fewer than 20-30 active torrents. But I have around 700 torrents that I'd like to continue seeding, and I don't like queued seeding. I'm on a private tracker and I have some rare stuff that only I seed; if I don't seed it, people can't download it, so I want to keep seeding.

However, when my active torrents go above about 70, my upload speed starts to decay. First it drops to 11MBytes/s, then 6, and finally, around 100 torrents, it barely floats around 4MBytes/s - which, divided across 100 torrents, is nothing, to put it politely. I must be able to do better than this.

My machine has 6x 500G SAS drives in RAID 5 on an HP 410i controller, a Linux OS, readahead tuned up to 16M, and around 24 cores... iostat and iowait don't show anything particularly interesting, but I notice that utorrent (run via Wine) occupies one core and maxes it out. That's about the only anomaly I can think of.

Could you recommend a good set of utorrent settings to suit this scenario?

Sincere thanks to anyone willing to help me in any way. And please, don't send me to hell right away ;-)


This is solved...

Turns out it wasn't utorrent after all.

For anyone wondering what made the difference, here it is:

I/O scheduler and cache tuning:

# Switch the array's I/O scheduler to deadline (the cciss device shows up as cciss!c0d1 in sysfs)
echo "deadline" > /sys/block/cciss\!c0d1/queue/scheduler
# Raise the device readahead (the value is in 512-byte sectors)
/sbin/blockdev --setra 16384 /dev/cciss/c0d1
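Note that neither setting survives a reboot, so if this helps you, you'll want to reapply it at boot. Something like the following in /etc/rc.local would do it, assuming your distro still runs that script at startup - the device paths here are for my cciss array, so adjust them to your own hardware:

```
#!/bin/sh
# Reapply I/O tuning at boot - paths match my HP 410i (cciss) RAID array
echo "deadline" > /sys/block/cciss\!c0d1/queue/scheduler
/sbin/blockdev --setra 16384 /dev/cciss/c0d1
```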

File system mount option changes (XFS): defaults,noatime,nodiratime,logbsize=131072,logbufs=8
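For anyone wanting the full mount line, a matching /etc/fstab entry might look like the following - the partition and mount point here are just placeholders for wherever your torrent data lives:

```
# device             mountpoint  fs   options                                                 dump pass
/dev/cciss/c0d1p1    /storage    xfs  defaults,noatime,nodiratime,logbsize=131072,logbufs=8   0    0
```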

Now my throughput is blazing and iowait is low... Can't believe it was so simple...

(Although it's probably worth mentioning that this will only make a real difference if you have a healthy amount of RAM to back up your muscles. In my case, 12GB plus a 1GB controller cache...)

Hope this will help someone...



Although the speed issue has been solved, more or less, I still have the problem that utorrent uses only one core (all right, if I dig into the process tree and the associated affinity table, I can see some threads related to wineserver and utorrent vegetating on other cores, but that's about zip, considering that - for goodness's sake - I have 24 cores...). I wonder if we could somehow reach a point where utorrent would be truly parallelized? Or is this apparent lack of serious SMP capability just a Linux/Wine artefact? Anyway... it's "working" now. But it could be so much better, I believe, if it were using all the resources it has...

For example: it could fork separate threads - let's call them "torrent boxes" - that "host" and "handle" a fixed number of torrents each. These torrent boxes would then connect to a central cache, which would have to be rewritten to spawn a child pool server process for every "n" torrent boxes and draw data out of a central hashed pool. It would of course be important that the hash pool and pool server processes be implemented in a non-blocking way, similar to Squid's AUFS, to facilitate rapid block extraction and replacement. Meanwhile, every 10 or so seconds, a load balancer thread would re-arrange the affinity of the box threads if it finds that one CPU is overloaded with box threads compared to the others. Or... since there's no way one "torrent box" would ever require blocks from another torrent box's cache pool, we could take a shortcut and say that a "torrent box" is a full-fledged torrent client object with its own non-blocking cache management, eliminating the need for a central cache entirely and cutting down on complexity... And so on...

Oh well... dreams.



This topic is now archived and is closed to further replies.
