
arvid

Established Members
  • Posts: 125
  • Joined
  • Last visited


arvid's Achievements

Advanced Member (3/3) · 0 Reputation
  1. Does it work correctly in 3.3? Would you mind testing the latest version of 3.4? We have introduced some performance improvements to downloading metadata.
  2. @ioannis: that line is the TCP download rate limit, which is part of the "mixed mode" algorithm. The mixed mode algorithm refers to using TCP and uTP peers at the same time. Without any such algorithm, TCP peers would (essentially) always starve out any uTP peer, and the whole point of uTP would be defeated. The mixed mode algorithm attempts to strike a fair balance in bandwidth usage between uTP and TCP at a higher level, and also feeds back congestion notifications from uTP into throttling TCP (before it runs over the cliff and fills up the modem's send buffer). Unfortunately, the mixed mode algorithm is far from trivial. We've chosen a fairly conservative approach which involves linearly increasing the allowed TCP rate until uTP sees congestion. That's the line you see.
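The mixed-mode idea described above — linearly increase the allowed TCP rate until uTP sees congestion, then cut it back so uTP peers aren't starved — can be sketched roughly like this. This is a minimal illustration only; the class name, constants, and the multiplicative backoff factor are assumptions, not uTorrent's actual algorithm:

```python
# Sketch of a mixed-mode TCP rate limiter: additively raise the allowed
# TCP rate while uTP is happy, multiplicatively cut it when uTP reports
# congestion. All names and numbers here are illustrative.

class MixedModeLimiter:
    def __init__(self, floor=50_000, step=10_000, backoff=0.5):
        self.floor = floor        # minimum TCP rate we never go below (bytes/s)
        self.step = step          # linear additive increase per tick (bytes/s)
        self.backoff = backoff    # multiplicative cut on uTP congestion
        self.tcp_limit = floor

    def on_tick(self, utp_congested: bool) -> int:
        if utp_congested:
            # uTP saw delay/loss: throttle TCP so uTP isn't starved out
            self.tcp_limit = max(self.floor, int(self.tcp_limit * self.backoff))
        else:
            # no congestion signal: linearly grow the TCP allowance
            self.tcp_limit += self.step
        return self.tcp_limit

lim = MixedModeLimiter()
print(lim.on_tick(False))  # 60000
print(lim.on_tick(False))  # 70000
print(lim.on_tick(True))   # 50000 (halved, then clamped to the floor)
```

The "line you see" in the rate graph corresponds to the additive-increase phase of such a controller.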
  3. This feature is not enabled for private torrents. I've just added an advanced option to turn it off altogether; that will be in 3.2.2.
  4. What are the symptoms you're seeing? It works for me. Make sure your peers are not on your local network, because they won't be rate limited by default.
  5. Thanks. The connection issue has been resolved, and the fix will be in a new 3.2 build shortly.
  6. That was a typo. Firon meant 3.1.3 beta.
  7. The hash failures turned out to be specific to the TED torrents. Probably what happened was that ted.com changed their videos (maybe new encodes), and our torrent generator did not re-generate the torrent files.
  8. @ChichipioWilson: Thanks for an excellent report! I believe I've knocked these off today, but I'll verify it some more tomorrow. Hopefully we can do a refresh tomorrow night
  9. Thanks for all the feedback on 3.1 so far. These are the theories we're working on right now. Feel free to comment if you have relevant evidence that may contradict them.

The disk subsystem appears to be slow, which causes a number of symptoms:

1. Stuck at 99.9%. This is a special case meaning "we're done downloading, now we're just waiting for all the pieces in the write cache to be flushed to disk". This mode has been changed to be an explicit torrent status. It will even tell you the number of disk jobs left to be flushed (this is in the next release, as soon as Firon gets around to putting it up).

2. Stuck at checking 0%. Similarly, if the disk subsystem is slow, the read jobs won't return very soon and may stall for quite a while. There was also a GUI bug causing checking torrents specifically to be a bit messed up (also fixed in the coming RC2). Also, keep in mind that we only check one torrent at a time; if you force-recheck all your torrents, it's expected that only one of them is actually moving. I think it would make sense to make this torrent state more explicit as well, by making the idle ones say "queued checking" or something.

Now, why would the disk I/O subsystem be slow (or even lock up)? I'm not entirely sure. If anyone gets a legitimate hang, where there's no disk activity and uT is still not able to flush, please dump the process, record its build number and post it here so I can figure out what it's doing. There are, however, a few factors we've seen that may contribute to the disk I/O being slow:

1. Sparse files are not used by default. This means that whenever you start downloading a 5 gig file, the first flush to disk will block, and Windows will write 2.5 gigs of zeroes to the filesystem on average. This can take quite a while on a normal laptop hard drive, several minutes actually. While Windows is doing this, it's holding up uT's disk thread, causing disk overload to reach 100% (as soon as the write cache fills up). There's no clear communication in the UI that this is going on, and it may appear to be a bug. I think the first thing to do is to improve user messaging about what's going on. We've also turned on sparse files by default on Win7+, which will eliminate the up-front cost of downloading a large file.

2. On Vista, uTorrent sets its disk I/O priority level to below normal. This is to avoid interfering with interactive applications that may need more urgent access to the disk. If another application is doing a lot of consistent disk I/O, on Vista and Win7 uTorrent is likely to see a much lower rate of disk operation completions. This can be especially annoying when shutting down, since persistent disk I/O from other applications may stall the uTorrent process from terminating indefinitely. One way we're mitigating this is to bump our I/O priority back to normal once we start the shutdown sequence and flush the cache.

3. Files used to be opened in unbuffered mode when being written to, by default (this change was introduced somewhere in 2.2 or 3.0, IIRC). Benchmarks on a Win7 laptop suggest that this impacts write speed negatively. We've changed this default back to what it was pre-2.2: files are not opened for writing in unbuffered mode.

We made a few improvements to the disk I/O subsystem before the RC, mostly simplification and deletion of logic that wasn't necessary, as well as fixing a bug introduced a few months earlier that caused it to flush the whole write cache every second or so. That flushing bug would severely harm the write performance of the 3.1 betas.

There was a significant seed performance improvement too. Previously, we would throw away the pieces we downloaded as they were written to disk, and if a peer requested them, we would read them back in again. Now we avoid that extra read step and put the buffers straight into the read cache.

Another improvement in 3.1 is a GUI refresh overhaul. This is all behind the scenes. We used to update the GUI in a way that did not scale very well with the number of torrents. If anyone has ever tried to add 20,000 torrents to uT, they probably know what I'm talking about. Our goal with 3.1 is that it should easily handle that many torrents, and now the GUI does. The one remaining issue we're working on right now is being able to save resume data for that many torrents without allocating a gig of RAM and stalling the GUI while saving it. This GUI overhaul is what's causing most of the GUI bugs you've been seeing, where torrents sometimes aren't updated in the download list properly, or aren't removed when they should be. The new scheme is a bit more complicated, because all changes are edge-triggered, and we might have missed some of the edges.
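The sparse-file point in post 9 can be demonstrated with a small portable sketch: extending a file with truncate() gives it its full logical size without the OS writing the zeroes up front. On Windows a file additionally needs the sparse attribute set (on pre-Win7 defaults, without it, those zeroes get physically written, which is what stalls the disk thread); this POSIX-flavored example relies on the filesystem creating holes automatically and is illustrative only:

```python
# Demonstrates sparse preallocation: the file reports its full logical
# size immediately, but no payload data has been written to disk.

import os
import tempfile

def preallocate_sparse(path: str, size: int) -> int:
    """Create `path` with logical size `size` without writing any data."""
    with open(path, "wb") as f:
        f.truncate(size)   # extends the file; no zeroes are written eagerly
    return os.path.getsize(path)

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "payload.bin")
    five_gib = 5 * 1024 ** 3
    # Logical size is 5 GiB right away; block allocation happens lazily.
    print(preallocate_sparse(p, five_gib) == five_gib)  # True
```

Without sparse support, the first write near the end of the file forces the filesystem to materialize everything before it, which matches the multi-minute stall described above.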
  10. Thanks for the report! Very well written. I believe the resume.dat is a red herring; that's simply a bug where we log an error when there actually isn't one. I would imagine that if it were a matter of running out of physical memory or virtual address space, the working set or virtual memory size of the process would be significantly inflated, much more than you see. Or at least that other processes would use up all the remaining physical RAM. It might be caused by a handle leak as well. Do you see an ever-increasing number of GDI handles? (This is a column that can be enabled in Task Manager.) Since this error actually happens on torrents, though, I imagine it's some limit hit by the disk I/O subsystem. Maybe there is a file handle leak, making us open too many files.
  11. This setting controls the time a cache entry will stay in the cache. The default is to time out after 9 minutes. That is exactly what it is doing: the read cache assumes that a request for one chunk in a piece is likely to be followed by another request from the same piece. If you are uploading significantly slower than others in the swarm, or there are just a lot of other seeds the downloader might get chunks from, the downloader may have satisfied all other chunks from other peers by the time you've completed the first chunk. In this case, the read cache is completely ineffective. You might want to turn off the read cache if this is a problem for you. These are the fixes we will make to uTorrent to mitigate this problem: 1. use a fixed read-cache line size of, say, 128 kiB, rather than the full piece; 2. for fast peers, request entire pieces to improve cache performance for the sending peer. Please let us know if you have any other ideas that could mitigate this problem. One possibility is to try to predict the sequentiality of read requests and adjust the cache line size accordingly. This is complicated, and it's not clear it would actually buy us anything. For instance, Windows internally just uses 128 kiB cache line sizes (unless the sequential file flag is set). Since this is not a regression, and we're very close to a 3.0 stable, these changes will unfortunately not make it into 3.0. But I would like to thank you for bringing this to our attention.
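The first proposed fix — caching fixed 128 kiB lines instead of pinning a whole (possibly multi-megabyte) piece per request — can be sketched like this. Illustrative Python only; the class, the miss counter, and the backing-store callback are assumptions, not uTorrent internals:

```python
# A read cache with a fixed 128 KiB line size: a 16 KiB chunk request
# pulls in only the 128 KiB line containing it, not the whole piece.

CACHE_LINE = 128 * 1024   # bytes per cache line
CHUNK = 16 * 1024         # a typical BitTorrent request block

class ReadCache:
    def __init__(self, read_fn):
        self.read_fn = read_fn    # read_fn(offset, length) -> bytes (disk read)
        self.lines = {}           # line index -> cached bytes
        self.misses = 0           # number of actual disk reads performed

    def read_chunk(self, offset: int) -> bytes:
        line = offset // CACHE_LINE
        if line not in self.lines:
            self.misses += 1      # one disk read fetches the whole line
            self.lines[line] = self.read_fn(line * CACHE_LINE, CACHE_LINE)
        within = offset % CACHE_LINE
        return self.lines[line][within:within + CHUNK]

# Eight sequential 16 KiB chunks all live in one 128 KiB line: one disk read.
backing = bytes(CACHE_LINE)
cache = ReadCache(lambda off, ln: backing[off:off + ln])
for i in range(8):
    cache.read_chunk(i * CHUNK)
print(cache.misses)  # 1
```

This keeps the "next chunk is probably adjacent" benefit for sequential readers while bounding how much memory a single slow downloader can pin.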
  12. Thanks for the report! It's on the backlog now and should be fixed within 2 weeks, at least in 3.0. We might back-port it as well, but we're getting close to 3.0 beta.
  13. Because it will "come back" in 3.0, where it works. Unfortunately it's not that simple. We have reverted a bunch of code around the systray icon handling, but it seems to be triggered by something seemingly unrelated. Nobody here has been able to reproduce it either (apart from the first fix), which makes it a lot harder to track down.
  14. "What we need: 1. Select n torrents 2. Right-click menu -> Set download location 3. Specify a directory 4. uTorrent moves ALL the torrents (and their files) to their RESPECTIVE sub-dirs (NOT dumping all in a single dir!) As I've said before, all we need is EXACTLY what happens when the files are moved after a torrent has finished downloading. You already have the code... we just need it as an option after downloads have finished." I just tested this in 2.2.1, and it definitely works for me. The behavior when you select multiple torrents is that we ask once for each torrent.
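For what it's worth, the "respective sub-dirs" behavior the request describes amounts to a simple path mapping: each torrent's destination is the chosen target directory plus that torrent's own directory name. A hypothetical helper (not uTorrent code) makes the intent concrete:

```python
# Maps each torrent's current directory to target/<its own dir name>,
# rather than dumping every torrent's files directly into `target`.

import os

def per_torrent_destinations(torrent_dirs, target):
    """Return {current_dir: target/<basename(current_dir)>} for each torrent."""
    return {d: os.path.join(target, os.path.basename(d)) for d in torrent_dirs}

moves = per_torrent_destinations(
    ["/data/incomplete/ubuntu-22.04", "/data/incomplete/debian-12"],
    "/data/done",
)
# Each torrent keeps its own sub-directory under the new target:
print(moves["/data/incomplete/ubuntu-22.04"]
      == os.path.join("/data/done", "ubuntu-22.04"))  # True
```

The actual move (relocating files and updating the torrent's save path) would then be performed per torrent against its mapped destination.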
  15. "Does it ask before install? If no, this feature seems unsafe (junk/malicious app distribution)." It doesn't install it; it doesn't even ask. It only installs the bundle if you click on the app icon next to the app in the list view.