Everything posted by arvid

  1. Does it work correctly in 3.3? Would you mind testing the latest version of 3.4? We have introduced some performance improvements to downloading metadata.
  2. @ioannis: that line is the TCP download rate limit, which is part of the "mixed mode" algorithm. The mixed mode algorithm refers to using TCP and uTP peers at the same time. Without any such algorithm, TCP peers would (essentially) always starve out any uTP peer, and the whole point of uTP would be defeated. The mixed mode algorithm attempts to strike a fair balance in bandwidth usage between uTP and TCP at a higher level, and also feeds back congestion notifications from uTP into throttling TCP (before it runs over the cliff and fills up the modem's send buffer). Unfortunately, the mixed mode algorithm is far from trivial. We've chosen a fairly conservative approach which involves linearly increasing the allowed TCP rate until uTP sees congestion. That's the line you see.
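To make the shape of that conservative approach concrete, here is a minimal sketch of a per-tick controller. The constants and names are made up for illustration, and the back-off on congestion (halving here) is my own guess; the post only describes the linear increase.

```cpp
#include <algorithm>

// Hypothetical mixed-mode limiter, not uTorrent's actual code.
struct mixed_mode_limiter
{
    int tcp_rate_limit = 10 * 1024;               // current TCP cap, bytes/s
    static constexpr int floor_rate = 10 * 1024;  // never throttle below this
    static constexpr int increment = 5 * 1024;    // linear growth per tick

    // called once per rate-limiter tick with feedback from the uTP layer
    void tick(bool utp_saw_congestion)
    {
        if (utp_saw_congestion)
            // uTP's delay-based controller saw queuing delay building up:
            // cut the TCP cap before TCP fills the modem's send buffer
            tcp_rate_limit = std::max(tcp_rate_limit / 2, floor_rate);
        else
            // no congestion signal: keep growing the TCP allowance linearly
            tcp_rate_limit += increment;
    }
};
```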
  3. This feature is not enabled for private torrents. I've just added an advanced option to turn it off altogether, which will be in 3.2.2.
  4. What are the symptoms you're seeing? It works for me. Make sure your peers are not on your local network, because they won't be rate limited by default.
  5. Thanks. The connection issue has been resolved and will be in a new 3.2 build shortly.
  6. That was a typo. Firon meant 3.1.3 beta.
  7. The hash failures turned out to be specific to the TED torrents. Probably what happened is that ted.com changed their videos (maybe new encodes), and our torrent generator did not re-generate the torrent files.
  8. @ChichipioWilson: Thanks for an excellent report! I believe I've knocked these off today, but I'll verify it some more tomorrow. Hopefully we can do a refresh tomorrow night.
  9. Thanks for all the feedback on 3.1 so far. These are the theories we're working on right now; feel free to comment if you have relevant evidence that contradicts them.

     The disk subsystem appears to be slow, which causes a number of symptoms:

     1. Stuck at 99.9%. This is a special case meaning "we're done downloading, now we're just waiting for all the pieces in the write cache to be flushed to disk". This mode has been changed to be an explicit torrent status, which will even tell you the number of disk jobs left to flush (this is in the next release, as soon as Firon gets around to putting it up).
     2. Stuck at checking 0%. Similarly, if the disk subsystem is slow, the read jobs won't return quickly and may stall for quite a while. There was also a GUI bug causing checking torrents specifically to be a bit messed up (also fixed in the coming RC2). Keep in mind that we only check one torrent at a time; if you force-recheck all your torrents, it's expected that only one of them is actually moving. I think it would make sense to make this torrent state more explicit as well, by having the idle ones say "queued checking" or something.

     Now, why would the disk I/O subsystem be slow (or even lock up)? I'm not entirely sure. If anyone gets a legitimate hang, where there's no disk activity and uT is still not able to flush, please dump the process, record its build number and post it here so I can figure out what it's doing. There are, however, a few factors that we've seen contribute to slow disk I/O (a sketch of the Windows calls involved follows this post):

     1. sparse_files is off by default. This means that whenever you start downloading a 5 GiB file, the first flush to disk will block while Windows writes, on average, 2.5 GiB of zeroes to the filesystem. This can take quite a while on a normal laptop hard drive, several minutes actually. While Windows is doing this, it's holding up uT's disk thread, causing disk overload to reach 100% (as soon as the write cache fills up). There's no clear communication in the UI that this is going on, so it can look like a bug. The first thing to do is to improve the user messaging about what's happening. We've also turned on sparse files by default on Win7+, which eliminates the up-front cost of downloading a large file.
     2. On Vista and later, uTorrent sets its disk I/O priority to below normal, so as not to interfere with interactive applications that need more urgent access to the disk. If another application is doing a lot of sustained disk I/O, uTorrent is likely to see a much lower completion rate for its disk operations. This is especially annoying when shutting down, since persistent disk I/O from other applications can stall the uTorrent process from terminating indefinitely. One way we're mitigating this is to bump our I/O priority back to normal once we start the shutdown sequence and flush the cache.
     3. Files used to be opened in unbuffered mode when written to, by default (this change was introduced somewhere around 2.2 or 3.0, IIRC). Benchmarks on a Win7 laptop suggest this hurts write speed. We've changed the default back to what it was pre-2.2: files are no longer opened for writing in unbuffered mode.

     We made a few improvements to the disk I/O subsystem before the RC, mostly simplifying and deleting unnecessary logic, as well as fixing a bug introduced a few months earlier that caused it to flush the whole write cache every second or so. That flushing bug would severely harm the write performance of the 3.1 betas.

     There was a significant seeding performance improvement too. Previously we would throw away pieces as they were written to disk, and if a peer requested them we would read them back in again. Now we avoid that extra read step and put the buffers straight into the read cache.

     Another improvement in 3.1 is a GUI refresh overhaul, all behind the scenes. We used to update the GUI in a way that did not scale well with the number of torrents; anyone who has tried to add 20,000 torrents to uT probably knows what I'm talking about. Our goal is that 3.1 should easily handle that many torrents, and the GUI now does. The one remaining issue we're working on is saving resume data for that many torrents without allocating a gig of RAM and stalling the GUI. This GUI overhaul is what's causing most of the GUI bugs you've been seeing, where torrents sometimes aren't updated properly in the download list, or aren't removed when they should be. The new scheme is a bit more complicated, because all changes are edge-triggered, and we might have missed some of the edges.
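As promised above, here is a rough sketch of the Win32 calls behind the three factors discussed in that post: sparse files, background I/O priority, and unbuffered writes. This is my own illustration, not uTorrent's source; error handling is omitted and the helper names are made up.

```cpp
#include <windows.h>
#include <winioctl.h>

// Open a file for writing, optionally unbuffered (factor 3), and mark it
// sparse (factor 1). Illustrative only; real code needs error handling,
// and FILE_FLAG_NO_BUFFERING additionally requires sector-aligned I/O.
HANDLE open_for_writing(wchar_t const* path, bool unbuffered)
{
    DWORD const flags = FILE_ATTRIBUTE_NORMAL
        | (unbuffered ? FILE_FLAG_NO_BUFFERING : 0);
    HANDLE h = CreateFileW(path, GENERIC_WRITE, FILE_SHARE_READ, NULL,
        OPEN_ALWAYS, flags, NULL);
    if (h == INVALID_HANDLE_VALUE) return h;

    // marking the file sparse means Windows doesn't have to physically
    // zero-fill a freshly created 5 GiB file on the first flush
    DWORD returned = 0;
    DeviceIoControl(h, FSCTL_SET_SPARSE, NULL, 0, NULL, 0, &returned, NULL);
    return h;
}

// Factor 2: on Vista+, demote the calling thread's I/O priority so
// interactive applications keep snappy disk access. Calling this again
// with THREAD_MODE_BACKGROUND_END (e.g. at shutdown) restores normal
// priority so the write cache can be flushed quickly.
void enter_background_io()
{
    SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN);
}
```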
  10. Thanks for the report! very well written. I believe the resume.dat is a red herring. That's simply a bug where we log an error when there actually isn't one. I would imagine that if it's a matter of running out of physical memory or virtual address space, the working set or virtual memory size of the process would be significantly inflated, much more than you see. Or at least that other processes would use up all the remaining physical RAM. It might be caused by a handle leak as well. Do you see an ever-increasing number of GDI handles? (this is a column that can be enabled in task manager). Since this error actually happens on torrents though, I imagine it's some limit hit by the disk I/O subsystem. Maybe there is a file handle leak, making us open too many files.
  11. This setting controls how long an entry stays in the cache; the default is to time out after 9 minutes, so that is exactly what it is doing. This is what the read cache does: it assumes that a request for one chunk of a piece is likely to be followed by another request from the same piece. If you are uploading significantly more slowly than others in the swarm, or there are just a lot of other seeds the downloader can get chunks from, the downloader may have satisfied all the other chunks from other peers by the time you've completed the first one. In that case the read cache is completely ineffective, and you might want to turn it off. These are the fixes we will make to uTorrent to mitigate the problem (a sketch follows this post): 1. use a fixed read-cache line size of, say, 128 KiB, rather than the full piece; 2. for fast peers, request entire pieces, to improve cache performance for the sending peer. Please let us know if you have any other ideas that could help. One possibility is to try to predict the sequentiality of read requests and adjust the cache line size accordingly, but that is complicated, and it's not clear it would actually buy us anything; Windows internally just uses 128 KiB cache lines (unless the sequential-file flag is set). Since this is not a regression, and we're very close to a 3.0 stable, these changes will unfortunately not make it into 3.0. But I would like to thank you for bringing this to our attention.
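A minimal sketch of fix (1) above: a read cache keyed on (piece, 128 KiB line) with the 9-minute expiry mentioned. Every name and structural choice here is an illustrative assumption, not uTorrent's implementation.

```cpp
#include <cstdint>
#include <ctime>
#include <map>
#include <vector>

const int cache_line_size = 128 * 1024;  // bytes per cache line
const std::time_t expiry = 9 * 60;       // default entry lifetime, seconds

struct cache_key { std::uint32_t piece; std::uint32_t line; };
bool operator<(cache_key a, cache_key b)
{ return a.piece != b.piece ? a.piece < b.piece : a.line < b.line; }

struct cache_entry { std::vector<char> data; std::time_t added; };

struct read_cache
{
    std::map<cache_key, cache_entry> lines;

    // align a request to its cache line; a miss reads only 128 KiB from
    // disk rather than the whole (possibly multi-MiB) piece
    cache_entry* lookup(std::uint32_t piece, std::uint32_t offset)
    {
        cache_key k{piece, offset / cache_line_size};
        auto it = lines.find(k);
        return it == lines.end() ? nullptr : &it->second;
    }

    // evict entries that have stayed in the cache past the timeout
    void expire(std::time_t now)
    {
        for (auto it = lines.begin(); it != lines.end();)
            it = (now - it->second.added > expiry) ? lines.erase(it) : ++it;
    }
};
```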
  12. Thanks for the report! It's on the backlog now and should be fixed within two weeks, at least in 3.0. We might back-port it as well, but we're getting close to the 3.0 beta.
  13. Because it will "come back" in 3.0, where it works. Unfortunately it's not that simple. We have reverted a bunch of code around the systray icon handling, but the issue seems to be triggered by something seemingly unrelated. Nobody here has been able to reproduce it either (apart from the first fix), which makes it a lot harder to track down.
  14. "What we need: 1. Select n torrents. 2. Right-click menu -> Set download location. 3. Specify a directory. 4. uTorrent moves ALL the torrents (and their files) to their RESPECTIVE sub-dirs (NOT dumping them all in a single dir!). As I've said before, all we need is EXACTLY what happens when the files are moved after a torrent has finished downloading. You already have the code... we just need it as an option after downloads have finished." I just tested this in 2.2.1, and it definitely works for me. The behavior when you select multiple torrents is that we ask once for each torrent.
  15. "Does it ask before install? If not, this feature seems unsafe (junk/malicious app distribution)." It doesn't install it; it doesn't even ask. It only installs the bundle if you click on the app icon next to the app in the list view.
  16. I don't actually think that was an accident. The ideabank is mostly meant for feature requests/ideas, not bug reports. I'll copy it in here. This forum is a much better place for bug reports like this. Is this still an issue in the latest 2.2.1?
  17. It does. In this case, though, the bug was introduced while adding the new feature of moving the files when doing "set download location". I believe we have a dialog like that; it applies to each file individually, but we only ask once. I was talking to Firon about that yesterday. I'm thinking that with this last fix, you'll probably not notice any significant difference from the old behavior, as long as you use it like that. If you have had time to download anything, you'd still have to say "no" in the dialog, though. Maybe something like "no, and never ask me again" would make sense.
  18. Here's a bit of an update on the 2.2.1 fixes. Last night I managed to reproduce the missing systray icon issue, though I'm not sure it's the same issue some people have reported. I discovered that if you turn off "always show icon in systray", minimize uT (to the systray), and exit via the systray context menu, then when you start uT up again it's still minimized but does not add the systray icon back. Also last night, Art fixed the file-move logic to not break when moving a file to the same location (e.g. when setting the move-completed path to the same as the download path), and to not delete the destination files when there are no source files (i.e. the "set download location deletes my files" issue). These two fixes will be in the next release, which (hopefully) will be later today. Thanks for the reports!
  19. This dump contains only a single thread, the disk I/O thread, which is blocked in the call that sets a file to no longer be sparse. I'm guessing you have a large file that was created as sparse, and the OS is now blocking while writing gigabytes of zeroes. However, this doesn't explain uTorrent freezing; the UI should still be responsive even when the disk thread is blocked. I can't tell what the UI is doing, though, since it's not part of the dump. Is it possible that the disk I/O slows down swapping to the point where things appear to freeze (because swapping parts of the process back in is slow)? Do you have a dump of the crash as well?
  20. Thanks for everyone's feedback! @moogly: I just fixed the space for that string; it will be in the next build. We're getting close to declaring 2.2.1 stable. If there are any outstanding serious issues or regressions we might be overlooking, please let us know!
  21. I finally found the SOCKS5 error-handling bug that would cause an infinite loop (freeze and high CPU usage). This is now fixed and should be in the latest release. As for the disappearing systray icon, I've reverted the async code back to always being synchronous, which I believe should fix the issue.
  22. I can't reproduce this. Running with SOCKS5 does not freeze uT for me, no matter how long I wait. It does break connections, obviously, since the proxy doesn't support UDP, but uT doesn't break beyond the fact that you can't download anything. Can you describe the symptoms you see in more detail? When you say freeze, does the UI stop refreshing? Are buttons no longer clickable?
  23. It's not really a firewall; it doesn't protect the user from malicious incoming traffic. It makes the user a less attractive victim for being owned and used in a DDoS attack. The firewall analogy is odd: this is not about protecting the user, it's about making the software less likely to be abused for launching attacks. So is the port filter. It's on by default, there's no clear indication of it being there, and it's not trivial to turn off; but then, the vast majority of functionality in uTorrent works this way. Although I agree those are reasonable expectations for features the user is likely to want to change, I don't think they hold across the board. Take uTP target delay: the vast majority of uTorrent users won't even understand what that is, much less want to change it. If there were a great need to change it, we should really just change the default instead (which in this case we might, at some point). Or things like diskio.smart_hash, definitely something everyone should leave on (unless you're specifically troubleshooting some issue). I get the impression you think that running on a low port will significantly reduce the number of peers that connect to you. Keep in mind that you can still connect to peers via the DHT! Could you run some simple tests to show this? I would be surprised if that were the case for the most part. In my experience, torrents with no trackers are not very common.
  24. @Rafi: Regarding the TCP graphs. The idea was that, with uTP traffic excluded from the rate limiter, the overall graph would not necessarily align with the limit anymore; I added the TCP graphs so you could tell that at least the TCP traffic was limited by it. It seems to work fine for me; I just have to make sure I have some active TCP peers (by disabling uTP, for instance).
  25. As for the experiment with not rate limiting uTP by default, here's some of our reasoning; I'm very interested if anyone has a good (or better) solution to this problem. Obviously, excluding uTP from the rate limiting didn't work as well as we expected.

      The problem is that a very significant portion of uTorrent users set their upload rate limit so low that their download speed suffers from it. In 2.0.1 we enabled calc_overhead by default, which should give people a faster download rate if the upload rate limit is set correctly; instead, people saw slower download rates. The reason is that their upload rate limits were set so low that not even the ACK traffic caused by their downloads would fit within them. The purpose of calc_overhead is to let people set the upload rate limit much closer to their actual capacity, without the ACK traffic making them exceed it. It seems people's upload rate limits are already set low enough to compensate, so changing the default for calc_overhead would only make sense if we also increased people's upload rate limits, and the latter is obviously quite intrusive.

      The reasoning for lifting the rate limit on uTP by default was to take a step back and look at why people set upload rate limits in the first place. The main purpose is to keep your modem from being flooded and your internet connection from becoming painfully slow. Another reason, on a metered connection, is to not eat up your quota too quickly; we already added the "Transfer cap" for that case (though maybe we haven't done a very good job advertising it). Since uTP isn't supposed to have the problem of filling up your router's or modem's send buffer, we figured that rate limiting it isn't very important (there's a lot of evidence that this works, even though it might still have problems in certain cases; please let us know if it doesn't work as intended for you!). Other reasons to rate limit, like easing the load on your hard drive, are more power-user sorts of things, and we figured a checkbox would be easy enough to find if you're computer savvy.

      So currently we're in a situation where we rate limit uTP (and TCP) far lower than necessary, and the overall performance of BitTorrent swarms is lower than it could be. Lifting the rate limit on uTP was supposed to help, but it didn't seem to. Maybe we can make the settings pane easier to use and explain some of these issues?
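To put rough numbers on the ACK-overhead point (a back-of-the-envelope illustration, not figures from the post): TCP typically sends one ~40-byte ACK for every other ~1500-byte data segment, so a download at 1 MB/s generates on the order of 1 MB/s × 40/3000 ≈ 13 kB/s of upstream ACK traffic. With calc_overhead counting that against the limit, an upload cap set to 10 kB/s cannot even fit the ACKs, and the download throttles itself; that is exactly the effect described above.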