diskio.flush_files not working?


h0rnytoad1

hi,

I just updated to 1.5 and I'm watching RAM usage with Task Manager. µTorrent itself doesn't go over 10 MB, which is good, but the system's free RAM often drops to the 50 MB line (out of 512 MB) before getting cleared up. My download runs at 330 kB/s and µTorrent is going full speed.

I think diskio.flush_files doesn't work right; either the timing is too slow or it isn't working at all. I read in another post that µTorrent flushes files every 60 seconds, closing each file handle, but I don't see that happening here.
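
For context, "flushing and closing" in Win32 terms amounts to something like the following (a minimal sketch of what such a periodic flush could look like, not µTorrent's actual code):

    #include <windows.h>

    /* what a once-a-minute flush amounts to, for one open file */
    void flush_and_close(HANDLE h)
    {
        FlushFileBuffers(h);  /* push the file's dirty cache pages to disk */
        CloseHandle(h);       /* drop the handle so Windows can trim its cached pages */
    }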

I tried each beta since 425, and everything after 425 has had this problem. It might also be the "prioritize rarest pieces" change, but beta 425 worked right.

Just to make sure it wasn't some odd setting, I deleted the µTorrent folder in %AppData% to reset everything to defaults.

All advanced options are back at their defaults; nothing changed.

Edit: one torrent just finished downloading; once µTorrent renamed the file on disk (it had the .!ut extension), the RAM was freed. But it's going down once again. I'm pretty sure flushing doesn't work anymore.

Well, that's what happens to me. So you're saying flush_files works, but maybe something else doesn't.

I'm telling you, beta 425 was very good with RAM for me. I tried 426, and then 427 after the hash bug was found in 426, but I came back to 425 because of the high RAM usage.

I'm sure something is different, and it's not working right.

I left the read and write caches at -1, but how much would I have to enter if I wanted them to be 10 MB each? Would 10000 be 10 MB?

I'll try that and see if it still does it.

Windows uses as much RAM as possible for the system cache, even with flush_files (it seems to be unavoidable).

It is avoidable, in fact. System cache handling is explained here.

In short:

1) Allocate a buffer with VirtualAlloc.

2) Open the file with CreateFile, passing FILE_FLAG_NO_BUFFERING.

This really does bypass the system cache. It is pretty fast, and the buffer can be small, since the HDD and its driver buffer data too. But reading this way is a bit quirky: reads must be done in sector-size multiples at sector-aligned offsets.
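
A minimal sketch of those two steps (my own illustration, not µTorrent's code; the file name is hypothetical):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        const DWORD BUF_SIZE = 1 << 20;  /* 1 MiB: a multiple of any common sector size */

        /* 1) VirtualAlloc returns page-aligned memory, which satisfies the
              sector-alignment requirement of unbuffered I/O */
        BYTE *buf = VirtualAlloc(NULL, BUF_SIZE, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        if (!buf) return 1;

        /* 2) FILE_FLAG_NO_BUFFERING bypasses the system cache entirely; reads
              must be sector-size multiples at sector-aligned file offsets */
        HANDLE h = CreateFileW(L"D:\\big.iso", GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        DWORD got;
        unsigned __int64 total = 0;
        while (ReadFile(h, buf, BUF_SIZE, &got, NULL) && got > 0)
            total += got;  /* hash or otherwise process the chunk here */

        printf("read %I64u bytes without growing the system cache\n", total);
        CloseHandle(h);
        VirtualFree(buf, 0, MEM_RELEASE);
        return 0;
    }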

Or you can use _wopen from msvcrt.dll with the _O_SEQUENTIAL flag and read the file the common way; this won't use up the system cache, but the speed will suffer a bit.
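
A sketch of that variant too (again my illustration with a hypothetical file name; _O_SEQUENTIAL maps to CreateFile's FILE_FLAG_SEQUENTIAL_SCAN hint, so Windows recycles the file's cache pages quickly instead of letting them pile up):

    #include <fcntl.h>
    #include <io.h>

    int main(void)
    {
        static char buf[65536];
        int fd, n;

        fd = _wopen(L"D:\\big.iso", _O_RDONLY | _O_BINARY | _O_SEQUENTIAL);
        if (fd < 0) return 1;

        /* plain sequential reads; no alignment rules apply here */
        while ((n = _read(fd, buf, sizeof(buf))) > 0)
            ;  /* hash or otherwise process the chunk here */

        _close(fd);
        return 0;
    }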

I'm experiencing this problem when creating a torrent for a 6 GB batch: the system cache eats my RAM and everything turns sluggish. This can also affect hash-checking of big data.

Maybe this post should be moved to Feature Requests.

I just checked FAQ #6.20: "diskio.write_queue_size defaults to -1, which is automatic cache management. µTorrent will automatically adjust the cache size based on your download speed (max measured download speed * 7). If you would like to adjust this value manually, set it to 8 times your maximum download speed in kilobytes, but try to avoid going below 1000 or above 32768 in most situations."

So the -1 setting for the write cache (diskio.write_queue_size) is based on the max download speed (max measured download speed * 7). But by that equation, 330 kB/s * 7 is only 2310 kB, yet the RAM still gets all used up, forcing µTorrent to write to disk (through the system cache) every 7 seconds at full speed.

(PS: it would be a good idea to put the variable names in bold in the FAQ to make it easier to read, like I did here.)

Anyway, I put in 30000. Since my connection can do about 20 MB per minute at full speed (330 * 60), that forces µTorrent to flush every minute or longer, which isn't so bad on the hardware while keeping the system from getting sluggish (numbers spelled out below).
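
To make the numbers explicit (my own working, in the kilobyte units the FAQ uses):

    330 kB/s * 60 s = 19,800 kB ≈ 20 MB downloaded per minute
    30,000 kB / 330 kB/s ≈ 91 s between forced flushes at full speed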

But I maintain that diskio.flush_files doesn't flush once per minute (i.e., clear the RAM as it did back in beta 425), or it would show up in my RAM usage. It's only logical.

To work like BitComet would require a recode of the cache and disk I/O handling.

I agree, it's a big job after all. Right now I'm running µTorrent at a low speed of 20 kB/s and I'm happy not having an extra cache; the system cache only grows at high read speeds. I meant rewriting the code for torrent creation and hash checking, which only need a simple sequential scan of the file from the disk I/O point of view... Hmm... then again, that doesn't happen frequently.

There is another idea: it is possible to shrink the system cache at runtime. MS claims the cache can be capped permanently, and there is an API to specify the cache size; in theory that should be enough to keep it from growing, but investigation reveals it isn't true. CacheSet is a utility that can set the system cache size: after you set it, the cache shrinks, then grows again X__X But if you perform the procedure repeatedly, you can keep the cache at the desired size. T__T
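
For illustration, here is what that repeated capping could look like with the Win32 SetSystemFileCacheSize call (a sketch under assumptions: the call exists only on newer Windows versions and needs SeIncreaseQuotaPrivilege, CacheSet itself goes through the underlying native API, and the 64 MB cap and 60-second interval are arbitrary values of mine):

    #include <windows.h>
    #include <stdio.h>

    /* SetSystemFileCacheSize requires SeIncreaseQuotaPrivilege */
    static BOOL EnableQuotaPrivilege(void)
    {
        HANDLE tok;
        TOKEN_PRIVILEGES tp;
        BOOL ok = FALSE;

        if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &tok))
            return FALSE;
        if (LookupPrivilegeValueW(NULL, L"SeIncreaseQuotaPrivilege",
                                  &tp.Privileges[0].Luid)) {
            tp.PrivilegeCount = 1;
            tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
            AdjustTokenPrivileges(tok, FALSE, &tp, 0, NULL, NULL);
            ok = (GetLastError() == ERROR_SUCCESS);
        }
        CloseHandle(tok);
        return ok;
    }

    int main(void)
    {
        if (!EnableQuotaPrivilege()) return 1;

        /* cap the system file cache at 64 MB; since the cache creeps back up,
           re-apply the cap once a minute, like running CacheSet on a timer */
        for (;;) {
            if (!SetSystemFileCacheSize((SIZE_T)16 * 1024 * 1024,  /* minimum */
                                        (SIZE_T)64 * 1024 * 1024,  /* maximum */
                                        0))                        /* soft limit */
                printf("SetSystemFileCacheSize failed: %lu\n", GetLastError());
            Sleep(60 * 1000);
        }
    }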

Once again, I'll point to the _O_SEQUENTIAL flag: with it you can read a file the common way without system cache growth. How about it?

  • 1 month later...

Hey, I just wanted to ask: is this matter going to be solved in the near future? From what I've read so far it seems it's going to be a tough one, but is Windows cache bypassing going to be implemented in the next versions? I have the exact same problem with Azureus, so I prefer µTorrent since it's lighter. I'm temporarily using CachemanXP, but I hope it won't be necessary for long :)

Other than that, µTorrent's caching works like a breeze, and I guess this is the only thing left.
