- cross-posted to:
- linux@lemmy.ml
This is a relief to find! I just looked at htop
and panicked over the high amount of “used” memory.
This doesn’t explain what a disk cache (afaik often referred to as the page cache) is, so here goes: when any modern OS reads a file from disk, the file (or the parts of it that are actually read) gets copied to RAM. These “pages” only get thrown out (“dropped”) when the RAM is needed for something else, usually starting with the page that hasn’t been accessed for the longest time.
You can see the effect of this by opening a program (or a large file), closing it, and opening it again. The second time it will start faster because nothing needs to be read from disk; the pages are still cached in RAM.
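Here’s a quick C sketch (my own, not from the original article; the file path is a placeholder) that makes the effect visible: it reads the same file twice and prints the elapsed time for each pass. The first read may have to hit the disk, the second usually comes straight from the page cache.

```c
/* Sketch: time two consecutive reads of the same file. The second
 * pass is typically much faster because the pages are already in the
 * page cache. Point the placeholder path at any large file. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

static double read_whole_file(const char *path)
{
    char buf[1 << 16];
    struct timespec t0, t1;
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); exit(1); }

    clock_gettime(CLOCK_MONOTONIC, &t0);
    while (read(fd, buf, sizeof buf) > 0)
        ;  /* discard the data; we only care about timing */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    close(fd);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    const char *path = "/some/large/file";  /* placeholder */
    printf("cold read: %.3f s\n", read_whole_file(path));
    printf("warm read: %.3f s\n", read_whole_file(path));
    return 0;
}
```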
Since the pages in RAM are identical to the sectors on disk (unless the file has been modified in RAM by writing to it), they can be dropped immediately and the RAM reused for something else. The obvious downside is that a dropped file has to be read from disk again the next time it is needed.
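You can even ask the kernel which pages of a file are currently in the cache. A minimal sketch (again mine, placeholder path, assumes a non-empty file) using mmap(2) and mincore(2):

```c
/* Sketch: map a file and ask the kernel which of its pages are
 * currently resident in RAM (i.e. in the page cache). */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/some/file";  /* placeholder */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    long page = sysconf(_SC_PAGESIZE);
    size_t npages = (st.st_size + page - 1) / page;
    unsigned char *vec = malloc(npages);
    if (mincore(map, st.st_size, vec) < 0) { perror("mincore"); return 1; }

    size_t resident = 0;
    for (size_t i = 0; i < npages; i++)
        resident += vec[i] & 1;  /* low bit set => page is in RAM */

    printf("%zu of %zu pages resident in the page cache\n",
           resident, npages);
    return 0;
}
```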
How can I adjust my programs to utilize disk caches?
As I said, every file read from disk, be it an executable, an image, or anything else, gets cached in RAM automatically and always; you don’t have to do anything.
Having said that, if you read a file using read(2) (or any API that uses read() internally, which is most of them), you end up with two copies of the file in RAM: the one the OS put in the disk cache, and the one you created in your process’s memory. You can avoid this second copy by using mmap(2): the copy of the file in the disk cache gets mapped into your process’s address space, so the RAM is shared between your process and the disk cache.
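A minimal sketch of the mmap(2) approach (my own, hedged; the path is a placeholder): map the file and read it through the mapping, so the process shares the page-cache pages instead of copying them into its own buffer.

```c
/* Sketch: read a file through an mmap'd view. The mapping aliases
 * the page-cache pages, so no second copy is made. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/some/file";  /* placeholder */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    const char *data = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }
    close(fd);  /* the mapping stays valid after close */

    /* Use the contents directly, e.g. count newlines. */
    size_t lines = 0;
    for (off_t i = 0; i < st.st_size; i++)
        if (data[i] == '\n')
            lines++;
    printf("%zu lines\n", lines);

    munmap((void *)data, st.st_size);
    return 0;
}
```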
You can also give hints to the kernel’s page cache subsystem using posix_fadvise(2). Don’t, though, unless you know what you’re doing.
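For completeness, here’s what such a hint looks like (my sketch, placeholder path). POSIX_FADV_SEQUENTIAL tells the kernel you intend to read the file front to back, which may enable more aggressive readahead; it’s a hint, not a command, and the kernel is free to ignore it.

```c
/* Sketch: hint that the file will be read sequentially. Note that
 * posix_fadvise() returns the error number directly instead of
 * setting errno. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/some/file";  /* placeholder */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Whole file (offset 0, len 0 == to EOF), sequential access. */
    int err = posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
    if (err != 0)
        fprintf(stderr, "posix_fadvise: %s\n", strerror(err));

    /* ... read the file as usual ... */
    close(fd);
    return 0;
}
```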