Reading data from external storage devices such as hard disks is much slower than reading data from RAM, so Linux uses caching mechanisms to keep data in RAM once it has been read in and to access it from there. Page frames are the natural units on which the page cache operates, and I have discussed in this chapter how the kernel keeps track of which portions of a block device are cached in RAM. You have been introduced to the concept of address spaces, which allow cached data to be linked with their source, and to how address spaces are manipulated and queried. Following that, I have examined the algorithms employed by Linux to handle the technical details of bringing content into the page cache.
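The core idea can be illustrated with a small userspace sketch. This is emphatically not kernel code: the real kernel stores pages of an address space in a radix tree and looks them up with functions such as find_get_page(); the toy structure, slot count, and function names below are invented for illustration only. The sketch shows the essential behavior: a lookup keyed by (address space, page index) that avoids touching the slow device on a cache hit.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 16          /* toy capacity; the real cache is limited only by RAM */

/* A cached page frame holding one page-sized chunk of a file or block device. */
struct page {
    unsigned long index;        /* which page of the backing object this caches */
    int valid;
    char data[64];
};

/* Simplified stand-in for the kernel's struct address_space: it links
 * cached pages with their source by mapping page indices to page frames. */
struct toy_address_space {
    struct page pages[CACHE_SLOTS];
};

/* Look up a page in the cache; returns NULL on a miss. */
struct page *cache_lookup(struct toy_address_space *as, unsigned long index)
{
    struct page *p = &as->pages[index % CACHE_SLOTS];
    return (p->valid && p->index == index) ? p : NULL;
}

/* Return the requested page, simulating a slow device read on a miss
 * and serving repeated requests from RAM thereafter. */
struct page *read_page(struct toy_address_space *as, unsigned long index,
                       int *disk_reads)
{
    struct page *p = cache_lookup(as, index);
    if (p)
        return p;               /* cache hit: no device access needed */

    p = &as->pages[index % CACHE_SLOTS];
    (*disk_reads)++;            /* cache miss: "read" from the device */
    p->index = index;
    p->valid = 1;
    snprintf(p->data, sizeof(p->data), "block %lu", index);
    return p;
}
```

Reading the same page twice triggers only one simulated device access; every further access is satisfied from the cache, which is precisely the speed advantage the page cache provides.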

Traditionally, Unix caches used smaller units than complete pages, and this technique survives to this day in the form of the buffer cache. While the main caching load is handled by the page cache, there are still some users of the buffer cache, and you have therefore also been introduced to the corresponding mechanisms.

Using RAM to cache data read from a disk is one aspect of the interaction between RAM and disks, but there is another side to the story: the kernel must also take care of synchronizing modified data in RAM back to the persistent storage on disk. The next chapter will introduce you to the corresponding mechanisms.
