Structure of the Page Cache

As its name suggests, the page cache deals with memory pages, the fixed-size units into which virtual memory and physical RAM are divided. This not only makes it easier for the kernel to manipulate the large address space, but also supports a whole series of functions such as paging, demand loading, memory mapping, and the like. The task of the page cache is to use some of the available physical page frames to speed up the operations performed on block devices on a page basis. Of course, the way the page cache behaves is transparent to user applications, as they do not know whether they are interacting directly with a block device or with a copy of their data held in memory — the read and write system calls return identical results in both cases.

Naturally, the situation is somewhat different for the kernel. In order to support the use of cached pages, hooks must be placed at the various points in the code that interact with the page cache. The operation requested by the user process must always be performed, regardless of whether the desired page resides in the cache or not. When a cache hit occurs, the appropriate action is performed quickly (this is the very purpose of the cache). In the event of a cache miss, the required page must first be read from the underlying block device, and this takes longer. Once the page has been read, it is inserted into the cache and is, therefore, quickly available for subsequent accesses.

The time spent searching for a page in the page cache must be minimized to ensure that cache misses are as cheap as possible — if a miss occurs, the compute time needed to perform the search is (more or less) wasted. The efficient organization of the cached pages is, therefore, a key aspect of page cache design.
