Synchronizing data with the underlying block device, as described in the previous chapter, eases the kernel's situation when the limit of available RAM has been reached. Writing back cached data allows some memory pages to be released in order to make RAM available for more important functions. Since the data involved can be read in again from the block device whenever required, this costs time, but no information is lost.

Naturally, this procedure also has its limits. At some point the caches and buffers can no longer be shrunk any further. Furthermore, the approach does not work for pages whose content is generated dynamically and that have no backing store.

Since typical systems (with the exception of some embedded or handheld devices) generally provide considerably more hard disk capacity than RAM, the kernel can, in conjunction with the processor's ability to manage virtual address spaces larger than the existing RAM, "commandeer" parts of the disk and use them as a memory expansion. Because hard disks are considerably slower than RAM, swapping is purely an emergency solution that keeps the system running, but at considerably reduced speed.

The term swapping originally referred to swapping out an entire process, with all its data, program code, and the like, rather than the page-by-page, selective exporting of process data to secondary storage. While this strategy was adopted in very early versions of Unix, where it may sometimes have been appropriate, such behavior is inconceivable today: the resulting latencies during context switching would make interactive work not just sluggish but intolerably slow. In the following, however, no distinction is made between swapping and paging; both stand for the fine-grained swapping-out of process data. This usage is now established not just among experts but also (and above all) in the kernel sources.

Two questions must be answered when considering how to implement swapping and page reclaim in the kernel:

1. According to what scheme should pages be reclaimed; that is, how does the kernel decide which pages it should reclaim in order to ensure maximum possible benefit and least possible disadvantage?

2. How are pages that have been swapped out organized in the swap area, and how does the kernel write pages to the swap area and read them in again later? How does it synchronize pages with their backing device?
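To make the second question more concrete, the following toy sketch models a swap area as a fixed set of page-sized slots plus an allocation map. All names and structures here are illustrative simplifications invented for this example, not the kernel's actual data structures; they only show the basic bookkeeping problem of finding a free slot on swap-out and releasing it on swap-in.

```python
# Toy model of a swap area: page-sized slots plus a usage map.
# Illustrative only; the kernel's real swap management is far richer.

PAGE_SIZE = 4096

class SwapArea:
    def __init__(self, nr_slots):
        self.slots = [None] * nr_slots   # simulated on-disk page slots
        self.used = [False] * nr_slots   # which slots are allocated

    def swap_out(self, page_data):
        """Find a free slot, store the page there, return the slot number."""
        for i, in_use in enumerate(self.used):
            if not in_use:
                self.used[i] = True
                self.slots[i] = page_data
                return i
        raise MemoryError("swap area full")

    def swap_in(self, slot):
        """Read a page back into memory and release its slot."""
        data = self.slots[slot]
        self.used[slot] = False
        self.slots[slot] = None
        return data

area = SwapArea(nr_slots=4)
slot = area.swap_out(b"\x00" * PAGE_SIZE)
restored = area.swap_in(slot)
print(slot, restored == b"\x00" * PAGE_SIZE)
```

The slot number returned by `swap_out` plays the role of the address under which a swapped-out page can later be found again; the kernel must record it in place of the page's former location in memory.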

The question as to which memory pages are swapped out and which remain in RAM is crucial to system performance. If the kernel selects a frequently used page, a page frame is indeed freed briefly for other purposes. However, because the original data are soon needed again, yet another page must be swapped out to create a free frame for the data that have just been swapped out and are now required again. This is obviously inefficient and must be prevented.
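The cost of a poor selection scheme can be made visible with a small simulation. The sketch below replays an invented access pattern, in which one "hot" page is touched between every access to colder pages, against two policies: evicting the least recently used page (LRU, a reasonable heuristic) and its pathological opposite, evicting the most recently used page. The pattern, frame count, and function names are all illustrative assumptions, not kernel code.

```python
# Toy page-fault simulation: why evicting frequently used pages hurts.
from collections import OrderedDict

def count_faults(accesses, frames, evict_lru=True):
    """Count page faults for a reference string with a given frame count."""
    resident = OrderedDict()            # page -> None, ordered by recency
    faults = 0
    for page in accesses:
        if page in resident:
            resident.move_to_end(page)  # mark as most recently used
        else:
            faults += 1
            if len(resident) >= frames:
                # last=False evicts the least recently used page,
                # last=True evicts the most recently used one
                resident.popitem(last=not evict_lru)
            resident[page] = None
    return faults

# Page 0 is "hot": it is touched between every access to pages 1-3.
pattern = [0, 1, 0, 2, 0, 3, 0, 1, 0, 2, 0, 3]

print(count_faults(pattern, frames=2, evict_lru=True))   # LRU: 7 faults
print(count_faults(pattern, frames=2, evict_lru=False))  # MRU: 9 faults
```

Under LRU the hot page 0 is never evicted, whereas evicting the most recently used page repeatedly throws it out and immediately faults it back in, which is exactly the behavior described above.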
