mm/memory.c

void swapin_readahead(swp_entry_t entry, unsigned long addr,
                      struct vm_area_struct *vma)
{
        int i, num;
        struct page *new_page;
        unsigned long offset;

        /*
         * Get the number of handles we should do readahead io to.
         */
        num = valid_swaphandles(entry, &offset);
        for (i = 0; i < num; offset++, i++) {
                /* Ok, do the async read-ahead now */
                new_page = read_swap_cache_async(swp_entry(swp_type(entry),
                                                           offset), vma, addr);
                if (!new_page)
                        break;
                page_cache_release(new_page);
        }
        lru_add_drain();        /* Push any new pages onto the LRU now */
}

The kernel invokes valid_swaphandles to calculate the number of readahead pages. Typically, 2^page_cluster pages are read, where page_cluster is a global variable that is set to 2 on systems with less than 16 MiB of memory and to 3 on all others. This produces a readahead window of four or eight pages (/proc/sys/vm/page-cluster allows the variable to be tuned from userspace; setting it to zero disables swap-in readahead entirely). However, the value calculated by valid_swaphandles must be reduced in the following situations:

□ If the requested page is near the end of the swap area, the number of readahead pages must be reduced to prevent reading beyond the area boundary.

□ If the readahead window includes free or unused pages, the kernel reads only the valid pages that precede them.

read_swap_cache_async successively submits read requests for the selected pages to the block layer. If the function returns a null pointer because no memory page could be allocated, the kernel aborts the readahead: clearly no memory is available for further pages, so readahead matters less than the memory shortage prevailing in the system.
