mm/vmscan.c

if (nr_freed < nr_taken && !current_is_kswapd() &&
        sc->order > PAGE_ALLOC_COSTLY_ORDER) {
        congestion_wait(WRITE, HZ/10);
        ...
        nr_freed += shrink_page_list(&page_list, sc,
                                     PAGEOUT_IO_SYNC);
}

If not all pages that were supposed to be reclaimed could actually be reclaimed, that is, if nr_freed < nr_taken, some pages on the list were locked and could not be written out in asynchronous mode.13 If the kernel is performing the current pass in direct reclaim mode, that is, was not called from the swapping daemon kswapd, and is reclaiming memory to fulfill an allocation whose order exceeds PAGE_ALLOC_COSTLY_ORDER, then it first waits for any congestion on the block devices to settle. Afterward, another writeout pass is performed, this time in synchronous mode. This has the drawback that such higher-order allocations are somewhat delayed, but since they are comparatively rare, the delay is acceptable. Allocations of order PAGE_ALLOC_COSTLY_ORDER or less, which arise much more frequently, are not disturbed.
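
For orientation: PAGE_ALLOC_COSTLY_ORDER is defined as 3 in <linux/mmzone.h>, so the synchronous retry can only be triggered by requests for more than 2^3 = 8 contiguous pages. The following user-space sketch (not kernel code; the 4 KiB page size is an assumption) tabulates which allocation orders fall on which side of the threshold:

#include <stdio.h>

#define PAGE_ALLOC_COSTLY_ORDER 3      /* value from <linux/mmzone.h> */
#define PAGE_SIZE               4096UL /* assumption: 4 KiB pages */

int main(void)
{
        unsigned int order;

        /* An allocation of order n requests 2^n contiguous pages. */
        for (order = 0; order <= 5; order++)
                printf("order %u: %3lu KiB%s\n", order,
                       (PAGE_SIZE << order) / 1024,
                       order > PAGE_ALLOC_COSTLY_ORDER
                       ? "  <- eligible for the synchronous pass" : "");
        return 0;
}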

Finally, the pages that could not be reclaimed must be returned to the LRU lists. Lumpy reclaim and failed writeout attempts might have left active pages on the local list, so both the active and the inactive LRU list are possible destinations. To preserve the LRU order, the kernel iterates over the local list from tail to head. Depending on whether a page is active or not, it is returned to the start of the appropriate LRU list with add_page_to_active_list or add_page_to_inactive_list, respectively. Once again, the usage counter of each page must be decremented by 1 because it was incremented accordingly when the pages were isolated at the start of the procedure. The now familiar page vectors are used to make this as fast as possible because they process the pages in batches.
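
Condensed, the put-back loop at the end of shrink_inactive_list looks roughly as follows; this is a sketch along the lines of the kernel sources discussed here, with the statistics updates omitted:

while (!list_empty(&page_list)) {
        /* lru_to_page yields the tail element, so the local list is
         * walked from tail to head, preserving the LRU order. */
        page = lru_to_page(&page_list);
        VM_BUG_ON(PageLRU(page));
        SetPageLRU(page);
        list_del(&page->lru);
        if (PageActive(page))
                add_page_to_active_list(zone, page);
        else
                add_page_to_inactive_list(zone, page);
        /* Collect the pages in a pagevec; when it is full, release it
         * in one go. This drops the extra reference taken when the
         * pages were isolated from the LRU lists. */
        if (!pagevec_add(&pvec, page)) {
                spin_unlock_irq(&zone->lru_lock);
                __pagevec_release(&pvec);
                spin_lock_irq(&zone->lru_lock);
        }
}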

13There can also be other reasons for this, a failed writeout, for instance, but the reason mentioned above is the essential one.
