mm/vmscan.c

        nr_taken = isolate_lru_pages(sc->swap_cluster_max,
                     &zone->inactive_list,
                     &page_list, &nr_scan, sc->order,
                     (sc->order > PAGE_ALLOC_COSTLY_ORDER)?
                             ISOLATE_BOTH : ISOLATE_INACTIVE);
        nr_active = clear_active_flags(&page_list);
        /* Handle page accounting */
        ...
        nr_freed = shrink_page_list(&page_list, sc, PAGEOUT_IO_ASYNC);

Recall that isolate_lru_pages also picks pages that are physically adjacent to the page frame of a page taken from the LRU list if lumpy reclaim is used. If the allocation order of the request that led to the current reclaim pass is larger than the threshold order given by PAGE_ALLOC_COSTLY_ORDER, lumpy reclaim is allowed to use both active and inactive pages when picking the pages surrounding the tag page. For small allocation orders, only inactive pages may be used. The reason is that larger allocations usually cannot be satisfied if the kernel is restricted to inactive pages: the chance that an active page is contained in a large physically contiguous interval is simply too high on a busy system. PAGE_ALLOC_COSTLY_ORDER is set to 3 by default, which means that the kernel considers allocations of 8 or more contiguous pages costly.
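The following fragment is a simplified sketch of the lumpy-reclaim part of isolate_lru_pages as it looked around kernel 2.6.24; locking, statistics, and error handling are omitted, and the variable names (page, order, mode, dst, nr_taken) mirror the parameters and locals of the real function rather than being quoted verbatim:

        /* Sketch: scan the 2^order-aligned block around the tag page. */
        unsigned long pfn, page_pfn, end_pfn;
        int zone_id;

        zone_id  = page_zone_id(page);              /* zone of the tag page      */
        page_pfn = page_to_pfn(page);
        pfn      = page_pfn & ~((1 << order) - 1);  /* align down to the block   */
        end_pfn  = pfn + (1 << order);              /* block of 2^order frames   */

        for (; pfn < end_pfn; pfn++) {
                struct page *cursor_page;

                if (pfn == page_pfn)                /* tag page already isolated */
                        continue;
                if (!pfn_valid_within(pfn))
                        break;

                cursor_page = pfn_to_page(pfn);
                if (page_zone_id(cursor_page) != zone_id)
                        break;                      /* never cross zone borders  */

                /*
                 * mode is ISOLATE_BOTH for costly orders and ISOLATE_INACTIVE
                 * otherwise, so active neighbors are only taken for large
                 * allocation requests.
                 */
                if (__isolate_lru_page(cursor_page, mode) == 0) {
                        list_move(&cursor_page->lru, dst);
                        nr_taken++;
                }
        }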

Although all pages on the inactive list are guaranteed to be inactive, lumpy reclaim can therefore lead to active pages on the result list of isolate_lru_pages. To account for these pages properly, the auxiliary function clear_active_flags iterates over all pages on the list, counts the active ones, and clears the PG_active flag on each of them. Finally, the page list can be passed on to shrink_page_list for writeout. Notice that the asynchronous mode (PAGEOUT_IO_ASYNC) is employed.
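The helper is short; the following is roughly what clear_active_flags looked like in this kernel generation (a sketch, not a verbatim copy of the source):

        /*
         * Count the active pages that lumpy reclaim moved onto the isolated
         * list and clear their PG_active bit so LRU accounting stays correct.
         */
        static unsigned long clear_active_flags(struct list_head *page_list)
        {
                unsigned long nr_active = 0;
                struct page *page;

                list_for_each_entry(page, page_list, lru)
                        if (PageActive(page)) {
                                ClearPageActive(page);
                                nr_active++;
                        }

                return nr_active;
        }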

Notice that it is not certain that all pages selected for reclaim can actually be reclaimed. shrink_page_list leaves such pages on the passed list and returns the number of pages that could actually be freed. This figure must be added to the total number of reclaimed pages to determine when the reclaim work may be terminated.
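How the return value is consumed can be sketched as follows; this is simplified, since in the real kernel the counter (here called nr_reclaimed, as in shrink_inactive_list) and the termination check are spread over several functions:

        nr_freed = shrink_page_list(&page_list, sc, PAGEOUT_IO_ASYNC);
        nr_reclaimed += nr_freed;       /* running total of freed pages */

        /*
         * The callers of shrink_inactive_list eventually compare this total
         * against the reclaim target (sc->swap_cluster_max) and stop
         * scanning once enough pages have been freed.
         */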

Direct reclaim requires one more step:
