Growing the Cache

Figure 3-51 shows the code flow diagram for cache_grow.

Figure 3-51: Code flow diagram for cache_grow.

The arguments of kmem_cache_alloc are passed to cache_grow. It is also possible to specify an explicit node from which the fresh memory pages are to be supplied.

The color and offset are first calculated:

mm/slab.c
static int cache_grow(struct kmem_cache *cachep, gfp_t flags, int nodeid,
                      void *objp)
{
...
        l3 = cachep->nodelists[nodeid];
...
        offset = l3->colour_next;
        l3->colour_next++;
        if (l3->colour_next >= cachep->colour)
                l3->colour_next = 0;
...
        offset *= cachep->colour_off;
...

The kernel restarts counting at 0 when the maximum number of colors is reached; this automatically results in a zero offset.
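To make the arithmetic concrete, the following stand-alone sketch simulates the color rotation with hypothetical values (four colors and an offset unit of 32 bytes, the size of a typical L1 cache line); neither the numbers nor the program are taken from the kernel sources:

#include <stdio.h>

int main(void)
{
        unsigned int colour = 4;        /* hypothetical number of colors */
        unsigned int colour_off = 32;   /* hypothetical offset unit in bytes */
        unsigned int colour_next = 0;
        int slab;

        /* Compute the starting offsets of eight successively grown slabs */
        for (slab = 0; slab < 8; slab++) {
                unsigned int offset = colour_next++;
                if (colour_next >= colour)
                        colour_next = 0;        /* wrap around to color 0 */
                offset *= colour_off;
                printf("slab %d starts at offset %u\n", slab, offset);
        }
        return 0;
}

The offsets cycle through 0, 32, 64, and 96 bytes before the sequence restarts at 0.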

The required memory space is allocated page-by-page from the buddy system using the kmem_getpages helper function. The sole purpose of this function is to invoke the alloc_pages_node function discussed in Section 3.5.4 with the appropriate parameters. The PG_slab bit is also set on each page to indicate that the page belongs to the slab allocator. When a slab is used to satisfy short-lived or reclaimable allocations, the flag __GFP_RECLAIMABLE is passed down to the buddy system. Recall from Section 3.5.2 that this is important to allocate the pages from the appropriate migrate list.
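Condensed to its essentials (the zone statistics accounting and a special case for nommu systems are omitted), kmem_getpages looks roughly as follows:

mm/slab.c
static void *kmem_getpages(struct kmem_cache *cachep, gfp_t flags, int nodeid)
{
        struct page *page;
        int nr_pages;
        int i;

        flags |= cachep->gfpflags;
        if (cachep->flags & SLAB_RECLAIM_ACCOUNT)
                flags |= __GFP_RECLAIMABLE;

        page = alloc_pages_node(nodeid, flags, cachep->gfporder);
        if (!page)
                return NULL;

        /* Mark each page of the 2^gfporder allocation as a slab page */
        nr_pages = (1 << cachep->gfporder);
        for (i = 0; i < nr_pages; i++)
                __SetPageSlab(page + i);
        return page_address(page);
}

Caches whose objects are reclaimable announce this with SLAB_RECLAIM_ACCOUNT at creation time; the check above translates this cache property into the __GFP_RECLAIMABLE allocation flag.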

The allocation of the management head for the slab is not very exciting. The relevant alloc_slabmgmt function reserves the required space if the head is stored off-slab; if not, the space is already reserved on the slab. In both situations, the colouroff, s_mem, and inuse elements of the slab data structure must be initialized with the appropriate values.
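Abridged, the function reads roughly like this:

mm/slab.c
static struct slab *alloc_slabmgmt(struct kmem_cache *cachep, void *objp,
                                   int colour_off, gfp_t local_flags,
                                   int nodeid)
{
        struct slab *slabp;

        if (OFF_SLAB(cachep)) {
                /* The management head is stored off-slab */
                slabp = kmem_cache_alloc_node(cachep->slabp_cache,
                                              local_flags & ~GFP_THISNODE, nodeid);
                if (!slabp)
                        return NULL;
        } else {
                /* The head lives on the slab itself, right after the color offset */
                slabp = objp + colour_off;
                colour_off += cachep->slab_size;
        }
        slabp->inuse = 0;
        slabp->colouroff = colour_off;
        slabp->s_mem = objp + colour_off;
        slabp->nodeid = nodeid;
        return slabp;
}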

The kernel then establishes the associations between the pages of the slab and the slab or cache structure by invoking slab_map_pages. This function iterates over all page instances of the pages newly allocated for the slab and invokes page_set_cache and page_set_slab for each page. These two functions manipulate (or misuse) the lru element of a page instance as follows:

mm/slab.c
static inline void page_set_cache(struct page *page, struct kmem_cache *cache)
{
        page->lru.next = (struct list_head *)cache;
}

static inline void page_set_slab(struct page *page, struct slab *slab)
{
        page->lru.prev = (struct list_head *)slab;
}
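Stripped of sanity checks for compound pages, the counterpart functions simply perform the reverse cast; they are used, for example, when an object is freed and the kernel must find the owning cache and slab for a given address:

mm/slab.c
static inline struct kmem_cache *page_get_cache(struct page *page)
{
        return (struct kmem_cache *)page->lru.next;
}

static inline struct slab *page_get_slab(struct page *page)
{
        return (struct slab *)page->lru.prev;
}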

cache_init_objs initializes the objects of the new slab by invoking the constructor for each object, if one is present. (As only very few parts of the kernel make use of this option, there is normally little to do in this respect.) The kmem_bufctl list of the slab is also initialized by storing the value i + 1 at array position i: because the slab is as yet totally unused, the next free element is always the next consecutive element. As per convention, the last array element holds the constant BUFCTL_END.
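Without the debugging variants, cache_init_objs reduces to roughly the following loop:

mm/slab.c
static void cache_init_objs(struct kmem_cache *cachep,
                            struct slab *slabp)
{
        int i;

        for (i = 0; i < cachep->num; i++) {
                void *objp = index_to_obj(cachep, slabp, i);

                /* Run the constructor if the cache provides one */
                if (cachep->ctor)
                        cachep->ctor(cachep, objp);
                /* Free list: the successor of object i is object i + 1 */
                slab_bufctl(slabp)[i] = i + 1;
        }
        slab_bufctl(slabp)[i - 1] = BUFCTL_END;
        slabp->free = 0;
}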

The slab is now fully initialized and can be added to the slabs_free list of the cache. The number of newly created objects is also added to the count of free objects on the node (l3->free_objects).
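The corresponding lines at the end of cache_grow are, roughly:

mm/slab.c
        spin_lock(&l3->list_lock);
        /* Enqueue the fresh slab; all of its objects are still free */
        list_add_tail(&slabp->list, &(l3->slabs_free));
        STATS_INC_GROWN(cachep);
        l3->free_objects += cachep->num;
        spin_unlock(&l3->list_lock);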
