Releasing an Object from a Cache

The kmem_cache_free( ) function releases an object previously obtained by the slab allocator. Its parameters are cachep (the address of the cache descriptor) and objp (the address of the object to be released). As with kmem_cache_alloc( ), we discuss the uniprocessor case separately from the multiprocessor case.
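As a reminder of how these functions are paired in practice, the following fragment sketches a typical allocation/release cycle as a 2.4-era kernel module might perform it. The my_struct type, the my_cachep variable, and the cache name are invented for illustration; only kmem_cache_alloc( ), kmem_cache_free( ), and the SLAB_KERNEL flag come from the interface discussed here.

    #include <linux/slab.h>

    struct my_struct {                 /* hypothetical object type */
        int id;
        char name[16];
    };

    /* cache descriptor, assumed to have been created elsewhere with
       kmem_cache_create("my_struct_cache", sizeof(struct my_struct),
                         0, 0, NULL, NULL) */
    static kmem_cache_t *my_cachep;

    static void example(void)
    {
        struct my_struct *obj;

        /* get an object from the cache (may sleep with SLAB_KERNEL) */
        obj = kmem_cache_alloc(my_cachep, SLAB_KERNEL);
        if (!obj)
            return;
        obj->id = 1;

        /* return the object to the cache: the path analyzed in this section */
        kmem_cache_free(my_cachep, obj);
    }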

7.2.13.1 The uniprocessor case

The function starts by disabling the local interrupts and then determines the address of the descriptor of the slab containing the object. It uses the list.prev subfield of the descriptor of the page frame storing the object:

    slab_t * slabp;
    unsigned int objnr;

    local_irq_save(save_flags);
    slabp = (slab_t *) mem_map[__pa(objp) >> PAGE_SHIFT].list.prev;

Then the function computes the index of the object inside its slab, derives the address of its object descriptor, and adds the object to the head of the slab's list of free objects:

    objnr = (objp - slabp->s_mem) / cachep->objsize;
    ((kmem_bufctl_t *)(slabp+1))[objnr] = slabp->free;
    slabp->free = objnr;
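The kmem_bufctl_t array placed right after the slab descriptor implements an index-linked free list, so freeing an object amounts to pushing its index on the head of that list. The following user-space fragment is only an illustration of this push/pop-by-index technique; the names, the fixed slab size, and the END_MARKER value are invented stand-ins (the kernel uses BUFCTL_END).

    #include <stdio.h>

    #define NUM_OBJS   4
    #define END_MARKER NUM_OBJS          /* stand-in for the kernel's BUFCTL_END */

    static unsigned int bufctl[NUM_OBJS];   /* free-list links, one per object */
    static unsigned int free_head;          /* index of the first free object */

    /* push object 'objnr' on the head of the free list (what the freeing code does) */
    static void push_free(unsigned int objnr)
    {
        bufctl[objnr] = free_head;
        free_head = objnr;
    }

    /* pop the first free object (what the allocation path does) */
    static int pop_free(void)
    {
        unsigned int objnr;

        if (free_head == END_MARKER)
            return -1;                   /* no free object in this slab */
        objnr = free_head;
        free_head = bufctl[objnr];
        return objnr;
    }

    int main(void)
    {
        unsigned int i;

        /* initially every object is free: 0 -> 1 -> 2 -> 3 -> END */
        free_head = 0;
        for (i = 0; i < NUM_OBJS; i++)
            bufctl[i] = (i + 1 < NUM_OBJS) ? i + 1 : END_MARKER;

        printf("allocated %d\n", pop_free());   /* 0 */
        printf("allocated %d\n", pop_free());   /* 1 */
        push_free(0);                           /* free object 0 again */
        printf("allocated %d\n", pop_free());   /* 0, the freshly pushed head */
        return 0;
    }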

Finally, the function checks whether the slab has to be moved to another list:

    if (--slabp->inuse == 0) {
        /* slab is now fully free */
        list_del(&slabp->list);
        list_add(&slabp->list, &cachep->slabs_free);
    } else if (slabp->inuse+1 == cachep->num) {
        /* slab was full */
        list_del(&slabp->list);
        list_add(&slabp->list, &cachep->slabs_partial);
    }
    local_irq_restore(save_flags);
    return;
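The slab migration above relies on the kernel's doubly linked list primitives: list_del( ) unlinks the slab descriptor from whichever list currently holds it, and list_add( ) inserts it at the head of the destination list. The following user-space sketch reimplements just enough of that pattern to show a slab descriptor moving from a full list to a partial list; the fake_slab structure and all the list names are invented for the example.

    #include <stdio.h>

    struct list_head {
        struct list_head *next, *prev;
    };

    #define LIST_HEAD_INIT(name) { &(name), &(name) }

    /* insert 'new' right after 'head', as the kernel's list_add( ) does */
    static void list_add(struct list_head *new, struct list_head *head)
    {
        new->next = head->next;
        new->prev = head;
        head->next->prev = new;
        head->next = new;
    }

    /* unlink 'entry' from the list it belongs to */
    static void list_del(struct list_head *entry)
    {
        entry->prev->next = entry->next;
        entry->next->prev = entry->prev;
    }

    struct fake_slab {
        struct list_head list;   /* plays the role of slabp->list */
        unsigned int inuse;
    };

    int main(void)
    {
        struct list_head slabs_full    = LIST_HEAD_INIT(slabs_full);
        struct list_head slabs_partial = LIST_HEAD_INIT(slabs_partial);
        struct fake_slab slab = { LIST_HEAD_INIT(slab.list), 3 };

        list_add(&slab.list, &slabs_full);      /* slab starts out full */

        /* an object is freed: the slab is no longer full, so move it */
        slab.inuse--;
        list_del(&slab.list);
        list_add(&slab.list, &slabs_partial);

        printf("slab now on the partial list: %s\n",
               slabs_partial.next == &slab.list ? "yes" : "no");
        return 0;
    }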

7.2.13.2 The multiprocessor case

The function starts by disabling the local interrupts; then it checks whether there is a free slot in the local array of object pointers:

    cpucache_t * cc;

    local_irq_save(save_flags);
    cc = cachep->cpudata[smp_processor_id( )];
    if (cc->avail == cc->limit) {
        cc->avail -= cachep->batchcount;
        free_block(cachep, &((void **)(cc+1))[cc->avail],
                   cachep->batchcount);
    }
    ((void **)(cc+1))[cc->avail++] = objp;
    local_irq_restore(save_flags);
    return;

If there is at least one free slot in the local array, the function just sets it to the address of the object being freed. Otherwise, the function invokes free_block( ) to release a bunch of cachep->batchcount objects to the slab allocator cache.
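The local array of object pointers is stored right after the cpucache_t descriptor itself, which is why the code indexes (void **)(cc+1). The user-space sketch below illustrates that layout and the "flush batchcount entries when the array is full" policy; the cpucache_like_t type, the flush_to_slabs( ) function, and the array sizes are invented stand-ins, not kernel code.

    #include <stdio.h>
    #include <stdlib.h>

    #define LIMIT      4     /* plays the role of cc->limit */
    #define BATCHCOUNT 2     /* plays the role of cachep->batchcount */

    typedef struct {
        unsigned int avail;  /* number of pointers currently cached */
        unsigned int limit;
    } cpucache_like_t;

    /* stand-in for free_block( ): give 'len' objects back to the slabs */
    static void flush_to_slabs(void **objpp, int len)
    {
        while (len-- > 0)
            printf("returning %p to its slab\n", *objpp++);
    }

    static void local_free(cpucache_like_t *cc, void *objp)
    {
        void **entries = (void **)(cc + 1);   /* array lives right after the header */

        if (cc->avail == cc->limit) {         /* local array full: flush a batch */
            cc->avail -= BATCHCOUNT;
            flush_to_slabs(&entries[cc->avail], BATCHCOUNT);
        }
        entries[cc->avail++] = objp;          /* cache the freed pointer locally */
    }

    int main(void)
    {
        cpucache_like_t *cc = malloc(sizeof(*cc) + LIMIT * sizeof(void *));
        int dummy[LIMIT + 1];
        int i;

        cc->avail = 0;
        cc->limit = LIMIT;
        for (i = 0; i <= LIMIT; i++)          /* the last free triggers a flush */
            local_free(cc, &dummy[i]);
        free(cc);
        return 0;
    }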

The free_block(cachep,objpp,len) function acquires the cache spin lock and then releases len objects starting from the local array entry at address objpp:

    spin_lock(&cachep->spinlock);
    for ( ; len > 0; len--, objpp++) {
        slab_t * slabp = (slab_t *)
                mem_map[__pa(*objpp) >> PAGE_SHIFT].list.prev;
        unsigned int objnr = (*objpp - slabp->s_mem) / cachep->objsize;

        ((kmem_bufctl_t *)(slabp+1))[objnr] = slabp->free;
        slabp->free = objnr;
        if (--slabp->inuse == 0) {
            /* slab is now fully free */
            list_del(&slabp->list);
            list_add(&slabp->list, &cachep->slabs_free);
        } else if (slabp->inuse+1 == cachep->num) {
            /* slab was full */
            list_del(&slabp->list);
            list_add(&slabp->list, &cachep->slabs_partial);
        }
    }
    spin_unlock(&cachep->spinlock);

The code that releases the objects to the slabs is identical to that of the uniprocessor case, so we don't discuss it further.

