Cache Organization

The dentry structures not only make it easier to work with filesystem structures, but are also crucial to system performance. They accelerate work with the VFS by keeping communication with the underlying filesystem implementations to a minimum.

Each request forwarded to the underlying implementations by the VFS leads to the creation of a new dentry object to hold the request results. These objects are held in a cache so that they can be accessed more quickly the next time they are needed. How is the cache organized? It comprises the following two components to organize dentry objects in memory (both are outlined in the sketch following the list):

1. A hash table (dentry_hashtable) containing all dentry objects.

2. An LRU (least recently used) list in which objects no longer used are granted a last period of grace before they are removed from memory.
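
Expressed in heavily simplified data structures, the two components look roughly as follows. The names d_count, d_hash, d_lru, d_parent, dentry_hashtable, and dentry_unused are those used in the kernel version described here (some of these details have changed in newer kernels); everything else, in particular the flattened name field, is abbreviated for illustration and does not reproduce the real definitions from <linux/dcache.h> and fs/dcache.c.

    #include <linux/list.h>
    #include <linux/atomic.h>

    /* Heavily trimmed view of struct dentry: only the members relevant to
     * the cache organization are shown, and the name is flattened to a
     * plain string (the real structure uses a struct qstr). */
    struct dentry {
            atomic_t          d_count;   /* usage counter                       */
            struct hlist_node d_hash;    /* chaining within a hash table bucket */
            struct list_head  d_lru;     /* linkage on the unused (LRU) list    */
            struct dentry    *d_parent;  /* dentry of the parent directory      */
            const char       *d_name;    /* simplified name representation      */
            /* ... inode pointer, dentry operations, and more ... */
    };

    /* 1. the hash table: an array of bucket list heads */
    static struct hlist_head *dentry_hashtable;

    /* 2. the LRU list of currently unused dentry instances */
    static LIST_HEAD(dentry_unused);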

Recall that the hash table is implemented in keeping with the classical pattern. The function d_hash from fs/dcache.c is used to determine the hash position of a dentry object.
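
The fragment below, which continues the sketch above, only illustrates the idea behind this placement; the real d_hash() mixes the parent dentry pointer into the name hash with constants and shift amounts that differ between kernel versions, so the SKETCH_ parameters and the helper name are purely hypothetical.

    #define SKETCH_D_HASHBITS 14   /* illustrative table size: 2^14 buckets */
    #define SKETCH_D_HASHMASK ((1UL << SKETCH_D_HASHBITS) - 1)

    static struct hlist_head *sketch_d_hash(struct dentry *parent,
                                            unsigned long hash)
    {
            /* mix in the parent pointer so that identical names in
             * different directories end up in different buckets */
            hash += (unsigned long)parent;
            hash ^= hash >> SKETCH_D_HASHBITS;
            return dentry_hashtable + (hash & SKETCH_D_HASHMASK);
    }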

Handling of the LRU list is a bit trickier. The list is headed by the global variable dentry_unused, and the objects it contains are linked by the d_lru element of struct dentry.

dentry objects are placed on the LRU list when their usage counter (d_count) reaches 0, which indicates that no application is actively using the object. New entries are always placed at the beginning of the list, so the further back an entry is in the list, the older it is: the classic LRU principle. The prune_dcache function is invoked from time to time, for instance, when a filesystem is unmounted or when the kernel needs more memory; it removes old objects and frees their memory.

Note that dentry objects can temporarily sit on the unused list even though they are in active use and their usage count is greater than zero. This is the result of an optimization: when a dentry on the unused list comes back into use, it is not immediately taken off the list, because doing so would require extra locking and thus cost performance. Operations such as prune_dcache, which are costly anyway, make up for this: when they encounter an object with a positive usage count, they merely remove it from the list but do not free it.
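
The two hypothetical helpers below model this life cycle on top of the structures sketched earlier; the real dput() and prune_dcache() in fs/dcache.c additionally take care of locking, parent references, and the filesystem-specific dentry operations.

    /* hypothetical helper that would unhash the object and free its memory */
    static void sketch_free_dentry(struct dentry *dentry);

    /* Called when a reference to a dentry is dropped. */
    static void sketch_dput(struct dentry *dentry)
    {
            if (!atomic_dec_and_test(&dentry->d_count))
                    return;                 /* still referenced elsewhere */

            /* no user left: park the object at the head of the LRU list,
             * so that older entries accumulate towards the tail */
            if (list_empty(&dentry->d_lru))
                    list_add(&dentry->d_lru, &dentry_unused);
    }

    /* Shrink the cache by up to count entries, starting with the oldest. */
    static void sketch_prune_dcache(int count)
    {
            while (count-- > 0 && !list_empty(&dentry_unused)) {
                    struct dentry *dentry = list_entry(dentry_unused.prev,
                                                       struct dentry, d_lru);

                    list_del_init(&dentry->d_lru);

                    /* objects that came back into use in the meantime are
                     * only unhooked from the list here, never freed */
                    if (atomic_read(&dentry->d_count))
                            continue;

                    sketch_free_dentry(dentry);
            }
    }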

Because LRU list objects are still simultaneously present in the hash table, they can be found by lookup operations searching for the entry they represent. Once an entry is found, the object is removed from the LRU list because it is now in active use. The usage counter is also incremented.
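
A cache lookup that hits such an entry could be modeled as follows, again only as a sketch on top of the structures above: the real __d_lookup() in fs/dcache.c walks the bucket under appropriate locking, compares struct qstr names (possibly via the parent's d_compare operation), and takes the reference before returning.

    #include <linux/string.h>   /* strcmp */

    static struct dentry *sketch_lookup(struct dentry *parent,
                                        const char *name, unsigned long hash)
    {
            struct hlist_head *bucket = sketch_d_hash(parent, hash);
            struct hlist_node *node;

            for (node = bucket->first; node; node = node->next) {
                    struct dentry *dentry =
                            hlist_entry(node, struct dentry, d_hash);

                    if (dentry->d_parent != parent ||
                        strcmp(dentry->d_name, name) != 0)
                            continue;

                    atomic_inc(&dentry->d_count);   /* in active use again */
                    /* take it back off the unused list; as noted above,
                     * the kernel sometimes defers this step */
                    if (!list_empty(&dentry->d_lru))
                            list_del_init(&dentry->d_lru);
                    return dentry;
            }
            return NULL;    /* cache miss: the VFS has to ask the filesystem */
    }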
