Filesystem Types

Filesystems may be grouped into three general classes:

1. Disk-based filesystems are the classic way of storing files on nonvolatile media so that their contents are retained between sessions. In fact, most filesystems have evolved from this category. Some well-known examples are Ext2/3, Reiserfs, FAT, and iso9660. All make use of block-oriented media and must therefore address the question of how to store file contents and the structural information of the directory hierarchy on the available blocks. Of no interest to us here is how communication with the underlying block device takes place; the corresponding device drivers in the kernel provide a uniform interface for this purpose. From the filesystem point of view, the underlying device is nothing more than a list of storage blocks for which an appropriate organization scheme must be adopted. A small sketch of such a scheme follows.
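To make the mapping of file data onto storage blocks concrete, the following is a minimal, hypothetical sketch in C. The names (sketch_disk_inode, sketch_offset_to_block) and the choice of twelve direct block pointers are illustrative assumptions in the spirit of Ext2's direct pointers, not the on-disk layout of any real filesystem.

/*
 * Hypothetical on-disk layout sketch (not taken from any real filesystem):
 * a simplified inode that records which storage blocks hold a file's data.
 */
#include <stdint.h>

#define SKETCH_BLOCK_SIZE    4096u
#define SKETCH_DIRECT_BLOCKS 12u

struct sketch_disk_inode {
    uint32_t size;                          /* file length in bytes        */
    uint32_t mode;                          /* type and permission bits    */
    uint32_t block[SKETCH_DIRECT_BLOCKS];   /* block numbers holding data  */
};

/*
 * Map a byte offset within the file to the number of the device block
 * that stores it. Returns 0 (treated as invalid here) if the offset lies
 * beyond the range covered by the direct pointers.
 */
static uint32_t sketch_offset_to_block(const struct sketch_disk_inode *inode,
                                       uint32_t offset)
{
    uint32_t index = offset / SKETCH_BLOCK_SIZE;

    if (offset >= inode->size || index >= SKETCH_DIRECT_BLOCKS)
        return 0;
    return inode->block[index];
}

A real filesystem additionally needs indirect blocks, free-block bookkeeping, and directory entries, but the principle is the same: file positions are translated into block numbers on the device.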

2. Virtual filesystems are generated in the kernel itself and are a simple way of enabling userspace programs to obtain information from the kernel. The proc filesystem is the best example of this class. It requires no storage space on any kind of hardware device; instead, the kernel creates a hierarchical file structure whose entries contain information on particular parts of the system. The file /proc/version, for example, has a nominal length of 0 bytes when viewed with the ls command.

$ ls -l /proc/version

-r--r--r-- 1 root root 0 May 27 00:36 /proc/version

However, if the file contents are output with cat, the kernel generates version information describing the kernel currently running; this output is produced on the fly from data structures in kernel memory.

$ cat /proc/version

Linux version 2.6.24 ([email protected]) (gcc version 4.2.1 (SUSE Linux)) #1 Tue Jan 29 03:58:03 GMT 2008
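How such an entry can generate its contents at read time is illustrated by the following sketch of a small kernel module. It uses the proc_create_single and seq_file interfaces of current kernels (the 2.6 series shown above provided an older create_proc_entry interface instead); the entry name vfs_demo is an arbitrary choice for illustration.

#include <linux/init.h>
#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/utsname.h>

/* Called whenever /proc/vfs_demo is read: the text is generated on
 * demand from kernel data structures, never read from a disk. */
static int demo_show(struct seq_file *m, void *v)
{
        seq_printf(m, "Hello from kernel release %s\n", utsname()->release);
        return 0;
}

static int __init demo_init(void)
{
        proc_create_single("vfs_demo", 0444, NULL, demo_show);
        return 0;
}

static void __exit demo_exit(void)
{
        remove_proc_entry("vfs_demo", NULL);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

As with /proc/version, ls reports a length of 0 bytes for such an entry, yet cat produces text because the show callback runs each time the file is read.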

3. Network filesystems are a halfway house between disk-based and virtual filesystems. They permit access to data on another computer that is attached to the local machine via a network. In this case, the data are, in fact, stored on a hardware device on a different system. This means that the local kernel need not be concerned with the details of file access, data organization, and hardware communication; this is taken care of by the kernel of the remote computer. All operations on files in this filesystem are carried out over a network connection. When a process writes data to a file, the data are sent to the remote computer using a specific protocol (determined by the network filesystem). The remote computer is then responsible for storing the transmitted data and for informing the sender that the data have arrived.
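What such a protocol exchange might look like on the client side is sketched below. The message layout (nfs_like_write_req), the opcode value, and the helper send_write_request are purely hypothetical; real protocols such as NFS or CIFS define far richer message formats, byte-order rules, and error handling.

/*
 * Hypothetical wire format for a write request, loosely in the spirit of
 * what a network filesystem client might send to the server.
 */
#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>

struct nfs_like_write_req {
        uint32_t opcode;      /* e.g. 2 == WRITE                     */
        uint32_t file_handle; /* server-side identifier for the file */
        uint64_t offset;      /* position within the file            */
        uint32_t length;      /* number of payload bytes that follow */
} __attribute__((packed));

/*
 * Send a write request followed by the data; the server is expected to
 * store the bytes and return an acknowledgment on the same connection.
 */
static int send_write_request(int sock, uint32_t handle, uint64_t offset,
                              const void *buf, uint32_t len)
{
        struct nfs_like_write_req req = {
                .opcode      = 2,
                .file_handle = handle,
                .offset      = offset,
                .length      = len,
        };

        if (send(sock, &req, sizeof(req), 0) != (ssize_t)sizeof(req))
                return -1;
        if (send(sock, buf, len, 0) != (ssize_t)len)
                return -1;
        return 0;
}

The process that issued the write never sees this exchange; from its point of view, the write system call behaves exactly as it would on a local disk.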

Nevertheless, the kernel needs information on the size of files, their position within the directory hierarchy, and other important characteristics, even when it is working with network filesystems. It must also provide functions to enable user processes to perform typical file-related operations such as open, read, or delete. As a result of the VFS layer, userspace processes see no difference between a local filesystem and a filesystem available only via a network.
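The practical consequence of this uniformity can be observed directly from userspace: the very same system calls work no matter which filesystem backs a path. The following small C program is a sketch of that point; it simply copies a file to standard output and behaves identically whether its argument lives on Ext3, on proc, or on an NFS mount.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
        char buf[256];
        ssize_t n;
        int fd;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }

        /* open() and read() go through the VFS layer, which dispatches to
         * whichever filesystem implementation holds the file. */
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        while ((n = read(fd, buf, sizeof(buf))) > 0)
                write(STDOUT_FILENO, buf, n);

        close(fd);
        return 0;
}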
