As a multitasking system, Linux is able to run several processes at the same time. Normally, the individual processes must be kept as separate as possible so that they do not interfere with each other. This is essential to protect data and to ensure system stability. However, there are situations in which applications must communicate with each other; for example,
□ when data generated by one process are transferred to another.
□ when data are shared.
□ when processes are forced to wait for each other.
□ when resource usage needs to be coordinated.
These situations are handled using several classic techniques that were introduced in System V and have since proven their worth, so much so that they are now part and parcel of Linux. Because not only userspace applications but also the kernel itself are faced with such situations — particularly on multiprocessor systems — various kernel-internal mechanisms are in place to handle them.
If several processes share a resource, they can easily interfere with each other — and this must be prevented. The kernel therefore provides mechanisms not only for sharing data but also for coordinating access to data. Again, the kernel employs mechanisms adopted from System V.
Resources need to be protected not only in userspace applications but especially in the kernel itself. On SMP systems, several CPUs may be in kernel mode at the same time and, theoretically, may want to manipulate any of the existing data structures. To prevent the CPUs from getting in each other's way, it is necessary to protect some kernel areas by means of locks; these ensure that access is restricted to one CPU at a time.