
In computer science, mutual exclusion is a property of concurrency control, which is instituted for the purpose of preventing race conditions. It is the requirement that one thread of execution never enters a critical section while a concurrent thread of execution is already accessing said critical section, which refers to an interval of time during which a thread of execution accesses a shared resource or shared memory.
The shared resource is a data object that two or more concurrent threads may try to modify: two concurrent read operations are permitted, but no two concurrent write operations, nor a concurrent read and a write, are permitted, since they lead to data inconsistency. Mutual exclusion algorithms ensure that if a process is already performing a write operation on a data object (the critical section), no other process or thread is allowed to access or modify the same object until the first process has finished writing to it and released the object for other processes to read and write.
The requirement of mutual exclusion was first identified and solved by Edsger W. Dijkstra in his seminal 1965 paper "Solution of a problem in concurrent programming control",[1][2] which is credited as the first topic in the study of concurrent algorithms.[3]
A simple example of why mutual exclusion is important in practice can be visualized using a singly linked list of four items, where the second and third are to be removed. The removal of a node that sits between two other nodes is performed by changing the next pointer of the previous node to point to the next node (in other words, if node i is being removed, then the next pointer of node i – 1 is changed to point to node i + 1, thereby removing from the linked list any reference to node i). When such a linked list is being shared between multiple threads of execution, two threads of execution may attempt to remove two different nodes simultaneously, one thread of execution changing the next pointer of node i – 1 to point to node i + 1, while another thread of execution changes the next pointer of node i to point to node i + 2. Although both removal operations complete successfully, the desired state of the linked list is not achieved: node i + 1 remains in the list, because the next pointer of node i – 1 points to node i + 1.
This problem (called a race condition) can be avoided by using the requirement of mutual exclusion to ensure that simultaneous updates to the same part of the list cannot occur.
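As a hedged illustration of this idea, the following C++ sketch (the names Node, remove_next, and list_mutex are illustrative, not from the article) performs the pointer update described above inside a critical section guarded by a single mutex, so that two concurrent removals cannot interleave:

```cpp
#include <mutex>

struct Node {
    int value;
    Node* next;
};

// A single mutex protecting the whole list (an assumption made for simplicity).
std::mutex list_mutex;

// Remove the node that follows `prev` by splicing it out of the list.
// Without the lock, two threads removing adjacent nodes can interleave their
// reads and writes of the next pointers, leaving a removed node reachable.
void remove_next(Node* prev) {
    std::lock_guard<std::mutex> guard(list_mutex);  // critical section begins
    Node* victim = prev->next;
    if (victim != nullptr) {
        prev->next = victim->next;  // node i - 1 now points to node i + 1
        delete victim;
    }
}   // critical section ends when guard is destroyed
```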
The term mutual exclusion is also used in reference to the simultaneous writing of a memory address by one thread while the aforementioned memory address is being manipulated or read by one or more other threads.
The problem which mutual exclusion addresses is a problem of resource sharing: how can a software system control multiple processes' access to a shared resource, when each process needs exclusive control of that resource while doing its work? The mutual-exclusion solution to this makes the shared resource available only while the process is in a specific code segment called the critical section. It controls access to the shared resource by controlling each process's execution of the part of its program where the resource would be used.
A successful solution to this problem must have at least these two properties: it must enforce mutual exclusion, so that only one process can be in the critical section at a time, and it must be free of deadlock, so that if some processes are trying to enter the critical section, one of them must eventually be able to enter it, provided no process stays in the critical section forever.
Deadlock freedom can be expanded to implement one or both of these properties: freedom from lockout (starvation), which guarantees that every process that starts trying to enter the critical section eventually succeeds, and bounded waiting, which further bounds the number of times other processes may enter the critical section ahead of a waiting process.
Every process's program can be partitioned into four sections, resulting in four states, and program execution cycles through these four states in order:[5] the non-critical section, in which the process does not use the shared resource; the trying section, in which it requests entry to the critical section; the critical section, in which it accesses the shared resource; and the exit section, in which it releases the resource for other processes.

If a process wishes to enter the critical section, it must first execute the trying section and wait until it acquires access to the critical section. After the process has executed its critical section and is finished with the shared resources, it needs to execute the exit section to release them for other processes' use. The process then returns to its non-critical section.
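A minimal sketch of this cycle, assuming C++ and a std::mutex as the mechanism for acquiring and releasing access (the mutex is an illustrative choice, not part of the cited model):

```cpp
#include <mutex>

std::mutex resource_mutex;  // stands in for whatever grants access to the resource

void process(bool& running) {
    while (running) {
        // Non-critical section: work that does not touch the shared resource.

        resource_mutex.lock();    // Trying section: wait until access is acquired.

        // Critical section: use the shared resource exclusively.

        resource_mutex.unlock();  // Exit section: release the resource for others.

        // Back to the non-critical section on the next iteration.
    }
}
```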
On uni-processor systems, the simplest solution to achieve mutual exclusion is to disable interrupts during a process's critical section. This will prevent any interrupt service routines from running (effectively preventing a process from being preempted). Although this solution is effective, it leads to many problems. If a critical section is long, then the system clock will drift every time a critical section is executed because the timer interrupt is no longer serviced, so tracking time is impossible during the critical section. Also, if a process halts during its critical section, control will never be returned to another process, effectively halting the entire system. A more elegant method for achieving mutual exclusion is the busy-wait.
Busy-waiting is effective for both uniprocessor and multiprocessor systems. The use of shared memory and an atomic test-and-set instruction provides the mutual exclusion. A process can test-and-set on a location in shared memory, and since the operation is atomic, only one process can set the flag at a time. Any process that is unsuccessful in setting the flag can either go on to do other tasks and try again later, release the processor to another process and try again later, or continue to loop while checking the flag until it is successful in acquiring it. Preemption is still possible, so this method allows the system to continue to function, even if a process halts while holding the lock.
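A hedged sketch of this technique in C++, using std::atomic_flag as the shared flag and its test_and_set operation as the atomic test-and-set (the names spin_lock and spin_unlock are illustrative):

```cpp
#include <atomic>

std::atomic_flag lock_flag = ATOMIC_FLAG_INIT;  // shared flag in memory

// Busy-wait until the atomic test-and-set succeeds, i.e. until this thread
// is the one that changed the flag from clear to set.
void spin_lock() {
    while (lock_flag.test_and_set(std::memory_order_acquire)) {
        // keep looping; alternatively, yield or do other work and retry later
    }
}

void spin_unlock() {
    lock_flag.clear(std::memory_order_release);  // release the lock
}
```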
Several other atomic operations can be used to provide mutual exclusion of data structures; most notable of these is compare-and-swap (CAS). CAS can be used to achieve wait-free mutual exclusion for any shared data structure by creating a linked list where each node represents the desired operation to be performed. CAS is then used to change the pointers in the linked list[6] during the insertion of a new node. Only one process can be successful in its CAS; all other processes attempting to add a node at the same time will have to try again. Each process can then keep a local copy of the data structure, and upon traversing the linked list, can perform each operation from the list on its local copy.
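The following C++ fragment is a simplified, hedged sketch of only the CAS step of such a scheme, not the full wait-free construction: compare-and-swap links a new operation node at the head of a shared list, and a process whose CAS fails retries with the updated head (OpNode, head, and announce are illustrative names):

```cpp
#include <atomic>

struct OpNode {
    int operation;   // description of the desired operation (placeholder)
    OpNode* next;
};

std::atomic<OpNode*> head{nullptr};  // shared list of announced operations

// Publish a new operation node. For any given value of head, only one
// concurrent caller's CAS succeeds; the others observe the new head and retry.
void announce(OpNode* node) {
    OpNode* expected = head.load();
    do {
        node->next = expected;   // link behind the currently observed head
    } while (!head.compare_exchange_weak(expected, node));
}
```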
In addition to hardware-supported solutions, some software solutions exist that use busy waiting to achieve mutual exclusion. Examples include Dekker's algorithm, Peterson's algorithm, Lamport's bakery algorithm, and Szymański's algorithm.
These algorithms do not work if out-of-order execution is used on the platform that executes them. Programmers have to specify strict ordering on the memory operations within a thread.[8]
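As a hedged sketch of how such ordering can be enforced in practice, the two-thread version of Peterson's algorithm can be written in C++ with sequentially consistent atomics, which prevent the compiler and CPU from reordering the accesses to the flag and turn variables (variable names are illustrative):

```cpp
#include <atomic>

// Peterson's algorithm for two threads with ids 0 and 1.
std::atomic<bool> wants[2] = {{false}, {false}};  // who wants to enter
std::atomic<int> turn{0};                         // whose turn it is to yield

void lock(int self) {
    int other = 1 - self;
    wants[self].store(true);   // announce intent (seq_cst by default)
    turn.store(other);         // give priority to the other thread
    while (wants[other].load() && turn.load() == other) {
        // busy-wait until the other thread is uninterested or yields its turn
    }
}

void unlock(int self) {
    wants[self].store(false);
}
```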
It is often preferable to use synchronization facilities provided by an operating system’s multithreading library, which can take advantage of hardware support when available but fall back on software mechanisms when necessary. For example, when an operating system’s lock facility is used and a thread attempts to acquire a lock that is already held, the operating system may suspend the thread via a context switch and schedule another runnable thread, or place the processor into a low-power state if no other thread is available to run. As a result, most modern mutual exclusion techniques aim to reduce latency and busy-waiting by relying on queuing and context switching. However, if the overhead of suspending and later restoring a thread is demonstrably greater than the time the thread would spend waiting for the lock to become available in a specific scenario, then spinlocks can be an acceptable solution in that context.[9][10]
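For instance, a hedged C++ sketch using the standard library's std::mutex, which on typical implementations is built on the operating system's lock facility rather than on busy-waiting (the counter and function names are illustrative):

```cpp
#include <mutex>

std::mutex m;              // typically backed by an OS lock
long shared_counter = 0;   // the shared resource in this example

void increment() {
    std::lock_guard<std::mutex> guard(m);  // may block: the OS can suspend this
                                           // thread and schedule another one
    ++shared_counter;                      // critical section
}                                          // lock released when guard is destroyed
```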
One binary test&set register is sufficient to provide a deadlock-free solution to the mutual exclusion problem. But a solution built with a test&set register can possibly lead to the starvation of some processes which become caught in the trying section.[4] In fact, Ω(√n) distinct memory states are required to avoid lockout. To avoid unbounded waiting, n distinct memory states are required.[11]
Most algorithms for mutual exclusion are designed with the assumption that no failure occurs while a process is running inside the critical section. However, in reality such failures may be commonplace. For example, a sudden loss of power or faulty interconnect might cause a process in a critical section to experience an unrecoverable error or otherwise be unable to continue. If such a failure occurs, conventional, non-failure-tolerant mutual exclusion algorithms may deadlock or otherwise fail key liveness properties. To deal with this problem, several solutions using crash-recovery mechanisms have been proposed.[12]
The solutions explained above can be used to build synchronization primitives such as locks (mutexes), readers–writer locks, recursive locks, semaphores, monitors, message passing, and tuple spaces.
Many forms of mutual exclusion have side-effects. For example, classic semaphores permit deadlocks, in which one process gets a semaphore, another process gets a second semaphore, and then both wait forever for the other semaphore to be released. Other common side-effects include starvation, in which a process never gets sufficient resources to run to completion; priority inversion, in which a higher-priority thread waits for a lower-priority thread; and high latency, in which response to interrupts is not prompt.
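A hedged illustration of this deadlock pattern, using C++ mutexes in place of classic semaphores (thread_one and thread_two are illustrative names):

```cpp
#include <mutex>

std::mutex a, b;

// If thread_one holds `a` while thread_two holds `b`, each then blocks
// forever waiting for the lock the other already owns: a deadlock.
void thread_one() {
    std::lock_guard<std::mutex> first(a);
    std::lock_guard<std::mutex> second(b);  // waits for b
}

void thread_two() {
    std::lock_guard<std::mutex> first(b);
    std::lock_guard<std::mutex> second(a);  // waits for a
}
```

Acquiring the locks in one agreed-upon global order in both threads (or using std::scoped_lock, which acquires multiple locks with a deadlock-avoidance algorithm) removes the cycle in this sketch.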
Much research is aimed at eliminating the above effects, often with the goal of guaranteeing non-blocking progress. No perfect scheme is known. Historically, blocking system calls put an entire process to sleep; until such calls became thread-safe, there was no proper mechanism for sleeping a single thread within a process (see polling).[citation needed]