Concurrency refers to the ability of a system to execute multiple tasks through simultaneous execution or time-sharing (context switching), sharing resources and managing interactions. Concurrency improves responsiveness, throughput, and scalability in many areas of modern computing.[1][2][3][4][5]
Concurrency is a broader concept that encompasses several related ideas.[1][2][3][4][5]
Because computations in a concurrent system can interact with each other while being executed, the number of possible execution paths in the system can be extremely large, and the resulting outcome can be indeterminate. Concurrent use of shared resources can be a source of indeterminacy leading to issues such as deadlocks and resource starvation.[7]
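As a minimal illustration (not drawn from the cited sources), the following Go sketch deadlocks when two goroutines acquire the same two locks in opposite order: each ends up holding one resource while waiting forever for the other.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var a, b sync.Mutex // two shared resources
	var wg sync.WaitGroup
	wg.Add(2)

	go func() { // worker 1 locks a, then b
		defer wg.Done()
		a.Lock()
		time.Sleep(10 * time.Millisecond) // widen the race window
		b.Lock()
		fmt.Println("worker 1 done")
		b.Unlock()
		a.Unlock()
	}()

	go func() { // worker 2 locks b, then a: the opposite order
		defer wg.Done()
		b.Lock()
		time.Sleep(10 * time.Millisecond)
		a.Lock()
		fmt.Println("worker 2 done")
		a.Unlock()
		b.Unlock()
	}()

	// Each worker holds one lock while waiting for the other's,
	// so the Go runtime reports "all goroutines are asleep - deadlock!".
	wg.Wait()
}
```

A standard remedy is to impose a global ordering on lock acquisition so that no cycle of waiting can form.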
Design of concurrent systems often entails finding reliable techniques for coordinating their execution, data exchange, memory allocation, and execution scheduling to minimize response time and maximize throughput.[8]
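As a hypothetical sketch of such coordination in Go, the worker pool below uses channels for data exchange and a sync.WaitGroup to coordinate completion; the pool size of four and the squaring "work" are arbitrary placeholders.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	jobs := make(chan int)    // shared queue for data exchange
	results := make(chan int) // workers publish results here
	var wg sync.WaitGroup

	for w := 0; w < 4; w++ { // a fixed pool bounds scheduling overhead
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs {
				results <- j * j // stand-in for real work
			}
		}()
	}

	go func() { // close results once every worker has finished
		wg.Wait()
		close(results)
	}()

	go func() { // producer: enqueue work, then signal no more jobs
		for j := 1; j <= 8; j++ {
			jobs <- j
		}
		close(jobs)
	}()

	for r := range results {
		fmt.Println(r)
	}
}
```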
Concurrency theory has been an active field of research in theoretical computer science. One of the first proposals was Carl Adam Petri's seminal work on Petri nets in the early 1960s. In the years since, a wide variety of formalisms have been developed for modeling and reasoning about concurrency.
A number of such formalisms for modeling and understanding concurrent systems have been developed.[9]
Some of these models of concurrency are primarily intended to support reasoning and specification, while others can be used throughout the entire development cycle, including design, implementation, proof, testing, and simulation of concurrent systems. Some of these are based on message passing, while others have different mechanisms for concurrency.
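For example, Go's channels, which are directly inspired by the CSP formalism, let processes interact purely by exchanging messages rather than by sharing memory. The sketch below (an illustration, not drawn from the cited sources) passes two messages between a sender and a receiver.

```go
package main

import "fmt"

// The sender and receiver share no memory; they interact only by
// exchanging messages over a channel, in the style of CSP.
func main() {
	ch := make(chan string)
	done := make(chan struct{})

	go func() { // receiver
		for msg := range ch {
			fmt.Println("received:", msg)
		}
		close(done)
	}()

	for _, msg := range []string{"ping", "pong"} { // sender
		ch <- msg
	}
	close(ch)
	<-done // wait for the receiver to drain the channel
}
```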
The proliferation of different models of concurrency has motivated some researchers to develop ways to unify these different theoretical models. For example, Lee and Sangiovanni-Vincentelli have demonstrated that a so-called "tagged-signal" model can be used to provide a common framework for defining the denotational semantics of a variety of different models of concurrency,[11] while Nielsen, Sassone, and Winskel have demonstrated that category theory can be used to provide a similar unified understanding of different models.[12]
The Concurrency Representation Theorem in the actor model provides a fairly general way to represent concurrent systems that are closed in the sense that they do not receive communications from outside. (Other concurrency systems, e.g., process calculi, can be modeled in the actor model using a two-phase commit protocol.[13]) The mathematical denotation of a closed system S is constructed from increasingly better approximations, starting from an initial behavior called ⊥_S and applying a behavior-approximating function progression_S, to build a denotation (meaning) for S as follows:[14]

Denote_S ≡ ⊔_{i∈ω} progression_S^i(⊥_S)
In this way, S can be mathematically characterized in terms of all its possible behaviors.
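Read domain-theoretically, the denotation is the least upper bound of an ascending chain of approximations (a standard reading sketched here in LaTeX, not a formula quoted from the cited source):

```latex
\bot_S \;\sqsubseteq\; \mathsf{progression}_S(\bot_S)
       \;\sqsubseteq\; \mathsf{progression}_S^{2}(\bot_S)
       \;\sqsubseteq\; \cdots
       \;\sqsubseteq\; \mathsf{Denote}_S
```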
Various types of temporal logic[15] can be used to help reason about concurrent systems. Some of these logics, such as linear temporal logic and computation tree logic, allow assertions to be made about the sequences of states that a concurrent system can pass through. Others, such as action computational tree logic, Hennessy–Milner logic, and Lamport's temporal logic of actions, build their assertions from sequences of actions (changes in state). The principal application of these logics is in writing specifications for concurrent systems.[7]
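For instance, the common liveness requirement "every request is eventually granted" can be written in linear temporal logic as follows (a textbook-style example, not one taken from the cited sources):

```latex
\Box\,(\mathit{request} \rightarrow \Diamond\,\mathit{grant})
```

Here □ ("always") quantifies over every state of an execution, and ◇ ("eventually") over some later state.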
Concurrent programming encompasses programming languages and algorithms used to implement concurrent systems. Concurrent programming is usually considered to be more general than parallel programming because it can involve arbitrary and dynamic patterns of communication and interaction, whereas parallel systems generally have a predefined and well-structured communication pattern. The base goals of concurrent programming include correctness, performance, and robustness. Concurrent systems such as operating systems and database management systems are generally designed to operate indefinitely, including automatic recovery from failure, and not terminate unexpectedly (see concurrency control). Some concurrent systems implement a form of transparent concurrency, in which concurrent computational entities may compete for and share a single resource, but the complexities of this competition and sharing are shielded from the programmer.
Because they use shared resources, concurrent systems generally require the inclusion of some kind of arbiter somewhere in their implementation (often in the underlying hardware) to control access to those resources. The use of arbiters introduces the possibility of indeterminacy in concurrent computation, which has major implications for practice, including correctness and performance. For example, arbitration introduces unbounded nondeterminism, which raises issues with model checking because it causes explosion in the state space and can even cause models to have an infinite number of states.
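Go's select statement makes this kind of arbitration directly visible: when several communications are ready, the language specification requires a uniform pseudo-random choice among them, so the sketch below (an illustration, not from the cited sources) can print either message.

```go
package main

import "fmt"

// select acts as an arbiter between two ready communications;
// which branch wins is not determined by the program text.
func main() {
	a := make(chan string, 1)
	b := make(chan string, 1)
	a <- "from a"
	b <- "from b"

	select { // both channels are ready: the choice is nondeterministic
	case msg := <-a:
		fmt.Println(msg)
	case msg := <-b:
		fmt.Println(msg)
	}
}
```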
Some concurrent programming models include coprocesses and deterministic concurrency. In these models, threads of control explicitly yield their timeslices, either to the system or to another process.
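One concrete form of explicit yielding is Go's runtime.Gosched, sketched below with a single processor so the cooperative hand-offs are observable; the loop bounds are arbitrary.

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	runtime.GOMAXPROCS(1) // a single processor makes the yields visible
	done := make(chan struct{})

	go func() {
		for i := 0; i < 3; i++ {
			fmt.Println("worker", i)
			runtime.Gosched() // explicitly yield the timeslice
		}
		close(done)
	}()

	for i := 0; i < 3; i++ {
		fmt.Println("main", i)
		runtime.Gosched() // cooperative hand-off to the worker
	}
	<-done
}
```

The exact interleaving still depends on the scheduler: Gosched offers up the timeslice but does not dictate which goroutine receives it.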