Concurrent computing is a form of computing in which several computations are executed concurrently—during overlapping time periods—instead of sequentially—with one completing before the next starts.
This is a property of a system—whether a program, computer, or a network—where there is a separate execution point or "thread of control" for each process. A concurrent system is one where a computation can advance without waiting for all other computations to complete.[1]
Concurrent computing is a form of modular programming. In its paradigm an overall computation is factored into subcomputations that may be executed concurrently. Pioneers in the field of concurrent computing include Edsger Dijkstra, Per Brinch Hansen, and C.A.R. Hoare.[2]
The concept of concurrent computing is frequently confused with the related but distinct concept of parallel computing,[3][4] although both can be described as "multiple processes executing during the same period of time". In parallel computing, execution occurs at the same physical instant: for example, on separate processors of a multi-processor machine, with the goal of speeding up computations—parallel computing is impossible on a (one-core) single processor, as only one computation can occur at any instant (during any single clock cycle).[a] By contrast, concurrent computing consists of process lifetimes overlapping, but execution does not happen at the same instant. The goal here is to model processes that happen concurrently, like multiple clients accessing a server at the same time. Structuring software systems as composed of multiple concurrent, communicating parts can be useful for tackling complexity, regardless of whether the parts can be executed in parallel.[5]: 1
For example, concurrent processes can be executed on one core by interleaving the execution steps of each process via time-sharing slices: only one process runs at a time, and if it does not complete during its time slice, it is paused, another process begins or resumes, and then later the original process is resumed. In this way, multiple processes are part-way through execution at a single instant, but only one process is being executed at that instant.
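As an illustration, the following is a minimal C++ sketch of this interleaving idea (my own example, not from the article): a single loop acts as a crude round-robin scheduler that advances each task one small step per turn, so both tasks are part-way through execution at once even though only one executes at any instant.

// Two tasks interleaved on a single core by a round-robin "scheduler" loop.
#include <iostream>
#include <vector>
#include <functional>

int main() {
    int a = 0, b = 0;
    // Each task is a step function that returns true while it still has work to do.
    std::vector<std::function<bool()>> tasks = {
        [&] { ++a; return a < 5; },   // task 1: five steps of work
        [&] { ++b; return b < 3; }    // task 2: three steps of work
    };
    bool anyLeft = true;
    while (anyLeft) {                 // one "time slice" per task per pass
        anyLeft = false;
        for (auto &step : tasks) {
            if (step) {
                if (!step()) step = nullptr;  // task finished; drop it
                else anyLeft = true;
            }
        }
    }
    std::cout << "a=" << a << " b=" << b << "\n";  // prints a=5 b=3
}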
Concurrent computations may be executed in parallel,[3][6] for example, by assigning each process to a separate processor or processor core, or distributing a computation across a network.
The exact timing of when tasks in a concurrent system are executed depends on the scheduling, and tasks need not always be executed concurrently. For example, given two tasks, T1 and T2, T1 may be executed and finished before T2 or vice versa (serial and sequential); they may be executed alternately (serial and concurrent); or they may be executed simultaneously at the same instant of time (parallel and concurrent).
The word "sequential" is used as an antonym for both "concurrent" and "parallel"; when these are explicitly distinguished,concurrent/sequential andparallel/serial are used as opposing pairs.[7] A schedule in which tasks execute one at a time (serially, no parallelism), without interleaving (sequentially, no concurrency: no task begins until the prior task ends) is called aserial schedule. A set of tasks that can be scheduled serially isserializable, which simplifiesconcurrency control.[citation needed]
The main challenge in designing concurrent programs is concurrency control: ensuring the correct sequencing of the interactions or communications between different computational executions, and coordinating access to resources that are shared among executions.[6] Potential problems include race conditions, deadlocks, and resource starvation. For example, consider the following algorithm to make withdrawals from a checking account represented by the shared resource balance:
bool withdraw(int withdrawal)
{
    if (balance >= withdrawal)
    {
        balance -= withdrawal;
        return true;
    }
    return false;
}
Suppose balance = 500, and two concurrent threads make the calls withdraw(300) and withdraw(350). If line 3 in both operations executes before line 5, both operations will find that balance >= withdrawal evaluates to true, and execution will proceed to subtracting the withdrawal amount. However, since both processes perform their withdrawals, the total amount withdrawn will end up being more than the original balance. These sorts of problems with shared resources benefit from the use of concurrency control, or non-blocking algorithms.
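One common form of concurrency control is mutual exclusion. The following C++ sketch (the mutex name and the test in main are my own additions, not the article's code) makes the check-and-subtract on the shared balance atomic, so two overlapping withdrawals can no longer both pass the check.

#include <mutex>
#include <thread>
#include <iostream>

int balance = 500;          // shared resource
std::mutex balance_mutex;   // guards every access to balance

bool withdraw(int withdrawal) {
    std::lock_guard<std::mutex> lock(balance_mutex);  // held until return
    if (balance >= withdrawal) {
        balance -= withdrawal;
        return true;
    }
    return false;
}

int main() {
    std::thread t1([] { withdraw(300); });
    std::thread t2([] { withdraw(350); });
    t1.join();
    t2.join();
    std::cout << balance << "\n";  // always 200 or 150, never negative
}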
There are advantages of concurrent computing: increased program throughput, since parallel execution of a concurrent program allows more tasks to be completed in a given amount of time; high responsiveness for input/output, since input/output-intensive programs can perform useful work while waiting for slow operations to complete; and a more appropriate program structure, since some problems and problem domains are naturally expressed as a set of concurrent tasks.
Introduced in 1962, Petri nets were an early attempt to codify the rules of concurrent execution. Dataflow theory later built upon these, and dataflow architectures were created to physically implement the ideas of dataflow theory. Beginning in the late 1970s, process calculi such as the Calculus of Communicating Systems (CCS) and Communicating Sequential Processes (CSP) were developed to permit algebraic reasoning about systems composed of interacting components. The π-calculus added the capability for reasoning about dynamic topologies.
Input/output automata were introduced in 1987.
Logics such as Lamport's TLA+, and mathematical models such as traces and Actor event diagrams, have also been developed to describe the behavior of concurrent systems.
Software transactional memory borrows from database theory the concept of atomic transactions and applies it to memory accesses.
Concurrent programming languages and multiprocessor programs must have a consistency model (also known as a memory model). The consistency model defines rules for how operations on computer memory occur and how results are produced.
One of the first consistency models was Leslie Lamport's sequential consistency model. Sequential consistency is the property of a program that its execution produces the same results as a sequential program. Specifically, a program is sequentially consistent if "the results of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program".[10]
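A small C++ sketch of the classic store-buffering litmus test (my own example, not from the article) makes the guarantee concrete: under sequential consistency there must be a single interleaving of all four operations, so at least one thread observes the other's write, and the outcome r1 == 0 and r2 == 0 is impossible. C++ std::atomic with its default memory_order_seq_cst provides exactly this behavior.

#include <atomic>
#include <thread>
#include <cassert>

std::atomic<int> x{0}, y{0};   // shared flags
int r1 = 0, r2 = 0;            // per-thread observations

int main() {
    std::thread t1([] { x.store(1); r1 = y.load(); });
    std::thread t2([] { y.store(1); r2 = x.load(); });
    t1.join();
    t2.join();
    assert(!(r1 == 0 && r2 == 0));  // forbidden outcome under sequential consistency
}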
A number of different methods can be used to implement concurrent programs, such as implementing each computational execution as an operating system process, or implementing the computational processes as a set of threads within a single operating system process.
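The thread-based approach can be sketched in a few lines of C++ (an assumed example of mine, not from the article): the overall computation is factored into two subcomputations over disjoint halves of the data, each run as a thread within a single operating system process.

#include <thread>
#include <vector>
#include <numeric>
#include <iostream>

int main() {
    std::vector<int> data(1000);
    std::iota(data.begin(), data.end(), 1);      // 1, 2, ..., 1000

    long sum_front = 0, sum_back = 0;
    // Two subcomputations over disjoint halves, executed as two threads.
    std::thread t1([&] { sum_front = std::accumulate(data.begin(), data.begin() + 500, 0L); });
    std::thread t2([&] { sum_back  = std::accumulate(data.begin() + 500, data.end(), 0L); });
    t1.join();
    t2.join();
    std::cout << sum_front + sum_back << "\n";   // prints 500500
}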
In some concurrent computing systems, communication between the concurrent components is hidden from the programmer (e.g., by using futures), while in others it must be handled explicitly. Explicit communication can be divided into two classes: shared memory communication, in which concurrent components communicate by altering the contents of shared memory locations and typically require some form of locking to coordinate; and message passing communication, in which concurrent components communicate by exchanging messages, either asynchronously or in a synchronous "rendezvous" style where the sender blocks until the message is received.
Shared memory and message passing concurrency have different performance characteristics. Typically (although not always), the per-process memory overhead and task switching overhead are lower in a message passing system, but the overhead of message passing is greater than for a procedure call. These differences are often overwhelmed by other performance factors.
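For contrast with the shared-memory withdraw example above, the following C++ sketch (the Channel class is my own illustration, not a standard or article-specified API) shows explicit message passing: two components share no data directly; one sends values through a small queue-based channel and the other receives them, with the locking confined to the channel itself.

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

class Channel {
    std::queue<int> q;
    std::mutex m;
    std::condition_variable cv;
public:
    void send(int v) {
        { std::lock_guard<std::mutex> lock(m); q.push(v); }
        cv.notify_one();
    }
    int receive() {                              // blocks until a message arrives
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return !q.empty(); });
        int v = q.front(); q.pop();
        return v;
    }
};

int main() {
    Channel ch;
    std::thread producer([&] { for (int i = 1; i <= 3; ++i) ch.send(i); });
    std::thread consumer([&] { for (int i = 0; i < 3; ++i) std::cout << ch.receive() << "\n"; });
    producer.join();
    consumer.join();
}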
Concurrent computing developed out of earlier work on railroads and telegraphy, from the 19th and early 20th century, and some terms date to this period, such as semaphores. These arose to address the question of how to handle multiple trains on the same railroad system (avoiding collisions and maximizing efficiency) and how to handle multiple transmissions over a given set of wires (improving efficiency), such as via time-division multiplexing (1870s).
The academic study of concurrent algorithms started in the 1960s, with Dijkstra (1965) credited as the first paper in this field, identifying and solving mutual exclusion.[11]
Concurrency is pervasive in computing, occurring from low-level hardware on a single chip to worldwide networks. Examples follow.
At the programming language level: constructs such as channels, coroutines, and futures and promises.
At the operating system level: computer multitasking (both cooperative and preemptive), time-sharing, processes, and threads.
At the network level, networked systems are generally concurrent by their nature, as they consist of separate devices.
Concurrent programming languages are programming languages that use language constructs for concurrency. These constructs may involve multi-threading, support for distributed computing, message passing, shared resources (including shared memory) or futures and promises. Such languages are sometimes described as concurrency-oriented languages or concurrency-oriented programming languages (COPL).[12]
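As a small illustration of the futures-and-promises construct mentioned above, the following C++ sketch (my own example; slow_square is a hypothetical stand-in for a longer computation) starts work that may run concurrently and lets the future hide the communication of its result.

#include <future>
#include <iostream>

int slow_square(int n) { return n * n; }         // stand-in for a longer computation

int main() {
    std::future<int> f = std::async(std::launch::async, slow_square, 12);
    // ... other work can proceed here while slow_square runs ...
    std::cout << f.get() << "\n";                // blocks until the result is ready: 144
}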
Today, the most commonly used programming languages that have specific constructs for concurrency are Java and C#. Both of these languages fundamentally use a shared-memory concurrency model, with locking provided by monitors (although message-passing models can and have been implemented on top of the underlying shared-memory model). Of the languages that use a message-passing concurrency model, Erlang was probably the most widely used in industry as of 2010.
Many concurrent programming languages have been developed more as research languages (e.g., Pict) rather than as languages for production use. However, languages such as Erlang, Limbo, and occam have seen industrial use at various times in the last 20 years. A non-exhaustive list of languages which use or provide concurrent programming facilities includes Ada, C++, Erlang, Go, Haskell, Java, occam, Rust, and Scala.
Many other languages provide support for concurrency in the form of libraries, at levels roughly comparable with the above list.