
Concurrency (computer science)

From Wikipedia, the free encyclopedia
Ability to execute a task in a non-serial manner
"Concurrent computer" redirects here. For the company, see Concurrent Computer Corporation.
For a more practical discussion, see Concurrent computing. For other uses, see Concurrency (disambiguation).

Concurrency refers to the ability of a system to execute multiple tasks through simultaneous execution or time-sharing (context switching), sharing resources and managing interactions. Concurrency improves responsiveness, throughput, and scalability in modern computing.[1][2][3][4][5]

Related concepts


Concurrency is a broader concept that encompasses several related ideas, including:[1][2][3][4][5]

  • Parallelism (simultaneous execution on multiple processing units). Parallelism executes tasks independently on multiple CPU cores. Concurrency allows for multiple threads of control at the program level, which can use parallelism or time-slicing to perform these tasks. Programs may exhibit parallelism only, concurrency only, both parallelism and concurrency, or neither.[6]
  • Multi-threading and multi-processing (shared system resources)
  • Synchronization (coordinating access to shared resources)
  • Coordination (managing interactions between concurrent tasks)
  • Concurrency control (ensuring data consistency and integrity)
  • Inter-process communication (IPC, facilitating information exchange)
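Several of these ideas can be illustrated together in a short sketch (hypothetical names, using Python's standard threading module): multiple threads of control share a counter, and a lock provides the synchronization that coordinates access to it.

```python
import threading

counter = 0
lock = threading.Lock()  # synchronization: coordinates access to the shared counter

def worker(increments: int) -> None:
    global counter
    for _ in range(increments):
        with lock:  # without the lock, concurrent increments could interleave and be lost
            counter += 1

# Multi-threading: four threads of control sharing one resource.
threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000
```

Whether the four threads actually run in parallel on separate cores or are time-sliced onto one is decided by the runtime and the operating system; the program expresses only the concurrency.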

Issues


Because computations in a concurrent system can interact with each other while being executed, the number of possible execution paths in the system can be extremely large, and the resulting outcome can be indeterminate. Concurrent use of shared resources can be a source of indeterminacy, leading to issues such as deadlocks and resource starvation.[7]

Design of concurrent systems often entails finding reliable techniques for coordinating their execution, data exchange, memory allocation, and execution scheduling to minimize response time and maximize throughput.[8]
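A classic coordination technique is lock ordering: deadlock can arise when two threads acquire the same pair of locks in opposite orders, each then waiting forever for the lock the other holds. Imposing one global acquisition order avoids this. A minimal sketch (hypothetical names, not from the cited sources):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
completed = []

def use_both(name: str) -> None:
    # Every thread acquires the locks in one global order (a, then b).
    # If one thread instead took b first, each could end up holding one
    # lock while waiting forever for the other: a deadlock.
    with lock_a:
        with lock_b:
            completed.append(name)  # work with both shared resources

t1 = threading.Thread(target=use_both, args=("t1",))
t2 = threading.Thread(target=use_both, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(completed))  # ['t1', 't2'] — both threads finish; no deadlock
```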

Theory


Concurrency theory has been an active field of research in theoretical computer science. One of the first proposals was Carl Adam Petri's seminal work on Petri nets in the early 1960s. In the years since, a wide variety of formalisms have been developed for modeling and reasoning about concurrency.

Models


A number of formalisms for modeling and understanding concurrent systems have been developed.[9]

Some of these models of concurrency are primarily intended to support reasoning and specification, while others can be used through the entire development cycle, including the design, implementation, proof, testing, and simulation of concurrent systems. Some of these are based on message passing, while others have different mechanisms for concurrency.

The proliferation of different models of concurrency has motivated some researchers to develop ways to unify these different theoretical models. For example, Lee and Sangiovanni-Vincentelli have demonstrated that a so-called "tagged-signal" model can be used to provide a common framework for defining the denotational semantics of a variety of different models of concurrency,[11] while Nielsen, Sassone, and Winskel have demonstrated that category theory can be used to provide a similar unified understanding of different models.[12]

The Concurrency Representation Theorem in the actor model provides a fairly general way to represent concurrent systems that are closed in the sense that they do not receive communications from outside. (Other concurrency systems, e.g., process calculi, can be modeled in the actor model using a two-phase commit protocol.[13]) The mathematical denotation denoted by a closed system S is constructed from increasingly better approximations, starting from an initial behavior ⊥S and applying a behavior-approximating function progressionS to construct a denotation (meaning) for S as follows:[14]

DenoteS ≡ ⊔i∈ω progressionSi(⊥S)

In this way, S can be mathematically characterized in terms of all its possible behaviors.
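Written out with explicit subscripts, this is the standard least-fixed-point construction from domain theory (a sketch of the notation, consistent with the formula above): each application of progressionS refines the previous approximation, and the denotation is the least upper bound of the chain.

```latex
\bot_S \;\sqsubseteq\; \mathrm{progression}_S(\bot_S) \;\sqsubseteq\; \mathrm{progression}_S^{\,2}(\bot_S) \;\sqsubseteq\; \cdots
\qquad
\mathrm{Denote}_S \;\equiv\; \bigsqcup_{i \in \omega} \mathrm{progression}_S^{\,i}(\bot_S)
```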

Logics


Various types of temporal logic[15] can be used to help reason about concurrent systems. Some of these logics, such as linear temporal logic and computation tree logic, allow assertions to be made about the sequences of states that a concurrent system can pass through. Others, such as action computational tree logic, Hennessy–Milner logic, and Lamport's temporal logic of actions, build their assertions from sequences of actions (changes in state). The principal application of these logics is in writing specifications for concurrent systems.[7]
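As a standard illustration (not drawn from the cited sources), two typical linear temporal logic specifications for a mutual-exclusion protocol, where cs₁ and cs₂ mean "process 1 (resp. 2) is in its critical section":

```latex
\Box\,\neg(\mathit{cs}_1 \land \mathit{cs}_2)
\quad\text{(safety: the two processes are never in the critical section at the same time)}
```
```latex
\Box(\mathit{req} \rightarrow \Diamond\,\mathit{grant})
\quad\text{(liveness: every request is eventually granted)}
```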

Practice


Concurrent programming encompasses the programming languages and algorithms used to implement concurrent systems. Concurrent programming is usually considered to be more general than parallel programming because it can involve arbitrary and dynamic patterns of communication and interaction, whereas parallel systems generally have a predefined and well-structured communication pattern. The base goals of concurrent programming include correctness, performance, and robustness. Concurrent systems such as operating systems and database management systems are generally designed to operate indefinitely, including automatic recovery from failure, and not to terminate unexpectedly (see Concurrency control). Some concurrent systems implement a form of transparent concurrency, in which concurrent computational entities may compete for and share a single resource, but the complexities of this competition and sharing are shielded from the programmer.

Because they use shared resources, concurrent systems generally require the inclusion of some kind of arbiter somewhere in their implementation (often in the underlying hardware) to control access to those resources. The use of arbiters introduces the possibility of indeterminacy in concurrent computation, which has major implications for practice, including correctness and performance. For example, arbitration introduces unbounded nondeterminism, which raises issues with model checking because it causes explosion in the state space and can even cause models to have an infinite number of states.
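The indeterminacy introduced by arbitration can be observed in a small sketch (hypothetical names, for illustration only): a thread-safe queue acts as the arbiter serializing competing requests, but which request wins each round is decided by the scheduler, not by the program text.

```python
import queue
import threading

# The queue is the arbiter: it serializes concurrent access to one resource.
requests: "queue.Queue[str]" = queue.Queue()

def client(name: str) -> None:
    requests.put(name)  # compete for the shared resource

threads = [threading.Thread(target=client, args=(f"client-{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

arrival_order = [requests.get() for _ in range(4)]
# Some permutation of the four client names; which permutation is indeterminate
# and may differ from run to run.
print(arrival_order)
```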

Some concurrent programming models include coprocesses and deterministic concurrency. In these models, threads of control explicitly yield their timeslices, either to the system or to another process.
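This style of deterministic, cooperative concurrency can be sketched with Python generators (a minimal illustration with hypothetical names): each task explicitly yields control, and a simple round-robin scheduler resumes tasks in a fixed order, so the interleaving is fully determined.

```python
from typing import Iterator, List

def task(name: str, steps: int) -> Iterator[str]:
    for i in range(steps):
        yield f"{name} step {i}"  # explicitly yield control back to the scheduler

def round_robin(tasks: List[Iterator[str]]) -> List[str]:
    # A deterministic scheduler: each task runs until its next yield,
    # then goes to the back of the queue; finished tasks are dropped.
    trace = []
    while tasks:
        t = tasks.pop(0)
        try:
            trace.append(next(t))
            tasks.append(t)
        except StopIteration:
            pass
    return trace

trace = round_robin([task("A", 2), task("B", 2)])
print(trace)  # ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```

Because the tasks yield voluntarily and the scheduler is deterministic, every run produces the same interleaving, in contrast to the arbiter-driven indeterminacy described above.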


References

  1. Operating System Concepts. Wiley. 29 July 2008. ISBN 978-0470128725.
  2. Computer Organization and Design: The Hardware/Software Interface. The Morgan Kaufmann Series in Computer Architecture and Design. Morgan Kaufmann. 2012. ISBN 978-0123747501.
  3. Distributed Systems: Concepts and Design. Pearson. 2012. ISBN 978-0132143011.
  4. Quinn, Michael Jay (1994). Parallel Computing: Theory and Practice. McGraw-Hill. ISBN 978-0070512948.
  5. Zomaya, Albert Y. (1996). Parallel and Distributed Computing Handbook. McGraw Hill Professional. ISBN 978-0070730205.
  6. Parallel and Concurrent Programming in Haskell. O'Reilly Media. 2013. ISBN 9781449335922.
  7. Cleaveland, Rance; Smolka, Scott (December 1996). "Strategic Directions in Concurrency Research". ACM Computing Surveys. 28 (4): 607. doi:10.1145/242223.242252. S2CID 13264261.
  8. Campbell, Colin; Johnson, Ralph; Miller, Ade; Toub, Stephen (August 2010). Parallel Programming with Microsoft .NET. Microsoft Press. ISBN 978-0-7356-5159-3.
  9. Filman, Robert; Friedman, Daniel (1984). Coordinated Computing: Tools and Techniques for Distributed Software. McGraw-Hill. ISBN 978-0-07-022439-1.
  10. Keller, Jörg; Keßler, Christoph; Träff, Jesper (2001). Practical PRAM Programming. John Wiley and Sons.
  11. Lee, Edward; Sangiovanni-Vincentelli, Alberto (December 1998). "A Framework for Comparing Models of Computation" (PDF). IEEE Transactions on CAD. 17 (12): 1217–1229. doi:10.1109/43.736561.
  12. Nielsen, Mogens; Sassone, Vladimiro; Winskel, Glynn (1993). "Relationships Between Models of Concurrency". REX School/Symposium.
  13. Knabe, Frederick (1992). "A Distributed Protocol for Channel-Based Communication with Choice". PARLE 1992.
  14. Clinger, William (June 1981). Foundations of Actor Semantics. Mathematics Doctoral Dissertation. MIT. hdl:1721.1/6935.
  15. Roscoe, Colin (2001). Modal and Temporal Properties of Processes. Springer. ISBN 978-0-387-98717-0.

Further reading

  • Lynch, Nancy A. (1996).Distributed Algorithms. Morgan Kaufmann.ISBN 978-1-55860-348-6.
  • Tanenbaum, Andrew S.; Van Steen, Maarten (2002).Distributed Systems: Principles and Paradigms. Prentice Hall.ISBN 978-0-13-088893-8.
  • Kurki-Suonio, Reino (2005).A Practical Theory of Reactive Systems. Springer.ISBN 978-3-540-23342-8.
  • Garg, Vijay K. (2002).Elements of Distributed Computing. Wiley-IEEE Press.ISBN 978-0-471-03600-5.
  • Magee, Jeff; Kramer, Jeff (2006).Concurrency: State Models and Java Programming. Wiley.ISBN 978-0-470-09355-9.
  • Distefano, S.; Bruneo, D. (2015). Quantitative Assessments of Distributed Systems: Methodologies and Techniques (1st ed.). Somerset: John Wiley & Sons. ISBN 9781119131144.
  • Bhattacharyya, S. S. (2013). Handbook of Signal Processing Systems (2nd ed.). New York, NY: Springer. doi:10.1007/978-1-4614-6859-2. ISBN 9781461468592.
  • Wolter, K. (2012). Resilience Assessment and Evaluation of Computing Systems (1st ed.). London; Berlin: Springer. ISBN 9783642290329.
