Task parallelism

From Wikipedia, the free encyclopedia

Task parallelism (also known as function parallelism and control parallelism) is a form of parallelization of computer code across multiple processors in parallel computing environments. Task parallelism focuses on distributing tasks—concurrently performed by processes or threads—across different processors. In contrast to data parallelism, which involves running the same task on different components of data, task parallelism is distinguished by running many different tasks at the same time on the same data.[1] A common type of task parallelism is pipelining, which consists of moving a single set of data through a series of separate tasks where each task can execute independently of the others.
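To make the pipelining idea concrete, here is a minimal sketch in Go (the language choice and the stage names are illustrative, not part of the original article): each stage runs as its own task, and data items flow between stages through channels, so the stages overlap in time.

package main

import "fmt"

// Each pipeline stage is a separate task; data flows between stages
// through channels, so all stages can execute concurrently on
// different items.
func generate(nums []int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			out <- n * n
		}
	}()
	return out
}

func main() {
	// While "square" processes one item, "generate" can already be
	// producing the next: the two stages run at the same time.
	for v := range square(generate([]int{1, 2, 3, 4})) {
		fmt.Println(v)
	}
}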

Description


In a multiprocessor system, task parallelism is achieved when each processor executes a different thread (or process) on the same or different data. The threads may execute the same or different code. In the general case, different execution threads communicate with one another as they work, but this is not a requirement. Communication usually takes place by passing data from one thread to the next as part of a workflow.[2]

As a simple example, consider code running on a 2-processor system (CPUs "a" and "b") in a parallel environment, where we wish to perform tasks "A" and "B". It is possible to tell CPU "a" to do task "A" and CPU "b" to do task "B" simultaneously, thereby reducing the run time of the execution. The tasks can be assigned using conditional statements as described below.

Task parallelism emphasizes the distributed (parallelized) nature of the processing (i.e. threads), as opposed to the data (data parallelism). Most real programs fall somewhere on a continuum between task parallelism and data parallelism.[3]

Thread-level parallelism (TLP) is the parallelism inherent in an application that runs multiple threads at once. This type of parallelism is found largely in applications written for commercial servers such as databases. By running many threads at once, these applications are able to tolerate the high amounts of I/O and memory system latency their workloads can incur: while one thread is delayed waiting for a memory or disk access, other threads can do useful work.
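A minimal sketch of this latency tolerance in Go (the thread count and the sleep standing in for a blocking disk or network access are illustrative assumptions):

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var wg sync.WaitGroup
	// Launch several request handlers at once; while one is blocked
	// on (simulated) I/O, the runtime schedules the others.
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			time.Sleep(50 * time.Millisecond) // stands in for a memory or disk wait
			fmt.Printf("request %d served\n", id)
		}(i)
	}
	wg.Wait() // total wall time is roughly one wait, not four
}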

The exploitation of thread-level parallelism has also begun to make inroads into the desktop market with the advent of multi-core microprocessors. This has occurred because, for various reasons, it has become increasingly impractical to increase either the clock speed or instructions per clock of a single core. If this trend continues, new applications will have to be designed to utilize multiple threads in order to benefit from the increase in potential computing power. This contrasts with previous microprocessor innovations in which existing code was automatically sped up by running it on a newer/faster computer.

Example


The pseudocode below illustrates task parallelism:

program:
...
if CPU = "a" then
    do task "A"
else if CPU = "b" then
    do task "B"
end if
...
end program

The goal of the program is to accomplish some total task ("A + B"). If we write the code as above and launch it on a 2-processor system, then the runtime environment will execute it as follows.

  • In an SPMD (single program, multiple data) system, both CPUs will execute the code.
  • In a parallel environment, both will have access to the same data.
  • The "if" clause differentiates between the CPUs: CPU "a" will read true on the "if" and CPU "b" will read true on the "else if", so each has its own task.
  • Both CPUs then execute separate code blocks simultaneously, performing different tasks.

Code executed by CPU "a":

program:...do task "A"...end program

Code executed by CPU "b":

program:...do task "B"...end program

This concept can now be generalized to any number of processors.
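For comparison, here is a runnable sketch of the same program in Go (an illustrative choice; the task bodies are placeholders): instead of branching on a CPU name, each task is handed to its own thread of execution, and the task list generalizes "A" and "B" to any number of independent tasks.

package main

import (
	"fmt"
	"sync"
)

func main() {
	// One entry per task; add entries to generalize beyond "A" and "B".
	tasks := map[string]func(){
		"A": func() { fmt.Println("doing task A") },
		"B": func() { fmt.Println("doing task B") },
	}

	var wg sync.WaitGroup
	for name, task := range tasks {
		wg.Add(1)
		go func(name string, task func()) { // each task gets its own goroutine
			defer wg.Done()
			fmt.Printf("starting task %q\n", name)
			task()
		}(name, task)
	}
	wg.Wait() // wait for every task to finish
}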

Language support


Task parallelism can be supported in general-purpose languages through either built-in facilities or libraries. Examples of fine-grained task-parallel languages can be found in the realm of Hardware Description Languages like Verilog and VHDL.


References

  1. ^ Reinders, James (10 September 2007). "Understanding task and data parallelism". ZDNet. Retrieved 8 May 2017.
  2. ^ Quinn, Michael J. (2007). Parallel Programming in C with MPI and OpenMP. New Delhi: Tata McGraw-Hill. ISBN 978-0070582019.
  3. ^ Hicks, Michael. "Concurrency Basics" (PDF). University of Maryland: Department of Computer Science. Retrieved 8 May 2017.