
Instruction-level parallelism (ILP) is the parallel or simultaneous execution of a sequence of instructions in a computer program. More specifically, ILP refers to the average number of instructions run per step of this parallel execution.[2]: 5
ILP must not be confused with concurrency. In ILP, there is a single specific thread of execution of a process. Concurrency, on the other hand, involves assigning multiple threads to a CPU core in strict alternation, or in true parallelism if there are enough CPU cores, ideally one core for each runnable thread.
There are two approaches to instruction-level parallelism: hardware and software.
ILP is implemented either as hardware-level dynamic parallelism or as software-level static parallelism. With hardware-level parallelism, the processor decides at run time which instructions to execute in parallel, whereas with software-level parallelism the compiler plans, ahead of time, which instructions to execute in parallel.[3] Modern x86 processors use multiple techniques to achieve hardware-level parallelism, while the Itanium architecture made significant software-level parallelism possible, but also relied on it for its code to be efficient.
Consider the following program:
e = a + b
f = c + d
m = e * f
Operation 3 depends on the results of operations 1 and 2, so it cannot be calculated until both of them are completed. However, operations 1 and 2 do not depend on any other operation, so they can be calculated simultaneously. If we assume that each operation can be completed in one unit of time, then these three instructions can be completed in a total of two units of time, giving an ILP of 3/2.
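The ILP calculation above can be sketched in code: a minimal Python function (a hypothetical helper, not from the article) that schedules each instruction one step after its latest dependency, assuming every operation takes one unit of time as the text does.

```python
# Minimal sketch: compute the ILP of a short instruction sequence by
# placing each instruction one step after the latest step of anything
# it depends on. Assumes one unit of time per operation.

def ilp(deps):
    """deps maps each instruction to the set of instructions it depends on,
    listed in program order (dependencies before dependents)."""
    step = {}
    for instr in deps:
        step[instr] = 1 + max((step[d] for d in deps[instr]), default=0)
    total_steps = max(step.values())
    return len(deps) / total_steps

# The example: e = a + b ; f = c + d ; m = e * f
deps = {"e": set(), "f": set(), "m": {"e", "f"}}
print(ilp(deps))  # 3 instructions in 2 time units -> 1.5
```

Here `e` and `f` land in step 1 and `m` in step 2, reproducing the ILP of 3/2 derived in the text.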
A goal ofcompiler andprocessor designers is to identify and take advantage of as much ILP as possible. Ordinary programs are typically written under a sequential execution model where instructions execute one after the other and in the order specified by the programmer. ILP allows the compiler and the processor to overlap the execution of multiple instructions or even to change the order in which instructions are executed.
How much ILP exists in programs is very application-specific. In certain fields, such as graphics andscientific computing, the amount can be very large. However, workloads such ascryptography may exhibit much less parallelism.
Micro-architectural techniques that are used to exploit ILP include instruction pipelining, superscalar execution, out-of-order execution, register renaming, speculative execution, and branch prediction.
ILP is exploited by both the compiler and hardware, but the compiler also provides inherent and implicit ILP in programs to hardware by compile-time optimizations. Some optimization techniques for extracting available ILP in programs include instruction scheduling, register allocation/renaming, and memory-access optimization.
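Register renaming, one of the compile-time techniques just mentioned, can be sketched as follows. This is a toy illustration with hypothetical register names, not a real compiler pass: reusing a register creates a false (write-after-read/write-after-write) dependency, and giving each write a fresh name removes it.

```python
# Toy register renaming: give every write a fresh versioned name so that
# instruction sequences reusing the same register no longer falsely
# depend on each other and can be scheduled in parallel.

def rename(instrs):
    """instrs: list of (dest, src1, op, src2) three-address instructions."""
    version = {}   # current version number of each architectural register

    def cur(r):
        return f"{r}.{version[r]}" if r in version else r

    out = []
    for dest, s1, op, s2 in instrs:
        s1, s2 = cur(s1), cur(s2)          # read current versions
        version[dest] = version.get(dest, 0) + 1  # fresh name for the write
        out.append((cur(dest), s1, op, s2))
    return out

prog = [("r1", "a", "+", "b"),
        ("x",  "r1", "*", "2"),
        ("r1", "c", "+", "d"),   # reuses r1: false dependency on line 1
        ("y",  "r1", "*", "3")]
for instr in rename(prog):
    print(instr)
```

After renaming, the two writes to `r1` become `r1.1` and `r1.2`, so the `(a+b, x)` and `(c+d, y)` chains share no register and can execute concurrently.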
Dataflow architectures are another class of architectures where ILP is explicitly specified; for one example, see the TRIPS architecture.
In recent years, ILP techniques have been used to provide performance improvements in spite of the growing disparity between processor operating frequencies and memory access times (early ILP designs such as the IBM System/360 Model 91 used ILP techniques to overcome the limitations imposed by a relatively small register file). Today, a cache miss to main memory costs several hundred CPU cycles. While in principle it is possible to use ILP to tolerate even such memory latencies, the associated resource and power dissipation costs are disproportionate. Moreover, the complexity and often the latency of the underlying hardware structures reduce the operating frequency, further reducing any benefits. Hence, the aforementioned techniques prove inadequate to keep the CPU from stalling for off-chip data. Instead, the industry is heading towards exploiting higher levels of parallelism through techniques such as multiprocessing and multithreading.[4]