In computer architecture, memory-level parallelism (MLP) is the ability to have multiple memory operations pending at the same time, in particular cache misses or translation lookaside buffer (TLB) misses.
In a single processor, MLP may be considered a form of instruction-level parallelism (ILP). However, ILP is often conflated with superscalar execution, the ability to execute more than one instruction at the same time. For example, a processor such as the Intel Pentium Pro is five-way superscalar, able to start executing five different microinstructions in a given cycle, yet it can handle four different cache misses for up to 20 different load microinstructions at any time.
It is possible to have a machine that is not superscalar but which nevertheless has high MLP.
Arguably, a machine with no ILP, one that is not superscalar and executes one instruction at a time in a non-pipelined manner, but which performs hardware prefetching (as opposed to software instruction-level prefetching), exhibits MLP (due to multiple prefetches outstanding) but not ILP. This is because there are multiple memory operations outstanding, but not multiple instructions. Instructions are often conflated with operations.
Furthermore, multiprocessor and multithreaded computer systems may be said to exhibit MLP and ILP due to parallelism, but not intra-thread, single-process ILP and MLP. Often, however, the terms MLP and ILP are restricted to extracting such parallelism from what appears to be non-parallel single-threaded code.