
Single instruction, multiple data (SIMD) is a type of parallel computing (processing) in Flynn's taxonomy. SIMD describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously. SIMD can be internal (part of the hardware design) and it can be directly accessible through an instruction set architecture (ISA), but it should not be confused with an ISA.
Such machines exploit data level parallelism, but not concurrency: there are simultaneous (parallel) computations, but each unit performs exactly the same instruction at any given moment (just with different data). A simple example is adding many pairs of numbers together: all of the SIMD units perform an addition, but each one operates on a different pair of values. SIMD is especially applicable to common tasks such as adjusting the contrast in a digital image or adjusting the volume of digital audio. Most modern central processing unit (CPU) designs include SIMD instructions to improve the performance of multimedia use. In recent CPUs, SIMD units work closely with the cache hierarchy and hardware prefetchers to sustain throughput during large block operations; extensions such as AVX-512 also provide fused multiply-add (FMA) instructions, which perform a multiplication and an addition in a single operation.

SIMD has three different subcategories in Flynn's 1972 taxonomy, one of which is single instruction, multiple threads (SIMT). SIMT should not be confused with software threads or hardware threads, both of which are task time-sharing (time-slicing). SIMT is true simultaneous parallel hardware-level execution, such as in the ILLIAC IV.
SIMD should not be confused with vector processing, characterized by the Cray-1 and clarified in Duncan's taxonomy. The difference between SIMD and vector processors is primarily the presence of a Cray-style SET VECTOR LENGTH instruction.
The earliest known operational use of SIMD within a register was in the TX-2, in 1958. It was capable of 36-bit operations and two 18-bit or four 9-bit sub-word operations.
The first commercial use of SIMD instructions was in the ILLIAC IV, which was completed in 1972.
Vector supercomputers of the early 1970s such as the CDC Star-100 and the Texas Instruments ASC could operate on a "vector" of data with a single instruction. Vector processing was especially popularized by Cray in the 1970s and 1980s. Vector processing architectures are now considered separate from SIMD computers: Duncan's taxonomy includes them whereas Flynn's taxonomy does not, due to Flynn's work (1966, 1972) pre-dating the Cray-1 (1977). The complexity of vector processors, however, inspired a simpler arrangement known as SIMD within a register.
The first era of modern SIMD computers was characterized by massively parallel processing-style supercomputers such as the Thinking Machines Connection Machine CM-1 and CM-2. These computers had many limited-functionality processors that would work in parallel. For example, each of the 65,536 single-bit processors in a Thinking Machines CM-2 would execute the same instruction at the same time, making it possible, for instance, to logically combine 65,536 pairs of bits at a time, using a hypercube-connected network or processor-dedicated RAM to find its operands. Supercomputing moved away from the SIMD approach when inexpensive scalar multiple instruction, multiple data (MIMD) approaches based on commodity processors such as the Intel i860 XP became more powerful, and interest in SIMD waned.[3]
The current era of SIMD processors grew out of the desktop-computer market rather than the supercomputer market. As desktop processors became powerful enough to support real-time gaming and audio/video processing during the 1990s, demand grew for this type of computing power, and microprocessor vendors turned to SIMD to meet the demand.[4] The later rise of programmable shader models in DirectX and OpenGL reinforced this trend: the graphics APIs encouraged data-parallel programming styles, indirectly accelerating SIMD adoption in desktop software. Hewlett-Packard introduced Multimedia Acceleration eXtensions (MAX) instructions into PA-RISC 1.1 desktops in 1994 to accelerate MPEG decoding.[5] Sun Microsystems introduced SIMD integer instructions in its "VIS" instruction set extensions in 1995, in its UltraSPARC I microprocessor. MIPS followed suit with their similar MDMX system.
The first widely deployed desktop SIMD was with Intel's MMX extensions to the x86 architecture in 1996. This sparked the introduction of the much more powerful AltiVec system in the Motorola PowerPC and IBM's POWER systems. Intel responded in 1999 by introducing the all-new SSE system. Since then, there have been several extensions to the SIMD instruction sets for both architectures. Advanced Vector Extensions (AVX), AVX2 and AVX-512 are developed by Intel. AMD supports AVX, AVX2, and AVX-512 in their current products.[6]
With SIMD, an order-of-magnitude increase in code size is not uncommon when compared to equivalent scalar or equivalent vector code, while an order of magnitude or greater effectiveness (work done per instruction) is achievable with vector ISAs.[7]
ARM's Scalable Vector Extension takes another approach, known in Flynn's taxonomy, and more commonly today, as "predicated" (masked) SIMD. This approach is not as compact as vector processing but is still far better than non-predicated SIMD. Detailed comparative examples are given at Vector processor § Vector instruction example. In addition, all versions of the ARM architecture have offered Load and Store Multiple instructions, to load or store a block of data from a continuous block of memory into a range of registers or a non-contiguous set of registers.[8]
| Year | Example |
|---|---|
| 1974 | ILLIAC IV - an Array Processor comprising scalar 64-bit PEs |
| 1974 | ICL Distributed Array Processor (DAP) |
| 1976 | Burroughs Scientific Processor |
| 1981 | Geometric-Arithmetic Parallel Processor from Martin Marietta (continued at Lockheed Martin, then at Teranex and Silicon Optix) |
| 1983–1991 | Massively Parallel Processor (MPP), from NASA/Goddard Space Flight Center |
| 1985 | Connection Machine, models 1 and 2 (CM-1 and CM-2), from Thinking Machines Corporation |
| 1987–1996 | MasPar MP-1 and MP-2 |
| 1991 | Zephyr DC from Wavetracer |
| 2001 | Xplor from Pyxsys, Inc. |
Small-scale (64 or 128 bits) SIMD became popular on general-purpose CPUs in the early 1990s and continued through 1997 and later with Motion Video Instructions (MVI) for Alpha. SIMD instructions can be found, to one degree or another, on most CPUs, including IBM's AltiVec and Signal Processing Engine (SPE) for PowerPC, Hewlett-Packard's (HP) PA-RISC Multimedia Acceleration eXtensions (MAX), Intel's MMX and iwMMXt, Streaming SIMD Extensions (SSE), SSE2, SSE3, SSSE3 and SSE4.x, AMD's 3DNow!, ARC's ARC Video subsystem, SPARC's VIS and VIS2, Sun's MAJC, ARM's Neon technology, MIPS' MDMX (MaDMaX) and MIPS-3D.
Intel'sAVX-512 SIMD instructions process 512 bits of data at once.
The IBM, Sony, and Toshiba co-developed Cell processor's Synergistic Processing Element (SPE) instruction set is heavily SIMD-based.
Some, but not all, GPUs are SIMD-based. AMD GPUs since the TeraScale microarchitecture are SIMD-based, a feature that has remained in the 2020s microarchitectures RDNA and CDNA,[9] with a layer of single instruction, multiple threads (SIMT) above.[10] On the other hand, Nvidia's CUDA architectures use scalar cores with SIMT.[11]
Philips, now NXP, developed several SIMD processors named Xetal. The Xetal has 320 16-bit processor elements especially designed for vision tasks.
Apple's M1 and M2 chips also incorporate SIMD units: the CPU cores implement ordinary NEON, while the GPU and Neural Engine use Apple-designed data-parallel pipelines suited to image filtering, convolution, and matrix multiplication. The chips' unified memory architecture lets these units operate on shared memory pools efficiently.


SIMD instructions are widely used to process 3D graphics, although modern graphics cards with embedded SIMD have largely taken over this task from the CPU. Some systems also include permute functions that re-pack elements inside vectors, making them especially useful for data processing and compression. They are also used in cryptography.[12][13][14] The trend of general-purpose computing on GPUs (GPGPU) may lead to wider use of SIMD in the future. Compilers such as LLVM, the GNU Compiler Collection (GCC), and Intel's ICC offer auto-vectorization. Developers can often enable it with flags like -O3 or -ftree-vectorize, which guide the compiler to restructure loops for SIMD compatibility.
Adoption of SIMD systems in personal computer software was at first slow, due to a number of problems. One was that many of the early SIMD instruction sets tended to slow overall performance of the system due to the re-use of existing floating point registers. Other systems, like MMX and 3DNow!, offered support for data types that were not interesting to a wide audience and had expensive context switching instructions to switch between using the FPU and MMX registers. Compilers also often lacked support, requiring programmers to resort to assembly language coding.
SIMD on x86 had a slow start. The introduction of 3DNow! by AMD and SSE by Intel confused matters somewhat, but today the system seems to have settled down (after AMD adopted SSE) and newer compilers should result in more SIMD-enabled software. Intel and AMD now both provide optimized math libraries that use SIMD instructions, and open source alternatives like libSIMD, SIMDx86 and SLEEF have started to appear (see also libm).[15]
Apple Computer had somewhat more success, even though they entered the SIMD market later than the rest. AltiVec offered a rich system and can be programmed using increasingly sophisticated compilers from Motorola, IBM and GNU, so assembly language programming is rarely needed. Additionally, many of the systems that would benefit from SIMD were supplied by Apple itself, for example iTunes and QuickTime. However, in 2006, Apple computers moved to Intel x86 processors. Apple's APIs and development tools (Xcode) were modified to support SSE2 and SSE3 as well as AltiVec. Apple was the dominant purchaser of PowerPC chips from IBM and Freescale Semiconductor. Even though Apple has stopped using PowerPC processors in their products, further development of AltiVec is continued in several PowerPC and Power ISA designs from Freescale and IBM.
SIMD within a register, or SWAR, is a range of techniques and tricks used for performing SIMD in general-purpose registers on hardware that does not provide any direct support for SIMD instructions, allowing parallelism to be exploited in certain algorithms even on such hardware.
It is common for publishers of the SIMD instruction sets to make their own C and C++ language extensions with intrinsic functions or special datatypes (with operator overloading) guaranteeing the generation of vector code. Intel, AltiVec, and ARM NEON provide extensions widely adopted by the compilers targeting their CPUs. (More complex operations are the task of vector math libraries.)
The GNU C Compiler takes the extensions a step further by abstracting them into a universal interface that can be used on any platform by providing a way of defining SIMD datatypes.[16] The LLVM Clang compiler also implements the feature, with an analogous interface defined in the IR.[17] Rust's packed_simd crate (and the experimental std::simd) uses this interface, and so does Swift 2.0+.
C++ has an experimental interface std::experimental::simd that works similarly to the GCC extension. LLVM's libcxx seems to implement it.[citation needed] For GCC and libstdc++, a wrapper library that builds on top of the GCC extension is available.[18]
Microsoft added SIMD to .NET in RyuJIT.[19] The System.Numerics.Vector package, available on NuGet, implements SIMD datatypes.[20] Java also has a proposed API for SIMD instructions, available in OpenJDK 17 in an incubator module.[21] It also has a safe fallback mechanism that uses simple loops on unsupported CPUs.
Instead of providing a SIMD datatype, compilers can also be hinted to auto-vectorize some loops, potentially taking some assertions about the lack of data dependency. This is not as flexible as manipulating SIMD variables directly, but is easier to use. OpenMP 4.0+ has a #pragma omp simd hint.[22] This OpenMP interface has replaced a wide set of nonstandard extensions, including Cilk's #pragma simd,[23] GCC's #pragma GCC ivdep, and many more.[24]
An example of the use of platform-specific, generic vector-based, and generic hint-based interfaces is "SIMD everywhere", a collection of C/C++ headers implementing platform-specific intrinsics on top of other platforms (e.g. SSE intrinsics for ARM NEON).[25]
Consumer software is typically expected to work on a range of CPUs covering multiple generations, which could limit the programmer's ability to use new SIMD instructions to improve the computational performance of a program. The solution is to include multiple versions of the same code that uses either older or newer SIMD technologies, and pick one that best fits the user's CPU at run-time (dynamic dispatch). There are two main camps of solutions:
FMV, manually coded in assembly language, is quite commonly used in a number of performance-critical libraries such as glibc and libjpeg-turbo. Intel C++ Compiler, GNU Compiler Collection since GCC 6, and Clang since clang 7 allow for a simplified approach, with the compiler taking care of function duplication and selection. GCC and clang require explicit target_clones labels in the code to "clone" functions,[26] while ICC does so automatically (under the command-line option /Qax). The Rust programming language also supports FMV. The setup is similar to GCC and Clang in that the code defines what instruction sets to compile for, but cloning is done manually via inlining.[27]
As using FMV requires code modification on GCC and Clang, vendors more commonly use library multi-versioning: this is easier to achieve as only compiler switches need to be changed. Glibc supports LMV and this functionality is adopted by the Intel-backed Clear Linux project.[28]
In 2013 John McCutchan announced that he had created a high-performance interface to SIMD instruction sets for the Dart programming language, bringing the benefits of SIMD to web programs for the first time. The interface consists of two types:[29]
Instances of these types are immutable and in optimized code are mapped directly to SIMD registers. Operations expressed in Dart typically are compiled into a single instruction without any overhead. This is similar to C and C++ intrinsics. Benchmarks for 4×4 matrix multiplication, 3D vertex transformation, and Mandelbrot set visualization show near 400% speedup compared to scalar code written in Dart.
Intel announced at IDF 2013 that they were implementing McCutchan's specification for both V8 and SpiderMonkey.[30] However, by 2017, SIMD.js was taken out of the ECMAScript standard queue in favor of pursuing a similar interface in WebAssembly.[31] Support for SIMD was added to the WebAssembly 2.0 specification, which was finished in 2022 and became official in December 2024.[32] LLVM's auto-vectorization, when compiling C or C++ to WebAssembly, can target WebAssembly SIMD automatically, and SIMD intrinsics are also available.[33]
It has generally proven difficult to find sustainable commercial applications for SIMD-only processors in general-purpose computing.
One that has had some measure of success is the GAPP, which was developed by Lockheed Martin and taken to the commercial sector by their spin-off Teranex. The GAPP's recent incarnations have become a powerful tool in real-time video processing applications like conversion between various video standards and frame rates (NTSC to/from PAL, NTSC to/from high-definition television (HDTV) formats, etc.), deinterlacing, image noise reduction, adaptive video compression, and image enhancement.
A more ubiquitous application for SIMD is found in video games: nearly every modern video game console since 1998 has incorporated a SIMD processor somewhere in its architecture. The PlayStation 2 was unusual in that one of its vector-float units could function as an autonomous digital signal processor (DSP) executing its own instruction stream, or as a coprocessor driven by ordinary CPU instructions. 3D graphics applications tend to lend themselves well to SIMD processing as they rely heavily on operations with 4-dimensional vectors. Microsoft's Direct3D 9.0 now chooses at runtime processor-specific implementations of its own math operations, including the use of SIMD-capable instructions.
A later processor that used vector processing is the Cell processor used in the PlayStation 3, which was developed by IBM in cooperation with Toshiba and Sony. It uses a number of SIMD processors (a non-uniform memory access (NUMA) architecture, each with independent local store and controlled by a general purpose CPU) and is geared towards the huge datasets required by 3D and video processing applications. It differs from traditional ISAs by being SIMD from the ground up with no separate scalar registers.
Ziilabs produced a SIMD-type processor for use in mobile devices, such as media players and mobile phones.[34]
Larger scale commercial SIMD processors are available from ClearSpeed Technology, Ltd. and Stream Processors, Inc. ClearSpeed's CSX600 (2004) has 96 cores each with two double-precision floating point units, while the CSX700 (2008) has 192. Stream Processors is headed by computer architect Bill Dally. Their Storm-1 processor (2007) contains 80 SIMD cores controlled by a MIPS CPU.
On both NVIDIA and AMD, matrix instructions break the SIMT abstraction model and work across a whole wavefront (or "warp" on NVIDIA).