
Larrabee is the codename for a cancelled GPGPU chip that Intel was developing separately from its current line of integrated graphics accelerators. It is named after either Mount Larrabee or Larrabee State Park in the state of Washington.[1][2] The chip was to be released in 2010 as the core of a consumer 3D graphics card, but these plans were cancelled due to delays and disappointing early performance figures.[3][4] The project to produce a GPU retail product directly from the Larrabee research project was terminated in May 2010[5] and its technology was passed on to the Xeon Phi. The Intel MIC multiprocessor architecture announced in 2010 inherited many design elements from the Larrabee project, but does not function as a graphics processing unit; the product is intended as a co-processor for high-performance computing.
Almost a decade later, on June 12, 2018, the idea of an Intel dedicated GPU was revived with Intel's stated intention to create a discrete GPU by 2020.[6] This effort would eventually become the Intel Xe and Intel Arc series, released in September 2020 and March 2022, respectively, but both were unconnected to the work on the Larrabee project.
On December 4, 2009, Intel officially announced that the first-generation Larrabee would not be released as a consumer GPU product.[7] Instead, it was to be released as a development platform for graphics and high-performance computing. The official reason for the strategic reset was attributed to delays in hardware and software development.[8] On May 25, 2010, the Technology@Intel blog announced that Larrabee would not be released as a GPU, but instead would be released as a product for high-performance computing competing with the Nvidia Tesla.[9]
The project to produce a GPU retail product directly from the Larrabee research project was terminated in May 2010.[5] The Intel MIC multiprocessor architecture announced in 2010 inherited many design elements from the Larrabee project, but does not function as a graphics processing unit; the product is intended as a co-processor for high-performance computing. The prototype card was named Knights Ferry; a production card named Knights Corner, built on a 22 nm process, was planned for production in 2012 or later.[citation needed]

Larrabee can be considered a hybrid between a multi-core CPU and a GPU, and has similarities to both. Its coherent cache hierarchy and x86 architecture compatibility are CPU-like, while its wide SIMD vector units and texture sampling hardware are GPU-like.
As a GPU, Larrabee would have supported traditional rasterized 3D graphics (Direct3D and OpenGL) for games. However, its hybridization of CPU and GPU features should also have been suitable for general-purpose GPU (GPGPU) or stream processing tasks. For example, it might have performed ray tracing or physics processing,[10] in real time for games or offline for scientific research as a component of a supercomputer.[11]
Larrabee's early presentation drew some criticism from GPU competitors. At NVISION 08, an Nvidia employee called Intel's SIGGRAPH paper about Larrabee "marketing puff" and quoted an industry analyst (Peter Glaskowsky) who speculated that the Larrabee architecture was "like a GPU from 2006".[12] By June 2009, Intel claimed that prototypes of Larrabee were on par with the Nvidia GeForce GTX 285.[13] Justin Rattner, Intel CTO, delivered a keynote at the Supercomputing 2009 conference on November 17, 2009. During his talk he demonstrated an overclocked Larrabee processor topping one teraFLOPS in performance. He claimed this was the first public demonstration of a single-chip system exceeding one teraFLOPS. He pointed out that this was early silicon, leaving open the question of the architecture's eventual performance. Because this was only one fifth the performance of available competing graphics boards, Larrabee was cancelled "as a standalone discrete graphics product" on December 4, 2009.[3]
Larrabee was intended to differ from older discrete GPUs such as the GeForce 200 series and the Radeon 4000 series in three major ways:
- It was to use the x86 instruction set with Larrabee-specific extensions, rather than a proprietary graphics-oriented instruction set.
- It was to feature cache coherency across all of its cores.
- It was to include very little specialized graphics hardware, instead performing tasks such as z-buffering, clipping, and blending in software, using a tile-based rendering approach.
This approach was expected to make Larrabee more flexible than current GPUs, allowing more differentiation in appearance between games or other 3D applications. Intel's SIGGRAPH 2008 paper mentioned several rendering features that were difficult to achieve on current GPUs: render target read, order-independent transparency, irregular shadow mapping, and real-time ray tracing.[14]
More recent GPUs such as ATI's Radeon HD 5000 series and Nvidia's GeForce 400 series feature increasingly broad general-purpose computing capabilities via DirectX 11 DirectCompute and OpenCL, as well as Nvidia's proprietary CUDA technology, giving them many of the capabilities of Larrabee.
The x86 processor cores in Larrabee differed in several ways from the cores in then-current Intel CPUs such as the Core 2 Duo or Core i7:
- The cores were based on the much simpler, in-order P54C Pentium design, updated with features such as 64-bit support; their small size allowed many more cores per chip at the cost of single-threaded performance.
- Each core supported 4-way simultaneous multithreading to help hide memory latency.
- Each core contained a 512-bit vector processing unit able to operate on 16 single-precision floating-point values at once, programmed through a new instruction-set extension known as Larrabee New Instructions (LRBni).
- The only major fixed-function graphics hardware was a set of texture sampling units.
Theoretically, Larrabee's x86 processor cores would have been able to run existing PC software, or even operating systems. A different version of the processor might have sat in motherboard CPU sockets using QuickPath,[17] but Intel never announced any plans for this. Although Larrabee's native C/C++ compiler included auto-vectorization, and many applications were able to execute correctly after being recompiled, maximum efficiency was expected to require code optimization using C++ vector intrinsics or inline Larrabee assembly code.[14] However, as with all GPGPUs, not all software would have benefited from use of a vector processing unit. One tech journalism site claims that Larrabee's graphics capabilities were planned to be integrated into CPUs based on the Haswell microarchitecture.[18]
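To illustrate the difference between relying on an auto-vectorizing compiler and hand-writing vector intrinsics, the sketch below implements a simple SAXPY kernel both ways. It uses standard 4-wide x86 SSE intrinsics purely as a stand-in, since Larrabee's 16-wide LRBni intrinsics never shipped in a public compiler; the function names and structure are illustrative assumptions, not Larrabee code.

```cpp
// Illustrative only: 4-wide SSE intrinsics stand in for Larrabee's
// never-released 16-wide LRBni intrinsics.
#include <immintrin.h>
#include <cstddef>

// Plain scalar loop: the compiler's auto-vectorizer may or may not
// turn this into SIMD code.
void saxpy_scalar(float a, const float* x, float* y, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// Explicit intrinsics: the programmer chooses the vector width and data
// layout, the kind of hand optimization anticipated for peak efficiency.
void saxpy_sse(float a, const float* x, float* y, std::size_t n) {
    const __m128 va = _mm_set1_ps(a);
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 vx = _mm_loadu_ps(x + i);
        __m128 vy = _mm_loadu_ps(y + i);
        _mm_storeu_ps(y + i, _mm_add_ps(_mm_mul_ps(va, vx), vy));
    }
    for (; i < n; ++i)  // scalar remainder
        y[i] = a * x[i] + y[i];
}
```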
Larrabee's philosophy of using many small, simple cores was similar to the ideas behind the Cell processor. There are some further commonalities, such as the use of a high-bandwidth ring bus to communicate between cores.[14] However, there were many significant differences in implementation which were expected to make programming Larrabee simpler.
Intel began integrating a line of GPUs onto motherboards under the Intel GMA brand in 2004. Being integrated onto motherboards (newer versions, such as those released with Sandy Bridge, are incorporated onto the same die as the CPU), these chips were not sold separately. Though the low cost and power consumption of Intel GMA chips made them suitable for small laptops and less demanding tasks, they lacked the 3D graphics processing power to compete with contemporary Nvidia and AMD/ATI GPUs for a share of the high-end gaming computer market, the HPC market, or a place in popular video game consoles. In contrast, Larrabee was to be sold as a discrete GPU, separate from motherboards, and was expected to perform well enough for consideration in the next generation of video game consoles.[19][20]
The team working on Larrabee was separate from the Intel GMA team. The hardware was designed by a newly formed team at Intel's Hillsboro, Oregon, site, separate from those that designed the Nehalem. The software and drivers were written by a newly formed team. The 3D stack specifically was written by developers at RAD Game Tools (including Michael Abrash).[21]
The Intel Visual Computing Institute researched basic and applied technologies that could be applied to Larrabee-based products.[22]

Intel's SIGGRAPH 2008 paper describes cycle-accurate simulations (which modeled the limitations of memory, caches, and texture units) of Larrabee's projected performance.[14] Graphs show how many 1 GHz Larrabee cores would be required to maintain 60 frames per second at 1600×1200 resolution in several popular games: roughly 25 cores for Gears of War with no antialiasing, 25 cores for F.E.A.R. with 4× antialiasing, and 10 cores for Half-Life 2: Episode Two with 4× antialiasing. Intel claimed that Larrabee would likely run faster than 1 GHz, so these figures represent normalized 1 GHz core-equivalents rather than actual physical cores. Another graph shows that performance on these games scales nearly linearly with the number of cores up to 32 cores; at 48 cores, performance drops to 90% of what would be expected if the linear relationship had continued.[23]
A June 2007 PC Watch article suggested that the first Larrabee chips would feature 32 x86 processor cores and come out in late 2009, fabricated on a 45 nanometer process. Chips with a few defective cores due to yield issues would be sold as a 24-core version. Later in 2010, Larrabee would be shrunk to a 32 nanometer fabrication process to enable a 48-core version.[24]
The theoretical maximum performance of such a part can be calculated as follows: 32 cores × 16 single-precision SIMD lanes per core × 2 FLOPs (fused multiply-add) per cycle × 2 GHz = 2 TFLOPS.
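Expressed as code (restating only the figures already given above, with the 2 GHz clock as the assumed operating frequency), the peak-throughput arithmetic is:

```cpp
// Peak-throughput arithmetic for the hypothetical 32-core, 2 GHz part
// described above; the numbers restate the text, nothing is measured here.
constexpr double cores          = 32;   // x86 cores
constexpr double simd_lanes     = 16;   // single-precision lanes per vector unit
constexpr double flops_per_lane = 2;    // fused multiply-add counts as 2 FLOPs
constexpr double clock_ghz      = 2.0;  // assumed clock frequency

constexpr double peak_gflops = cores * simd_lanes * flops_per_lane * clock_ghz;
static_assert(peak_gflops == 2048.0, "32 x 16 x 2 x 2 GHz = 2048 GFLOPS, i.e. ~2 TFLOPS");
```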
A public demonstration of Larrabee's ray-tracing capabilities took place at the Intel Developer Forum in San Francisco on September 22, 2009. An experimental version of Enemy Territory: Quake Wars, titled Quake Wars: Ray Traced, was shown in real time. The scene contained a ray-traced water surface that accurately reflected the surrounding objects, such as a ship and several flying vehicles.[25][26][27]
A second demonstration was given at the SC09 conference in Portland on November 17, 2009, during a keynote by Intel CTO Justin Rattner. A Larrabee card was able to achieve 1006 GFLOPS in a 4096×4096 SGEMM (single-precision matrix multiplication) calculation.
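For context, the standard FLOP count for an N×N SGEMM is about 2N³ (one multiply and one add per inner-product step), so the reported rate can be related to the work per multiplication as in the short sketch below; the figures other than the 1006 GFLOPS rate are derived here, not reported by Intel.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    const std::uint64_t n = 4096;              // 4K x 4K matrices
    const double flops = 2.0 * n * n * n;      // ~1.37e11 FLOPs per SGEMM
    const double reported_gflops = 1006.0;     // rate reported at SC09

    // At the reported rate, one 4096x4096 multiplication takes roughly 0.14 s.
    std::printf("%.3e FLOPs per SGEMM, ~%.2f s per multiplication\n",
                flops, flops / (reported_gflops * 1e9));
    return 0;
}
```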
An engineering sample of a Larrabee card was procured and reviewed by Linus Sebastian in a video published May 14, 2018. However, he was unable to get the card to produce video output, with the motherboard displaying POST code D6.[28] In 2022, another card was demonstrated by YouTuber Roman "der8auer" Hartung, which was shown to be working and outputting a display signal but was not capable of 3D acceleration due to missing drivers.[29]