
Heterogeneous System Architecture

From Wikipedia, the free encyclopedia

Heterogeneous System Architecture (HSA) is a cross-vendor set of specifications that allows for the integration of central processing units and graphics processors on the same bus, with shared memory and tasks.[1] HSA is being developed by the HSA Foundation, which includes (among many others) AMD and ARM. The platform's stated aim is to reduce communication latency between CPUs, GPUs and other compute devices, and to make these various devices more compatible from a programmer's perspective,[2]: 3 [3] relieving the programmer of the task of planning the movement of data between the devices' disjoint memories (as must currently be done with OpenCL or CUDA).[4]

CUDA and OpenCL, as well as most other sufficiently advanced programming languages, can use HSA to increase their execution performance.[5] Heterogeneous computing is widely used in system-on-chip devices such as tablets, smartphones, other mobile devices, and video game consoles.[6] HSA allows programs to use the graphics processor for floating-point calculations without separate memory or scheduling.[7]

Rationale


The rationale behind HSA is to ease the burden on programmers when offloading calculations to the GPU. Originally driven solely by AMD and called the FSA, the idea was extended to encompass processing units other than GPUs, such as other manufacturers' DSPs. The contrast between the non-HSA and HSA offload flows is sketched in code below.

  • Steps performed when offloading calculations to the GPU on a non-HSA system
  • Steps performed when offloading calculations to the GPU on an HSA system, using the HSA functionality
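
For illustration, the following is a minimal sketch (not taken from the HSA specification or any vendor document) of the first flow: on a non-HSA system the host program explicitly stages data around every kernel launch, shown here with standard OpenCL host calls. The names ctx, queue, kernel and offload_without_hsa are illustrative assumptions produced by the usual OpenCL setup.

```cpp
#include <CL/cl.h>
#include <cstddef>

// Sketch of the non-HSA offload flow: data must be copied into and out of the
// GPU's separate memory around every kernel launch. ctx, queue and kernel are
// assumed to come from the usual OpenCL setup calls (clCreateContext,
// clCreateCommandQueue, clCreateKernel).
int offload_without_hsa(cl_context ctx, cl_command_queue queue, cl_kernel kernel,
                        float* data, std::size_t n)
{
    cl_int err;
    // Step 1: allocate a buffer in the device's own memory.
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, n * sizeof(float), nullptr, &err);
    if (err != CL_SUCCESS) return -1;

    // Step 2: copy the input from system memory into the device buffer.
    clEnqueueWriteBuffer(queue, buf, CL_TRUE, 0, n * sizeof(float), data, 0, nullptr, nullptr);

    // Step 3: point the kernel at the device buffer and launch it.
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    std::size_t global = n;
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);

    // Step 4: copy the result back into system memory and release the buffer.
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, n * sizeof(float), data, 0, nullptr, nullptr);
    clReleaseMemObject(buf);
    return 0;
}
```

On an HSA system the copy steps disappear, because CPU and GPU can dereference the same pointers; a corresponding sketch appears in the Overview section below.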

Modern GPUs are very well suited to perform single instruction, multiple data (SIMD) and single instruction, multiple threads (SIMT) workloads, while modern CPUs are still optimized for branch-heavy, serial code.

Overview


Sharing system memory directly between multiple system actors, an approach originally introduced in embedded systems such as the Cell Broadband Engine, makes heterogeneous computing more mainstream. Heterogeneous computing itself refers to systems that contain multiple processing units – central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), or any type of application-specific integrated circuits (ASICs). The system architecture allows any accelerator, for instance a graphics processor, to operate at the same processing level as the system's CPU.

Among its main features, HSA defines a unified virtual address space for compute devices: where GPUs traditionally have their own memory, separate from the main (CPU) memory, HSA requires these devices to share page tables so that devices can exchange data by sharing pointers. This is to be supported by custom memory management units.[2]: 6–7  To render interoperability possible and also to ease various aspects of programming, HSA is intended to be ISA-agnostic for both CPUs and accelerators, and to support high-level programming languages.
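
The pointer-sharing style that HSA targets is exposed to applications through existing APIs. As one hedged illustration, OpenCL 2.0 shared virtual memory (SVM) lets a single allocation be accessed by both CPU and GPU, so only a pointer is handed to the kernel. The sketch below assumes an already-created context, queue and kernel and a device with fine-grained SVM support; offload_with_svm is an illustrative name.

```cpp
#include <CL/cl.h>
#include <cstddef>

// Sketch of the shared-pointer style enabled by HSA-class hardware: host and
// GPU work on the same allocation, so no explicit copy is ever enqueued.
int offload_with_svm(cl_context ctx, cl_command_queue queue, cl_kernel kernel, std::size_t n)
{
    // One allocation, visible to CPU and GPU through the same pointer.
    float* data = static_cast<float*>(
        clSVMAlloc(ctx, CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER, n * sizeof(float), 0));
    if (data == nullptr) return -1;

    for (std::size_t i = 0; i < n; ++i)    // the CPU writes directly ...
        data[i] = static_cast<float>(i);

    // ... and the kernel receives the pointer itself, not a copy of the data.
    clSetKernelArgSVMPointer(kernel, 0, data);
    std::size_t global = n;
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);
    clFinish(queue);                       // afterwards the CPU reads data[] in place

    clSVMFree(ctx, data);
    return 0;
}
```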

So far, the HSA specifications cover:

HSA Intermediate Layer


HSAIL (Heterogeneous System Architecture Intermediate Language), a virtual instruction set for parallel programs

HSA memory model

  • compatible with C++11, OpenCL, Java and .NET memory models
  • relaxed consistency
  • designed to support both managed languages (e.g. Java) and unmanaged languages (e.g. C)
  • will make it much easier to develop third-party compilers for a wide range of heterogeneous products programmed in Fortran, C++, C++ AMP, Java, et al. (a minimal C++11 example of the acquire/release synchronization this model relies on is shown below)
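
Because consistency is relaxed by default, ordering between producers and consumers must be established explicitly with acquire/release operations, exactly as in the C++11 model the specification aligns with. The snippet below is a minimal, generic C++11 sketch of that pattern and is not tied to any particular HSA implementation.

```cpp
#include <atomic>
#include <thread>
#include <cassert>

int payload = 0;
std::atomic<bool> ready{false};

void producer() {
    payload = 42;                                    // plain (non-atomic) write
    ready.store(true, std::memory_order_release);    // release: publishes the write above
}

void consumer() {
    while (!ready.load(std::memory_order_acquire))   // acquire: pairs with the release
        ;                                            // spin until the flag is observed
    assert(payload == 42);                           // guaranteed to see the payload
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}
```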

HSA dispatcher and run-time

  • designed to enable heterogeneous task queueing: a work queue per core, distribution of work into queues, load balancing by work stealing
  • any core can schedule work for any other, including itself
  • significant reduction of the overhead of scheduling work for a core (a simplified work-stealing sketch is shown below)
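
The sketch below is a deliberately simplified, mutex-based C++ illustration of that queueing policy, not of the actual HSA runtime interfaces: each worker owns a deque, pushes and pops work locally, and steals from another worker's queue when its own is empty. All names are illustrative.

```cpp
#include <deque>
#include <functional>
#include <mutex>
#include <vector>
#include <cstddef>

// Minimal work-stealing scheduler sketch: one queue per worker,
// any worker may push work, idle workers steal from the others.
class WorkStealingPool {
public:
    explicit WorkStealingPool(std::size_t workers)
        : queues_(workers), locks_(workers) {}

    // Any core can schedule work for any other, including itself.
    void push(std::size_t worker, std::function<void()> task) {
        std::lock_guard<std::mutex> g(locks_[worker]);
        queues_[worker].push_back(std::move(task));
    }

    // Called by worker `self`: run one task, preferring the local queue.
    bool run_one(std::size_t self) {
        std::function<void()> task;
        {   // 1. try the back of our own queue (LIFO keeps caches warm)
            std::lock_guard<std::mutex> g(locks_[self]);
            if (!queues_[self].empty()) {
                task = std::move(queues_[self].back());
                queues_[self].pop_back();
            }
        }
        if (!task) {
            // 2. otherwise steal from the front of some other queue (FIFO)
            for (std::size_t v = 0; v < queues_.size() && !task; ++v) {
                if (v == self) continue;
                std::lock_guard<std::mutex> g(locks_[v]);
                if (!queues_[v].empty()) {
                    task = std::move(queues_[v].front());
                    queues_[v].pop_front();
                }
            }
        }
        if (!task) return false;   // nothing to do anywhere
        task();
        return true;
    }

private:
    std::vector<std::deque<std::function<void()>>> queues_;
    std::vector<std::mutex> locks_;   // real runtimes use lock-free, hardware-visible queues
};
```

Real HSA runtimes dispatch work through user-mode queues visible to the hardware rather than through mutex-protected containers; the sketch only shows the scheduling policy.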

Mobile devices are one of HSA's application areas, in which it yields improved power efficiency.[6]

Block diagrams


The illustrations below compare CPU-GPU coordination under HSA versus under traditional architectures.

  • Standard architecture with a discrete GPU attached to the PCI Express bus. Zero-copy between the GPU and CPU is not possible due to distinct physical memories.
  • HSA brings unified virtual memory and facilitates passing pointers over PCI Express instead of copying the entire data.
  • In partitioned main memory, one part of the system memory is exclusively allocated to the GPU. As a result, zero-copy operation is not possible.
  • Unified main memory, where GPU and CPU are HSA-enabled. This makes zero-copy operation possible.[8]
  • The CPU's MMU and the GPU's IOMMU must both comply with HSA hardware specifications.

Software support

AMD GPUs contain certain additional functional units intended to be used as part of HSA. In Linux, the kernel driver amdkfd provides the required support.[9][10]

Some of the HSA-specific features implemented in the hardware need to be supported by the operating system kernel and specific device drivers. For example, support for AMD Radeon and AMD FirePro graphics cards, as well as APUs based on Graphics Core Next (GCN), was merged into version 3.19 of the Linux kernel mainline, released on 8 February 2015.[10] Programs do not interact directly with amdkfd, but queue their jobs using the HSA runtime.[11] This first implementation of amdkfd focuses on "Kaveri" and "Berlin" APUs and works alongside the existing Radeon kernel graphics driver.
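
A minimal sketch of that runtime path, assuming an HSA 1.x runtime such as AMD's ROCR is installed (header location and linking details vary by vendor): the application initializes the runtime, looks for a GPU agent and creates a user-mode queue, without ever addressing amdkfd directly.

```cpp
#include <hsa/hsa.h>
#include <cstdint>
#include <cstdio>

// Callback for hsa_iterate_agents: remember the first GPU agent and stop.
static hsa_status_t find_gpu(hsa_agent_t agent, void* data) {
    hsa_device_type_t type;
    hsa_agent_get_info(agent, HSA_AGENT_INFO_DEVICE, &type);
    if (type == HSA_DEVICE_TYPE_GPU) {
        *static_cast<hsa_agent_t*>(data) = agent;
        return HSA_STATUS_INFO_BREAK;   // stop the iteration early
    }
    return HSA_STATUS_SUCCESS;
}

int main() {
    if (hsa_init() != HSA_STATUS_SUCCESS) return 1;   // bring up the HSA runtime

    hsa_agent_t gpu = {};
    hsa_iterate_agents(find_gpu, &gpu);               // enumerate CPU/GPU agents

    // Create a user-mode queue on the GPU agent; work is later dispatched by
    // writing AQL packets into this queue rather than by calling the kernel driver.
    hsa_queue_t* queue = nullptr;
    hsa_status_t st = hsa_queue_create(gpu, 4096 /* packets, power of two */,
                                       HSA_QUEUE_TYPE_SINGLE, nullptr, nullptr,
                                       UINT32_MAX, UINT32_MAX, &queue);
    if (st == HSA_STATUS_SUCCESS) {
        std::printf("queue with %u packet slots created\n", queue->size);
        hsa_queue_destroy(queue);
    }
    hsa_shut_down();
    return 0;
}
```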

Additionally, amdkfd supports heterogeneous queuing (HQ), which aims to simplify the distribution of computational jobs among multiple CPUs and GPUs from the programmer's perspective. Support for heterogeneous memory management (HMM), suited only for graphics hardware featuring version 2 of AMD's IOMMU, was accepted into the Linux kernel mainline in version 4.14.[12]

Integrated support for HSA platforms has been announced for the "Sumatra" release of OpenJDK, due in 2015.[13]

AMD APP SDK is AMD's proprietary software development kit targeting parallel computing, available for Microsoft Windows and Linux. Bolt is a C++ template library optimized for heterogeneous computing.[14]
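
As a hedged illustration of Bolt's STL-like interface (assuming a Bolt release that ships the bolt::cl headers; exact header names may differ between versions), a sort can be dispatched to an OpenCL device with a single call:

```cpp
#include <bolt/cl/sort.h>
#include <vector>

int main() {
    std::vector<int> values = {9, 1, 8, 2, 7, 3};
    bolt::cl::sort(values.begin(), values.end());   // runs on an OpenCL device when available
    return 0;
}
```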

GPUOpen comprises several other software tools related to HSA. CodeXL version 2.0 includes an HSA profiler.[15]

Hardware support


AMD


As of February 2015, only AMD's "Kaveri" A-series APUs (cf. "Kaveri" desktop processors and "Kaveri" mobile processors) and Sony's PlayStation 4 allowed the integrated GPU to access memory via version 2 of AMD's IOMMU. Earlier APUs (Trinity and Richland) included the version 2 IOMMU functionality, but only for use by an external GPU connected via PCI Express.[citation needed]

Post-2015 Carrizo and Bristol Ridge APUs also include the version 2 IOMMU functionality for the integrated GPU.[citation needed]

The following table shows features of AMD's processors with 3D graphics, including APUs (see also: List of AMD processors with 3D graphics).

PlatformHigh, standard and low powerLow and ultra-low power
CodenameServerBasicToronto
MicroKyoto
DesktopPerformanceRaphaelPhoenix
MainstreamLlanoTrinityRichlandKaveriKaveri Refresh (Godavari)CarrizoBristol RidgeRaven RidgePicassoRenoirCezanne
Entry
BasicKabiniDalí
MobilePerformanceRenoirCezanneRembrandtDragon Range
MainstreamLlanoTrinityRichlandKaveriCarrizoBristol RidgeRaven RidgePicassoRenoir
Lucienne
Cezanne
Barceló
Phoenix
EntryDalíMendocino
BasicDesna, Ontario, ZacateKabini, TemashBeema, MullinsCarrizo-LStoney RidgePollock
EmbeddedTrinityBald EagleMerlin Falcon,
Brown Falcon
Great Horned OwlGrey HawkOntario, ZacateKabiniSteppe Eagle,Crowned Eagle,
LX-Family
Prairie FalconBanded KestrelRiver Hawk
ReleasedAug 2011Oct 2012Jun 2013Jan 20142015Jun 2015Jun 2016Oct 2017Jan 2019Mar 2020Jan 2021Jan 2022Sep 2022Jan 2023Jan 2011May 2013Apr 2014May 2015Feb 2016Apr 2019Jul 2020Jun 2022Nov 2022
CPUmicroarchitectureK10PiledriverSteamrollerExcavator"Excavator+"[16]ZenZen+Zen 2Zen 3Zen 3+Zen 4BobcatJaguarPumaPuma+[17]"Excavator+"ZenZen+"Zen 2+"
ISAx86-64 v1x86-64 v2x86-64 v3x86-64 v4x86-64 v1x86-64 v2x86-64 v3
SocketDesktopPerformanceAM5
MainstreamAM4
EntryFM1FM2FM2+FM2+[a],AM4AM4
BasicAM1FP5
OtherFS1FS1+,FP2FP3FP4FP5FP6FP7FL1FP7
FP7r2
FP8
FT1FT3FT3bFP4FP5FT5FP5FT6
PCI Express version2.03.04.05.04.02.03.0
CXL
Fab. (nm)GF32SHP
(HKMGSOI)
GF28SHP
(HKMG bulk)
GF14LPP
(FinFET bulk)
GF12LP
(FinFET bulk)
TSMCN7
(FinFET bulk)
TSMCN6
(FinFET bulk)
CCD: TSMCN5
(FinFET bulk)

cIOD: TSMCN6
(FinFET bulk)
TSMC4nm
(FinFET bulk)
TSMCN40
(bulk)
TSMCN28
(HKMG bulk)
GF 28SHP
(HKMG bulk)
GF14LPP
(FinFET bulk)
GF12LP
(FinFET bulk)
TSMCN6
(FinFET bulk)
Die area (mm2)228246245245250210[18]156180210CCD: (2x) 70
cIOD: 122
17875(+ 28FCH)107?125149~100
MinTDP (W)351712101565354.543.95106128
Max APUTDP (W)10095654517054182565415
Max stock APU base clock (GHz)33.84.14.13.73.83.63.73.84.03.34.74.31.752.222.23.22.61.23.352.8
Max APUs per node[b]11
Max core dies per CPU1211
Max CCX per core die1211
Max cores per CCX482424
MaxCPU[c]cores per APU481682424
Maxthreads per CPU core1212
Integer pipeline structure3+32+24+24+2+11+3+3+1+21+1+1+12+24+24+2+1
i386, i486, i586, CMOV, NOPL, i686,PAE,NX bit, CMPXCHG16B,AMD-V,RVI,ABM, and 64-bit LAHF/SAHFYesYes
IOMMU[d]v2v1v2
BMI1,AES-NI,CLMUL, andF16CYesYes
MOVBEYes
AVIC,BMI2,RDRAND, and MWAITX/MONITORXYes
SME[e],TSME[e],ADX,SHA,RDSEED,SMAP,SMEP, XSAVEC, XSAVES, XRSTORS, CLFLUSHOPT, CLZERO, and PTE CoalescingYesYes
GMET, WBNOINVD, CLWB, QOS, PQE-BW, RDPID, RDPRU, and MCOMMITYesYes
MPK,VAESYes
SGX
FPUs percore10.5110.51
Pipes per FPU22
FPU pipe width128-bit256-bit80-bit128-bit256-bit
CPUinstruction setSIMD levelSSE4a[f]AVXAVX2AVX-512SSSE3AVXAVX2
3DNow!3DNow!+
PREFETCH/PREFETCHWYesYes
GFNIYes
AMX
FMA4, LWP,TBM, andXOPYesYes
FMA3YesYes
AMD XDNAYes
L1 data cache per core (KiB)64163232
L1 data cacheassociativity (ways)2488
L1 instruction caches percore10.5110.51
Max APU total L1 instruction cache (KiB)2561281922565122566412896128
L1 instruction cacheassociativity (ways)23482348
L2 caches percore10.5110.51
Max APU total L2 cache (MiB)424161212
L2 cacheassociativity (ways)168168
Max on-dieL3 cache per CCX (MiB)416324
Max 3D V-Cache per CCD (MiB)64
Max total in-CCDL3 cache per APU (MiB)4816644
Max. total 3D V-Cache per APU (MiB)64
Max. boardL3 cache per APU (MiB)
Max totalL3 cache per APU (MiB)48161284
APU L3 cacheassociativity (ways)1616
L3 cache schemeVictimVictim
Max.L4 cache
Max stockDRAM supportDDR3-1866DDR3-2133DDR3-2133,DDR4-2400DDR4-2400DDR4-2933DDR4-3200,LPDDR4-4266DDR5-4800,LPDDR5-6400DDR5-5200DDR5-5600,LPDDR5x-7500DDR3L-1333DDR3L-1600DDR3L-1866DDR3-1866,DDR4-2400DDR4-2400DDR4-1600DDR4-3200LPDDR5-5500
MaxDRAM channels per APU21212
Max stockDRAMbandwidth (GB/s) per APU29.86634.13238.40046.93268.256102.40083.200120.00010.66612.80014.93319.20038.40012.80051.20088.000
GPUmicroarchitectureTeraScale 2 (VLIW5)TeraScale 3 (VLIW4)GCN 2nd genGCN 3rd genGCN 5th gen[19]RDNA 2RDNA 3TeraScale 2 (VLIW5)GCN 2nd genGCN 3rd gen[19]GCN 5th genRDNA 2
GPUinstruction setTeraScale instruction setGCN instruction setRDNA instruction setTeraScale instruction setGCN instruction setRDNA instruction set
Max stock GPU base clock (MHz)60080084486611081250140021002400400538600?847900120060013001900
Max stock GPU baseGFLOPS[g]480614.4648.1886.71134.517601971.22150.43686.4102.486???345.6460.8230.41331.2486.4
3D engine[h]Up to 400:20:8Up to 384:24:6Up to 512:32:8Up to 704:44:16[20]Up to 512:32:8768:48:8128:8:480:8:4128:8:4Up to 192:12:8Up to 192:12:4192:12:4Up to 512:?:?128:?:?
IOMMUv1IOMMUv2IOMMUv1?IOMMUv2
Video decoderUVD 3.0UVD 4.2UVD 6.0VCN 1.0[21]VCN 2.1[22]VCN 2.2[22]VCN 3.1?UVD 3.0UVD 4.0UVD 4.2UVD 6.2VCN 1.0VCN 3.1
Video encoderVCE 1.0VCE 2.0VCE 3.1VCE 2.0VCE 3.4
AMD Fluid MotionNoYesNoNoYesNo
GPU power savingPowerPlayPowerTunePowerPlayPowerTune[23]
TrueAudioYes[24]?Yes
FreeSync1
2
1
2
HDCP[i]?1.42.22.3?1.42.22.3
PlayReady[i]3.0 not yet3.0 not yet
Supported displays[j]2–32–433 (desktop)
4 (mobile, embedded)
42344
/drm/radeon[k][26][27]YesYes
/drm/amdgpu[k][28]Yes[29]Yes[29]
  1. ^ For FM2+ Excavator models: A8-7680, A6-7480 & Athlon X4 845.
  2. ^ A PC would be one node.
  3. ^ An APU combines a CPU and a GPU. Both have cores.
  4. ^ Requires firmware support.
  5. ^ a b Requires firmware support.
  6. ^ No SSE4. No SSSE3.
  7. ^ Single-precision performance is calculated from the base (or boost) core clock speed based on an FMA operation.
  8. ^ Unified shaders : texture mapping units : render output units
  9. ^ a b To play protected video content, it also requires card, operating system, driver, and application support. A compatible HDCP display is also needed for this. HDCP is mandatory for the output of certain audio formats, placing additional constraints on the multimedia setup.
  10. ^ To feed more than two displays, the additional panels must have native DisplayPort support.[25] Alternatively, active DisplayPort-to-DVI/HDMI/VGA adapters can be employed.
  11. ^ a b DRM (Direct Rendering Manager) is a component of the Linux kernel. Support in this table refers to the most current version.

ARM


ARM's Bifrost microarchitecture, as implemented in the Mali-G71,[30] is fully compliant with the HSA 1.1 hardware specifications. As of June 2016, ARM has not announced software support that would use this hardware feature.


References

  1. ^ Tarun Iyer (30 April 2013). "AMD Unveils its Heterogeneous Uniform Memory Access (hUMA) Technology". Tom's Hardware.
  2. ^ a b George Kyriazis (30 August 2012). Heterogeneous System Architecture: A Technical Review (PDF) (Report). AMD. Archived from the original (PDF) on 28 March 2014. Retrieved 26 May 2014.
  3. ^ "What is Heterogeneous System Architecture (HSA)?". AMD. Archived from the original on 21 June 2014. Retrieved 23 May 2014.
  4. ^ Joel Hruska (26 August 2013). "Setting HSAIL: AMD explains the future of CPU/GPU cooperation". ExtremeTech. Ziff Davis.
  5. ^ Linaro (21 March 2014). "LCE13: Heterogeneous System Architecture (HSA) on ARM". slideshare.net.
  6. ^ a b "Heterogeneous System Architecture: Purpose and Outlook". gpuscience.com. 9 November 2012. Archived from the original on 1 February 2014. Retrieved 24 May 2014.
  7. ^ "Heterogeneous system architecture: Multicore image processing using a mix of CPU and GPU elements". Embedded Computing Design. Retrieved 23 May 2014.
  8. ^ "Kaveri microarchitecture". SemiAccurate. 15 January 2014.
  9. ^ Michael Larabel (21 July 2014). "AMDKFD Driver Still Evolving For Open-Source HSA On Linux". Phoronix. Retrieved 21 January 2015.
  10. ^ a b "Linux kernel 3.19, Section 1.3. HSA driver for AMD GPU devices". kernelnewbies.org. 8 February 2015. Retrieved 12 February 2015.
  11. ^ "HSA-Runtime-Reference-Source/README.md at master". github.com. 14 November 2014. Retrieved 12 February 2015.
  12. ^ "Linux Kernel 4.14 Announced with Secure Memory Encryption and More". 13 November 2017. Archived from the original on 13 November 2017.
  13. ^ Alex Woodie (26 August 2013). "HSA Foundation Aims to Boost Java's GPU Prowess". HPCwire.
  14. ^ "Bolt on github". GitHub. 11 January 2022.
  15. ^ AMD GPUOpen (19 April 2016). "CodeXL 2.0 includes HSA profiler". Archived from the original on 27 June 2018. Retrieved 21 April 2016.
  16. ^ "AMD Announces the 7th Generation APU: Excavator mk2 in Bristol Ridge and Stoney Ridge for Notebooks". 31 May 2016. Retrieved 3 January 2020.
  17. ^ "AMD Mobile "Carrizo" Family of APUs Designed to Deliver Significant Leap in Performance, Energy Efficiency in 2015" (Press release). 20 November 2014. Retrieved 16 February 2015.
  18. ^ "The Mobile CPU Comparison Guide Rev. 13.0 Page 5 : AMD Mobile CPU Full List". TechARP.com. Retrieved 13 December 2017.
  19. ^ a b "AMD VEGA10 and VEGA11 GPUs spotted in OpenCL driver". VideoCardz.com. Retrieved 6 June 2017.
  20. ^ Cutress, Ian (1 February 2018). "Zen Cores and Vega: Ryzen APUs for AM4 – AMD Tech Day at CES: 2018 Roadmap Revealed, with Ryzen APUs, Zen+ on 12nm, Vega on 7nm". Anandtech. Retrieved 7 February 2018.
  21. ^ Larabel, Michael (17 November 2017). "Radeon VCN Encode Support Lands in Mesa 17.4 Git". Phoronix. Retrieved 20 November 2017.
  22. ^ a b "AMD Ryzen 5000G 'Cezanne' APU Gets First High-Res Die Shots, 10.7 Billion Transistors In A 180mm2 Package". wccftech. 12 August 2021. Retrieved 25 August 2021.
  23. ^ Tony Chen; Jason Greaves, "AMD's Graphics Core Next (GCN) Architecture" (PDF), AMD, retrieved 13 August 2016
  24. ^ "A technical look at AMD's Kaveri architecture". SemiAccurate. Retrieved 6 July 2014.
  25. ^ "How do I connect three or More Monitors to an AMD Radeon™ HD 5000, HD 6000, and HD 7000 Series Graphics Card?". AMD. Retrieved 8 December 2014.
  26. ^ Airlie, David (26 November 2009). "DisplayPort supported by KMS driver mainlined into Linux kernel 2.6.33". Retrieved 16 January 2016.
  27. ^ "Radeon feature matrix". freedesktop.org. Retrieved 10 January 2016.
  28. ^ Deucher, Alexander (16 September 2015). "XDC2015: AMDGPU" (PDF). Retrieved 16 January 2016.
  29. ^ a b Michel Dänzer (17 November 2016). "[ANNOUNCE] xf86-video-amdgpu 1.2.0". lists.x.org.
  30. ^ "ARM Bifrost GPU Architecture". 30 May 2016. Archived from the original on 10 September 2016.
  31. ^ Computer memory architecture for hybrid serial and parallel computing systems, US patents 7,707,388, 2010 and 8,145,879, 2012. Inventor: Uzi Vishkin
External links

Wikimedia Commons has media related to Heterogeneous System Architecture.

