Introduction
The Linux compute accelerators subsystem is designed to expose compute accelerators in a common way to user-space and provide a common set of functionality.
These devices can be either stand-alone ASICs or IP blocks inside an SoC/GPU. Although these devices are typically designed to accelerate Machine-Learning (ML) and/or Deep-Learning (DL) computations, the accel layer is not limited to handling these types of accelerators.
Typically, a compute accelerator will belong to one of the following categories:
Edge AI - doing inference at an edge device. It can be an embedded ASIC/FPGA, or an IP inside an SoC (e.g. laptop web camera). These devices are typically configured using registers and can work with or without DMA.
Inference data-center - single/multi user devices in a large server. This type of device can be stand-alone or an IP inside a SoC or a GPU. It will have on-board DRAM (to hold the DL topology), DMA engines and command submission queues (either kernel or user-space queues). It might also have an MMU to manage multiple users and might also enable virtualization (SR-IOV) to support multiple VMs on the same device. In addition, these devices will usually come with some tools, such as a profiler and a debugger.
Training data-center - Similar to Inference data-center cards, but typically have more computational power and memory b/w (e.g. HBM) and will likely have a method of scaling-up/out, i.e. connecting to other training cards inside the server or in other servers, respectively.
All these devices typically have different runtime user-space software stacks that are tailor-made for their h/w. In addition, they will probably also include a compiler to generate programs for their custom-made computational engines. Typically, the common layer in user-space will be the DL frameworks, such as PyTorch and TensorFlow.
Sharing code with DRM
Because these devices can be IPs inside GPUs or have characteristics similar to those of GPUs, the accel subsystem will use the DRM subsystem's code and functionality, i.e. the accel core code will be part of the DRM subsystem and an accel device will be a new type of DRM device.
This will allow us to leverage the extensive DRM code-base and collaborate with DRM developers who have experience with this type of device. In addition, new features that are added for the accelerator drivers can be of use to GPU drivers as well.
Differentiation from GPUs
Because we want to prevent the extensive user-space graphics software stack from trying to use an accelerator as a GPU, the compute accelerators will be differentiated from GPUs by using a new major number and new device char files.
Furthermore, the drivers will be located in a separate place in the kernel tree - drivers/accel/.
The accelerator devices will be exposed to the user space with the dedicated 261 major number and will have the following convention:
device char files - /dev/accel/accel*
sysfs - /sys/class/accel/accel*/
debugfs - /sys/kernel/debug/accel/*/
Getting Started
First, read the DRM documentation in the GPU Driver Developer's Guide. Not only will it explain how to write a new DRM driver, but it also contains all the information on how to contribute, the Code of Conduct and the coding style/documentation guidelines. All of that applies equally to the accel subsystem.
Second, make sure the kernel is configured with CONFIG_DRM_ACCEL.
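As an illustration, a kernel configuration fragment that enables the accel core might look like the following (exact dependencies can vary between kernel versions):

```
# DRM_ACCEL builds the accel core as part of the DRM subsystem
CONFIG_DRM=y
CONFIG_DRM_ACCEL=y
```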
To expose your device as an accelerator, two changes need to be made in your driver (as opposed to a standard DRM driver):
Add the DRIVER_COMPUTE_ACCEL feature flag in your drm_driver's driver_features field. It is important to note that this driver feature is mutually exclusive with DRIVER_RENDER and DRIVER_MODESET. Devices that want to expose both graphics and compute device char files should be handled by two drivers that are connected using the auxiliary bus framework.
Change the open callback in your driver fops structure to accel_open(). Alternatively, your driver can use the DEFINE_DRM_ACCEL_FOPS macro to easily set the correct function operations pointers structure.
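As a sketch, the two changes above might look like the following in a hypothetical driver (the foo_* names are placeholders, not a real driver):

```c
#include <drm/drm_accel.h>
#include <drm/drm_drv.h>

/* Defines a file_operations struct with .open set to accel_open()
 * and the remaining fops set to the accel defaults.
 */
DEFINE_DRM_ACCEL_FOPS(foo_accel_fops);

static const struct drm_driver foo_driver = {
	/* DRIVER_COMPUTE_ACCEL is mutually exclusive with
	 * DRIVER_RENDER and DRIVER_MODESET.
	 */
	.driver_features = DRIVER_COMPUTE_ACCEL,
	.fops = &foo_accel_fops,
	.name = "foo",
	.desc = "Example compute accelerator",
};
```

With this in place, the device registers under /dev/accel/ rather than the usual DRM render/card nodes.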
External References
email threads
Initial discussion on the New subsystem for acceleration devices - Oded Gabbay (2022)
patch-set to add the new subsystem - Oded Gabbay (2022)
Conference talks
LPC 2022 Accelerators BOF outcomes summary - Dave Airlie (2022)