Buffer Sharing and Synchronization

The dma-buf subsystem provides the framework for sharing buffers for hardware (DMA) access across multiple device drivers and subsystems, and for synchronizing asynchronous hardware access.

This is used, for example, by drm “prime” multi-GPU support, but is of course not limited to GPU use cases.

The three main components of this are: (1) dma-buf, representing a sg_table and exposed to userspace as a file descriptor to allow passing between devices, (2) fence, which provides a mechanism to signal when one device has finished access, and (3) reservation, which manages the shared or exclusive fence(s) associated with the buffer.

Shared DMA Buffers

This document serves as a guide for device-driver writers on what the dma-buf buffer sharing API is and how to use it for exporting and using shared buffers.

Any device driver which wishes to be a part of DMA buffer sharing can do so as either the ‘exporter’ of buffers, or the ‘user’ or ‘importer’ of buffers.

Say a driver A wants to use buffers created by driver B, then we call B the exporter, and A the buffer-user/importer.

The exporter

  • implements and manages operations in struct dma_buf_ops for the buffer,
  • allows other users to share the buffer by using dma_buf sharing APIs,
  • manages the details of buffer allocation, wrapped in a struct dma_buf,
  • decides about the actual backing storage where this allocation happens,
  • and takes care of any migration of scatterlist - for all (shared) users of this buffer.

The buffer-user

  • is one of (many) sharing users of the buffer.
  • doesn’t need to worry about how the buffer is allocated, or where.
  • and needs a mechanism to get access to the scatterlist that makes up this buffer in memory, mapped into its own address space, so it can access the same area of memory. This interface is provided by struct dma_buf_attachment.

Any exporters or users of the dma-buf buffer sharing framework must have a ‘select DMA_SHARED_BUFFER’ in their respective Kconfigs.

Userspace Interface Notes

Mostly a DMA buffer file descriptor is simply an opaque object for userspace, and hence the generic interface exposed is very minimal. There are a few things to consider though:

  • Since kernel 3.12 the dma-buf FD supports the llseek system call, but only with offset=0 and whence=SEEK_END|SEEK_SET. SEEK_SET is supported to allow the usual size discovery pattern size = SEEK_END(0); SEEK_SET(0). Every other llseek operation will report -EINVAL.

    If llseek on dma-buf FDs isn’t supported the kernel will report -ESPIPE for all cases. Userspace can use this to detect support for discovering the dma-buf size using llseek.

  • In order to avoid fd leaks on exec, the FD_CLOEXEC flag must be set on the file descriptor. This is not just a resource leak, but a potential security hole. It could give the newly exec’d application access to buffers, via the leaked fd, to which it should otherwise not be permitted access.

    The problem with doing this via a separate fcntl() call, versus doing it atomically when the fd is created, is that this is inherently racy in a multi-threaded app[3]. The issue is made worse when it is library code opening/creating the file descriptor, as the application may not even be aware of the fds.

    To avoid this problem, userspace must have a way to request that the O_CLOEXEC flag be set when the dma-buf fd is created. So any API provided by the exporting driver to create a dmabuf fd must provide a way to let userspace control setting of the O_CLOEXEC flag passed in to dma_buf_fd().

  • Memory mapping the contents of the DMA buffer is also supported. See the discussion below on CPU Access to DMA Buffer Objects for the full details.

  • The DMA buffer FD is also pollable, see Fence Poll Support below for details.

Basic Operation and Device DMA Access

For device DMA access to a shared DMA buffer the usual sequence of operations is fairly simple:

  1. The exporter defines his exporter instance using DEFINE_DMA_BUF_EXPORT_INFO() and calls dma_buf_export() to wrap a private buffer object into a dma_buf. It then exports that dma_buf to userspace as a file descriptor by calling dma_buf_fd().

  2. Userspace passes this file descriptor to all drivers it wants this buffer to share with: First the file descriptor is converted to a dma_buf using dma_buf_get(). Then the buffer is attached to the device using dma_buf_attach().

    Up to this stage the exporter is still free to migrate or reallocate the backing storage.

  3. Once the buffer is attached to all devices userspace can initiate DMA access to the shared buffer. In the kernel this is done by calling dma_buf_map_attachment() and dma_buf_unmap_attachment().

  4. Once a driver is done with a shared buffer it needs to call dma_buf_detach() (after cleaning up any mappings) and then release the reference acquired with dma_buf_get() by calling dma_buf_put().

For the detailed semantics exporters are expected to implement see dma_buf_ops.
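Steps 2-4 above, seen from the importing driver, can be sketched as follows (kernel code; error handling is abbreviated, and fd and dev are assumed to be supplied by the surrounding driver):

```c
/* Importer-side sketch. `fd` arrived from userspace; `dev` is the
 * importing struct device. IS_ERR() checks omitted for brevity. */
struct dma_buf *dmabuf;
struct dma_buf_attachment *attach;
struct sg_table *sgt;

dmabuf = dma_buf_get(fd);                  /* fd -> dma_buf reference */
attach = dma_buf_attach(dmabuf, dev);      /* step 2: attach device   */
sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);  /* step 3  */

/* ... program the device using the returned sg_table ... */

dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
dma_buf_detach(dmabuf, attach);            /* step 4: clean up        */
dma_buf_put(dmabuf);                       /* drop dma_buf_get() ref  */
```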

CPU Access to DMA Buffer Objects

There are multiple reasons for supporting CPU access to a dma buffer object:

  • Fallback operations in the kernel, for example when a device is connected over USB and the kernel needs to shuffle the data around first before sending it away. Cache coherency is handled by bracketing any transactions with calls to dma_buf_begin_cpu_access() and dma_buf_end_cpu_access().

    Since most kernel internal dma-buf accesses need the entire buffer, a vmap interface is introduced. Note that on very old 32-bit architectures vmalloc space might be limited and result in vmap calls failing.

    Interfaces::

        void *dma_buf_vmap(struct dma_buf *dmabuf)
        void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)

    The vmap call can fail if there is no vmap support in the exporter, or if it runs out of vmalloc space. Fallback to kmap should be implemented. Note that the dma-buf layer keeps a reference count for all vmap access and calls down into the exporter’s vmap function only when no vmapping exists, and only unmaps it once. Protection against concurrent vmap/vunmap calls is provided by taking the dma_buf->lock mutex.

  • For full compatibility on the importer side with existing userspace interfaces, which might already support mmap’ing buffers. This is needed in many processing pipelines (e.g. feeding a software rendered image into a hardware pipeline, thumbnail creation, snapshots, …). Also, Android’s ION framework already supported this and for DMA buffer file descriptors to replace ION buffers mmap support was needed.

    There are no special interfaces, userspace simply calls mmap on the dma-buf fd. But like for CPU access there’s a need to bracket the actual access, which is handled by the ioctl (DMA_BUF_IOCTL_SYNC). Note that DMA_BUF_IOCTL_SYNC can fail with -EAGAIN or -EINTR, in which case it must be restarted.

    Some systems might need some sort of cache coherency management e.g. when CPU and GPU domains are being accessed through dma-buf at the same time. To circumvent this problem there are begin/end coherency markers, that forward directly to existing dma-buf device drivers vfunc hooks. Userspace can make use of those markers through the DMA_BUF_IOCTL_SYNC ioctl. The sequence would be used like following:

    • mmap dma-buf fd
    • for each drawing/upload cycle in CPU: 1. SYNC_START ioctl, 2. read/write to mmap area, 3. SYNC_END ioctl. This can be repeated as often as you want (with the new data being consumed by say the GPU or the scanout device)
    • munmap once you don’t need the buffer any more

    For correctness and optimal performance, it is always required to use SYNC_START and SYNC_END before and after, respectively, when accessing the mapped address. Userspace cannot rely on coherent access, even when there are systems where it just works without calling these ioctls.

  • And as a CPU fallback in userspace processing pipelines.

    Similar to the motivation for kernel cpu access it is again important that the userspace code of a given importing subsystem can use the same interfaces with an imported dma-buf buffer object as with a native buffer object. This is especially important for drm where the userspace part of contemporary OpenGL, X, and other drivers is huge, and reworking them to use a different way to mmap a buffer would be rather invasive.

    The assumption in the current dma-buf interfaces is that redirecting the initial mmap is all that’s needed. A survey of some of the existing subsystems shows that no driver seems to do any nefarious thing like syncing up with outstanding asynchronous processing on the device or allocating special resources at fault time. So hopefully this is good enough, since adding interfaces to intercept pagefaults and allow pte shootdowns would increase the complexity quite a bit.

    Interface::

        int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
                         unsigned long);

    If the importing subsystem simply provides a special-purpose mmap call to set up a mapping in userspace, calling do_mmap with dma_buf->file will equally achieve that for a dma-buf object.

Fence Poll Support

To support cross-device and cross-driver synchronization of buffer access implicit fences (represented internally in the kernel with struct fence) can be attached to a dma_buf. The glue for that and a few related things are provided in the dma_resv structure.

Userspace can query the state of these implicitly tracked fences using poll()and related system calls:

  • Checking for EPOLLIN, i.e. read access, can be used to query the state of the most recent write or exclusive fence.
  • Checking for EPOLLOUT, i.e. write access, can be used to query the state of all attached fences, shared and exclusive ones.

Note that this only signals the completion of the respective fences, i.e. the DMA transfers are complete. Cache flushing and any other necessary preparations before CPU access can begin still need to happen.

Kernel Functions and Structures Reference

struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)

Creates a new dma_buf, and associates an anon file with this buffer, so it can be exported. Also connect the allocator specific data and ops to the buffer. Additionally, provide a name string for exporter; useful in debugging.

Parameters

const struct dma_buf_export_info *exp_info
[in] holds all the export related information provided by the exporter. See struct dma_buf_export_info for further details.

Description

Returns, on success, a newly created dma_buf object, which wraps the supplied private data and operations for dma_buf_ops. On either missing ops, or error in allocating struct dma_buf, will return negative error.

For most cases the easiest way to create exp_info is through the DEFINE_DMA_BUF_EXPORT_INFO macro.

int dma_buf_fd(struct dma_buf *dmabuf, int flags)

returns a file descriptor for the given dma_buf

Parameters

struct dma_buf *dmabuf
[in] pointer to dma_buf for which fd is required.
int flags
[in] flags to give to fd

Description

On success, returns an associated ‘fd’. Else, returns error.

struct dma_buf *dma_buf_get(int fd)

returns the dma_buf structure related to an fd

Parameters

int fd
[in] fd associated with the dma_buf to be returned

Description

On success, returns the dma_buf structure associated with an fd; uses file’s refcounting done by fget to increase refcount. Returns ERR_PTR otherwise.

void dma_buf_put(struct dma_buf *dmabuf)

decreases refcount of the buffer

Parameters

struct dma_buf *dmabuf
[in] buffer to reduce refcount of

Description

Uses file’s refcounting done implicitly by fput().

If, as a result of this call, the refcount becomes 0, the ‘release’ file operation related to this fd is called. It calls dma_buf_ops.release vfunc in turn, and frees the memory allocated for dmabuf when exported.

struct dma_buf_attachment *dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, const struct dma_buf_attach_ops *importer_ops, void *importer_priv)

Add the device to dma_buf’s attachments list; optionally, calls attach() of dma_buf_ops to allow device-specific attach functionality

Parameters

struct dma_buf *dmabuf
[in] buffer to attach device to.
struct device *dev
[in] device to be attached.
const struct dma_buf_attach_ops *importer_ops
[in] importer operations for the attachment
void *importer_priv
[in] importer private pointer for the attachment

Description

Returns struct dma_buf_attachment pointer for this attachment. Attachments must be cleaned up by calling dma_buf_detach().

Return

A pointer to a newly created dma_buf_attachment on success, or a negative error code wrapped into a pointer on failure.

Note that this can fail if the backing storage of dmabuf is in a place not accessible to dev, and cannot be moved to a more suitable place. This is indicated with the error code -EBUSY.

struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf, struct device *dev)

Wrapper for dma_buf_dynamic_attach

Parameters

struct dma_buf *dmabuf
[in] buffer to attach device to.
struct device *dev
[in] device to be attached.

Description

Wrapper to call dma_buf_dynamic_attach() for drivers which still use a static mapping.

void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach)

Remove the given attachment from dmabuf’s attachments list; optionally calls detach() of dma_buf_ops for device-specific detach

Parameters

struct dma_buf *dmabuf
[in] buffer to detach from.
struct dma_buf_attachment *attach
[in] attachment to be detached; it is freed after this call.

Description

Clean up a device attachment obtained by calling dma_buf_attach().

int dma_buf_pin(struct dma_buf_attachment *attach)

Lock down the DMA-buf

Parameters

struct dma_buf_attachment *attach
[in] attachment which should be pinned

Return

0 on success, negative error code on failure.

void dma_buf_unpin(struct dma_buf_attachment *attach)

Remove lock from DMA-buf

Parameters

struct dma_buf_attachment *attach
[in] attachment which should be unpinned

struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, enum dma_data_direction direction)

Returns the scatterlist table of the attachment; mapped into _device_ address space. Is a wrapper for map_dma_buf() of the dma_buf_ops.

Parameters

struct dma_buf_attachment *attach
[in] attachment whose scatterlist is to be returned
enum dma_data_direction direction
[in] direction of DMA transfer

Description

Returns sg_table containing the scatterlist to be returned; returns ERR_PTR on error. May return -EINTR if it is interrupted by a signal.

A mapping must be unmapped by using dma_buf_unmap_attachment(). Note that the underlying backing storage is pinned for as long as a mapping exists, therefore users/importers should not hold onto a mapping for undue amounts of time.

void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, struct sg_table *sg_table, enum dma_data_direction direction)

unmaps and decreases usecount of the buffer; might deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of dma_buf_ops.

Parameters

struct dma_buf_attachment *attach
[in] attachment to unmap buffer from
struct sg_table *sg_table
[in] scatterlist info of the buffer to unmap
enum dma_data_direction direction
[in] direction of DMA transfer

Description

This unmaps a DMA mapping for attach obtained by dma_buf_map_attachment().

void dma_buf_move_notify(struct dma_buf *dmabuf)

notify attachments that DMA-buf is moving

Parameters

struct dma_buf *dmabuf
[in] buffer which is moving

Description

Informs all attachments that they need to destroy and recreate all their mappings.

int dma_buf_begin_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction direction)

Must be called before accessing a dma_buf from the cpu in the kernel context. Calls begin_cpu_access to allow exporter-specific preparations. Coherency is only guaranteed in the specified range for the specified access direction.

Parameters

struct dma_buf *dmabuf
[in] buffer to prepare cpu access for.
enum dma_data_direction direction
[in] direction of CPU access.

Description

After the cpu access is complete the caller should call dma_buf_end_cpu_access(). Only when cpu access is bracketed by both calls is it guaranteed to be coherent with other DMA access.

Can return negative error values, returns 0 on success.

int dma_buf_end_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction direction)

Must be called after accessing a dma_buf from the cpu in the kernel context. Calls end_cpu_access to allow exporter-specific actions. Coherency is only guaranteed in the specified range for the specified access direction.

Parameters

struct dma_buf *dmabuf
[in] buffer to complete cpu access for.
enum dma_data_direction direction
[in] direction of CPU access.

Description

This terminates CPU access started with dma_buf_begin_cpu_access().

Can return negative error values, returns 0 on success.

int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma, unsigned long pgoff)

Set up a userspace mmap with the given vma

Parameters

struct dma_buf *dmabuf
[in] buffer that should back the vma
struct vm_area_struct *vma
[in] vma for the mmap
unsigned long pgoff
[in] offset in pages where this mmap should start within the dma-buf buffer.

Description

This function adjusts the passed-in vma so that it points at the file of the dma_buf operation. It also adjusts the starting pgoff and does bounds checking on the size of the vma. Then it calls the exporter’s mmap function to set up the mapping.

Can return negative error values, returns 0 on success.

void *dma_buf_vmap(struct dma_buf *dmabuf)

Create virtual mapping for the buffer object into kernel address space. Same restrictions as for vmap and friends apply.

Parameters

struct dma_buf *dmabuf
[in] buffer to vmap

Description

This call may fail due to lack of virtual mapping address space. These calls are optional in drivers. The intended use for them is for mapping objects linear in kernel space for high use objects. Please attempt to use kmap/kunmap before thinking about these interfaces.

Returns NULL on error.

void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)

Unmap a vmap obtained by dma_buf_vmap.

Parameters

struct dma_buf *dmabuf
[in] buffer to vunmap
void *vaddr
[in] vmap to vunmap

struct dma_buf_ops

operations possible on struct dma_buf

Definition

struct dma_buf_ops {
    bool cache_sgt_mapping;
    int (*attach)(struct dma_buf *, struct dma_buf_attachment *);
    void (*detach)(struct dma_buf *, struct dma_buf_attachment *);
    int (*pin)(struct dma_buf_attachment *attach);
    void (*unpin)(struct dma_buf_attachment *attach);
    struct sg_table * (*map_dma_buf)(struct dma_buf_attachment *, enum dma_data_direction);
    void (*unmap_dma_buf)(struct dma_buf_attachment *, struct sg_table *, enum dma_data_direction);
    void (*release)(struct dma_buf *);
    int (*begin_cpu_access)(struct dma_buf *, enum dma_data_direction);
    int (*end_cpu_access)(struct dma_buf *, enum dma_data_direction);
    int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
    void *(*vmap)(struct dma_buf *);
    void (*vunmap)(struct dma_buf *, void *vaddr);
};

Members

cache_sgt_mapping
If true the framework will cache the first mapping made for each attachment. This avoids creating mappings for attachments multiple times.
attach

This is called from dma_buf_attach() to make sure that a given dma_buf_attachment.dev can access the provided dma_buf. Exporters which support buffer objects in special locations like VRAM or device-specific carveout areas should check whether the buffer could be moved to system memory (or directly accessed by the provided device), and otherwise need to fail the attach operation.

The exporter should also in general check whether the current allocation fulfills the DMA constraints of the new device. If this is not the case, and the allocation cannot be moved, it should also fail the attach operation.

Any exporter-private housekeeping data can be stored in the dma_buf_attachment.priv pointer.

This callback is optional.

Returns:

0 on success, negative error code on failure. It might return -EBUSY to signal that backing storage is already allocated and incompatible with the requirements of the requesting device.

detach

This is called by dma_buf_detach() to release a dma_buf_attachment. Provided so that exporters can clean up any housekeeping for a dma_buf_attachment.

This callback is optional.

pin

This is called by dma_buf_pin and lets the exporter know that theDMA-buf can’t be moved any more.

This is called with the dmabuf->resv object locked and is mutually exclusive with cache_sgt_mapping.

This callback is optional and should only be used in limited use cases like scanout and not for temporary pin operations.

Returns:

0 on success, negative error code on failure.

unpin

This is called by dma_buf_unpin and lets the exporter know that theDMA-buf can be moved again.

This is called with the dmabuf->resv object locked and is mutually exclusive with cache_sgt_mapping.

This callback is optional.

map_dma_buf

This is called by dma_buf_map_attachment() and is used to map a shared dma_buf into device address space, and it is mandatory. It can only be called if attach has been called successfully.

This call may sleep, e.g. when the backing storage first needs to be allocated, or moved to a location suitable for all currently attached devices.

Note that any specific buffer attributes required for this function should get added to device_dma_parameters accessible via device.dma_params from the dma_buf_attachment. The attach callback should also check these constraints.

If this is being called for the first time, the exporter can now choose to scan through the list of attachments for this buffer, collate the requirements of the attached devices, and choose an appropriate backing storage for the buffer.

Based on enum dma_data_direction, it might be possible to have multiple users accessing at the same time (for reading, maybe), or any other kind of sharing that the exporter might wish to make available to buffer-users.

This is always called with the dmabuf->resv object locked when the dynamic_mapping flag is true.

Returns:

A sg_table scatter list of the backing storage of the DMA buffer, already mapped into the device address space of the device attached with the provided dma_buf_attachment.

On failure, returns a negative error value wrapped into a pointer. May also return -EINTR when a signal was received while being blocked.

unmap_dma_buf
This is called by dma_buf_unmap_attachment() and should unmap and release the sg_table allocated in map_dma_buf, and it is mandatory. For static dma_buf handling this might also unpin the backing storage if this is the last mapping of the DMA buffer.
release
Called after the last dma_buf_put to release the dma_buf, and mandatory.
begin_cpu_access

This is called from dma_buf_begin_cpu_access() and allows the exporter to ensure that the memory is actually available for cpu access - the exporter might need to allocate or swap-in and pin the backing storage. The exporter also needs to ensure that cpu access is coherent for the access direction. The direction can be used by the exporter to optimize the cache flushing, i.e. access with a different direction (read instead of write) might return stale or even bogus data (e.g. when the exporter needs to copy the data to temporary storage).

This callback is optional.

FIXME: This is both called through the DMA_BUF_IOCTL_SYNC command from userspace (where storage shouldn’t be pinned to avoid handing de facto mlock rights to userspace) and for the kernel-internal users of the various kmap interfaces, where the backing storage must be pinned to guarantee that the atomic kmap calls can succeed. Since there are no in-kernel users of the kmap interfaces yet this isn’t a real problem.

Returns:

0 on success or a negative error code on failure. This can for example fail when the backing storage can’t be allocated. Can also return -ERESTARTSYS or -EINTR when the call has been interrupted and needs to be restarted.

end_cpu_access

This is called from dma_buf_end_cpu_access() when the importer is done accessing the CPU. The exporter can use this to flush caches and unpin any resources pinned in begin_cpu_access. The result of any dma_buf kmap calls after end_cpu_access is undefined.

This callback is optional.

Returns:

0 on success or a negative error code on failure. Can return -ERESTARTSYS or -EINTR when the call has been interrupted and needs to be restarted.

mmap

This callback is used by the dma_buf_mmap() function.

Note that the mapping needs to be incoherent, userspace is expected to bracket CPU access using the DMA_BUF_IOCTL_SYNC interface.

Because dma-buf buffers have an invariant size over their lifetime, the dma-buf core checks whether a vma is too large and rejects such mappings. The exporter hence does not need to duplicate this check.

If an exporter needs to manually flush caches and hence needs to fake coherency for mmap support, it needs to be able to zap all the ptes pointing at the backing storage. Now linux mm needs a struct address_space associated with the struct file stored in vma->vm_file to do that with the function unmap_mapping_range. But the dma_buf framework only backs every dma_buf fd with the anon_file struct file, i.e. all dma_bufs share the same file.

Hence exporters need to set up their own file (and address_space) association by setting vma->vm_file and adjusting vma->vm_pgoff in the dma_buf mmap callback. In the specific case of a gem driver the exporter could use the shmem file already provided by gem (and set vm_pgoff = 0). Exporters can then zap ptes by unmapping the corresponding range of the struct address_space associated with their own file.

This callback is optional.

Returns:

0 on success or a negative error code on failure.

vmap
[optional] creates a virtual mapping for the buffer into kernel address space. Same restrictions as for vmap and friends apply.
vunmap
[optional] unmaps a vmap from the buffer

struct dma_buf

shared buffer object

Definition

struct dma_buf {
    size_t size;
    struct file *file;
    struct list_head attachments;
    const struct dma_buf_ops *ops;
    struct mutex lock;
    unsigned vmapping_counter;
    void *vmap_ptr;
    const char *exp_name;
    const char *name;
    spinlock_t name_lock;
    struct module *owner;
    struct list_head list_node;
    void *priv;
    struct dma_resv *resv;
    wait_queue_head_t poll;
    struct dma_buf_poll_cb_t {
        struct dma_fence_cb cb;
        wait_queue_head_t *poll;
        __poll_t active;
    } cb_excl, cb_shared;
};

Members

size
size of the buffer
file
file pointer used for sharing buffers across, and for refcounting.
attachments
list of dma_buf_attachment that denotes all devices attached, protected by dma_resv lock.
ops
dma_buf_ops associated with this buffer object.
lock
used internally to serialize list manipulation, attach/detach and vmap/unmap
vmapping_counter
used internally to refcnt the vmaps
vmap_ptr
the current vmap ptr if vmapping_counter > 0
exp_name
name of the exporter; useful for debugging.
name
userspace-provided name; useful for accounting and debugging, protected by resv.
owner
pointer to exporter module; used for refcounting when exporter is akernel module.
list_node
node for dma_buf accounting and debugging.
priv
exporter specific private data for this buffer object.
resv
reservation object linked to this dma-buf
poll
for userspace poll support
cb_excl
for userspace poll support
cb_shared
for userspace poll support

Description

This represents a shared buffer, created by calling dma_buf_export(). The userspace representation is a normal file descriptor, which can be created by calling dma_buf_fd().

Shared dma buffers are reference counted using dma_buf_put() and get_dma_buf().

Device DMA access is handled by the separate struct dma_buf_attachment.

struct dma_buf_attach_ops

importer operations for an attachment

Definition

struct dma_buf_attach_ops {
    bool allow_peer2peer;
    void (*move_notify)(struct dma_buf_attachment *attach);
};

Members

allow_peer2peer
If this is set to true the importer must be able to handle peer resources without struct pages.
move_notify

[optional] notification that the DMA-buf is moving

If this callback is provided the framework can avoid pinning the backing store while mappings exist.

This callback is called with the lock of the reservation object associated with the dma_buf held and the mapping function must be called with this lock held as well. This makes sure that no mapping is created concurrently with an ongoing move operation.

Mappings stay valid and are not directly affected by this callback. But the DMA-buf can now be in a different physical location, so all mappings should be destroyed and re-created as soon as possible.

New mappings can be created after this callback returns, and will point to the new location of the DMA-buf.

Description

Attachment operations implemented by the importer.

struct dma_buf_attachment

holds device-buffer attachment data

Definition

struct dma_buf_attachment {
    struct dma_buf *dmabuf;
    struct device *dev;
    struct list_head node;
    struct sg_table *sgt;
    enum dma_data_direction dir;
    bool peer2peer;
    const struct dma_buf_attach_ops *importer_ops;
    void *importer_priv;
    void *priv;
};

Members

dmabuf
buffer for this attachment.
dev
device attached to the buffer.
node
list of dma_buf_attachment, protected by dma_resv lock of the dmabuf.
sgt
cached mapping.
dir
direction of cached mapping.
peer2peer
true if the importer can handle peer resources without pages.
importer_ops
importer operations for this attachment; if provided, dma_buf_map/unmap_attachment() must be called with the dma_resv lock held.
importer_priv
importer specific attachment data.
priv
exporter specific attachment data.

Description

This structure holds the attachment information between the dma_buf buffer and its user device(s). The list contains one attachment struct per device attached to the buffer.

An attachment is created by calling dma_buf_attach(), and released again by calling dma_buf_detach(). The DMA mapping itself needed to initiate a transfer is created by dma_buf_map_attachment() and freed again by calling dma_buf_unmap_attachment().

struct dma_buf_export_info

holds information needed to export a dma_buf

Definition

struct dma_buf_export_info {
    const char *exp_name;
    struct module *owner;
    const struct dma_buf_ops *ops;
    size_t size;
    int flags;
    struct dma_resv *resv;
    void *priv;
};

Members

exp_name
name of the exporter - useful for debugging.
owner
pointer to exporter module - used for refcounting kernel module
ops
Attach allocator-defined dma buf ops to the new buffer
size
Size of the buffer
flags
mode flags for the file
resv
reservation-object, NULL to allocate default one
priv
Attach private data of allocator to this buffer

Description

This structure holds the information required to export the buffer. Used with dma_buf_export() only.

DEFINE_DMA_BUF_EXPORT_INFO(name)

helper macro for exporters

Parameters

name
export-info name

Description

DEFINE_DMA_BUF_EXPORT_INFO macro defines the struct dma_buf_export_info, zeroes it out and pre-populates exp_name in it.
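A sketch of typical exporter usage, tying the macro to dma_buf_export() and dma_buf_fd(); my_buf_ops, my_priv and BUF_SIZE are illustrative names, and error handling is abbreviated:

```c
/* Exporter-side sketch (kernel code). */
DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
struct dma_buf *dmabuf;
int fd;

exp_info.ops = &my_buf_ops;   /* struct dma_buf_ops implemented by the driver */
exp_info.size = BUF_SIZE;
exp_info.flags = O_CLOEXEC;   /* honour userspace's close-on-exec request */
exp_info.priv = my_priv;      /* driver-private buffer object */

dmabuf = dma_buf_export(&exp_info);       /* wrap private buffer object */
fd = dma_buf_fd(dmabuf, exp_info.flags);  /* hand the fd to userspace   */
```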

void get_dma_buf(struct dma_buf *dmabuf)

convenience wrapper for get_file.

Parameters

struct dma_buf *dmabuf
[in] pointer to dma_buf

Description

Increments the reference count on the dma-buf, needed by drivers that need to create additional references to the dmabuf on the kernel side. For example, an exporter that needs to keep a dmabuf ptr so that subsequent exports don’t create a new dmabuf.

bool dma_buf_is_dynamic(struct dma_buf *dmabuf)

check if a DMA-buf uses dynamic mappings.

Parameters

structdma_buf*dmabuf
the DMA-buf to check

Description

Returns true if a DMA-buf exporter wants to be called with the dma_resv locked for the map/unmap callbacks, false if it doesn’t want to be called with the lock held.

bool dma_buf_attachment_is_dynamic(struct dma_buf_attachment *attach)

check if a DMA-buf attachment uses dynamic mappings

Parameters

structdma_buf_attachment*attach
the DMA-buf attachment to check

Description

Returns true if a DMA-buf importer wants to call the map/unmap functions with the dma_resv lock held.

Reservation Objects

The reservation object provides a mechanism to manage shared and exclusive fences associated with a buffer. A reservation object can have attached one exclusive fence (normally associated with write operations) or N shared fences (read operations). The RCU mechanism is used to protect read access to fences from locked write-side updates.

void dma_resv_init(struct dma_resv *obj)

initialize a reservation object

Parameters

struct dma_resv *obj
the reservation object

void dma_resv_fini(struct dma_resv *obj)

destroys a reservation object

Parameters

struct dma_resv *obj
the reservation object

int dma_resv_reserve_shared(struct dma_resv *obj, unsigned int num_fences)

Reserve space to add shared fences to a dma_resv.

Parameters

struct dma_resv *obj
reservation object
unsigned int num_fences
number of fences we want to add

Description

Should be called before dma_resv_add_shared_fence(). Must be called with obj->lock held.

RETURNS
Zero for success, or -errno

void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence)

Add a fence to a shared slot

Parameters

struct dma_resv *obj
the reservation object
struct dma_fence *fence
the shared fence to add

Description

Add a fence to a shared slot. The obj->lock must be held, and dma_resv_reserve_shared() must have been called.
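The reserve-then-add pairing can be sketched as follows; error handling is abbreviated and the obj/fence parameters are illustrative:

```c
/* Sketch: publish a read (shared) fence on a buffer. The slot must be
 * reserved under the same lock hold as the add. */
int publish_read_fence(struct dma_resv *obj, struct dma_fence *fence)
{
	int ret;

	ret = dma_resv_lock(obj, NULL);
	if (ret)
		return ret;
	ret = dma_resv_reserve_shared(obj, 1);	/* reserve one slot first */
	if (!ret)
		dma_resv_add_shared_fence(obj, fence);
	dma_resv_unlock(obj);
	return ret;
}
```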

void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence)

Add an exclusive fence.

Parameters

struct dma_resv *obj
the reservation object
struct dma_fence *fence
the exclusive fence to add

Description

Add a fence to the exclusive slot. The obj->lock must be held.

int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)

Copy all fences from src to dst.

Parameters

struct dma_resv *dst
the destination reservation object
struct dma_resv *src
the source reservation object

Description

Copy all fences from src to dst. The dst->lock must be held.

int dma_resv_get_fences_rcu(struct dma_resv *obj, struct dma_fence **pfence_excl, unsigned *pshared_count, struct dma_fence ***pshared)

Get an object's shared and exclusive fences without update side lock held

Parameters

struct dma_resv *obj
the reservation object
struct dma_fence **pfence_excl
the returned exclusive fence (or NULL)
unsigned *pshared_count
the number of shared fences returned
struct dma_fence ***pshared
the array of shared fence ptrs returned (array is krealloc'd to the required size, and must be freed by caller)

Description

Retrieve all fences from the reservation object. If the pointer for the exclusive fence is not specified, the fence is put into the array of the shared fences as well. Returns either zero or -ENOMEM.
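The caller owns references to everything returned, so a typical use looks like this sketch (names illustrative, error handling abbreviated):

```c
/* Sketch: snapshot all fences without holding obj->lock, then
 * drop the references the snapshot took. */
struct dma_fence *excl;
struct dma_fence **shared;
unsigned count, i;
int ret;

ret = dma_resv_get_fences_rcu(obj, &excl, &count, &shared);
if (ret)
	return ret;			/* -ENOMEM */

/* ... inspect or wait on the snapshotted fences ... */

for (i = 0; i < count; i++)
	dma_fence_put(shared[i]);
kfree(shared);				/* the krealloc'd array */
dma_fence_put(excl);			/* dma_fence_put(NULL) is a no-op */
```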

long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr, unsigned long timeout)

Wait on a reservation object's shared and/or exclusive fences.

Parameters

struct dma_resv *obj
the reservation object
bool wait_all
if true, wait on all fences, else wait on just the exclusive fence
bool intr
if true, do an interruptible wait
unsigned long timeout
timeout value in jiffies, or zero to return immediately

Description

RETURNS
Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or greater than zero on success.

bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)

Test if a reservation object's fences have been signaled.

Parameters

struct dma_resv *obj
the reservation object
bool test_all
if true, test all fences, otherwise only test the exclusive fence

Description

RETURNS
true if all fences signaled, else false

struct dma_resv_list

a list of shared fences

Definition

struct dma_resv_list {
    struct rcu_head rcu;
    u32 shared_count, shared_max;
    struct dma_fence __rcu *shared[];
};

Members

rcu
for internal use
shared_count
number of fences in the shared table
shared_max
for growing the shared fence table
shared
shared fence table

struct dma_resv

a reservation object manages fences for a buffer

Definition

struct dma_resv {
    struct ww_mutex lock;
    seqcount_t seq;
    struct dma_fence __rcu *fence_excl;
    struct dma_resv_list __rcu *fence;
};

Members

lock
update side lock
seq
sequence count for managing RCU read-side synchronization
fence_excl
the exclusive fence, if there is one currently
fence
list of current shared fences
struct dma_resv_list *dma_resv_get_list(struct dma_resv *obj)

get the reservation object's shared fence list, with update-side lock held

Parameters

struct dma_resv *obj
the reservation object

Description

Returns the shared fence list. Does NOT take references to the fences. The obj->lock must be held.

int dma_resv_lock(struct dma_resv *obj, struct ww_acquire_ctx *ctx)

lock the reservation object

Parameters

struct dma_resv *obj
the reservation object
struct ww_acquire_ctx *ctx
the locking context

Description

Locks the reservation object for exclusive access and modification. Note that the lock is only against other writers; readers will run concurrently with a writer under RCU. The seqlock is used to notify readers if they overlap with a writer.

As the reservation object may be locked by multiple parties in an undefined order, a #ww_acquire_ctx is passed to unwind if a cycle is detected. See ww_mutex_lock() and ww_acquire_init(). A reservation object may be locked by itself by passing NULL as ctx.
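The wound/wait dance for multiple buffers can be sketched as follows; a and b are illustrative struct dma_resv pointers, and error handling is abbreviated:

```c
/* Sketch: lock two reservation objects with deadlock backoff. On
 * -EDEADLK, drop what we hold, take the contended lock on the
 * slowpath, and retry the rest in the new order. */
struct ww_acquire_ctx ctx;
int ret;

ww_acquire_init(&ctx, &reservation_ww_class);

ret = dma_resv_lock(a, &ctx);
if (!ret) {
	ret = dma_resv_lock(b, &ctx);
	if (ret == -EDEADLK) {
		dma_resv_unlock(a);
		dma_resv_lock_slow(b, &ctx);	/* sleep until b is free */
		ret = dma_resv_lock(a, &ctx);	/* then retake a */
	}
}
ww_acquire_done(&ctx);

/* ... update fences ..., then unlock both and ww_acquire_fini(&ctx) */
```

For a single object, passing NULL as ctx skips all of this, at the cost of no deadlock detection.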

int dma_resv_lock_interruptible(struct dma_resv *obj, struct ww_acquire_ctx *ctx)

lock the reservation object

Parameters

struct dma_resv *obj
the reservation object
struct ww_acquire_ctx *ctx
the locking context

Description

Locks the reservation object interruptibly for exclusive access and modification. Note that the lock is only against other writers; readers will run concurrently with a writer under RCU. The seqlock is used to notify readers if they overlap with a writer.

As the reservation object may be locked by multiple parties in an undefined order, a #ww_acquire_ctx is passed to unwind if a cycle is detected. See ww_mutex_lock() and ww_acquire_init(). A reservation object may be locked by itself by passing NULL as ctx.

void dma_resv_lock_slow(struct dma_resv *obj, struct ww_acquire_ctx *ctx)

slowpath lock the reservation object

Parameters

struct dma_resv *obj
the reservation object
struct ww_acquire_ctx *ctx
the locking context

Description

Acquires the reservation object after a die case (ww_mutex deadlock backoff). This function will sleep until the lock becomes available. See dma_resv_lock() as well.

int dma_resv_lock_slow_interruptible(struct dma_resv *obj, struct ww_acquire_ctx *ctx)

slowpath lock the reservation object, interruptible

Parameters

struct dma_resv *obj
the reservation object
struct ww_acquire_ctx *ctx
the locking context

Description

Acquires the reservation object interruptibly after a die case (ww_mutex deadlock backoff). This function will sleep until the lock becomes available. See dma_resv_lock_interruptible() as well.

bool dma_resv_trylock(struct dma_resv *obj)

trylock the reservation object

Parameters

struct dma_resv *obj
the reservation object

Description

Tries to lock the reservation object for exclusive access and modification. Note that the lock is only against other writers; readers will run concurrently with a writer under RCU. The seqlock is used to notify readers if they overlap with a writer.

Also note that since no context is provided, no deadlock protection is possible.

Returns true if the lock was acquired, false otherwise.

bool dma_resv_is_locked(struct dma_resv *obj)

is the reservation object locked

Parameters

struct dma_resv *obj
the reservation object

Description

Returns true if the mutex is locked, false if unlocked.

struct ww_acquire_ctx *dma_resv_locking_ctx(struct dma_resv *obj)

returns the context used to lock the object

Parameters

struct dma_resv *obj
the reservation object

Description

Returns the context used to lock a reservation object or NULL if no contextwas used or the object is not locked at all.

void dma_resv_unlock(struct dma_resv *obj)

unlock the reservation object

Parameters

struct dma_resv *obj
the reservation object

Description

Unlocks the reservation object following exclusive access.

struct dma_fence *dma_resv_get_excl(struct dma_resv *obj)

get the reservation object's exclusive fence, with update-side lock held

Parameters

struct dma_resv *obj
the reservation object

Description

Returns the exclusive fence (if any). Does NOT take a reference. Writers must hold obj->lock; readers may only hold an RCU read-side lock.

RETURNS
The exclusive fence or NULL

struct dma_fence *dma_resv_get_excl_rcu(struct dma_resv *obj)

get the reservation object's exclusive fence, without lock held.

Parameters

struct dma_resv *obj
the reservation object

Description

If there is an exclusive fence, this atomically increments its reference count and returns it.

RETURNS
The exclusive fence or NULL if none

DMA Fences

DMA fences, represented by struct dma_fence, are the kernel internal synchronization primitive for DMA operations like GPU rendering, video encoding/decoding, or displaying buffers on a screen.

A fence is initialized using dma_fence_init() and completed using dma_fence_signal(). Fences are associated with a context, allocated through dma_fence_context_alloc(), and all fences on the same context are fully ordered.

Since the purpose of fences is to facilitate cross-device and cross-application synchronization, there are multiple ways to use one:

  • Individual fences can be exposed as a sync_file, accessed as a file descriptor from userspace, created by calling sync_file_create(). This is called explicit fencing, since userspace passes around explicit synchronization points.
  • Some subsystems also have their own explicit fencing primitives, like drm_syncobj. Compared to a sync_file, a drm_syncobj allows the underlying fence to be updated.
  • Then there's also implicit fencing, where the synchronization points are implicitly passed around as part of shared dma_buf instances. Such implicit fences are stored in struct dma_resv through the dma_buf.resv pointer.

DMA Fences Functions Reference

struct dma_fence *dma_fence_get_stub(void)

return a signaled fence

Parameters

void
no arguments

Description

Return a stub fence which is already signaled.

u64 dma_fence_context_alloc(unsigned num)

allocate an array of fence contexts

Parameters

unsigned num
amount of contexts to allocate

Description

This function will return the first index of the number of fence contexts allocated. The fence context is used for setting dma_fence.context to a unique number by passing the context to dma_fence_init().

int dma_fence_signal_locked(struct dma_fence *fence)

signal completion of a fence

Parameters

struct dma_fence *fence
the fence to signal

Description

Signal completion for software callbacks on a fence. This will unblock dma_fence_wait() calls and run all the callbacks added with dma_fence_add_callback(). Can be called multiple times, but since a fence can only go from the unsignaled to the signaled state and not back, it will only be effective the first time.

Unlike dma_fence_signal(), this function must be called with dma_fence.lock held.

Returns 0 on success and a negative error value when fence has been signalled already.

int dma_fence_signal(struct dma_fence *fence)

signal completion of a fence

Parameters

struct dma_fence *fence
the fence to signal

Description

Signal completion for software callbacks on a fence. This will unblock dma_fence_wait() calls and run all the callbacks added with dma_fence_add_callback(). Can be called multiple times, but since a fence can only go from the unsignaled to the signaled state and not back, it will only be effective the first time.

Returns 0 on success and a negative error value when fence has been signalled already.

signed long dma_fence_wait_timeout(struct dma_fence *fence, bool intr, signed long timeout)

sleep until the fence gets signaled or until timeout elapses

Parameters

struct dma_fence *fence
the fence to wait on
bool intr
if true, do an interruptible wait
signed long timeout
timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT

Description

Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the remaining timeout in jiffies on success. Other error values may be returned on custom implementations.

Performs a synchronous wait on this fence. It is assumed the caller directly or indirectly (buf-mgr between reservation and committing) holds a reference to the fence, otherwise the fence might be freed before return, resulting in undefined behavior.

See also dma_fence_wait() and dma_fence_wait_any_timeout().

void dma_fence_release(struct kref *kref)

default release function for fences

Parameters

struct kref *kref
dma_fence.refcount

Description

This is the default release function for dma_fence. Drivers shouldn't call this directly, but instead call dma_fence_put().

void dma_fence_free(struct dma_fence *fence)

default release function for dma_fence.

Parameters

struct dma_fence *fence
fence to release

Description

This is the default implementation for dma_fence_ops.release. It calls kfree_rcu() on fence.

void dma_fence_enable_sw_signaling(struct dma_fence *fence)

enable signaling on fence

Parameters

struct dma_fence *fence
the fence to enable

Description

This will request for sw signaling to be enabled, to make the fence complete as soon as possible. This calls dma_fence_ops.enable_signaling internally.

int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb, dma_fence_func_t func)

add a callback to be called when the fence is signaled

Parameters

struct dma_fence *fence
the fence to wait on
struct dma_fence_cb *cb
the callback to register
dma_fence_func_t func
the function to call

Description

cb will be initialized by dma_fence_add_callback(); no initialization by the caller is required. Any number of callbacks can be registered to a fence, but a callback can only be registered to one fence at a time.

Note that the callback can be called from an atomic context. If fence is already signaled, this function will return -ENOENT (and not call the callback).

Add a software callback to the fence. The same restrictions apply to refcount as for dma_fence_wait(); however, the caller doesn't need to keep a refcount to fence after dma_fence_add_callback() has returned: when software access is enabled, the creator of the fence is required to keep the fence alive until after it signals with dma_fence_signal(). The callback itself can be called from irq context.

Returns 0 in case of success, -ENOENT if the fence is already signaled and -EINVAL in case of error.
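The usual pattern is to embed struct dma_fence_cb in a larger driver structure and recover it with container_of() in the callback. A hedged sketch, with my_waiter and my_fence_cb as illustrative names:

```c
/* Sketch: wake a completion when the fence signals. The callback may
 * run from irq context, so it must only do atomic-safe work. */
struct my_waiter {
	struct dma_fence_cb cb;
	struct completion done;
};

static void my_fence_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
{
	struct my_waiter *w = container_of(cb, struct my_waiter, cb);

	complete(&w->done);
}

static void wait_for_fence(struct dma_fence *fence, struct my_waiter *w)
{
	init_completion(&w->done);
	if (dma_fence_add_callback(fence, &w->cb, my_fence_cb))
		return;			/* -ENOENT: already signaled */
	wait_for_completion(&w->done);
}
```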

int dma_fence_get_status(struct dma_fence *fence)

returns the status upon completion

Parameters

struct dma_fence *fence
the dma_fence to query

Description

This wraps dma_fence_get_status_locked() to return the error status condition on a signaled fence. See dma_fence_get_status_locked() for more details.

Returns 0 if the fence has not yet been signaled, 1 if the fence has been signaled without an error condition, or a negative error code if the fence has been completed in err.

bool dma_fence_remove_callback(struct dma_fence *fence, struct dma_fence_cb *cb)

remove a callback from the signaling list

Parameters

struct dma_fence *fence
the fence to wait on
struct dma_fence_cb *cb
the callback to remove

Description

Remove a previously queued callback from the fence. This function returns true if the callback is successfully removed, or false if the fence has already been signaled.

WARNING:
Cancelling a callback should only be done if you really know what you're doing, since deadlocks and race conditions could occur all too easily. For this reason, it should only ever be done on hardware lockup recovery, with a reference held to the fence.

Behaviour is undefined if cb has not been added to fence using dma_fence_add_callback() beforehand.

signed long dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)

default sleep until the fence gets signaled or until timeout elapses

Parameters

struct dma_fence *fence
the fence to wait on
bool intr
if true, do an interruptible wait
signed long timeout
timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT

Description

Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the remaining timeout in jiffies on success. If timeout is zero, the value one is returned if the fence is already signaled, for consistency with other functions taking a jiffies timeout.

signed long dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count, bool intr, signed long timeout, uint32_t *idx)

sleep until any fence gets signaled or until timeout elapses

Parameters

struct dma_fence **fences
array of fences to wait on
uint32_t count
number of fences to wait on
bool intr
if true, do an interruptible wait
signed long timeout
timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
uint32_t *idx
used to store the first signaled fence index, meaningful only on positive return

Description

Returns -EINVAL on custom fence wait implementation, -ERESTARTSYS if interrupted, 0 if the wait timed out, or the remaining timeout in jiffies on success.

Synchronously waits for the first fence in the array to be signaled. The caller needs to hold a reference to all fences in the array, otherwise a fence might be freed before return, resulting in undefined behavior.

See also dma_fence_wait() and dma_fence_wait_timeout().

void dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops, spinlock_t *lock, u64 context, u64 seqno)

Initialize a custom fence.

Parameters

struct dma_fence *fence
the fence to initialize
const struct dma_fence_ops *ops
the dma_fence_ops for operations on this fence
spinlock_t *lock
the irqsafe spinlock to use for locking this fence
u64 context
the execution context this fence is run on
u64 seqno
a linearly increasing sequence number for this context

Description

Initializes an allocated fence. The caller doesn't have to keep its refcount after committing with this fence, but it will need to hold a refcount again if dma_fence_ops.enable_signaling gets called.

context and seqno are used for easy comparison between fences, allowing to check which fence is later by simply using dma_fence_later().
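A minimal fence provider needs only the two mandatory ops callbacks, a context from dma_fence_context_alloc(), and a per-context seqno. The following sketch uses illustrative my_* names; a real driver would embed the fence in its own job structure rather than allocate a bare struct dma_fence:

```c
/* Sketch of a minimal fence provider; my_* names are illustrative. */
static const char *my_get_driver_name(struct dma_fence *f)
{
	return "my_driver";
}

static const char *my_get_timeline_name(struct dma_fence *f)
{
	return "my_timeline";
}

static const struct dma_fence_ops my_fence_ops = {
	.get_driver_name = my_get_driver_name,
	.get_timeline_name = my_get_timeline_name,
	/* no .enable_signaling: signaling is then always enabled */
};

static DEFINE_SPINLOCK(my_fence_lock);
static u64 my_context;	/* from dma_fence_context_alloc(1) at probe time */
static u64 my_seqno;

static struct dma_fence *my_fence_create(void)
{
	struct dma_fence *f = kzalloc(sizeof(*f), GFP_KERNEL);

	if (!f)
		return NULL;
	dma_fence_init(f, &my_fence_ops, &my_fence_lock,
		       my_context, ++my_seqno);
	return f;	/* later: dma_fence_signal(f); dma_fence_put(f); */
}
```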

struct dma_fence

software synchronization primitive

Definition

struct dma_fence {
    spinlock_t *lock;
    const struct dma_fence_ops *ops;
    union {
        struct list_head cb_list;
        ktime_t timestamp;
        struct rcu_head rcu;
    };
    u64 context;
    u64 seqno;
    unsigned long flags;
    struct kref refcount;
    int error;
};

Members

lock
spin_lock_irqsave used for locking
ops
dma_fence_ops associated with this fence
{unnamed_union}
anonymous
cb_list
list of all callbacks to call
timestamp
Timestamp when the fence was signaled.
rcu
used for releasing fence with kfree_rcu
context
execution context this fence belongs to, returned by dma_fence_context_alloc()
seqno
the sequence number of this fence inside the execution context, can be compared to decide which fence would be signaled later.
flags
A mask of DMA_FENCE_FLAG_* defined below
refcount
refcount for this fence
error
Optional, only valid if < 0, must be set before calling dma_fence_signal; indicates that the fence has completed with an error.

Description

The flags member must be manipulated and read using the appropriate atomic ops (bit_*), so taking the spinlock will not be needed most of the time.

DMA_FENCE_FLAG_SIGNALED_BIT - fence is already signaled
DMA_FENCE_FLAG_TIMESTAMP_BIT - timestamp recorded for fence signaling
DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called
DMA_FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the implementer of the fence for its own purposes. Can be used in different ways by different fence implementers, so do not rely on this.

Since atomic bitops are used, this is not guaranteed to be the case. Particularly, if the bit was set, but dma_fence_signal was called right before this bit was set, it would have been able to set the DMA_FENCE_FLAG_SIGNALED_BIT before enable_signaling was called. Adding a check for DMA_FENCE_FLAG_SIGNALED_BIT after setting DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that after dma_fence_signal was called, any enable_signaling call will have either been completed, or never called at all.

struct dma_fence_cb

callback for dma_fence_add_callback()

Definition

struct dma_fence_cb {
    struct list_head node;
    dma_fence_func_t func;
};

Members

node
used by dma_fence_add_callback() to append this struct to fence::cb_list
func
dma_fence_func_t to call

Description

This struct will be initialized by dma_fence_add_callback(); additional data can be passed along by embedding dma_fence_cb in another struct.

struct dma_fence_ops

operations implemented for fence

Definition

struct dma_fence_ops {
    bool use_64bit_seqno;
    const char * (*get_driver_name)(struct dma_fence *fence);
    const char * (*get_timeline_name)(struct dma_fence *fence);
    bool (*enable_signaling)(struct dma_fence *fence);
    bool (*signaled)(struct dma_fence *fence);
    signed long (*wait)(struct dma_fence *fence, bool intr, signed long timeout);
    void (*release)(struct dma_fence *fence);
    void (*fence_value_str)(struct dma_fence *fence, char *str, int size);
    void (*timeline_value_str)(struct dma_fence *fence, char *str, int size);
};

Members

use_64bit_seqno
True if this dma_fence implementation uses 64bit seqno, false otherwise.
get_driver_name

Returns the driver name. This is a callback to allow drivers to compute the name at runtime, without having to store it permanently for each fence, or build a cache of some sort.

This callback is mandatory.

get_timeline_name

Return the name of the context this fence belongs to. This is a callback to allow drivers to compute the name at runtime, without having to store it permanently for each fence, or build a cache of some sort.

This callback is mandatory.

enable_signaling

Enable software signaling of fence.

For fence implementations that have the capability for hw->hw signaling, they can implement this op to enable the necessary interrupts, or insert commands into the cmdstream, etc., to avoid these costly operations for the common case where only hw->hw synchronization is required. This is called in the first dma_fence_wait() or dma_fence_add_callback() path to let the fence implementation know that there is another driver waiting on the signal (ie. hw->sw case).

This function can be called from atomic context, but not from irq context, so normal spinlocks can be used.

A return value of false indicates the fence already passed, or some failure occurred that made it impossible to enable signaling. True indicates successful enabling.

dma_fence.error may be set in enable_signaling, but only when false is returned.

Since many implementations can call dma_fence_signal() even before enable_signaling has been called, there's a race window where dma_fence_signal() might result in the final fence reference being released and its memory freed. To avoid this, implementations of this callback should grab their own reference using dma_fence_get(), to be released when the fence is signalled (through e.g. the interrupt handler).

This callback is optional. If this callback is not present, then the driver must always have signaling enabled.

signaled

Peek whether the fence is signaled, as a fastpath optimization for e.g. dma_fence_wait() or dma_fence_add_callback(). Note that this callback does not need to make any guarantees beyond that a fence once indicated as signaled must always return true from this callback. This callback may return false even if the fence has completed already; in this case information hasn't propagated through the system yet. See also dma_fence_is_signaled().

May set dma_fence.error if returning true.

This callback is optional.

wait

Custom wait implementation, defaults to dma_fence_default_wait() if not set.

The dma_fence_default_wait implementation should work for any fence, as long as enable_signaling works correctly. This hook allows drivers to have an optimized version for the case where a process context is already available, e.g. if enable_signaling for the general case needs to set up a worker thread.

Must return -ERESTARTSYS if the wait is intr = true and the wait was interrupted, and remaining jiffies if the fence has signaled, or 0 if the wait timed out. Can also return other error values on custom implementations, which should be treated as if the fence is signaled. For example a hardware lockup could be reported like that.

This callback is optional.

release
Called on destruction of fence to release additional resources. Can be called from irq context. This callback is optional. If it is NULL, then dma_fence_free() is instead called as the default implementation.
fence_value_str

Callback to fill in free-form debug info specific to this fence, like the sequence number.

This callback is optional.

timeline_value_str
Fills in the current value of the timeline as a string, like the sequence number. Note that the specific fence passed to this function should not matter, drivers should only use it to look up the corresponding timeline structures.

void dma_fence_put(struct dma_fence *fence)

decreases refcount of the fence

Parameters

struct dma_fence *fence
fence to reduce refcount of

struct dma_fence *dma_fence_get(struct dma_fence *fence)

increases refcount of the fence

Parameters

struct dma_fence *fence
fence to increase refcount of

Description

Returns the same fence, with refcount increased by 1.

struct dma_fence *dma_fence_get_rcu(struct dma_fence *fence)

get a fence from a dma_resv_list with rcu read lock

Parameters

struct dma_fence *fence
fence to increase refcount of

Description

Function returns NULL if no refcount could be obtained, or the fence.

struct dma_fence *dma_fence_get_rcu_safe(struct dma_fence __rcu **fencep)

acquire a reference to an RCU tracked fence

Parameters

struct dma_fence __rcu **fencep
pointer to fence to increase refcount of

Description

Function returns NULL if no refcount could be obtained, or the fence. This function handles acquiring a reference to a fence that may be reallocated within the RCU grace period (such as with SLAB_TYPESAFE_BY_RCU), so long as the caller is using RCU on the pointer to the fence.

An alternative mechanism is to employ a seqlock to protect a bunch of fences, such as used by struct dma_resv. When using a seqlock, the seqlock must be taken before and checked after a reference to the fence is acquired (as shown here).

The caller is required to hold the RCU read lock.

bool dma_fence_is_signaled_locked(struct dma_fence *fence)

Return an indication if the fence is signaled yet.

Parameters

struct dma_fence *fence
the fence to check

Description

Returns true if the fence was already signaled, false if not. Since this function doesn't enable signaling, it is not guaranteed to ever return true if dma_fence_add_callback(), dma_fence_wait() or dma_fence_enable_sw_signaling() haven't been called before.

This function requires dma_fence.lock to be held.

See also dma_fence_is_signaled().

bool dma_fence_is_signaled(struct dma_fence *fence)

Return an indication if the fence is signaled yet.

Parameters

struct dma_fence *fence
the fence to check

Description

Returns true if the fence was already signaled, false if not. Since this function doesn't enable signaling, it is not guaranteed to ever return true if dma_fence_add_callback(), dma_fence_wait() or dma_fence_enable_sw_signaling() haven't been called before.

It's recommended for seqno fences to call dma_fence_signal when the operation is complete; it makes it possible to prevent issues from wraparound between time of issue and time of use by checking the return value of this function before calling hardware-specific wait instructions.

See also dma_fence_is_signaled_locked().

bool __dma_fence_is_later(u64 f1, u64 f2, const struct dma_fence_ops *ops)

return if f1 is chronologically later than f2

Parameters

u64 f1
the first fence's seqno
u64 f2
the second fence's seqno from the same context
const struct dma_fence_ops *ops
dma_fence_ops associated with the seqno

Description

Returns true if f1 is chronologically later than f2. Both fences must be from the same context, since a seqno is not common across contexts.

bool dma_fence_is_later(struct dma_fence *f1, struct dma_fence *f2)

return if f1 is chronologically later than f2

Parameters

struct dma_fence *f1
the first fence from the same context
struct dma_fence *f2
the second fence from the same context

Description

Returns true if f1 is chronologically later than f2. Both fences must be from the same context, since a seqno is not re-used across contexts.

struct dma_fence *dma_fence_later(struct dma_fence *f1, struct dma_fence *f2)

return the chronologically later fence

Parameters

struct dma_fence *f1
the first fence from the same context
struct dma_fence *f2
the second fence from the same context

Description

Returns NULL if both fences are signaled, otherwise the fence that would be signaled last. Both fences must be from the same context, since a seqno is not re-used across contexts.

int dma_fence_get_status_locked(struct dma_fence *fence)

returns the status upon completion

Parameters

struct dma_fence *fence
the dma_fence to query

Description

Drivers can supply an optional error status condition before they signal the fence (to indicate whether the fence was completed due to an error rather than success). The value of the status condition is only valid if the fence has been signaled; dma_fence_get_status_locked() first checks the signal state before reporting the error status.

Returns 0 if the fence has not yet been signaled, 1 if the fence has been signaled without an error condition, or a negative error code if the fence has been completed in err.

void dma_fence_set_error(struct dma_fence *fence, int error)

flag an error condition on the fence

Parameters

struct dma_fence *fence
the dma_fence
int error
the error to store

Description

Drivers can supply an optional error status condition before they signal the fence, to indicate that the fence was completed due to an error rather than success. This must be set before signaling (so that the value is visible before any waiters on the signal callback are woken). This helper exists to help catch erroneous setting of #dma_fence.error.
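The set-before-signal ordering can be sketched as follows (job_timed_out and the timeout cause are illustrative):

```c
/* Sketch: flag a timeout before signaling; the ordering matters,
 * since waiters read the error only after seeing the signal. */
if (job_timed_out)
	dma_fence_set_error(fence, -ETIMEDOUT);
dma_fence_signal(fence);

/* waiters can then observe the error via dma_fence_get_status(fence),
 * which returns the negative error code once the fence is signaled */
```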

signed long dma_fence_wait(struct dma_fence *fence, bool intr)

sleep until the fence gets signaled

Parameters

struct dma_fence *fence
the fence to wait on
bool intr
if true, do an interruptible wait

Description

This function will return -ERESTARTSYS if interrupted by a signal, or 0 if the fence was signaled. Other error values may be returned on custom implementations.

Performs a synchronous wait on this fence. It is assumed the caller directly or indirectly holds a reference to the fence, otherwise the fence might be freed before return, resulting in undefined behavior.

See also dma_fence_wait_timeout() and dma_fence_wait_any_timeout().

Seqno Hardware Fences

struct seqno_fence *to_seqno_fence(struct dma_fence *fence)

cast a fence to a seqno_fence

Parameters

struct dma_fence *fence
fence to cast to a seqno_fence

Description

Returns NULL if the fence is not a seqno_fence, or the seqno_fence otherwise.

voidseqno_fence_init(struct seqno_fence * fence, spinlock_t * lock, structdma_buf * sync_buf, uint32_t context, uint32_t seqno_ofs, uint32_t seqno, enum seqno_fence_condition cond, const structdma_fence_ops * ops)

initialize a seqno fence

Parameters

structseqno_fence*fence
seqno_fence to initialize
spinlock_t*lock
pointer to spinlock to use for fence
structdma_buf*sync_buf
buffer containing the memory location to signal on
uint32_tcontext
the execution context this fence is a part of
uint32_tseqno_ofs
the offset withinsync_buf
uint32_tseqno
the sequence # to signal on
enumseqno_fence_conditioncond
fence wait condition
conststructdma_fence_ops*ops
the fence_ops for operations on this seqno fence

Description

This function initializes a struct seqno_fence with passed parameters,and takes a reference on sync_buf which is released on fence destruction.

A seqno_fence is a dma_fence which can complete in software whenenable_signaling is called, but it also completes when(s32)((sync_buf)[seqno_ofs] - seqno) >= 0 is true

The seqno_fence will take a refcount on the sync_buf until it’sdestroyed, but actual lifetime of sync_buf may be longer if one of thecallers take a reference to it.

Certain hardware has instructions to insert this type of wait condition into the command stream, so no intervention from software is needed. This type of fence can be destroyed before it completes; however, a reference on the sync_buf dma-buf can be taken. It is encouraged to re-use the same dma-buf for sync_buf, since mapping or unmapping the sync_buf into the device's VM can be expensive.

It is recommended for creators of seqno_fence to call dma_fence_signal() before destruction. This will prevent possible issues from wraparound at time of issue vs time of check, since users can check dma_fence_is_signaled() before submitting instructions for the hardware to wait on the fence. However, when ops.enable_signaling is not called, this does not have to be done as soon as possible, just before there is any real danger of seqno wraparound.

DMA Fence Array

struct dma_fence_array *dma_fence_array_create(int num_fences, struct dma_fence **fences, u64 context, unsigned seqno, bool signal_on_any)

Create a custom fence array

Parameters

int num_fences
[in] number of fences to add in the array
struct dma_fence **fences
[in] array containing the fences
u64 context
[in] fence context to use
unsigned seqno
[in] sequence number to use
bool signal_on_any
[in] signal on any fence in the array

Description

Allocate a dma_fence_array object and initialize the base fence with dma_fence_init(). In case of error it returns NULL.

The caller should allocate the fences array with num_fences size and fill it with the fences it wants to add to the object. Ownership of this array is taken, and dma_fence_put() is used on each fence on release.

If signal_on_any is true, the fence array signals if any fence in the array signals; otherwise it signals when all fences in the array signal.

bool dma_fence_match_context(struct dma_fence *fence, u64 context)

Check if all fences are from the given context

Parameters

struct dma_fence *fence
[in] fence or fence array
u64 context
[in] fence context to check all fences against

Description

Checks the provided fence or, for a fence array, all fences in the array against the given context. Returns false if any fence is from a different context.

struct dma_fence_array_cb

callback helper for fence array

Definition

struct dma_fence_array_cb {
    struct dma_fence_cb cb;
    struct dma_fence_array *array;
};

Members

cb
fence callback structure for signaling
array
reference to the parent fence array object
struct dma_fence_array

fence to represent an array of fences

Definition

struct dma_fence_array {
    struct dma_fence base;
    spinlock_t lock;
    unsigned num_fences;
    atomic_t num_pending;
    struct dma_fence **fences;
    struct irq_work work;
};

Members

base
fence base class
lock
spinlock for fence handling
num_fences
number of fences in the array
num_pending
fences in the array still pending
fences
array of the fences
work
internal irq_work function
bool dma_fence_is_array(struct dma_fence *fence)

check if a fence is from the array subclass

Parameters

struct dma_fence *fence
fence to test

Description

Return true if it is a dma_fence_array and false otherwise.

struct dma_fence_array *to_dma_fence_array(struct dma_fence *fence)

cast a fence to a dma_fence_array

Parameters

struct dma_fence *fence
fence to cast to a dma_fence_array

Description

Returns NULL if the fence is not a dma_fence_array, or the dma_fence_array otherwise.

DMA Fence uABI/Sync File

struct sync_file *sync_file_create(struct dma_fence *fence)

creates a sync file

Parameters

struct dma_fence *fence
fence to add to the sync_file

Description

Creates a sync_file containing fence. This function acquires an additional reference on fence for the newly-created sync_file, if it succeeds. The sync_file can be released with fput(sync_file->file). Returns the sync_file or NULL in case of error.

struct dma_fence *sync_file_get_fence(int fd)

get the fence related to the sync_file fd

Parameters

int fd
sync_file fd to get the fence from

Description

Ensures fd references a valid sync_file and returns a fence that represents all fences in the sync_file. On error NULL is returned.
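On the importer side, a common pattern (kernel-style pseudocode; variable declarations and the surrounding ioctl plumbing omitted) is to resolve the fd, wait, and then drop the reference the lookup took:

```c
/* Turn a sync_file fd handed in from userspace back into a fence,
 * and wait for it before touching the shared buffer. */
fence = sync_file_get_fence(fd);
if (!fence)
        return -EINVAL; /* not a valid sync_file fd */
ret = dma_fence_wait(fence, true);
dma_fence_put(fence);   /* drop the reference sync_file_get_fence() took */
```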

struct sync_file

sync file to export to the userspace

Definition

struct sync_file {
    struct file *file;
    char user_name[32];
#ifdef CONFIG_DEBUG_FS
    struct list_head sync_file_list;
#endif
    wait_queue_head_t wq;
    unsigned long flags;
    struct dma_fence *fence;
    struct dma_fence_cb cb;
};

Members

file
file representing this fence
user_name
Name of the sync file provided by userspace, for merged fences. Otherwise generated through driver callbacks (in which case the entire array is 0).
sync_file_list
membership in global file list
wq
wait queue for fence signaling
flags
flags for the sync_file
fence
fence with the fences in the sync_file
cb
fence callback information

Description

flags: POLL_ENABLED: whether userspace is currently poll()'ing or not