Dynamic DMA mapping using the generic device

Author:

James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API. For a more gentle introduction to the API (and actual examples), see the Dynamic DMA mapping Guide.

This API is split into two pieces. Part I describes the basic API. Part II describes extensions for supporting non-coherent memory machines. Unless you know that your driver absolutely has to support non-coherent platforms (this is usually only legacy platforms) you should only use the API described in Part I.

Part I - DMA API

To get the DMA API, you must #include <linux/dma-mapping.h>. This provides dma_addr_t and the interfaces described below.

A dma_addr_t can hold any valid DMA address for the platform. It can be given to a device to use as a DMA source or target. A CPU cannot reference a dma_addr_t directly because there may be translation between its physical address space and the DMA address space.

Part Ia - Using large DMA-coherent buffers

void *dma_alloc_coherent(struct device *dev, size_t size,
                         dma_addr_t *dma_handle, gfp_t flag)

Coherent memory is memory for which a write by either the device or the processor can immediately be read by the processor or device without having to worry about caching effects. (You may however need to make sure to flush the processor's write buffers before telling devices to read that memory.)

This routine allocates a region of <size> bytes of coherent memory.

It returns a pointer to the allocated region (in the processor's virtual address space) or NULL if the allocation failed.

It also returns a <dma_handle> which may be cast to an unsigned integer the same width as the bus and given to the device as the DMA address base of the region.

Note: coherent memory can be expensive on some platforms, and the minimum allocation length may be as big as a page, so you should consolidate your requests for coherent memory as much as possible. The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter allows the caller to specify the GFP_ flags (see kmalloc()) for the allocation (the implementation may ignore flags that affect the location of the returned memory, like GFP_DMA).

void dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
                       dma_addr_t dma_handle)

Free a previously allocated region of coherent memory. dev, size and dma_handle must all be the same as those passed into dma_alloc_coherent(). cpu_addr must be the virtual address returned by dma_alloc_coherent().

Note that unlike the sibling allocation call, this routine may only be called with IRQs enabled.
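As an illustrative sketch (the driver state md and the ring/ring_dma fields, as well as the RING_BYTES size, are hypothetical), a typical pattern is to allocate a descriptor ring once and free it on teardown::

        md->ring = dma_alloc_coherent(dev, RING_BYTES, &md->ring_dma, GFP_KERNEL);
        if (!md->ring)
                return -ENOMEM;

        /* program md->ring_dma into the device, access md->ring from the CPU */
        ...

        dma_free_coherent(dev, RING_BYTES, md->ring, md->ring_dma);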

Part Ib - Using small DMA-coherent buffers

To get this part of the DMA API, you must #include <linux/dmapool.h>

Many drivers need lots of small DMA-coherent memory regions for DMA descriptors or I/O buffers. Rather than allocating in units of a page or more using dma_alloc_coherent(), you can use DMA pools. These work much like a struct kmem_cache, except that they use the DMA-coherent allocator, not __get_free_pages(). Also, they understand common hardware constraints for alignment, like queue heads needing to be aligned on N-byte boundaries.
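As a sketch of the calls documented below (the pool name, block size and alignment are arbitrary example values; dma_pool_create() is assumed here as the variant of dma_pool_create_node() that does not take an explicit NUMA node), a driver might manage its descriptors like this::

        struct dma_pool *pool;
        void *desc;
        dma_addr_t desc_dma;

        pool = dma_pool_create("mydev-desc", dev, 64, 8, 0);
        if (!pool)
                return -ENOMEM;

        desc = dma_pool_alloc(pool, GFP_KERNEL, &desc_dma);
        if (desc) {
                /* hand desc_dma to the device, use desc from the CPU */
                dma_pool_free(pool, desc, desc_dma);
        }

        dma_pool_destroy(pool);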

struct dma_pool *dma_pool_create_node(const char *name, struct device *dev, size_t size, size_t align, size_t boundary, int node)

Creates a pool of coherent DMA memory blocks.

Parameters

const char *name

name of pool, for diagnostics

struct device *dev

device that will be doing the DMA

size_t size

size of the blocks in this pool.

size_t align

alignment requirement for blocks; must be a power of two

size_t boundary

returned blocks won't cross this power of two boundary

int node

optional NUMA node to allocate structs 'dma_pool' and 'dma_page' on

Context

not in_interrupt()

Description

Given one of these pools, dma_pool_alloc() may be used to allocate memory. Such memory will all have coherent DMA mappings, accessible by the device and its driver without using cache flushing primitives. The actual size of blocks allocated may be larger than requested because of alignment.

If boundary is nonzero, objects returned from dma_pool_alloc() won't cross that size boundary. This is useful for devices which have addressing restrictions on individual DMA transfers, such as not crossing boundaries of 4KBytes.

Return

a dma allocation pool with the requested characteristics, or NULL if one can't be created.

void dma_pool_destroy(struct dma_pool *pool)

destroys a pool of dma memory blocks.

Parameters

struct dma_pool *pool

dma pool that will be destroyed

Context

!in_interrupt()

Description

Caller guarantees that no more memory from the pool is in use, and that nothing will try to use the pool after this call.

void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags, dma_addr_t *handle)

get a block of coherent memory

Parameters

struct dma_pool *pool

dma pool that will produce the block

gfp_t mem_flags

GFP_* bitmask

dma_addr_t *handle

pointer to dma address of block

Return

the kernel virtual address of a currently unused block, and reports its dma address through the handle. If such a memory block can't be allocated, NULL is returned.

void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)

put block back into dma pool

Parameters

struct dma_pool *pool

the dma pool holding the block

void *vaddr

virtual address of block

dma_addr_t dma

dma address of block

Description

Caller promises neither device nor driver will again touch this block unless it is first re-allocated.

struct dma_pool *dmam_pool_create(const char *name, struct device *dev, size_t size, size_t align, size_t allocation)

Managed dma_pool_create()

Parameters

const char *name

name of pool, for diagnostics

struct device *dev

device that will be doing the DMA

size_t size

size of the blocks in this pool.

size_t align

alignment requirement for blocks; must be a power of two

size_t allocation

returned blocks won't cross this boundary (or zero)

Description

Managed dma_pool_create(). A DMA pool created with this function is automatically destroyed on driver detach.

Return

a managed dma allocation pool with the requested characteristics, or NULL if one can't be created.

void dmam_pool_destroy(struct dma_pool *pool)

Managed dma_pool_destroy()

Parameters

struct dma_pool *pool

dma pool that will be destroyed

Description

Managed dma_pool_destroy().

void *dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags, dma_addr_t *handle)

Get a zero-initialized block of DMA coherent memory.

Parameters

struct dma_pool *pool

dma pool that will produce the block

gfp_t mem_flags

GFP_* bitmask

dma_addr_t *handle

pointer to dma address of block

Description

Same as dma_pool_alloc(), but the returned memory is zeroed.

Part Ic - DMA addressing limitations

DMA mask is a bit mask of the addressable region for the device. In other words, if applying the DMA mask (a bitwise AND operation) to the DMA address of a memory region does not clear any bits in the address, then the device can perform DMA to that memory region.

All the below functions which set a DMA mask may fail if the requested mask cannot be used with the device, or if the device is not capable of doing DMA.

int dma_set_mask_and_coherent(struct device *dev, u64 mask)

Updates both streaming and coherent DMA masks.

Returns: 0 if successful and a negative error if not.
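As a sketch of a common probe-time pattern (the 64/32-bit values are just an example), a driver can request a large mask and fall back to 32 bits::

        if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)) &&
            dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
                dev_warn(dev, "no suitable DMA addressing available\n");
                return -ENODEV;
        }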

int dma_set_mask(struct device *dev, u64 mask)

Updates only the streaming DMA mask.

Returns: 0 if successful and a negative error if not.

int dma_set_coherent_mask(struct device *dev, u64 mask)

Updates only the coherent DMA mask.

Returns: 0 if successful and a negative error if not.

u64 dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to operate efficiently. Usually this means the returned mask is the minimum required to cover all of memory. Examining the required mask gives drivers with variable descriptor sizes the opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask. If you wish to take advantage of it, you should issue a dma_set_mask() call to set the mask to the value returned.

size_t dma_max_mapping_size(struct device *dev);

Returns the maximum size of a mapping for the device. The size parameter of the mapping functions like dma_map_single(), dma_map_page() and others should not be larger than the returned value.

size_t dma_opt_mapping_size(struct device *dev);

Returns the maximum optimal size of a mapping for the device.

Mapping larger buffers may take much longer in certain scenarios. In addition, for high-rate short-lived streaming mappings, the upfront time spent on the mapping may account for an appreciable part of the total request lifetime. As such, if splitting larger requests incurs no significant performance penalty, then device drivers are advised to limit total DMA streaming mappings length to the returned value.

bool dma_need_sync(struct device *dev, dma_addr_t dma_addr);

Returns %true if dma_sync_single_for_{device,cpu} calls are required to transfer memory ownership. Returns %false if those calls can be skipped.

unsigned long dma_get_merge_boundary(struct device *dev);

Returns the DMA merge boundary. If the device cannot merge any DMA address segments, the function returns 0.

Part Id - Streaming DMA mappings

Streaming DMA allows mapping an existing buffer for DMA transfers and then unmapping it when finished. Map functions are not guaranteed to succeed, so the return value must be checked.

Note

In particular, mapping may fail for memory not addressable by the device, e.g. if it is not within the DMA mask of the device and/or a connecting bus bridge. Streaming DMA functions try to overcome such addressing constraints, either by using an IOMMU (a device which maps I/O DMA addresses to physical memory addresses), or by copying the data to/from a bounce buffer if the kernel is configured with a SWIOTLB. However, these methods are not always available, and even if they are, they may still fail for a number of reasons.

In short, a device driver may need to be wary of where buffers are located in physical memory, especially if the DMA mask is less than 32 bits.

dma_addr_t dma_map_single(struct device *dev, void *cpu_addr, size_t size,
                          enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the device and returns the DMA address of the memory.

The DMA API uses a strongly typed enumerator for its direction:

DMA_NONE

no direction (used for debugging)

DMA_TO_DEVICE

data is going from the memory to the device

DMA_FROM_DEVICE

data is coming from the device to the memory

DMA_BIDIRECTIONAL

direction isn’t known

Note

Contiguous kernel virtual space may not be contiguous as physical memory. Since this API does not provide any scatter/gather capability, it will fail if the user tries to map a non-physically contiguous piece of memory. For this reason, memory to be mapped by this API should be obtained from sources which guarantee it to be physically contiguous (like kmalloc).

Warning

Memory coherency operates at a granularity called the cache line width. In order for memory mapped by this API to operate correctly, the mapped region must begin exactly on a cache line boundary and end exactly on one (to prevent two separately mapped regions from sharing a single cache line). Since the cache line size may not be known at compile time, the API will not enforce this requirement. Therefore, it is recommended that driver writers who don't take special care to determine the cache line size at run time only map virtual regions that begin and end on page boundaries (which are guaranteed also to be cache line boundaries).

DMA_TO_DEVICE synchronisation must be done after the last modification of the memory region by the software and before it is handed off to the device. Once this primitive is used, memory covered by this primitive should be treated as read-only by the device. If the device may write to it at any point, it should be DMA_BIDIRECTIONAL (see below).

DMA_FROM_DEVICE synchronisation must be done before the driver accesses data that may be changed by the device. This memory should be treated as read-only by the driver. If the driver needs to write to it at any point, it should be DMA_BIDIRECTIONAL (see below).

DMA_BIDIRECTIONAL requires special handling: it means that the driver isn't sure if the memory was modified before being handed off to the device and also isn't sure if the device will also modify it. Thus, you must always sync bidirectional memory twice: once before the memory is handed off to the device (to make sure all memory changes are flushed from the processor) and once before the data may be accessed after being used by the device (to make sure any processor cache lines are updated with data that the device may have changed).

void dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
                      enum dma_data_direction direction)

Unmaps the region previously mapped. All the parameters passed in must be identical to those passed to (and returned by) dma_map_single().

dma_addr_t dma_map_page(struct device *dev, struct page *page,
                        unsigned long offset, size_t size,
                        enum dma_data_direction direction)

void dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
                    enum dma_data_direction direction)

API for mapping and unmapping for pages. All the notes and warnings for the other mapping APIs apply here. Also, although the <offset> and <size> parameters are provided to do partial page mapping, it is recommended that you never use these unless you really know what the cache width is.

dma_addr_t dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
                            enum dma_data_direction dir, unsigned long attrs)

void dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
                        enum dma_data_direction dir, unsigned long attrs)

API for mapping and unmapping for MMIO resources. All the notes and warnings for the other mapping APIs apply here. The API should only be used to map device MMIO resources, mapping of RAM is not permitted.

int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single(), dma_map_page() and dma_map_resource() will fail to create a mapping. A driver can check for these errors by testing the returned DMA address with dma_mapping_error(). A non-zero return value means the mapping could not be created and the driver should take appropriate action (e.g. reduce current DMA mapping usage or delay and try again later).
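A hedged sketch of the usual map/check/unmap pattern (the buf and len names are placeholders for the driver's buffer)::

        dma_addr_t dma_handle;

        dma_handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, dma_handle)) {
                /* reduce DMA usage, defer the request, or fail it */
                return -ENOMEM;
        }

        /* ... the device reads len bytes starting at dma_handle ... */

        dma_unmap_single(dev, dma_handle, len, DMA_TO_DEVICE);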

int dma_map_sg(struct device *dev, struct scatterlist *sg,
               int nents, enum dma_data_direction direction)

Maps a scatter/gather list for DMA. Returns the number of DMA address segments mapped, which may be smaller than <nents> passed in if several consecutive sglist entries are merged (e.g. with an IOMMU, or if some adjacent segments just happen to be physically contiguous).

Please note that the sg cannot be mapped again if it has been mapped once. The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg() can fail. When it does, 0 is returned and a driver must take appropriate action. It is critical that the driver do something: in the case of a block driver, aborting the request or even oopsing is better than doing nothing and corrupting the filesystem.

With scatterlists, you use the resulting mapping like this::

        int i, count = dma_map_sg(dev, sglist, nents, direction);
        struct scatterlist *sg;

        for_each_sg(sglist, sg, count, i) {
                hw_address[i] = sg_dma_address(sg);
                hw_len[i] = sg_dma_len(sg);
        }

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries into one. The returned number is the actual number of sg entries it mapped them to. On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times) and use the sg_dma_address() and sg_dma_len() macros where you previously accessed sg->address and sg->length as shown above.

void dma_unmap_sg(struct device *dev, struct scatterlist *sg,
                  int nents, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list. All the parameters must be the same as those passed into the scatter/gather mapping API.

Note: <nents> must be the number you passed in, not the number of DMA address entries returned.

void dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
                             size_t size,
                             enum dma_data_direction direction)

void dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
                                size_t size,
                                enum dma_data_direction direction)

void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
                         int nents,
                         enum dma_data_direction direction)

void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
                            int nents,
                            enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the CPU and device. With the sync_sg API, all the parameters must be the same as those passed into the sg mapping API. With the sync_single API, you can use dma_handle and size parameters that aren't identical to those passed into the single mapping API to do a partial sync.

Note

You must do this:

  • Before reading values that have been written by DMA from the device (use the DMA_FROM_DEVICE direction)

  • After writing values that will be written to the device using DMA (use the DMA_TO_DEVICE direction)

  • Before and after handing memory to the device if the memory is DMA_BIDIRECTIONAL

See also dma_map_single().
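As an illustrative sketch (buf_dma/rx_dma and the lengths are placeholder names for mappings the driver keeps around and reuses), the ownership hand-off looks like this::

        /* CPU fills the buffer, then hands ownership to the device */
        memcpy(buf, data, len);
        dma_sync_single_for_device(dev, buf_dma, len, DMA_TO_DEVICE);
        /* ... tell the device to start the transfer ... */

        /* the device has completed a receive into a DMA_FROM_DEVICE mapping */
        dma_sync_single_for_cpu(dev, rx_dma, rx_len, DMA_FROM_DEVICE);
        /* the CPU may now safely read the received data */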

dma_addr_t dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
                                enum dma_data_direction dir,
                                unsigned long attrs)

void dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
                            size_t size, enum dma_data_direction dir,
                            unsigned long attrs)

int dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
                     int nents, enum dma_data_direction dir,
                     unsigned long attrs)

void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
                        int nents, enum dma_data_direction dir,
                        unsigned long attrs)

The four functions above are just like the counterpart functions without the _attrs suffixes, except that they pass an optional dma_attrs.

The interpretation of DMA attributes is architecture-specific, and each attribute should be documented in DMA attributes.

If dma_attrs are 0, the semantics of each of these functions is identical to those of the corresponding function without the _attrs suffix. As a result dma_map_single_attrs() can generally replace dma_map_single(), etc.

As an example of the use of the *_attrs functions, here's how you could pass an attribute DMA_ATTR_FOO when mapping memory for DMA::

        #include <linux/dma-mapping.h>
        /* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and
         * documented in Documentation/core-api/dma-attributes.rst */
        ...

                unsigned long attr = 0;

                attr |= DMA_ATTR_FOO;
                ....
                n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, attr);
                ....

Architectures that care about DMA_ATTR_FOO would check for its presence in their implementations of the mapping and unmapping routines, e.g.::

        void whizco_dma_map_sg_attrs(struct device *dev, dma_addr_t dma_addr,
                                     size_t size, enum dma_data_direction dir,
                                     unsigned long attrs)
        {
                ....
                if (attrs & DMA_ATTR_FOO)
                        /* twizzle the frobnozzle */
                ....
        }

Part Ie - IOVA-based DMA mappings

These APIs allow a very efficient mapping when using an IOMMU. They are an optional path that requires extra code and are only recommended for drivers where DMA mapping performance, or the space usage for storing the DMA addresses, matters. All the considerations from the previous section apply here as well.

bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state,
                        phys_addr_t phys, size_t size);

Is used to try to allocate IOVA space for a mapping operation. If it returns false, this API can't be used for the given device and the normal streaming DMA mapping API should be used. The struct dma_iova_state is allocated by the driver and must be kept around until unmap time.

static inline bool dma_use_iova(struct dma_iova_state *state)

Can be used by the driver to check if the IOVA-based API is used after a call to dma_iova_try_alloc. This can be useful in the unmap path.

int dma_iova_link(struct device *dev, struct dma_iova_state *state,
                  phys_addr_t phys, size_t offset, size_t size,
                  enum dma_data_direction dir, unsigned long attrs);

Is used to link ranges to the IOVA previously allocated. The start of all but the first call to dma_iova_link for a given state must be aligned to the DMA merge boundary returned by dma_get_merge_boundary(), and the size of all but the last range must be aligned to the DMA merge boundary as well.

int dma_iova_sync(struct device *dev, struct dma_iova_state *state,
                  size_t offset, size_t size);

Must be called to sync the IOMMU page tables for the IOVA range mapped by one or more calls to dma_iova_link().

For drivers that use a one-shot mapping, all ranges can be unmapped and the IOVA freed by calling:

void dma_iova_destroy(struct device *dev, struct dma_iova_state *state,
                      size_t mapped_len, enum dma_data_direction dir,
                      unsigned long attrs);

Alternatively drivers can dynamically manage the IOVA space by unmapping and mapping individual regions. In that case

void dma_iova_unlink(struct device *dev, struct dma_iova_state *state,
                     size_t offset, size_t size, enum dma_data_direction dir,
                     unsigned long attrs);

is used to unmap a range previously mapped, and

void dma_iova_free(struct device *dev, struct dma_iova_state *state);

is used to free the IOVA space. All regions must have been unmapped using dma_iova_unlink() before calling dma_iova_free().
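Putting these together, a hedged sketch of the one-shot flow (error handling abbreviated; phys and total_len are placeholders for a single physically contiguous range)::

        struct dma_iova_state state = {};
        int ret;

        if (!dma_iova_try_alloc(dev, &state, phys, total_len))
                return -EOPNOTSUPP;     /* fall back to the streaming API */

        ret = dma_iova_link(dev, &state, phys, 0, total_len, DMA_TO_DEVICE, 0);
        if (ret) {
                dma_iova_free(dev, &state);
                return ret;
        }

        ret = dma_iova_sync(dev, &state, 0, total_len);
        if (ret) {
                dma_iova_destroy(dev, &state, total_len, DMA_TO_DEVICE, 0);
                return ret;
        }

        /* ... perform the DMA using the allocated IOVA ... */

        dma_iova_destroy(dev, &state, total_len, DMA_TO_DEVICE, 0);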

Part II - Non-coherent DMA allocations

These APIs allow allocating pages that are guaranteed to be DMA addressable by the passed in device, but which need explicit management of memory ownership for the kernel vs the device.

If you don't understand how cache line coherency works between a processor and an I/O device, you should not be using this part of the API.

struct page *dma_alloc_pages(struct device *dev, size_t size, dma_addr_t *dma_handle,
                             enum dma_data_direction dir, gfp_t gfp)

This routine allocates a region of <size> bytes of non-coherent memory. It returns a pointer to the first struct page for the region, or NULL if the allocation failed. The resulting struct page can be used for everything a struct page is suitable for.

It also returns a <dma_handle> which may be cast to an unsigned integer the same width as the bus and given to the device as the DMA address base of the region.

The dir parameter specifies if data is read and/or written by the device, see dma_map_single() for details.

The gfp parameter allows the caller to specify the GFP_ flags (see kmalloc()) for the allocation, but rejects flags used to specify a memory zone such as GFP_DMA or GFP_HIGHMEM.

Before giving the memory to the device, dma_sync_single_for_device() needs to be called, and before reading memory written by the device, dma_sync_single_for_cpu(), just like for streaming DMA mappings that are reused.
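A hedged sketch of allocating, using and freeing such a region (the size and direction are arbitrary examples)::

        struct page *page;
        dma_addr_t dma;

        page = dma_alloc_pages(dev, size, &dma, DMA_BIDIRECTIONAL, GFP_KERNEL);
        if (!page)
                return -ENOMEM;

        /* CPU prepares the data, then hands ownership to the device */
        dma_sync_single_for_device(dev, dma, size, DMA_BIDIRECTIONAL);
        /* ... device DMA ... */
        dma_sync_single_for_cpu(dev, dma, size, DMA_BIDIRECTIONAL);

        dma_free_pages(dev, size, page, dma, DMA_BIDIRECTIONAL);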

void dma_free_pages(struct device *dev, size_t size, struct page *page,
                    dma_addr_t dma_handle, enum dma_data_direction dir)

Free a region of memory previously allocated using dma_alloc_pages(). dev, size, dma_handle and dir must all be the same as those passed into dma_alloc_pages(). page must be the pointer returned by dma_alloc_pages().

int dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
                   size_t size, struct page *page)

Map an allocation returned from dma_alloc_pages() into a user address space. dev and size must be the same as those passed into dma_alloc_pages(). page must be the pointer returned by dma_alloc_pages().

void *dma_alloc_noncoherent(struct device *dev, size_t size,
                            dma_addr_t *dma_handle, enum dma_data_direction dir,
                            gfp_t gfp)

This routine is a convenient wrapper around dma_alloc_pages that returns the kernel virtual address for the allocated memory instead of the page structure.

void dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
                          dma_addr_t dma_handle, enum dma_data_direction dir)

Free a region of memory previously allocated using dma_alloc_noncoherent(). dev, size, dma_handle and dir must all be the same as those passed into dma_alloc_noncoherent(). cpu_addr must be the virtual address returned by dma_alloc_noncoherent().

struct sg_table *dma_alloc_noncontiguous(struct device *dev, size_t size,
                                         enum dma_data_direction dir, gfp_t gfp,
                                         unsigned long attrs);

This routine allocates <size> bytes of non-coherent and possibly non-contiguous memory. It returns a pointer to a struct sg_table that describes the allocated and DMA mapped memory, or NULL if the allocation failed. The resulting memory can be used for everything a struct page mapped into a scatterlist is suitable for.

The returned sg_table is guaranteed to have 1 single DMA mapped segment as indicated by sgt->nents, but it might have multiple CPU side segments as indicated by sgt->orig_nents.

The dir parameter specifies if data is read and/or written by the device, see dma_map_single() for details.

The gfp parameter allows the caller to specify the GFP_ flags (see kmalloc()) for the allocation, but rejects flags used to specify a memory zone such as GFP_DMA or GFP_HIGHMEM.

The attrs argument must be either 0 or DMA_ATTR_ALLOC_SINGLE_PAGES.

Before giving the memory to the device, dma_sync_sgtable_for_device() needs to be called, and before reading memory written by the device, dma_sync_sgtable_for_cpu(), just like for streaming DMA mappings that are reused.
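For illustration, a hedged sketch of a receive buffer handled with this interface (the size and direction are placeholders)::

        struct sg_table *sgt;
        dma_addr_t dev_addr;

        sgt = dma_alloc_noncontiguous(dev, size, DMA_FROM_DEVICE, GFP_KERNEL, 0);
        if (!sgt)
                return -ENOMEM;

        /* the single DMA mapped segment to program into the device */
        dev_addr = sg_dma_address(sgt->sgl);

        /* ... device writes up to size bytes ... */
        dma_sync_sgtable_for_cpu(dev, sgt, DMA_FROM_DEVICE);
        /* the CPU may now read the data via the CPU side segments */

        dma_free_noncontiguous(dev, size, sgt, DMA_FROM_DEVICE);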

void dma_free_noncontiguous(struct device *dev, size_t size,
                            struct sg_table *sgt,
                            enum dma_data_direction dir)

Free memory previously allocated using dma_alloc_noncontiguous(). dev, size, and dir must all be the same as those passed into dma_alloc_noncontiguous(). sgt must be the pointer returned by dma_alloc_noncontiguous().

void *dma_vmap_noncontiguous(struct device *dev, size_t size,
                             struct sg_table *sgt)

Return a contiguous kernel mapping for an allocation returned from dma_alloc_noncontiguous(). dev and size must be the same as those passed into dma_alloc_noncontiguous(). sgt must be the pointer returned by dma_alloc_noncontiguous().

Once a non-contiguous allocation is mapped using this function, the flush_kernel_vmap_range() and invalidate_kernel_vmap_range() APIs must be used to manage the coherency between the kernel mapping, the device and user space mappings (if any).

void dma_vunmap_noncontiguous(struct device *dev, void *vaddr)

Unmap a kernel mapping returned by dma_vmap_noncontiguous(). dev must be the same as the one passed into dma_alloc_noncontiguous(). vaddr must be the pointer returned by dma_vmap_noncontiguous().

int dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
                           size_t size, struct sg_table *sgt)

Map an allocation returned from dma_alloc_noncontiguous() into a user address space. dev and size must be the same as those passed into dma_alloc_noncontiguous(). sgt must be the pointer returned by dma_alloc_noncontiguous().

int dma_get_cache_alignment(void)

Returns the processor cache alignment. This is the absolute minimum alignment and width that you must observe when either mapping memory or doing partial flushes.

Note

This API may return a number larger than the actual cache line, but it will guarantee that one or more cache lines fit exactly into the width returned by this call. It will also always be a power of two for easy alignment.
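As a small sketch (ALIGN() is the kernel's round-up helper and len is a placeholder buffer length), a driver can round a partial-flush or mapping length up to a whole number of cache lines::

        size_t mapped_len = ALIGN(len, dma_get_cache_alignment());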

Part III - Debug drivers use of the DMA API

The DMA API as described above has some constraints. DMA addresses must be released with the corresponding function with the same size, for example. With the advent of hardware IOMMUs it becomes more and more important that drivers do not violate those constraints. In the worst case such a violation can result in data corruption up to destroyed filesystems.

To debug drivers and find bugs in the usage of the DMA API, checking code can be compiled into the kernel which will tell the developer about those violations. If your architecture supports it you can select the "Enable debugging of DMA API usage" option in your kernel configuration. Enabling this option has a performance impact. Do not enable it in production kernels.

If you boot the resulting kernel it will contain code which does some bookkeeping about what DMA memory was allocated for which device. If this code detects an error it prints a warning message with some details into your kernel log. An example warning message may look like this::

        WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
                check_unmap+0x203/0x490()
        Hardware name:
        forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
                function [device address=0x00000000640444be] [size=66 bytes] [mapped as
        single] [unmapped as page]
        Modules linked in: nfsd exportfs bridge stp llc r8169
        Pid: 0, comm: swapper Tainted: G        W  2.6.28-dmatest-09289-g8bb99c0 #1
        Call Trace:
        <IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
        [<ffffffff80647b70>] _spin_unlock+0x10/0x30
        [<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
        [<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
        [<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
        [<ffffffff80252f96>] queue_work+0x56/0x60
        [<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
        [<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
        [<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
        [<ffffffff80235177>] find_busiest_group+0x207/0x8a0
        [<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
        [<ffffffff803c7ea3>] check_unmap+0x203/0x490
        [<ffffffff803c8259>] debug_dma_unmap_phys+0x49/0x50
        [<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
        [<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
        [<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
        [<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
        [<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
        [<ffffffff8020c093>] ret_from_intr+0x0/0xa
        <EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device including a stacktrace of the DMA API call which caused this warning.

By default only the first error will result in a warning message. All other errors will only be counted silently. This limitation exists to prevent the code from flooding your kernel log. To support debugging a device driver this can be disabled via debugfs. See the debugfs interface documentation below for details.

The debugfs directory for the DMA API debugging code is called dma-api/. In this directory the following files can currently be found:

dma-api/all_errors

This file contains a numeric value. If this value is not equal to zero the debugging code will print a warning for every error it finds into the kernel log. Be careful with this option, as it can easily flood your logs.

dma-api/disabled

This read-only file contains the character 'Y' if the debugging code is disabled. This can happen when it runs out of memory or if it was disabled at boot time.

dma-api/dump

This read-only file contains current DMA mappings.

dma-api/error_count

This file is read-only and shows the total number of errors found.

dma-api/num_errors

The number in this file shows how many warnings will be printed to the kernel log before it stops. This number is initialized to one at system boot and can be set by writing into this file.

dma-api/min_free_entries

This read-only file can be read to get the minimum number of free dma_debug_entries the allocator has ever seen. If this value goes down to zero the code will attempt to increase nr_total_entries to compensate.

dma-api/num_free_entries

The current number of free dma_debug_entries in the allocator.

dma-api/nr_total_entries

The total number of dma_debug_entries in the allocator, both free and used.

dma-api/driver_filter

You can write a name of a driver into this file to limit the debug output to requests from that particular driver. Write an empty string to that file to disable the filter and see all errors again.

If you have this code compiled into your kernel it will be enabled by default. If you want to boot without the bookkeeping anyway you can provide 'dma_debug=off' as a boot parameter. This will disable DMA API debugging. Notice that you can not enable it again at runtime. You have to reboot to do so.

If you want to see debug messages only for a special device driver you can specify the dma_debug_driver=<drivername> parameter. This will enable the driver filter at boot time. The debug code will only print errors for that driver afterwards. This filter can be disabled or changed later using debugfs.

When the code disables itself at runtime this is most likely because it ran out of dma_debug_entries and was unable to allocate more on-demand. 65536 entries are preallocated at boot - if this is too low for you, boot with 'dma_debug_entries=<your_desired_number>' to overwrite the default. Note that the code allocates entries in batches, so the exact number of preallocated entries may be greater than the actual number requested. The code will print to the kernel log each time it has dynamically allocated as many entries as were initially preallocated. This is to indicate that a larger preallocation size may be appropriate, or, if it happens continually, that a driver may be leaking mappings.

void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);

dma-debug interface debug_dma_mapping_error() to debug drivers that fail to check DMA mapping errors on addresses returned by the dma_map_single() and dma_map_page() interfaces. This interface clears a flag set by debug_dma_map_phys() to indicate that dma_mapping_error() has been called by the driver. When the driver does an unmap, debug_dma_unmap() checks the flag and if this flag is still set, prints a warning message that includes the call trace that leads up to the unmap. This interface can be called from dma_mapping_error() routines to enable DMA mapping error check debugging.

Functions and structures

struct scatterlist *sg_next(struct scatterlist *sg)

return the next scatterlist entry in a list

Parameters

struct scatterlist *sg

The current sg entry

Description

Usually the next entry will be sg + 1, but if this sg element is part of a chained scatterlist, it could jump to the start of a new scatterlist array.

void sg_assign_page(struct scatterlist *sg, struct page *page)

Assign a given page to an SG entry

Parameters

struct scatterlist *sg

SG entry

struct page *page

The page

Description

Assign page to sg entry. Also see sg_set_page(), the most commonly used variant.

void sg_set_page(struct scatterlist *sg, struct page *page, unsigned int len, unsigned int offset)

Set sg entry to point at given page

Parameters

struct scatterlist *sg

SG entry

struct page *page

The page

unsigned int len

Length of data

unsigned int offset

Offset into page

Description

Use this function to set an sg entry pointing at a page, never assign the page directly. We encode sg table information in the lower bits of the page pointer. See sg_page() for looking up the page belonging to an sg entry.
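As a brief sketch (page0, page1 and len1 are placeholders), a small two-entry table can be built like this::

        struct scatterlist sgl[2];

        sg_init_table(sgl, 2);          /* clears the table and marks sgl[1] as the end */
        sg_set_page(&sgl[0], page0, PAGE_SIZE, 0);
        sg_set_page(&sgl[1], page1, len1, 0);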

void sg_set_folio(struct scatterlist *sg, struct folio *folio, size_t len, size_t offset)

Set sg entry to point at given folio

Parameters

struct scatterlist *sg

SG entry

struct folio *folio

The folio

size_t len

Length of data

size_t offset

Offset into folio

Description

Use this function to set an sg entry pointing at a folio, never assign the folio directly. We encode sg table information in the lower bits of the folio pointer. See sg_page() for looking up the page belonging to an sg entry.

void sg_set_buf(struct scatterlist *sg, const void *buf, unsigned int buflen)

Set sg entry to point at given data

Parameters

struct scatterlist *sg

SG entry

const void *buf

Data

unsigned int buflen

Data length

void sg_chain(struct scatterlist *prv, unsigned int prv_nents, struct scatterlist *sgl)

Chain two sglists together

Parameters

struct scatterlist *prv

First scatterlist

unsigned int prv_nents

Number of entries in prv

struct scatterlist *sgl

Second scatterlist

Description

Links prv and sgl together, to form a longer scatterlist.

void sg_mark_end(struct scatterlist *sg)

Mark the end of the scatterlist

Parameters

struct scatterlist *sg

SG entry

Description

Marks the passed in sg entry as the termination point for the sg table. A call to sg_next() on this entry will return NULL.

void sg_unmark_end(struct scatterlist *sg)

Undo setting the end of the scatterlist

Parameters

struct scatterlist *sg

SG entry

Description

Removes the termination marker from the given entry of the scatterlist.

bool sg_dma_is_bus_address(struct scatterlist *sg)

Return whether a given segment was marked as a bus address

Parameters

struct scatterlist *sg

SG entry

Description

Returns true if sg_dma_mark_bus_address() has been called on this segment.

void sg_dma_mark_bus_address(struct scatterlist *sg)

Mark the scatterlist entry as a bus address

Parameters

struct scatterlist *sg

SG entry

Description

Marks the passed in sg entry to indicate that the dma_address is a bus address and doesn't need to be unmapped. This should only be used by dma_map_sg() implementations to mark bus addresses so they can be properly cleaned up in dma_unmap_sg().

void sg_dma_unmark_bus_address(struct scatterlist *sg)

Unmark the scatterlist entry as a bus address

Parameters

struct scatterlist *sg

SG entry

Description

Clears the bus address mark.

bool sg_dma_is_swiotlb(struct scatterlist *sg)

Return whether the scatterlist was marked for SWIOTLB bouncing

Parameters

struct scatterlist *sg

SG entry

Description

Returns true if the scatterlist was marked for SWIOTLB bouncing. Not all elements may have been bounced, so the caller would have to check individual SG entries with swiotlb_find_pool().

void sg_dma_mark_swiotlb(struct scatterlist *sg)

Mark the scatterlist for SWIOTLB bouncing

Parameters

struct scatterlist *sg

SG entry

Description

Marks a scatterlist for SWIOTLB bounce. Not all SG entries may be bounced.

dma_addr_t sg_phys(struct scatterlist *sg)

Return physical address of an sg entry

Parameters

struct scatterlist *sg

SG entry

Description

This calls page_to_phys() on the page in this sg entry, and adds the sg offset. The caller must know that it is legal to call page_to_phys() on the sg page.

void *sg_virt(struct scatterlist *sg)

Return virtual address of an sg entry

Parameters

struct scatterlist *sg

SG entry

Description

This calls page_address() on the page in this sg entry, and adds the sg offset. The caller must know that the sg page has a valid virtual mapping.

void sg_init_marker(struct scatterlist *sgl, unsigned int nents)

Initialize markers in sg table

Parameters

struct scatterlist *sgl

The SG table

unsigned int nents

Number of entries in table

int sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages, unsigned int n_pages, unsigned int offset, unsigned long size, gfp_t gfp_mask)

Allocate and initialize an sg table from an array of pages

Parameters

struct sg_table *sgt

The sg table header to use

struct page **pages

Pointer to an array of page pointers

unsigned int n_pages

Number of pages in the pages array

unsigned int offset

Offset from start of the first page to the start of a buffer

unsigned long size

Number of valid bytes in the buffer (after offset)

gfp_t gfp_mask

GFP allocation mask

Description

Allocate and initialize an sg table from a list of pages. Contiguous ranges of the pages are squashed into a single scatterlist node. A user may provide an offset at a start and a size of valid data in a buffer specified by the page array. The returned sg table is released by sg_free_table().

Return

0 on success, negative error on failure
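A hedged sketch of typical use (the pages array, n_pages and the later mapping step are placeholders)::

        struct sg_table sgt;
        int ret;

        ret = sg_alloc_table_from_pages(&sgt, pages, n_pages, 0,
                                        n_pages * PAGE_SIZE, GFP_KERNEL);
        if (ret)
                return ret;

        /* ... e.g. hand sgt.sgl to dma_map_sg() or iterate the entries ... */

        sg_free_table(&sgt);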

struct page *sg_page_iter_page(struct sg_page_iter *piter)

get the current page held by the page iterator

Parameters

struct sg_page_iter *piter

page iterator holding the page

dma_addr_t sg_page_iter_dma_address(struct sg_dma_page_iter *dma_iter)

get the dma address of the current page held by the page iterator.

Parameters

struct sg_dma_page_iter *dma_iter

page iterator holding the page

for_each_sg_page

for_each_sg_page(sglist, piter, nents, pgoffset)

iterate over the pages of the given sg list

Parameters

sglist

sglist to iterate over

piter

page iterator to hold current page, sg, sg_pgoffset

nents

maximum number of sg entries to iterate over

pgoffset

starting page offset (in pages)

Description

Callers may use sg_page_iter_page() to get each page pointer. In each loop it operates on PAGE_SIZE unit.
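For illustration, a minimal sketch of walking the pages of a list (sglist and nents are placeholders)::

        struct sg_page_iter piter;

        for_each_sg_page(sglist, &piter, nents, 0) {
                struct page *page = sg_page_iter_page(&piter);

                /* operate on one PAGE_SIZE unit at a time */
        }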

for_each_sg_dma_page

for_each_sg_dma_page(sglist, dma_iter, dma_nents, pgoffset)

iterate over the pages of the given sg list

Parameters

sglist

sglist to iterate over

dma_iter

DMA page iterator to hold current page

dma_nents

maximum number of sg entries to iterate over, this is the value returned from dma_map_sg

pgoffset

starting page offset (in pages)

Description

Callers may use sg_page_iter_dma_address() to get each page's DMA address. In each loop it operates on PAGE_SIZE unit.

for_each_sgtable_page

for_each_sgtable_page(sgt, piter, pgoffset)

iterate over all pages in the sg_table object

Parameters

sgt

sg_table object to iterate over

piter

page iterator to hold current page

pgoffset

starting page offset (in pages)

Description

Iterates over all the memory pages in the buffer described by a scatterlist stored in the given sg_table object. See also for_each_sg_page(). In each loop it operates on PAGE_SIZE unit.

for_each_sgtable_dma_page

for_each_sgtable_dma_page(sgt, dma_iter, pgoffset)

iterate over the DMA mapped sg_table object

Parameters

sgt

sg_table object to iterate over

dma_iter

DMA page iterator to hold current page

pgoffset

starting page offset (in pages)

Description

Iterates over all the DMA mapped pages in the buffer described by a scatterlist stored in the given sg_table object. See also for_each_sg_dma_page(). In each loop it operates on PAGE_SIZE unit.

int sg_nents(struct scatterlist *sg)

return total count of entries in scatterlist

Parameters

struct scatterlist *sg

The scatterlist

Description

Allows to know how many entries are in sg, taking into account chaining as well

int sg_nents_for_len(struct scatterlist *sg, u64 len)

return total count of entries in scatterlist needed to satisfy the supplied length

Parameters

struct scatterlist *sg

The scatterlist

u64 len

The total required length

Description

Determines the number of entries in sg that are required to meet the supplied length, taking into account chaining as well

Return

the number of sg entries needed, negative error on failure

struct scatterlist *sg_last(struct scatterlist *sgl, unsigned int nents)

return the last scatterlist entry in a list

Parameters

struct scatterlist *sgl

First entry in the scatterlist

unsigned int nents

Number of entries in the scatterlist

Description

Should only be used casually, it (currently) scans the entire list to get the last entry.

Note that the sgl pointer passed in need not be the first one, the important bit is that nents denotes the number of entries that exist from sgl.

void sg_init_table(struct scatterlist *sgl, unsigned int nents)

Initialize SG table

Parameters

struct scatterlist *sgl

The SG table

unsigned int nents

Number of entries in table

Notes

If this is part of a chained sg table, sg_mark_end() should be used only on the last table part.

void sg_init_one(struct scatterlist *sg, const void *buf, unsigned int buflen)

Initialize a single entry sg list

Parameters

struct scatterlist *sg

SG entry

const void *buf

Virtual address for IO

unsigned int buflen

IO length

void __sg_free_table(struct sg_table *table, unsigned int max_ents, unsigned int nents_first_chunk, sg_free_fn *free_fn, unsigned int num_ents)

Free a previously mapped sg table

Parameters

struct sg_table *table

The sg table header to use

unsigned int max_ents

The maximum number of entries per single scatterlist

unsigned int nents_first_chunk

Number of entries in the (preallocated) first scatterlist chunk, 0 means no such preallocated first chunk

sg_free_fn *free_fn

Free function

unsigned int num_ents

Number of entries in the table

Description

Free an sg table previously allocated and setup with __sg_alloc_table(). The max_ents value must be identical to that previously used with __sg_alloc_table().

void sg_free_append_table(struct sg_append_table *table)

Free a previously allocated append sg table.

Parameters

struct sg_append_table *table

The mapped sg append table header

void sg_free_table(struct sg_table *table)

Free a previously allocated sg table

Parameters

struct sg_table *table

The mapped sg table header

int __sg_alloc_table(struct sg_table *table, unsigned int nents, unsigned int max_ents, struct scatterlist *first_chunk, unsigned int nents_first_chunk, gfp_t gfp_mask, sg_alloc_fn *alloc_fn)

Allocate and initialize an sg table with given allocator

Parameters

struct sg_table *table

The sg table header to use

unsigned int nents

Number of entries in sg list

unsigned int max_ents

The maximum number of entries the allocator returns per call

struct scatterlist *first_chunk

first SGL if preallocated (may be NULL)

unsigned int nents_first_chunk

Number of entries in the (preallocated) first scatterlist chunk, 0 means no such preallocated chunk provided by user

gfp_t gfp_mask

GFP allocation mask

sg_alloc_fn *alloc_fn

Allocator to use

Description

This function returns a table nents long. The allocator is defined to return scatterlist chunks of maximum size max_ents. Thus if nents is bigger than max_ents, the scatterlists will be chained in units of max_ents.

Notes

If this function returns non-0 (eg failure), the caller must call __sg_free_table() to cleanup any leftover allocations.

int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask)

Allocate and initialize an sg table

Parameters

struct sg_table *table

The sg table header to use

unsigned int nents

Number of entries in sg list

gfp_t gfp_mask

GFP allocation mask

Description

Allocate and initialize an sg table. If nents is larger than SG_MAX_SINGLE_ALLOC a chained sg table will be setup.

int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append, struct page **pages, unsigned int n_pages, unsigned int offset, unsigned long size, unsigned int max_segment, unsigned int left_pages, gfp_t gfp_mask)

Allocate and initialize an append sg table from an array of pages

Parameters

struct sg_append_table *sgt_append

The sg append table to use

struct page **pages

Pointer to an array of page pointers

unsigned int n_pages

Number of pages in the pages array

unsigned int offset

Offset from start of the first page to the start of a buffer

unsigned long size

Number of valid bytes in the buffer (after offset)

unsigned int max_segment

Maximum size of a scatterlist element in bytes

unsigned int left_pages

Pages left over that the caller has to set after this call

gfp_t gfp_mask

GFP allocation mask

Description

In the first call it allocates and initializes an sg table from a list of pages, else it reuses the scatterlist from sgt_append. Contiguous ranges of the pages are squashed into a single scatterlist entry up to the maximum size specified in max_segment. A user may provide an offset at a start and a size of valid data in a buffer specified by the page array. The returned sg table is released by sg_free_append_table().

Return

0 on success, negative error on failure

Notes

If this function returns non-0 (eg failure), the caller must call sg_free_append_table() to cleanup any leftover allocations.

In the first call, sgt_append must be initialized.

int sg_alloc_table_from_pages_segment(struct sg_table *sgt, struct page **pages, unsigned int n_pages, unsigned int offset, unsigned long size, unsigned int max_segment, gfp_t gfp_mask)

Allocate and initialize an sg table from an array of pages and given maximum segment.

Parameters

struct sg_table *sgt

The sg table header to use

struct page **pages

Pointer to an array of page pointers

unsigned int n_pages

Number of pages in the pages array

unsigned int offset

Offset from start of the first page to the start of a buffer

unsigned long size

Number of valid bytes in the buffer (after offset)

unsigned int max_segment

Maximum size of a scatterlist element in bytes

gfp_t gfp_mask

GFP allocation mask

Description

Allocate and initialize an sg table from a list of pages. Contiguous ranges of the pages are squashed into a single scatterlist node up to the maximum size specified in max_segment. A user may provide an offset at a start and a size of valid data in a buffer specified by the page array.

The returned sg table is released by sg_free_table.

Return

0 on success, negative error on failure

struct scatterlist *sgl_alloc_order(unsigned long long length, unsigned int order, bool chainable, gfp_t gfp, unsigned int *nent_p)

allocate a scatterlist and its pages

Parameters

unsigned long long length

Length in bytes of the scatterlist. Must be at least one

unsigned int order

Second argument for alloc_pages()

bool chainable

Whether or not to allocate an extra element in the scatterlist for scatterlist chaining purposes

gfp_t gfp

Memory allocation flags

unsigned int *nent_p

[out] Number of entries in the scatterlist that have pages

Return

A pointer to an initialized scatterlist or NULL upon failure.

struct scatterlist *sgl_alloc(unsigned long long length, gfp_t gfp, unsigned int *nent_p)

allocate a scatterlist and its pages

Parameters

unsigned long long length

Length in bytes of the scatterlist

gfp_t gfp

Memory allocation flags

unsigned int *nent_p

[out] Number of entries in the scatterlist

Return

A pointer to an initialized scatterlist or NULL upon failure.

void sgl_free_n_order(struct scatterlist *sgl, int nents, int order)

free a scatterlist and its pages

Parameters

struct scatterlist *sgl

Scatterlist with one or more elements

int nents

Maximum number of elements to free

int order

Second argument for __free_pages()

Notes

  • If several scatterlists have been chained and each chain element is freed separately then it's essential to set nents correctly to avoid that a page would get freed twice.

  • All pages in a chained scatterlist can be freed at once by setting nents to a high number.

void sgl_free_order(struct scatterlist *sgl, int order)

free a scatterlist and its pages

Parameters

struct scatterlist *sgl

Scatterlist with one or more elements

int order

Second argument for __free_pages()

void sgl_free(struct scatterlist *sgl)

free a scatterlist and its pages

Parameters

struct scatterlist *sgl

Scatterlist with one or more elements

void sg_miter_start(struct sg_mapping_iter *miter, struct scatterlist *sgl, unsigned int nents, unsigned int flags)

start mapping iteration over a sg list

Parameters

struct sg_mapping_iter *miter

sg mapping iter to be started

struct scatterlist *sgl

sg list to iterate over

unsigned int nents

number of sg entries

unsigned int flags

sg iterator flags

Description

Starts mapping iterator miter.

Context

Don’t care.

bool sg_miter_skip(struct sg_mapping_iter *miter, off_t offset)

reposition mapping iterator

Parameters

struct sg_mapping_iter *miter

sg mapping iter to be skipped

off_t offset

number of bytes to plus the current location

Description

Sets the offset of miter to its current location plus offset bytes. If mapping iterator miter has been proceeded by sg_miter_next(), this stops miter.

Context

Don’t care.

Return

true if miter contains the valid mapping. false if end of sg list is reached.

bool sg_miter_next(struct sg_mapping_iter *miter)

proceed mapping iterator to the next mapping

Parameters

struct sg_mapping_iter *miter

sg mapping iter to proceed

Description

Proceeds miter to the next mapping. miter should have been started using sg_miter_start(). On successful return, miter->page, miter->addr and miter->length point to the current mapping.

Context

May sleep if !SG_MITER_ATOMIC && !SG_MITER_LOCAL.

Return

true if miter contains the next mapping. false if end of sg list is reached.

void sg_miter_stop(struct sg_mapping_iter *miter)

stop mapping iteration

Parameters

struct sg_mapping_iter *miter

sg mapping iter to be stopped

Description

Stops mapping iterator miter. miter should have been started using sg_miter_start(). A stopped iteration can be resumed by calling sg_miter_next() on it. This is useful when resources (kmap) need to be released during iteration.

Context

Don’t care otherwise.
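As an illustrative sketch (sgl and nents are placeholders; SG_MITER_TO_SG is used because the loop writes into the list), zeroing a scatterlist via the mapping iterator could look like this::

        struct sg_mapping_iter miter;

        sg_miter_start(&miter, sgl, nents, SG_MITER_ATOMIC | SG_MITER_TO_SG);
        while (sg_miter_next(&miter)) {
                /* miter.addr points at a mapped chunk of miter.length bytes */
                memset(miter.addr, 0, miter.length);
        }
        sg_miter_stop(&miter);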

size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents, void *buf, size_t buflen, off_t skip, bool to_buffer)

Copy data between a linear buffer and an SG list

Parameters

struct scatterlist *sgl

The SG list

unsigned int nents

Number of SG entries

void *buf

Where to copy from

size_t buflen

The number of bytes to copy

off_t skip

Number of bytes to skip before copying

bool to_buffer

transfer direction (true == from an sg list to a buffer, false == from a buffer to an sg list)

Description

Returns the number of copied bytes.

size_t sg_copy_from_buffer(struct scatterlist *sgl, unsigned int nents, const void *buf, size_t buflen)

Copy from a linear buffer to an SG list

Parameters

struct scatterlist *sgl

The SG list

unsigned int nents

Number of SG entries

const void *buf

Where to copy from

size_t buflen

The number of bytes to copy

Description

Returns the number of copied bytes.

size_t sg_copy_to_buffer(struct scatterlist *sgl, unsigned int nents, void *buf, size_t buflen)

Copy from an SG list to a linear buffer

Parameters

struct scatterlist *sgl

The SG list

unsigned int nents

Number of SG entries

void *buf

Where to copy to

size_t buflen

The number of bytes to copy

Description

Returns the number of copied bytes.

size_t sg_pcopy_from_buffer(struct scatterlist *sgl, unsigned int nents, const void *buf, size_t buflen, off_t skip)

Copy from a linear buffer to an SG list

Parameters

struct scatterlist *sgl

The SG list

unsigned int nents

Number of SG entries

const void *buf

Where to copy from

size_t buflen

The number of bytes to copy

off_t skip

Number of bytes to skip before copying

Description

Returns the number of copied bytes.

size_t sg_pcopy_to_buffer(struct scatterlist *sgl, unsigned int nents, void *buf, size_t buflen, off_t skip)

Copy from an SG list to a linear buffer

Parameters

struct scatterlist *sgl

The SG list

unsigned int nents

Number of SG entries

void *buf

Where to copy to

size_t buflen

The number of bytes to copy

off_t skip

Number of bytes to skip before copying

Description

Returns the number of copied bytes.

size_t sg_zero_buffer(struct scatterlist *sgl, unsigned int nents, size_t buflen, off_t skip)

Zero-out a part of a SG list

Parameters

struct scatterlist *sgl

The SG list

unsigned int nents

Number of SG entries

size_t buflen

The number of bytes to zero out

off_t skip

Number of bytes to skip before zeroing

Description

Returns the number of bytes zeroed.

ssize_t extract_iter_to_sg(struct iov_iter *iter, size_t maxsize, struct sg_table *sgtable, unsigned int sg_max, iov_iter_extraction_t extraction_flags)

Extract pages from an iterator and add to an sglist

Parameters

struct iov_iter *iter

The iterator to extract from

size_t maxsize

The amount of iterator to copy

struct sg_table *sgtable

The scatterlist table to fill in

unsigned int sg_max

Maximum number of elements in sgtable that may be filled

iov_iter_extraction_t extraction_flags

Flags to qualify the request

Description

Extract the page fragments from the given amount of the source iterator and add them to a scatterlist that refers to all of those bits, to a maximum addition of sg_max elements.

The pages referred to by UBUF- and IOVEC-type iterators are extracted and pinned; BVEC-, KVEC-, FOLIOQ- and XARRAY-type are extracted but aren't pinned; DISCARD-type is not supported.

No end mark is placed on the scatterlist; that’s left to the caller.

extraction_flags can have ITER_ALLOW_P2PDMA set to request peer-to-peer DMA be allowed on the pages extracted.

If successful, sgtable->nents is updated to include the number of elements added and the number of bytes added is returned. sgtable->orig_nents is left unaltered.

The iov_iter_extract_mode() function should be used to query how cleanup should be performed.