Page Pool API
The page_pool allocator is optimized for recycling pages or page fragments used by skb packets and xdp frames.
Basic use involves replacing any alloc_pages() calls with page_pool_alloc(), which allocates memory with or without page splitting depending on the requested memory size.
If the driver knows that it always requires full pages, or that its allocations are always smaller than half a page, it can use one of the more specific API calls below (a short allocation sketch follows the list):
1. page_pool_alloc_pages(): allocate memory without page splitting when the driver knows that the memory it needs is always larger than half of the page allocated from the page pool. There is no cache line dirtying for 'struct page' when a page is recycled back to the page pool.
2. page_pool_alloc_frag(): allocate memory with page splitting when the driver knows that the memory it needs is always smaller than or equal to half of the page allocated from the page pool. Page splitting enables memory saving and thus avoids TLB/cache misses for data access, but there is also some cost to implement page splitting, mainly some cache line dirtying/bouncing for 'struct page' and atomic operations for page->pp_ref_count.
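As a minimal sketch of the generic path, the page-or-fragment decision can be left to the pool; rx_buf_len is a hypothetical driver-defined buffer size, and error handling is elided:

    unsigned int offset, size = rx_buf_len; /* rx_buf_len: driver-specific */
    struct page *page;

    /* page pool picks full-page vs fragment allocation based on size */
    page = page_pool_dev_alloc(pool, &offset, &size);
    if (!page)
            return -ENOMEM;
    /* usable buffer: size bytes starting at page_address(page) + offset */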
The API keeps track of in-flight pages. In order to let API users know when it is safe to free a page_pool object, API users must call page_pool_put_page() or page_pool_free_va() to release each page, or attach the page to a page_pool-aware object like an skb marked with skb_mark_for_recycle().
page_pool_put_page() may be called multiple times on the same page if a page is split into multiple fragments. For the last fragment, it will either recycle the page, or in case of page->_refcount > 1, it will release the DMA mapping and in-flight state accounting.
dma_sync_single_range_for_device() is only called for the last fragment when page_pool is created with the PP_FLAG_DMA_SYNC_DEV flag, so it depends on the last freed fragment to do the sync_for_device operation for all fragments in the same page when a page is split. The API user must set up pool->p.max_len and pool->p.offset correctly and ensure that page_pool_put_page() is called with dma_sync_size being -1 for the fragment API.
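A one-line sketch of that rule; in_napi is a hypothetical flag indicating the call is made from the pool's NAPI context:

    /* fragment API: pass -1 so the pool syncs pool->p.max_len bytes
     * when the last fragment of the page is freed, as described above */
    page_pool_put_page(pool, page, -1, in_napi);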
Architecture overview
                  +------------------+
                  |      Driver      |
                  +------------------+
                          ^
                          |
                          |
                          v
          +--------------------------------------------+
          |                request memory              |
          +--------------------------------------------+
              ^                                  ^
              |                                  |
              | Pool empty                       | Pool has entries
              |                                  |
              v                                  v
    +-----------------------+     +------------------------+
    | alloc (and map) pages |     |  get page from cache   |
    +-----------------------+     +------------------------+
                                    ^                    ^
                                    |                    |
                                    | cache available    | No entries, refill
                                    |                    | from ptr-ring
                                    |                    |
                                    v                    v
                          +-----------------+     +------------------+
                          |   Fast cache    |     |  ptr-ring cache  |
                          +-----------------+     +------------------+
Monitoring
Information about page pools on the system can be accessed via the netdev genetlink family (see Documentation/netlink/specs/netdev.yaml).
API interface
The number of pools created must match the number of hardware queues unless hardware restrictions make that impossible. Anything else would defeat the purpose of page pool, which is to allocate pages fast from cache without locking. This lockless guarantee naturally comes from running under a NAPI softirq. The protection doesn't strictly have to be NAPI; any guarantee that allocating a page will cause no race conditions is enough.
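For illustration, a sketch of one pool per hardware receive queue; priv, num_rx_queues and rxq are hypothetical driver names:

    struct page_pool_params pp_params = { 0 };
    int i;

    /* ... fill pp_params as in the Registration example below ... */
    for (i = 0; i < priv->num_rx_queues; i++) {
            pp_params.napi = &priv->rxq[i].napi; /* one NAPI per pool */
            priv->rxq[i].page_pool = page_pool_create(&pp_params);
            if (IS_ERR(priv->rxq[i].page_pool))
                    return PTR_ERR(priv->rxq[i].page_pool);
    }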
- struct page_pool *page_pool_create(const struct page_pool_params *params)
create a page pool
Parameters
const struct page_pool_params *params: parameters, see struct page_pool_params
- struct page_pool_params
page pool parameters
Definition:
struct page_pool_params {
    struct page_pool_params_fast fast;
    unsigned int order;
    unsigned int pool_size;
    int nid;
    struct device *dev;
    struct napi_struct *napi;
    enum dma_data_direction dma_dir;
    unsigned int max_len;
    unsigned int offset;
    struct page_pool_params_slow slow;
    STRUCT_GROUP(
        struct net_device *netdev;
        unsigned int queue_idx;
        unsigned int flags;
    );
};

Members
fast: params accessed frequently on hotpath
order: 2^order pages on allocation
pool_size: size of the ptr_ring
nid: NUMA node id to allocate pages from
dev: device, for DMA pre-mapping purposes
napi: NAPI which is the sole consumer of pages, otherwise NULL
dma_dir: DMA mapping direction
max_len: max DMA sync memory size for PP_FLAG_DMA_SYNC_DEV
offset: DMA sync address offset for PP_FLAG_DMA_SYNC_DEV
slow: params with slowpath access only (initialization and Netlink)
netdev: netdev this pool will serve (leave as NULL if none or multiple)
queue_idx: queue idx this page_pool is being created for.
flags: PP_FLAG_DMA_MAP, PP_FLAG_DMA_SYNC_DEV, PP_FLAG_SYSTEM_POOL, PP_FLAG_ALLOW_UNREADABLE_NETMEM.
- struct page *page_pool_dev_alloc_pages(struct page_pool *pool)
allocate a page.
Parameters
struct page_pool *pool: pool from which to allocate
Description
Get a page from the page allocator or page_pool caches.
- struct page *page_pool_dev_alloc_frag(struct page_pool *pool, unsigned int *offset, unsigned int size)
allocate a page fragment.
Parameters
struct page_pool *pool: pool from which to allocate
unsigned int *offset: offset to the allocated page
unsigned int size: requested size
Description
Get a page fragment from the page allocator or page_pool caches.
Return
allocated page fragment, otherwise return NULL.
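A brief fragment-allocation sketch; rx_buf_len and buf_addr are hypothetical driver names:

    unsigned int offset;
    struct page *page;
    void *buf_addr;

    page = page_pool_dev_alloc_frag(pool, &offset, rx_buf_len);
    if (!page)
            return -ENOMEM;
    /* the fragment starts offset bytes into the page */
    buf_addr = page_address(page) + offset;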
- struct page *page_pool_dev_alloc(struct page_pool *pool, unsigned int *offset, unsigned int *size)
allocate a page or a page fragment.
Parameters
struct page_pool *pool: pool from which to allocate
unsigned int *offset: offset to the allocated page
unsigned int *size: in as the requested size, out as the allocated size
Description
Get a page or a page fragment from the page allocator or page_pool caches, depending on the requested size, in order to allocate memory with the least memory utilization and performance penalty.
Return
allocated page or page fragment, otherwise return NULL.
- void *page_pool_dev_alloc_va(struct page_pool *pool, unsigned int *size)
allocate a page or a page fragment and return its va.
Parameters
struct page_pool *pool: pool from which to allocate
unsigned int *size: in as the requested size, out as the allocated size
Description
This is just a thin wrapper around the page_pool_alloc() API, and it returns the va of the allocated page or page fragment.
Return
the va for the allocated page or page fragment, otherwise return NULL.
- enum dma_data_direction page_pool_get_dma_dir(const struct page_pool *pool)
Retrieve the stored DMA direction.
Parameters
const struct page_pool *pool: pool from which page was allocated
Description
Get the stored dma direction. A driver might decide to store this locally and avoid the extra cache line from page_pool to determine the direction.
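For illustration, a driver might cache the direction once at setup and reuse it when syncing received data for the CPU; ring, priv, rx_offset and pkt_len are hypothetical driver names:

    /* at setup time */
    ring->dma_dir = page_pool_get_dma_dir(ring->page_pool);

    /* in the rx path: sync the packet area for CPU access */
    dma_addr_t dma = page_pool_get_dma_addr(page);

    dma_sync_single_for_cpu(priv->dev, dma + ring->rx_offset, pkt_len,
                            ring->dma_dir);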
- void page_pool_put_page(struct page_pool *pool, struct page *page, unsigned int dma_sync_size, bool allow_direct)
release a reference to a page pool page
Parameters
struct page_pool *pool: pool from which page was allocated
struct page *page: page to release a reference on
unsigned int dma_sync_size: how much of the page may have been touched by the device
bool allow_direct: released by the consumer, allow lockless caching
Description
The outcome of this depends on the page refcnt. If the driver bumps the refcnt > 1 this will unmap the page. If the page refcnt is 1 the allocator owns the page and will try to recycle it in one of the pool caches. If PP_FLAG_DMA_SYNC_DEV is set, the page will be synced for_device using dma_sync_single_range_for_device().
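A short sketch; pkt_len and in_napi are hypothetical driver values:

    /* only pkt_len bytes were written by the device, so only that much
     * needs the device sync; -1 would fall back to pool->p.max_len */
    page_pool_put_page(page_pool, page, pkt_len, in_napi);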
- void page_pool_put_full_page(struct page_pool *pool, struct page *page, bool allow_direct)
release a reference on a page pool page
Parameters
struct page_pool *pool: pool from which page was allocated
struct page *page: page to release a reference on
bool allow_direct: released by the consumer, allow lockless caching
Description
Similar to page_pool_put_page(), but will DMA sync the entire memory area as configured in page_pool_params.max_len.
- void page_pool_recycle_direct(struct page_pool *pool, struct page *page)
release a reference on a page pool page
Parameters
struct page_pool *pool: pool from which page was allocated
struct page *page: page to release a reference on
Description
Similar to page_pool_put_full_page() but the caller must guarantee a safe context (e.g. NAPI), since it will recycle the page directly into the pool fast cache.
- void page_pool_free_va(struct page_pool *pool, void *va, bool allow_direct)
free a va into the page_pool
Parameters
struct page_pool *pool: pool from which va was allocated
void *va: va to be freed
bool allow_direct: freed by the consumer, allow lockless caching
Description
Free a va allocated from page_pool_alloc_va().
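A sketch pairing the va-based alloc and free; the size is an arbitrary example value:

    unsigned int size = 1024;
    void *va;

    va = page_pool_dev_alloc_va(pool, &size);
    if (!va)
            return -ENOMEM;
    /* ... use the buffer ... */
    page_pool_free_va(pool, va, true /* called from NAPI context */);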
- dma_addr_t page_pool_get_dma_addr(const struct page *page)

Retrieve the stored DMA address.

Parameters
const struct page *page: page allocated from a page pool
Description
Fetch the DMA address of the page. The page pool to which the page belongs must have been created with PP_FLAG_DMA_MAP.
- bool page_pool_get_stats(const struct page_pool *pool, struct page_pool_stats *stats)
fetch page pool stats
Parameters
const struct page_pool *pool: pool from which page was allocated
struct page_pool_stats *stats: struct page_pool_stats to fill in
Description
Retrieve statistics about the page_pool. This API is only available if the kernel has been configured with CONFIG_PAGE_POOL_STATS=y. A pointer to a caller-allocated struct page_pool_stats structure is passed to this API which is filled in. The caller can then report those stats to the user (perhaps via ethtool, debugfs, etc.).
DMA sync
The driver is always responsible for syncing the pages for the CPU. Drivers may choose to take care of syncing for the device as well, or set the PP_FLAG_DMA_SYNC_DEV flag to request that pages allocated from the page pool are already synced for the device.
If PP_FLAG_DMA_SYNC_DEV is set, the driver must inform the core what portion of the buffer has to be synced. This allows the core to avoid syncing the entire page when the driver knows that the device only accessed a portion of the page.
Most drivers will reserve headroom in front of the frame. This part of the buffer is not touched by the device, so to avoid syncing it drivers can set the offset field in struct page_pool_params appropriately.
For pages recycled on the XDP xmit and skb paths the page pool will use the max_len member of struct page_pool_params to decide how much of the page needs to be synced (starting at offset). When directly freeing pages in the driver (page_pool_put_page()) the dma_sync_size argument specifies how much of the buffer needs to be synced.
If in doubt set offset to 0, max_len to PAGE_SIZE and pass -1 as dma_sync_size. That combination of arguments is always correct.
Note that the syncing parameters are for the entire page. This is important to remember when using fragments (PP_FLAG_PAGE_FRAG), where allocated buffers may be smaller than a full page. Unless the driver author really understands page pool internals it's recommended to always use offset = 0, max_len = PAGE_SIZE with fragmented page pools.
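For illustration, a typical non-fragmented rx configuration, assuming the buffer layout reserves XDP_PACKET_HEADROOM in front of the frame:

    pp_params.flags |= PP_FLAG_DMA_SYNC_DEV;
    /* the device never writes into the headroom, so skip syncing it */
    pp_params.offset = XDP_PACKET_HEADROOM;
    pp_params.max_len = PAGE_SIZE - XDP_PACKET_HEADROOM;

With fragmented pools, per the note above, offset = 0 and max_len = PAGE_SIZE remain the safe defaults.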
Stats API and structures
If the kernel is configured with CONFIG_PAGE_POOL_STATS=y, the API page_pool_get_stats() and structures described below are available. It takes a pointer to a struct page_pool and a pointer to a struct page_pool_stats allocated by the caller.
Older drivers expose page pool statistics via ethtool or debugfs. The same statistics are accessible via the netlink netdev family in a driver-independent fashion.
- struct page_pool_alloc_stats
allocation statistics
Definition:
struct page_pool_alloc_stats {
    u64 fast;
    u64 slow;
    u64 slow_high_order;
    u64 empty;
    u64 refill;
    u64 waive;
};

Members
fast: successful fast path allocations
slow: slow path order-0 allocations
slow_high_order: slow path high order allocations
empty: ptr ring is empty, so a slow path allocation was forced
refill: an allocation which triggered a refill of the cache
waive: pages obtained from the ptr ring that cannot be added to the cache due to a NUMA mismatch
- struct page_pool_recycle_stats
recycling (freeing) statistics
Definition:
struct page_pool_recycle_stats {
    u64 cached;
    u64 cache_full;
    u64 ring;
    u64 ring_full;
    u64 released_refcnt;
};

Members
cached: recycling placed page in the page pool cache
cache_full: page pool cache was full
ring: page placed into the ptr ring
ring_full: page released from page pool because the ptr ring was full
released_refcnt: page released (and not recycled) because refcnt > 1
- struct page_pool_stats
combined page pool use statistics
Definition:
struct page_pool_stats {
    struct page_pool_alloc_stats alloc_stats;
    struct page_pool_recycle_stats recycle_stats;
};

Members
alloc_stats: see struct page_pool_alloc_stats
recycle_stats: see struct page_pool_recycle_stats
Description
Wrapper struct for combining page pool stats with different storage requirements.
Coding examples

Registration
/* Page pool registration */
struct page_pool_params pp_params = { 0 };
struct xdp_rxq_info xdp_rxq;
int err;

pp_params.order = 0;
/* internal DMA mapping in page_pool */
pp_params.flags = PP_FLAG_DMA_MAP;
pp_params.pool_size = DESC_NUM;
pp_params.nid = NUMA_NO_NODE;
pp_params.dev = priv->dev;
pp_params.napi = napi; /* only if locking is tied to NAPI */
pp_params.dma_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
page_pool = page_pool_create(&pp_params);

err = xdp_rxq_info_reg(&xdp_rxq, ndev, 0);
if (err)
        goto err_out;

err = xdp_rxq_info_reg_mem_model(&xdp_rxq, MEM_TYPE_PAGE_POOL, page_pool);
if (err)
        goto err_out;
NAPI poller
/* NAPI Rx poller */
enum dma_data_direction dma_dir;

dma_dir = page_pool_get_dma_dir(dring->page_pool);
while (done < budget) {
        if (some_error) {
                /* descriptor error: return the page to the pool */
                page_pool_recycle_direct(page_pool, page);
                continue;
        }
        if (packet_is_xdp) {
                /* act: the XDP program verdict (placeholder) */
                if (act == XDP_DROP)
                        page_pool_recycle_direct(page_pool, page);
        } else if (packet_is_skb) {
                /* let the stack recycle the page with the skb */
                skb_mark_for_recycle(skb);
                new_page = page_pool_dev_alloc_pages(page_pool);
        }
}
Stats
#ifdef CONFIG_PAGE_POOL_STATS
/* retrieve stats */
struct page_pool_stats stats = { 0 };

if (page_pool_get_stats(page_pool, &stats)) {
        /* perhaps the driver reports statistics with ethtool */
        ethtool_print_allocation_stats(&stats.alloc_stats);
        ethtool_print_recycle_stats(&stats.recycle_stats);
}
#endif
Driver unload
/* Driver unload */
page_pool_put_full_page(page_pool, page, false);
xdp_rxq_info_unreg(&xdp_rxq);