User Interface for Resource Control feature (resctrl)¶
- Copyright:
© 2016 Intel Corporation
- Authors:
Fenghua Yu <fenghua.yu@intel.com>
Tony Luck <tony.luck@intel.com>
Vikas Shivappa <vikas.shivappa@intel.com>
Intel refers to this feature as Intel Resource Director Technology (Intel(R) RDT). AMD refers to this feature as AMD Platform Quality of Service (AMD QoS).
This feature is enabled by the CONFIG_X86_CPU_RESCTRL kernel configuration option and the x86 /proc/cpuinfo flag bits:
RDT (Resource Director Technology) Allocation | “rdt_a” |
CAT (Cache Allocation Technology) | “cat_l3”, “cat_l2” |
CDP (Code and Data Prioritization) | “cdp_l3”, “cdp_l2” |
CQM (Cache QoS Monitoring) | “cqm_llc”, “cqm_occup_llc” |
MBM (Memory Bandwidth Monitoring) | “cqm_mbm_total”, “cqm_mbm_local” |
MBA (Memory Bandwidth Allocation) | “mba” |
SMBA (Slow Memory Bandwidth Allocation) | “” |
BMEC (Bandwidth Monitoring Event Configuration) | “” |
ABMC (Assignable Bandwidth Monitoring Counters) | “” |
SDCIAE (Smart Data Cache Injection Allocation Enforcement) | “” |
Historically, new features were made visible by default in /proc/cpuinfo. This resulted in the feature flags becoming hard to parse by humans. Adding a new flag to /proc/cpuinfo should be avoided if user space can obtain information about the feature from resctrl’s info directory.
To use the feature mount the file system:
# mount -t resctrl resctrl [-o cdp[,cdpl2][,mba_MBps][,debug]] /sys/fs/resctrl
mount options are:
- “cdp”:
Enable code/data prioritization in L3 cache allocations.
- “cdpl2”:
Enable code/data prioritization in L2 cache allocations.
- “mba_MBps”:
Enable the MBA Software Controller (mba_sc) to specify MBA bandwidth in MiBps
- “debug”:
Make debug files accessible. Available debug files are annotated with “Available only with debug option”.
L2 and L3 CDP are controlled separately.
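For example, a hypothetical invocation that enables L3 code/data prioritization and the MBA software controller at mount time (combine only the options your system actually supports) could look like:

# mount -t resctrl -o cdp,mba_MBps resctrl /sys/fs/resctrl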
RDT features are orthogonal. A particular system may support only monitoring, only control, or both monitoring and control. Cache pseudo-locking is a unique way of using cache control to “pin” or “lock” data in the cache. Details can be found in “Cache Pseudo-Locking”.
The mount succeeds if either allocation or monitoring is present, but only those files and directories supported by the system will be created. For more details on the behavior of the interface during monitoring and allocation, see the “Resource alloc and monitor groups” section.
Info directory¶
The ‘info’ directory contains information about the enabled resources. Each resource has its own subdirectory. The subdirectory names reflect the resource names.
Most of the files in the resource’s subdirectory are read-only, and describe properties of the resource. Resources that support global configuration options also include writable files that can be used to modify those settings.
Each subdirectory contains the following files with respect to allocation:
Cache resource (L3/L2) subdirectory contains the following files related to allocation:
- “num_closids”:
The number of CLOSIDs which are valid for this resource. The kernel uses the smallest number of CLOSIDs of all enabled resources as limit.
- “cbm_mask”:
The bitmask which is valid for this resource. This mask is equivalent to 100%.
- “min_cbm_bits”:
The minimum number of consecutive bits which must be set when writing a mask.
- “shareable_bits”:
Bitmask of shareable resource with other executing entities (e.g. I/O). Applies to all instances of this resource. User can use this when setting up exclusive cache partitions. Note that some platforms support devices that have their own settings for cache use which can over-ride these bits.
When “io_alloc” is enabled, a portion of each cache instance can be configured for shared use between hardware and software. “bit_usage” should be used to see which portions of each cache instance are configured for hardware use via the “io_alloc” feature, because every cache instance can have its “io_alloc” bitmask configured independently via “io_alloc_cbm”.
- “bit_usage”:
Annotated capacity bitmasks showing how all instances of the resource are used. The legend is:
- “0”:
Corresponding region is unused. When the system’s resources have been allocated and a “0” is found in “bit_usage” it is a sign that resources are wasted.
- “H”:
Corresponding region is used by hardware only but available for software use. If a resource has bits set in “shareable_bits” or “io_alloc_cbm” but not all of these bits appear in the resource groups’ schemata, then the bits appearing in “shareable_bits” or “io_alloc_cbm” but in no resource group will be marked as “H”.
- “X”:
Corresponding region is available for sharing and used by hardware and software. These are the bits that appear in “shareable_bits” or “io_alloc_cbm” as well as a resource group’s allocation.
- “S”:
Corresponding region is used by software and available for sharing.
- “E”:
Corresponding region is used exclusively by one resource group. No sharing allowed.
- “P”:
Corresponding region is pseudo-locked. No sharing allowed.
- “sparse_masks”:
Indicates if non-contiguous 1s value in CBM is supported.
- “0”:
Only contiguous 1s value in CBM is supported.
- “1”:
Non-contiguous 1s value in CBM is supported.
- “io_alloc”:
“io_alloc” enables system software to configure the portion of the cache allocated for I/O traffic. File may only exist if the system supports this feature on some of its cache resources.
- “disabled”:
Resource supports “io_alloc” but the feature is disabled. Portions of cache used for allocation of I/O traffic cannot be configured.
- “enabled”:
Portions of cache used for allocation of I/O traffic can be configured using “io_alloc_cbm”.
- “not supported”:
Support not available for this resource.
The feature can be modified by writing to the interface, for example:
To enable:
# echo 1 > /sys/fs/resctrl/info/L3/io_alloc
To disable:
# echo 0 > /sys/fs/resctrl/info/L3/io_alloc
The underlying implementation may reduce resources available to general (CPU) cache allocation. See architecture specific notes below. Depending on usage requirements the feature can be enabled or disabled.
On AMD systems, the io_alloc feature is supported by the L3 Smart Data Cache Injection Allocation Enforcement (SDCIAE). The CLOSID for io_alloc is the highest CLOSID supported by the resource. When io_alloc is enabled, the highest CLOSID is dedicated to io_alloc and no longer available for general (CPU) cache allocation. When CDP is enabled, io_alloc routes I/O traffic using the highest CLOSID allocated for the instruction cache (CDP_CODE), making this CLOSID no longer available for general (CPU) cache allocation for both the CDP_CODE and CDP_DATA resources.
- “io_alloc_cbm”:
Capacity bitmasks that describe the portions of cache instances to which I/O traffic from supported I/O devices are routed when “io_alloc” is enabled.
CBMs are displayed in the following format:
<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
Example:
# cat /sys/fs/resctrl/info/L3/io_alloc_cbm
0=ffff;1=ffff
CBMs can be configured by writing to the interface.
Example:
# echo 1=ff > /sys/fs/resctrl/info/L3/io_alloc_cbm
# cat /sys/fs/resctrl/info/L3/io_alloc_cbm
0=ffff;1=00ff
# echo "0=ff;1=f" > /sys/fs/resctrl/info/L3/io_alloc_cbm
# cat /sys/fs/resctrl/info/L3/io_alloc_cbm
0=00ff;1=000f
When CDP is enabled, “io_alloc_cbm” associated with the CDP_DATA and CDP_CODE resources may reflect the same values. For example, values read from and written to /sys/fs/resctrl/info/L3DATA/io_alloc_cbm may be reflected by /sys/fs/resctrl/info/L3CODE/io_alloc_cbm and vice versa.
Memory bandwidth (MB) subdirectory contains the following files with respect to allocation:
- “min_bandwidth”:
The minimum memory bandwidth percentage which user can request.
- “bandwidth_gran”:
The granularity in which the memory bandwidth percentage is allocated. The allocated b/w percentage is rounded off to the next control step available on the hardware. The available bandwidth control steps are: min_bandwidth + N * bandwidth_gran.
- “delay_linear”:
Indicates if the delay scale is linear or non-linear. This field is purely informational.
- “thread_throttle_mode”:
Indicator on Intel systems of how tasks running on threads of a physical core are throttled in cases where they request different memory bandwidth percentages:
- “max”:
the smallest percentage is applied to all threads
- “per-thread”:
bandwidth percentages are directly applied to the threads running on the core
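As an illustration, the configured mode can be read directly (the value shown below is only an example; it depends on the platform):

# cat /sys/fs/resctrl/info/MB/thread_throttle_mode
max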
If RDT monitoring is available there will be an “L3_MON” directory with the following files:
- “num_rmids”:
The number of RMIDs available. This is the upper bound for how many “CTRL_MON” + “MON” groups can be created.
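For example (the count is hardware dependent; the value shown is purely illustrative):

# cat /sys/fs/resctrl/info/L3_MON/num_rmids
255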
- “mon_features”:
Lists the monitoring events if monitoring is enabled for the resource. Example:
# cat /sys/fs/resctrl/info/L3_MON/mon_features
llc_occupancy
mbm_total_bytes
mbm_local_bytes
If the system supports Bandwidth Monitoring Event Configuration (BMEC), then the bandwidth events will be configurable. The output will be:
# cat /sys/fs/resctrl/info/L3_MON/mon_features
llc_occupancy
mbm_total_bytes
mbm_total_bytes_config
mbm_local_bytes
mbm_local_bytes_config
- “mbm_total_bytes_config”, “mbm_local_bytes_config”:
Read/write files containing the configuration for the mbm_total_bytes and mbm_local_bytes events, respectively, when the Bandwidth Monitoring Event Configuration (BMEC) feature is supported. The event configuration settings are domain specific and affect all the CPUs in the domain. When either event configuration is changed, the bandwidth counters for all RMIDs of both events (mbm_total_bytes as well as mbm_local_bytes) are cleared for that domain. The next read for every RMID will report “Unavailable” and subsequent reads will report the valid value.
Following are the types of events supported:
Bits | Description |
6 | Dirty Victims from the QOS domain to all types of memory |
5 | Reads to slow memory in the non-local NUMA domain |
4 | Reads to slow memory in the local NUMA domain |
3 | Non-temporal writes to non-local NUMA domain |
2 | Non-temporal writes to local NUMA domain |
1 | Reads to memory in the non-local NUMA domain |
0 | Reads to memory in the local NUMA domain |
By default, the mbm_total_bytes configuration is set to 0x7f to count all the event types and the mbm_local_bytes configuration is set to 0x15 to count all the local memory events.
Examples:
To view the current configuration:
# cat /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
0=0x7f;1=0x7f;2=0x7f;3=0x7f
# cat /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
0=0x15;1=0x15;3=0x15;4=0x15
To change the mbm_total_bytes to count only reads on domain 0, the bits 0, 1, 4 and 5 need to be set, which is 110011b in binary (in hexadecimal 0x33):
# echo "0=0x33" > /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
# cat /sys/fs/resctrl/info/L3_MON/mbm_total_bytes_config
0=0x33;1=0x7f;2=0x7f;3=0x7f
To change the mbm_local_bytes to count all the slow memory reads on domains 0 and 1, the bits 4 and 5 need to be set, which is 110000b in binary (in hexadecimal 0x30):
# echo "0=0x30;1=0x30" > /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
# cat /sys/fs/resctrl/info/L3_MON/mbm_local_bytes_config
0=0x30;1=0x30;3=0x15;4=0x15
- “mbm_assign_mode”:
The supported counter assignment modes. The enclosed brackets indicate which mode is enabled. The MBM events associated with counters may reset when “mbm_assign_mode” is changed.
# cat /sys/fs/resctrl/info/L3_MON/mbm_assign_mode
[mbm_event]
default
“mbm_event”:
mbm_event mode allows users to assign a hardware counter to an RMID, event pair and monitor the bandwidth usage as long as it is assigned. The hardware continues to track the assigned counter until it is explicitly unassigned by the user. Each event within a resctrl group can be assigned independently.
In this mode, a monitoring event can only accumulate data while it is backed by a hardware counter. Use “mbm_L3_assignments” found in each CTRL_MON and MON group to specify which of the events should have a counter assigned. The number of counters available is described in the “num_mbm_cntrs” file. Changing the mode may cause all counters on the resource to reset.
Moving to mbm_event counter assignment mode requires users to assign the counters to the events. Otherwise, the MBM event counters will return ‘Unassigned’ when read.
The mode is beneficial for AMD platforms that support more CTRL_MON and MON groups than available hardware counters. By default, this feature is enabled on AMD platforms with the ABMC (Assignable Bandwidth Monitoring Counters) capability, ensuring counters remain assigned even when the corresponding RMID is not actively used by any processor.
“default”:
In default mode, resctrl assumes there is a hardware counter for each event within every CTRL_MON and MON group. Hardware may re-allocate counters between reads, which can result in misleading values or “Unavailable” being displayed if no counter backs the event. On AMD platforms it is therefore recommended to use the mbm_event mode, if supported, to prevent MBM events from resetting between reads.
To enable “mbm_event” counter assignment mode:
# echo "mbm_event" > /sys/fs/resctrl/info/L3_MON/mbm_assign_mode
To enable “default” monitoring mode:
# echo "default" > /sys/fs/resctrl/info/L3_MON/mbm_assign_mode
- “num_mbm_cntrs”:
The maximum number of counters (total of available and assigned counters) in each domain when the system supports mbm_event mode.
For example, on a system with a maximum of 32 memory bandwidth monitoring counters in each of its L3 domains:
# cat /sys/fs/resctrl/info/L3_MON/num_mbm_cntrs
0=32;1=32
- “available_mbm_cntrs”:
The number of counters available for assignment in each domain when mbm_event mode is enabled on the system.
For example, on a system with 30 available [hardware] assignable counters in each of its L3 domains:
# cat /sys/fs/resctrl/info/L3_MON/available_mbm_cntrs
0=30;1=30
- “event_configs”:
Directory that exists when “mbm_event” counter assignment mode is supported. Contains a sub-directory for each MBM event that can be assigned to a counter.
Two MBM events are supported by default: mbm_local_bytes and mbm_total_bytes. Each MBM event’s sub-directory contains a file named “event_filter” that is used to view and modify which memory transactions the MBM event is configured with. The file is accessible only when “mbm_event” counter assignment mode is enabled.
List of memory transaction types supported:
Name | Description |
dirty_victim_writes_all | Dirty Victims from the QOS domain to all types of memory |
remote_reads_slow_memory | Reads to slow memory in the non-local NUMA domain |
local_reads_slow_memory | Reads to slow memory in the local NUMA domain |
remote_non_temporal_writes | Non-temporal writes to non-local NUMA domain |
local_non_temporal_writes | Non-temporal writes to local NUMA domain |
remote_reads | Reads to memory in the non-local NUMA domain |
local_reads | Reads to memory in the local NUMA domain |
For example:
# cat /sys/fs/resctrl/info/L3_MON/event_configs/mbm_total_bytes/event_filter
local_reads,remote_reads,local_non_temporal_writes,remote_non_temporal_writes,local_reads_slow_memory,remote_reads_slow_memory,dirty_victim_writes_all
# cat /sys/fs/resctrl/info/L3_MON/event_configs/mbm_local_bytes/event_filter
local_reads,local_non_temporal_writes,local_reads_slow_memory
Modify the event configuration by writing to the “event_filter” file within the “event_configs” directory. The read/write “event_filter” file contains the configuration of the event that reflects which memory transactions are counted by it.
For example:
# echo "local_reads, local_non_temporal_writes" > /sys/fs/resctrl/info/L3_MON/event_configs/mbm_total_bytes/event_filter
# cat /sys/fs/resctrl/info/L3_MON/event_configs/mbm_total_bytes/event_filter
local_reads,local_non_temporal_writes
- “mbm_assign_on_mkdir”:
Exists when “mbm_event” counter assignment mode is supported. Accessible only when “mbm_event” counter assignment mode is enabled.
Determines if a counter will automatically be assigned to an RMID, MBM event pair when its associated monitor group is created via mkdir. Enabled by default on boot, also when switched from “default” mode to “mbm_event” counter assignment mode. Users can disable this capability by writing to the interface.
- “0”:
Auto assignment is disabled.
- “1”:
Auto assignment is enabled.
Example:
# echo 0 > /sys/fs/resctrl/info/L3_MON/mbm_assign_on_mkdir
# cat /sys/fs/resctrl/info/L3_MON/mbm_assign_on_mkdir
0
- “max_threshold_occupancy”:
Read/write file provides the largest value (in bytes) at which a previously used LLC_occupancy counter can be considered for re-use.
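For example (the values shown are only illustrative; the appropriate threshold is platform and workload dependent), the threshold can be read and raised:

# cat /sys/fs/resctrl/info/L3_MON/max_threshold_occupancy
65536
# echo 131072 > /sys/fs/resctrl/info/L3_MON/max_threshold_occupancy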
Finally, in the top level of the “info” directory there is a file named “last_cmd_status”. This is reset with every “command” issued via the file system (making new directories or writing to any of the control files). If the command was successful, it will read as “ok”. If the command failed, it will provide more information than can be conveyed in the error returns from file operations. E.g.
# echo L3:0=f7 > schemata
bash: echo: write error: Invalid argument
# cat info/last_cmd_status
mask f7 has non-consecutive 1-bits
Resource alloc and monitor groups¶
Resource groups are represented as directories in the resctrl filesystem. The default group is the root directory which, immediately after mounting, owns all the tasks and cpus in the system and can make full use of all resources.
On a system with RDT control features additional directories can be created in the root directory that specify different amounts of each resource (see “schemata” below). The root and these additional top level directories are referred to as “CTRL_MON” groups below.
On a system with RDT monitoring the root directory and other top level directories contain a directory named “mon_groups” in which additional directories can be created to monitor subsets of tasks in the CTRL_MON group that is their ancestor. These are called “MON” groups in the rest of this document.
Removing a directory will move all tasks and cpus owned by the group it represents to the parent. Removing one of the created CTRL_MON groups will automatically remove all MON groups below it.
Moving MON group directories to a new parent CTRL_MON group is supported for the purpose of changing the resource allocations of a MON group without impacting its monitoring data or assigned tasks. This operation is not allowed for MON groups which monitor CPUs. No other move operation is currently allowed other than simply renaming a CTRL_MON or MON group.
All groups contain the following files:
- “tasks”:
Reading this file shows the list of all tasks that belong to this group. Writing a task id to the file will add a task to the group. Multiple tasks can be added by separating the task ids with commas. Tasks will be assigned sequentially. Multiple failures are not supported. A single failure encountered while attempting to assign a task will cause the operation to abort; tasks added before the failure will remain in the group. Failures will be logged to /sys/fs/resctrl/info/last_cmd_status.
If the group is a CTRL_MON group the task is removed from whichever previous CTRL_MON group owned the task and also from any MON group that owned the task. If the group is a MON group, then the task must already belong to the CTRL_MON parent of this group. The task is removed from any previous MON group.
- “cpus”:
Reading this file shows a bitmask of the logical CPUs owned by this group. Writing a mask to this file will add and remove CPUs to/from this group. As with the tasks file a hierarchy is maintained where MON groups may only include CPUs owned by the parent CTRL_MON group.
When the resource group is in pseudo-locked mode this file will only be readable, reflecting the CPUs associated with the pseudo-locked region.
- “cpus_list”:
Just like “cpus”, only using ranges of CPUs instead of bitmasks.
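For example, for a hypothetical group “p0” that owns CPUs 4-7 the two files present the same information in different formats (the zero padding of the bitmask depends on the number of CPUs in the system):

# cat p0/cpus
f0
# cat p0/cpus_list
4-7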
When control is enabled all CTRL_MON groups will also contain:
- “schemata”:
A list of all the resources available to this group. Each resource has its own line and format - see below for details.
- “size”:
Mirrors the display of the “schemata” file to display the size in bytes of each allocation instead of the bits representing the allocation.
- “mode”:
The “mode” of the resource group dictates the sharing of its allocations. A “shareable” resource group allows sharing of its allocations while an “exclusive” resource group does not. A cache pseudo-locked region is created by first writing “pseudo-locksetup” to the “mode” file before writing the cache pseudo-locked region’s schemata to the resource group’s “schemata” file. On successful pseudo-locked region creation the mode will automatically change to “pseudo-locked”.
- “ctrl_hw_id”:
Available only with debug option. The identifier used by hardware for the control group. On x86 this is the CLOSID.
When monitoring is enabled all MON groups will also contain:
- “mon_data”:
This contains a set of files organized by L3 domain and by RDT event. E.g. on a system with two L3 domains there will be subdirectories “mon_L3_00” and “mon_L3_01”. Each of these directories has one file per event (e.g. “llc_occupancy”, “mbm_total_bytes”, and “mbm_local_bytes”). In a MON group these files provide a read out of the current value of the event for all tasks in the group. In CTRL_MON groups these files provide the sum for all tasks in the CTRL_MON group and all tasks in MON groups. Please see example section for more details on usage. On systems with Sub-NUMA Cluster (SNC) enabled there are extra directories for each node (located within the “mon_L3_XX” directory for the L3 cache they occupy). These are named “mon_sub_L3_YY” where “YY” is the node number.
When the ‘mbm_event’ counter assignment mode is enabled, reading an MBM event of a MON group returns ‘Unassigned’ if no hardware counter is assigned to it. For CTRL_MON groups, ‘Unassigned’ is returned if the MBM event does not have an assigned counter in the CTRL_MON group nor in any of its associated MON groups.
- “mon_hw_id”:
Available only with debug option. The identifier used by hardware for the monitor group. On x86 this is the RMID.
When monitoring is enabled all MON groups may also contain:
- “mbm_L3_assignments”:
Exists when “mbm_event” counter assignment mode is supported and lists the counter assignment states of the group.
The assignment list is displayed in the following format:
<Event>:<Domain ID>=<Assignment state>;<Domain ID>=<Assignment state>
- Event: A valid MBM event in the /sys/fs/resctrl/info/L3_MON/event_configs directory.
- Domain ID: A valid domain ID. When writing, ‘*’ applies the changes to all the domains.
Assignment states:
_ : No counter assigned.
e : Counter assigned exclusively.
Example:
To display the counter assignment states for the default group.
# cd /sys/fs/resctrl
# cat /sys/fs/resctrl/mbm_L3_assignments
mbm_total_bytes:0=e;1=e
mbm_local_bytes:0=e;1=e
Assignments can be modified by writing to the interface.
Examples:
To unassign the counter associated with the mbm_total_bytes event on domain 0:
# echo "mbm_total_bytes:0=_" > /sys/fs/resctrl/mbm_L3_assignments# cat /sys/fs/resctrl/mbm_L3_assignments mbm_total_bytes:0=_;1=e mbm_local_bytes:0=e;1=e
To unassign the counter associated with the mbm_total_bytes event on all the domains:
# echo "mbm_total_bytes:*=_" > /sys/fs/resctrl/mbm_L3_assignments# cat /sys/fs/resctrl/mbm_L3_assignments mbm_total_bytes:0=_;1=_ mbm_local_bytes:0=e;1=e
To assign a counter associated with the mbm_total_bytes event on all domains inexclusive mode:
# echo "mbm_total_bytes:*=e" > /sys/fs/resctrl/mbm_L3_assignments# cat /sys/fs/resctrl/mbm_L3_assignments mbm_total_bytes:0=e;1=e mbm_local_bytes:0=e;1=e
When the “mba_MBps” mount option is used all CTRL_MON groups will also contain:
- “mba_MBps_event”:
Reading this file shows which memory bandwidth event is used as input to the software feedback loop that keeps memory bandwidth below the value specified in the schemata file. Writing the name of one of the supported memory bandwidth events found in /sys/fs/resctrl/info/L3_MON/mon_features changes the input event.
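A hypothetical exchange with a CTRL_MON group “p0” (the event shown as the current setting is only illustrative) might look like:

# cat /sys/fs/resctrl/p0/mba_MBps_event
mbm_local_bytes
# echo mbm_total_bytes > /sys/fs/resctrl/p0/mba_MBps_event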
Resource allocation rules¶
When a task is running the following rules define which resources are available to it:
If the task is a member of a non-default group, then the schemata for that group is used.
Else if the task belongs to the default group, but is running on a CPU that is assigned to some specific group, then the schemata for the CPU’s group is used.
Otherwise the schemata for the default group is used.
Resource monitoring rules¶
If a task is a member of a MON group, or non-default CTRL_MON group, then RDT events for the task will be reported in that group.
If a task is a member of the default CTRL_MON group, but is running on a CPU that is assigned to some specific group, then the RDT events for the task will be reported in that group.
Otherwise RDT events for the task will be reported in the root level “mon_data” group.
Notes on cache occupancy monitoring and control¶
When moving a task from one group to another you should remember that this only affects new cache allocations by the task. E.g. you may have a task in a monitor group showing 3 MB of cache occupancy. If you move the task to a new group and immediately check the occupancy of the old and new groups you will likely see that the old group is still showing 3 MB and the new group zero. When the task accesses locations still in cache from before the move, the h/w does not update any counters. On a busy system you will likely see the occupancy in the old group go down as cache lines are evicted and re-used while the occupancy in the new group rises as the task accesses memory and loads into the cache are counted based on membership in the new group.
The same applies to cache allocation control. Moving a task to a group with a smaller cache partition will not evict any cache lines. The process may continue to use them from the old partition.
Hardware uses a CLOSid (Class of service ID) and an RMID (Resource monitoring ID) to identify a control group and a monitoring group respectively. Each of the resource groups are mapped to these IDs based on the kind of group. The number of CLOSids and RMIDs are limited by the hardware and hence the creation of a “CTRL_MON” directory may fail if we run out of either CLOSID or RMID and creation of a “MON” group may fail if we run out of RMIDs.
max_threshold_occupancy - generic concepts¶
Note that an RMID once freed may not be immediately available for use as the RMID is still tagged to the cache lines of its previous user. Hence such RMIDs are placed on a limbo list and checked back if the cache occupancy has gone down. If there is a time when the system has a lot of limbo RMIDs but which are not ready to be used, the user may see an -EBUSY during mkdir.
max_threshold_occupancy is a user configurable value to determine the occupancy at which an RMID can be freed.
The mon_llc_occupancy_limbo tracepoint gives the precise occupancy in bytes for a subset of RMIDs that are not immediately available for allocation. This can’t be relied on to produce output every second; it may be necessary to attempt to create an empty monitor group to force an update. Output may only be produced if creation of a control or monitor group fails.
Schemata files - general concepts¶
Each line in the file describes one resource. The line starts with the name of the resource, followed by specific values to be applied in each of the instances of that resource on the system.
Cache IDs¶
On current generation systems there is one L3 cache per socket and L2 caches are generally just shared by the hyperthreads on a core, but this isn’t an architectural requirement. We could have multiple separate L3 caches on a socket, multiple cores could share an L2 cache. So instead of using “socket” or “core” to define the set of logical cpus sharing a resource we use a “Cache ID”. At a given cache level this will be a unique number across the whole system (but it isn’t guaranteed to be a contiguous sequence, there may be gaps). To find the ID for each logical CPU look in /sys/devices/system/cpu/cpu*/cache/index*/id
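For example, assuming index3 corresponds to the L3 cache on the system at hand (which index maps to which cache level varies and can be checked via the “level” file), the L3 cache ID of CPU 0 can be read as follows (output illustrative):

# cat /sys/devices/system/cpu/cpu0/cache/index3/level
3
# cat /sys/devices/system/cpu/cpu0/cache/index3/id
0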
Cache Bit Masks (CBM)¶
For cache resources we describe the portion of the cache that is available for allocation using a bitmask. The maximum value of the mask is defined by each cpu model (and may be different for different cache levels). It is found using CPUID, but is also provided in the “info” directory of the resctrl file system in “info/{resource}/cbm_mask”. Some Intel hardware requires that these masks have all the ‘1’ bits in a contiguous block. So 0x3, 0x6 and 0xC are legal 4-bit masks with two bits set, but 0x5, 0x9 and 0xA are not. Check /sys/fs/resctrl/info/{resource}/sparse_masks if non-contiguous 1s value is supported. On a system with a 20-bit mask each bit represents 5% of the capacity of the cache. You could partition the cache into four equal parts with masks: 0x1f, 0x3e0, 0x7c00, 0xf8000.
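Continuing the 20-bit example, the relevant info files on such a system might read as follows (the values are only illustrative; min_cbm_bits and sparse_masks vary by platform):

# cat /sys/fs/resctrl/info/L3/cbm_mask
fffff
# cat /sys/fs/resctrl/info/L3/min_cbm_bits
1
# cat /sys/fs/resctrl/info/L3/sparse_masks
0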
Notes on Sub-NUMA Cluster mode¶
When SNC mode is enabled, Linux may load balance tasks between Sub-NUMA nodes much more readily than between regular NUMA nodes since the CPUs on Sub-NUMA nodes share the same L3 cache and the system may report the NUMA distance between Sub-NUMA nodes with a lower value than used for regular NUMA nodes.
The top-level monitoring files in each “mon_L3_XX” directory provide the sum of data across all SNC nodes sharing an L3 cache instance. Users who bind tasks to the CPUs of a specific Sub-NUMA node can read the “llc_occupancy”, “mbm_total_bytes”, and “mbm_local_bytes” in the “mon_sub_L3_YY” directories to get node local data.
Memory bandwidth allocation is still performed at the L3 cache level. I.e. throttling controls are applied to all SNC nodes.
L3 cache allocation bitmaps also apply to all SNC nodes. But note that the amount of L3 cache represented by each bit is divided by the number of SNC nodes per L3 cache. E.g. with a 100MB cache on a system with 10-bit allocation masks each bit normally represents 10MB. With SNC mode enabled with two SNC nodes per L3 cache, each bit only represents 5MB.
Memory bandwidth Allocation and monitoring¶
For Memory bandwidth resource, by default the user controls the resource by indicating the percentage of total memory bandwidth.
The minimum bandwidth percentage value for each cpu model is predefined and can be looked up through “info/MB/min_bandwidth”. The bandwidth granularity that is allocated is also dependent on the cpu model and can be looked up at “info/MB/bandwidth_gran”. The available bandwidth control steps are: min_bw + N * bw_gran. Intermediate values are rounded to the next control step available on the hardware.
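For example, on a system where both values are 10 (the same assumption made in Example 2 below) the valid settings are 10, 20, ..., 100, and an intermediate request such as 35 is rounded to one of those steps:

# cat /sys/fs/resctrl/info/MB/min_bandwidth
10
# cat /sys/fs/resctrl/info/MB/bandwidth_gran
10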
The bandwidth throttling is a core specific mechanism on some of Intel SKUs. Using a high bandwidth and a low bandwidth setting on two threads sharing a core may result in both threads being throttled to use the low bandwidth (see “thread_throttle_mode”).
The fact that Memory bandwidth allocation (MBA) may be a core specific mechanism whereas memory bandwidth monitoring (MBM) is done at the package level may lead to confusion when users try to apply control via the MBA and then monitor the bandwidth to see if the controls are effective. Below are such scenarios:
Users may not see an increase in actual bandwidth when percentage values are increased:
This can occur when aggregate L2 external bandwidth is more than L3 external bandwidth. Consider an SKL SKU with 24 cores on a package and where L2 external is 10GBps (hence aggregate L2 external bandwidth is 240GBps) and L3 external bandwidth is 100GBps. Now a workload with ‘20 threads, having 50% bandwidth, each consuming 5GBps’ consumes the max L3 bandwidth of 100GBps although the percentage value specified is only 50% << 100%. Hence increasing the bandwidth percentage will not yield any more bandwidth. This is because although the L2 external bandwidth still has capacity, the L3 external bandwidth is fully used. Also note that this would be dependent on the number of cores the benchmark is run on.
The same bandwidth percentage may mean different actual bandwidth depending on the number of threads:
For the same SKU as in #1, a ‘single thread, with 10% bandwidth’ and ‘4 threads, with 10% bandwidth’ can consume up to 10GBps and 40GBps although they have the same percentage bandwidth of 10%. This is simply because as threads start using more cores in an rdtgroup, the actual bandwidth may increase or vary although the user specified bandwidth percentage is the same.
In order to mitigate this and make the interface more user friendly, resctrl added support for specifying the bandwidth in MiBps as well. The kernel underneath would use a software feedback mechanism or a “Software Controller (mba_sc)” which reads the actual bandwidth using MBM counters and adjusts the memory bandwidth percentages to ensure:
"actual bandwidth < user specified bandwidth".
By default, the schemata would take the bandwidth percentage values whereas the user can switch to the “MBA software controller” mode using the mount option ‘mba_MBps’. The schemata format is specified in the below sections.
L3 schemata file details (code and data prioritization disabled)¶
With CDP disabled the L3 schemata format is:
L3:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
L3 schemata file details (CDP enabled via mount option to resctrl)¶
When CDP is enabled L3 control is split into two separate resources so you can specify independent masks for code and data like this:
L3DATA:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
L3CODE:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
L2 schemata file details¶
CDP is supported at L2 using the ‘cdpl2’ mount option. The schemata format is either:
L2:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
or
L2DATA:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
L2CODE:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...
Memory bandwidth Allocation (default mode)¶
Memory b/w domain is L3 cache.
MB:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;...
Memory bandwidth Allocation specified in MiBps¶
Memory bandwidth domain is L3 cache.
MB:<cache_id0>=bw_MiBps0;<cache_id1>=bw_MiBps1;...
Slow Memory Bandwidth Allocation (SMBA)¶
AMD hardware supports Slow Memory Bandwidth Allocation (SMBA). CXL.memory is the only supported “slow” memory device. With the support of SMBA, the hardware enables bandwidth allocation on the slow memory devices. If there are multiple such devices in the system, the throttling logic groups all the slow sources together and applies the limit on them as a whole.
The presence of SMBA (with CXL.memory) is independent of the presence of slow memory devices. If there are no such devices on the system, then configuring SMBA will have no impact on the performance of the system.
The bandwidth domain for slow memory is L3 cache. Its schemata file is formatted as:
SMBA:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;...
Reading/writing the schemata file¶
Reading the schemata file will show the state of all resources on all domains. When writing you only need to specify those values which you wish to change. E.g.
# cat schemata
L3DATA:0=fffff;1=fffff;2=fffff;3=fffff
L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
# echo "L3DATA:2=3c0;" > schemata
# cat schemata
L3DATA:0=fffff;1=fffff;2=3c0;3=fffff
L3CODE:0=fffff;1=fffff;2=fffff;3=fffff
Reading/writing the schemata file (on AMD systems)¶
Reading the schemata file will show the current bandwidth limit on all domains. The allocated resources are in multiples of one eighth GB/s. When writing to the file, you need to specify the cache id for which you wish to configure the bandwidth limit.
For example, to allocate a 2GB/s limit on cache id 1:
# cat schemata
MB:0=2048;1=2048;2=2048;3=2048
L3:0=ffff;1=ffff;2=ffff;3=ffff
# echo "MB:1=16" > schemata
# cat schemata
MB:0=2048;1=  16;2=2048;3=2048
L3:0=ffff;1=ffff;2=ffff;3=ffff
Reading/writing the schemata file (on AMD systems) with SMBA feature¶
Reading and writing the schemata file is the same as without SMBA in the above section.
For example, to allocate an 8GB/s limit on cache id 1:
# cat schemata
SMBA:0=2048;1=2048;2=2048;3=2048
MB:0=2048;1=2048;2=2048;3=2048
L3:0=ffff;1=ffff;2=ffff;3=ffff
# echo "SMBA:1=64" > schemata
# cat schemata
SMBA:0=2048;1=  64;2=2048;3=2048
MB:0=2048;1=2048;2=2048;3=2048
L3:0=ffff;1=ffff;2=ffff;3=ffff
Cache Pseudo-Locking¶
CAT enables a user to specify the amount of cache space that an application can fill. Cache pseudo-locking builds on the fact that a CPU can still read and write data pre-allocated outside its current allocated area on a cache hit. With cache pseudo-locking, data can be preloaded into a reserved portion of cache that no application can fill, and from that point on will only serve cache hits. The cache pseudo-locked memory is made accessible to user space where an application can map it into its virtual address space and thus have a region of memory with reduced average read latency.
The creation of a cache pseudo-locked region is triggered by a request from the user to do so that is accompanied by a schemata of the region to be pseudo-locked. The cache pseudo-locked region is created as follows:
Create a CAT allocation CLOSNEW with a CBM matching the schemata from the user of the cache region that will contain the pseudo-locked memory. This region must not overlap with any current CAT allocation/CLOS on the system and no future overlap with this cache region is allowed while the pseudo-locked region exists.
Create a contiguous region of memory of the same size as the cache region.
Flush the cache, disable hardware prefetchers, disable preemption.
Make CLOSNEW the active CLOS and touch the allocated memory to load it into the cache.
Set the previous CLOS as active.
At this point the closid CLOSNEW can be released - the cache pseudo-locked region is protected as long as its CBM does not appear in any CAT allocation. Even though the cache pseudo-locked region will from this point on not appear in any CBM of any CLOS, an application running with any CLOS will be able to access the memory in the pseudo-locked region since the region continues to serve cache hits.
The contiguous region of memory loaded into the cache is exposed to user-space as a character device.
Cache pseudo-locking increases the probability that data will remain in the cache via carefully configuring the CAT feature and controlling application behavior. There is no guarantee that data is placed in cache. Instructions like INVD, WBINVD, CLFLUSH, etc. can still evict “locked” data from cache. Power management C-states may shrink or power off cache. Deeper C-states will automatically be restricted on pseudo-locked region creation.
It is required that an application using a pseudo-locked region runs with affinity to the cores (or a subset of the cores) associated with the cache on which the pseudo-locked region resides. A sanity check within the code will not allow an application to map pseudo-locked memory unless it runs with affinity to cores associated with the cache on which the pseudo-locked region resides. The sanity check is only done during the initial mmap() handling; there is no enforcement afterwards and the application itself needs to ensure it remains affine to the correct cores.
Pseudo-locking is accomplished in two stages:
During the first stage the system administrator allocates a portion of cache that should be dedicated to pseudo-locking. At this time an equivalent portion of memory is allocated, loaded into the allocated cache portion, and exposed as a character device.
During the second stage a user-space application maps (mmap()) thepseudo-locked memory into its address space.
Cache Pseudo-Locking Interface¶
A pseudo-locked region is created using the resctrl interface as follows:
Create a new resource group by creating a new directory in /sys/fs/resctrl.
Change the new resource group’s mode to “pseudo-locksetup” by writing “pseudo-locksetup” to the “mode” file.
Write the schemata of the pseudo-locked region to the “schemata” file. All bits within the schemata should be “unused” according to the “bit_usage” file.
On successful pseudo-locked region creation the “mode” file will contain “pseudo-locked” and a new character device with the same name as the resource group will exist in /dev/pseudo_lock. This character device can be mmap()’ed by user space in order to obtain access to the pseudo-locked memory region.
An example of cache pseudo-locked region creation and usage can be found below.
Cache Pseudo-Locking Debugging Interface¶
The pseudo-locking debugging interface is enabled by default (if CONFIG_DEBUG_FS is enabled) and can be found in /sys/kernel/debug/resctrl.
There is no explicit way for the kernel to test if a provided memory location is present in the cache. The pseudo-locking debugging interface uses the tracing infrastructure to provide two ways to measure cache residency of the pseudo-locked region:
Memory access latency using the pseudo_lock_mem_latency tracepoint. Data from these measurements are best visualized using a hist trigger (see example below). In this test the pseudo-locked region is traversed at a stride of 32 bytes while hardware prefetchers and preemption are disabled. This also provides a substitute visualization of cache hits and misses.
Cache hit and miss measurements using model specific precision counters if available. Depending on the levels of cache on the system the pseudo_lock_l2 and pseudo_lock_l3 tracepoints are available.
When a pseudo-locked region is created a new debugfs directory is created for it in debugfs as /sys/kernel/debug/resctrl/<newdir>. A single write-only file, pseudo_lock_measure, is present in this directory. The measurement of the pseudo-locked region depends on the number written to this debugfs file:
- 1:
writing “1” to the pseudo_lock_measure file will trigger the latency measurement captured in the pseudo_lock_mem_latency tracepoint. See example below.
- 2:
writing “2” to the pseudo_lock_measure file will trigger the L2 cache residency (cache hits and misses) measurement captured in the pseudo_lock_l2 tracepoint. See example below.
- 3:
writing “3” to the pseudo_lock_measure file will trigger the L3 cache residency (cache hits and misses) measurement captured in the pseudo_lock_l3 tracepoint.
All measurements are recorded with the tracing infrastructure. This requires the relevant tracepoints to be enabled before the measurement is triggered.
Example of latency debugging interface¶
In this example a pseudo-locked region named “newlock” was created. Here is how we can measure the latency in cycles of reading from this region and visualize this data with a histogram that is available if CONFIG_HIST_TRIGGERS is set:
# :> /sys/kernel/tracing/trace
# echo 'hist:keys=latency' > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/trigger
# echo 1 > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/enable
# echo 1 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
# echo 0 > /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/enable
# cat /sys/kernel/tracing/events/resctrl/pseudo_lock_mem_latency/hist

# event histogram
#
# trigger info: hist:keys=latency:vals=hitcount:sort=hitcount:size=2048 [active]
#

{ latency:        456 } hitcount:          1
{ latency:         50 } hitcount:         83
{ latency:         36 } hitcount:         96
{ latency:         44 } hitcount:        174
{ latency:         48 } hitcount:        195
{ latency:         46 } hitcount:        262
{ latency:         42 } hitcount:        693
{ latency:         40 } hitcount:       3204
{ latency:         38 } hitcount:       3484

Totals:
    Hits: 8192
    Entries: 9
    Dropped: 0

Example of cache hits/misses debugging¶
In this example a pseudo-locked region named “newlock” was created on the L2 cache of a platform. Here is how we can obtain details of the cache hits and misses using the platform’s precision counters.
# :> /sys/kernel/tracing/trace
# echo 1 > /sys/kernel/tracing/events/resctrl/pseudo_lock_l2/enable
# echo 2 > /sys/kernel/debug/resctrl/newlock/pseudo_lock_measure
# echo 0 > /sys/kernel/tracing/events/resctrl/pseudo_lock_l2/enable
# cat /sys/kernel/tracing/trace

# tracer: nop
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
 pseudo_lock_mea-1672  [002] ....  3132.860500: pseudo_lock_l2: hits=4097 miss=0
Examples for RDT allocation usage¶
Example 1
On a two socket machine (one L3 cache per socket) with just four bits for cache bit masks, minimum b/w of 10% with a memory bandwidth granularity of 10%.
# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl
# mkdir p0 p1
# echo "L3:0=3;1=c\nMB:0=50;1=50" > /sys/fs/resctrl/p0/schemata
# echo "L3:0=3;1=3\nMB:0=50;1=50" > /sys/fs/resctrl/p1/schemata
The default resource group is unmodified, so we have access to all parts of all caches (its schemata file reads “L3:0=f;1=f”).
Tasks that are under the control of group “p0” may only allocate from the “lower” 50% on cache ID 0, and the “upper” 50% of cache ID 1. Tasks in group “p1” use the “lower” 50% of cache on both sockets.
Similarly, tasks that are under the control of group “p0” may use a maximum memory b/w of 50% on socket 0 and 50% on socket 1. Tasks in group “p1” may also use 50% memory b/w on both sockets. Note that unlike cache masks, memory b/w cannot specify whether these allocations can overlap or not. The allocation specifies the maximum b/w that the group may be able to use and the system admin can configure the b/w accordingly.
If resctrl is using the software controller (mba_sc) then the user can enter the max b/w in MB rather than the percentage values.
# echo "L3:0=3;1=c\nMB:0=1024;1=500" > /sys/fs/resctrl/p0/schemata
# echo "L3:0=3;1=3\nMB:0=1024;1=500" > /sys/fs/resctrl/p1/schemata
In the above example the tasks in “p1” and “p0” on socket 0 would use a max b/w of 1024MB whereas on socket 1 they would use 500MB.
Example 2
Again two sockets, but this time with a more realistic 20-bit mask.
Two real time tasks pid=1234 running on processor 0 and pid=5678 running on processor 1 on socket 0 on a 2-socket and dual core machine. To avoid noisy neighbors, each of the two real-time tasks exclusively occupies one quarter of L3 cache on socket 0.
# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl
First we reset the schemata for the default group so that the “upper” 50% of the L3 cache on socket 0 and 50% of memory b/w cannot be used by ordinary tasks:
# echo "L3:0=3ff;1=fffff\nMB:0=50;1=100" > schemata
Next we make a resource group for our first real time task and give it access to the “top” 25% of the cache on socket 0.
# mkdir p0
# echo "L3:0=f8000;1=fffff" > p0/schemata
Finally we move our first real time task into this resource group. We also use taskset(1) to ensure the task always runs on a dedicated CPU on socket 0. Most uses of resource groups will also constrain which processors tasks run on.
# echo 1234 > p0/tasks
# taskset -cp 1 1234
Ditto for the second real time task (with the remaining 25% of cache):
# mkdir p1
# echo "L3:0=7c00;1=fffff" > p1/schemata
# echo 5678 > p1/tasks
# taskset -cp 2 5678
For the same 2 socket system with memory b/w resource and CAT L3 the schemata would look like (assume min_bandwidth is 10 and bandwidth_gran is 10):
For our first real time task this would request 20% memory b/w on socket 0.
# echo -e "L3:0=f8000;1=fffff\nMB:0=20;1=100" > p0/schemata
For our second real time task this would request another 20% memory b/w on socket 0.
# echo -e "L3:0=7c00;1=fffff\nMB:0=20;1=100" > p1/schemata
Example 3
A single socket system which has real-time tasks running on cores 4-7 and a non real-time workload assigned to cores 0-3. The real-time tasks share text and data, so a per task association is not required and due to interaction with the kernel it’s desired that the kernel on these cores shares L3 with the tasks.
# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl
First we reset the schemata for the default group so that the “upper” 50% of the L3 cache on socket 0, and 50% of memory bandwidth on socket 0 cannot be used by ordinary tasks:
# echo "L3:0=3ff\nMB:0=50" > schemata
Next we make a resource group for our real time cores and give it access to the “top” 50% of the cache on socket 0 and 50% of memory bandwidth on socket 0.
# mkdir p0
# echo "L3:0=ffc00\nMB:0=50" > p0/schemata
Finally we move cores 4-7 over to the new group and make sure that the kernel and the tasks running there get 50% of the cache. They should also get 50% of memory bandwidth assuming that the cores 4-7 are SMT siblings and only the real time threads are scheduled on the cores 4-7.
# echo F0 > p0/cpus
Example 4
The resource groups in previous examples were all in the default “shareable” mode allowing sharing of their cache allocations. If one resource group configures a cache allocation then nothing prevents another resource group from overlapping with that allocation.
In this example a new exclusive resource group will be created on an L2 CAT system with two L2 cache instances that can be configured with an 8-bit capacity bitmask. The new exclusive resource group will be configured to use 25% of each cache instance.
# mount -t resctrl resctrl /sys/fs/resctrl/
# cd /sys/fs/resctrl
First, we observe that the default group is configured to allocate to all L2 cache:
# cat schemata
L2:0=ff;1=ff
We could attempt to create the new resource group at this point, but it will fail because of the overlap with the schemata of the default group:
# mkdir p0
# echo 'L2:0=0x3;1=0x3' > p0/schemata
# cat p0/mode
shareable
# echo exclusive > p0/mode
-sh: echo: write error: Invalid argument
# cat info/last_cmd_status
schemata overlaps
To ensure that there is no overlap with another resource group the default resource group’s schemata has to change, making it possible for the new resource group to become exclusive.
# echo 'L2:0=0xfc;1=0xfc' > schemata
# echo exclusive > p0/mode
# grep . p0/*
p0/cpus:0
p0/mode:exclusive
p0/schemata:L2:0=03;1=03
p0/size:L2:0=262144;1=262144
A new resource group will on creation not overlap with an exclusive resource group:
# mkdir p1
# grep . p1/*
p1/cpus:0
p1/mode:shareable
p1/schemata:L2:0=fc;1=fc
p1/size:L2:0=786432;1=786432
The bit_usage will reflect how the cache is used:
# cat info/L2/bit_usage
0=SSSSSSEE;1=SSSSSSEE
A resource group cannot be forced to overlap with an exclusive resource group:
# echo 'L2:0=0x1;1=0x1' > p1/schemata
-sh: echo: write error: Invalid argument
# cat info/last_cmd_status
overlaps with exclusive group
Example of Cache Pseudo-Locking¶
Lock a portion of the L2 cache from cache id 1 using CBM 0x3. The pseudo-locked region is exposed at /dev/pseudo_lock/newlock and can be provided to an application as an argument to mmap().
# mount -t resctrl resctrl /sys/fs/resctrl/
# cd /sys/fs/resctrl
Ensure that there are bits available that can be pseudo-locked. Since only unused bits can be pseudo-locked, the bits to be pseudo-locked need to be removed from the default resource group’s schemata:
# cat info/L2/bit_usage
0=SSSSSSSS;1=SSSSSSSS
# echo 'L2:1=0xfc' > schemata
# cat info/L2/bit_usage
0=SSSSSSSS;1=SSSSSS00
Create a new resource group that will be associated with the pseudo-locked region, indicate that it will be used for a pseudo-locked region, and configure the requested pseudo-locked region capacity bitmask:
# mkdir newlock
# echo pseudo-locksetup > newlock/mode
# echo 'L2:1=0x3' > newlock/schemata
On success the resource group’s mode will change to pseudo-locked, the bit_usage will reflect the pseudo-locked region, and the character device exposing the pseudo-locked region will exist:
# cat newlock/mode
pseudo-locked
# cat info/L2/bit_usage
0=SSSSSSSS;1=SSSSSSPP
# ls -l /dev/pseudo_lock/newlock
crw------- 1 root root 243, 0 Apr  3 05:01 /dev/pseudo_lock/newlock
/*
 * Example code to access one page of pseudo-locked cache region
 * from user space.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

/*
 * It is required that the application runs with affinity to only
 * cores associated with the pseudo-locked region. Here the cpu
 * is hardcoded for convenience of example.
 */
static int cpuid = 2;

int main(int argc, char *argv[])
{
        cpu_set_t cpuset;
        long page_size;
        void *mapping;
        int dev_fd;
        int ret;

        page_size = sysconf(_SC_PAGESIZE);

        CPU_ZERO(&cpuset);
        CPU_SET(cpuid, &cpuset);
        ret = sched_setaffinity(0, sizeof(cpuset), &cpuset);
        if (ret < 0) {
                perror("sched_setaffinity");
                exit(EXIT_FAILURE);
        }

        dev_fd = open("/dev/pseudo_lock/newlock", O_RDWR);
        if (dev_fd < 0) {
                perror("open");
                exit(EXIT_FAILURE);
        }

        mapping = mmap(0, page_size, PROT_READ | PROT_WRITE, MAP_SHARED,
                       dev_fd, 0);
        if (mapping == MAP_FAILED) {
                perror("mmap");
                close(dev_fd);
                exit(EXIT_FAILURE);
        }

        /* Application interacts with pseudo-locked memory @mapping */

        ret = munmap(mapping, page_size);
        if (ret < 0) {
                perror("munmap");
                close(dev_fd);
                exit(EXIT_FAILURE);
        }

        close(dev_fd);
        exit(EXIT_SUCCESS);
}

Locking between applications¶
Certain operations on the resctrl filesystem, composed of read/writes to/from multiple files, must be atomic.
As an example, the allocation of an exclusive reservation of L3 cache involves:
Read the cbmmasks from each directory or the per-resource “bit_usage”
Find a contiguous set of bits in the global CBM bitmask that is clear in any of the directory cbmmasks
Create a new directory
Set the bits found in step 2 to the new directory “schemata” file
If two applications attempt to allocate space concurrently then they can end up allocating the same bits so the reservations are shared instead of exclusive.
To coordinate atomic operations on the resctrlfs and to avoid the problem above, the following locking procedure is recommended:
Locking is based on flock, which is available in libc and also as a shell script command
Write lock:
Take flock(LOCK_EX) on /sys/fs/resctrl
Read/write the directory structure.
funlock
Read lock:
Take flock(LOCK_SH) on /sys/fs/resctrl
If successful, read the directory structure.
funlock
Example with bash:
# Atomically read directory structure
$ flock -s /sys/fs/resctrl/ find /sys/fs/resctrl

# Read directory contents and create new subdirectory
$ cat create-dir.sh
find /sys/fs/resctrl/ > output.txt
mask = function-of(output.txt)
mkdir /sys/fs/resctrl/newres/
echo mask > /sys/fs/resctrl/newres/schemata

$ flock /sys/fs/resctrl/ ./create-dir.sh
Example with C:
/*
 * Example code to take advisory locks
 * before accessing resctrl filesystem
 */
#include <sys/file.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

void resctrl_take_shared_lock(int fd)
{
        int ret;

        /* take shared lock on resctrl filesystem */
        ret = flock(fd, LOCK_SH);
        if (ret) {
                perror("flock");
                exit(-1);
        }
}

void resctrl_take_exclusive_lock(int fd)
{
        int ret;

        /* take exclusive lock on resctrl filesystem */
        ret = flock(fd, LOCK_EX);
        if (ret) {
                perror("flock");
                exit(-1);
        }
}

void resctrl_release_lock(int fd)
{
        int ret;

        /* release lock on resctrl filesystem */
        ret = flock(fd, LOCK_UN);
        if (ret) {
                perror("flock");
                exit(-1);
        }
}

int main(void)
{
        int fd;

        fd = open("/sys/fs/resctrl", O_DIRECTORY);
        if (fd == -1) {
                perror("open");
                exit(-1);
        }
        resctrl_take_shared_lock(fd);
        /* code to read directory contents */
        resctrl_release_lock(fd);

        resctrl_take_exclusive_lock(fd);
        /* code to read and write directory contents */
        resctrl_release_lock(fd);

        return 0;
}

Examples for RDT Monitoring along with allocation usage¶
Reading monitored data¶
Reading an event file (for ex: mon_data/mon_L3_00/llc_occupancy) would show the current snapshot of LLC occupancy of the corresponding MON group or CTRL_MON group.
Example 1 (Monitor CTRL_MON group and subset of tasks in CTRL_MON group)¶
On a two socket machine (one L3 cache per socket) with just four bits for cache bit masks:
# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl
# mkdir p0 p1
# echo "L3:0=3;1=c" > /sys/fs/resctrl/p0/schemata
# echo "L3:0=3;1=3" > /sys/fs/resctrl/p1/schemata
# echo 5678 > p1/tasks
# echo 5679 > p1/tasks
The default resource group is unmodified, so we have access to all parts of all caches (its schemata file reads “L3:0=f;1=f”).
Tasks that are under the control of group “p0” may only allocate from the “lower” 50% on cache ID 0, and the “upper” 50% of cache ID 1. Tasks in group “p1” use the “lower” 50% of cache on both sockets.
Create monitor groups and assign a subset of tasks to each monitor group.
# cd /sys/fs/resctrl/p1/mon_groups
# mkdir m11 m12
# echo 5678 > m11/tasks
# echo 5679 > m12/tasks
Fetch data (data shown in bytes):
# cat m11/mon_data/mon_L3_00/llc_occupancy
16234000
# cat m11/mon_data/mon_L3_01/llc_occupancy
14789000
# cat m12/mon_data/mon_L3_00/llc_occupancy
16789000
The parent ctrl_mon group shows the aggregated data.
# cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
31234000
Example 2 (Monitor a task from its creation)¶
On a two socket machine (one L3 cache per socket):
# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl
# mkdir p0 p1
An RMID is allocated to the group once it is created and hence the <cmd> below is monitored from its creation.
# echo $$ > /sys/fs/resctrl/p1/tasks
# <cmd>
Fetch the data:
# cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
31789000
Example 3 (Monitor without CAT support or before creating CAT groups)¶
Assume a system like HSW has only CQM and no CAT support. In this case the resctrl will still mount but cannot create CTRL_MON directories. But the user can create different MON groups within the root group and thereby monitor all tasks including kernel threads.
This can also be used to profile a job’s cache size footprint before being able to allocate it to different allocation groups.
# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl
# mkdir mon_groups/m01
# mkdir mon_groups/m02
# echo 3478 > /sys/fs/resctrl/mon_groups/m01/tasks
# echo 2467 > /sys/fs/resctrl/mon_groups/m02/tasks
Monitor the groups separately and also get per domain data. From the below it is apparent that the tasks are mostly doing work on domain (socket) 0.
# cat /sys/fs/resctrl/mon_groups/m01/mon_data/mon_L3_00/llc_occupancy
31234000
# cat /sys/fs/resctrl/mon_groups/m01/mon_data/mon_L3_01/llc_occupancy
34555
# cat /sys/fs/resctrl/mon_groups/m02/mon_data/mon_L3_00/llc_occupancy
31234000
# cat /sys/fs/resctrl/mon_groups/m02/mon_data/mon_L3_01/llc_occupancy
32789
Example 4 (Monitor real time tasks)¶
A single socket system which has real time tasks running on cores 4-7 and non real time tasks on other cpus. We want to monitor the cache occupancy of the real time threads on these cores.
# mount -t resctrl resctrl /sys/fs/resctrl
# cd /sys/fs/resctrl
# mkdir p1
Move the cpus 4-7 over to p1:
# echo f0 > p1/cpus
View the llc occupancy snapshot:
# cat /sys/fs/resctrl/p1/mon_data/mon_L3_00/llc_occupancy
11234000
Examples on working with mbm_assign_mode¶
a. Check if MBM counter assignment mode is supported.
# mount -t resctrl resctrl /sys/fs/resctrl/
# cat /sys/fs/resctrl/info/L3_MON/mbm_assign_mode
[mbm_event]
default
The “mbm_event” mode is detected and enabled.
b. Check how many assignable counters are supported.
# cat /sys/fs/resctrl/info/L3_MON/num_mbm_cntrs
0=32;1=32
c. Check how many assignable counters are available for assignment in each domain.
# cat /sys/fs/resctrl/info/L3_MON/available_mbm_cntrs
0=30;1=30
d. To list the default group’s assign states.
# cat /sys/fs/resctrl/mbm_L3_assignments
mbm_total_bytes:0=e;1=e
mbm_local_bytes:0=e;1=e
e. To unassign the counter associated with the mbm_total_bytes event on domain 0.
# echo "mbm_total_bytes:0=_" > /sys/fs/resctrl/mbm_L3_assignments# cat /sys/fs/resctrl/mbm_L3_assignmentsmbm_total_bytes:0=_;1=embm_local_bytes:0=e;1=e
f. To unassign the counter associated with the mbm_total_bytes event on all domains.
# echo "mbm_total_bytes:*=_" > /sys/fs/resctrl/mbm_L3_assignments# cat /sys/fs/resctrl/mbm_L3_assignmentmbm_total_bytes:0=_;1=_mbm_local_bytes:0=e;1=e
g. To assign a counter associated with the mbm_total_bytes event on all domains inexclusive mode.
# echo "mbm_total_bytes:*=e" > /sys/fs/resctrl/mbm_L3_assignments# cat /sys/fs/resctrl/mbm_L3_assignmentsmbm_total_bytes:0=e;1=embm_local_bytes:0=e;1=e
h. Read the events mbm_total_bytes and mbm_local_bytes of the default group. There isno change in reading the events with the assignment.
# cat /sys/fs/resctrl/mon_data/mon_L3_00/mbm_total_bytes779247936# cat /sys/fs/resctrl/mon_data/mon_L3_01/mbm_total_bytes562324232# cat /sys/fs/resctrl/mon_data/mon_L3_00/mbm_local_bytes212122123# cat /sys/fs/resctrl/mon_data/mon_L3_01/mbm_local_bytes121212144
i. Check the event configurations.
# cat /sys/fs/resctrl/info/L3_MON/event_configs/mbm_total_bytes/event_filter
local_reads,remote_reads,local_non_temporal_writes,remote_non_temporal_writes,local_reads_slow_memory,remote_reads_slow_memory,dirty_victim_writes_all
# cat /sys/fs/resctrl/info/L3_MON/event_configs/mbm_local_bytes/event_filter
local_reads,local_non_temporal_writes,local_reads_slow_memory
j. Change the event configuration for mbm_local_bytes.
# echo "local_reads, local_non_temporal_writes, local_reads_slow_memory, remote_reads" >/sys/fs/resctrl/info/L3_MON/event_configs/mbm_local_bytes/event_filter# cat /sys/fs/resctrl/info/L3_MON/event_configs/mbm_local_bytes/event_filterlocal_reads,local_non_temporal_writes,local_reads_slow_memory,remote_reads
k. Now read the local events again. The first read may come back with “Unavailable”status. The subsequent read of mbm_local_bytes will display the current value.
# cat /sys/fs/resctrl/mon_data/mon_L3_00/mbm_local_bytesUnavailable# cat /sys/fs/resctrl/mon_data/mon_L3_00/mbm_local_bytes2252323# cat /sys/fs/resctrl/mon_data/mon_L3_01/mbm_local_bytesUnavailable# cat /sys/fs/resctrl/mon_data/mon_L3_01/mbm_local_bytes1566565
l. Users have the option to go back to ‘default’ mbm_assign_mode if required. This can bedone using the following command. Note that switching the mbm_assign_mode may reset allthe MBM counters (and thus all MBM events) of all the resctrl groups.
# echo "default" > /sys/fs/resctrl/info/L3_MON/mbm_assign_mode# cat /sys/fs/resctrl/info/L3_MON/mbm_assign_modembm_event[default]
m. Unmount the resctrl filesystem.
# umount /sys/fs/resctrl/
Intel RDT Errata¶
Intel MBM Counters May Report System Memory Bandwidth Incorrectly¶
Errata SKX99 for Skylake server and BDF102 for Broadwell server.
Problem: Intel Memory Bandwidth Monitoring (MBM) counters track metrics according to the assigned Resource Monitor ID (RMID) for that logical core. The IA32_QM_CTR register (MSR 0xC8E), used to report these metrics, may report incorrect system bandwidth for certain RMID values.
Implication: Due to the errata, system memory bandwidth may not match what is reported.
Workaround: MBM total and local readings are corrected according to the following correction factor table:
core count | rmid count | rmid threshold | correction factor |
1 | 8 | 0 | 1.000000 |
2 | 16 | 0 | 1.000000 |
3 | 24 | 15 | 0.969650 |
4 | 32 | 0 | 1.000000 |
6 | 48 | 31 | 0.969650 |
7 | 56 | 47 | 1.142857 |
8 | 64 | 0 | 1.000000 |
9 | 72 | 63 | 1.185115 |
10 | 80 | 63 | 1.066553 |
11 | 88 | 79 | 1.454545 |
12 | 96 | 0 | 1.000000 |
13 | 104 | 95 | 1.230769 |
14 | 112 | 95 | 1.142857 |
15 | 120 | 95 | 1.066667 |
16 | 128 | 0 | 1.000000 |
17 | 136 | 127 | 1.254863 |
18 | 144 | 127 | 1.185255 |
19 | 152 | 0 | 1.000000 |
20 | 160 | 127 | 1.066667 |
21 | 168 | 0 | 1.000000 |
22 | 176 | 159 | 1.454334 |
23 | 184 | 0 | 1.000000 |
24 | 192 | 127 | 0.969744 |
25 | 200 | 191 | 1.280246 |
26 | 208 | 191 | 1.230921 |
27 | 216 | 0 | 1.000000 |
28 | 224 | 191 | 1.143118 |
If rmid > rmid threshold, MBM total and local values should be multiplied by the correction factor.
See:
1. Erratum SKX99 in Intel Xeon Processor Scalable Family Specification Update: http://web.archive.org/web/20200716124958/https://www.intel.com/content/www/us/en/processors/xeon/scalable/xeon-scalable-spec-update.html
2. Erratum BDF102 in Intel Xeon E5-2600 v4 Processor Product Family Specification Update: http://web.archive.org/web/20191125200531/https://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/xeon-e5-v4-spec-update.pdf
3. The errata in Intel Resource Director Technology (Intel RDT) on 2nd Generation Intel Xeon Scalable Processors Reference Manual: https://software.intel.com/content/www/us/en/develop/articles/intel-resource-director-technology-rdt-reference-manual.html
for further information.