NVIDIA CUDA Toolkit Documentation
CUDA Runtime API - v13.0.2 - Last updated October 9, 2025

6.32. Library Management

This section describes the library management functions of the CUDA runtime application programming interface.

Functions

__host__ cudaError_t cudaKernelSetAttributeForDevice ( cudaKernel_t kernel, cudaFuncAttribute attr, int value, int device )
Sets information about a kernel.
__host__ cudaError_t cudaLibraryEnumerateKernels ( cudaKernel_t* kernels, unsigned int numKernels, cudaLibrary_t lib )
Retrieve the kernel handles within a library.
__host__ cudaError_t cudaLibraryGetGlobal ( void** dptr, size_t* bytes, cudaLibrary_t library, const char* name )
Returns a global device pointer.
__host__ cudaError_t cudaLibraryGetKernel ( cudaKernel_t* pKernel, cudaLibrary_t library, const char* name )
Returns a kernel handle.
__host__ cudaError_t cudaLibraryGetKernelCount ( unsigned int* count, cudaLibrary_t lib )
Returns the number of kernels within a library.
__host__ cudaError_t cudaLibraryGetManaged ( void** dptr, size_t* bytes, cudaLibrary_t library, const char* name )
Returns a pointer to managed memory.
__host__ cudaError_t cudaLibraryGetUnifiedFunction ( void** fptr, cudaLibrary_t library, const char* symbol )
Returns a pointer to a unified function.
__host__ cudaError_t cudaLibraryLoadData ( cudaLibrary_t* library, const void* code, cudaJitOption* jitOptions, void** jitOptionsValues, unsigned int numJitOptions, cudaLibraryOption* libraryOptions, void** libraryOptionValues, unsigned int numLibraryOptions )
Load a library with specified code and options.
__host__ cudaError_t cudaLibraryLoadFromFile ( cudaLibrary_t* library, const char* fileName, cudaJitOption* jitOptions, void** jitOptionsValues, unsigned int numJitOptions, cudaLibraryOption* libraryOptions, void** libraryOptionValues, unsigned int numLibraryOptions )
Load a library with specified file and options.
__host__ cudaError_t cudaLibraryUnload ( cudaLibrary_t library )
Unloads a library.

Functions

__host__ cudaError_t cudaKernelSetAttributeForDevice ( cudaKernel_t kernel, cudaFuncAttribute attr, int value, int device )
Sets information about a kernel.
Parameters
kernel
- Kernel to set attribute of
attr
- Attribute requested
value
- Value to set
device
- Device to set attribute of
Description

This call sets the value of a specified attribute attr on the kernel kernel for the requested device device to an integer value specified by value. This function returns cudaSuccess if the new value of the attribute could be successfully set. If the set fails, this call will return an error. Not all attributes can have values set. Attempting to set a value on a read-only attribute will result in an error (cudaErrorInvalidValue).

Note that attributes set using cudaFuncSetAttribute() will override the attribute set by this API irrespective of whether the call to cudaFuncSetAttribute() is made before or after this API call. Because of this and the stricter locking requirements mentioned below, it is suggested that this call be used during the initialization path and not on each thread accessing kernel, such as on kernel launches or on the critical path.

Valid values for attr are:

  • cudaFuncAttributeMaxDynamicSharedMemorySize - The requested maximum size in bytes of dynamically-allocated shared memory. The sum of this value and the function attribute sharedSizeBytes cannot exceed the device attribute cudaDevAttrMaxSharedMemoryPerBlockOptin. The maximal size of requestable dynamic shared memory may differ by GPU architecture.

  • cudaFuncAttributePreferredSharedMemoryCarveout - On devices where the L1 cache and shared memory use the same hardware resources, this sets the shared memory carveout preference, in percent of the total shared memory. See cudaDevAttrMaxSharedMemoryPerMultiprocessor. This is only a hint, and the driver can choose a different ratio if required to execute the function.

  • cudaFuncAttributeRequiredClusterWidth: The required cluster width in blocks. The width, height, and depth values must either all be 0 or all be positive. The validity of the cluster dimensions is checked at launch time. If the value is set during compile time, it cannot be set at runtime. Setting it at runtime will return cudaErrorNotPermitted.

  • cudaFuncAttributeRequiredClusterHeight: The required cluster height in blocks. The width, height, and depth values must either all be 0 or all be positive. The validity of the cluster dimensions is checked at launch time. If the value is set during compile time, it cannot be set at runtime. Setting it at runtime will return cudaErrorNotPermitted.

  • cudaFuncAttributeRequiredClusterDepth: The required cluster depth in blocks. The width, height, and depth values must either all be 0 or all be positive. The validity of the cluster dimensions is checked at launch time. If the value is set during compile time, it cannot be set at runtime. Setting it at runtime will return cudaErrorNotPermitted.

  • cudaFuncAttributeNonPortableClusterSizeAllowed: Indicates whether the function can be launched with non-portable cluster size. 1 is allowed, 0 is disallowed.

  • cudaFuncAttributeClusterSchedulingPolicyPreference: The block scheduling policy of a function. The value type is cudaClusterSchedulingPolicy.

Note:

The API has stricter locking requirements than its legacy counterpart cudaFuncSetAttribute() due to its device-wide semantics. If multiple threads try to set the same attribute on the same device simultaneously, the resulting value depends on the interleaving chosen by the OS scheduler and the memory consistency model.

See also:

cudaLibraryLoadData, cudaLibraryLoadFromFile, cudaLibraryUnload, cudaLibraryGetKernel, cudaLaunchKernel, cudaFuncSetAttribute, cuKernelSetAttribute
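As a sketch of the initialization-path usage suggested above, the attribute can be raised once per device before any launches. The library path "kernels.cubin" and kernel name "myKernel" below are placeholder assumptions, not part of the API:

```cpp
// Sketch: raise the dynamic shared memory limit for one kernel on every
// visible device, once, during initialization.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    cudaLibrary_t lib;
    cudaKernel_t kernel;
    if (cudaLibraryLoadFromFile(&lib, "kernels.cubin", nullptr, nullptr, 0,
                                nullptr, nullptr, 0) != cudaSuccess)
        return 1;
    if (cudaLibraryGetKernel(&kernel, lib, "myKernel") != cudaSuccess)
        return 1;

    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    for (int dev = 0; dev < deviceCount; ++dev) {
        // Request up to 64 KiB of dynamic shared memory for this kernel.
        cudaError_t err = cudaKernelSetAttributeForDevice(
            kernel, cudaFuncAttributeMaxDynamicSharedMemorySize,
            64 * 1024, dev);
        if (err != cudaSuccess)
            fprintf(stderr, "device %d: %s\n", dev, cudaGetErrorString(err));
    }
    cudaLibraryUnload(lib);
    return 0;
}
```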

__host__ cudaError_t cudaLibraryEnumerateKernels ( cudaKernel_t* kernels, unsigned int numKernels, cudaLibrary_t lib )
Retrieve the kernel handles within a library.
Parameters
kernels
- Buffer where the kernel handles are returned to
numKernels
- Maximum number of kernel handles that may be returned to the buffer
lib
- Library to query from
Description

Returns in kernels up to numKernels kernel handles within lib. The returned kernel handles become invalid when the library is unloaded.

See also:

cudaLibraryGetKernelCount, cuLibraryEnumerateKernels
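A typical pattern pairs this call with cudaLibraryGetKernelCount to size the buffer first. A minimal sketch (the fatbin path is a placeholder):

```cpp
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

int main() {
    cudaLibrary_t lib;
    if (cudaLibraryLoadFromFile(&lib, "kernels.fatbin", nullptr, nullptr, 0,
                                nullptr, nullptr, 0) != cudaSuccess)
        return 1;

    unsigned int count = 0;
    cudaLibraryGetKernelCount(&count, lib);           // size the buffer first
    std::vector<cudaKernel_t> kernels(count);
    cudaLibraryEnumerateKernels(kernels.data(), count, lib);
    printf("library exposes %u kernel(s)\n", count);

    cudaLibraryUnload(lib);  // every handle in 'kernels' is now invalid
    return 0;
}
```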

__host__ cudaError_t cudaLibraryGetGlobal ( void** dptr, size_t* bytes, cudaLibrary_t library, const char* name )
Returns a global device pointer.
Parameters
dptr
- Returned global device pointer for the requested library
bytes
- Returned global size in bytes
library
- Library to retrieve global from
name
- Name of global to retrieve
Description

Returns in *dptr and *bytes the base pointer and size of the global with name name for the requested library library and the current device. If no global for the requested name name exists, the call returns cudaErrorSymbolNotFound. One of the parameters dptr or bytes (not both) can be NULL, in which case it is ignored. The returned dptr cannot be passed to the Symbol APIs such as cudaMemcpyToSymbol, cudaMemcpyFromSymbol, cudaGetSymbolAddress, or cudaGetSymbolSize.

See also:

cudaLibraryLoadData, cudaLibraryLoadFromFile, cudaLibraryUnload, cudaLibraryGetManaged, cuLibraryGetGlobal
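Because the returned pointer is an ordinary device pointer rather than a symbol, it is used with cudaMemcpy instead of the Symbol APIs. A sketch (library path and the global name "counter" are placeholders):

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    cudaLibrary_t lib;
    if (cudaLibraryLoadFromFile(&lib, "kernels.cubin", nullptr, nullptr, 0,
                                nullptr, nullptr, 0) != cudaSuccess)
        return 1;

    void*  dptr  = nullptr;
    size_t bytes = 0;
    cudaError_t err = cudaLibraryGetGlobal(&dptr, &bytes, lib, "counter");
    if (err == cudaErrorSymbolNotFound) {
        fprintf(stderr, "no __device__ global named 'counter'\n");
        return 1;
    }

    // Plain cudaMemcpy works here; cudaMemcpyFromSymbol would not.
    int host = 0;
    cudaMemcpy(&host, dptr, sizeof(int), cudaMemcpyDeviceToHost);
    printf("counter = %d (%zu bytes)\n", host, bytes);

    cudaLibraryUnload(lib);
    return 0;
}
```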

__host__ cudaError_t cudaLibraryGetKernel ( cudaKernel_t* pKernel, cudaLibrary_t library, const char* name )
Returns a kernel handle.
Parameters
pKernel
- Returned kernel handle
library
- Library to retrieve kernel from
name
- Name of kernel to retrieve
Description

Returns in pKernel the handle of the kernel with name name located in library library. If the kernel is not found, the call returns cudaErrorSymbolNotFound.

See also:

cudaLibraryLoadData, cudaLibraryLoadFromFile, cudaLibraryUnload, cuLibraryGetKernel
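A minimal lookup with error handling might look like the sketch below (library path and kernel name are placeholders). Note that for kernels compiled without extern "C" linkage, the lookup name is the C++ mangled name:

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    cudaLibrary_t lib;
    if (cudaLibraryLoadFromFile(&lib, "kernels.cubin", nullptr, nullptr, 0,
                                nullptr, nullptr, 0) != cudaSuccess)
        return 1;

    cudaKernel_t kernel;
    cudaError_t err = cudaLibraryGetKernel(&kernel, lib, "vectorAdd");
    if (err == cudaErrorSymbolNotFound)
        fprintf(stderr, "kernel 'vectorAdd' not found in library\n");

    cudaLibraryUnload(lib);  // invalidates 'kernel'
    return 0;
}
```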

__host__ cudaError_t cudaLibraryGetKernelCount ( unsigned int* count, cudaLibrary_t lib )
Returns the number of kernels within a library.
Parameters
count
- Number of kernels found within the library
lib
- Library to query
Description

Returns in count the number of kernels in lib.

See also:

cudaLibraryEnumerateKernels, cudaLibraryLoadFromFile, cudaLibraryLoadData, cuLibraryGetKernelCount

__host__ cudaError_t cudaLibraryGetManaged ( void** dptr, size_t* bytes, cudaLibrary_t library, const char* name )
Returns a pointer to managed memory.
Parameters
dptr
- Returned pointer to the managed memory
bytes
- Returned memory size in bytes
library
- Library to retrieve managed memory from
name
- Name of managed memory to retrieve
Description

Returns in *dptr and *bytes the base pointer and size of the managed memory with name name for the requested library library. If no managed memory with the requested name name exists, the call returns cudaErrorSymbolNotFound. One of the parameters dptr or bytes (not both) can be NULL, in which case it is ignored. Note that managed memory for library library is shared across devices and is registered when the library is loaded. The returned dptr cannot be passed to the Symbol APIs such as cudaMemcpyToSymbol, cudaMemcpyFromSymbol, cudaGetSymbolAddress, or cudaGetSymbolSize.

See also:

cudaLibraryLoadData, cudaLibraryLoadFromFile, cudaLibraryUnload, cudaLibraryGetGlobal, cuLibraryGetManaged
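Since managed memory is host-accessible, the returned pointer can be dereferenced directly on the host. A sketch (library path and the name "state" are placeholders):

```cpp
#include <cuda_runtime.h>
#include <cstring>

int main() {
    cudaLibrary_t lib;
    if (cudaLibraryLoadFromFile(&lib, "kernels.cubin", nullptr, nullptr, 0,
                                nullptr, nullptr, 0) != cudaSuccess)
        return 1;

    void*  dptr  = nullptr;
    size_t bytes = 0;
    if (cudaLibraryGetManaged(&dptr, &bytes, lib, "state") == cudaSuccess) {
        // __managed__ memory is host-accessible: initialize it in place.
        // The same pointer is valid on every device.
        memset(dptr, 0, bytes);
    }

    cudaLibraryUnload(lib);
    return 0;
}
```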

__host__ cudaError_t cudaLibraryGetUnifiedFunction ( void** fptr, cudaLibrary_t library, const char* symbol )
Returns a pointer to a unified function.
Parameters
fptr
- Returned pointer to a unified function
library
- Library to retrieve function pointer memory from
symbol
- Name of function pointer to retrieve
Description

Returns in *fptr the function pointer to a unified function denoted by symbol. If no unified function with name symbol exists, the call returns cudaErrorSymbolNotFound. If there is no device with attribute cudaDeviceProp::unifiedFunctionPointers present in the system, the call may return cudaErrorSymbolNotFound.

See also:

cudaLibraryLoadData, cudaLibraryLoadFromFile, cudaLibraryUnload, cuLibraryGetUnifiedFunction
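A retrieval sketch (library path and the symbol name "transform" are placeholders; per the description above, success also depends on a device with unified function pointer support being present):

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    cudaLibrary_t lib;
    if (cudaLibraryLoadFromFile(&lib, "kernels.cubin", nullptr, nullptr, 0,
                                nullptr, nullptr, 0) != cudaSuccess)
        return 1;

    void* fptr = nullptr;
    cudaError_t err = cudaLibraryGetUnifiedFunction(&fptr, lib, "transform");
    if (err == cudaErrorSymbolNotFound)
        fprintf(stderr, "no unified function 'transform' "
                        "(or no capable device present)\n");

    cudaLibraryUnload(lib);
    return 0;
}
```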

__host__ cudaError_t cudaLibraryLoadData ( cudaLibrary_t* library, const void* code, cudaJitOption* jitOptions, void** jitOptionsValues, unsigned int numJitOptions, cudaLibraryOption* libraryOptions, void** libraryOptionValues, unsigned int numLibraryOptions )
Load a library with specified code and options.
Parameters
library
- Returned library
code
- Code to load
jitOptions
- Options for JIT
jitOptionsValues
- Option values for JIT
numJitOptions
- Number of options
libraryOptions
- Options for loading
libraryOptionValues
- Option values for loading
numLibraryOptions
- Number of options for loading
Description

Takes a pointer code and loads the corresponding library library based on the application-defined library loading mode:

  • If module loading is set to EAGER, via the environment variables described in "Module loading", library is loaded eagerly into all contexts at the time of the call and into future contexts at the time of creation, until the library is unloaded with cudaLibraryUnload().

  • If the environment variables are set to LAZY, library is not immediately loaded onto all existent contexts and will only be loaded when a function is needed for that context, such as a kernel launch.

These environment variables are described in the CUDA programming guide under the "CUDA environment variables" section.

The code may be a cubin or fatbin as output by nvcc, or a NULL-terminated PTX, either as output by nvcc or hand-written. A fatbin should also contain relocatable code when doing separate compilation. Please also see the documentation for nvrtc (https://docs.nvidia.com/cuda/nvrtc/index.html), nvjitlink (https://docs.nvidia.com/cuda/nvjitlink/index.html), and nvfatbin (https://docs.nvidia.com/cuda/nvfatbin/index.html) for more information on generating loadable code at runtime.

Options are passed as an array via jitOptions and any corresponding parameters are passed in jitOptionsValues. The total number of JIT options is supplied via numJitOptions. Any outputs will be returned via jitOptionsValues.

Library load options are passed as an array via libraryOptions and any corresponding parameters are passed in libraryOptionValues. The total number of library load options is supplied via numLibraryOptions.

See also:

cudaLibraryLoadFromFile, cudaLibraryUnload, cuLibraryLoadData
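A sketch of loading NULL-terminated PTX with a JIT error-log buffer attached via the options arrays. The PTX string itself is a placeholder standing in for real nvcc/nvrtc output; the option names used are members of the cudaJitOption enum:

```cpp
#include <cuda_runtime.h>
#include <cstdio>

// Placeholder: in practice this string comes from nvcc or nvrtc output.
static const char* ptxSource = "";

int main() {
    char errorLog[4096] = {0};
    cudaJitOption opts[] = { cudaJitErrorLogBuffer,
                             cudaJitErrorLogBufferSizeBytes };
    void* optVals[]      = { errorLog,
                             (void*)(size_t)sizeof(errorLog) };

    cudaLibrary_t lib;
    cudaError_t err = cudaLibraryLoadData(&lib, ptxSource,
                                          opts, optVals, 2,
                                          nullptr, nullptr, 0);
    if (err != cudaSuccess) {
        // On JIT failure the supplied log buffer is filled in.
        fprintf(stderr, "load failed: %s\n%s\n",
                cudaGetErrorString(err), errorLog);
        return 1;
    }
    cudaLibraryUnload(lib);
    return 0;
}
```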

__host__ cudaError_t cudaLibraryLoadFromFile ( cudaLibrary_t* library, const char* fileName, cudaJitOption* jitOptions, void** jitOptionsValues, unsigned int numJitOptions, cudaLibraryOption* libraryOptions, void** libraryOptionValues, unsigned int numLibraryOptions )
Load a library with specified file and options.
Parameters
library
- Returned library
fileName
- File to load from
jitOptions
- Options for JIT
jitOptionsValues
- Option values for JIT
numJitOptions
- Number of options
libraryOptions
- Options for loading
libraryOptionValues
- Option values for loading
numLibraryOptions
- Number of options for loading
Description

Takes a pathname fileName and loads the corresponding library library based on the application-defined library loading mode:

  • If module loading is set to EAGER, via the environment variables described in "Module loading", library is loaded eagerly into all contexts at the time of the call and into future contexts at the time of creation, until the library is unloaded with cudaLibraryUnload().

  • If the environment variables are set to LAZY, library is not immediately loaded onto all existent contexts and will only be loaded when a function is needed for that context, such as a kernel launch.

These environment variables are described in the CUDA programming guide under the "CUDA environment variables" section.

The file should be a cubin file as output by nvcc, or a PTX file either as output by nvcc or hand-written, or a fatbin file as output by nvcc. A fatbin should also contain relocatable code when doing separate compilation. Please also see the documentation for nvrtc (https://docs.nvidia.com/cuda/nvrtc/index.html), nvjitlink (https://docs.nvidia.com/cuda/nvjitlink/index.html), and nvfatbin (https://docs.nvidia.com/cuda/nvfatbin/index.html) for more information on generating loadable code at runtime.

Options are passed as an array via jitOptions and any corresponding parameters are passed in jitOptionsValues. The total number of options is supplied via numJitOptions. Any outputs will be returned via jitOptionsValues.

Library load options are passed as an array via libraryOptions and any corresponding parameters are passed in libraryOptionValues. The total number of library load options is supplied via numLibraryOptions.

See also:

cudaLibraryLoadData, cudaLibraryUnload, cuLibraryLoadFromFile
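An end-to-end sketch: load a library from a fatbin file with no options, look up a kernel, and unload. The file path and kernel name are placeholders:

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    cudaLibrary_t lib;
    // No JIT or library options: pass NULL arrays with counts of 0.
    cudaError_t err = cudaLibraryLoadFromFile(&lib, "kernels.fatbin",
                                              nullptr, nullptr, 0,
                                              nullptr, nullptr, 0);
    if (err != cudaSuccess) {
        fprintf(stderr, "load failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    cudaKernel_t kernel;
    if (cudaLibraryGetKernel(&kernel, lib, "scale") == cudaSuccess) {
        /* ... use the kernel handle ... */
    }

    cudaLibraryUnload(lib);  // also invalidates 'kernel'
    return 0;
}
```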

__host__ cudaError_t cudaLibraryUnload ( cudaLibrary_t library )
Unloads a library.
Parameters
library
- Library to unload
Description

Unloads the library specified with library.

See also:

cudaLibraryLoadData, cudaLibraryLoadFromFile, cuLibraryUnload


