torch.xpu
This package introduces support for the XPU backend, specifically tailored for Intel GPU optimization.
This package is lazily initialized, so you can always import it, and use is_available()
to determine if your system supports XPU.
StreamContext: Context-manager that selects a given stream.
current_device: Return the index of the currently selected device.
current_stream: Return the currently selected Stream for a given device.
device: Context-manager that changes the selected device.
device_count: Return the number of XPU devices available.
device_of: Context-manager that changes the current device to that of a given object.
get_arch_list: Return the list of XPU architectures this library was compiled for.
get_device_capability: Get the XPU capability of a device.
get_device_name: Get the name of a device.
get_device_properties: Get the properties of a device.
get_gencode_flags: Return the XPU AOT (ahead-of-time) build flags this library was compiled with.
init: Initialize PyTorch's XPU state.
is_available: Return a bool indicating if XPU is currently available.
is_initialized: Return whether PyTorch's XPU state has been initialized.
set_device: Set the current device.
set_stream: Set the current stream. This is a wrapper API to set the stream.
stream: Wrap around the context-manager StreamContext that selects a given stream.
synchronize: Wait for all kernels in all streams on an XPU device to complete.
Random Number Generator
get_rng_state: Return the random number generator state of the specified GPU as a ByteTensor.
get_rng_state_all: Return a list of ByteTensors representing the random number states of all devices.
initial_seed: Return the current random seed of the current GPU.
manual_seed: Set the seed for generating random numbers for the current GPU.
manual_seed_all: Set the seed for generating random numbers on all GPUs.
seed: Set the seed for generating random numbers to a random number for the current GPU.
seed_all: Set the seed for generating random numbers to a random number on all GPUs.
set_rng_state: Set the random number generator state of the specified GPU.
set_rng_state_all: Set the random number generator state of all devices.
Memory management
empty_cache: Release all unoccupied cached memory currently held by the caching allocator so that it can be used by other XPU applications.
max_memory_allocated: Return the maximum GPU memory occupied by tensors in bytes for a given device.
max_memory_reserved: Return the maximum GPU memory managed by the caching allocator in bytes for a given device.
mem_get_info: Return the global free and total GPU memory for a given device.
memory_allocated: Return the current GPU memory occupied by tensors in bytes for a given device.
memory_reserved: Return the current GPU memory managed by the caching allocator in bytes for a given device.
memory_stats: Return a dictionary of XPU memory allocator statistics for a given device.
memory_stats_as_nested_dict: Return the result of memory_stats() as a nested dictionary.
reset_accumulated_memory_stats: Reset the "accumulated" (historical) stats tracked by the XPU memory allocator.
reset_peak_memory_stats: Reset the "peak" stats tracked by the XPU memory allocator.