torch.cuda.memory.get_allocator_backend#

torch.cuda.memory.get_allocator_backend()[source]#

Return a string describing the active allocator backend as set by `PYTORCH_CUDA_ALLOC_CONF`. Currently available backends are `native` (PyTorch's native caching allocator) and `cudaMallocAsync` (CUDA's built-in asynchronous allocator).
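A minimal sketch of querying the active backend; the guard on `torch.cuda.is_available()` is an assumption to keep the snippet safe on CPU-only builds:

```python
import torch

# Query which CUDA caching-allocator backend is active.
# The result reflects PYTORCH_CUDA_ALLOC_CONF, read at process start.
if torch.cuda.is_available():
    backend = torch.cuda.memory.get_allocator_backend()
    print(backend)  # "native" unless cudaMallocAsync was selected via the env var
```

To switch backends, set the environment variable before launching Python, e.g. `PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync`; it cannot be changed after CUDA is initialized.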

Note

See Memory management for details on choosing the allocator backend.

Return type:

str