MemPool
- class torch.cuda.memory.MemPool(*args, **kwargs)[source]
MemPool represents a pool of memory in a caching allocator. Currently, it's just the ID of the pool object maintained in the CUDACachingAllocator.
- Parameters
allocator (torch._C._cuda_CUDAAllocator, optional) – a torch._C._cuda_CUDAAllocator object that can be used to define how memory gets allocated in the pool. If allocator is None (default), memory allocation follows the default/current configuration of the CUDACachingAllocator.
use_on_oom (bool) – a bool that indicates if this pool can be used as a last resort if a memory allocation outside of the pool fails due to Out Of Memory. This is False by default.
- property allocator: Optional[_cuda_CUDAAllocator]
Returns the allocator this MemPool routes allocations to.
- snapshot()[source]
Return a snapshot of the CUDA memory allocator pool state across all devices.
Interpreting the output of this function requires familiarity with the memory allocator internals.
Note
See Memory management for more details about GPU memory management.