torch.cuda.memory.empty_cache

torch.cuda.memory.empty_cache()

Release all unoccupied cached memory currently held by the caching allocator so that it can be used by other GPU applications and becomes visible in nvidia-smi.

Note

empty_cache() doesn’t increase the amount of GPU memory available to PyTorch. However, it may help reduce fragmentation of GPU memory in certain cases. See Memory management for more details about GPU memory management.
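A minimal sketch of how this can play out in practice: freeing a tensor returns its block to PyTorch's cache, but the memory still shows as reserved (and appears used in nvidia-smi) until empty_cache() releases the cached blocks. The tensor sizes here are arbitrary, and the check is guarded so it also runs on CPU-only machines.

```python
import torch

if torch.cuda.is_available():
    # Allocate, then free, a tensor: the block goes back to the
    # caching allocator, but the GPU memory stays reserved.
    x = torch.empty(1024, 1024, device="cuda")
    del x

    reserved_before = torch.cuda.memory_reserved()
    torch.cuda.empty_cache()  # hand unoccupied cached blocks back to the driver
    reserved_after = torch.cuda.memory_reserved()

    # Reserved memory can only shrink (or stay the same) here.
    print(reserved_before >= reserved_after)
else:
    print("CUDA not available; nothing to demonstrate.")
```

Note that calling empty_cache() in a tight loop tends to hurt performance, since the allocator must re-request memory from the driver on the next allocation; it is mainly useful when another process needs the memory or when fragmentation is an issue.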