torch.cuda.memory.max_memory_reserved

torch.cuda.memory.max_memory_reserved(device=None)
Return the maximum GPU memory managed by the caching allocator in bytes for a given device.
By default, this returns the peak cached memory since the beginning of this program.
reset_peak_memory_stats() can be used to reset the starting point in tracking this metric. For example, these two functions can measure the peak cached memory amount of each iteration in a training loop.

Parameters
device (torch.device or int, optional) – selected device. Returns statistic for the current device, given by current_device(), if device is None (default).

Return type
int
Note
See Memory management for more details about GPU memory management.
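A minimal sketch of the per-iteration measurement pattern described above, pairing reset_peak_memory_stats() with max_memory_reserved(). The helper function name, tensor sizes, and iteration count are illustrative assumptions, not part of the PyTorch API; the sketch returns None when CUDA (or torch itself) is unavailable.

```python
def measure_peak_reserved(num_iters=3):
    """Illustrative helper (not a PyTorch API): report the peak reserved
    memory for each of a few dummy "training" iterations.

    Returns a list of per-iteration peak byte counts, or None when
    torch is not installed or no CUDA device is available.
    """
    try:
        import torch
    except ImportError:
        return None
    if not torch.cuda.is_available():
        return None

    peaks = []
    for _ in range(num_iters):
        # Restart peak tracking so this iteration is measured in isolation.
        torch.cuda.reset_peak_memory_stats()

        # Stand-in for one iteration's work: allocate and use some tensors.
        x = torch.randn(1024, 1024, device="cuda")
        y = x @ x
        torch.cuda.synchronize()

        # Peak bytes held by the caching allocator during this iteration.
        peaks.append(torch.cuda.max_memory_reserved())
        del x, y
    return peaks

if __name__ == "__main__":
    result = measure_peak_reserved()
    if result is None:
        print("CUDA not available; skipping measurement")
    else:
        for i, peak in enumerate(result):
            print(f"iteration {i}: peak reserved {peak} bytes")
```

Note that max_memory_reserved() reports memory held by the caching allocator, which can exceed the memory actually allocated to live tensors, since the allocator retains freed blocks for reuse.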