torch.cuda.memory.host_memory_stats
- torch.cuda.memory.host_memory_stats()
Return a dictionary of CUDA host memory allocator statistics.
The return value of this function is a dictionary of statistics, each of which is a non-negative integer.
Core statistics:
- "allocated.{current,peak,allocated,freed}": number of allocation requests received by the memory allocator.
- "allocated_bytes.{current,peak,allocated,freed}": amount of allocated memory.
- "segment.{current,peak,allocated,freed}": number of reserved segments from cudaMalloc().
- "reserved_bytes.{current,peak,allocated,freed}": amount of reserved memory.
For these core statistics, values are broken down as follows.
Metric type:
- current: current value of this metric.
- peak: maximum value of this metric.
- allocated: historical total increase in this metric.
- freed: historical total decrease in this metric.
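The relationship between the four metric types can be illustrated with a minimal sketch (this is not PyTorch's implementation): replay a hypothetical sequence of allocation/free deltas and derive each metric as described above.

```python
# Sketch of the metric-type semantics: +1 per allocation request, -1 per free.
def replay(deltas):
    stats = {"current": 0, "peak": 0, "allocated": 0, "freed": 0}
    for d in deltas:
        if d > 0:
            stats["allocated"] += d   # historical total increase
        else:
            stats["freed"] += -d      # historical total decrease
        stats["current"] += d         # current value of the metric
        stats["peak"] = max(stats["peak"], stats["current"])  # maximum seen
    return stats

# Three requests arrive, then two are released:
print(replay([1, 1, 1, -1, -1]))
# -> {'current': 1, 'peak': 3, 'allocated': 3, 'freed': 2}
```

Note that allocated and freed only ever grow, while current equals allocated minus freed at any point in time.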
In addition to the core statistics, we also provide some simple event counters:
- "num_host_alloc": number of CUDA allocation calls. This includes both cudaHostAlloc and cudaHostRegister.
- "num_host_free": number of CUDA free calls. This includes both cudaFreeHost and cudaHostUnregister.
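A hedged usage sketch for reading these counters: host_memory_stats() is only meaningful on a CUDA build of PyTorch, so this degrades gracefully when torch or a GPU is unavailable.

```python
def host_alloc_counters():
    """Return the host-allocation event counters, or None if unavailable."""
    try:
        import torch
    except ImportError:
        return None  # PyTorch not installed
    if not torch.cuda.is_available():
        return None  # no CUDA device / CPU-only build
    stats = torch.cuda.memory.host_memory_stats()
    return {k: stats.get(k) for k in ("num_host_alloc", "num_host_free")}

print(host_alloc_counters())  # e.g. {'num_host_alloc': 3, 'num_host_free': 1}, or None
```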
Finally, we also provide some simple timing counters:
- "host_alloc_time.{total,max,min,count,avg}": timing of allocation requests going through CUDA calls.
- "host_free_time.{total,max,min,count,avg}": timing of free requests going through CUDA calls.
For these timing statistics, values are broken down as follows.
Metric type:
- total: total time spent.
- max: maximum value per call.
- min: minimum value per call.
- count: number of times it was called.
- avg: average time per call.
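The five timing metric types can be sketched the same way (assumed semantics, not PyTorch internals): given a list of per-call durations, each key is a simple aggregate.

```python
# Deriving the timing metric types from hypothetical per-call durations (in µs).
def timing_stats(durations_us):
    return {
        "total": sum(durations_us),                     # total time spent
        "max": max(durations_us),                       # slowest single call
        "min": min(durations_us),                       # fastest single call
        "count": len(durations_us),                     # number of calls made
        "avg": sum(durations_us) / len(durations_us),   # mean time per call
    }

print(timing_stats([120, 80, 100]))
# -> {'total': 300, 'max': 120, 'min': 80, 'count': 3, 'avg': 100.0}
```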