Event#
- class torch.cuda.Event(enable_timing=False, blocking=False, interprocess=False, external=False)[source]#
Wrapper around a CUDA event.
CUDA events are synchronization markers that can be used to monitor the device’s progress, to accurately measure timing, and to synchronize CUDA streams.
The underlying CUDA events are lazily initialized when the event is first recorded or exported to another process. After creation, only streams on the same device may record the event. However, streams on any device can wait on the event.
- Parameters
  - enable_timing (bool, optional) – indicates if the event should measure time (default: False)
  - blocking (bool, optional) – if True, wait() will be blocking (default: False)
  - interprocess (bool) – if True, the event can be shared between processes (default: False)
  - external (bool, optional) – indicates whether this event should create event record and event wait nodes, or create an internal cross-stream dependency, when captured in a CUDA graph. See cross-stream dependencies, cudaEventRecordExternal, and cudaEventWaitExternal for more information about internal vs. external events. (default: False)
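As a minimal sketch of how these flags are used in practice (variable names are ours), events can be constructed freely even before a GPU is touched, since the underlying CUDA event is created lazily:

```python
import torch

# Timing-enabled events can measure elapsed GPU time between a pair
# of recorded events; events without timing have lower overhead.
timing_event = torch.cuda.Event(enable_timing=True)
plain_event = torch.cuda.Event()

# Construction succeeds without a GPU because initialization is lazy;
# the CUDA event only comes into existence on first record/export.
if torch.cuda.is_available():
    plain_event.record()       # records on torch.cuda.current_stream()
    plain_event.synchronize()  # CPU blocks until the event completes
```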
- elapsed_time(end_event)[source]#
Return the time elapsed.
Time reported in milliseconds after the event was recorded and before the end_event was recorded.
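The usual timing pattern records a start and an end event around the work being measured, synchronizes on the end event, and then reads the elapsed time. A sketch (the helper name `time_gpu_op` is ours, not part of torch):

```python
import torch

def time_gpu_op(fn, *args):
    """Time a GPU operation in milliseconds using a pair of
    timing-enabled CUDA events."""
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    out = fn(*args)
    end.record()
    end.synchronize()  # end_event must have completed before reading
    return out, start.elapsed_time(end)  # milliseconds

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")
    _, ms = time_gpu_op(torch.matmul, x, x)
    print(f"matmul took {ms:.3f} ms")
```

Note that both events must be constructed with enable_timing=True, or elapsed_time() will raise an error.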
- classmethod from_ipc_handle(device, handle)[source]#
Reconstruct an event from an IPC handle on the given device.
- ipc_handle()[source]#
Return an IPC handle of this event.
If not recorded yet, the event will use the current device.
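A hedged sketch of the two sides of the IPC round trip (the helper name `open_event_in_child` is ours; in real use the handle would travel to another process over a pipe or queue, which is omitted here):

```python
import torch

def open_event_in_child(device, handle):
    """Child-process side: rebuild the event from a handle received
    from the parent, then wait for it to complete."""
    ev = torch.cuda.Event.from_ipc_handle(device, handle)
    ev.synchronize()

if torch.cuda.is_available():
    # Parent side: the event must be created with interprocess=True
    # before its handle can be exported.
    ev = torch.cuda.Event(interprocess=True)
    ev.record()               # initializes the underlying CUDA event
    handle = ev.ipc_handle()  # opaque handle to send to the child
```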
- query()[source]#
Check if all work currently captured by event has completed.
- Returns
A boolean indicating if all work currently captured by event has completed.
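Unlike synchronize(), query() returns immediately, so it can be used to poll for completion while the CPU does other work. A minimal sketch:

```python
import time

import torch

if torch.cuda.is_available():
    ev = torch.cuda.Event()
    x = torch.randn(2048, 2048, device="cuda")
    y = x @ x           # enqueue some GPU work
    ev.record()         # event captures the work queued so far

    # Non-blocking progress check: query() is True once all work
    # captured by the event has completed on the device.
    while not ev.query():
        time.sleep(0.001)  # CPU is free to do other work here
```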
- record(stream=None)[source]#
Record the event in a given stream.
Uses torch.cuda.current_stream() if no stream is specified. The stream’s device must match the event’s device.
- synchronize()[source]#
Wait for the event to complete.
Waits until the completion of all work currently captured in this event. This prevents the CPU thread from proceeding until the event completes.
Note
This is a wrapper around cudaEventSynchronize(): see CUDA Event documentation for more info.
- wait(stream=None)[source]#
Make all future work submitted to the given stream wait for this event.
Uses torch.cuda.current_stream() if no stream is specified.
Note
This is a wrapper around cudaStreamWaitEvent(): see CUDA Event documentation for more info.
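Together, record() and wait() express a cross-stream dependency: work submitted to one stream after the wait will not begin until the event recorded on the other stream has completed. A sketch (stream and variable names are ours):

```python
import torch

if torch.cuda.is_available():
    producer = torch.cuda.Stream()
    consumer = torch.cuda.Stream()
    done = torch.cuda.Event()

    # Producer stream: enqueue work, then mark this point with the event.
    with torch.cuda.stream(producer):
        x = torch.randn(1024, 1024, device="cuda")
        y = x @ x
        done.record()  # records on `producer` (the current stream here)

    # Consumer stream: all work submitted after this wait is ordered
    # behind the event, without blocking the CPU thread.
    done.wait(consumer)
    with torch.cuda.stream(consumer):
        z = y + 1  # safe: y is ready by the time this kernel runs

    torch.cuda.synchronize()
```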