torch.accelerator.synchronize#

torch.accelerator.synchronize(device=None, /)[source]#

Wait for all kernels in all streams on the given device to complete.

Parameters

device (torch.device, str, int, optional) – device for which to synchronize. It must match the current accelerator device type. If not given, defaults to torch.accelerator.current_device_index().

Note

This function is a no-op if the current accelerator is not initialized.

Example:

>>> assert torch.accelerator.is_available(), "No available accelerators detected."
>>> start_event = torch.Event(enable_timing=True)
>>> end_event = torch.Event(enable_timing=True)
>>> start_event.record()
>>> tensor = torch.randn(100, device=torch.accelerator.current_accelerator())
>>> sum = torch.sum(tensor)
>>> end_event.record()
>>> torch.accelerator.synchronize()
>>> elapsed_time_ms = start_event.elapsed_time(end_event)