torch.from_file

torch.from_file(filename, shared=None, size=0, *, dtype=None, layout=None, device=None, pin_memory=False)

Creates a CPU tensor with a storage backed by a memory-mapped file.

If shared is True, then memory is shared between processes and all changes are written to the file. If shared is False, then changes to the tensor do not affect the file.

size is the number of elements in the tensor. If shared is False, then the file must contain at least size * sizeof(dtype) bytes. If shared is True, the file will be created if needed.
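The difference between the two modes can be sketched as follows; this is a minimal illustration, and the file path used here is hypothetical:

```python
import os
import tempfile

import torch

# With shared=True (MAP_SHARED), the file is created if it does not
# exist, and writes to the tensor are persisted to the file.
path = os.path.join(tempfile.gettempdir(), 'from_file_demo.bin')
if os.path.exists(path):
    os.remove(path)

# Creates the file and maps 4 float32 elements (16 bytes).
t = torch.from_file(path, shared=True, size=4, dtype=torch.float32)
t.fill_(7.0)  # writes go through to the memory-mapped file

# Mapping the same file again observes the persisted values.
t2 = torch.from_file(path, shared=True, size=4, dtype=torch.float32)
```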

Note

Only CPU tensors can be mapped to files.

Note

For now, tensors with storages backed by a memory-mapped file cannot be created in pinned memory.

Parameters
  • filename (str) – file name to map

  • shared (bool) – whether to share memory (whether MAP_SHARED or MAP_PRIVATE is passed to the underlying mmap(2) call)

  • size (int) – number of elements in the tensor

Keyword Arguments
  • dtype (torch.dtype, optional) – the desired data type of the returned tensor. Default: if None, uses a global default (see torch.set_default_dtype()).

  • layout (torch.layout, optional) – the desired layout of the returned Tensor. Default: torch.strided.

  • device (torch.device, optional) – the desired device of the returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_device()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

  • pin_memory (bool, optional) – If set, the returned tensor will be allocated in pinned memory. Works only for CPU tensors. Default: False.

Example:

>>> t = torch.randn(2, 5, dtype=torch.float64)
>>> t.numpy().tofile('storage.pt')
>>> t_mapped = torch.from_file('storage.pt', shared=False, size=10, dtype=torch.float64)
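Extending the example above, the following sketch shows that with shared=False (MAP_PRIVATE) the mapping is copy-on-write: the tensor starts with the file's contents, but modifying it leaves the file untouched. The file path is illustrative:

```python
import os
import tempfile

import numpy as np
import torch

# Write six known float64 values to a file on disk.
path = os.path.join(tempfile.gettempdir(), 'private_demo.bin')
np.arange(6, dtype=np.float64).tofile(path)

# Map the file privately: the tensor sees the file's contents.
t = torch.from_file(path, shared=False, size=6, dtype=torch.float64)
t.zero_()  # modifies only the private copy-on-write pages

# Reading the file back shows it was not modified.
on_disk = np.fromfile(path, dtype=np.float64)
```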