torch.asarray

torch.asarray(obj: Any, *, dtype: Optional[dtype] = None, device: Optional[DeviceLikeType] = None, copy: Optional[bool] = None, requires_grad: bool = False) → Tensor

Converts obj to a tensor.

obj can be one of:

  1. a tensor

  2. a NumPy array or a NumPy scalar

  3. a DLPack capsule

  4. an object that implements Python’s buffer protocol

  5. a scalar

  6. a sequence of scalars

When obj is a tensor, NumPy array, or DLPack capsule, the returned tensor will, by default, not require a gradient, have the same datatype as obj, be on the same device, and share memory with it. These properties can be controlled with the dtype, device, copy, and requires_grad keyword arguments. If the returned tensor is of a different datatype, is on a different device, or a copy is requested, then it will not share its memory with obj. If requires_grad is True, then the returned tensor will require a gradient, and if obj is also a tensor with an autograd history, then the returned tensor will have the same history.

When obj is not a tensor, NumPy array, or DLPack capsule but implements Python's buffer protocol, then the buffer is interpreted as an array of bytes grouped according to the size of the datatype passed to the dtype keyword argument. (If no datatype is passed, the default floating point datatype is used instead.) The returned tensor will have the specified datatype (or the default floating point datatype if none is specified) and, by default, will be on the CPU device and share memory with the buffer.
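A short sketch of the buffer-protocol case, using the standard-library array module as an example buffer (the choice of array.array and the "i" typecode here is illustrative; any object exposing the buffer protocol behaves the same way):

```python
import array

import torch

# Four signed integers; with CPython's "i" typecode each is 4 bytes,
# so the buffer holds 16 bytes in total.
buf = array.array("i", [1, 2, 3, 4])

# The bytes are grouped according to the requested dtype.
t = torch.asarray(buf, dtype=torch.int32)
print(t)  # tensor([1, 2, 3, 4], dtype=torch.int32)

# Reading the same 16 bytes as 16-bit integers yields twice as many elements.
t16 = torch.asarray(buf, dtype=torch.int16)
print(t16.numel())  # 8
```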

When obj is a NumPy scalar, the returned tensor will be a 0-dimensional tensor on the CPU that does not share its memory (i.e. copy=True). By default, its datatype will be the PyTorch datatype corresponding to the NumPy scalar's datatype.

When obj is none of the above but a scalar, or a sequence of scalars, then the returned tensor will, by default, infer its datatype from the scalar values, be on the current default device, and not share its memory.
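The datatype inference for scalar sequences can be seen directly (the values below assume PyTorch's stock defaults, i.e. the default floating point dtype has not been changed):

```python
import torch

# Integer values infer the default integer dtype.
print(torch.asarray([1, 2, 3]).dtype)    # torch.int64

# Floating point values infer the default floating point dtype.
print(torch.asarray([1.0, 2.0]).dtype)   # torch.float32

# A mixed sequence promotes to floating point.
print(torch.asarray([1, 2.0]).dtype)     # torch.float32

# A Python list is always copied: mutating it does not affect the tensor.
data = [1, 2, 3]
t = torch.asarray(data)
data[0] = 100
print(t[0].item())  # 1
```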

See also

torch.tensor() creates a tensor that always copies the data from the input object. torch.from_numpy() creates a tensor that always shares memory with NumPy arrays. torch.frombuffer() creates a tensor that always shares memory with objects that implement the buffer protocol. torch.from_dlpack() creates a tensor that always shares memory with DLPack capsules.

Parameters

obj (object) – a tensor, NumPy array, DLPack capsule, object that implements Python's buffer protocol, scalar, or sequence of scalars.

Keyword Arguments
  • dtype (torch.dtype, optional) – the datatype of the returned tensor. Default: None, which causes the datatype of the returned tensor to be inferred from obj.

  • copy (bool, optional) – controls whether the returned tensor shares memory with obj. Default: None, which causes the returned tensor to share memory with obj whenever possible. If True, then the returned tensor does not share its memory. If False, then the returned tensor shares its memory with obj, and an error is thrown if it cannot.

  • device (torch.device, optional) – the device of the returned tensor. Default: None, which causes the device of obj to be used. Or, if obj is a Python sequence, the current default device will be used.

  • requires_grad (bool, optional) – whether the returned tensor requires grad. Default: False, which causes the returned tensor not to require a gradient. If True, then the returned tensor will require a gradient, and if obj is also a tensor with an autograd history, then the returned tensor will have the same history.
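The copy=False contract can be checked directly: when a requested conversion (such as a different dtype) makes memory sharing impossible, torch.asarray raises an error rather than silently copying. A minimal sketch:

```python
import torch

a = torch.tensor([1, 2, 3])

# copy=False with no conversion succeeds and shares memory.
b = torch.asarray(a, copy=False)
print(a.data_ptr() == b.data_ptr())  # True

# copy=False with a dtype conversion cannot share memory,
# so an error is raised instead of copying.
raised = False
try:
    torch.asarray(a, dtype=torch.float64, copy=False)
except Exception:
    raised = True
print(raised)  # True
```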

Example:

>>> a = torch.tensor([1, 2, 3])
>>> # Shares memory with tensor 'a'
>>> b = torch.asarray(a)
>>> a.data_ptr() == b.data_ptr()
True
>>> # Forces memory copy
>>> c = torch.asarray(a, copy=True)
>>> a.data_ptr() == c.data_ptr()
False

>>> a = torch.tensor([1., 2., 3.], requires_grad=True)
>>> b = a + 2
>>> b
tensor([3., 4., 5.], grad_fn=<AddBackward0>)
>>> # Shares memory with tensor 'b', with no grad
>>> c = torch.asarray(b)
>>> c
tensor([3., 4., 5.])
>>> # Shares memory with tensor 'b', retaining autograd history
>>> d = torch.asarray(b, requires_grad=True)
>>> d
tensor([3., 4., 5.], grad_fn=<AddBackward0>)

>>> array = numpy.array([1, 2, 3])
>>> # Shares memory with array 'array'
>>> t1 = torch.asarray(array)
>>> array.__array_interface__['data'][0] == t1.data_ptr()
True
>>> # Copies memory due to dtype mismatch
>>> t2 = torch.asarray(array, dtype=torch.float32)
>>> array.__array_interface__['data'][0] == t2.data_ptr()
False

>>> scalar = numpy.float64(0.5)
>>> torch.asarray(scalar)
tensor(0.5000, dtype=torch.float64)