torch.tensor

torch.tensor(data, *, dtype=None, device=None, requires_grad=False, pin_memory=False) → Tensor

Constructs a tensor with no autograd history (also known as a “leaf tensor”, see Autograd mechanics) by copying data.

Warning

When working with tensors prefer using torch.Tensor.clone(), torch.Tensor.detach(), and torch.Tensor.requires_grad_() for readability. Letting t be a tensor, torch.tensor(t) is equivalent to t.detach().clone(), and torch.tensor(t, requires_grad=True) is equivalent to t.detach().clone().requires_grad_(True).
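The equivalence above can be sketched as follows; the tensor values here are illustrative only:

```python
import torch

# A tensor that participates in autograd (non-leaf, for illustration).
t = torch.ones(2, 2, requires_grad=True) * 2

# Preferred spelling of torch.tensor(t): an explicit detach + copy.
copy = t.detach().clone()

# Preferred spelling of torch.tensor(t, requires_grad=True).
copy_grad = t.detach().clone().requires_grad_(True)

# The copies share no storage with t and start with no autograd history.
print(copy.requires_grad)       # False
print(copy_grad.requires_grad)  # True
```

Spelling the operations out makes it obvious at the call site whether the result is detached from the graph and whether gradients will be recorded.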

See also

torch.as_tensor() preserves autograd history and avoids copies where possible. torch.from_numpy() creates a tensor that shares storage with a NumPy array.
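A quick sketch of the copy-vs-share distinction, using a small illustrative array:

```python
import numpy as np
import torch

a = np.array([1.0, 2.0, 3.0])

shared = torch.from_numpy(a)  # shares storage with a
copied = torch.tensor(a)      # always copies the data

# Mutating the NumPy array is visible through the shared tensor,
# but not through the copy.
a[0] = 99.0
print(shared[0].item())  # 99.0
print(copied[0].item())  # 1.0
```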

Parameters

data (array_like) – Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.

Keyword Arguments
  • dtype (torch.dtype, optional) – the desired data type of the returned tensor. Default: if None, infers data type from data.

  • device (torch.device, optional) – the device of the constructed tensor. If None and data is a tensor, then the device of data is used. If None and data is not a tensor, then the result tensor is constructed on the current device.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

  • pin_memory (bool, optional) – If set, the returned tensor is allocated in pinned memory. Works only for CPU tensors. Default: False.

Example:

>>> torch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
tensor([[ 0.1000,  1.2000],
        [ 2.2000,  3.1000],
        [ 4.9000,  5.2000]])

>>> torch.tensor([0, 1])  # Type inference on data
tensor([ 0,  1])

>>> torch.tensor([[0.11111, 0.222222, 0.3333333]],
...              dtype=torch.float64,
...              device=torch.device('cuda:0'))  # creates a double tensor on a CUDA device
tensor([[ 0.1111,  0.2222,  0.3333]], dtype=torch.float64, device='cuda:0')

>>> torch.tensor(3.14159)  # Create a zero-dimensional (scalar) tensor
tensor(3.1416)

>>> torch.tensor([])  # Create an empty tensor (of size (0,))
tensor([])