torch.Tensor.resize_
Tensor.resize_(*sizes, memory_format=torch.contiguous_format) → Tensor
Resizes self tensor to the specified size. If the number of elements is larger than the current storage size, then the underlying storage is resized to fit the new number of elements. If the number of elements is smaller, the underlying storage is not changed. Existing elements are preserved but any new memory is uninitialized.

Warning
This is a low-level method. The storage is reinterpreted as C-contiguous, ignoring the current strides (unless the target size equals the current size, in which case the tensor is left unchanged). For most purposes, you will instead want to use view(), which checks for contiguity, or reshape(), which copies data if needed. To change the size in-place with custom strides, see set_().

Note
If torch.use_deterministic_algorithms() and torch.utils.deterministic.fill_uninitialized_memory are both set to True, new elements are initialized to prevent nondeterministic behavior from using the result as an input to an operation. Floating point and complex values are set to NaN, and integer values are set to the maximum value.

Parameters
- sizes (torch.Size or int...) – the desired size
- memory_format (torch.memory_format, optional) – the desired memory format of Tensor. Default: torch.contiguous_format. Note that the memory format of self is going to be unaffected if self.size() matches sizes.
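A short sketch of these semantics in practice, assuming a recent PyTorch build (`untyped_storage()` is available in PyTorch 2.x): resize_ operates in place and returns self, shrinking leaves the storage allocation untouched, and memory_format takes effect when the size actually changes.

```python
import torch

x = torch.tensor([[1, 2], [3, 4], [5, 6]])
y = x.resize_(2, 2)             # in-place: returns self, not a copy
assert y is x
assert x.tolist() == [[1, 2], [3, 4]]

# Shrinking does not shrink the storage; all 6 original
# elements remain allocated underneath the 2x2 view.
assert x.untyped_storage().nbytes() >= 6 * x.element_size()

# memory_format applies when the size changes: resizing to a 4-D
# shape with channels_last produces channels-last strides.
z = torch.empty(0)
z.resize_(2, 3, 4, 5, memory_format=torch.channels_last)
assert z.is_contiguous(memory_format=torch.channels_last)
```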
Example:
>>> x = torch.tensor([[1, 2], [3, 4], [5, 6]])
>>> x.resize_(2, 2)
tensor([[1, 2],
        [3, 4]])