Class Tensor#
Defined in File TensorBody.h
Inheritance Relationships#
Base Type#
public TensorBase
Class Documentation#
- class Tensor : public TensorBase#
Public Types
Public Functions
- Tensor() = default#
- inline explicit Tensor(c10::intrusive_ptr<TensorImpl, UndefinedTensorImpl> tensor_impl)#
- inline explicit Tensor(const TensorBase &base)#
- inline Tensor(TensorBase &&base)#
- inline c10::MaybeOwned<Tensor> expect_contiguous(MemoryFormat memory_format = MemoryFormat::Contiguous) const &#
Should be used if *this can reasonably be expected to be contiguous and performance is important.
Compared to contiguous(), it saves a reference-count increment/decrement if *this is already contiguous, at the cost (in all cases) of an extra pointer of stack usage, an extra branch to access it, and an extra branch at destruction time.
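A minimal usage sketch (illustrative, not from the header; the helper name sum_elems and the float dtype are assumptions). It borrows the input when it is already contiguous and only materializes a copy otherwise; assumes <ATen/ATen.h>:
// Hypothetical helper: sums a float CPU tensor, copying only when needed.
float sum_elems(const at::Tensor &t) {
  c10::MaybeOwned<at::Tensor> c = t.expect_contiguous(); // borrow or own
  const float *data = c->data_ptr<float>();
  float acc = 0.0f;
  for (int64_t i = 0; i < c->numel(); ++i) {
    acc += data[i];
  }
  return acc;
}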
- c10::MaybeOwned<Tensor> expect_contiguous(MemoryFormat memory_format = MemoryFormat::Contiguous) && = delete#
- inline C10_DEPRECATED_MESSAGE("Tensor.type()isdeprecated.InsteaduseTensor.options(),whichinmanycases(e.g.inaconstructor)isadrop-inreplacement.Ifyouwereusingdatafromtype(),thatisnowavailablefromTensoritself,soinsteadoftensor.type().scalar_type(),usetensor.scalar_type()insteadandinsteadoftensor.type().backend()usetensor.device().")DeprecatedTypeProperties&type()const
- inline C10_DEPRECATED_MESSAGE("Tensor.is_variable()isdeprecated;everythingisavariablenow.(Ifyouwanttoassertthatvariablehasbeenappropriatelyhandledalready,useat::impl::variable_excluded_from_dispatch())")boolis_variable()constnoexcept
- template<typenameT>inline C10_DEPRECATED_MESSAGE("Tensor.data<T>()isdeprecated.PleaseuseTensor.data_ptr<T>()instead.")T*data()const
- template<typename T, size_t N, template<typename U> class PtrTraits = DefaultPtrTraits, typename index_t = int64_t> inline C10_DEPRECATED_MESSAGE("packed_accessor is deprecated, use packed_accessor32 or packed_accessor64 instead") GenericPackedTensorAccessor<T, N, PtrTraits, index_t> packed_accessor() const &#
- template<typename T, size_t N, template<typename U> class PtrTraits = DefaultPtrTraits, typename index_t = int64_t> C10_DEPRECATED_MESSAGE("packed_accessor is deprecated, use packed_accessor32 or packed_accessor64 instead") GenericPackedTensorAccessor<T, N, PtrTraits, index_t> packed_accessor() && = delete#
- inline void backward(const Tensor &gradient = {}, std::optional<bool> retain_graph = std::nullopt, bool create_graph = false, std::optional<TensorList> inputs = std::nullopt) const#
Computes the gradient of the current tensor with respect to graph leaves.
The graph is differentiated using the chain rule. If the tensor is non-scalar (i.e. its data has more than one element) and requires gradient, the function additionally requires specifying gradient. It should be a tensor of matching type and location that contains the gradient of the differentiated function w.r.t. this Tensor.
This function accumulates gradients in the leaves - you might need to zero them before calling it.
- Parameters:
gradient – Gradient w.r.t. the tensor. If it is a tensor, it will be automatically converted to a Tensor that does not require grad unless create_graph is true. None values can be specified for scalar Tensors or ones that don't require grad. If a None value would be acceptable then this argument is optional.
retain_graph – If false, the graph used to compute the grads will be freed. Note that in nearly all cases setting this option to true is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph.
create_graph – If true, graph of the derivative will be constructed, allowing to compute higher-order derivative products. Defaults to false.
inputs – Inputs w.r.t. which the gradient will be accumulated into at::Tensor::grad. All other Tensors will be ignored. If not provided, the gradient is accumulated into all the leaf Tensors that were used to compute the current tensor. When inputs are provided and a given input is not a leaf, the current implementation will call its grad_fn (even though it is not strictly needed to get these gradients). It is an implementation detail on which the user should not rely. See pytorch/pytorch#60521 for more details.
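A minimal sketch of a typical call (illustrative, not from the header; assumes <torch/torch.h> and placement inside a function body):
auto x = torch::tensor({1., 2., 3.}, torch::requires_grad());
auto y = (x * x).sum(); // scalar result, so no explicit gradient argument is needed
y.backward();           // accumulates dy/dx = 2*x into x.grad()
// x.grad() now holds [2, 4, 6]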
- inline Tensor &mutable_grad() const#
Return a mutable reference to the gradient.
This is conventionally used as t.grad() = x to set a gradient to a completely new tensor. Note that this function works on a non-const Tensor and is not thread safe.
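A short sketch of that pattern (illustrative; assumes <torch/torch.h> and placement inside a function body):
auto t = torch::ones({2}, torch::requires_grad());
(t * 2).sum().backward();             // t.grad() is now [2, 2]
t.mutable_grad() = torch::zeros({2}); // replace the gradient with a brand-new tensor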
- inline const Tensor &grad() const#
This function returns an undefined tensor by default and returns a defined tensor the first time a call to backward() computes gradients for this Tensor.
The attribute will then contain the gradients computed, and future calls to backward() will accumulate (add) gradients into it.
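The accumulation behavior in a short sketch (illustrative; assumes <torch/torch.h> and placement inside a function body):
auto x = torch::tensor({1.}, torch::requires_grad());
// x.grad().defined() == false before the first backward() call
(x * 2).sum().backward(); // x.grad() == [2]
(x * 3).sum().backward(); // gradients accumulate: x.grad() == [5]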
- inlineconstTensor&_fw_grad(uint64_tlevel)const#
This function returns the forward gradient for this Tensor at the given level.
- inline void _set_fw_grad(const TensorBase &new_grad, uint64_t level, bool is_inplace_op) const#
This function can be used to set the value of the forward grad.
Note that the given new_grad might not be used directly if it has different metadata (size/stride/storage offset) compared to this Tensor. In that case, the new_grad content will be copied into a new Tensor.
- inline void __dispatch__backward(at::TensorList inputs, const ::std::optional<at::Tensor> &gradient = {}, ::std::optional<bool> retain_graph = ::std::nullopt, bool create_graph = false) const#
- inline bool __dispatch_is_leaf() const#
- inline int64_t __dispatch_output_nr() const#
- inline int64_t __dispatch__version() const#
- inline void __dispatch_retain_grad() const#
- inline bool __dispatch_retains_grad() const#
- inline at::Tensor addmv(const at::Tensor &mat, const at::Tensor &vec, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const#
- inline at::Tensor &addmv_(const at::Tensor &mat, const at::Tensor &vec, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const#
- inline at::Tensor addr(const at::Tensor &vec1, const at::Tensor &vec2, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const#
- inline at::Tensor &addr_(const at::Tensor &vec1, const at::Tensor &vec2, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const#
- inline bool allclose(const at::Tensor &other, double rtol = 1e-05, double atol = 1e-08, bool equal_nan = false) const#
- inline at::Tensor as_strided(at::IntArrayRef size, at::IntArrayRef stride, ::std::optional<int64_t> storage_offset = ::std::nullopt) const#
- inline at::Tensor as_strided_symint(c10::SymIntArrayRef size, c10::SymIntArrayRef stride, ::std::optional<c10::SymInt> storage_offset = ::std::nullopt) const#
- inline const at::Tensor &as_strided_(at::IntArrayRef size, at::IntArrayRef stride, ::std::optional<int64_t> storage_offset = ::std::nullopt) const#
- inline const at::Tensor &as_strided__symint(c10::SymIntArrayRef size, c10::SymIntArrayRef stride, ::std::optional<c10::SymInt> storage_offset = ::std::nullopt) const#
- inline at::Tensor baddbmm(const at::Tensor &batch1, const at::Tensor &batch2, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const#
- inline at::Tensor &baddbmm_(const at::Tensor &batch1, const at::Tensor &batch2, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const#
- inline at::Tensor &bernoulli_(const at::Tensor &p, ::std::optional<at::Generator> generator = ::std::nullopt) const#
- inline at::Tensor &bernoulli_(double p = 0.5, ::std::optional<at::Generator> generator = ::std::nullopt) const#
- inline at::Tensor bincount_symint(const ::std::optional<at::Tensor> &weights = {}, c10::SymInt minlength = 0) const#
- inline ::std::vector<at::Tensor> tensor_split(const at::Tensor &tensor_indices_or_sections, int64_t dim = 0) const#
- inline at::Tensor clamp(const ::std::optional<at::Scalar> &min, const ::std::optional<at::Scalar> &max = ::std::nullopt) const#
- inline at::Tensor clamp(const ::std::optional<at::Tensor> &min = {}, const ::std::optional<at::Tensor> &max = {}) const#
- inline at::Tensor &clamp_(const ::std::optional<at::Scalar> &min, const ::std::optional<at::Scalar> &max = ::std::nullopt) const#
- inline at::Tensor &clamp_(const ::std::optional<at::Tensor> &min = {}, const ::std::optional<at::Tensor> &max = {}) const#
- inline at::Tensor clip(const ::std::optional<at::Scalar> &min, const ::std::optional<at::Scalar> &max = ::std::nullopt) const#
- inline at::Tensor clip(const ::std::optional<at::Tensor> &min = {}, const ::std::optional<at::Tensor> &max = {}) const#
- inline at::Tensor &clip_(const ::std::optional<at::Scalar> &min, const ::std::optional<at::Scalar> &max = ::std::nullopt) const#
- inline at::Tensor &clip_(const ::std::optional<at::Tensor> &min = {}, const ::std::optional<at::Tensor> &max = {}) const#
- inline at::Tensor __dispatch_contiguous(at::MemoryFormat memory_format = c10::MemoryFormat::Contiguous) const#
- inline at::Tensor cov(int64_t correction = 1, const ::std::optional<at::Tensor> &fweights = {}, const ::std::optional<at::Tensor> &aweights = {}) const#
- inline at::Tensor diff(int64_t n = 1, int64_t dim = -1, const ::std::optional<at::Tensor> &prepend = {}, const ::std::optional<at::Tensor> &append = {}) const#
- inline at::Tensor &divide_(const at::Tensor &other, ::std::optional<c10::string_view> rounding_mode) const#
- inline at::Tensor &divide_(const at::Scalar &other, ::std::optional<c10::string_view> rounding_mode) const#
- inline at::Tensor new_empty(at::IntArrayRef size, ::std::optional<at::ScalarType> dtype, ::std::optional<at::Layout> layout, ::std::optional<at::Device> device, ::std::optional<bool> pin_memory) const#
- inline at::Tensor new_empty_symint(c10::SymIntArrayRef size, ::std::optional<at::ScalarType> dtype, ::std::optional<at::Layout> layout, ::std::optional<at::Device> device, ::std::optional<bool> pin_memory) const#
- inline at::Tensor new_empty_strided(at::IntArrayRef size, at::IntArrayRef stride, at::TensorOptions options = {}) const#
- inline at::Tensor new_empty_strided(at::IntArrayRef size, at::IntArrayRef stride, ::std::optional<at::ScalarType> dtype, ::std::optional<at::Layout> layout, ::std::optional<at::Device> device, ::std::optional<bool> pin_memory) const#
- inline at::Tensor new_empty_strided_symint(c10::SymIntArrayRef size, c10::SymIntArrayRef stride, at::TensorOptions options = {}) const#
- inline at::Tensor new_empty_strided_symint(c10::SymIntArrayRef size, c10::SymIntArrayRef stride, ::std::optional<at::ScalarType> dtype, ::std::optional<at::Layout> layout, ::std::optional<at::Device> device, ::std::optional<bool> pin_memory) const#
- inline at::Tensor new_full(at::IntArrayRef size, const at::Scalar &fill_value, at::TensorOptions options = {}) const#
- inline at::Tensor new_full(at::IntArrayRef size, const at::Scalar &fill_value, ::std::optional<at::ScalarType> dtype, ::std::optional<at::Layout> layout, ::std::optional<at::Device> device, ::std::optional<bool> pin_memory) const#
- inline at::Tensor new_full_symint(c10::SymIntArrayRef size, const at::Scalar &fill_value, at::TensorOptions options = {}) const#
- inline at::Tensor new_full_symint(c10::SymIntArrayRef size, const at::Scalar &fill_value, ::std::optional<at::ScalarType> dtype, ::std::optional<at::Layout> layout, ::std::optional<at::Device> device, ::std::optional<bool> pin_memory) const#
- inline at::Tensor new_zeros(at::IntArrayRef size, ::std::optional<at::ScalarType> dtype, ::std::optional<at::Layout> layout, ::std::optional<at::Device> device, ::std::optional<bool> pin_memory) const#
- inline at::Tensor new_zeros_symint(c10::SymIntArrayRef size, ::std::optional<at::ScalarType> dtype, ::std::optional<at::Layout> layout, ::std::optional<at::Device> device, ::std::optional<bool> pin_memory) const#
- inline at::Tensor new_ones(at::IntArrayRef size, ::std::optional<at::ScalarType> dtype, ::std::optional<at::Layout> layout, ::std::optional<at::Device> device, ::std::optional<bool> pin_memory) const#
- inline at::Tensor new_ones_symint(c10::SymIntArrayRef size, ::std::optional<at::ScalarType> dtype, ::std::optional<at::Layout> layout, ::std::optional<at::Device> device, ::std::optional<bool> pin_memory) const#
- inline const at::Tensor &resize_(at::IntArrayRef size, ::std::optional<at::MemoryFormat> memory_format = ::std::nullopt) const#
- inline const at::Tensor &resize__symint(c10::SymIntArrayRef size, ::std::optional<at::MemoryFormat> memory_format = ::std::nullopt) const#
- inline at::Tensor unflatten_symint(at::Dimname dim, c10::SymIntArrayRef sizes, at::DimnameList names) const#
- inline at::Tensor &index_put_(const c10::List<::std::optional<at::Tensor>> &indices, const at::Tensor &values, bool accumulate = false) const#
- inline at::Tensor index_put(const c10::List<::std::optional<at::Tensor>> &indices, const at::Tensor &values, bool accumulate = false) const#
- inline at::Tensor isclose(const at::Tensor &other, double rtol = 1e-05, double atol = 1e-08, bool equal_nan = false) const#
- inline bool is_distributed() const#
- inline bool __dispatch_is_floating_point() const#
- inline bool __dispatch_is_complex() const#
- inline bool __dispatch_is_conj() const#
- inline bool __dispatch__is_zerotensor() const#
- inline bool __dispatch_is_neg() const#
- inline bool is_nonzero() const#
- inline bool __dispatch_is_signed() const#
- inline bool __dispatch_is_inference() const#
- inline ::std::tuple<at::Tensor, at::Tensor> kthvalue_symint(c10::SymInt k, int64_t dim = -1, bool keepdim = false) const#
- inline ::std::tuple<at::Tensor, at::Tensor> kthvalue_symint(c10::SymInt k, at::Dimname dim, bool keepdim = false) const#
- inline at::Tensor nan_to_num(::std::optional<double> nan = ::std::nullopt, ::std::optional<double> posinf = ::std::nullopt, ::std::optional<double> neginf = ::std::nullopt) const#
- inline at::Tensor &nan_to_num_(::std::optional<double> nan = ::std::nullopt, ::std::optional<double> posinf = ::std::nullopt, ::std::optional<double> neginf = ::std::nullopt) const#
- inline at::Tensor log_softmax(at::Dimname dim, ::std::optional<at::ScalarType> dtype = ::std::nullopt) const#
- inline ::std::tuple<at::Tensor, at::Tensor> aminmax(::std::optional<int64_t> dim = ::std::nullopt, bool keepdim = false) const#
- inline at::Tensor mean(at::OptionalIntArrayRef dim, bool keepdim = false, ::std::optional<at::ScalarType> dtype = ::std::nullopt) const#
- inline at::Tensor mean(at::DimnameList dim, bool keepdim = false, ::std::optional<at::ScalarType> dtype = ::std::nullopt) const#
- inline at::Tensor nanmean(at::OptionalIntArrayRef dim = ::std::nullopt, bool keepdim = false, ::std::optional<at::ScalarType> dtype = ::std::nullopt) const#
- inline bool is_pinned(::std::optional<at::Device> device = ::std::nullopt) const#
- inline at::Tensor repeat_interleave(const at::Tensor &repeats, ::std::optional<int64_t> dim = ::std::nullopt, ::std::optional<int64_t> output_size = ::std::nullopt) const#
- inline at::Tensor repeat_interleave_symint(const at::Tensor &repeats, ::std::optional<int64_t> dim = ::std::nullopt, ::std::optional<c10::SymInt> output_size = ::std::nullopt) const#
- inline at::Tensor repeat_interleave(int64_t repeats, ::std::optional<int64_t> dim = ::std::nullopt, ::std::optional<int64_t> output_size = ::std::nullopt) const#
- inline at::Tensor repeat_interleave_symint(c10::SymInt repeats, ::std::optional<int64_t> dim = ::std::nullopt, ::std::optional<c10::SymInt> output_size = ::std::nullopt) const#
- inline at::Tensor detach() const#
Returns a new Tensor, detached from the current graph.
The result will never require gradient.
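For example (illustrative sketch; assumes <torch/torch.h> and placement inside a function body):
auto a = torch::randn({3}, torch::requires_grad());
auto b = a.detach(); // shares storage with a; b.requires_grad() == false
// No gradient ever flows back through b to a.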
- inline at::Tensor &detach_() const#
Detaches the Tensor from the graph that created it, making it a leaf.
Views cannot be detached in-place.
- inline int64_t size(at::Dimname dim) const#
- inline at::Tensor slice(int64_t dim = 0, ::std::optional<int64_t> start = ::std::nullopt, ::std::optional<int64_t> end = ::std::nullopt, int64_t step = 1) const#
- inline at::Tensor slice_symint(int64_t dim = 0, ::std::optional<c10::SymInt> start = ::std::nullopt, ::std::optional<c10::SymInt> end = ::std::nullopt, c10::SymInt step = 1) const#
- inline at::Tensor slice_inverse(const at::Tensor &src, int64_t dim = 0, ::std::optional<int64_t> start = ::std::nullopt, ::std::optional<int64_t> end = ::std::nullopt, int64_t step = 1) const#
- inline at::Tensor slice_inverse_symint(const at::Tensor &src, int64_t dim = 0, ::std::optional<c10::SymInt> start = ::std::nullopt, ::std::optional<c10::SymInt> end = ::std::nullopt, c10::SymInt step = 1) const#
- inline at::Tensor slice_scatter(const at::Tensor &src, int64_t dim = 0, ::std::optional<int64_t> start = ::std::nullopt, ::std::optional<int64_t> end = ::std::nullopt, int64_t step = 1) const#
- inline at::Tensor slice_scatter_symint(const at::Tensor &src, int64_t dim = 0, ::std::optional<c10::SymInt> start = ::std::nullopt, ::std::optional<c10::SymInt> end = ::std::nullopt, c10::SymInt step = 1) const#
- inline at::Tensor diagonal_scatter(const at::Tensor &src, int64_t offset = 0, int64_t dim1 = 0, int64_t dim2 = 1) const#
- inline at::Tensor as_strided_scatter(const at::Tensor &src, at::IntArrayRef size, at::IntArrayRef stride, ::std::optional<int64_t> storage_offset = ::std::nullopt) const#
- inline at::Tensor as_strided_scatter_symint(const at::Tensor &src, c10::SymIntArrayRef size, c10::SymIntArrayRef stride, ::std::optional<c10::SymInt> storage_offset = ::std::nullopt) const#
- inline ::std::vector<at::Tensor> unsafe_split_with_sizes(at::IntArrayRef split_sizes, int64_t dim = 0) const#
- inline ::std::vector<at::Tensor> unsafe_split_with_sizes_symint(c10::SymIntArrayRef split_sizes, int64_t dim = 0) const#
- inline ::std::vector<at::Tensor> split_with_sizes_symint(c10::SymIntArrayRef split_sizes, int64_t dim = 0) const#
- inline at::Tensor sspaddmm(const at::Tensor &mat1, const at::Tensor &mat2, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const#
- inline at::Tensor stft(int64_t n_fft, ::std::optional<int64_t> hop_length, ::std::optional<int64_t> win_length, const ::std::optional<at::Tensor> &window, bool normalized, ::std::optional<bool> onesided = ::std::nullopt, ::std::optional<bool> return_complex = ::std::nullopt, ::std::optional<bool> align_to_window = ::std::nullopt) const#
- inline at::Tensor stft(int64_t n_fft, ::std::optional<int64_t> hop_length = ::std::nullopt, ::std::optional<int64_t> win_length = ::std::nullopt, const ::std::optional<at::Tensor> &window = {}, bool center = true, c10::string_view pad_mode = "reflect", bool normalized = false, ::std::optional<bool> onesided = ::std::nullopt, ::std::optional<bool> return_complex = ::std::nullopt, ::std::optional<bool> align_to_window = ::std::nullopt) const#
- inline at::Tensor istft(int64_t n_fft, ::std::optional<int64_t> hop_length = ::std::nullopt, ::std::optional<int64_t> win_length = ::std::nullopt, const ::std::optional<at::Tensor> &window = {}, bool center = true, bool normalized = false, ::std::optional<bool> onesided = ::std::nullopt, ::std::optional<int64_t> length = ::std::nullopt, bool return_complex = false) const#
- inline int64_t stride(at::Dimname dim) const#
- inline at::Tensor sum(at::OptionalIntArrayRef dim, bool keepdim = false, ::std::optional<at::ScalarType> dtype = ::std::nullopt) const#
- inline at::Tensor sum(at::DimnameList dim, bool keepdim = false, ::std::optional<at::ScalarType> dtype = ::std::nullopt) const#
- inline at::Tensor nansum(at::OptionalIntArrayRef dim = ::std::nullopt, bool keepdim = false, ::std::optional<at::ScalarType> dtype = ::std::nullopt) const#
- inline at::Tensor std(at::OptionalIntArrayRef dim = ::std::nullopt, const ::std::optional<at::Scalar> &correction = ::std::nullopt, bool keepdim = false) const#
- inline at::Tensor std(at::DimnameList dim, const ::std::optional<at::Scalar> &correction = ::std::nullopt, bool keepdim = false) const#
- inline at::Tensor prod(int64_t dim, bool keepdim = false, ::std::optional<at::ScalarType> dtype = ::std::nullopt) const#
- inline at::Tensor prod(at::Dimname dim, bool keepdim = false, ::std::optional<at::ScalarType> dtype = ::std::nullopt) const#
- inline at::Tensor var(at::OptionalIntArrayRef dim = ::std::nullopt, const ::std::optional<at::Scalar> &correction = ::std::nullopt, bool keepdim = false) const#
- inline at::Tensor var(at::DimnameList dim, const ::std::optional<at::Scalar> &correction = ::std::nullopt, bool keepdim = false) const#
- inline at::Tensor norm(const ::std::optional<at::Scalar> &p, at::IntArrayRef dim, bool keepdim, at::ScalarType dtype) const#
- inline at::Tensor norm(const ::std::optional<at::Scalar> &p, at::DimnameList dim, bool keepdim, at::ScalarType dtype) const#
- inline const at::Tensor &resize_as_(const at::Tensor &the_template, ::std::optional<at::MemoryFormat> memory_format = ::std::nullopt) const#
- inline at::Tensor addmm(const at::Tensor &mat1, const at::Tensor &mat2, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const#
- inline at::Tensor &addmm_(const at::Tensor &mat1, const at::Tensor &mat2, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const#
- inline at::Tensor _addmm_activation(const at::Tensor &mat1, const at::Tensor &mat2, const at::Scalar &beta = 1, const at::Scalar &alpha = 1, bool use_gelu = false) const#
- inline const at::Tensor &sparse_resize_and_clear_(at::IntArrayRef size, int64_t sparse_dim, int64_t dense_dim) const#
- inline at::Tensor to_dense(::std::optional<at::ScalarType> dtype = ::std::nullopt, ::std::optional<bool> masked_grad = ::std::nullopt) const#
- inline at::Tensor _to_dense(::std::optional<at::ScalarType> dtype = ::std::nullopt, ::std::optional<bool> masked_grad = ::std::nullopt) const#
- inline int64_t sparse_dim() const#
- inline int64_t _dimI() const#
- inline int64_t dense_dim() const#
- inline int64_t _dimV() const#
- inline int64_t _nnz() const#
- inline bool is_coalesced() const#
- inline at::Tensor to_sparse(::std::optional<at::Layout> layout = ::std::nullopt, at::OptionalIntArrayRef blocksize = ::std::nullopt, ::std::optional<int64_t> dense_dim = ::std::nullopt) const#
- inline at::Tensor _to_sparse(::std::optional<at::Layout> layout = ::std::nullopt, at::OptionalIntArrayRef blocksize = ::std::nullopt, ::std::optional<int64_t> dense_dim = ::std::nullopt) const#
- inline at::Tensor to_sparse_bsr(at::IntArrayRef blocksize, ::std::optional<int64_t> dense_dim = ::std::nullopt) const#
- inline at::Tensor _to_sparse_bsr(at::IntArrayRef blocksize, ::std::optional<int64_t> dense_dim = ::std::nullopt) const#
- inline at::Tensor to_sparse_bsc(at::IntArrayRef blocksize, ::std::optional<int64_t> dense_dim = ::std::nullopt) const#
- inline at::Tensor _to_sparse_bsc(at::IntArrayRef blocksize, ::std::optional<int64_t> dense_dim = ::std::nullopt) const#
- inline double q_scale() const#
- inline int64_t q_zero_point() const#
- inline int64_t q_per_channel_axis() const#
- inline at::QScheme qscheme() const#
- inline at::Tensor _autocast_to_reduced_precision(bool cuda_enabled, bool cpu_enabled, at::ScalarType cuda_dtype, at::ScalarType cpu_dtype) const#
- inline at::Tensor to(at::TensorOptions options = {}, bool non_blocking = false, bool copy = false, ::std::optional<at::MemoryFormat> memory_format = ::std::nullopt) const#
- inline at::Tensor to(::std::optional<at::ScalarType> dtype, ::std::optional<at::Layout> layout, ::std::optional<at::Device> device, ::std::optional<bool> pin_memory, bool non_blocking, bool copy, ::std::optional<at::MemoryFormat> memory_format) const#
- inline at::Tensor to(at::Device device, at::ScalarType dtype, bool non_blocking = false, bool copy = false, ::std::optional<at::MemoryFormat> memory_format = ::std::nullopt) const#
- inline at::Tensor to(at::ScalarType dtype, bool non_blocking = false, bool copy = false, ::std::optional<at::MemoryFormat> memory_format = ::std::nullopt) const#
- inline at::Tensor to(const at::Tensor &other, bool non_blocking = false, bool copy = false, ::std::optional<at::MemoryFormat> memory_format = ::std::nullopt) const#
- inline at::Scalar item() const#
- inline at::Tensor &set_(at::Storage source, int64_t storage_offset, at::IntArrayRef size, at::IntArrayRef stride = {}) const#
- inline at::Tensor &set__symint(at::Storage source, c10::SymInt storage_offset, c10::SymIntArrayRef size, c10::SymIntArrayRef stride = {}) const#
- inline at::Tensor &set_(const at::Tensor &source, int64_t storage_offset, at::IntArrayRef size, at::IntArrayRef stride = {}) const#
- inline at::Tensor &set__symint(const at::Tensor &source, c10::SymInt storage_offset, c10::SymIntArrayRef size, c10::SymIntArrayRef stride = {}) const#
- inline at::Tensor &index_add_(int64_t dim, const at::Tensor &index, const at::Tensor &source, const at::Scalar &alpha = 1) const#
- inline at::Tensor index_add(int64_t dim, const at::Tensor &index, const at::Tensor &source, const at::Scalar &alpha = 1) const#
- inline at::Tensor index_add(at::Dimname dim, const at::Tensor &index, const at::Tensor &source, const at::Scalar &alpha = 1) const#
- inline at::Tensor &index_reduce_(int64_t dim, const at::Tensor &index, const at::Tensor &source, c10::string_view reduce, bool include_self = true) const#
- inline at::Tensor index_reduce(int64_t dim, const at::Tensor &index, const at::Tensor &source, c10::string_view reduce, bool include_self = true) const#
- inline at::Tensor scatter(int64_t dim, const at::Tensor &index, const at::Tensor &src, c10::string_view reduce) const#
- inline at::Tensor &scatter_(int64_t dim, const at::Tensor &index, const at::Tensor &src, c10::string_view reduce) const#
- inline at::Tensor scatter(int64_t dim, const at::Tensor &index, const at::Scalar &value, c10::string_view reduce) const#
- inline at::Tensor &scatter_(int64_t dim, const at::Tensor &index, const at::Scalar &value, c10::string_view reduce) const#
- inline at::Tensor scatter_reduce(int64_t dim, const at::Tensor &index, const at::Tensor &src, c10::string_view reduce, bool include_self = true) const#
- inline at::Tensor &scatter_reduce_(int64_t dim, const at::Tensor &index, const at::Tensor &src, c10::string_view reduce, bool include_self = true) const#
- inline at::Tensor &addbmm_(const at::Tensor &batch1, const at::Tensor &batch2, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const#
- inline at::Tensor addbmm(const at::Tensor &batch1, const at::Tensor &batch2, const at::Scalar &beta = 1, const at::Scalar &alpha = 1) const#
- inline at::Tensor &random_(int64_t from, ::std::optional<int64_t> to, ::std::optional<at::Generator> generator = ::std::nullopt) const#
- inline at::Tensor &uniform_(double from = 0, double to = 1, ::std::optional<at::Generator> generator = ::std::nullopt) const#
- inline at::Tensor &cauchy_(double median = 0, double sigma = 1, ::std::optional<at::Generator> generator = ::std::nullopt) const#
- inline at::Tensor &log_normal_(double mean = 1, double std = 2, ::std::optional<at::Generator> generator = ::std::nullopt) const#
- inline at::Tensor &exponential_(double lambd = 1, ::std::optional<at::Generator> generator = ::std::nullopt) const#
- inline at::Tensor take_along_dim(const at::Tensor &indices, ::std::optional<int64_t> dim = ::std::nullopt) const#
- inline at::Tensor addcmul(const at::Tensor &tensor1, const at::Tensor &tensor2, const at::Scalar &value = 1) const#
- inline at::Tensor &addcmul_(const at::Tensor &tensor1, const at::Tensor &tensor2, const at::Scalar &value = 1) const#
- inline at::Tensor addcdiv(const at::Tensor &tensor1, const at::Tensor &tensor2, const at::Scalar &value = 1) const#
- inline at::Tensor &addcdiv_(const at::Tensor &tensor1, const at::Tensor &tensor2, const at::Scalar &value = 1) const#
- inline ::std::tuple<at::Tensor, at::Tensor> triangular_solve(const at::Tensor &A, bool upper = true, bool transpose = false, bool unitriangular = false) const#
- inline at::Tensor ormqr(const at::Tensor &input2, const at::Tensor &input3, bool left = true, bool transpose = false) const#
- inline at::Tensor multinomial(int64_t num_samples, bool replacement = false, ::std::optional<at::Generator> generator = ::std::nullopt) const#
- inline at::Tensor multinomial_symint(c10::SymInt num_samples, bool replacement = false, ::std::optional<at::Generator> generator = ::std::nullopt) const#
- inline ::std::tuple<at::Tensor, at::Tensor> histogram(const at::Tensor &bins, const ::std::optional<at::Tensor> &weight = {}, bool density = false) const#
- inline ::std::tuple<at::Tensor, at::Tensor> histogram(int64_t bins = 100, ::std::optional<at::ArrayRef<double>> range = ::std::nullopt, const ::std::optional<at::Tensor> &weight = {}, bool density = false) const#
- inline at::Tensor quantile(const at::Tensor &q, ::std::optional<int64_t> dim = ::std::nullopt, bool keepdim = false, c10::string_view interpolation = "linear") const#
- inline at::Tensor quantile(double q, ::std::optional<int64_t> dim = ::std::nullopt, bool keepdim = false, c10::string_view interpolation = "linear") const#
- inline at::Tensor nanquantile(const at::Tensor &q, ::std::optional<int64_t> dim = ::std::nullopt, bool keepdim = false, c10::string_view interpolation = "linear") const#
- inline at::Tensor nanquantile(double q, ::std::optional<int64_t> dim = ::std::nullopt, bool keepdim = false, c10::string_view interpolation = "linear") const#
- inline ::std::tuple<at::Tensor, at::Tensor> sort(::std::optional<bool> stable, int64_t dim = -1, bool descending = false) const#
- inline ::std::tuple<at::Tensor, at::Tensor> sort(::std::optional<bool> stable, at::Dimname dim, bool descending = false) const#
- inline ::std::tuple<at::Tensor, at::Tensor> topk(int64_t k, int64_t dim = -1, bool largest = true, bool sorted = true) const#
- inline ::std::tuple<at::Tensor, at::Tensor> topk_symint(c10::SymInt k, int64_t dim = -1, bool largest = true, bool sorted = true) const#
- inline at::Tensor &normal_(double mean = 0, double std = 1, ::std::optional<at::Generator> generator = ::std::nullopt) const#
- inline at::Tensor to_padded_tensor(double padding, at::OptionalIntArrayRef output_size = ::std::nullopt) const#
- inline at::Tensor to_padded_tensor_symint(double padding, at::OptionalSymIntArrayRef output_size = ::std::nullopt) const#
- inline at::Tensor tensor_data() const#
NOTE: This is similar to the legacy .data() function on Variable, and is intended to be used from functions that need to access the Variable's equivalent Tensor (i.e. a Tensor that shares the same storage and tensor metadata with the Variable).
One notable difference with the legacy .data() function is that changes to the returned Tensor's tensor metadata (e.g. sizes / strides / storage / storage_offset) will not update the original Variable, due to the fact that this function shallow-copies the Variable's underlying TensorImpl.
- inline at::Tensor variable_data() const#
NOTE: var.variable_data() in C++ has the same semantics as tensor.data in Python, which creates a new Variable that shares the same storage and tensor metadata with the original Variable, but with a completely new autograd history.
NOTE: If we change the tensor metadata (e.g. sizes / strides / storage / storage_offset) of a variable created from var.variable_data(), those changes will not update the original variable var. In .variable_data(), we set allow_tensor_metadata_change_ to false to make such changes explicitly illegal, in order to prevent users from changing metadata of var.variable_data() and expecting the original variable var to also be updated.
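A sketch of these semantics (illustrative; assumes <torch/torch.h> and placement inside a function body):
auto var = torch::ones({2, 2}, torch::requires_grad());
auto v = var.variable_data(); // same storage and metadata, fresh autograd history
// v.requires_grad() == false; writes through v are visible in var's storage,
// but changing v's sizes/strides/storage is explicitly disallowed.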
- template<typename T> hook_return_void_t<T> register_hook(T &&hook) const#
Registers a backward hook.
The hook will be called every time a gradient with respect to the Tensor is computed. The hook should have one of the following signatures:
hook(Tensor grad) -> Tensor
hook(Tensor grad) -> void
The hook should not modify its argument, but it can optionally return a new gradient which will be used in place of grad.
This function returns the index of the hook in the list, which can be used to remove the hook.
Example:
auto v = torch::tensor({0., 0., 0.}, torch::requires_grad());
auto h = v.register_hook([](torch::Tensor grad) { return grad * 2; }); // double the gradient
v.backward(torch::tensor({1., 2., 3.}));
// This prints:
//  2
//  4
//  6
// [ CPUFloatType{3} ]
std::cout << v.grad() << std::endl;
v.remove_hook(h); // removes the hook
- template<typename T> hook_return_var_t<T> register_hook(T &&hook) const#
- void _backward(TensorList inputs, const std::optional<Tensor> &gradient, std::optional<bool> keep_graph, bool create_graph) const#
- template<typename T> auto register_hook(T &&hook) const -> Tensor::hook_return_void_t<T>#
Public Static Functions
Protected Functions
- inline explicit Tensor(unsafe_borrow_t, const TensorBase &rhs)#
Protected Attributes
- friend MaybeOwnedTraits<Tensor>
- friend OptionalTensorRef
- friend TensorRef