
Struct EmbeddingFuncOptions

Struct Documentation

struct EmbeddingFuncOptions

Options for torch::nn::functional::embedding.

Example:

namespace F = torch::nn::functional;
F::embedding(input, weight, F::EmbeddingFuncOptions().norm_type(2.5).scale_grad_by_freq(true).sparse(true));

Public Functions

inline auto padding_idx(const std::optional<int64_t> &new_padding_idx) -> decltype(*this)

If specified, the entries at padding_idx do not contribute to the gradient; therefore, the embedding vector at padding_idx is not updated during training, i.e. it remains as a fixed "pad".

inline auto padding_idx(std::optional<int64_t> &&new_padding_idx) -> decltype(*this)
inline const std::optional<int64_t> &padding_idx() const noexcept
inline std::optional<int64_t> &padding_idx() noexcept
inline auto max_norm(const std::optional<double> &new_max_norm) -> decltype(*this)

If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm.

inline auto max_norm(std::optional<double> &&new_max_norm) -> decltype(*this)
inline const std::optional<double> &max_norm() const noexcept
inline std::optional<double> &max_norm() noexcept
inline auto norm_type(const double &new_norm_type) -> decltype(*this)

The p of the p-norm to compute for the max_norm option. Default 2.

inline auto norm_type(double &&new_norm_type) -> decltype(*this)
inline const double &norm_type() const noexcept
inline double &norm_type() noexcept
inline auto scale_grad_by_freq(const bool &new_scale_grad_by_freq) -> decltype(*this)

If given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default false.

inline auto scale_grad_by_freq(bool &&new_scale_grad_by_freq) -> decltype(*this)
inline const bool &scale_grad_by_freq() const noexcept
inline bool &scale_grad_by_freq() noexcept
inline auto sparse(const bool &new_sparse) -> decltype(*this)

If true, gradient w.r.t. weight matrix will be a sparse tensor.

inline auto sparse(bool &&new_sparse) -> decltype(*this)
inline const bool &sparse() const noexcept
inline bool &sparse() noexcept