Class TransformerDecoderLayerImpl
Defined in File transformerlayer.h
Inheritance Relationships
Base Type
- public torch::nn::Cloneable<TransformerDecoderLayerImpl> (Template Class Cloneable)
Class Documentation
- class TransformerDecoderLayerImpl : public torch::nn::Cloneable<TransformerDecoderLayerImpl>
TransformerDecoderLayer is made up of self-attn, multi-head-attn and feedforward network.
This standard decoder layer is based on the paper “Attention Is All You Need”: Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users may modify the layer or implement it in a different way for their application. See https://pytorch.org/docs/main/nn.html#transformer-layers to learn about the exact behavior of this module.
See the documentation for the torch::nn::TransformerDecoderLayerOptions class to learn what constructor arguments are supported for this module.
Example:
TransformerDecoderLayer model(TransformerDecoderLayerOptions(512, 8).dropout(0.2));
Public Functions
- inline TransformerDecoderLayerImpl(int64_t d_model, int64_t nhead)
- explicit TransformerDecoderLayerImpl(TransformerDecoderLayerOptions options_)
- virtual void reset() override
reset() must perform initialization of all members with reference semantics, most importantly parameters, buffers and submodules.
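For context, a minimal sketch of how a user-defined module built on Cloneable typically satisfies this contract (MyLayerImpl below is a hypothetical example, not part of LibTorch). Example:
#include <torch/torch.h>

// Hypothetical module illustrating the reset() contract of Cloneable: every
// parameter, buffer, and submodule is (re)created inside reset(), so that
// clone() can rebuild the module from scratch.
struct MyLayerImpl : torch::nn::Cloneable<MyLayerImpl> {
  MyLayerImpl(int64_t in, int64_t out) : in_(in), out_(out) {
    reset();
  }

  void reset() override {
    linear = register_module("linear", torch::nn::Linear(in_, out_));
  }

  torch::Tensor forward(const torch::Tensor& x) {
    return linear->forward(x);
  }

  torch::nn::Linear linear{nullptr};
  int64_t in_;
  int64_t out_;
};
TORCH_MODULE(MyLayer);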
- void reset_parameters()
- Tensor forward(Tensor tgt, const Tensor& memory, const Tensor& tgt_mask = {}, const Tensor& memory_mask = {}, const Tensor& tgt_key_padding_mask = {}, const Tensor& memory_key_padding_mask = {})
Pass the inputs (and masks) through the decoder layer.
Args:
- tgt: the sequence to the decoder layer (required).
- memory: the sequence from the last layer of the encoder (required).
- tgt_mask: the mask for the tgt sequence (optional).
- memory_mask: the mask for the memory sequence (optional).
- tgt_key_padding_mask: the mask for the tgt keys per batch (optional).
- memory_key_padding_mask: the mask for the memory keys per batch (optional).
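A short end-to-end sketch of calling forward() (the shapes assume the (sequence, batch, feature) convention of the corresponding Python module, and the additive -inf causal mask follows the same convention; the tensor sizes are illustrative). Example:
#include <torch/torch.h>
#include <iostream>
#include <limits>

int main() {
  torch::nn::TransformerDecoderLayer layer(
      torch::nn::TransformerDecoderLayerOptions(512, 8).dropout(0.2));

  // tgt: (T, N, E), memory: (S, N, E) with T = 10, S = 20, N = 32, E = 512.
  auto tgt = torch::rand({10, 32, 512});
  auto memory = torch::rand({20, 32, 512});

  // Additive causal mask over the target sequence: -inf above the diagonal.
  auto tgt_mask = torch::triu(
      torch::full({10, 10}, -std::numeric_limits<float>::infinity(), torch::kFloat),
      /*diagonal=*/1);

  auto out = layer->forward(tgt, memory, tgt_mask);
  std::cout << out.sizes() << std::endl;  // [10, 32, 512]
}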
Public Members
- TransformerDecoderLayerOptions options
The options used to configure this module.
- MultiheadAttention self_attn = {nullptr}
Self attention.
- MultiheadAttention multihead_attn = {nullptr}
Multi-headed attention.
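Since these members are public, the configuration can be read back and the registered submodules reused directly. A small sketch, reusing model from the constructor example above and a tgt tensor like the one in the forward() example (the tuple return of MultiheadAttention is an assumption, mirroring the Python module):
// Read the configuration back from the public options member.
std::cout << "d_model: " << model->options.d_model()
          << ", nhead: " << model->options.nhead() << std::endl;

// Invoke the registered self-attention submodule directly; it is assumed to
// return a (output, attention weights) tuple as in the Python module.
auto [attn_out, attn_weights] = model->self_attn(tgt, tgt, tgt);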
Protected Functions
- inline virtual bool _forward_has_default_args() override
The following three functions allow a module with default arguments in its forward() method to be used in a Sequential module.
You should NEVER override these functions manually. Instead, use the FORWARD_HAS_DEFAULT_ARGS macro (see the sketch after these declarations).
- inline virtual unsigned int _forward_num_required_args() override
- inline std::vector<torch::nn::AnyValue> _forward_populate_default_args(std::vector<torch::nn::AnyValue>&& arguments) override
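For reference, a minimal sketch of the macro in a user-defined module (MImpl and its forward() signature are hypothetical; each macro entry maps an argument index to its default value wrapped in torch::nn::AnyValue). Example:
// Hypothetical module whose forward() has default arguments, registered via
// FORWARD_HAS_DEFAULT_ARGS so it can still be used inside a Sequential module.
struct MImpl : torch::nn::Module {
  torch::Tensor forward(torch::Tensor a, int b = 2, double c = 3.0) {
    return a + b + c;
  }

 protected:
  // Argument 1 (b) defaults to 2; argument 2 (c) defaults to 3.0.
  FORWARD_HAS_DEFAULT_ARGS(
      {1, torch::nn::AnyValue(2)},
      {2, torch::nn::AnyValue(3.0)})
};
TORCH_MODULE(M);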
- Tensor activation(const Tensor& input)
Apply activation based on configuration.
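The activation applied here is chosen through the options object at construction time; a brief sketch (assuming the activation option accepts torch::kGELU, mirroring the Python module). Example:
// Use GELU instead of the default ReLU in the feedforward block.
TransformerDecoderLayer gelu_layer(
    TransformerDecoderLayerOptions(512, 8).activation(torch::kGELU));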
Friends
- friend struct torch::nn::AnyModuleHolder