This model was released on 2021-10-14 and added to Hugging Face Transformers on 2023-02-03.
SpeechT5
Overview
The SpeechT5 model was proposed in SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
The abstract from the paper is the following:
Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder. Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder. Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.
This model was contributed by Matthijs. The original code can be found here.
SpeechT5Config
class transformers.SpeechT5Config
<source>( vocab_size = 81, hidden_size = 768, encoder_layers = 12, encoder_attention_heads = 12, encoder_ffn_dim = 3072, encoder_layerdrop = 0.1, decoder_layers = 6, decoder_ffn_dim = 3072, decoder_attention_heads = 12, decoder_layerdrop = 0.1, hidden_act = 'gelu', positional_dropout = 0.1, hidden_dropout = 0.1, attention_dropout = 0.1, activation_dropout = 0.1, initializer_range = 0.02, layer_norm_eps = 1e-05, scale_embedding = False, feat_extract_norm = 'group', feat_proj_dropout = 0.0, feat_extract_activation = 'gelu', conv_dim = (512, 512, 512, 512, 512, 512, 512), conv_stride = (5, 2, 2, 2, 2, 2, 2), conv_kernel = (10, 3, 3, 3, 3, 2, 2), conv_bias = False, num_conv_pos_embeddings = 128, num_conv_pos_embedding_groups = 16, apply_spec_augment = True, mask_time_prob = 0.05, mask_time_length = 10, mask_time_min_masks = 2, mask_feature_prob = 0.0, mask_feature_length = 10, mask_feature_min_masks = 0, pad_token_id = 1, bos_token_id = 0, eos_token_id = 2, decoder_start_token_id = 2, num_mel_bins = 80, speech_decoder_prenet_layers = 2, speech_decoder_prenet_units = 256, speech_decoder_prenet_dropout = 0.5, speaker_embedding_dim = 512, speech_decoder_postnet_layers = 5, speech_decoder_postnet_units = 256, speech_decoder_postnet_kernel = 5, speech_decoder_postnet_dropout = 0.5, reduction_factor = 2, max_speech_positions = 4000, max_text_positions = 450, encoder_max_relative_position = 160, use_guided_attention_loss = True, guided_attention_loss_num_heads = 2, guided_attention_loss_sigma = 0.4, guided_attention_loss_scale = 10.0, use_cache = True, is_encoder_decoder = True, **kwargs )
Parameters
- vocab_size (
int, optional, defaults to 81) — Vocabulary size of the SpeechT5 model. Defines the number of different tokens that can be represented by the input_ids passed to the forward method of SpeechT5Model. - hidden_size (
int,optional, defaults to 768) —Dimensionality of the encoder layers and the pooler layer. - encoder_layers (
int,optional, defaults to 12) —Number of hidden layers in the Transformer encoder. - encoder_attention_heads (
int,optional, defaults to 12) —Number of attention heads for each attention layer in the Transformer encoder. - encoder_ffn_dim (
int,optional, defaults to 3072) —Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. - encoder_layerdrop (
float, optional, defaults to 0.1) — The LayerDrop probability for the encoder. See the [LayerDrop paper](https://huggingface.co/papers/1909.11556) for more details. - decoder_layers (
int,optional, defaults to 6) —Number of hidden layers in the Transformer decoder. - decoder_attention_heads (
int,optional, defaults to 12) —Number of attention heads for each attention layer in the Transformer decoder. - decoder_ffn_dim (
int,optional, defaults to 3072) —Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer decoder. - decoder_layerdrop (
float, optional, defaults to 0.1) — The LayerDrop probability for the decoder. See the [LayerDrop paper](https://huggingface.co/papers/1909.11556) for more details. - hidden_act (
strorfunction,optional, defaults to"gelu") —The non-linear activation function (function or string) in the encoder and pooler. If string,"gelu","relu","selu"and"gelu_new"are supported. - positional_dropout (
float,optional, defaults to 0.1) —The dropout probability for the text position encoding layers. - hidden_dropout (
float,optional, defaults to 0.1) —The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. - attention_dropout (
float,optional, defaults to 0.1) —The dropout ratio for the attention probabilities. - activation_dropout (
float,optional, defaults to 0.1) —The dropout ratio for activations inside the fully connected layer. - initializer_range (
float,optional, defaults to 0.02) —The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - layer_norm_eps (
float,optional, defaults to 1e-5) —The epsilon used by the layer normalization layers. - scale_embedding (
bool, optional, defaults to False) — Scale embeddings by dividing by sqrt(d_model). - feat_extract_norm (
str,optional, defaults to"group") —The norm to be applied to 1D convolutional layers in the speech encoder pre-net. One of"group"for groupnormalization of only the first 1D convolutional layer or"layer"for layer normalization of all 1Dconvolutional layers. - feat_proj_dropout (
float,optional, defaults to 0.0) —The dropout probability for output of the speech encoder pre-net. - feat_extract_activation (
str, optional, defaults to "gelu") — The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, "gelu", "relu", "selu" and "gelu_new" are supported. - conv_dim (
tuple[int]orlist[int],optional, defaults to(512, 512, 512, 512, 512, 512, 512)) —A tuple of integers defining the number of input and output channels of each 1D convolutional layer in thespeech encoder pre-net. The length ofconv_dim defines the number of 1D convolutional layers. - conv_stride (
tuple[int]orlist[int],optional, defaults to(5, 2, 2, 2, 2, 2, 2)) —A tuple of integers defining the stride of each 1D convolutional layer in the speech encoder pre-net. Thelength ofconv_stride defines the number of convolutional layers and has to match the length ofconv_dim. - conv_kernel (
tuple[int] or list[int], optional, defaults to (10, 3, 3, 3, 3, 2, 2)) — A tuple of integers defining the kernel size of each 1D convolutional layer in the speech encoder pre-net. The length of conv_kernel defines the number of convolutional layers and has to match the length of conv_dim. - conv_bias (
bool,optional, defaults toFalse) —Whether the 1D convolutional layers have a bias. - num_conv_pos_embeddings (
int,optional, defaults to 128) —Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positionalembeddings layer. - num_conv_pos_embedding_groups (
int,optional, defaults to 16) —Number of groups of 1D convolutional positional embeddings layer. - apply_spec_augment (
bool,optional, defaults toTrue) —Whether to applySpecAugment data augmentation to the outputs of the speech encoder pre-net. Forreference seeSpecAugment: A Simple Data Augmentation Method for Automatic SpeechRecognition. - mask_time_prob (
float, optional, defaults to 0.05) — Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procedure generates mask_time_prob * len(time_axis) / mask_time_length independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, mask_time_prob should be prob_vector_start * mask_time_length. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True. - mask_time_length (
int,optional, defaults to 10) —Length of vector span along the time axis. - mask_time_min_masks (
int, optional, defaults to 2) — The minimum number of masks of length mask_time_length generated along the time axis, each time step, irrespectively of mask_time_prob. Only relevant if mask_time_prob * len(time_axis) / mask_time_length < mask_time_min_masks. - mask_feature_prob (
float, optional, defaults to 0.0) — Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procedure generates mask_feature_prob * len(feature_axis) / mask_feature_length independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, mask_feature_prob should be prob_vector_start * mask_feature_length. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True. - mask_feature_length (
int,optional, defaults to 10) —Length of vector span along the feature axis. - mask_feature_min_masks (
int,optional, defaults to 0), —The minimum number of masks of lengthmask_feature_lengthgenerated along the feature axis, each timestep, irrespectively ofmask_feature_prob. Only relevant if”mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks” - num_mel_bins (
int,optional, defaults to 80) —Number of mel features used per input features. Used by the speech decoder pre-net. Should correspond tothe value used in theSpeechT5Processor class. - speech_decoder_prenet_layers (
int,optional, defaults to 2) —Number of layers in the speech decoder pre-net. - speech_decoder_prenet_units (
int,optional, defaults to 256) —Dimensionality of the layers in the speech decoder pre-net. - speech_decoder_prenet_dropout (
float,optional, defaults to 0.5) —The dropout probability for the speech decoder pre-net layers. - speaker_embedding_dim (
int,optional, defaults to 512) —Dimensionality of theXVector embedding vectors. - speech_decoder_postnet_layers (
int,optional, defaults to 5) —Number of layers in the speech decoder post-net. - speech_decoder_postnet_units (
int,optional, defaults to 256) —Dimensionality of the layers in the speech decoder post-net. - speech_decoder_postnet_kernel (
int,optional, defaults to 5) —Number of convolutional filter channels in the speech decoder post-net. - speech_decoder_postnet_dropout (
float,optional, defaults to 0.5) —The dropout probability for the speech decoder post-net layers. - reduction_factor (
int,optional, defaults to 2) —Spectrogram length reduction factor for the speech decoder inputs. - max_speech_positions (
int,optional, defaults to 4000) —The maximum sequence length of speech features that this model might ever be used with. - max_text_positions (
int,optional, defaults to 450) —The maximum sequence length of text features that this model might ever be used with. - encoder_max_relative_position (
int,optional, defaults to 160) —Maximum distance for relative position embedding in the encoder. - use_guided_attention_loss (
bool,optional, defaults toTrue) —Whether to apply guided attention loss while training the TTS model. - guided_attention_loss_num_heads (
int,optional, defaults to 2) —Number of attention heads the guided attention loss will be applied to. Use -1 to apply this loss to allattention heads. - guided_attention_loss_sigma (
float,optional, defaults to 0.4) —Standard deviation for guided attention loss. - guided_attention_loss_scale (
float,optional, defaults to 10.0) —Scaling coefficient for guided attention loss (also known as lambda). - use_cache (
bool,optional, defaults toTrue) —Whether or not the model should return the last key/values attentions (not used by all models).
This is the configuration class to store the configuration of a SpeechT5Model. It is used to instantiate a SpeechT5 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the SpeechT5 microsoft/speecht5_asr architecture.
Configuration objects inherit from PreTrainedConfig and can be used to control the model outputs. Read the documentation from PreTrainedConfig for more information.
Example:
>>> from transformers import SpeechT5Model, SpeechT5Config

>>> # Initializing a "microsoft/speecht5_asr" style configuration
>>> configuration = SpeechT5Config()

>>> # Initializing a model (with random weights) from the "microsoft/speecht5_asr" style configuration
>>> model = SpeechT5Model(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
SpeechT5HifiGanConfig
class transformers.SpeechT5HifiGanConfig
<source>( model_in_dim = 80, sampling_rate = 16000, upsample_initial_channel = 512, upsample_rates = [4, 4, 4, 4], upsample_kernel_sizes = [8, 8, 8, 8], resblock_kernel_sizes = [3, 7, 11], resblock_dilation_sizes = [[1, 3, 5], [1, 3, 5], [1, 3, 5]], initializer_range = 0.01, leaky_relu_slope = 0.1, normalize_before = True, **kwargs )
Parameters
- model_in_dim (
int,optional, defaults to 80) —The number of frequency bins in the input log-mel spectrogram. - sampling_rate (
int,optional, defaults to 16000) —The sampling rate at which the output audio will be generated, expressed in hertz (Hz). - upsample_initial_channel (
int,optional, defaults to 512) —The number of input channels into the upsampling network. - upsample_rates (
tuple[int]orlist[int],optional, defaults to[4, 4, 4, 4]) —A tuple of integers defining the stride of each 1D convolutional layer in the upsampling network. Thelength ofupsample_rates defines the number of convolutional layers and has to match the length ofupsample_kernel_sizes. - upsample_kernel_sizes (
tuple[int]orlist[int],optional, defaults to[8, 8, 8, 8]) —A tuple of integers defining the kernel size of each 1D convolutional layer in the upsampling network. Thelength ofupsample_kernel_sizes defines the number of convolutional layers and has to match the length ofupsample_rates. - resblock_kernel_sizes (
tuple[int]orlist[int],optional, defaults to[3, 7, 11]) —A tuple of integers defining the kernel sizes of the 1D convolutional layers in the multi-receptive fieldfusion (MRF) module. - resblock_dilation_sizes (
tuple[tuple[int]]orlist[list[int]],optional, defaults to[[1, 3, 5], [1, 3, 5], [1, 3, 5]]) —A nested tuple of integers defining the dilation rates of the dilated 1D convolutional layers in themulti-receptive field fusion (MRF) module. - initializer_range (
float,optional, defaults to 0.01) —The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - leaky_relu_slope (
float,optional, defaults to 0.1) —The angle of the negative slope used by the leaky ReLU activation. - normalize_before (
bool,optional, defaults toTrue) —Whether or not to normalize the spectrogram before vocoding using the vocoder’s learned mean and variance.
This is the configuration class to store the configuration of a SpeechT5HifiGan model. It is used to instantiate a SpeechT5 HiFi-GAN vocoder model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the SpeechT5 microsoft/speecht5_hifigan architecture.
Configuration objects inherit from PreTrainedConfig and can be used to control the model outputs. Read the documentation from PreTrainedConfig for more information.
Example:
>>> from transformers import SpeechT5HifiGan, SpeechT5HifiGanConfig

>>> # Initializing a "microsoft/speecht5_hifigan" style configuration
>>> configuration = SpeechT5HifiGanConfig()

>>> # Initializing a model (with random weights) from the "microsoft/speecht5_hifigan" style configuration
>>> model = SpeechT5HifiGan(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
SpeechT5Tokenizer
class transformers.SpeechT5Tokenizer
<source>( vocab_file, bos_token = '<s>', eos_token = '</s>', unk_token = '<unk>', pad_token = '<pad>', normalize = False, sp_model_kwargs: typing.Optional[dict[str, typing.Any]] = None, **kwargs )
Parameters
- vocab_file (
str) — SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer. - bos_token (
str,optional, defaults to"<s>") —The begin of sequence token. - eos_token (
str,optional, defaults to"</s>") —The end of sequence token. - unk_token (
str,optional, defaults to"<unk>") —The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be thistoken instead. - pad_token (
str,optional, defaults to"<pad>") —The token used for padding, for example when batching sequences of different lengths. - normalize (
bool, optional, defaults to False) — Whether to convert numeric quantities in the text to their spelled-out English counterparts. - sp_model_kwargs (
dict, optional) — Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for SentencePiece can be used, among other things, to set:
- enable_sampling: Enable subword regularization.
- nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
  - nbest_size = {0, 1}: No sampling is performed.
  - nbest_size > 1: samples from the nbest_size results.
  - nbest_size < 0: assumes that nbest_size is infinite and samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm.
- alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout.
- sp_model (
SentencePieceProcessor) — The SentencePiece processor that is used for every conversion (string, tokens and IDs).
Construct a SpeechT5 tokenizer. Based on SentencePiece.
This tokenizer inherits from PreTrainedTokenizer, which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
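A minimal usage sketch (assuming the microsoft/speecht5_tts checkpoint, whose tokenizer files are hosted on the Hub):

>>> from transformers import SpeechT5Tokenizer

>>> tokenizer = SpeechT5Tokenizer.from_pretrained("microsoft/speecht5_tts")
>>> encoded = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> encoded["input_ids"].shape  # (batch_size, sequence_length)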
__call__
<source>( text: Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput], None] = None, text_pair: Optional[Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]]] = None, text_target: Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput], None] = None, text_pair_target: Optional[Union[TextInput, PreTokenizedInput, list[TextInput], list[PreTokenizedInput]]] = None, add_special_tokens: bool = True, padding: Union[bool, str, PaddingStrategy] = False, truncation: Union[bool, str, TruncationStrategy, None] = None, max_length: Optional[int] = None, stride: int = 0, is_split_into_words: bool = False, pad_to_multiple_of: Optional[int] = None, padding_side: Optional[str] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs ) → BatchEncoding
Parameters
- text (
str,list[str],list[list[str]],optional) —The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings(pretokenized string). If the sequences are provided as list of strings (pretokenized), you must setis_split_into_words=True(to lift the ambiguity with a batch of sequences). - text_pair (
str,list[str],list[list[str]],optional) —The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings(pretokenized string). If the sequences are provided as list of strings (pretokenized), you must setis_split_into_words=True(to lift the ambiguity with a batch of sequences). - text_target (
str,list[str],list[list[str]],optional) —The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or alist of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized),you must setis_split_into_words=True(to lift the ambiguity with a batch of sequences). - text_pair_target (
str,list[str],list[list[str]],optional) —The sequence or batch of sequences to be encoded as target texts. Each sequence can be a string or alist of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized),you must setis_split_into_words=True(to lift the ambiguity with a batch of sequences). - add_special_tokens (
bool,optional, defaults toTrue) —Whether or not to add special tokens when encoding the sequences. This will use the underlyingPretrainedTokenizerBase.build_inputs_with_special_tokensfunction, which defines which tokens areautomatically added to the input ids. This is useful if you want to addbosoreostokensautomatically. - padding (
bool,strorPaddingStrategy,optional, defaults toFalse) —Activates and controls padding. Accepts the following values:Trueor'longest': Pad to the longest sequence in the batch (or no padding if only a singlesequence is provided).'max_length': Pad to a maximum length specified with the argumentmax_lengthor to the maximumacceptable input length for the model if that argument is not provided.Falseor'do_not_pad'(default): No padding (i.e., can output a batch with sequences of differentlengths).
- truncation (
bool,strorTruncationStrategy,optional, defaults toFalse) —Activates and controls truncation. Accepts the following values:Trueor'longest_first': Truncate to a maximum length specified with the argumentmax_lengthorto the maximum acceptable input length for the model if that argument is not provided. This willtruncate token by token, removing a token from the longest sequence in the pair if a pair ofsequences (or a batch of pairs) is provided.'only_first': Truncate to a maximum length specified with the argumentmax_lengthor to themaximum acceptable input length for the model if that argument is not provided. This will onlytruncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.'only_second': Truncate to a maximum length specified with the argumentmax_lengthor to themaximum acceptable input length for the model if that argument is not provided. This will onlytruncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.Falseor'do_not_truncate'(default): No truncation (i.e., can output batch with sequence lengthsgreater than the model maximum admissible input size).
- max_length (
int,optional) —Controls the maximum length to use by one of the truncation/padding parameters.If left unset or set to
None, this will use the predefined model maximum length if a maximum lengthis required by one of the truncation/padding parameters. If the model has no specific maximum inputlength (like XLNet) truncation/padding to a maximum length will be deactivated. - stride (
int,optional, defaults to 0) —If set to a number along withmax_length, the overflowing tokens returned whenreturn_overflowing_tokens=Truewill contain some tokens from the end of the truncated sequencereturned to provide some overlap between truncated and overflowing sequences. The value of thisargument defines the number of overlapping tokens. - is_split_into_words (
bool,optional, defaults toFalse) —Whether or not the input is already pre-tokenized (e.g., split into words). If set toTrue, thetokenizer assumes the input is already split into words (for instance, by splitting it on whitespace)which it will tokenize. This is useful for NER or token classification. - pad_to_multiple_of (
int,optional) —If set will pad the sequence to a multiple of the provided value. Requirespaddingto be activated.This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability>= 7.5(Volta). - padding_side (
str,optional) —The side on which the model should have padding applied. Should be selected between [‘right’, ‘left’].Default value is picked from the class attribute of the same name. - return_tensors (
strorTensorType,optional) —If set, will return tensors instead of list of python integers. Acceptable values are:'pt': Return PyTorchtorch.Tensorobjects.'np': Return Numpynp.ndarrayobjects.
- return_token_type_ids (
bool,optional) —Whether to return token type IDs. If left to the default, will return the token type IDs according tothe specific tokenizer’s default, defined by thereturn_outputsattribute. - return_attention_mask (
bool,optional) —Whether to return the attention mask. If left to the default, will return the attention mask accordingto the specific tokenizer’s default, defined by thereturn_outputsattribute. - return_overflowing_tokens (
bool,optional, defaults toFalse) —Whether or not to return overflowing token sequences. If a pair of sequences of input ids (or a batchof pairs) is provided withtruncation_strategy = longest_firstorTrue, an error is raised insteadof returning overflowing tokens. - return_special_tokens_mask (
bool,optional, defaults toFalse) —Whether or not to return special tokens mask information. - return_offsets_mapping (
bool,optional, defaults toFalse) —Whether or not to return(char_start, char_end)for each token.This is only available on fast tokenizers inheriting fromPreTrainedTokenizerFast, if usingPython’s tokenizer, this method will raise
NotImplementedError. - return_length (
bool,optional, defaults toFalse) —Whether or not to return the lengths of the encoded inputs. - verbose (
bool,optional, defaults toTrue) —Whether or not to print more information and warnings. - **kwargs — passed to the
self.tokenize()method
Returns
ABatchEncoding with the following fields:
input_ids — List of token ids to be fed to a model.
token_type_ids — List of token type ids to be fed to a model (when
return_token_type_ids=Trueorif“token_type_ids” is inself.model_input_names).attention_mask — List of indices specifying which tokens should be attended to by the model (when
return_attention_mask=Trueor if“attention_mask” is inself.model_input_names).overflowing_tokens — List of overflowing tokens sequences (when a
max_lengthis specified andreturn_overflowing_tokens=True).num_truncated_tokens — Number of tokens truncated (when a
max_lengthis specified andreturn_overflowing_tokens=True).special_tokens_mask — List of 0s and 1s, with 1 specifying added special tokens and 0 specifyingregular sequence tokens (when
add_special_tokens=Trueandreturn_special_tokens_mask=True).length — The length of the inputs (when
return_length=True)
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences.
save_vocabulary
<source>( save_directory: str, filename_prefix: typing.Optional[str] = None )
decode
<source>( token_ids: Union[int, list[int], np.ndarray, torch.Tensor], skip_special_tokens: bool = False, clean_up_tokenization_spaces: Optional[bool] = None, **kwargs ) → str
Parameters
- token_ids (
Union[int, list[int], np.ndarray, torch.Tensor]) —List of tokenized input ids. Can be obtained using the__call__method. - skip_special_tokens (
bool,optional, defaults toFalse) —Whether or not to remove special tokens in the decoding. - clean_up_tokenization_spaces (
bool,optional) —Whether or not to clean up the tokenization spaces. IfNone, will default toself.clean_up_tokenization_spaces. - kwargs (additional keyword arguments,optional) —Will be passed to the underlying model specific decode method.
Returns
str
The decoded sentence.
Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces.
Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).
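For instance, decoding the ids produced by __call__ roughly recovers the input text (a sketch reusing the tokenizer instantiated in the example above):

>>> ids = tokenizer("Hello world")["input_ids"]
>>> decoded = tokenizer.decode(ids, skip_special_tokens=True)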
batch_decode
<source>( sequences: Union[list[int], list[list[int]], np.ndarray, torch.Tensor], skip_special_tokens: bool = False, clean_up_tokenization_spaces: Optional[bool] = None, **kwargs ) → list[str]
Parameters
- sequences (
Union[list[int], list[list[int]], np.ndarray, torch.Tensor]) —List of tokenized input ids. Can be obtained using the__call__method. - skip_special_tokens (
bool,optional, defaults toFalse) —Whether or not to remove special tokens in the decoding. - clean_up_tokenization_spaces (
bool,optional) —Whether or not to clean up the tokenization spaces. IfNone, will default toself.clean_up_tokenization_spaces. - kwargs (additional keyword arguments,optional) —Will be passed to the underlying model specific decode method.
Returns
list[str]
The list of decoded sentences.
Convert a list of lists of token ids into a list of strings by calling decode.
SpeechT5FeatureExtractor
class transformers.SpeechT5FeatureExtractor
<source>( feature_size: int = 1, sampling_rate: int = 16000, padding_value: float = 0.0, do_normalize: bool = False, num_mel_bins: int = 80, hop_length: int = 16, win_length: int = 64, win_function: str = 'hann_window', frame_signal_scale: float = 1.0, fmin: float = 80, fmax: float = 7600, mel_floor: float = 1e-10, reduction_factor: int = 2, return_attention_mask: bool = True, **kwargs )
Parameters
- feature_size (
int,optional, defaults to 1) —The feature dimension of the extracted features. - sampling_rate (
int,optional, defaults to 16000) —The sampling rate at which the audio files should be digitalized expressed in hertz (Hz). - padding_value (
float,optional, defaults to 0.0) —The value that is used to fill the padding values. - do_normalize (
bool,optional, defaults toFalse) —Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantlyimprove the performance for some models. - num_mel_bins (
int,optional, defaults to 80) —The number of mel-frequency bins in the extracted spectrogram features. - hop_length (
int,optional, defaults to 16) —Number of ms between windows. Otherwise referred to as “shift” in many papers. - win_length (
int,optional, defaults to 64) —Number of ms per window. - win_function (
str,optional, defaults to"hann_window") —Name for the window function used for windowing, must be accessible viatorch.{win_function} - frame_signal_scale (
float,optional, defaults to 1.0) —Constant multiplied in creating the frames before applying DFT. This argument is deprecated. - fmin (
float,optional, defaults to 80) —Minimum mel frequency in Hz. - fmax (
float,optional, defaults to 7600) —Maximum mel frequency in Hz. - mel_floor (
float,optional, defaults to 1e-10) —Minimum value of mel frequency banks. - reduction_factor (
int,optional, defaults to 2) —Spectrogram length reduction factor. This argument is deprecated. - return_attention_mask (
bool,optional, defaults toTrue) —Whether or notcall() should returnattention_mask.
Constructs a SpeechT5 feature extractor.
This class can pre-process a raw speech signal by (optionally) normalizing to zero-mean unit-variance, for use by the SpeechT5 speech encoder pre-net.
This class can also extract log-mel filter bank features from raw speech, for use by the SpeechT5 speech decoder pre-net.
This feature extractor inherits from SequenceFeatureExtractor, which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
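A small sketch of waveform feature extraction; the one-second random waveform is just placeholder data:

>>> import numpy as np
>>> from transformers import SpeechT5FeatureExtractor

>>> feature_extractor = SpeechT5FeatureExtractor.from_pretrained("microsoft/speecht5_asr")
>>> waveform = np.random.randn(16000).astype(np.float32)  # 1 second of mono audio at 16 kHz
>>> features = feature_extractor(audio=waveform, sampling_rate=16000, return_tensors="pt")
>>> features["input_values"].shape  # (batch_size, sequence_length)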
__call__
<source>( audio: typing.Union[numpy.ndarray, list[float], list[numpy.ndarray], list[list[float]], NoneType] = None, audio_target: typing.Union[numpy.ndarray, list[float], list[numpy.ndarray], list[list[float]], NoneType] = None, padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False, max_length: typing.Optional[int] = None, truncation: bool = False, pad_to_multiple_of: typing.Optional[int] = None, return_attention_mask: typing.Optional[bool] = None, return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None, sampling_rate: typing.Optional[int] = None, **kwargs )
Parameters
- audio (
np.ndarray,list[float],list[np.ndarray],list[list[float]],optional) —The sequence or batch of sequences to be processed. Each sequence can be a numpy array, a list of floatvalues, a list of numpy arrays or a list of list of float values. This outputs waveform features. Mustbe mono channel audio, not stereo, i.e. single float per timestep. - audio_target (
np.ndarray,list[float],list[np.ndarray],list[list[float]],optional) —The sequence or batch of sequences to be processed as targets. Each sequence can be a numpy array, alist of float values, a list of numpy arrays or a list of list of float values. This outputs log-melspectrogram features. - padding (
bool,strorPaddingStrategy,optional, defaults toFalse) —Select a strategy to pad the returned sequences (according to the model’s padding side and paddingindex) among:Trueor'longest': Pad to the longest sequence in the batch (or no padding if only a singlesequence if provided).'max_length': Pad to a maximum length specified with the argumentmax_lengthor to the maximumacceptable input length for the model if that argument is not provided.Falseor'do_not_pad'(default): No padding (i.e., can output a batch with sequences of differentlengths).
- max_length (
int,optional) —Maximum length of the returned list and optionally padding length (see above). - truncation (
bool) —Activates truncation to cut input sequences longer thanmax_length tomax_length. - pad_to_multiple_of (
int,optional) —If set will pad the sequence to a multiple of the provided value.This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
>= 7.5(Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128. - return_attention_mask (
bool,optional) —Whether to return the attention mask. If left to the default, will return the attention mask accordingto the specific feature_extractor’s default. - return_tensors (
strorTensorType,optional) —If set, will return tensors instead of list of python integers. Acceptable values are:'pt': Return PyTorchtorch.Tensorobjects.'np': Return Numpynp.ndarrayobjects.
- sampling_rate (
int,optional) —The sampling rate at which theaudiooraudio_targetinput was sampled. It is strongly recommendedto passsampling_rateat the forward call to prevent silent errors.
Main method to featurize and prepare for the model one or several sequence(s).
Pass in a value for audio to extract waveform features. Pass in a value for audio_target to extract log-mel spectrogram features.
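A sketch contrasting the two paths, reusing the placeholder waveform from above; the exact output keys are worth checking against the installed version:

>>> # waveform features for the speech encoder pre-net
>>> encoder_features = feature_extractor(audio=waveform, sampling_rate=16000, return_tensors="pt")
>>> # log-mel spectrogram features for the speech decoder pre-net
>>> target_features = feature_extractor(audio_target=waveform, sampling_rate=16000, return_tensors="pt")
>>> list(encoder_features.keys()), list(target_features.keys())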
SpeechT5Processor
class transformers.SpeechT5Processor
<source>( feature_extractor, tokenizer )
Parameters
- feature_extractor (
SpeechT5FeatureExtractor) —An instance ofSpeechT5FeatureExtractor. The feature extractor is a required input. - tokenizer (
SpeechT5Tokenizer) —An instance ofSpeechT5Tokenizer. The tokenizer is a required input.
Constructs a SpeechT5 processor which wraps a feature extractor and a tokenizer into a single processor.
SpeechT5Processor offers all the functionalities of SpeechT5FeatureExtractor and SpeechT5Tokenizer. See the docstring of __call__() and decode() for more information.
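A minimal sketch (assuming the microsoft/speecht5_tts checkpoint):

>>> from transformers import SpeechT5Processor

>>> processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
>>> inputs = processor(text="Hello, my dog is cute", return_tensors="pt")
>>> inputs["input_ids"].shape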
__call__
<source>( *args, **kwargs )
Processes audio and text input, as well as audio and text targets.
You can process audio by using the argument audio, or process audio targets by using the argument audio_target. This forwards the arguments to SpeechT5FeatureExtractor's __call__().
You can process text by using the argument text, or process text labels by using the argument text_target. This forwards the arguments to SpeechT5Tokenizer's __call__().
Valid input combinations are:
- text only
- audio only
- text_target only
- audio_target only
- text and audio_target
- audio and audio_target
- text and text_target
- audio and text_target
Please refer to the docstring of the above two methods for more information.
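For example, a text-to-speech training pair combines text with audio_target. The sketch below uses a placeholder waveform, and the exact set of returned keys should be checked against the installed version:

>>> import numpy as np

>>> waveform = np.random.randn(16000).astype(np.float32)  # placeholder 16 kHz mono audio
>>> example = processor(
...     text="Hello, my dog is cute",
...     audio_target=waveform,
...     sampling_rate=16000,
...     return_tensors="pt",
... )
>>> list(example.keys())  # typically input_ids, attention_mask and spectrogram labels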
pad
<source>( *args, **kwargs )
Collates the audio and text inputs, as well as their targets, into a padded batch.
Audio inputs are padded by SpeechT5FeatureExtractor's pad(). Text inputs are padded by SpeechT5Tokenizer's pad().
Valid input combinations are:
- input_ids only
- input_values only
- labels only, either log-mel spectrograms or text tokens
- input_ids and log-mel spectrogram labels
- input_values and text labels
Please refer to the docstring of the above two methods for more information.
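A sketch of how pad() is typically used as a data collator for text-to-speech fine-tuning, assuming each example was produced by the processor with text and audio_target and therefore carries input_ids and spectrogram labels (key names follow the common fine-tuning recipe, not a guarantee of this exact API):

>>> examples = [
...     processor(text=t, audio_target=waveform, sampling_rate=16000)
...     for t in ["Hello world", "Hello again"]
... ]
>>> batch = processor.pad(
...     input_ids=[{"input_ids": ex["input_ids"]} for ex in examples],
...     labels=[{"input_values": ex["labels"]} for ex in examples],
...     return_tensors="pt",
... )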
from_pretrained
<source>( pretrained_model_name_or_path: typing.Union[str, os.PathLike], cache_dir: typing.Union[str, os.PathLike, NoneType] = None, force_download: bool = False, local_files_only: bool = False, token: typing.Union[str, bool, NoneType] = None, revision: str = 'main', **kwargs )
Parameters
- pretrained_model_name_or_path (
str or os.PathLike) — This can be either:
- a string, the model id of a pretrained feature_extractor hosted inside a model repo on huggingface.co.
- a path to a directory containing a feature extractor file saved using the save_pretrained() method, e.g., ./my_model_directory/.
- a path or url to a saved feature extractor JSON file, e.g., ./my_model_directory/preprocessor_config.json.
- **kwargs —Additional keyword arguments passed along to bothfrom_pretrained() and
~tokenization_utils_base.PreTrainedTokenizer.from_pretrained.
Instantiate a processor associated with a pretrained model.
This class method is simply calling the feature extractor from_pretrained(), image processor ImageProcessingMixin and the tokenizer ~tokenization_utils_base.PreTrainedTokenizer.from_pretrained methods. Please refer to the docstrings of the methods above for more information.
save_pretrained
<source>( save_directory, push_to_hub: bool = False, **kwargs )
Parameters
- save_directory (
stroros.PathLike) —Directory where the feature extractor JSON file and the tokenizer files will be saved (directory willbe created if it does not exist). - push_to_hub (
bool,optional, defaults toFalse) —Whether or not to push your model to the Hugging Face model hub after saving it. You can specify therepository you want to push to withrepo_id(will default to the name ofsave_directoryin yournamespace). - kwargs (
dict[str, Any],optional) —Additional key word arguments passed along to thepush_to_hub() method.
Saves the attributes of this processor (feature extractor, tokenizer…) in the specified directory so that it can be reloaded using the from_pretrained() method.
This class method is simply calling the feature extractor's save_pretrained() and the tokenizer's save_pretrained(). Please refer to the docstrings of the methods above for more information.
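A simple save-and-reload round trip (the local directory name is just an example):

>>> from transformers import SpeechT5Processor

>>> processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
>>> processor.save_pretrained("./my_speecht5_processor")
>>> reloaded_processor = SpeechT5Processor.from_pretrained("./my_speecht5_processor")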
batch_decode
<source>( *args, **kwargs )
This method forwards all its arguments to PreTrainedTokenizer's batch_decode(). Please refer to the docstring of this method for more information.
SpeechT5Model
class transformers.SpeechT5Model
<source>( config: SpeechT5Config, encoder: typing.Optional[torch.nn.modules.module.Module] = None, decoder: typing.Optional[torch.nn.modules.module.Module] = None )
Parameters
- config (SpeechT5Config) —Model configuration class with all the parameters of the model. Initializing with a config file does notload the weights associated with the model, only the configuration. Check out thefrom_pretrained() method to load the model weights.
- encoder (
PreTrainedModel,optional) —The encoder model to use. - decoder (
PreTrainedModel,optional) —The decoder model to use.
The bare SpeechT5 Encoder-Decoder Model outputting raw hidden-states without any specific pre- or post-nets.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
<source>( input_values: typing.Optional[torch.Tensor] = None, attention_mask: typing.Optional[torch.LongTensor] = None, decoder_input_values: typing.Optional[torch.Tensor] = None, decoder_attention_mask: typing.Optional[torch.LongTensor] = None, encoder_outputs: typing.Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: typing.Optional[transformers.cache_utils.Cache] = None, use_cache: typing.Optional[bool] = None, speaker_embeddings: typing.Optional[torch.FloatTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, cache_position: typing.Optional[torch.Tensor] = None ) → transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
- input_values (
torch.Tensorof shape(batch_size, sequence_length)) —Depending on which encoder is being used, theinput_valuesare either: float values of the input rawspeech waveform, or indices of input sequence tokens in the vocabulary, or hidden states. - attention_mask (
torch.LongTensorof shape(batch_size, sequence_length),optional) —Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]:- 1 for tokens that arenot masked,
- 0 for tokens that aremasked.
- decoder_input_values (
torch.Tensorof shape(batch_size, target_sequence_length),optional) —Depending on which decoder is being used, thedecoder_input_valuesare either: float values of log-melfilterbank features extracted from the raw speech waveform, or indices of decoder input sequence tokens inthe vocabulary, or hidden states. - decoder_attention_mask (
torch.LongTensorof shape(batch_size, target_sequence_length),optional) —Default behavior: generate a tensor that ignores pad tokens indecoder_input_values. Causal mask willalso be used by default.If you want to change padding behavior, you should read
SpeechT5Decoder._prepare_decoder_attention_maskand modify to your needs. See diagram 1 inthe paper for moreinformation on the default strategy. - encoder_outputs (
tuple[tuple[torch.FloatTensor]],optional) —Tuple consists of (last_hidden_state,optional:hidden_states,optional:attentions)last_hidden_stateof shape(batch_size, sequence_length, hidden_size),optional) is a sequence ofhidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. - past_key_values (
~cache_utils.Cache,optional) —Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attentionblocks) that can be used to speed up sequential decoding. This typically consists in thepast_key_valuesreturned by the model at a previous stage of decoding, whenuse_cache=Trueorconfig.use_cache=True.OnlyCache instance is allowed as input, see ourkv cache guide.If no
past_key_valuesare passed,DynamicCache will be initialized by default.The model will output the same cache format that is fed as input.
If
past_key_valuesare used, the user is expected to input only unprocessedinput_ids(those that don’thave their past key value states given to this model) of shape(batch_size, unprocessed_length)instead of allinput_idsof shape(batch_size, sequence_length). - use_cache (
bool,optional) —If set toTrue,past_key_valueskey value states are returned and can be used to speed up decoding (seepast_key_values). - speaker_embeddings (
torch.FloatTensorof shape(batch_size, config.speaker_embedding_dim),optional) —Tensor containing the speaker embeddings. - output_attentions (
bool,optional) —Whether or not to return the attentions tensors of all attention layers. Seeattentionsunder returnedtensors for more detail. - output_hidden_states (
bool,optional) —Whether or not to return the hidden states of all layers. Seehidden_statesunder returned tensors formore detail. - return_dict (
bool,optional) —Whether or not to return aModelOutput instead of a plain tuple. - cache_position (
torch.Tensorof shape(sequence_length),optional) —Indices depicting the position of the input sequence tokens in the sequence. Contrarily toposition_ids,this tensor is not affected by padding. It is used to update the cache in the correct position and to inferthe complete sequence length.
Returns
transformers.modeling_outputs.Seq2SeqModelOutput ortuple(torch.FloatTensor)
Atransformers.modeling_outputs.Seq2SeqModelOutput or a tuple oftorch.FloatTensor (ifreturn_dict=False is passed or whenconfig.return_dict=False) comprising variouselements depending on the configuration (SpeechT5Config) and inputs.
last_hidden_state (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model.If
past_key_valuesis used only the last hidden-state of the sequences of shape(batch_size, 1, hidden_size)is output.past_key_values (
EncoderDecoderCache,optional, returned whenuse_cache=Trueis passed or whenconfig.use_cache=True) — It is aEncoderDecoderCache instance. For more details, see ourkv cache guide.Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attentionblocks) that can be used (see
past_key_valuesinput) to speed up sequential decoding.decoder_hidden_states (
tuple(torch.FloatTensor),optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftorch.FloatTensor(one for the output of the embeddings, if the model has an embedding layer, +one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (
tuple(torch.FloatTensor),optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length).Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in theself-attention heads.
cross_attentions (
tuple(torch.FloatTensor),optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length).Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute theweighted average in the cross-attention heads.
encoder_last_hidden_state (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size),optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.encoder_hidden_states (
tuple(torch.FloatTensor),optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftorch.FloatTensor(one for the output of the embeddings, if the model has an embedding layer, +one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (
tuple(torch.FloatTensor),optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length).Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in theself-attention heads.
The SpeechT5Model forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
SpeechT5ForSpeechToText
class transformers.SpeechT5ForSpeechToText
<source>(config: SpeechT5Config)
Parameters
- config (SpeechT5Config) —Model configuration class with all the parameters of the model. Initializing with a config file does notload the weights associated with the model, only the configuration. Check out thefrom_pretrained() method to load the model weights.
SpeechT5 Model with a speech encoder and a text decoder.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
<source>( input_values: typing.Optional[torch.FloatTensor] = None, attention_mask: typing.Optional[torch.LongTensor] = None, decoder_input_ids: typing.Optional[torch.LongTensor] = None, decoder_attention_mask: typing.Optional[torch.LongTensor] = None, encoder_outputs: typing.Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: typing.Optional[transformers.cache_utils.Cache] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, labels: typing.Optional[torch.LongTensor] = None, cache_position: typing.Optional[torch.Tensor] = None ) → transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
- input_values (
torch.FloatTensorof shape(batch_size, sequence_length)) —Float values of input raw speech waveform. Values can be obtained by loading a.flac or.wav audio fileinto an array of typelist[float], anumpy.ndarrayor atorch.Tensor,e.g. via the torchcodec library(pip install torchcodec) or the soundfile library (pip install soundfile).To prepare the array intoinput_values, theSpeechT5Processor should be used for paddingand conversion into a tensor of typetorch.FloatTensor. SeeSpeechT5Processor.call() for details. - attention_mask (
torch.LongTensorof shape(batch_size, sequence_length),optional) —Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]:- 1 for tokens that arenot masked,
- 0 for tokens that aremasked.
- decoder_input_ids (
torch.LongTensorof shape(batch_size, target_sequence_length),optional) —Indices of decoder input sequence tokens in the vocabulary.Indices can be obtained usingSpeechT5Tokenizer. SeePreTrainedTokenizer.encode() andPreTrainedTokenizer.call() for details.
SpeechT5 uses the
eos_token_idas the starting token fordecoder_input_idsgeneration. Ifpast_key_valuesis used, optionally only the lastdecoder_input_idshave to be input (seepast_key_values). - decoder_attention_mask (
torch.LongTensorof shape(batch_size, target_sequence_length),optional) —Default behavior: generate a tensor that ignores pad tokens indecoder_input_values. Causal mask willalso be used by default.If you want to change padding behavior, you should read
SpeechT5Decoder._prepare_decoder_attention_maskand modify to your needs. See diagram 1 inthe paper for moreinformation on the default strategy. - encoder_outputs (
tuple[tuple[torch.FloatTensor]],optional) —Tuple consists of (last_hidden_state,optional:hidden_states,optional:attentions)last_hidden_stateof shape(batch_size, sequence_length, hidden_size),optional) is a sequence ofhidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. - past_key_values (
~cache_utils.Cache,optional) —Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attentionblocks) that can be used to speed up sequential decoding. This typically consists in thepast_key_valuesreturned by the model at a previous stage of decoding, whenuse_cache=Trueorconfig.use_cache=True.OnlyCache instance is allowed as input, see ourkv cache guide.If no
past_key_valuesare passed,DynamicCache will be initialized by default.The model will output the same cache format that is fed as input.
If
past_key_valuesare used, the user is expected to input only unprocessedinput_ids(those that don’thave their past key value states given to this model) of shape(batch_size, unprocessed_length)instead of allinput_idsof shape(batch_size, sequence_length). - use_cache (
bool,optional) —If set toTrue,past_key_valueskey value states are returned and can be used to speed up decoding (seepast_key_values). - output_attentions (
bool,optional) —Whether or not to return the attentions tensors of all attention layers. Seeattentionsunder returnedtensors for more detail. - output_hidden_states (
bool,optional) —Whether or not to return the hidden states of all layers. Seehidden_statesunder returned tensors formore detail. - return_dict (
bool,optional) —Whether or not to return aModelOutput instead of a plain tuple. - labels (
torch.LongTensorof shape(batch_size, sequence_length),optional) —Labels for computing the language modeling loss. Indices should either be in[0, ..., config.vocab_size]or -100 (seeinput_idsdocstring). Tokens with indices set to-100are ignored (masked), the loss isonly computed for the tokens with labels in[0, ..., config.vocab_size].Label indices can be obtained usingSpeechT5Tokenizer. SeePreTrainedTokenizer.encode() andPreTrainedTokenizer.call() for details.
- cache_position (
torch.Tensorof shape(sequence_length),optional) —Indices depicting the position of the input sequence tokens in the sequence. Contrarily toposition_ids,this tensor is not affected by padding. It is used to update the cache in the correct position and to inferthe complete sequence length.
Returns
transformers.modeling_outputs.Seq2SeqLMOutput ortuple(torch.FloatTensor)
Atransformers.modeling_outputs.Seq2SeqLMOutput or a tuple oftorch.FloatTensor (ifreturn_dict=False is passed or whenconfig.return_dict=False) comprising variouselements depending on the configuration (SpeechT5Config) and inputs.
loss (
torch.FloatTensorof shape(1,),optional, returned whenlabelsis provided) — Language modeling loss.logits (
torch.FloatTensorof shape(batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).past_key_values (
EncoderDecoderCache,optional, returned whenuse_cache=Trueis passed or whenconfig.use_cache=True) — It is aEncoderDecoderCache instance. For more details, see ourkv cache guide.Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attentionblocks) that can be used (see
past_key_valuesinput) to speed up sequential decoding.decoder_hidden_states (
tuple(torch.FloatTensor),optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftorch.FloatTensor(one for the output of the embeddings, if the model has an embedding layer, +one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (
tuple(torch.FloatTensor),optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length).Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in theself-attention heads.
cross_attentions (
tuple(torch.FloatTensor),optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length).Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute theweighted average in the cross-attention heads.
encoder_last_hidden_state (
torch.FloatTensorof shape(batch_size, sequence_length, hidden_size),optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.encoder_hidden_states (
tuple(torch.FloatTensor),optional, returned whenoutput_hidden_states=Trueis passed or whenconfig.output_hidden_states=True) — Tuple oftorch.FloatTensor(one for the output of the embeddings, if the model has an embedding layer, +one for the output of each layer) of shape(batch_size, sequence_length, hidden_size).Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (
tuple(torch.FloatTensor),optional, returned whenoutput_attentions=Trueis passed or whenconfig.output_attentions=True) — Tuple oftorch.FloatTensor(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length).Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in theself-attention heads.
The SpeechT5ForSpeechToText forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import SpeechT5Processor, SpeechT5ForSpeechToText
>>> from datasets import load_dataset

>>> dataset = load_dataset(
...     "hf-internal-testing/librispeech_asr_demo", "clean", split="validation"
... )  # doctest: +IGNORE_RESULT
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_asr")
>>> model = SpeechT5ForSpeechToText.from_pretrained("microsoft/speecht5_asr")

>>> # audio file is decoded on the fly
>>> inputs = processor(audio=dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> predicted_ids = model.generate(**inputs, max_length=100)

>>> # transcribe speech
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
>>> transcription[0]
'mister quilter is the apostle of the middle classes and we are glad to welcome his gospel'
SpeechT5ForTextToSpeech
classtransformers.SpeechT5ForTextToSpeech
<source>(config: SpeechT5Config)
Parameters
- config (SpeechT5Config) —Model configuration class with all the parameters of the model. Initializing with a config file does notload the weights associated with the model, only the configuration. Check out thefrom_pretrained() method to load the model weights.
SpeechT5 Model with a text encoder and a speech decoder.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
<source>(input_ids: typing.Optional[torch.LongTensor] = Noneattention_mask: typing.Optional[torch.LongTensor] = Nonedecoder_input_values: typing.Optional[torch.FloatTensor] = Nonedecoder_attention_mask: typing.Optional[torch.LongTensor] = Noneencoder_outputs: typing.Optional[tuple[tuple[torch.FloatTensor]]] = Nonepast_key_values: typing.Optional[transformers.cache_utils.Cache] = Noneuse_cache: typing.Optional[bool] = Noneoutput_attentions: typing.Optional[bool] = Noneoutput_hidden_states: typing.Optional[bool] = Nonereturn_dict: typing.Optional[bool] = Nonespeaker_embeddings: typing.Optional[torch.FloatTensor] = Nonelabels: typing.Optional[torch.FloatTensor] = Nonestop_labels: typing.Optional[torch.Tensor] = Nonecache_position: typing.Optional[torch.Tensor] = None)→transformers.modeling_outputs.Seq2SeqSpectrogramOutput ortuple(torch.FloatTensor)
Parameters
- input_ids (
torch.LongTensorof shape(batch_size, sequence_length)) —Indices of input sequence tokens in the vocabulary.Indices can be obtained usingSpeechT5Tokenizer. Seeencode() andcall() for details.
- attention_mask (
torch.LongTensorof shape(batch_size, sequence_length),optional) —Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]:- 1 for tokens that arenot masked,
- 0 for tokens that aremasked.
- decoder_input_values (
torch.FloatTensorof shape(batch_size, sequence_length, config.num_mel_bins)) —Float values of input mel spectrogram.SpeechT5 uses an all-zero spectrum as the starting token for
decoder_input_valuesgeneration. Ifpast_key_valuesis used, optionally only the lastdecoder_input_valueshave to be input (seepast_key_values). - decoder_attention_mask (
torch.LongTensorof shape(batch_size, target_sequence_length),optional) —Default behavior: generate a tensor that ignores pad tokens indecoder_input_values. Causal mask willalso be used by default.If you want to change padding behavior, you should read
SpeechT5Decoder._prepare_decoder_attention_maskand modify to your needs. See diagram 1 inthe paper for moreinformation on the default strategy. - encoder_outputs (
tuple[tuple[torch.FloatTensor]],optional) —Tuple consists of (last_hidden_state,optional:hidden_states,optional:attentions)last_hidden_stateof shape(batch_size, sequence_length, hidden_size),optional) is a sequence ofhidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. - past_key_values (
~cache_utils.Cache, optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists of the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True. Only a Cache instance is allowed as input; see our kv cache guide. If no past_key_values are passed, a DynamicCache will be initialized by default. The model will output the same cache format that is fed as input. If past_key_values are used, the user is expected to input only unprocessed input_ids (those that don't have their past key value states given to this model) of shape (batch_size, unprocessed_length) instead of all input_ids of shape (batch_size, sequence_length). - use_cache (
bool,optional) —If set toTrue,past_key_valueskey value states are returned and can be used to speed up decoding (seepast_key_values). - output_attentions (
bool,optional) —Whether or not to return the attentions tensors of all attention layers. Seeattentionsunder returnedtensors for more detail. - output_hidden_states (
bool,optional) —Whether or not to return the hidden states of all layers. Seehidden_statesunder returned tensors formore detail. - return_dict (
bool,optional) —Whether or not to return aModelOutput instead of a plain tuple. - speaker_embeddings (
torch.FloatTensorof shape(batch_size, config.speaker_embedding_dim),optional) —Tensor containing the speaker embeddings. - labels (
torch.FloatTensor of shape (batch_size, sequence_length, config.num_mel_bins), optional) — Float values of the target mel spectrogram. Timesteps set to -100.0 are ignored (masked) for the loss computation. Spectrograms can be obtained using SpeechT5Processor. See SpeechT5Processor.call() for details.
- stop_labels (torch.Tensor of shape (batch_size, sequence_length), optional) — Binary tensor indicating the position of the stop token in the sequence.
- cache_position (torch.Tensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrary to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
Returns
transformers.modeling_outputs.Seq2SeqSpectrogramOutput ortuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqSpectrogramOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (SpeechT5Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Spectrogram generation loss.
spectrogram (torch.FloatTensor of shape (batch_size, sequence_length, num_bins)) — The predicted spectrogram.
past_key_values (EncoderDecoderCache, optional, returned when use_cache=True is passed or when config.use_cache=True) — An EncoderDecoderCache instance. For more details, see our kv cache guide. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The SpeechT5ForTextToSpeech forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan, set_seed
>>> import torch

>>> processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
>>> model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
>>> vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

>>> inputs = processor(text="Hello, my dog is cute", return_tensors="pt")
>>> speaker_embeddings = torch.zeros((1, 512))  # or load xvectors from a file

>>> set_seed(555)  # make deterministic

>>> # generate speech
>>> speech = model.generate(inputs["input_ids"], speaker_embeddings=speaker_embeddings, vocoder=vocoder)
>>> speech.shape
torch.Size([15872])
generate
<source>(input_ids: LongTensorattention_mask: typing.Optional[torch.LongTensor] = Nonespeaker_embeddings: typing.Optional[torch.FloatTensor] = Nonethreshold: float = 0.5minlenratio: float = 0.0maxlenratio: float = 20.0vocoder: typing.Optional[torch.nn.modules.module.Module] = Noneoutput_cross_attentions: bool = Falsereturn_output_lengths: bool = False**kwargs)→tuple(torch.FloatTensor) comprising various elements depending on the inputs
Parameters
- input_ids (
torch.LongTensorof shape(batch_size, sequence_length)) —Indices of input sequence tokens in the vocabulary.Indices can be obtained usingSpeechT5Tokenizer. Seeencode() andcall() for details.
- attention_mask (
torch.LongTensorof shape(batch_size, sequence_length)) —Attention mask from the tokenizer, required for batched inference to signal to the model where toignore padded tokens from the input_ids. - speaker_embeddings (
torch.FloatTensorof shape(batch_size, config.speaker_embedding_dim),optional) —Tensor containing the speaker embeddings. - threshold (
float,optional, defaults to 0.5) —The generated sequence ends when the predicted stop token probability exceeds this value. - minlenratio (
float,optional, defaults to 0.0) —Used to calculate the minimum required length for the output sequence. - maxlenratio (
float,optional, defaults to 20.0) —Used to calculate the maximum allowed length for the output sequence. - vocoder (
nn.Module,optional) —The vocoder that converts the mel spectrogram into a speech waveform. IfNone, the output is the melspectrogram. - output_cross_attentions (
bool,optional, defaults toFalse) —Whether or not to return the attentions tensors of the decoder’s cross-attention layers. - return_output_lengths (
bool,optional, defaults toFalse) —Whether or not to return the concrete spectrogram/waveform lengths.
Returns
tuple(torch.FloatTensor) comprising various elements depending on the inputs
- when return_output_lengths is False
  - spectrogram (optional, returned when no vocoder is provided) torch.FloatTensor of shape (output_sequence_length, config.num_mel_bins) — The predicted log-mel spectrogram.
  - waveform (optional, returned when a vocoder is provided) torch.FloatTensor of shape (num_frames,) — The predicted speech waveform.
  - cross_attentions (optional, returned when output_cross_attentions is True) torch.FloatTensor of shape (config.decoder_layers, config.decoder_attention_heads, output_sequence_length, input_sequence_length) — The outputs of the decoder's cross-attention layers.
- when return_output_lengths is True
  - spectrograms (optional, returned when no vocoder is provided) torch.FloatTensor of shape (batch_size, output_sequence_length, config.num_mel_bins) — The predicted log-mel spectrograms that are padded to the maximum length.
  - spectrogram_lengths (optional, returned when no vocoder is provided) list[Int] — A list of all the concrete lengths for each spectrogram.
  - waveforms (optional, returned when a vocoder is provided) torch.FloatTensor of shape (batch_size, num_frames) — The predicted speech waveforms that are padded to the maximum length.
  - waveform_lengths (optional, returned when a vocoder is provided) list[Int] — A list of all the concrete lengths for each waveform.
  - cross_attentions (optional, returned when output_cross_attentions is True) torch.FloatTensor of shape (batch_size, config.decoder_layers, config.decoder_attention_heads, output_sequence_length, input_sequence_length) — The outputs of the decoder's cross-attention layers.
Converts a sequence of input tokens into a sequence of mel spectrograms, which are subsequently turned into a speech waveform using a vocoder.
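As a rough sketch of calling generate directly with real speaker embeddings and batched text; the "Matthijs/cmu-arctic-xvectors" dataset and the index used below are assumptions for illustration, not part of this API reference:

>>> from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan
>>> from datasets import load_dataset
>>> import torch

>>> processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
>>> model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
>>> vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

>>> # x-vector speaker embeddings; any (batch_size, config.speaker_embedding_dim) float tensor works
>>> embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
>>> speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0)

>>> # batched text input; the attention mask marks the padded positions
>>> inputs = processor(text=["Hello world.", "The weather is nice today."], return_tensors="pt", padding=True)

>>> waveforms, waveform_lengths = model.generate(
...     inputs["input_ids"],
...     attention_mask=inputs["attention_mask"],
...     speaker_embeddings=speaker_embeddings.repeat(2, 1),
...     vocoder=vocoder,
...     return_output_lengths=True,
... )
>>> # waveforms is padded to the longest utterance; waveform_lengths holds the true length of each one

With return_output_lengths=False and a single input, generate returns the waveform (or spectrogram) directly, as in the forward example above.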
SpeechT5ForSpeechToSpeech
classtransformers.SpeechT5ForSpeechToSpeech
<source>(config: SpeechT5Config)
Parameters
- config (SpeechT5Config) —Model configuration class with all the parameters of the model. Initializing with a config file does notload the weights associated with the model, only the configuration. Check out thefrom_pretrained() method to load the model weights.
SpeechT5 Model with a speech encoder and a speech decoder.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
<source>(input_values: typing.Optional[torch.FloatTensor] = Noneattention_mask: typing.Optional[torch.LongTensor] = Nonedecoder_input_values: typing.Optional[torch.FloatTensor] = Nonedecoder_attention_mask: typing.Optional[torch.LongTensor] = Noneencoder_outputs: typing.Optional[tuple[tuple[torch.FloatTensor]]] = Nonepast_key_values: typing.Optional[transformers.cache_utils.Cache] = Noneuse_cache: typing.Optional[bool] = Noneoutput_attentions: typing.Optional[bool] = Noneoutput_hidden_states: typing.Optional[bool] = Nonereturn_dict: typing.Optional[bool] = Nonespeaker_embeddings: typing.Optional[torch.FloatTensor] = Nonelabels: typing.Optional[torch.FloatTensor] = Nonestop_labels: typing.Optional[torch.Tensor] = Nonecache_position: typing.Optional[torch.Tensor] = None)→transformers.modeling_outputs.Seq2SeqSpectrogramOutput ortuple(torch.FloatTensor)
Parameters
- input_values (
torch.FloatTensor of shape (batch_size, sequence_length)) — Float values of the input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type list[float], a numpy.ndarray, or a torch.Tensor, e.g. via the torchcodec library (pip install torchcodec) or the soundfile library (pip install soundfile). To prepare the array into input_values, the SpeechT5Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See SpeechT5Processor.call() for details. - attention_mask (
torch.LongTensorof shape(batch_size, sequence_length),optional) —Mask to avoid performing attention on padding token indices. Mask values selected in[0, 1]:- 1 for tokens that arenot masked,
- 0 for tokens that aremasked.
- decoder_input_values (
torch.FloatTensorof shape(batch_size, sequence_length, config.num_mel_bins)) —Float values of input mel spectrogram.SpeechT5 uses an all-zero spectrum as the starting token for
decoder_input_valuesgeneration. Ifpast_key_valuesis used, optionally only the lastdecoder_input_valueshave to be input (seepast_key_values). - decoder_attention_mask (
torch.LongTensorof shape(batch_size, target_sequence_length),optional) —Default behavior: generate a tensor that ignores pad tokens indecoder_input_values. Causal mask willalso be used by default.If you want to change padding behavior, you should read
SpeechT5Decoder._prepare_decoder_attention_maskand modify to your needs. See diagram 1 inthe paper for moreinformation on the default strategy. - encoder_outputs (
tuple[tuple[torch.FloatTensor]],optional) —Tuple consists of (last_hidden_state,optional:hidden_states,optional:attentions)last_hidden_stateof shape(batch_size, sequence_length, hidden_size),optional) is a sequence ofhidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. - past_key_values (
~cache_utils.Cache, optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists of the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True. Only a Cache instance is allowed as input; see our kv cache guide. If no past_key_values are passed, a DynamicCache will be initialized by default. The model will output the same cache format that is fed as input. If past_key_values are used, the user is expected to input only unprocessed input_ids (those that don't have their past key value states given to this model) of shape (batch_size, unprocessed_length) instead of all input_ids of shape (batch_size, sequence_length). - use_cache (
bool,optional) —If set toTrue,past_key_valueskey value states are returned and can be used to speed up decoding (seepast_key_values). - output_attentions (
bool,optional) —Whether or not to return the attentions tensors of all attention layers. Seeattentionsunder returnedtensors for more detail. - output_hidden_states (
bool,optional) —Whether or not to return the hidden states of all layers. Seehidden_statesunder returned tensors formore detail. - return_dict (
bool,optional) —Whether or not to return aModelOutput instead of a plain tuple. - speaker_embeddings (
torch.FloatTensorof shape(batch_size, config.speaker_embedding_dim),optional) —Tensor containing the speaker embeddings. - labels (
torch.FloatTensor of shape (batch_size, sequence_length, config.num_mel_bins), optional) — Float values of the target mel spectrogram. Spectrograms can be obtained using SpeechT5Processor. See SpeechT5Processor.call() for details.
- stop_labels (torch.Tensor of shape (batch_size, sequence_length), optional) — Binary tensor indicating the position of the stop token in the sequence.
- cache_position (torch.Tensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrary to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
Returns
transformers.modeling_outputs.Seq2SeqSpectrogramOutput ortuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqSpectrogramOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (SpeechT5Config) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Spectrogram generation loss.
spectrogram (torch.FloatTensor of shape (batch_size, sequence_length, num_bins)) — The predicted spectrogram.
past_key_values (EncoderDecoderCache, optional, returned when use_cache=True is passed or when config.use_cache=True) — An EncoderDecoderCache instance. For more details, see our kv cache guide. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The SpeechT5ForSpeechToSpeech forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import SpeechT5Processor, SpeechT5ForSpeechToSpeech, SpeechT5HifiGan, set_seed
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset(
...     "hf-internal-testing/librispeech_asr_demo", "clean", split="validation"
... )  # doctest: +IGNORE_RESULT
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_vc")
>>> model = SpeechT5ForSpeechToSpeech.from_pretrained("microsoft/speecht5_vc")
>>> vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

>>> # audio file is decoded on the fly
>>> inputs = processor(audio=dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")

>>> speaker_embeddings = torch.zeros((1, 512))  # or load xvectors from a file

>>> set_seed(555)  # make deterministic

>>> # generate speech
>>> speech = model.generate_speech(inputs["input_values"], speaker_embeddings, vocoder=vocoder)
>>> speech.shape
torch.Size([77824])
generate_speech
<source>(input_values: FloatTensorspeaker_embeddings: typing.Optional[torch.FloatTensor] = Noneattention_mask: typing.Optional[torch.LongTensor] = Nonethreshold: float = 0.5minlenratio: float = 0.0maxlenratio: float = 20.0vocoder: typing.Optional[torch.nn.modules.module.Module] = Noneoutput_cross_attentions: bool = Falsereturn_output_lengths: bool = False)→tuple(torch.FloatTensor) comprising various elements depending on the inputs
Parameters
- input_values (
torch.FloatTensor of shape (batch_size, sequence_length)) — Float values of the input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type list[float], a numpy.ndarray, or a torch.Tensor, e.g. via the torchcodec library (pip install torchcodec) or the soundfile library (pip install soundfile). To prepare the array into input_values, the SpeechT5Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See SpeechT5Processor.call() for details. - speaker_embeddings (
torch.FloatTensorof shape(batch_size, config.speaker_embedding_dim),optional) —Tensor containing the speaker embeddings. - attention_mask (
torch.LongTensorof shape(batch_size, sequence_length),optional) —Mask to avoid performing convolution and attention on padding token indices. Mask values selected in[0, 1]:- 1 for tokens that arenot masked,
- 0 for tokens that aremasked.
- threshold (
float,optional, defaults to 0.5) —The generated sequence ends when the predicted stop token probability exceeds this value. - minlenratio (
float,optional, defaults to 0.0) —Used to calculate the minimum required length for the output sequence. - maxlenratio (
float,optional, defaults to 20.0) —Used to calculate the maximum allowed length for the output sequence. - vocoder (
nn.Module,optional, defaults toNone) —The vocoder that converts the mel spectrogram into a speech waveform. IfNone, the output is the melspectrogram. - output_cross_attentions (
bool,optional, defaults toFalse) —Whether or not to return the attentions tensors of the decoder’s cross-attention layers. - return_output_lengths (
bool,optional, defaults toFalse) —Whether or not to return the concrete spectrogram/waveform lengths.
Returns
tuple(torch.FloatTensor) comprising various elements depending on the inputs
- when return_output_lengths is False
  - spectrogram (optional, returned when no vocoder is provided) torch.FloatTensor of shape (output_sequence_length, config.num_mel_bins) — The predicted log-mel spectrogram.
  - waveform (optional, returned when a vocoder is provided) torch.FloatTensor of shape (num_frames,) — The predicted speech waveform.
  - cross_attentions (optional, returned when output_cross_attentions is True) torch.FloatTensor of shape (config.decoder_layers, config.decoder_attention_heads, output_sequence_length, input_sequence_length) — The outputs of the decoder's cross-attention layers.
- when return_output_lengths is True
  - spectrograms (optional, returned when no vocoder is provided) torch.FloatTensor of shape (batch_size, output_sequence_length, config.num_mel_bins) — The predicted log-mel spectrograms that are padded to the maximum length.
  - spectrogram_lengths (optional, returned when no vocoder is provided) list[Int] — A list of all the concrete lengths for each spectrogram.
  - waveforms (optional, returned when a vocoder is provided) torch.FloatTensor of shape (batch_size, num_frames) — The predicted speech waveforms that are padded to the maximum length.
  - waveform_lengths (optional, returned when a vocoder is provided) list[Int] — A list of all the concrete lengths for each waveform.
  - cross_attentions (optional, returned when output_cross_attentions is True) torch.FloatTensor of shape (batch_size, config.decoder_layers, config.decoder_attention_heads, output_sequence_length, input_sequence_length) — The outputs of the decoder's cross-attention layers.
Converts a raw speech waveform into a sequence of mel spectrograms, which are subsequently turned back into a speech waveform using a vocoder.
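If no vocoder is passed, generate_speech stops at the log-mel spectrogram, which can then be vocoded separately. A minimal sketch, reusing the model, inputs, speaker_embeddings and vocoder objects from the voice-conversion example above:

>>> # no vocoder: the output is the predicted log-mel spectrogram
>>> spectrogram = model.generate_speech(inputs["input_values"], speaker_embeddings)
>>> spectrogram.shape  # (output_sequence_length, config.num_mel_bins)

>>> # turn the spectrogram into a waveform with the HiFi-GAN vocoder
>>> with torch.no_grad():
...     speech = vocoder(spectrogram)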
SpeechT5HifiGan
classtransformers.SpeechT5HifiGan
<source>(config: SpeechT5HifiGanConfig)
Parameters
- config (SpeechT5HifiGanConfig) —Model configuration class with all the parameters of the model. Initializing with a config file does notload the weights associated with the model, only the configuration. Check out thefrom_pretrained() method to load the model weights.
HiFi-GAN vocoder.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward
<source>(spectrogram: FloatTensor)→torch.FloatTensor
Parameters
- spectrogram (
torch.FloatTensor) — Tensor containing the log-mel spectrograms. Can be batched and of shape (batch_size, sequence_length, config.model_in_dim), or un-batched and of shape (sequence_length, config.model_in_dim).
Returns
torch.FloatTensor
Tensor containing the speech waveform. If the input spectrogram is batched, it will be of shape (batch_size, num_frames). If un-batched, it will be of shape (num_frames,).
Converts a log-mel spectrogram into a speech waveform. Passing a batch of log-mel spectrograms returns a batch of speech waveforms. Passing a single, un-batched log-mel spectrogram returns a single, un-batched speech waveform.
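To illustrate the batched versus un-batched behaviour described above, here is a small sketch that uses random tensors purely to show the expected shapes; a real log-mel spectrogram would come from one of the SpeechT5 models above:

>>> import torch
>>> from transformers import SpeechT5HifiGan

>>> vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

>>> # un-batched input: (sequence_length, config.model_in_dim) -> (num_frames,)
>>> log_mel = torch.randn(100, vocoder.config.model_in_dim)
>>> with torch.no_grad():
...     waveform = vocoder(log_mel)

>>> # batched input: (batch_size, sequence_length, config.model_in_dim) -> (batch_size, num_frames)
>>> log_mel_batch = torch.randn(2, 100, vocoder.config.model_in_dim)
>>> with torch.no_grad():
...     waveforms = vocoder(log_mel_batch)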