
Zamba

PyTorch

This model was released on 2024-04-16 and added to Hugging Face Transformers on 2024-10-04.

Zamba (blog post) is a large language model (LLM) trained by Zyphra and made available under an Apache 2.0 license. Please see the Zyphra Hugging Face repository for model weights.

This model was contributed by pglo.

Model details

Zamba-7B-v1 is a hybrid of state-space model (specifically Mamba) and transformer blocks, and was trained using next-token prediction. Zamba uses a shared transformer layer after every 6 mamba blocks. It uses the Mistral v0.1 tokenizer. We came to this architecture after a series of ablations at small scales. Zamba-7B-v1 was pre-trained on 1T tokens of text and code data.
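This hybrid layout is reflected in the configuration fields documented below. As a quick illustrative sketch (assuming access to the Hugging Face Hub; AutoConfig only downloads the small config file, not the weights), you can inspect those fields directly:

import math
from transformers import AutoConfig

# Download only the configuration, not the model weights.
config = AutoConfig.from_pretrained("Zyphra/Zamba-7B-v1")

# Zamba interleaves mamba blocks with a shared attention layer:
# one shared attention layer every `attn_layer_period` layers, starting at `attn_layer_offset`.
print(config.num_hidden_layers)   # total number of layers (76 by default)
print(config.attn_layer_period)   # 6 -> shared attention after every 6 mamba blocks
print(config.attn_layer_offset)   # 4 -> where the shared layer sits within each period
print(math.ceil(config.hidden_size / 16))  # value of mamba_dt_rank when set to "auto"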

Quick start

Prerequisites

Zamba requires transformers version 4.46.0 or higher:

pip install "transformers>=4.46.0"

In order to run the optimized Mamba implementations, you first need to install mamba-ssm and causal-conv1d:

pip install mamba-ssm "causal-conv1d>=1.2.0"

You also have to have the model on a CUDA device.
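As an optional sanity check (not part of the official setup), the following sketch verifies the transformers version, CUDA availability, and whether the optional kernel packages are importable:

import importlib.util

import torch
import transformers

print("transformers:", transformers.__version__)      # should be >= 4.46.0
print("CUDA available:", torch.cuda.is_available())   # the optimized kernels require a CUDA device

# The fast mamba path is only used if both packages are installed.
for pkg in ("mamba_ssm", "causal_conv1d"):
    print(pkg, "installed:", importlib.util.find_spec(pkg) is not None)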

You can also run the model without the optimized Mamba kernels, but this is not recommended, as it results in significantly worse latency. To do so, specify use_mamba_kernels=False when loading the model.
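For example, a hedged sketch of loading the model with the kernels disabled (useful on machines without mamba-ssm, causal-conv1d, or a CUDA device, at the cost of noticeably slower generation):

import torch
from transformers import AutoModelForCausalLM

# Fall back to the pure-PyTorch mamba path; slower, but it does not require
# mamba-ssm, causal-conv1d, or a CUDA device.
model = AutoModelForCausalLM.from_pretrained(
    "Zyphra/Zamba-7B-v1",
    use_mamba_kernels=False,
    dtype=torch.bfloat16,
)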

Inference

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba-7B-v1")
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba-7B-v1", device_map="auto", dtype=torch.bfloat16)

input_text = "A funny prompt would be "
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))

Model card

The model card can be found in the Zyphra/Zamba-7B-v1 repository on the Hugging Face Hub.

Issues

For issues with model output, or for community discussion, please use the Hugging Face community forum.

License

The model weights are open-sourced via an Apache 2.0 license.

ZambaConfig

class transformers.ZambaConfig

( vocab_size = 32000, tie_word_embeddings = True, hidden_size = 3712, attention_hidden_size = None, intermediate_size = 14848, num_hidden_layers = 76, num_attention_heads = 16, attention_head_dim = None, num_key_value_heads = 16, n_mamba_heads = 2, hidden_act = 'gelu', hidden_mamba_act = 'silu', initializer_range = 0.02, rms_norm_eps = 1e-05, use_cache = True, num_logits_to_keep = 1, pad_token_id = 0, bos_token_id = 1, eos_token_id = 2, max_position_embeddings = 4096, attention_dropout = 0.0, attn_layer_period = 6, attn_layer_offset = 4, use_mamba_kernels = True, mamba_d_state = 16, mamba_d_conv = 4, mamba_expand = 2, mamba_dt_rank = 'auto', time_step_min = 0.001, time_step_max = 0.1, time_step_floor = 0.0001, mamba_conv_bias = True, mamba_proj_bias = False, **kwargs )

Parameters

  • vocab_size (int, optional, defaults to 32000) — Vocabulary size of the Zamba model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling ZambaModel.
  • tie_word_embeddings (bool, optional, defaults to True) — Whether the model's input and output word embeddings should be tied. Note that this is only relevant if the model has an output word embedding layer.
  • hidden_size (int, optional, defaults to 3712) — Dimension of the hidden representations.
  • attention_hidden_size (int, optional) — Dimension of the hidden representations of the inputs to the Attention layer.
  • intermediate_size (int, optional, defaults to 14848) — Dimension of the MLP representations.
  • num_hidden_layers (int, optional, defaults to 76) — Number of hidden layers in the model.
  • num_attention_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer decoder.
  • attention_head_dim (int, optional) — Dimension of the attention head in the Transformer decoder.
  • num_key_value_heads (int, optional, defaults to 16) — This is the number of key_value heads that should be used to implement Grouped Query Attention. If num_key_value_heads=None, the model will use Multi Head Attention (MHA); if num_key_value_heads=1, the model will use Multi Query Attention (MQA); otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by mean-pooling all the original heads within that group. For more details, check out this paper.
  • n_mamba_heads (int, optional, defaults to 2) — Number of mamba heads for each mamba layer.
  • hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the decoder.
  • hidden_mamba_act (str or function, optional, defaults to "silu") — The non-linear activation function (function or string) in the mamba layer.
  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
  • rms_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the rms normalization layers.
  • use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True.
  • num_logits_to_keep (int or None, optional, defaults to 1) — Number of prompt logits to calculate during generation. If None, all logits will be calculated. If an integer value, only the last num_logits_to_keep logits will be calculated. The default is 1 because only the logits of the last prompt token are needed for generation. For long sequences, the logits for the entire sequence may use a lot of memory, so setting num_logits_to_keep=1 reduces the memory footprint significantly.
  • pad_token_id (int, optional, defaults to 0) — The id of the padding token.
  • bos_token_id (int, optional, defaults to 1) — The id of the "beginning-of-sequence" token.
  • eos_token_id (int, optional, defaults to 2) — The id of the "end-of-sequence" token.
  • max_position_embeddings (int, optional, defaults to 4096) — This value doesn't have any real effect. The maximum sequence length that this model is intended to be used with. It can be used with longer sequences, but performance may degrade.
  • attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
  • attn_layer_period (int, optional, defaults to 6) — Once every this many layers, a shared attention layer is used.
  • attn_layer_offset (int, optional, defaults to 4) — Offset of the shared attention layer.
  • use_mamba_kernels (bool, optional, defaults to True) — Flag indicating whether or not to use the fast mamba kernels. These are available only if mamba-ssm and causal-conv1d are installed and the mamba modules are running on a CUDA device. Raises a ValueError if True and the kernels are not available.
  • mamba_d_state (int, optional, defaults to 16) — The dimension of the mamba state space latents.
  • mamba_d_conv (int, optional, defaults to 4) — The size of the mamba convolution kernel.
  • mamba_expand (int, optional, defaults to 2) — Expanding factor (relative to hidden_size) used to determine the mamba intermediate size.
  • mamba_dt_rank (Union[int, str], optional, defaults to "auto") — Rank of the mamba discretization projection matrix. "auto" means that it will default to math.ceil(self.hidden_size / 16).
  • time_step_min (float, optional, defaults to 0.001) — Minimum time_step used to bound dt_proj_bias.
  • time_step_max (float, optional, defaults to 0.1) — Maximum time_step used to bound dt_proj_bias.
  • time_step_floor (float, optional, defaults to 0.0001) — Minimum clamping value of the dt_proj.bias layer initialization.
  • mamba_conv_bias (bool, optional, defaults to True) — Flag indicating whether or not to use bias in the convolution layer of the mamba mixer block.
  • mamba_proj_bias (bool, optional, defaults to False) — Flag indicating whether or not to use bias in the input and output projections (["in_proj", "out_proj"]) of the mamba mixer block.

This is the configuration class to store the configuration of a ZambaModel. It is used to instantiate a Zamba model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Zamba-v0.1 model (Zyphra/Zamba-7B-v1).

Configuration objects inherit from PreTrainedConfig and can be used to control the model outputs. Read the documentation from PreTrainedConfig for more information.
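For example, a minimal sketch of building a model from the default configuration (note that the defaults describe the full 7B-scale architecture, so the randomly initialized model is memory-hungry):

from transformers import ZambaConfig, ZambaModel

# Initializing a Zamba configuration with the default (Zamba-7B-v1-like) values
configuration = ZambaConfig()

# Initializing a model (with random weights) from that configuration
model = ZambaModel(configuration)

# Accessing the model configuration
configuration = model.config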

ZambaModel

class transformers.ZambaModel

( config: ZambaConfig )

Parameters

  • config (ZambaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare Zamba Model outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( input_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, past_key_values: typing.Optional[transformers.models.zamba.modeling_zamba.ZambaHybridDynamicCache] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, cache_position: typing.Optional[torch.LongTensor] = None ) → transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].

    What are position IDs?

  • past_key_values (~models.zamba.modeling_zamba.ZambaHybridDynamicCache, optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists of the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True.

    Only a Cache instance is allowed as input; see our kv cache guide. If no past_key_values are passed, a DynamicCache will be initialized by default.

    The model will output the same cache format that is fed as input.

    If past_key_values are used, the user is expected to input only unprocessed input_ids (those that don't have their past key value states given to this model) of shape (batch_size, unprocessed_length) instead of all input_ids of shape (batch_size, sequence_length).

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
  • use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
  • cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Unlike position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.

A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ZambaConfig) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

    If past_key_values is used, only the last hidden-state of the sequences, of shape (batch_size, 1, hidden_size), is output.

  • past_key_values (Cache, optional, returned when use_cache=True is passed or when config.use_cache=True) — It is a Cache instance. For more details, see our kv cache guide.

    Contains pre-computed hidden-states (key and values in the self-attention blocks and, optionally, if config.is_encoder_decoder=True, in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The ZambaModel forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
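As a hedged usage sketch of the bare model (it returns hidden states rather than logits; loading the full checkpoint requires substantial GPU memory):

import torch
from transformers import AutoTokenizer, ZambaModel

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba-7B-v1")
model = ZambaModel.from_pretrained("Zyphra/Zamba-7B-v1", device_map="auto", dtype=torch.bfloat16)

inputs = tokenizer("Zamba is a hybrid SSM-transformer model.", return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model(**inputs)

# One hidden state per input token: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)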

ZambaForCausalLM

class transformers.ZambaForCausalLM

( config: ZambaConfig )

forward

( input_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, past_key_values: typing.Optional[transformers.models.zamba.modeling_zamba.ZambaHybridDynamicCache] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None, cache_position: typing.Optional[torch.LongTensor] = None, logits_to_keep: typing.Union[int, torch.Tensor] = 0, **kwargs ) → transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].

    What are position IDs?

  • past_key_values (~models.zamba.modeling_zamba.ZambaHybridDynamicCache, optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists of the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True.

    Only a Cache instance is allowed as input; see our kv cache guide. If no past_key_values are passed, a DynamicCache will be initialized by default.

    The model will output the same cache format that is fed as input.

    If past_key_values are used, the user is expected to input only unprocessed input_ids (those that don't have their past key value states given to this model) of shape (batch_size, unprocessed_length) instead of all input_ids of shape (batch_size, sequence_length).

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
  • use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
  • cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Unlike position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
  • logits_to_keep (Union[int, torch.Tensor], defaults to 0) — If an int, compute logits for the last logits_to_keep tokens. If 0, calculate logits for all input_ids (special case). Only the last token logits are needed for generation, and calculating them only for that token can save memory, which becomes quite significant for long sequences or large vocabulary size. If a torch.Tensor, it must be 1D, corresponding to the indices to keep in the sequence length dimension. This is useful when using packed tensor format (single dimension for batch and sequence length).

A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ZambaConfig) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).

  • logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

  • past_key_values (Cache, optional, returned when use_cache=True is passed or when config.use_cache=True) — It is a Cache instance. For more details, see our kv cache guide.

    Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The ZambaForCausalLM forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

>>> from transformers import AutoTokenizer, ZambaForCausalLM

>>> model = ZambaForCausalLM.from_pretrained("Zyphra/Zamba-7B-v1")
>>> tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba-7B-v1")

>>> prompt = "Hey, are you conscious? Can you talk to me?"
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> # Generate
>>> generate_ids = model.generate(inputs.input_ids, max_length=30)
>>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
"Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."

ZambaForSequenceClassification

class transformers.ZambaForSequenceClassification

( config )

Parameters

  • config (ZambaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The Zamba Model with a sequence classification head on top (linear layer).

ZambaForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do.

Since it does classification on the last token, it needs to know the position of the last token. If a pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (takes the last value in each row of the batch).
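The following sketch only illustrates that selection rule with toy tensors (it is not the model's internal code); right-padded inputs and the default pad_token_id are assumed:

import torch

pad_token_id = 0  # Zamba's config default; a real setup should read it from the config
input_ids = torch.tensor([
    [5, 17, 23, pad_token_id, pad_token_id],  # 3 real tokens, then padding
    [8, 42, 11, 9, 4],                        # no padding
])

# Index of the last non-padding token in each row; its hidden state feeds the classifier head.
last_non_pad = (input_ids != pad_token_id).int().cumsum(dim=-1).argmax(dim=-1)
print(last_non_pad)  # tensor([2, 4])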

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( input_ids: typing.Optional[torch.LongTensor] = None, attention_mask: typing.Optional[torch.Tensor] = None, position_ids: typing.Optional[torch.LongTensor] = None, past_key_values: typing.Optional[transformers.cache_utils.Cache] = None, inputs_embeds: typing.Optional[torch.FloatTensor] = None, labels: typing.Optional[torch.LongTensor] = None, use_cache: typing.Optional[bool] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].

    What are position IDs?

  • past_key_values (~cache_utils.Cache, optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists of the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True.

    Only a Cache instance is allowed as input; see our kv cache guide. If no past_key_values are passed, a DynamicCache will be initialized by default.

    The model will output the same cache format that is fed as input.

    If past_key_values are used, the user is expected to input only unprocessed input_ids (those that don't have their past key value states given to this model) of shape (batch_size, unprocessed_length) instead of all input_ids of shape (batch_size, sequence_length).

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
  • labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (Mean-Square loss); if config.num_labels > 1, a classification loss is computed (Cross-Entropy).
  • use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor)

A transformers.modeling_outputs.SequenceClassifierOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ZambaConfig) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.

  • logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).

  • past_key_values (Cache, optional, returned when use_cache=True is passed or when config.use_cache=True) — It is a Cache instance. For more details, see our kv cache guide.

    Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The ZambaForSequenceClassification forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example of single-label classification:

>>> import torch
>>> from transformers import AutoTokenizer, ZambaForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba-7B-v1")
>>> model = ZambaForSequenceClassification.from_pretrained("Zyphra/Zamba-7B-v1")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_id = logits.argmax().item()
>>> model.config.id2label[predicted_class_id]
...

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = ZambaForSequenceClassification.from_pretrained("Zyphra/Zamba-7B-v1", num_labels=num_labels)

>>> labels = torch.tensor([1])
>>> loss = model(**inputs, labels=labels).loss
>>> round(loss.item(), 2)
...

Example of multi-label classification:

>>> import torch
>>> from transformers import AutoTokenizer, ZambaForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba-7B-v1")
>>> model = ZambaForSequenceClassification.from_pretrained("Zyphra/Zamba-7B-v1", problem_type="multi_label_classification")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = ZambaForSequenceClassification.from_pretrained(
...     "Zyphra/Zamba-7B-v1", num_labels=num_labels, problem_type="multi_label_classification"
... )

>>> labels = torch.sum(
...     torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)

>>> loss = model(**inputs, labels=labels).loss