Gemma 2#

Gemma 2 offers three new, powerful, and efficient models available in 2, 9, and 27 billion parameter sizes, all with built-in safety advancements. It adopts the transformer decoder framework while adding multi-query attention, RoPE, GeGLU activations, and more. More information is available in Google’s release blog.

Note

Currently, Gemma 2 does not support cuDNN fused attention. The recipes disable cuDNN attention and use Flash Attention instead.
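For reference, attention backend selection in the Transformer Engine stack used by NeMo is controlled through environment variables. The pre-defined recipes already apply the equivalent settings for you; the sketch below is only an assumption about how this is typically done and is not required when using the recipes.

import os

# Assumed Transformer Engine knobs: disable cuDNN fused attention and prefer
# Flash Attention. The Gemma 2 recipes apply equivalent settings automatically.
os.environ["NVTE_FUSED_ATTN"] = "0"
os.environ["NVTE_FLASH_ATTN"] = "1"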

We provide pre-defined recipes for finetuning Gemma 2 models using NeMo 2.0 and NeMo-Run. These recipes configure a run.Partial for one of the nemo.collections.llm API functions introduced in NeMo 2.0. The recipes are hosted in gemma2_2b, gemma2_9b, and gemma2_27b.

NeMo 2.0 Finetuning Recipes#

Note

The finetuning recipes use the SquadDataModule for the data argument. You can replace the SquadDataModule with your custom dataset.

To import the HF model and convert it to NeMo 2.0 format, run the following command (this only needs to be done once):

from nemo.collections import llm

if __name__ == "__main__":
    llm.import_ckpt(
        model=llm.Gemma2Model(llm.Gemma2Config2B()),
        source='hf://google/gemma-2-2b',
    )

By default, the non-instruct version of the model is loaded. To load a different model, set finetune.resume.restore_config.path=nemo://<hf_model_id> or finetune.resume.restore_config.path=<local_model_path>.
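For example, after building a recipe as shown below, the restore path can also be overridden in Python. The instruct model ID and local path here are hypothetical placeholders, and the target checkpoint must already have been imported or converted:

# Hypothetical override: restore from the instruct checkpoint instead of the
# default base model.
recipe.resume.restore_config.path = "nemo://google/gemma-2-2b-it"
# ...or restore from a local NeMo 2.0 checkpoint directory:
# recipe.resume.restore_config.path = "/path/to/local/gemma2_checkpoint"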

Below is an example of how to invoke the default recipe and override the data argument:

from nemo.collections import llm

recipe = llm.gemma2_2b.finetune_recipe(
    name="gemma2_2b_finetuning",
    dir="/path/to/checkpoints",
    num_nodes=1,
    num_gpus_per_node=8,
    peft_scheme='lora',  # 'lora', 'none'
    packed_sequence=False,
)

# To override the data argument
# dataloader = a_function_that_configures_your_custom_dataset(
#     gbs=gbs,
#     mbs=mbs,
#     seq_length=recipe.model.config.seq_length,
# )
# recipe.data = dataloader
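As a concrete sketch of the commented-out override above, a custom data module can be configured with run.Config and assigned to recipe.data. The dataset path and batch sizes below are hypothetical placeholders; adapt the data module class and arguments to your dataset:

import nemo_run as run
from nemo.collections import llm

# Hypothetical custom dataset override replacing the default SquadDataModule.
recipe.data = run.Config(
    llm.FineTuningDataModule,
    dataset_root="/path/to/your/dataset",  # assumed location of your data
    seq_length=recipe.model.config.seq_length,
    global_batch_size=8,   # placeholder; match your training setup
    micro_batch_size=1,    # placeholder; match your training setup
)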

By default, the finetuning recipe will run LoRA finetuning with LoRA applied to all linear layers in the language model. To finetune the entire model without LoRA, set peft_scheme='none' in the recipe argument.
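For example, a full-parameter finetuning run (no LoRA) could be configured like this sketch, reusing the same hypothetical paths as above:

recipe = llm.gemma2_2b.finetune_recipe(
    name="gemma2_2b_full_ft",
    dir="/path/to/checkpoints",
    num_nodes=1,
    num_gpus_per_node=8,
    peft_scheme='none',  # full-parameter finetuning instead of LoRA
)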

To finetune with sequence packing for higher throughput, set packed_sequence=True. Note that you may need to tune the global batch size in order to achieve similar convergence.
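Similarly, a packed-sequence run might look like the sketch below; the global batch size override is a hypothetical starting point that you should tune for your dataset:

recipe = llm.gemma2_2b.finetune_recipe(
    name="gemma2_2b_packed",
    dir="/path/to/checkpoints",
    num_nodes=1,
    num_gpus_per_node=8,
    peft_scheme='lora',
    packed_sequence=True,  # pack multiple short examples into each sequence
)
recipe.data.global_batch_size = 8  # hypothetical value; tune for convergence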

Note

The configuration in the recipes is done using the NeMo-Run run.Config and run.Partial configuration objects. Please review the NeMo-Run documentation to learn more about its configuration and execution system.
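Because the recipe is a run.Partial, any nested field can be overridden in Python before launching. The fields below are illustrative assumptions about the recipe's structure rather than a definitive list:

# Hypothetical overrides on the configured recipe.
recipe.trainer.max_steps = 100   # shorten the run
recipe.optim.config.lr = 1e-4    # adjust the learning rate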

Once you have your final configuration ready, you can execute it on any of the NeMo-Run supported executors. The simplest is the local executor, which just runs the finetuning locally in a separate process. You can use it as follows:

import nemo_run as run

run.run(recipe, executor=run.LocalExecutor())

You can also run it directly in the same Python process as follows:

run.run(recipe, direct=True)

Recipe      | Status
----------- | ------
Gemma 2 2B  | Yes
Gemma 2 9B  | Yes
Gemma 2 27B | Yes