Starcoder 2#
Starcoder 2 is a family of code generation models (3B, 7B, and 15B), trained on 600+ programming languages from The Stack v2 and on natural language text from sources such as Wikipedia, arXiv, and GitHub issues.
We provide recipes for pretraining Starcoder 2 models at the following sizes: 3B, 7B, and 15B, using NeMo 2.0 and NeMo-Run. These recipes configure a run.Partial for one of the nemo.collections.llm API functions introduced in NeMo 2.0. The recipes are hosted in the starcoder2_3b, starcoder2_7b, and starcoder2_15b files.
NeMo 2.0 Pretraining Recipes#
Note
The pretraining recipes use the MockDataModule for the data argument. You are expected to replace the MockDataModule with your own custom dataset.
We provide an example below on how to invoke the default recipe and override the data argument:
```python
from nemo.collections import llm

pretrain = llm.starcoder2_15b.pretrain_recipe(
    name="starcoder2_15b_pretraining",
    dir="/path/to/checkpoints",
    num_nodes=2,
    num_gpus_per_node=8,
)

# # To override the data argument
# dataloader = a_function_that_configures_your_custom_dataset(
#     global_batch_size=global_batch_size,
#     micro_batch_size=micro_batch_size,
#     seq_length=pretrain.model.config.seq_length,
# )
# pretrain.data = dataloader
```
NeMo 2.0 Finetuning Recipes#
Note
The finetuning recipes use the SquadDataModule for the data argument. You can replace the SquadDataModule with your custom dataset.
To import the HF model and convert it to NeMo 2.0 format, run the following command (this only needs to be done once):
```python
from nemo.collections import llm

if __name__ == "__main__":
    llm.import_ckpt(
        model=llm.Starcoder2Model(llm.Starcoder2Config15B()),
        source='hf://bigcode/starcoder2-15b',
    )
```
By default, the non-instruct version of the model is loaded. To load a different model, set finetune.resume.restore_config.path=nemo://<hf model id> or finetune.resume.restore_config.path=<local model path>.
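As a minimal sketch of such an override (the local path below is a placeholder for illustration, not a real checkpoint location):

```python
from nemo.collections import llm

finetune = llm.starcoder2_15b.finetune_recipe(
    name="starcoder2_15b_finetuning",
    dir="/path/to/checkpoints",
    num_nodes=1,
    num_gpus_per_node=8,
)

# Point the resume path at a different converted checkpoint.
# "/path/to/local/model" is a placeholder, not a real path.
finetune.resume.restore_config.path = "/path/to/local/model"
```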
We provide an example below on how to invoke the default recipe and override the data argument:
```python
from nemo.collections import llm

recipe = llm.starcoder2_15b.finetune_recipe(
    name="starcoder2_15b_finetuning",
    dir="/path/to/checkpoints",
    num_nodes=1,
    num_gpus_per_node=8,
    peft_scheme='lora',  # 'lora', 'none'
    packed_sequence=False,
)

# # To override the data argument
# dataloader = a_function_that_configures_your_custom_dataset(
#     gbs=gbs,
#     mbs=mbs,
#     seq_length=recipe.model.config.seq_length,
# )
# recipe.data = dataloader
```
By default, the finetuning recipe runs LoRA finetuning with LoRA applied to all linear layers in the language model. To finetune the entire model without LoRA, set peft_scheme='none' in the recipe arguments.
To finetune with sequence packing for higher throughput, set packed_sequence=True. Note that you may need to tune the global batch size in order to achieve similar convergence.
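As a rough back-of-the-envelope illustration of why the global batch size may need tuning (this heuristic and the function below are ours, not a NeMo API): packing multiple examples into one sequence increases the effective number of examples seen per step, so you might shrink the global batch size proportionally to keep optimization behavior comparable.

```python
# Rough heuristic (not a NeMo API): keep the effective number of
# examples per optimizer step roughly constant when packing.
def adjusted_global_batch_size(gbs: int, avg_packing_factor: float) -> int:
    """Divide the global batch size by the average number of
    examples packed into one sequence, flooring at 1."""
    return max(1, round(gbs / avg_packing_factor))

# Example: with GBS=128 and ~4 examples per packed sequence,
# a comparable effective batch size is 32.
print(adjusted_global_batch_size(128, 4.0))  # -> 32
```

In practice the right value depends on your data's length distribution, so treat this as a starting point and verify convergence empirically.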
Note
The configuration in the recipes is done using the NeMo-Run run.Config and run.Partial configuration objects. Please review the NeMo-Run documentation to learn more about its configuration and execution system.
Once you have your final configuration ready, you can execute it on any of the NeMo-Run supported executors. The simplest is the local executor, which just runs the pretraining locally in a separate process. You can use it as follows:
```python
import nemo_run as run

run.run(pretrain, executor=run.LocalExecutor())
```
Additionally, you can also run it directly in the same Python process as follows:
```python
run.run(pretrain, direct=True)
```
A comprehensive list of pretraining recipes that we currently support or plan to support soon is provided below for reference:
| Recipe | Status |
|---|---|
| Starcoder2 3B | Yes |
| Starcoder2 7B | Yes |
| Starcoder2 15B | Yes |