# Stability-AI/lm-evaluation-harness

A framework for few-shot evaluation of autoregressive language models.
To install the `jp-stable` branch with the Japanese task extras, run:

```bash
git clone -b jp-stable https://github.com/Stability-AI/lm-evaluation-harness.git
cd lm-evaluation-harness
pip install -e ".[ja]"
```
Choose your prompt template based on docs/prompt_templates.md. Replace `TEMPLATE` with the template version, change `MODEL_PATH`, and save the script as `harness.sh`:

```bash
MODEL_ARGS="pretrained=MODEL_PATH"
TASK="jsquad-1.1-TEMPLATE,jcommonsenseqa-1.1-TEMPLATE,jnli-1.1-TEMPLATE,marc_ja-1.1-TEMPLATE"
python main.py \
    --model hf-causal \
    --model_args $MODEL_ARGS \
    --tasks $TASK \
    --num_fewshot "2,3,3,3" \
    --device "cuda" \
    --output_path "result.json"
```
Run!

```bash
sh harness.sh
```
We evaluated some open-source Japanese LMs. Please refer to the `harness.sh` scripts inside the `models` folder.

For more details, please see docs/jptasks.md.
| Tasks | Supported Prompt Templates |
|---|---|
| JSQuAD | 0.1 / 0.2 / 0.3 / 0.4 |
| JCommonsenseQA | 0.1 / 0.2 / 0.3 / 0.4 |
| JNLI | 0.2 / 0.3 / 0.4 |
| MARC-ja | 0.2 / 0.3 / 0.4 |
| JaQuAD | 0.1 / 0.2 / 0.3 / 0.4 |
| JBLiMP | - |
| XLSum-ja | 0.0 / 0.3 / 0.4 |
| JAQKET | 0.1 / 0.2 / 0.3 / 0.4 |
## Overview

This project provides a unified framework to test generative language models on a large number of different evaluation tasks.

Features:

- 200+ tasks implemented. See the task-table for a complete list.
- Support for models loaded via transformers (including quantization via AutoGPTQ), GPT-NeoX, and Megatron-DeepSpeed, with a flexible tokenization-agnostic interface.
- Support for evaluation on adapters (e.g. LoRA) supported in Hugging Face's PEFT library.
- Task versioning to ensure reproducibility.
## Install

To install `lm-eval` from the github repository main branch, run:

```bash
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```
To install additional multilingual tokenization and text segmentation packages, you must install the package with the `multilingual` extra:

```bash
pip install -e ".[multilingual]"
```
To support loading GPTQ quantized models, install the package with the `auto-gptq` extra:

```bash
pip install gekko
pip install -e ".[auto-gptq]"
```
Note: When reporting results from eval harness, please include the task versions (shown in `results["versions"]`) for reproducibility. This allows bug fixes to tasks while also ensuring that previously reported scores are reproducible. See the Task Versioning section for more info.
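For example, here is a minimal sketch for reading the versions back out of a saved results file (this assumes you wrote the output with `--output_path result.json`, as in the script above):

```python
import json

# Load the evaluator output written via --output_path
with open("result.json") as f:
    output = json.load(f)

# Print each task name with its version appended, e.g. "taskname-v0"
for task, version in output["versions"].items():
    print(f"{task}-v{version}")
```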
## Basic Usage

To evaluate a model hosted on the Hugging Face Hub (e.g. GPT-J-6B) on tasks with names matching the pattern `lambada_*` and `hellaswag`, you can use the following command:

```bash
python main.py \
    --model hf-causal \
    --model_args pretrained=EleutherAI/gpt-j-6B \
    --tasks lambada_*,hellaswag \
    --device cuda:0
```
Also check the script for running evaluation suites.
Additional arguments can be provided to the model constructor using the `--model_args` flag. Most notably, this supports the common practice of using the `revisions` feature on the Hub to store partially trained checkpoints:

```bash
python main.py \
    --model hf-causal \
    --model_args pretrained=EleutherAI/pythia-160m,revision=step100000 \
    --tasks lambada_openai,hellaswag \
    --device cuda:0
```
To evaluate models that are loaded via `AutoSeq2SeqLM` in Hugging Face, you instead use `hf-seq2seq`. To evaluate (causal) models across multiple GPUs, use `--model hf-causal-experimental`.

Warning: Choosing the wrong model type may result in erroneous outputs despite not erroring.
To use with PEFT, take the call you would run to evaluate the base model and add `,peft=PATH` to the `model_args` argument as shown below:

```bash
python main.py \
    --model hf-causal-experimental \
    --model_args pretrained=EleutherAI/gpt-j-6b,peft=nomic-ai/gpt4all-j-lora \
    --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq \
    --device cuda:0
```
Our library also supports the OpenAI API:

```bash
export OPENAI_API_SECRET_KEY=YOUR_KEY_HERE
python main.py \
    --model gpt3 \
    --model_args engine=davinci \
    --tasks lambada_openai,hellaswag
```
While this functionality is only officially maintained for the official OpenAI API, it tends to also work for other hosting services that use the same API, such as goose.ai, with minor modification. We also have an implementation for the TextSynth API, using `--model textsynth`.
To verify the data integrity of the tasks you're performing in addition to running the tasks themselves, you can use the `--check_integrity` flag:

```bash
python main.py \
    --model gpt3 \
    --model_args engine=davinci \
    --tasks lambada_openai,hellaswag \
    --check_integrity
```
To evaluate mesh-transformer-jax models that are not available on HF, please invoke eval harness through this script.
💡 Tip: You can inspect what the LM inputs look like by running the following command:

```bash
python write_out.py \
    --tasks all_tasks \
    --num_fewshot 5 \
    --num_examples 10 \
    --output_base_path /path/to/output/folder
```

This will write out one text file for each task.
If you have multiple tasks that you routinely run as an evaluation suite, you can save the suite configuration in a single file and run it with different models. Save a suite config to `lm_eval/suites/configs/[suite].conf`, formatted like this:

```
[tasks.my_task]
version = 1.0
fewshot = 2

[tasks.other_task]
version = 1.1
fewshot = 3
```
Then you can run the suite like this:

```bash
python scripts/run_suite.py [model_path] [suite_name] [prompt_version] -m [model_args]
```

For prompt versions, see the prompt docs and the list of prompt names.
For models loaded with the HuggingFace `transformers` library, any arguments provided via `--model_args` get passed to the relevant constructor directly. This means that anything you can do with `AutoModel` can be done with our library. For example, you can pass a local path via `pretrained=` or use models finetuned with PEFT by taking the call you would run to evaluate the base model and adding `,peft=PATH` to the `model_args` argument:

```bash
python main.py \
    --model hf-causal-experimental \
    --model_args pretrained=EleutherAI/gpt-j-6b,peft=nomic-ai/gpt4all-j-lora \
    --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq \
    --device cuda:0
```
GPTQ quantized models can be loaded by specifying their file names in `,quantized=NAME` (or `,quantized=True` for default names) in the `model_args` argument:

```bash
python main.py \
    --model hf-causal-experimental \
    --model_args pretrained=model-name-or-path,quantized=model.safetensors,gptq_use_triton=True \
    --tasks hellaswag
```
We support wildcards in task names, for example you can run all of the machine-translated lambada tasks via `--task lambada_openai_mt_*`.
We currently only support one prompt per task, which we strive to make the "standard" as defined by the benchmark's authors. If you would like to study how varying prompts causes changes in the evaluation score, check out the BigScience fork of this repo. We are currently working on upstreaming this capability to `main`.
The evaluation suite can be called via the Python API, which makes it possible to script jobs with submitit, for example. You can find a detailed example of how this works in `scripts/run_eval.py`.
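As a minimal sketch of calling the API directly, the following assumes the `lm_eval.evaluator.simple_evaluate` entry point from the upstream EleutherAI harness is available in this fork; see `scripts/run_eval.py` for the exact wrappers this repo uses:

```python
from lm_eval import evaluator

# Mirrors the CLI flags: --model, --model_args, --tasks, --num_fewshot, --device
results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["lambada_openai"],
    num_fewshot=0,
    device="cuda:0",
)
print(results["results"])   # per-task metrics
print(results["versions"])  # per-task versions, for reporting
```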
Running a job via submitit has two steps: preparing the executor, which controls cluster options, and preparing the actual evaluation options.
First you need to configure the executor. This controls cluster job details, like how many GPUs or nodes to use. For a detailed example, see `build_executor` in `run_eval.py`, but a minimal example looks like this:

```python
base_args = {... cluster args ...}
executor = submitit.AutoExecutor(folder="./logs")
executor.update_parameters(**base_args)
```
Once the executor is prepared, you need to actually run the evaluation task. A detailed example of wrapping the API to make this easy is in the `eval_task` function, which mainly just calls out to `main` in `scripts/main_eval.py`. The basic structure is like this:

```python
def my_task():
    args = {... eval args ...}
    # this is the function from main_eval.py
    main_eval(args, output_path="./hoge.json")

job = executor.submit(my_task)
```
You can then get output from the job and check that it completed successfully. See `run_job` for an example of how that works.
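A minimal sketch of that pattern uses submitit's standard `Job` handle (`job.result()` is plain submitit, not a helper from this repo, and `job` is the handle returned by `executor.submit(my_task)` above):

```python
# result() blocks until the job finishes and re-raises any exception
# thrown on the worker, so reaching the next line means it succeeded.
job.result()
print(f"Job {job.job_id} completed; output was written to ./hoge.json")
```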
## Implementing new tasks

To implement a new task in the eval harness, see this guide.
## Task Versioning

To help improve reproducibility, all tasks have a `VERSION` field. When run from the command line, this is reported in a column in the table, or in the "version" field in the evaluator return dict. The purpose of the version is so that if the task definition changes (e.g. to fix a bug), we can know exactly which metrics were computed using the old buggy implementation, avoiding unfair comparisons. To enforce this, there are unit tests that make sure the behavior of all tasks remains the same as when they were first implemented. Task versions start at 0, and each time a breaking change is made, the version is incremented by one.
When reporting eval harness results, please also report the version of each task. This can be done either with a separate column in the table, or by reporting the task name with the version appended, e.g. `taskname-v0`.
## Test Set Decontamination

To address concerns about train/test contamination, we provide utilities for comparing results on a benchmark using only the data points not found in the model training set. Unfortunately, outside of models trained on the Pile and C4, it's very rare that people who train models disclose the contents of the training data. However, this utility can be useful to evaluate models you have trained on private data, provided you are willing to pre-compute the necessary indices. We provide computed indices for 13-gram exact match deduplication against the Pile, and plan to add additional precomputed dataset indices in the future (including C4 and min-hash LSH deduplication).

For details on text decontamination, see the decontamination guide.

Note that the directory provided to the `--decontamination_ngrams_path` argument should contain the ngram files and info.json. See the above guide for ngram generation for the Pile; this could be adapted for other training sets.

```bash
python main.py \
    --model gpt2 \
    --tasks sciq \
    --decontamination_ngrams_path path/containing/training/set/ngrams \
    --device cuda:0
```
## Cite as

```
@software{eval-harness,
  author    = {Gao, Leo and Tow, Jonathan and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and McDonell, Kyle and Muennighoff, Niklas and Phang, Jason and Reynolds, Laria and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy},
  title     = {A framework for few-shot language model evaluation},
  month     = sep,
  year      = 2021,
  publisher = {Zenodo},
  version   = {v0.0.1},
  doi       = {10.5281/zenodo.5371628},
  url       = {https://doi.org/10.5281/zenodo.5371628}
}
```