tdcyamadaya/lm-evaluation-harness-jp-stable
A framework for few-shot evaluation of autoregressive language models.


Leaderboard

| Model | Average | JCommonsenseQA (acc) | JNLI (acc) | MARC-ja (acc) | JSQuAD (exact_match) | eval script | Notes |
|---|---|---|---|---|---|---|---|
| rinna-japanese-gpt-neox-3.6b-instruction-ppo | 59.63 | 41.38 | 54.03 | 89.71 | 53.42 | models/rinna-japanese-gpt-neox-3.6b-instruction-ppo | Use v0.4 prompt template |
| rinna-japanese-gpt-neox-3.6b-instruction-sft-v2 | 56.65 | 38.43 | 53.37 | 89.48 | 45.32 | models/rinna-japanese-gpt-neox-3.6b-instruction-sft-v2 | Use v0.4 prompt template |
| rinna-japanese-gpt-neox-3.6b-instruction-sft | 53.77 | 36.55 | 42.19 | 89.02 | 47.32 | models/rinna-japanese-gpt-neox-3.6b-instruction-sft | Use v0.4 prompt template |
| cyberagent-open-calm-3b | 49 | 27.79 | 40.35 | 86.21 | 41.65 | models/cyberagent-open-calm-3b | |
| rinna-japanese-gpt-neox-3.6b | 47.79 | 31.64 | 34.43 | 74.82 | 50.29 | models/rinna-japanese-gpt-neox-3.6b | |
| rinna-japanese-gpt-1b | 47.09 | 34.76 | 37.67 | 87.86 | 28.07 | models/rinna-japanese-gpt-1b | |
| cyberagent-open-calm-7b | 46.04 | 24.22 | 37.63 | 74.12 | 48.18 | models/cyberagent-open-calm-7b | |
| cyberagent-open-calm-1b | 43.88 | 26.9 | 33.57 | 77.92 | 37.12 | models/cyberagent-open-calm-1b | |
| abeja-gpt-neox-japanese-2.7b | 37.1 | 20.02 | 39.73 | 74.99 | 13.67 | models/abeja-gpt-neox-japanese-2.7b | |
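
The Average column is consistent with the unweighted mean of the four task scores. A quick sanity check for one row (plain shell arithmetic, not part of the harness):

    # mean of the four task scores for cyberagent-open-calm-3b
    echo "scale=2; (27.79 + 40.35 + 86.21 + 41.65) / 4" | bc
    # -> 49.00, matching the reported Average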

How to evaluate your model

  1. Clone the jp-stable branch of https://github.com/Stability-AI/lm-evaluation-harness and install the package:

    git clone -b jp-stable https://github.com/Stability-AI/lm-evaluation-harness.git
    cd lm-evaluation-harness
    pip install -e ".[ja]"

  2. Choose your prompt template based on docs/prompt_templates.md.

  3. Replace TEMPLATE with the chosen template version, change MODEL_PATH to your model, and save the script as harness.sh:

    MODEL_ARGS="pretrained=MODEL_PATH"
    TASK="jsquad-1.1-TEMPLATE,jcommonsenseqa-1.1-TEMPLATE,jnli-1.1-TEMPLATE,marc_ja-1.1-TEMPLATE"
    python main.py \
        --model hf-causal \
        --model_args $MODEL_ARGS \
        --tasks $TASK \
        --num_fewshot "2,3,3,3" \
        --device "cuda" \
        --output_path "result.json"

  4. Run!

    sh harness.sh

We evaluated several open-source Japanese LMs. Please refer to the harness.sh inside the models folder.

JP Metrics

JSQuAD

JSQuAD is a Japanese version of SQuAD (Rajpurkar+, 2016), one of the datasets of reading comprehension. Each instance in the dataset consists of a question regarding a given context (a Wikipedia article) and its answer. JSQuAD is based on SQuAD 1.1 (there are no unanswerable questions). We used the Japanese Wikipedia dump as of 20211101.

sample script

python main.py \
    --model hf-causal \
    --model_args $MODEL_ARGS \
    --tasks "jsquad-1.1-0.2" \
    --num_fewshot "2" \
    --output_path "result.json"

JCommonsenseQA

JCommonsenseQA is a Japanese version of CommonsenseQA (Talmor+, 2019), which is a multiple-choice question answering dataset that requires commonsense reasoning ability. It is built using crowdsourcing with seeds extracted from the knowledge base ConceptNet.

sample script

python main.py \
    --model hf-causal \
    --model_args $MODEL_ARGS \
    --tasks "jcommonsenseqa-1.1-0.2" \
    --num_fewshot "3" \
    --output_path "result.json"

JNLI

JNLI is a Japanese version of the NLI (Natural Language Inference) dataset. NLI is a task to recognize the inference relation that a premise sentence has to a hypothesis sentence. The inference relations are entailment, contradiction, and neutral.

sample script

python main.py \
    --model hf-causal \
    --model_args $MODEL_ARGS \
    --tasks "jnli-1.1-0.2" \
    --num_fewshot "3" \
    --output_path "result.json"

MARC-ja

MARC-ja is a dataset for a text classification task. It is based on the Japanese portion of the Multilingual Amazon Reviews Corpus (MARC) (Keung+, 2020).

sample script

python main.py \
    --model hf-causal \
    --model_args $MODEL_ARGS \
    --tasks "marc_ja-1.1-0.2" \
    --num_fewshot "3" \
    --output_path "result.json"

JaQuAD

Japanese Question Answering Dataset (JaQuAD), released in 2022, is a human-annotated dataset created for Japanese machine reading comprehension. JaQuAD was developed to provide a SQuAD-like QA dataset in Japanese.

sample script

python main.py \
    --model hf-causal \
    --model_args $MODEL_ARGS \
    --tasks "jaquad-1.1-0.2" \
    --num_fewshot "2" \
    --output_path "result.json"

JBLiMP

JBLiMP is a novel dataset for targeted syntactic evaluations of language models in Japanese. It consists of 331 minimal pairs created from acceptability judgments extracted from journal articles in theoretical linguistics. These minimal pairs are grouped into 11 categories, each covering a different linguistic phenomenon.

NOTE: JBLiMP is not used in official evaluations because it is too small compared to other datasets.

sample script

python main.py \
    --model hf-causal \
    --model_args $MODEL_ARGS \
    --tasks "jblimp" \
    --num_fewshot "0" \
    --output_path "result.json"

Language Model Evaluation Harness


Overview

This project provides a unified framework to test generative language models on a large number of different evaluation tasks.

Features:

  • 200+ tasks implemented. See the task-table for a complete list.
  • Support for the Hugging Face transformers library, GPT-NeoX, Megatron-DeepSpeed, and the OpenAI API, with a flexible tokenization-agnostic interface.
  • Support for evaluation on adapters (e.g. LoRA) supported in Hugging Face's PEFT library.
  • Task versioning to ensure reproducibility.

Install

To install lm-eval from the GitHub repository main branch, run:

git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .

To install additional multilingual tokenization and text segmentation packages, you must install the package with the multilingual extra:

pip install -e ".[multilingual]"

Basic Usage

Note: When reporting results from eval harness, please include the task versions (shown in results["versions"]) for reproducibility. This allows bug fixes to tasks while also ensuring that previously reported scores are reproducible. See the Task Versioning section for more info.
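
For example, if a run wrote its output to result.json via --output_path, the versions can be pulled out with jq (assuming jq is available; any JSON reader works):

# print the task -> version mapping recorded in the results file
jq '.versions' result.json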

To evaluate a model hosted on the Hugging Face Hub (e.g. GPT-J-6B) on tasks with names matching the pattern lambada_* and hellaswag, you can use the following command:

python main.py \
    --model hf-causal \
    --model_args pretrained=EleutherAI/gpt-j-6B \
    --tasks lambada_*,hellaswag \
    --device cuda:0

Additional arguments can be provided to the model constructor using the --model_args flag. Most notably, this supports the common practice of using the revisions feature on the Hub to store partially trained checkpoints:

python main.py \
    --model hf-causal \
    --model_args pretrained=EleutherAI/pythia-160m,revision=step100000 \
    --tasks lambada_openai,hellaswag \
    --device cuda:0

To evaluate models that are loaded via AutoSeq2SeqLM in Hugging Face, use hf-seq2seq instead. To evaluate (causal) models across multiple GPUs, use --model hf-causal-experimental.
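
As a minimal sketch, a seq2seq run looks like the causal one with the model type swapped (the checkpoint google/flan-t5-small here is just an illustrative choice, not one this README prescribes):

# evaluate an encoder-decoder model with the hf-seq2seq backend
python main.py \
    --model hf-seq2seq \
    --model_args pretrained=google/flan-t5-small \
    --tasks hellaswag \
    --device cuda:0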

Warning: Choosing the wrong model type may result in erroneous outputs even though no error is raised.

To use with PEFT, take the call you would run to evaluate the base model and add ,peft=PATH to the model_args argument as shown below:

python main.py \
    --model hf-causal-experimental \
    --model_args pretrained=EleutherAI/gpt-j-6b,peft=nomic-ai/gpt4all-j-lora \
    --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq \
    --device cuda:0

Our library also supports the OpenAI API:

export OPENAI_API_SECRET_KEY=YOUR_KEY_HERE
python main.py \
    --model gpt3 \
    --model_args engine=davinci \
    --tasks lambada_openai,hellaswag

While this functionality is only officially maintained for the official OpenAI API, it tends to also work for other hosting services that use the same API, such as goose.ai, with minor modifications. We also have an implementation for the TextSynth API, using --model textsynth.

To verify the data integrity of the tasks you're performing in addition to running the tasks themselves, you can use the --check_integrity flag:

python main.py \
    --model gpt3 \
    --model_args engine=davinci \
    --tasks lambada_openai,hellaswag \
    --check_integrity

To evaluate mesh-transformer-jax models that are not available on HF, please invoke the eval harness through this script.

💡 Tip: You can inspect what the LM inputs look like by running the following command:

python write_out.py \
    --tasks all_tasks \
    --num_fewshot 5 \
    --num_examples 10 \
    --output_base_path /path/to/output/folder

This will write out one text file for each task.
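
To skim what was written, something like the following works (the glob is just a convenient assumption; the files are named after the tasks):

# list the per-task files, then peek at the first few lines of each
ls /path/to/output/folder
head -n 20 /path/to/output/folder/*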

Implementing new tasks

To implement a new task in the eval harness, see this guide.

Task Versioning

To help improve reproducibility, all tasks have a VERSION field. When run from the command line, this is reported in a column in the table, or in the "version" field in the evaluator return dict. The purpose of the version is so that if the task definition changes (i.e. to fix a bug), we know exactly which metrics were computed using the old buggy implementation, avoiding unfair comparisons. To enforce this, there are unit tests that make sure the behavior of all tasks remains the same as when they were first implemented. Task versions start at 0, and each time a breaking change is made, the version is incremented by one.

When reporting eval harness results, please also report the version of each task. This can be done either with a separate column in the table, or by reporting the task name with the version appended, e.g. taskname-v0.
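
If you saved results with --output_path, those taskname-v0 strings can be generated straight from the JSON (again assuming jq is available):

# format each task/version pair as "taskname-vN"
jq -r '.versions | to_entries[] | "\(.key)-v\(.value)"' result.json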

Test Set Decontamination

To address concerns about train/test contamination, we provide utilities for comparing results on a benchmark using only the data points not found in the model's training set. Unfortunately, outside of models trained on the Pile and C4, it's very rare that people who train models disclose the contents of the training data. However, this utility can be useful for evaluating models you have trained on private data, provided you are willing to pre-compute the necessary indices. We provide computed indices for 13-gram exact-match deduplication against the Pile, and plan to add additional precomputed dataset indices in the future (including C4 and min-hash LSH deduplication).

For details on text decontamination, see the decontamination guide.

Note that the directory provided to the --decontamination_ngrams_path argument should contain the ngram files and info.json. The above guide covers ngram generation for the Pile; it could be adapted for other training sets.

python main.py \
    --model gpt2 \
    --tasks sciq \
    --decontamination_ngrams_path path/containing/training/set/ngrams \
    --device cuda:0

Cite as

@software{eval-harness,
  author       = {Gao, Leo and
                  Tow, Jonathan and
                  Biderman, Stella and
                  Black, Sid and
                  DiPofi, Anthony and
                  Foster, Charles and
                  Golding, Laurence and
                  Hsu, Jeffrey and
                  McDonell, Kyle and
                  Muennighoff, Niklas and
                  Phang, Jason and
                  Reynolds, Laria and
                  Tang, Eric and
                  Thite, Anish and
                  Wang, Ben and
                  Wang, Kevin and
                  Zou, Andy},
  title        = {A framework for few-shot language model evaluation},
  month        = sep,
  year         = 2021,
  publisher    = {Zenodo},
  version      = {v0.0.1},
  doi          = {10.5281/zenodo.5371628},
  url          = {https://doi.org/10.5281/zenodo.5371628}
}
