declare-lab/instruct-eval


Paper | Model | Leaderboard

🔥 If you are interested in IQ testing LLMs, check out our new work: AlgoPuzzleVQA

📣 Introducing Resta: Safety Re-alignment of Language Models. Paper | GitHub

📣 Red-Eval, the benchmark for Safety Evaluation of LLMs, has been added: Red-Eval

📣 Introducing Red-Eval to evaluate the safety of LLMs using several jailbreaking prompts. With Red-Eval, one could jailbreak/red-team GPT-4 with a 65.1% attack success rate, and ChatGPT could be jailbroken 73% of the time, as measured on the DangerousQA and HarmfulQA benchmarks. More details are here: Code and Paper.

📣 We developed Flacuna by fine-tuning Vicuna-13B on the Flan collection. Flacuna is better than Vicuna at problem-solving. Access the model here: https://huggingface.co/declare-lab/flacuna-13b-v1.0.

📣 The InstructEval benchmark and leaderboard have been released.

📣 The paper reporting on instruction-tuned LLMs on the InstructEval benchmark suite has been released on arXiv. Read it here: https://arxiv.org/pdf/2306.04757.pdf

📣 We are releasing IMPACT, a dataset for evaluating the writing capability of LLMs in four aspects: Informative, Professional, Argumentative, and Creative. Download it from Hugging Face: https://huggingface.co/datasets/declare-lab/InstructEvalImpact.

📣 FLAN-T5 is also useful in text-to-audio generation. Find our work at https://github.com/declare-lab/tango if you are interested.

This repository contains code to evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. We aim to facilitate simple and convenient benchmarking across multiple tasks and models.

Why?

Instruction-tuned models such as Flan-T5 and Alpaca represent an exciting direction for approximating the performance of large language models (LLMs) like ChatGPT at lower cost. However, it is challenging to compare the performance of different models qualitatively. To evaluate how well the models generalize across a wide range of unseen and challenging tasks, we can use academic benchmarks such as MMLU and BBH. Compared to existing libraries such as evaluation-harness and HELM, this repo enables simple and convenient evaluation for multiple models. Notably, we support most models from HuggingFace Transformers 🤗 (check here for a list of models we support).
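As a rough illustration of what "supporting most HuggingFace models" looks like in practice, the sketch below loads the two model families from the results table (seq_to_seq and decoder-only llama/causal checkpoints) with plain Transformers. The mapping from `model_name` to Transformers classes is an assumption for illustration, not this repo's actual loader.

```python
# Minimal sketch (assumption): loading the seq_to_seq vs. causal/llama model
# families from the results table with plain HuggingFace Transformers.
from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoTokenizer


def load_model(model_name: str, model_path: str):
    """Return (tokenizer, model) for a seq_to_seq or decoder-only checkpoint."""
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    if model_name == "seq_to_seq":
        model = AutoModelForSeq2SeqLM.from_pretrained(model_path)
    else:  # "llama", "causal", etc. are decoder-only models
        model = AutoModelForCausalLM.from_pretrained(model_path)
    return tokenizer, model


# Example: tokenizer, model = load_model("seq_to_seq", "google/flan-t5-xl")
```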

Results

For detailed results, please go to our leaderboard.

| Model Name | Model Path | Paper | Size | MMLU | BBH | DROP | HumanEval |
|---|---|---|---|---|---|---|---|
| GPT-4 | | Link | ? | 86.4 | | 80.9 | 67.0 |
| ChatGPT | | Link | ? | 70.0 | | 64.1 | 48.1 |
| seq_to_seq | google/flan-t5-xxl | Link | 11B | 54.5 | 43.9 | | |
| seq_to_seq | google/flan-t5-xl | Link | 3B | 49.2 | 40.2 | 56.3 | |
| llama | eachadea/vicuna-13b | Link | 13B | 49.7 | 37.1 | 32.9 | 15.2 |
| llama | decapoda-research/llama-13b-hf | Link | 13B | 46.2 | 37.1 | 35.3 | 13.4 |
| seq_to_seq | declare-lab/flan-alpaca-gpt4-xl | Link | 3B | 45.6 | 34.8 | | |
| llama | TheBloke/koala-13B-HF | Link | 13B | 44.6 | 34.6 | 28.3 | 11.0 |
| llama | chavinlo/alpaca-native | Link | 7B | 41.6 | 33.3 | 26.3 | 10.3 |
| llama | TheBloke/wizardLM-7B-HF | Link | 7B | 36.4 | 32.9 | | 15.2 |
| chatglm | THUDM/chatglm-6b | Link | 6B | 36.1 | 31.3 | 44.2 | 3.1 |
| llama | decapoda-research/llama-7b-hf | Link | 7B | 35.2 | 30.9 | 27.6 | 10.3 |
| llama | wombat-7b-gpt4-delta | Link | 7B | 33.0 | 32.4 | | 7.9 |
| seq_to_seq | bigscience/mt0-xl | Link | 3B | 30.4 | | | |
| causal | facebook/opt-iml-max-1.3b | Link | 1B | 27.5 | | | 1.8 |
| causal | OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5 | Link | 12B | 27.0 | 30.0 | | 9.1 |
| causal | stabilityai/stablelm-base-alpha-7b | Link | 7B | 26.2 | | | 1.8 |
| causal | databricks/dolly-v2-12b | Link | 12B | 25.7 | | | 7.9 |
| causal | Salesforce/codegen-6B-mono | Link | 6B | | | | 27.4 |
| causal | togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1 | Link | 7B | 38.1 | 31.3 | 24.7 | 5.5 |

Example Usage

Evaluate on Massive Multitask Language Understanding (MMLU), which includes exam questions from 57 tasks such as mathematics, history, law, and medicine. We use 5-shot direct prompting and measure the exact-match score.

```sh
python main.py mmlu --model_name llama --model_path chavinlo/alpaca-native
# 0.4163936761145136

python main.py mmlu --model_name seq_to_seq --model_path google/flan-t5-xl
# 0.49252243270189433
```
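For intuition, here is a minimal sketch of what k-shot direct prompting with exact-match scoring looks like in general. The prompt template and helper names are illustrative assumptions, not the exact format used by main.py.

```python
# Illustrative sketch (assumption): generic k-shot direct prompting with
# exact-match scoring. The template below is not necessarily the exact
# format that main.py builds internally.


def build_prompt(demos, question):
    """Concatenate k solved demonstrations, then the unsolved test question."""
    parts = [f"Question: {q}\nAnswer: {a}\n" for q, a in demos]
    parts.append(f"Question: {question}\nAnswer:")
    return "\n".join(parts)


def exact_match(prediction: str, target: str) -> bool:
    """Score 1 only if the normalized prediction equals the target answer."""
    return prediction.strip().lower() == target.strip().lower()


def evaluate(generate, demos, test_set):
    """`generate` maps a prompt string to the model's text output."""
    hits = [exact_match(generate(build_prompt(demos, q)), a) for q, a in test_set]
    return sum(hits) / len(hits)
```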

Evaluate on BIG-Bench Hard (BBH), which includes 23 challenging tasks on which PaLM (540B) performs below an average human rater. We use 3-shot direct prompting and measure the exact-match score.

```sh
python main.py bbh --model_name llama --model_path TheBloke/koala-13B-HF --load_8bit
# 0.3468942926723247
```

Evaluate on DROP, which is a math question-answering benchmark. We use 3-shot direct prompting and measure the exact-match score.

```sh
python main.py drop --model_name seq_to_seq --model_path google/flan-t5-xl
# 0.5632458233890215
```

Evaluate on HumanEval, which includes 164 coding questions in Python. We use 0-shot direct prompting and measure the pass@1 score.

```sh
python main.py humaneval --model_name llama --model_path eachadea/vicuna-13b --n_sample 1 --load_8bit
# {'pass@1': 0.1524390243902439}
```
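The pass@1 numbers above are the standard functional-correctness metric for HumanEval. Below is a sketch of the usual unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021), shown for reference rather than as this repo's exact implementation.

```python
# Reference sketch: the standard unbiased pass@k estimator. With --n_sample 1,
# pass@1 reduces to the fraction of problems whose single generated program
# passes all unit tests.
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """n = samples generated per problem, c = samples that pass, k <= n."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)


# Average over problems, e.g. for n=1, k=1:
# score = sum(pass_at_k(1, int(passed), 1) for passed in results) / len(results)
```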

Setup

Install dependencies and download data.

```sh
conda create -n instruct-eval python=3.8 -y
conda activate instruct-eval
pip install -r requirements.txt
mkdir -p data
wget https://people.eecs.berkeley.edu/~hendrycks/data.tar -O data/mmlu.tar
tar -xf data/mmlu.tar -C data && mv data/data data/mmlu
```
