A streamlined and customizable framework for efficient large model evaluation and performance benchmarking
中文 | English
⭐ If you like this project, please click the "Star" button at the top right to support us. Your support is our motivation to keep going!
- 📋 Contents
- 📝 Introduction
- ☎ User Groups
- 🎉 News
- 🛠️ Installation
- 🚀 Quick Start
- 📈 Visualization of Evaluation Results
- 🌐 Evaluation of Specified Model API
- ⚙️ Custom Parameter Evaluation
- Evaluation Backend
- 📈 Model Serving Performance Evaluation
- 🖊️ Custom Dataset Evaluation
- 🏟️ Arena Mode
- 👷♂️ Contribution
- 🔜 Roadmap
- Star History
EvalScope is ModelScope's official framework for model evaluation and benchmarking, designed for diverse assessment needs. It supports various model types including large language models, multimodal, embedding, reranker, and CLIP models.
The framework accommodates multiple evaluation scenarios such as end-to-end RAG evaluation, arena mode, and inference performance testing. It features built-in benchmarks and metrics like MMLU, CMMLU, C-Eval, and GSM8K. Seamlessly integrated with the ms-swift training framework, EvalScope enables one-click evaluations, offering comprehensive support for model training and assessment 🚀
Framework Description
The architecture includes the following modules:
- Model Adapter: The model adapter is used to convert the outputs of specific models into the format required by the framework, supporting both API call models and locally run models.
- Data Adapter: The data adapter is responsible for converting and processing input data to meet various evaluation needs and formats.
- Evaluation Backend:
  - Native: EvalScope's own default evaluation framework, supporting various evaluation modes, including single model evaluation, arena mode, baseline model comparison mode, etc.
  - OpenCompass: Supports OpenCompass as the evaluation backend, providing advanced encapsulation and task simplification, allowing you to submit tasks for evaluation more easily.
  - VLMEvalKit: Supports VLMEvalKit as the evaluation backend, enabling easy initiation of multi-modal evaluation tasks, supporting various multi-modal models and datasets.
  - RAGEval: Supports RAG evaluation, supporting independent evaluation of embedding models and rerankers using MTEB/CMTEB, as well as end-to-end evaluation using RAGAS.
  - ThirdParty: Other third-party evaluation tasks, such as ToolBench.
- Performance Evaluator: Model performance evaluation, responsible for measuring model inference service performance, including performance testing, stress testing, performance report generation, and visualization.
- Evaluation Report: The final generated evaluation report summarizes the model's performance, which can be used for decision-making and further model optimization.
- Visualization: Visualization results help users intuitively understand evaluation results, facilitating analysis and comparison of different model performances.
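In practice, these modules are driven through a single task configuration. Below is a minimal, hedged sketch of how a task touches them, assuming the `TaskConfig` fields shown in the Quick Start plus service-evaluation fields (`eval_type`, `api_url`, `api_key`) that mirror the CLI flags used later in this README:

```python
from evalscope.run import run_task
from evalscope.config import TaskConfig

# Hedged sketch: `eval_type`, `api_url`, and `api_key` are assumed to mirror the
# --eval-type / --api-url / --api-key CLI flags shown later in this README.
task_cfg = TaskConfig(
    model='qwen2.5',                     # Model Adapter: an API-served (or local) model
    eval_type='service',                 # evaluate a deployed OpenAI-compatible service
    api_url='http://127.0.0.1:8801/v1',  # endpoint consumed by the Model Adapter
    api_key='EMPTY',
    datasets=['gsm8k'],                  # Data Adapter: benchmark datasets to load
    limit=10,                            # evaluate only the first 10 samples per dataset
)
run_task(task_cfg=task_cfg)              # produces the Evaluation Report and visualization data
```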
Please scan the QR code below to join our community groups:
(QR codes: Discord Group | WeChat Group | DingTalk Group)
- 🔥[2025.04.29] Added Qwen3 Evaluation Best Practices, welcome to read 📖
- 🔥[2025.04.27] Support for text-to-image evaluation: supports 8 metrics including MPS, HPSv2.1Score, etc., and evaluation benchmarks such as EvalMuse and GenAI-Bench. Refer to the user documentation for more details.
- 🔥[2025.04.10] Model service stress testing tool now supports the `/v1/completions` endpoint (the default endpoint for vLLM benchmarking)
- 🔥[2025.04.08] Support for evaluating embedding model services compatible with the OpenAI API has been added. For more details, check the user guide.
- 🔥[2025.03.27] Added support for the AlpacaEval and ArenaHard evaluation benchmarks. For usage notes, please refer to the documentation.
- 🔥[2025.03.20] The model inference service stress testing now supports generating prompts of specified length using random values. Refer to the user guide for more details.
- 🔥[2025.03.13] Added support for the LiveCodeBench code evaluation benchmark, which can be used by specifying `live_code_bench`. Supports evaluating QwQ-32B on LiveCodeBench, refer to the best practices.
- 🔥[2025.03.11] Added support for the SimpleQA and Chinese SimpleQA evaluation benchmarks. These are used to assess the factual accuracy of models, and you can specify `simple_qa` and `chinese_simpleqa` for use. Support for specifying a judge model is also available. For more details, refer to the relevant parameter documentation.
- 🔥[2025.03.07] Added support for the QwQ-32B model, evaluating the model's reasoning ability and reasoning efficiency; refer to 📖 Best Practices for QwQ-32B Evaluation for more details.
- 🔥[2025.03.04] Added support for the SuperGPQA dataset, which covers 13 categories, 72 first-level disciplines, and 285 second-level disciplines, totaling 26,529 questions. You can use it by specifying `super_gpqa`.
- 🔥[2025.02.27] Added support for evaluating the reasoning efficiency of models. Refer to 📖 Best Practices for Evaluating Thinking Efficiency. This implementation is inspired by the works Overthinking and Underthinking.
- 🔥[2025.02.25] Added support for two model inference-related evaluation benchmarks: MuSR and ProcessBench. To use them, simply specify `musr` and `process_bench` respectively in the datasets parameter.
- 🔥[2025.02.18] Supports the AIME25 dataset, which contains 15 questions (Grok3 scored 93 on this dataset).
- 🔥[2025.02.13] Added support for evaluating DeepSeek distilled models, including the AIME24, MATH-500, and GPQA-Diamond datasets; refer to the best practice. Added support for specifying the `eval_batch_size` parameter to accelerate model evaluation.
- 🔥[2025.01.20] Support for visualizing evaluation results, including single-model evaluation results and multi-model comparison; refer to 📖 Visualizing Evaluation Results for more details. Added an `iquiz` evaluation example, evaluating the IQ and EQ of the model.
- 🔥[2025.01.07] Native backend: Support for model API evaluation is now available. Refer to the 📖 Model API Evaluation Guide for more details. Additionally, support for the `ifeval` evaluation benchmark has been added.
More
- 🔥🔥[2024.12.31] Support for adding benchmark evaluations, refer to the 📖 Benchmark Evaluation Addition Guide; support for custom mixed dataset evaluations, allowing for more comprehensive model evaluations with less data, refer to the 📖 Mixed Dataset Evaluation Guide.
- 🔥[2024.12.13] Model evaluation optimization: no need to pass the `--template-type` parameter anymore; supports starting evaluation with `evalscope eval --args`. Refer to the 📖 User Guide for more details.
- 🔥[2024.11.26] The model inference service performance evaluator has been completely refactored: it now supports local inference service startup and Speed Benchmark; asynchronous call error handling has been optimized. For more details, refer to the 📖 User Guide.
- 🔥[2024.10.31] The best practice for evaluating Multimodal-RAG has been updated, please check the 📖 Blog for more details.
- 🔥[2024.10.23] Supports multimodal RAG evaluation, including the assessment of image-text retrieval using CLIP_Benchmark, and extends RAGAS to support end-to-end multimodal metrics evaluation.
- 🔥[2024.10.8] Support for RAG evaluation, including independent evaluation of embedding models and rerankers using MTEB/CMTEB, as well as end-to-end evaluation using RAGAS.
- 🔥[2024.09.18] Our documentation has been updated to include a blog module, featuring some technical research and discussions related to evaluations. We invite you to 📖 read it.
- 🔥[2024.09.12] Support for LongWriter evaluation, which supports 10,000+ word generation. You can use the benchmark LongBench-Write to measure the long output quality as well as the output length.
- 🔥[2024.08.30] Support for custom dataset evaluations, including text datasets and multimodal image-text datasets.
- 🔥[2024.08.20] Updated the official documentation, including getting started guides, best practices, and FAQs. Feel free to 📖 read it here!
- 🔥[2024.08.09] Simplified the installation process, allowing for pypi installation of vlmeval dependencies; optimized the multimodal model evaluation experience, achieving up to 10x acceleration based on the OpenAI API evaluation chain.
- 🔥[2024.07.31] Important change: the package name `llmuses` has been changed to `evalscope`. Please update your code accordingly.
- 🔥[2024.07.26] Support for VLMEvalKit as a third-party evaluation framework to initiate multimodal model evaluation tasks.
- 🔥[2024.06.29] Support for OpenCompass as a third-party evaluation framework, which we have encapsulated at a higher level, supporting pip installation and simplifying evaluation task configuration.
- 🔥[2024.06.13] EvalScope seamlessly integrates with the fine-tuning framework SWIFT, providing full-chain support from LLM training to evaluation.
- 🔥[2024.06.13] Integrated the Agent evaluation dataset ToolBench.
We recommend using conda to manage your environment and installing dependencies with pip:
Create a conda environment (optional)
```shell
# It is recommended to use Python 3.10
conda create -n evalscope python=3.10
# Activate the conda environment
conda activate evalscope
```
Install dependencies using pip
```shell
pip install evalscope                 # Install Native backend (default)

# Additional options
pip install 'evalscope[opencompass]'  # Install OpenCompass backend
pip install 'evalscope[vlmeval]'      # Install VLMEvalKit backend
pip install 'evalscope[rag]'          # Install RAGEval backend
pip install 'evalscope[perf]'         # Install dependencies for the model performance testing module
pip install 'evalscope[app]'          # Install dependencies for visualization
pip install 'evalscope[all]'          # Install all backends (Native, OpenCompass, VLMEvalKit, RAGEval)
```
Warning

As the project has been renamed to `evalscope`, for versions `v0.4.3` or earlier, you can install using the following command:

```shell
pip install llmuses<=0.4.3
```

To import relevant dependencies using `llmuses`:

```python
from llmuses import ...
```
Download the source code

```shell
git clone https://github.com/modelscope/evalscope.git
```

Install dependencies

```shell
cd evalscope/
pip install -e .                  # Install Native backend
# Additional options
pip install -e '.[opencompass]'   # Install OpenCompass backend
pip install -e '.[vlmeval]'       # Install VLMEvalKit backend
pip install -e '.[rag]'           # Install RAGEval backend
pip install -e '.[perf]'          # Install Perf dependencies
pip install -e '.[app]'           # Install visualization dependencies
pip install -e '.[all]'           # Install all backends (Native, OpenCompass, VLMEvalKit, RAGEval)
```
To evaluate a model on specified datasets using default configurations, this framework supports two ways to initiate evaluation tasks: using the command line or using Python code.
Execute the `eval` command in any directory:

```shell
evalscope eval \
  --model Qwen/Qwen2.5-0.5B-Instruct \
  --datasets gsm8k arc \
  --limit 5
```
When using Python code for evaluation, you need to submit the evaluation task using the `run_task` function, passing a `TaskConfig` as a parameter. It can also be a Python dictionary, yaml file path, or json file path, for example:
Using a Python Dictionary

```python
from evalscope.run import run_task

task_cfg = {
    'model': 'Qwen/Qwen2.5-0.5B-Instruct',
    'datasets': ['gsm8k', 'arc'],
    'limit': 5
}

run_task(task_cfg=task_cfg)
```
More Startup Methods
Using `TaskConfig`

```python
from evalscope.run import run_task
from evalscope.config import TaskConfig

task_cfg = TaskConfig(
    model='Qwen/Qwen2.5-0.5B-Instruct',
    datasets=['gsm8k', 'arc'],
    limit=5
)

run_task(task_cfg=task_cfg)
```
Using a `yaml` file

`config.yaml`:

```yaml
model: Qwen/Qwen2.5-0.5B-Instruct
datasets:
  - gsm8k
  - arc
limit: 5
```

```python
from evalscope.run import run_task

run_task(task_cfg="config.yaml")
```
Using a `json` file

`config.json`:

```json
{
  "model": "Qwen/Qwen2.5-0.5B-Instruct",
  "datasets": ["gsm8k", "arc"],
  "limit": 5
}
```

```python
from evalscope.run import run_task

run_task(task_cfg="config.json")
```
- `--model`: Specifies the `model_id` of the model in ModelScope, which can be automatically downloaded, e.g., Qwen/Qwen2.5-0.5B-Instruct; or use the local path of the model, e.g., `/path/to/model`
- `--datasets`: Dataset names, supports inputting multiple datasets separated by spaces. Datasets will be automatically downloaded from ModelScope. For supported datasets, refer to the Dataset List
- `--limit`: Maximum amount of evaluation data for each dataset. If not specified, it defaults to evaluating all data. Can be used for quick validation
```text
+-----------------------+----------------+-----------------+-----------------+---------------+-------+---------+
| Model Name            | Dataset Name   | Metric Name     | Category Name   | Subset Name   | Num   | Score   |
+=======================+================+=================+=================+===============+=======+=========+
| Qwen2.5-0.5B-Instruct | gsm8k          | AverageAccuracy | default         | main          | 5     | 0.4     |
+-----------------------+----------------+-----------------+-----------------+---------------+-------+---------+
| Qwen2.5-0.5B-Instruct | ai2_arc        | AverageAccuracy | default         | ARC-Easy      | 5     | 0.8     |
+-----------------------+----------------+-----------------+-----------------+---------------+-------+---------+
| Qwen2.5-0.5B-Instruct | ai2_arc        | AverageAccuracy | default         | ARC-Challenge | 5     | 0.4     |
+-----------------------+----------------+-----------------+-----------------+---------------+-------+---------+
```
- Install the dependencies required for visualization, including gradio, plotly, etc.

```shell
pip install 'evalscope[app]'
```
- Start the Visualization Service

Run the following command to start the visualization service:

```shell
evalscope app
```

You can access the visualization service in the browser if the following output appears:

```text
* Running on local URL:  http://127.0.0.1:7861

To create a public link, set `share=True` in `launch()`.
```
(Screenshots: Setting Interface | Model Comparison | Report Overview | Report Details)
For more details, refer to: 📖 Visualization of Evaluation Results
Specify the model API service address (`api_url`) and API key (`api_key`) to evaluate the deployed model API service. In this case, the `eval-type` parameter must be specified as `service`, for example:
For example, to launch a model service using vLLM:
```shell
export VLLM_USE_MODELSCOPE=True && python -m vllm.entrypoints.openai.api_server --model Qwen/Qwen2.5-0.5B-Instruct --served-model-name qwen2.5 --trust_remote_code --port 8801
```
Then, you can use the following command to evaluate the model API service:
```shell
evalscope eval \
  --model qwen2.5 \
  --api-url http://127.0.0.1:8801/v1 \
  --api-key EMPTY \
  --eval-type service \
  --datasets gsm8k \
  --limit 10
```
For more customized evaluations, such as customizing model parameters or dataset parameters, you can use the following command. The evaluation startup method is the same as for simple evaluation. Below shows how to start the evaluation using the `eval` command:

```shell
evalscope eval \
  --model Qwen/Qwen2.5-0.5B-Instruct \
  --model-args revision=master,precision=torch.float16,device_map=auto \
  --generation-config do_sample=true,temperature=0.5 \
  --dataset-args '{"gsm8k": {"few_shot_num": 0, "few_shot_random": false}}' \
  --datasets gsm8k \
  --limit 10
```
- `--model-args`: Model loading parameters, separated by commas, in `key=value` format. Default parameters:
  - `revision`: Model version, default is `master`
  - `precision`: Model precision, default is `auto`
  - `device_map`: Model device allocation, default is `auto`
- `--generation-config`: Generation parameters, separated by commas, in `key=value` format. Default parameters:
  - `do_sample`: Whether to use sampling, default is `false`
  - `max_length`: Maximum length, default is 2048
  - `max_new_tokens`: Maximum length of generation, default is 512
- `--dataset-args`: Configuration parameters for evaluation datasets, passed in `json` format. The key is the dataset name, and the value is the parameters. Note that they must correspond one-to-one with the values in the `--datasets` parameter:
  - `few_shot_num`: Number of few-shot examples
  - `few_shot_random`: Whether to randomly sample few-shot data; if not set, defaults to `true`
Reference: Full Parameter Description
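The same customized evaluation can also be launched from Python. This is a hedged sketch: the `model_args`, `generation_config`, and `dataset_args` fields are assumed to mirror the CLI flags above; check the Full Parameter Description for the exact names and types:

```python
from evalscope.run import run_task
from evalscope.config import TaskConfig

# Assumed field names mirroring --model-args, --generation-config and --dataset-args;
# verify against the Full Parameter Description.
task_cfg = TaskConfig(
    model='Qwen/Qwen2.5-0.5B-Instruct',
    model_args={'revision': 'master', 'precision': 'torch.float16', 'device_map': 'auto'},
    generation_config={'do_sample': True, 'temperature': 0.5},
    dataset_args={'gsm8k': {'few_shot_num': 0, 'few_shot_random': False}},
    datasets=['gsm8k'],
    limit=10,
)
run_task(task_cfg=task_cfg)
```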
EvalScope supports using third-party evaluation frameworks to initiate evaluation tasks, which we call Evaluation Backends. Currently supported Evaluation Backends include:
- Native: EvalScope's own default evaluation framework, supporting various evaluation modes including single model evaluation, arena mode, and baseline model comparison mode.
- OpenCompass: Initiate OpenCompass evaluation tasks through EvalScope. Lightweight, easy to customize, supports seamless integration with the LLM fine-tuning framework ms-swift. 📖 User Guide
- VLMEvalKit: Initiate VLMEvalKit multimodal evaluation tasks through EvalScope. Supports various multimodal models and datasets, and offers seamless integration with the LLM fine-tuning framework ms-swift. 📖 User Guide
- RAGEval: Initiate RAG evaluation tasks through EvalScope, supporting independent evaluation of embedding models and rerankers using MTEB/CMTEB, as well as end-to-end evaluation using RAGAS: 📖 User Guide
- ThirdParty: Third-party evaluation tasks, such as ToolBench and LongBench-Write.
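As a rough illustration of how a backend is selected, the sketch below assumes a top-level `eval_backend` field naming the backend and an `eval_config` dictionary passed through to it; the exact schema each backend expects is described in the user guides linked above:

```python
from evalscope.run import run_task

# Hypothetical schema: `eval_backend` names the backend and `eval_config` is passed
# through to it; the keys inside `eval_config` below are placeholders and should be
# taken from the OpenCompass backend user guide.
task_cfg = {
    'eval_backend': 'OpenCompass',
    'eval_config': {
        'datasets': ['gsm8k'],
        'models': [
            {'path': 'qwen2.5',
             'openai_api_base': 'http://127.0.0.1:8801/v1/chat/completions'},
        ],
    },
}
run_task(task_cfg=task_cfg)
```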
A stress testing tool focused on large language models, which can be customized to support various dataset formats and different API protocol formats.
Reference: Performance Testing 📖 User Guide
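A minimal Python sketch of a stress test run; the `evalscope.perf` entry point and the parameter names below are assumptions modeled on the CLI options (endpoint URL, API protocol, concurrency, request count, prompt dataset) and should be verified against the performance testing user guide:

```python
# Hedged sketch: the module path `evalscope.perf.main` and the parameter names
# below are assumptions; verify them against the performance testing user guide.
from evalscope.perf.main import run_perf_benchmark

task_cfg = {
    'url': 'http://127.0.0.1:8801/v1/chat/completions',  # OpenAI-compatible endpoint under test
    'api': 'openai',      # API protocol format
    'model': 'qwen2.5',   # served model name
    'parallel': 5,        # number of concurrent requests
    'number': 20,         # total number of requests to send
    'dataset': 'openqa',  # prompt source used for the stress test
}
run_perf_benchmark(task_cfg)
```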
Supports wandb for recording results
Supports swanlab for recording results
Supports Speed Benchmark
It supports speed testing and provides speed benchmarks similar to those found in the official Qwen reports:
Speed Benchmark Results:

```text
+---------------+-----------------+----------------+
| Prompt Tokens | Speed(tokens/s) | GPU Memory(GB) |
+---------------+-----------------+----------------+
| 1             | 50.69           | 0.97           |
| 6144          | 51.36           | 1.23           |
| 14336         | 49.93           | 1.59           |
| 30720         | 49.56           | 2.34           |
+---------------+-----------------+----------------+
```
EvalScope supports custom dataset evaluation. For detailed information, please refer to the Custom Dataset Evaluation 📖 User Guide.
The Arena mode allows multiple candidate models to be evaluated through pairwise battles; you can choose to use the AI Enhanced Auto-Reviewer (AAR) automatic evaluation process or manual evaluation to obtain the evaluation report.
Refer to: Arena Mode📖 User Guide
EvalScope, as the official evaluation tool of ModelScope, is continuously optimizing its benchmark evaluation features! We invite you to refer to the Contribution Guide to easily add your own evaluation benchmarks and share your contributions with the community. Let's work together to support the growth of EvalScope and make our tools even better! Join us now!
- Support for better evaluation report visualization
- Support for mixed evaluations across multiple datasets
- RAG evaluation
- VLM evaluation
- Agents evaluation
- vLLM
- Distributed evaluation
- Multi-modal evaluation
- Benchmarks
- GAIA
- GPQA
- MBPP