SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
[Read the Docs]
Code and data for the following works:
- [ICLR 2025] SWE-bench Multimodal: Do AI Systems Generalize to Visual Software Domains?
- [ICLR 2024 Oral] SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
- [Jan. 13, 2025]: We've integrated SWE-bench Multimodal (paper, dataset) into this repository! Unlike SWE-bench, we've kept evaluation for the test split private. Submit to the leaderboard using sb-cli, our new cloud-based evaluation tool.
- [Jan. 11, 2025]: Thanks to Modal, you can now run evaluations entirely on the cloud! See here for more details.
- [Aug. 13, 2024]: Introducing SWE-bench Verified! Part 2 of our collaboration with OpenAI Preparedness. A subset of 500 problems that real software engineers have confirmed are solvable. Check out more in the report!
- [Jun. 27, 2024]: We have an exciting update for SWE-bench - with support from OpenAI's Preparedness team: we're moving to a fully containerized evaluation harness using Docker for more reproducible evaluations! Read more in our report.
- [Apr. 2, 2024]: We have released SWE-agent, which sets the state of the art on the full SWE-bench test set! (Tweet 🔗)
- [Jan. 16, 2024]: SWE-bench has been accepted to ICLR 2024 as an oral presentation! (OpenReview 🔗)
SWE-bench is a benchmark for evaluating large language models on real-world software issues collected from GitHub. Given a codebase and an issue, a language model is tasked with generating a patch that resolves the described problem.
To access SWE-bench, copy and run the following code:
```python
from datasets import load_dataset

swebench = load_dataset('princeton-nlp/SWE-bench', split='test')
```
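Each row of the dataset is a single task instance. A minimal sketch for peeking at one (field names are taken from the published dataset and may differ slightly across versions):

```python
# Inspect one task instance: a repository snapshot, an issue, and the gold patch
example = swebench[0]
print(example["instance_id"])                    # e.g. "<org>__<repo>-<issue number>"
print(example["repo"], example["base_commit"])   # repository and commit the patch applies to
print(example["problem_statement"][:300])        # the GitHub issue text given to the model
```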
SWE-bench uses Docker for reproducible evaluations. Follow the instructions in the Docker setup guide to install Docker on your machine. If you're setting up on Linux, we recommend seeing the post-installation steps as well.
Finally, to build SWE-bench from source, follow these steps:
```bash
git clone git@github.com:princeton-nlp/SWE-bench.git
cd SWE-bench
pip install -e .
```
Test your installation by running:
```bash
python -m swebench.harness.run_evaluation \
    --predictions_path gold \
    --max_workers 1 \
    --instance_ids sympy__sympy-20590 \
    --run_id validate-gold
```
Note
If you are using a macOS M-series machine or another ARM-based system, add `--namespace ''` to the above script. By default, the evaluation script pulls images (built for Linux) from DockerHub; adding `--namespace ''` causes the evaluation images to be built locally instead.
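For example, on an ARM-based machine the installation check above becomes:

```bash
python -m swebench.harness.run_evaluation \
    --predictions_path gold \
    --max_workers 1 \
    --instance_ids sympy__sympy-20590 \
    --run_id validate-gold \
    --namespace ''
```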
Evaluate patch predictions on SWE-bench Lite with the following command:
```bash
python -m swebench.harness.run_evaluation \
    --dataset_name princeton-nlp/SWE-bench_Lite \
    --predictions_path <path_to_predictions> \
    --max_workers <num_workers> \
    --run_id <run_id>
    # use --predictions_path 'gold' to verify the gold patches
    # use --run_id to name the evaluation run
    # use --modal true to run on Modal
```
This command will generate Docker build logs (`logs/build_images`) and evaluation logs (`logs/run_evaluation`) in the current directory.
The final evaluation results will be stored in the `evaluation_results` directory.
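The file passed to `--predictions_path` is a list of predictions, one per task instance. Below is a minimal sketch of writing one by hand, assuming the keys the harness conventionally expects (`instance_id`, `model_name_or_path`, `model_patch`); see the evaluation tutorial for the authoritative format:

```python
import json

# One prediction per task instance; model_patch is a unified diff against the
# repository at the instance's base_commit.
predictions = [
    {
        "instance_id": "sympy__sympy-20590",
        "model_name_or_path": "my-model",  # any label identifying your model
        "model_patch": "diff --git a/path/to/file.py b/path/to/file.py\n...",  # placeholder diff
    }
]

with open("predictions.json", "w") as f:
    json.dump(predictions, f)
```

Then pass the resulting path via `--predictions_path predictions.json`.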
Warning
SWE-bench evaluation can be resource intensive. We recommend running on an x86_64 machine with at least 120GB of free storage, 16GB of RAM, and 8 CPU cores. We recommend using fewer than `min(0.75 * os.cpu_count(), 24)` workers for `--max_workers`.
If running with Docker Desktop, make sure to increase your virtual disk space to have ~120GB free, and set `--max_workers` consistently with the above for the CPUs available to Docker.
Support for `arm64` machines is experimental.
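If you want to compute that worker cap directly, a quick snippet:

```python
import os

# Heuristic from the warning above: stay below min(0.75 * CPU count, 24)
worker_cap = min(int(0.75 * os.cpu_count()), 24)
print(f"Use a --max_workers value below {worker_cap}")
```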
To see the full list of arguments for the evaluation harness, run:
```bash
python -m swebench.harness.run_evaluation --help
```
See the evaluation tutorial for the full rundown on datasets you can evaluate. If you're looking for non-local, cloud-based evaluations, check out...
- sb-cli, our tool for running evaluations automatically on AWS, or...
- Running SWE-bench evaluation on Modal. Details here.
Additionally, you can also:
- Train your own models on our pre-processed datasets. (🆕 Check out SWE-smith, a dedicated toolkit for creating SWE training data.)
- Run inference on existing models (both local and API models). The inference step is where you give the model a repo + issue and have it generate a fix; see the sketch below this list.
- Run SWE-bench's data collection procedure (tutorial) on your own repositories to make new SWE-bench tasks.
⚠️ We are temporarily pausing support for queries around creating SWE-bench instances. Please see the note in the tutorial.
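As a rough illustration of the inference step mentioned above (treat this as a hedged sketch; the entry points and flags live under `swebench/inference`, and that module's README is the authoritative reference):

```bash
# Sketch: generate patch predictions with an API model on the Oracle retrieval setting
python -m swebench.inference.run_api \
    --dataset_name_or_path princeton-nlp/SWE-bench_oracle \
    --model_name_or_path gpt-4 \
    --output_dir ./outputs
```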
We would love to hear from the broader NLP, Machine Learning, and Software Engineering research communities, and we welcome any contributions, pull requests, or issues! To do so, please file a new pull request or issue and fill in the corresponding templates accordingly. We'll be sure to follow up shortly!
Contact persons: Carlos E. Jimenez and John Yang (email: carlosej@princeton.edu, johnby@stanford.edu).
MIT license. Check LICENSE.md.
If you find our work helpful, please use the following citations.
For SWE-bench (Verified):
```bibtex
@inproceedings{jimenez2024swebench,
    title={{SWE}-bench: Can Language Models Resolve Real-world Github Issues?},
    author={Carlos E Jimenez and John Yang and Alexander Wettig and Shunyu Yao and Kexin Pei and Ofir Press and Karthik R Narasimhan},
    booktitle={The Twelfth International Conference on Learning Representations},
    year={2024},
    url={https://openreview.net/forum?id=VTF8yNQM66}
}
```
For SWE-bench Multimodal:
```bibtex
@inproceedings{yang2024swebenchmultimodal,
    title={{SWE}-bench Multimodal: Do AI Systems Generalize to Visual Software Domains?},
    author={John Yang and Carlos E. Jimenez and Alex L. Zhang and Kilian Lieret and Joyce Yang and Xindi Wu and Ori Press and Niklas Muennighoff and Gabriel Synnaeve and Karthik R. Narasimhan and Diyi Yang and Sida I. Wang and Ofir Press},
    booktitle={The Thirteenth International Conference on Learning Representations},
    year={2025},
    url={https://openreview.net/forum?id=riTiq3i21b}
}
```
For SWE-bench Multilingual:
```bibtex
@misc{yang2025swesmith,
    title={SWE-smith: Scaling Data for Software Engineering Agents},
    author={John Yang and Kilian Lieret and Carlos E. Jimenez and Alexander Wettig and Kabir Khandpur and Yanzhe Zhang and Binyuan Hui and Ofir Press and Ludwig Schmidt and Diyi Yang},
    year={2025},
    eprint={2504.21798},
    archivePrefix={arXiv},
    primaryClass={cs.SE},
    url={https://arxiv.org/abs/2504.21798},
}
```