
DSPy: Programming—not prompting—Foundation Models

Documentation: DSPy Docs



DSPy is the framework for programming—rather than prompting—language models. It allows you to iterate fast on building modular AI systems and offers algorithms for optimizing their prompts and weights, whether you're building simple classifiers, sophisticated RAG pipelines, or agent loops.

DSPy stands for Declarative Self-improving Python. Instead of brittle prompts, you write compositional Python code and use DSPy to teach your LM to deliver high-quality outputs. Learn more via our official documentation site, or meet the community, seek help, or start contributing via this GitHub repo and our Discord server.

Documentation: dspy.ai

Please go to the DSPy Docs at dspy.ai.

Installation

pip install dspy

To install the very latest from main:

pip install git+https://github.com/stanfordnlp/dspy.git

📜 Citation & Reading More

If you're looking to understand the framework, please go to the DSPy Docs at dspy.ai.

If you're looking to understand the underlying research, this is a set of our papers:

[Jul'25] GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning
[Jun'24] Optimizing Instructions and Demonstrations for Multi-Stage Language Model Programs
[Oct'23] DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines
[Jul'24] Fine-Tuning and Prompt Optimization: Two Great Steps that Work Better Together
[Jun'24] Prompts as Auto-Optimized Training Hyperparameters
[Feb'24] Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models
[Jan'24] In-Context Learning for Extreme Multi-Label Classification
[Dec'23] DSPy Assertions: Computational Constraints for Self-Refining Language Model Pipelines
[Dec'22] Demonstrate-Search-Predict: Composing Retrieval & Language Models for Knowledge-Intensive NLP

To stay up to date or learn more, follow @DSPyOSS on Twitter or the DSPy page on LinkedIn.

The DSPy logo is designed by Chuyi Zhang.

If you use DSPy or DSP in a research paper, please cite our work as follows:

@inproceedings{khattab2024dspy,
  title={DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines},
  author={Khattab, Omar and Singhvi, Arnav and Maheshwari, Paridhi and Zhang, Zhiyuan and Santhanam, Keshav and Vardhamanan, Sri and Haq, Saiful and Sharma, Ashutosh and Joshi, Thomas T. and Moazam, Hanna and Miller, Heather and Zaharia, Matei and Potts, Christopher},
  journal={The Twelfth International Conference on Learning Representations},
  year={2024}
}

@article{khattab2022demonstrate,
  title={Demonstrate-Search-Predict: Composing Retrieval and Language Models for Knowledge-Intensive {NLP}},
  author={Khattab, Omar and Santhanam, Keshav and Li, Xiang Lisa and Hall, David and Liang, Percy and Potts, Christopher and Zaharia, Matei},
  journal={arXiv preprint arXiv:2212.14024},
  year={2022}
}
