apache/burr

Build applications that make decisions (chatbots, agents, simulations, etc...). Monitor, trace, persist, and execute on your own infrastructure.
Apache Burr (incubating) makes it easy to develop applications that make decisions (chatbots, agents, simulations, etc...) from simple python building blocks.
Apache Burr works well for any application that uses LLMs, and can integrate with any of your favorite frameworks. Burr includes a UI that can track/monitor/trace your system in real time, along with pluggable persisters (e.g. for memory) to save & load application state.
Link to documentation. Quick (<3 min) video intro here. Longer video intro & walkthrough. Blog post here. Join Discord for help/questions here.
Install from PyPI:

```shell
pip install "burr[start]"
```

(See the docs if you're using Poetry.) Then run the UI server:

```shell
burr
```
This will open up Burr's telemetry UI. It comes loaded with some default data so you can click around. It also has a demo chat application that demonstrates what the UI captures, letting you see things change in real time. Hit the "Demos" sidebar on the left and select chatbot. Chatting requires the OPENAI_API_KEY environment variable to be set, but you can still see how it works without an API key.
Next, start coding / running examples:
```shell
git clone https://github.com/apache/burr && cd burr/examples/hello-world-counter
python application.py
```

You'll see the counter example running in the terminal, along with the trace being tracked in the UI. See if you can find it.
For more details see the getting started guide.
With Apache Burr you express your application as a state machine (i.e. a graph/flowchart). You can (and should!) use it for anything in which you have to manage state, track complex decisions, add human feedback, or dictate an idempotent, self-persisting workflow.
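To make the state-machine framing concrete, here is a minimal, framework-free sketch in plain Python (dicts and functions only; this illustrates the model, not Burr's API, and all names are ours): actions read and write state, and a transition table picks the next action.

```python
# A toy state machine: actions are functions from state -> new state,
# and a transition table maps each action to the next one.
# Illustrative only -- not Burr's actual API.

def human_input(state: dict) -> dict:
    return {**state, "prompt": state["pending_prompt"]}

def ai_response(state: dict) -> dict:
    reply = f"echo: {state['prompt']}"  # stand-in for an LLM call
    return {**state, "response": reply}

actions = {"human_input": human_input, "ai_response": ai_response}
transitions = {"human_input": "ai_response", "ai_response": "human_input"}

state = {"pending_prompt": "Who was Aaron Burr, sir?"}
current = "human_input"
while True:
    state = actions[current](state)
    if current == "ai_response":  # analogue of halt_after
        break
    current = transitions[current]

print(state["response"])  # echo: Who was Aaron Burr, sir?
```

Burr's value is taking this loop off your hands: it runs the graph, tracks each step in the UI, and persists the state between steps.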
The core API is simple -- the Burr hello-world looks like this (plug in your own LLM, or copy from the docs for gpt-X):
```python
from burr.core import action, State, ApplicationBuilder

@action(reads=[], writes=["prompt", "chat_history"])
def human_input(state: State, prompt: str) -> State:
    # your code -- write what you want here, for example
    chat_item = {"role": "user", "content": prompt}
    return state.update(prompt=prompt).append(chat_history=chat_item)

@action(reads=["chat_history"], writes=["response", "chat_history"])
def ai_response(state: State) -> State:
    # query the LLM however you want (or don't use an LLM, up to you...)
    response = _query_llm(state["chat_history"])  # Burr doesn't care how you use LLMs!
    chat_item = {"role": "system", "content": response}
    return state.update(response=response).append(chat_history=chat_item)

app = (
    ApplicationBuilder()
    .with_actions(human_input, ai_response)
    .with_transitions(
        ("human_input", "ai_response"),
        ("ai_response", "human_input"),
    )
    .with_state(chat_history=[])
    .with_entrypoint("human_input")
    .build()
)
*_, state = app.run(halt_after=["ai_response"], inputs={"prompt": "Who was Aaron Burr, sir?"})
print("answer:", app.state["response"])
```
Apache Burr includes:
- A (dependency-free) low-abstraction python library that enables you to build and manage state machines with simple python functions
- A UI you can use to view execution telemetry for introspection and debugging
- A set of integrations to make it easier to persist state, connect to telemetry, and integrate with other systems
Apache Burr can be used to power a variety of applications, including:
- A simple gpt-like chatbot
- A stateful RAG-based chatbot
- An LLM-based adventure game
- An interactive assistant for writing emails
As well as a variety of (non-LLM) use-cases, including a time-series forecasting simulation, and hyperparameter tuning.
And a lot more!
Using hooks and other integrations you can (a) integrate with any of your favorite vendors (LLM observability, storage, etc...), and (b) build custom actions that delegate to your favorite libraries (like Apache Hamilton).
Apache Burr will not tell you how to build your models, how to query APIs, or how to manage your data. It will help you tie all these together in a way that scales with your needs and makes following the logic of your system easy. Burr comes out of the box with a host of integrations, including tooling to build a UI in Streamlit and watch your state machine execute.
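The hooks mentioned above follow the familiar observer pattern. A framework-agnostic sketch of the idea (class and method names here are illustrative, not Burr's hook API): the runner calls registered hooks around each step, so telemetry or vendor integrations plug in without touching action code.

```python
# Generic observer-style lifecycle hook, illustrating the pattern only.
class PrintStepHook:
    def pre_step(self, action_name: str) -> None:
        print(f"about to run: {action_name}")

    def post_step(self, action_name: str, state: dict) -> None:
        print(f"finished: {action_name}, state keys: {sorted(state)}")

def run_step(action, name, state, hooks):
    # The runner, not the action, notifies every hook before and after the step.
    for hook in hooks:
        hook.pre_step(name)
    new_state = action(state)
    for hook in hooks:
        hook.post_step(name, new_state)
    return new_state

state = run_step(lambda s: {**s, "greeting": "hi"}, "greet", {}, [PrintStepHook()])
```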
See the documentation for getting started, and follow the example. Then read through some of the concepts and write your own application!
While Apache Burr is attempting something (somewhat) unique, there are a variety of tools that occupy similar spaces:
| Criteria | Apache Burr | LangGraph | Temporal | LangChain | Superagent | Apache Hamilton |
|---|---|---|---|---|---|---|
| Explicitly models a state machine | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ |
| Framework-agnostic | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ |
| Asynchronous event-based orchestration | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ |
| Built for core web-service logic | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ |
| Open-source user-interface for monitoring/tracing | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ |
| Works with non-LLM use-cases | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ |
Apache Burr is named after Aaron Burr, founding father, third VP of the United States, and murderer/arch-nemesis of Alexander Hamilton. What's the connection with Hamilton? This is DAGWorks' second open-source library release after the Apache Hamilton library. We imagine a world in which Burr and Hamilton lived in harmony and saw through their differences to better the union. We originally built Apache Burr as a harness to handle state between executions of Apache Hamilton DAGs (because DAGs don't have cycles), but realized that it has a wide array of applications and decided to release it more broadly.
"After evaluating several other obfuscating LLM frameworks, their elegant yet comprehensive state management solution proved to be the powerful answer to rolling out robots driven by AI decision-making."
Ashish Ghosh, CTO, Peanut Robotics
"Of course, you can use it [LangChain], but whether it's really production-ready and improves the time from 'code-to-prod' [...], we've been doing LLM apps for two years, and the answer is no [...] All these 'all-in-one' libs suffer from this [...]. Honestly, take a look at Burr. Thank me later."
Reddit user cyan2k, LocalLlama subreddit
"Using Burr is a no-brainer if you want to build a modular AI application. It is so easy to build with, and I especially love their UI which makes debugging a piece of cake. And the always-ready-to-help team is the cherry on top."
Ishita, Founder, Watto.ai
"I just came across Burr and I'm like WOW, this seems like you guys predicted this exact need when building this. No weird esoteric concepts just because it's AI."
Matthew Rideout, Staff Software Engineer, Paxton AI
"Burr's state management part is really helpful for creating state snapshots and building debugging, replaying, and even evaluation cases around that."
Rinat Gareev, Senior Solutions Architect, Provectus
"I have been using Burr over the past few months, and compared to many agentic LLM platforms out there (e.g. LangChain, CrewAi, AutoGen, Agency Swarm, etc), Burr provides a more robust framework for designing complex behaviors."
Hadi Nayebi, Co-founder, CognitiveGraphs
"Moving from LangChain to Burr was a game-changer!
- Time-Saving: It took me just a few hours to get started with Burr, compared to the days and weeks I spent trying to navigate LangChain.
- Cleaner Implementation: With Burr, I could finally have a cleaner, more sophisticated, and stable implementation. No more wrestling with complex codebases.
- Team Adoption: I pitched Burr to my teammates, and we pivoted our entire codebase to it. It's been a smooth ride ever since."
Aditya K., DS Architect, TaskHuman
While Apache Burr is stable and well-tested, we have quite a few tools/features on our roadmap!
- FastAPI integration + hosted deployment -- make it really easy to get Apache Burr in an app in production without thinking about REST APIs
- Various efficiency/usability improvements for the core library (see planned capabilities for more details). This includes:
- First-class support for retries + exception management
- More integration with popular frameworks (LCEL, LLamaIndex, Apache Hamilton, etc...)
- Capturing & surfacing extra metadata, e.g. annotations for particular points in time, that you can then pull out for fine-tuning, etc.
- Improvements to the pydantic-based typing system
- Tooling for hosted execution of state machines, integrating with your infrastructure (Ray, modal, FastAPI + EC2, etc...)
- Additional storage integrations with technologies like MySQL, S3, etc., so you can run Apache Burr on top of what you have available.
If you want to avoid self-hosting the above solutions, we're building Burr Cloud. To let us know you're interested, sign up here for the waitlist to get access.
We welcome contributors! To get started on developing, see the developer-facing docs.
Users who have contributed core functionality, integrations, or examples.
- Elijah ben Izzy
- Stefan Krawczyk
- Joseph Booth
- Nandani Thakur
- Thierry Jean
- Hamza Farhan
- Abdul Rafay
- Margaret Lange
Users who have contributed small docs fixes, design suggestions, and found bugs
Apache Burr is released under the Apache 2.0 License. See LICENSE for details.
We're very supportive of changes by new contributors, big or small! Make sure to discuss potential changes by creating an issue or commenting on an existing one before opening a pull request. Good first contributions include creating an example or an integration with your favorite Python library!
To contribute, check out our contributing guidelines, our developer setup guide, and our Code of Conduct.