Python SDK for AI agent monitoring, LLM cost tracking, benchmarking, and more. Integrates with most LLMs and agent frameworks including OpenAI Agents SDK, CrewAI, Langchain, Autogen, AG2, and CamelAI
AgentOps-AI/agentops
AgentOps helps developers build, evaluate, and monitor AI agents. From prototype to production.
| Feature | Description |
|---|---|
| 📊 Replay Analytics and Debugging | Step-by-step agent execution graphs |
| 💸 LLM Cost Management | Track spend with LLM foundation model providers |
| 🧪 Agent Benchmarking | Test your agents against 1,000+ evals |
| 🔐 Compliance and Security | Detect common prompt injection and data exfiltration exploits |
| 🤝 Framework Integrations | Native integrations with CrewAI, AG2 (AutoGen), Camel AI, & LangChain |
```bash
pip install agentops
```
Initialize the AgentOps client and automatically get analytics on all your LLM calls.
```python
import agentops

# Beginning of your program (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

...

# End of program
agentops.end_session('Success')
```
All your sessions can be viewed on the AgentOps dashboard.
Add powerful observability to your agents, tools, and functions with as little code as possible: one line at a time.
Refer to our documentation.
```python
# Create a session span (root for all other spans)
from agentops.sdk.decorators import session

@session
def my_workflow():
    # Your session code here
    return result
```
```python
# Create an agent span for tracking agent operations
from agentops.sdk.decorators import agent

@agent
class MyAgent:
    def __init__(self, name):
        self.name = name
    # Agent methods here
```
```python
# Create operation/task spans for tracking specific operations
from agentops.sdk.decorators import operation, task

@operation  # or @task
def process_data(data):
    # Process the data
    return result
```
```python
# Create workflow spans for tracking multi-operation workflows
from agentops.sdk.decorators import workflow

@workflow
def my_workflow(data):
    # Workflow implementation
    return result
```
```python
# Nest decorators for proper span hierarchy
from agentops.sdk.decorators import session, agent, operation

@agent
class MyAgent:
    @operation
    def nested_operation(self, message):
        return f"Processed: {message}"

    @operation
    def main_operation(self):
        result = self.nested_operation("test message")
        return result

@session
def my_session():
    agent = MyAgent()
    return agent.main_operation()
```
All decorators support:
- Input/Output Recording
- Exception Handling
- Async/await functions
- Generator functions
- Custom attributes and names
Build multi-agent systems with tools, handoffs, and guardrails. AgentOps provides first-class integration with OpenAI Agents.
```bash
pip install openai-agents
```
Build Crew agents with observability in just two lines of code. Simply set an `AGENTOPS_API_KEY` in your environment, and your crews will get automatic monitoring on the AgentOps dashboard.
```bash
pip install 'crewai[agentops]'
```
With only two lines of code, add full observability and monitoring to AG2 (formerly AutoGen) agents. Set an `AGENTOPS_API_KEY` in your environment and call `agentops.init()`.
Track and analyze CAMEL agents with full observability. Set an `AGENTOPS_API_KEY` in your environment and initialize AgentOps to get started.
- Camel AI - Advanced agent communication framework
- AgentOps integration example
- Official Camel AI documentation
Installation
```bash
pip install "camel-ai[all]==0.2.11"
pip install agentops
```
```python
import os

import agentops
from camel.agents import ChatAgent
from camel.messages import BaseMessage
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

# Initialize AgentOps
agentops.init(os.getenv("AGENTOPS_API_KEY"), tags=["CAMEL Example"])

# Import toolkits after AgentOps init for tracking
from camel.toolkits import SearchToolkit

# Set up the agent with search tools
sys_msg = BaseMessage.make_assistant_message(
    role_name='Tools calling operator',
    content='You are a helpful assistant',
)

# Configure tools and model
tools = [*SearchToolkit().get_tools()]
model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O_MINI,
)

# Create and run the agent
camel_agent = ChatAgent(
    system_message=sys_msg,
    model=model,
    tools=tools,
)
response = camel_agent.step("What is AgentOps?")
print(response)

agentops.end_session("Success")
```
Check out our Camel integration guide for more examples, including multi-agent scenarios.
AgentOps works seamlessly with applications built using Langchain. To use the handler, install Langchain as an optional dependency:
Installation
```bash
pip install agentops[langchain]
```
To use the handler, import it and pass it as a callback:
```python
import os

from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from agentops.partners.langchain_callback_handler import LangchainCallbackHandler

AGENTOPS_API_KEY = os.environ['AGENTOPS_API_KEY']
handler = LangchainCallbackHandler(api_key=AGENTOPS_API_KEY, tags=['Langchain Example'])

llm = ChatOpenAI(
    openai_api_key=OPENAI_API_KEY,
    callbacks=[handler],
    model='gpt-3.5-turbo',
)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    callbacks=[handler],  # You must pass in a callback handler to record your agent
    handle_parsing_errors=True,
)
```
Check out the Langchain Examples Notebook for more details, including async handlers.
First-class support for Cohere (>=5.4.0). This is a living integration; should you need any added functionality, please message us on Discord!
Installation
```bash
pip install cohere
```
```python
import cohere
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

co = cohere.Client()

chat = co.chat(
    message="Is it pronounced ceaux-hear or co-hehray?"
)
print(chat)

agentops.end_session('Success')
```
```python
import cohere
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

co = cohere.Client()

stream = co.chat_stream(
    message="Write me a haiku about the synergies between Cohere and AgentOps"
)

for event in stream:
    if event.event_type == "text-generation":
        print(event.text, end='')

agentops.end_session('Success')
```
Track agents built with the Anthropic Python SDK (>=0.32.0).
Installation
```bash
pip install anthropic
```
```python
import os

import anthropic
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = anthropic.Anthropic(
    # This is the default and can be omitted
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)

message = client.messages.create(
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Tell me a cool fact about AgentOps",
        }
    ],
    model="claude-3-opus-20240229",
)
print(message.content)

agentops.end_session('Success')
```
Streaming
```python
import os

import anthropic
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = anthropic.Anthropic(
    # This is the default and can be omitted
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)

stream = client.messages.create(
    max_tokens=1024,
    model="claude-3-opus-20240229",
    messages=[
        {
            "role": "user",
            "content": "Tell me something cool about streaming agents",
        }
    ],
    stream=True,
)

response = ""
for event in stream:
    if event.type == "content_block_delta":
        response += event.delta.text
    elif event.type == "message_stop":
        print("\n")
        print(response)
        print("\n")
```
Async
```python
import asyncio
import os

from anthropic import AsyncAnthropic

client = AsyncAnthropic(
    # This is the default and can be omitted
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)

async def main() -> None:
    message = await client.messages.create(
        max_tokens=1024,
        messages=[
            {
                "role": "user",
                "content": "Tell me something interesting about async agents",
            }
        ],
        model="claude-3-opus-20240229",
    )
    print(message.content)

asyncio.run(main())
```
Track agents built with the Mistral Python SDK (>=0.32.0).
Installation
```bash
pip install mistralai
```
Sync
```python
import os

from mistralai import Mistral
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)

message = client.chat.complete(
    messages=[
        {
            "role": "user",
            "content": "Tell me a cool fact about AgentOps",
        }
    ],
    model="open-mistral-nemo",
)
print(message.choices[0].message.content)

agentops.end_session('Success')
```
Streaming
```python
import os

from mistralai import Mistral
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)

message = client.chat.stream(
    messages=[
        {
            "role": "user",
            "content": "Tell me something cool about streaming agents",
        }
    ],
    model="open-mistral-nemo",
)

response = ""
for event in message:
    if event.data.choices[0].finish_reason == "stop":
        print("\n")
        print(response)
        print("\n")
    else:
        response += event.data.choices[0].delta.content
```
Async
```python
import asyncio
import os

from mistralai import Mistral

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)

async def main() -> None:
    message = await client.chat.complete_async(
        messages=[
            {
                "role": "user",
                "content": "Tell me something interesting about async agents",
            }
        ],
        model="open-mistral-nemo",
    )
    print(message.choices[0].message.content)

asyncio.run(main())
```
Async Streaming
```python
import asyncio
import os

from mistralai import Mistral

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)

async def main() -> None:
    message = await client.chat.stream_async(
        messages=[
            {
                "role": "user",
                "content": "Tell me something interesting about async streaming agents",
            }
        ],
        model="open-mistral-nemo",
    )
    response = ""
    async for event in message:
        if event.data.choices[0].finish_reason == "stop":
            print("\n")
            print(response)
            print("\n")
        else:
            response += event.data.choices[0].delta.content

asyncio.run(main())
```
Track agents built with the CamelAI Python SDK (>=0.32.0).
Installation
```bash
pip install camel-ai[all]
pip install agentops
```
```python
# Import dependencies
import agentops
import os
from getpass import getpass
from dotenv import load_dotenv

# Set keys
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY") or "<your openai key here>"
agentops_api_key = os.getenv("AGENTOPS_API_KEY") or "<your agentops key here>"
```
You can find usage examples here!
AgentOps provides support for LiteLLM (>=1.3.1), allowing you to call 100+ LLMs using the same input/output format.
Installation
```bash
pip install litellm
```
```python
# Do not use LiteLLM like this
# from litellm import completion
# ...
# response = completion(model="claude-3", messages=messages)

# Use LiteLLM like this
import litellm

...

response = litellm.completion(model="claude-3", messages=messages)
# or
response = await litellm.acompletion(model="claude-3", messages=messages)
```
AgentOps works seamlessly with applications built using LlamaIndex, a framework for building context-augmented generative AI applications with LLMs.
Installation
```bash
pip install llama-index-instrumentation-agentops
```
To use the handler, import and set the global handler:
```python
from llama_index.core import set_global_handler

# NOTE: Feel free to set your AgentOps environment variables (e.g., 'AGENTOPS_API_KEY')
# as outlined in the AgentOps documentation, or pass the equivalent keyword arguments
# anticipated by AgentOps' AOClient as **eval_params in set_global_handler.
set_global_handler("agentops")
```
Check out the LlamaIndex docs for more details.
AgentOps provides support for the Llama Stack Python Client (>=0.0.53), allowing you to monitor your agentic applications.
Track and analyze SwarmZero agents with full observability. Set an `AGENTOPS_API_KEY` in your environment and initialize AgentOps to get started.
- SwarmZero - Advanced multi-agent framework
- AgentOps integration example
- SwarmZero AI integration example
- SwarmZero AI - AgentOps documentation
- Official SwarmZero Python SDK
Installation
```bash
pip install swarmzero
pip install agentops
```
```python
from dotenv import load_dotenv
load_dotenv()

import agentops
agentops.init(<INSERT YOUR API KEY HERE>)

from swarmzero import Agent, Swarm
# ...
```
(coming soon!)
| Platform | Dashboard | Evals |
|---|---|---|
| ✅ Python SDK | ✅ Multi-session and cross-session metrics | ✅ Custom eval metrics |
| 🚧 Evaluation builder API | ✅ Custom event tag tracking | 🔜 Agent scorecards |
| ✅ Javascript/Typescript SDK | ✅ Session replays | 🔜 Evaluation playground + leaderboard |
| Performance testing | Environments | LLM testing | Reasoning and execution testing |
|---|---|---|---|
| ✅ Event latency analysis | 🔜 Non-stationary environment testing | 🔜 LLM non-deterministic function detection | 🚧 Infinite loops and recursive thought detection |
| ✅ Agent workflow execution pricing | 🔜 Multi-modal environments | 🚧 Token limit overflow flags | 🔜 Faulty reasoning detection |
| 🚧 Success validators (external) | 🔜 Execution containers | 🔜 Context limit overflow flags | 🔜 Generative code validators |
| 🔜 Agent controllers/skill tests | ✅ Honeypot and prompt injection detection (PromptArmor) | 🔜 API bill tracking | 🔜 Error breakpoint analysis |
| 🔜 Information context constraint testing | 🔜 Anti-agent roadblocks (i.e. Captchas) | 🔜 CI/CD integration checks | |
| 🔜 Regression testing | 🔜 Multi-agent framework visualization | | |
Without the right tools, AI agents are slow, expensive, and unreliable. Our mission is to bring your agent from prototype to production. Here's why AgentOps stands out:
- Comprehensive Observability: Track your AI agents' performance, user interactions, and API usage.
- Real-Time Monitoring: Get instant insights with session replays, metrics, and live monitoring tools.
- Cost Control: Monitor and manage your spend on LLM and API calls.
- Failure Detection: Quickly identify and respond to agent failures and multi-agent interaction issues.
- Tool Usage Statistics: Understand how your agents utilize external tools with detailed analytics.
- Session-Wide Metrics: Gain a holistic view of your agents' sessions with comprehensive statistics.
AgentOps is designed to make agent observability, testing, and monitoring easy.
Check out our growth in the community:
Repository | Stars |
---|---|
42787 | |
34446 | |
18287 | |
5166 | |
5050 | |
4713 | |
2723 | |
2007 | |
272 | |
195 | |
134 | |
55 | |
47 | |
27 | |
19 | |
14 | |
13 |
Generated using github-dependents-info, by Nicolas Vuillamy.