Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, load balancing and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, VLLM, NVIDIA NIM]
Call 100+ LLMs in OpenAI format. [Bedrock, Azure, OpenAI, VertexAI, Anthropic, Groq, etc.]

LLMs - Call 100+ LLMs (Python SDK + AI Gateway)
All Supported Endpoints - `/chat/completions`, `/responses`, `/embeddings`, `/images`, `/audio`, `/batches`, `/rerank`, `/a2a`, `/messages` and more.
```shell
pip install litellm
```
```python
from litellm import completion
import os

os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-key"

# OpenAI
response = completion(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Anthropic
response = completion(
    model="anthropic/claude-sonnet-4-20250514",
    messages=[{"role": "user", "content": "Hello!"}],
)
```
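Streaming works the same way across providers; a minimal sketch with `stream=True` (same keys as above, chunks arrive as OpenAI-style deltas):

```python
from litellm import completion

response = completion(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku."}],
    stream=True,  # yields incremental chunks instead of one final response
)
for chunk in response:
    # each chunk carries an OpenAI-style delta; content can be None (e.g. on the final chunk)
    print(chunk.choices[0].delta.content or "", end="")
```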
Getting Started - E2E Tutorial - set up virtual keys and make your first request
```shell
pip install 'litellm[proxy]'
litellm --model gpt-4o
```

```python
import openai

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
```
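Since the proxy speaks the OpenAI wire format, the same request also works with plain curl (assuming the `litellm --model gpt-4o` server above is listening on port 4000):

```shell
curl http://0.0.0.0:4000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer anything' \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```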
Agents - Invoke A2A Agents (Python SDK + AI Gateway)
Supported Providers - LangGraph, Vertex AI Agent Engine, Azure AI Foundry, Bedrock AgentCore, Pydantic AI
```python
from litellm.a2a_protocol import A2AClient
from a2a.types import SendMessageRequest, MessageSendParams
from uuid import uuid4

client = A2AClient(base_url="http://localhost:10001")

request = SendMessageRequest(
    id=str(uuid4()),
    params=MessageSendParams(
        message={
            "role": "user",
            "parts": [{"kind": "text", "text": "Hello!"}],
            "messageId": uuid4().hex,
        }
    ),
)

response = await client.send_message(request)
```
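`send_message` is async, so in a plain script the snippet above needs an event loop. A minimal runnable sketch wrapping the same calls (assumes an A2A agent is serving on `localhost:10001`):

```python
import asyncio
from uuid import uuid4

from litellm.a2a_protocol import A2AClient
from a2a.types import MessageSendParams, SendMessageRequest


async def main() -> None:
    client = A2AClient(base_url="http://localhost:10001")
    request = SendMessageRequest(
        id=str(uuid4()),
        params=MessageSendParams(
            message={
                "role": "user",
                "parts": [{"kind": "text", "text": "Hello!"}],
                "messageId": uuid4().hex,
            }
        ),
    )
    response = await client.send_message(request)
    print(response)


asyncio.run(main())  # drive the coroutine from synchronous code
```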
Step 1. Add your Agent to the AI Gateway
Step 2. Call Agent via A2A SDK
```python
from a2a.client import A2ACardResolver, A2AClient
from a2a.types import MessageSendParams, SendMessageRequest
from uuid import uuid4
import httpx

base_url = "http://localhost:4000/a2a/my-agent"  # LiteLLM proxy + agent name
headers = {"Authorization": "Bearer sk-1234"}    # LiteLLM Virtual Key

async with httpx.AsyncClient(headers=headers) as httpx_client:
    resolver = A2ACardResolver(httpx_client=httpx_client, base_url=base_url)
    agent_card = await resolver.get_agent_card()

    client = A2AClient(httpx_client=httpx_client, agent_card=agent_card)
    request = SendMessageRequest(
        id=str(uuid4()),
        params=MessageSendParams(
            message={
                "role": "user",
                "parts": [{"kind": "text", "text": "Hello!"}],
                "messageId": uuid4().hex,
            }
        ),
    )
    response = await client.send_message(request)
```
MCP Tools - Connect MCP servers to any LLM (Python SDK + AI Gateway)
```python
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from litellm import experimental_mcp_client
import litellm

server_params = StdioServerParameters(command="python", args=["mcp_server.py"])

async with stdio_client(server_params) as (read, write):
    async with ClientSession(read, write) as session:
        await session.initialize()

        # Load MCP tools in OpenAI format
        tools = await experimental_mcp_client.load_mcp_tools(session=session, format="openai")

        # Use with any LiteLLM model
        response = await litellm.acompletion(
            model="gpt-4o",
            messages=[{"role": "user", "content": "What's 3 + 5?"}],
            tools=tools,
        )
```
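If the model responds with a tool call, it can be executed against the same MCP session. A sketch continuing inside the `ClientSession` block above (`call_openai_tool` is the experimental helper for this; treat the exact signature as an assumption and check the MCP docs):

```python
# Continuing inside `async with ClientSession(...) as session:` from above.
# Execute the first tool call the model returned, then inspect the result.
openai_tool_call = response.choices[0].message.tool_calls[0]
call_result = await experimental_mcp_client.call_openai_tool(
    session=session,
    openai_tool=openai_tool_call,
)
print(call_result.content)
```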
Step 1. Add your MCP Server to the AI Gateway
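Step 1 is a `config.yaml` entry on the gateway. A minimal sketch, assuming the `mcp_servers` block from the MCP docs; the server name matches the `github` alias used in step 2, and the URL is a placeholder:

```yaml
# config.yaml - sketch only; key names per the MCP docs, URL is a placeholder
mcp_servers:
  github:
    url: "https://your-github-mcp-server.example.com/mcp"
```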
Step 2. Call MCP tools via `/chat/completions`
```shell
curl -X POST 'http://0.0.0.0:4000/v1/chat/completions' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Summarize the latest open PR"}],
    "tools": [{
      "type": "mcp",
      "server_url": "litellm_proxy/mcp/github",
      "server_label": "github_mcp",
      "require_approval": "never"
    }]
  }'
```
{"mcpServers": {"LiteLLM": {"url":"http://localhost:4000/mcp","headers": {"x-litellm-api-key":"Bearer sk-1234" } } }}You can use LiteLLM through either the Proxy Server or Python SDK. Both gives you a unified interface to access multiple LLMs (100+ LLMs). Choose the option that best fits your needs:
| | LiteLLM AI Gateway | LiteLLM Python SDK |
|---|---|---|
| Use Case | Central service (LLM Gateway) to access multiple LLMs | Use LiteLLM directly in your Python code |
| Who Uses It? | Gen AI Enablement / ML Platform Teams | Developers building LLM projects |
| Key Features | Centralized API gateway with authentication and authorization; multi-tenant cost tracking and spend management per project/user; per-project customization (logging, guardrails, caching); virtual keys for secure access control; admin dashboard UI for monitoring and management | Direct Python library integration in your codebase; Router with retry/fallback logic across multiple deployments (e.g. Azure/OpenAI), see the sketch below; application-level load balancing and cost tracking; exception handling with OpenAI-compatible errors; observability callbacks (Lunary, MLflow, Langfuse, etc.) |
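For the Router mentioned in the Key Features row, a minimal sketch: two deployments (placeholder credentials) share the alias `gpt-4o`, and the Router retries and load balances across them:

```python
from litellm import Router

router = Router(
    model_list=[
        {
            # deployment 1 - Azure (placeholder credentials)
            "model_name": "gpt-4o",
            "litellm_params": {
                "model": "azure/gpt-4o",
                "api_key": "your-azure-key",
                "api_base": "https://your-endpoint.openai.azure.com",
            },
        },
        {
            # deployment 2 - OpenAI (placeholder credentials)
            "model_name": "gpt-4o",
            "litellm_params": {"model": "openai/gpt-4o", "api_key": "your-openai-key"},
        },
    ],
    num_retries=2,  # retries before an error is surfaced to the caller
)

response = router.completion(
    model="gpt-4o",  # the shared alias; the Router picks a deployment
    messages=[{"role": "user", "content": "Hello!"}],
)
```

The observability callbacks from the same row are a one-liner in the SDK, e.g. `litellm.success_callback = ["langfuse"]`.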
LiteLLM Performance: 8ms P95 latency at 1k RPS (see benchmarks here)
Jump to LiteLLM Proxy (LLM Gateway) Docs
Jump to Supported LLM Providers
Stable Release: Use Docker images with the `-stable` tag. These have undergone 12-hour load tests before being published. More information about the release cycle here.
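To try a stable image (the `ghcr.io/berriai/litellm` registry path is the published one; pick a current `-stable` tag from the registry):

```shell
# Sketch: run the proxy from a stable image; tag and key are placeholders
docker pull ghcr.io/berriai/litellm:main-stable
docker run -p 4000:4000 -e OPENAI_API_KEY="your-openai-key" \
  ghcr.io/berriai/litellm:main-stable --model gpt-4o
```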
Support for more providers. Missing a provider or LLM platform? Raise a feature request.
Supported Providers (Website Supported Models | Docs)
1. Set up a `.env` file in the repo root (a sample sketch follows this list)
2. Run dependent services:
   ```shell
   docker-compose up db prometheus
   ```
3. (In root) create a virtual environment:
   ```shell
   python -m venv .venv
   ```
4. Activate the virtual environment:
   ```shell
   source .venv/bin/activate
   ```
5. Install dependencies:
   ```shell
   pip install -e ".[all]"
   pip install prisma
   prisma generate
   ```
6. Start the proxy backend:
   ```shell
   python litellm/proxy/proxy_cli.py
   ```
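A sample `.env` sketch for step 1 (values are placeholders; `DATABASE_URL` should match the Postgres instance started by `docker-compose`):

```shell
# .env - placeholders only; align with your local docker-compose settings
DATABASE_URL="postgresql://llmproxy:dbpassword9090@localhost:5432/litellm"
LITELLM_MASTER_KEY="sk-1234"
```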
1. Navigate to `ui/litellm-dashboard`
2. Install dependencies:
   ```shell
   npm install
   ```
3. Run `npm run dev` to start the dashboard
For companies that need better security, user management and professional support
This covers:
- ✅ Features under the LiteLLM Commercial License:
- ✅ Feature Prioritization
- ✅ Custom Integrations
- ✅ Professional Support - Dedicated Discord + Slack
- ✅ Custom SLAs
- ✅ Secure access with Single Sign-On
We welcome contributions to LiteLLM! Whether you're fixing bugs, adding features, or improving documentation, we appreciate your help.
This requires Poetry to be installed.
```shell
git clone https://github.com/BerriAI/litellm.git
cd litellm
make install-dev    # Install development dependencies
make format         # Format your code
make lint           # Run all linting checks
make test-unit      # Run unit tests
make format-check   # Check formatting only
```
For detailed contributing guidelines, see CONTRIBUTING.md.
LiteLLM follows the Google Python Style Guide.
Our automated checks include:
- Black for code formatting
- Ruff for linting and code quality
- MyPy for type checking
- Circular import detection
- Import safety checks
All these checks must pass before your PR can be merged.
- Schedule Demo 👋
- Community Discord 💭
- Community Slack 💭
- Our numbers 📞 +1 (770) 8783-106 / +1 (412) 618-6238
- Our emails ✉️ ishaan@berri.ai / krrish@berri.ai
- Need for simplicity: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI and Cohere.