BerriAI/litellm

Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, load balancing and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, VLLM, NVIDIA NIM]

Call 100+ LLMs in OpenAI format. [Bedrock, Azure, OpenAI, VertexAI, Anthropic, Groq, etc.]

Deploy to Render | Deploy on Railway

PyPI Version | Y Combinator W23 | WhatsApp | Discord | Slack


Use LiteLLM for

LLMs - Call 100+ LLMs (Python SDK + AI Gateway)

All Supported Endpoints - /chat/completions, /responses, /embeddings, /images, /audio, /batches, /rerank, /a2a, /messages and more.

Python SDK

pip install litellm
from litellm import completion
import os

os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-key"

# OpenAI
response = completion(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)

# Anthropic
response = completion(
    model="anthropic/claude-sonnet-4-20250514",
    messages=[{"role": "user", "content": "Hello!"}]
)
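
The same provider-prefixed interface extends to the other endpoints listed above. As a minimal sketch for /embeddings (the model name is only an illustrative placeholder; any embedding model your provider serves works the same way):

from litellm import embedding
import os

os.environ["OPENAI_API_KEY"] = "your-openai-key"

# /embeddings follows the same "provider/model" convention as completion()
response = embedding(
    model="openai/text-embedding-3-small",  # illustrative placeholder model
    input=["Hello, world!"],
)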

AI Gateway (Proxy Server)

Getting Started - E2E Tutorial - Set up virtual keys and make your first request

pip install 'litellm[proxy]'
litellm --model gpt-4o
import openai

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)

Docs: LLM Providers

Agents - Invoke A2A Agents (Python SDK + AI Gateway)

Supported Providers - LangGraph, Vertex AI Agent Engine, Azure AI Foundry, Bedrock AgentCore, Pydantic AI

Python SDK - A2A Protocol

from litellm.a2a_protocol import A2AClient
from a2a.types import SendMessageRequest, MessageSendParams
from uuid import uuid4

client = A2AClient(base_url="http://localhost:10001")

request = SendMessageRequest(
    id=str(uuid4()),
    params=MessageSendParams(
        message={
            "role": "user",
            "parts": [{"kind": "text", "text": "Hello!"}],
            "messageId": uuid4().hex,
        }
    )
)

response = await client.send_message(request)

AI Gateway (Proxy Server)

Step 1. Add your Agent to the AI Gateway

Step 2. Call Agent via A2A SDK

from a2a.client import A2ACardResolver, A2AClient
from a2a.types import MessageSendParams, SendMessageRequest
from uuid import uuid4
import httpx

base_url = "http://localhost:4000/a2a/my-agent"  # LiteLLM proxy + agent name
headers = {"Authorization": "Bearer sk-1234"}    # LiteLLM Virtual Key

async with httpx.AsyncClient(headers=headers) as httpx_client:
    resolver = A2ACardResolver(httpx_client=httpx_client, base_url=base_url)
    agent_card = await resolver.get_agent_card()

    client = A2AClient(httpx_client=httpx_client, agent_card=agent_card)
    request = SendMessageRequest(
        id=str(uuid4()),
        params=MessageSendParams(
            message={
                "role": "user",
                "parts": [{"kind": "text", "text": "Hello!"}],
                "messageId": uuid4().hex,
            }
        )
    )
    response = await client.send_message(request)

Docs: A2A Agent Gateway

MCP Tools - Connect MCP servers to any LLM (Python SDK + AI Gateway)

Python SDK - MCP Bridge

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from litellm import experimental_mcp_client
import litellm

server_params = StdioServerParameters(command="python", args=["mcp_server.py"])

async with stdio_client(server_params) as (read, write):
    async with ClientSession(read, write) as session:
        await session.initialize()

        # Load MCP tools in OpenAI format
        tools = await experimental_mcp_client.load_mcp_tools(session=session, format="openai")

        # Use with any LiteLLM model
        response = await litellm.acompletion(
            model="gpt-4o",
            messages=[{"role": "user", "content": "What's 3 + 5?"}],
            tools=tools
        )

AI Gateway - MCP Gateway

Step 1. Add your MCP Server to the AI Gateway

Step 2. Call MCP tools via /chat/completions

curl -X POST 'http://0.0.0.0:4000/v1/chat/completions' \
  -H 'Authorization: Bearer sk-1234' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Summarize the latest open PR"}],
    "tools": [{
      "type": "mcp",
      "server_url": "litellm_proxy/mcp/github",
      "server_label": "github_mcp",
      "require_approval": "never"
    }]
  }'

Use with Cursor IDE

{"mcpServers": {"LiteLLM": {"url":"http://localhost:4000/mcp","headers": {"x-litellm-api-key":"Bearer sk-1234"      }    }  }}

Docs: MCP Gateway


How to use LiteLLM

You can use LiteLLM through either the Proxy Server or the Python SDK. Both give you a unified interface to 100+ LLMs. Choose the option that best fits your needs:

LiteLLM AI Gateway vs. LiteLLM Python SDK

Use Case
  • AI Gateway: Central service (LLM Gateway) to access multiple LLMs
  • Python SDK: Use LiteLLM directly in your Python code

Who Uses It?
  • AI Gateway: Gen AI Enablement / ML Platform Teams
  • Python SDK: Developers building LLM projects

Key Features
  • AI Gateway: Centralized API gateway with authentication and authorization; multi-tenant cost tracking and spend management per project/user; per-project customization (logging, guardrails, caching); virtual keys for secure access control; admin dashboard UI for monitoring and management
  • Python SDK: Direct Python library integration in your codebase; Router with retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - see the sketch below this comparison; application-level load balancing and cost tracking; exception handling with OpenAI-compatible errors; observability callbacks (Lunary, MLflow, Langfuse, etc.)
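
As a sketch of the Router mentioned above (every deployment name, key, and api_base below is a placeholder, not real configuration):

from litellm import Router

# Two deployments behind one logical model name; the Router retries and falls back between them.
router = Router(
    model_list=[
        {
            "model_name": "gpt-4o",  # the name your application requests
            "litellm_params": {
                "model": "azure/my-gpt-4o-deployment",           # placeholder Azure deployment
                "api_key": "azure-api-key",                      # placeholder
                "api_base": "https://example.openai.azure.com",  # placeholder
            },
        },
        {
            "model_name": "gpt-4o",
            "litellm_params": {"model": "openai/gpt-4o", "api_key": "openai-api-key"},
        },
    ],
    num_retries=2,
)

response = router.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)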

LiteLLM Performance: 8ms P95 latency at 1k RPS (see benchmarks here)

Jump to LiteLLM Proxy (LLM Gateway) Docs
Jump to Supported LLM Providers

Stable Release: Use docker images with the -stable tag. These have undergone 12-hour load tests before being published. More information about the release cycle here.

Support for more providers: missing a provider or LLM platform? Raise a feature request.

OSS Adopters

  • Stripe
  • Google ADK
  • Greptile
  • OpenHands
  • Netflix
  • OpenAI Agents SDK

Supported Providers (Website Supported Models | Docs)

Each provider below supports some or all of the following endpoints: /chat/completions, /messages, /responses, /embeddings, /image/generations, /audio/transcriptions, /audio/speech, /moderations, /batches, /rerank (per-provider endpoint support is listed in the Docs linked above; a usage sketch follows this list).
Abliteration (abliteration)
AI/ML API (aiml)
AI21 (ai21)
AI21 Chat (ai21_chat)
Aleph Alpha
Amazon Nova
Anthropic (anthropic)
Anthropic Text (anthropic_text)
Anyscale
AssemblyAI (assemblyai)
Auto Router (auto_router)
AWS - Bedrock (bedrock)
AWS - Sagemaker (sagemaker)
Azure (azure)
Azure AI (azure_ai)
Azure Text (azure_text)
Baseten (baseten)
Bytez (bytez)
Cerebras (cerebras)
Clarifai (clarifai)
Cloudflare AI Workers (cloudflare)
Codestral (codestral)
Cohere (cohere)
Cohere Chat (cohere_chat)
CometAPI (cometapi)
CompactifAI (compactifai)
Custom (custom)
Custom OpenAI (custom_openai)
Dashscope (dashscope)
Databricks (databricks)
DataRobot (datarobot)
Deepgram (deepgram)
DeepInfra (deepinfra)
Deepseek (deepseek)
ElevenLabs (elevenlabs)
Empower (empower)
Fal AI (fal_ai)
Featherless AI (featherless_ai)
Fireworks AI (fireworks_ai)
FriendliAI (friendliai)
Galadriel (galadriel)
GitHub Copilot (github_copilot)
GitHub Models (github)
Google - PaLM
Google - Vertex AI (vertex_ai)
Google AI Studio - Gemini (gemini)
GradientAI (gradient_ai)
Groq AI (groq)
Heroku (heroku)
Hosted VLLM (hosted_vllm)
Huggingface (huggingface)
Hyperbolic (hyperbolic)
IBM - Watsonx.ai (watsonx)
Infinity (infinity)
Jina AI (jina_ai)
Lambda AI (lambda_ai)
Lemonade (lemonade)
LiteLLM Proxy (litellm_proxy)
Llamafile (llamafile)
LM Studio (lm_studio)
Maritalk (maritalk)
Meta - Llama API (meta_llama)
Mistral AI API (mistral)
Moonshot (moonshot)
Morph (morph)
Nebius AI Studio (nebius)
NLP Cloud (nlp_cloud)
Novita AI (novita)
Nscale (nscale)
Nvidia NIM (nvidia_nim)
OCI (oci)
Ollama (ollama)
Ollama Chat (ollama_chat)
Oobabooga (oobabooga)
OpenAI (openai)
OpenAI-like (openai_like)
OpenRouter (openrouter)
OVHCloud AI Endpoints (ovhcloud)
Perplexity AI (perplexity)
Petals (petals)
Predibase (predibase)
Recraft (recraft)
Replicate (replicate)
Sagemaker Chat (sagemaker_chat)
Sambanova (sambanova)
Snowflake (snowflake)
Text Completion Codestral (text-completion-codestral)
Text Completion OpenAI (text-completion-openai)
Together AI (together_ai)
Topaz (topaz)
Triton (triton)
V0 (v0)
Vercel AI Gateway (vercel_ai_gateway)
VLLM (vllm)
Volcengine (volcengine)
Voyage AI (voyage)
WandB Inference (wandb)
Watsonx Text (watsonx_text)
xAI (xai)
Xinference (xinference)
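
The identifier in parentheses is the provider key, used as the prefix in the "provider/model" naming shown in the SDK examples above. A minimal sketch (the model name is a placeholder; substitute one your chosen provider actually serves):

from litellm import completion

# provider key + "/" + the provider's own model name (placeholder below)
response = completion(
    model="groq/your-model-name",
    messages=[{"role": "user", "content": "Hello!"}],
)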

Read the Docs

Run in Developer mode

Services

  1. Set up a .env file in the repo root
  2. Run dependent services: docker-compose up db prometheus

Backend

  1. (In root) Create a virtual environment: python -m venv .venv
  2. Activate the virtual environment: source .venv/bin/activate
  3. Install dependencies: pip install -e ".[all]"
  4. pip install prisma
  5. prisma generate
  6. Start the proxy backend: python litellm/proxy/proxy_cli.py (a sanity-check sketch follows this list)
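
Once the backend is running, a quick sanity check might look like the sketch below. It assumes the proxy listens on the default local port 4000 and exposes a liveliness route at /health/liveliness; confirm both against the proxy docs for your version.

import requests

# Assumption: default local address/port and an unauthenticated liveliness route.
resp = requests.get("http://0.0.0.0:4000/health/liveliness")
print(resp.status_code, resp.text)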

Frontend

  1. Navigate to ui/litellm-dashboard
  2. Install dependencies: npm install
  3. Run npm run dev to start the dashboard

Enterprise

For companies that need better security, user management and professional support

Talk to founders

This covers:

  • Features under the LiteLLM Commercial License:
  • Feature Prioritization
  • Custom Integrations
  • Professional Support - Dedicated Discord + Slack
  • Custom SLAs
  • Secure access with Single Sign-On

Contributing

We welcome contributions to LiteLLM! Whether you're fixing bugs, adding features, or improving documentation, we appreciate your help.

Quick Start for Contributors

This requires poetry to be installed.

git clone https://github.com/BerriAI/litellm.git
cd litellm
make install-dev    # Install development dependencies
make format         # Format your code
make lint           # Run all linting checks
make test-unit      # Run unit tests
make format-check   # Check formatting only

For detailed contributing guidelines, see CONTRIBUTING.md.

Code Quality / Linting

LiteLLM follows the Google Python Style Guide.

Our automated checks include:

  • Black for code formatting
  • Ruff for linting and code quality
  • MyPy for type checking
  • Circular import detection
  • Import safety checks

All these checks must pass before your PR can be merged.

Support / talk with founders

Why did we build this

  • Need for simplicity: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI and Cohere.

Contributors
