BerriAI/litellm

Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, load balancing and logging. [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, VLLM, NVIDIA NIM]


Deploy to Render · Deploy on Railway

Call all LLM APIs using the OpenAI format [Bedrock, Huggingface, VertexAI, TogetherAI, Azure, OpenAI, Groq etc.]

PyPI Version · Y Combinator W23 · Whatsapp · Discord · Slack

LiteLLM manages:

  • Translating inputs to the provider's `completion`, `embedding`, and `image_generation` endpoints
  • Consistent output: text responses are always available at `['choices'][0]['message']['content']`
  • Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - Router
  • Setting budgets & rate limits per project, API key, and model - LiteLLM Proxy Server (LLM Gateway)
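
The consistent-output guarantee means the same access path works no matter which provider served the call. A minimal sketch, using a sample response dict shaped like litellm's OpenAI-format output (no API call is made here):

```python
# A response in the OpenAI format (sample data, shaped like the litellm
# responses shown later in this README; no network call is made)
response = {
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {"role": "assistant", "content": "Hello! How can I help?"},
        }
    ],
    "usage": {"prompt_tokens": 13, "completion_tokens": 6, "total_tokens": 19},
}

# The text is always at the same path, regardless of provider:
content = response["choices"][0]["message"]["content"]
print(content)  # Hello! How can I help?
```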

LiteLLM Performance: 8ms P95 latency at 1k RPS (see benchmarks here)

Jump to LiteLLM Proxy (LLM Gateway) Docs
Jump to Supported LLM Providers

🚨 Stable Release: Use Docker images with the `-stable` tag. These have undergone 12-hour load tests before being published. More information about the release cycle here

Support for more providers: missing a provider or LLM platform? Raise a feature request.

Usage (Docs)

Open In Colab
```shell
pip install litellm
```

```python
from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="openai/gpt-4o", messages=messages)

# anthropic call
response = completion(model="anthropic/claude-sonnet-4-20250514", messages=messages)
print(response)
```

Response (OpenAI Format)

```json
{
    "id": "chatcmpl-1214900a-6cdd-4148-b663-b5e2f642b4de",
    "created": 1751494488,
    "model": "claude-sonnet-4-20250514",
    "object": "chat.completion",
    "system_fingerprint": null,
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {
                "content": "Hello! I'm doing well, thank you for asking. I'm here and ready to help with whatever you'd like to discuss or work on. How are you doing today?",
                "role": "assistant",
                "tool_calls": null,
                "function_call": null
            }
        }
    ],
    "usage": {
        "completion_tokens": 39,
        "prompt_tokens": 13,
        "total_tokens": 52,
        "completion_tokens_details": null,
        "prompt_tokens_details": {
            "audio_tokens": null,
            "cached_tokens": 0
        },
        "cache_creation_input_tokens": 0,
        "cache_read_input_tokens": 0
    }
}
```

Note: LiteLLM also supports the Responses API (`litellm.responses()`)

Call any model supported by a provider with `model=<provider_name>/<model_name>`. There might be provider-specific details here, so refer to provider docs for more information.
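
The `<provider_name>/<model_name>` convention is just a prefixed string. A hypothetical helper (not part of litellm, which does this routing internally) illustrates the split:

```python
def split_model(model: str) -> tuple[str, str]:
    """Split '<provider_name>/<model_name>' into its two parts.
    Hypothetical illustration; litellm performs this routing internally."""
    provider, _, name = model.partition("/")
    return provider, name

print(split_model("anthropic/claude-sonnet-4-20250514"))
# ('anthropic', 'claude-sonnet-4-20250514')
```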

Async (Docs)

```python
from litellm import acompletion
import asyncio

async def test_get_response():
    user_message = "Hello, how are you?"
    messages = [{"content": user_message, "role": "user"}]
    response = await acompletion(model="openai/gpt-4o", messages=messages)
    return response

response = asyncio.run(test_get_response())
print(response)
```
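
Because every provider shares the `acompletion` signature, fanning the same request out to several models concurrently is a single `asyncio.gather`. A sketch using a stand-in stub for `acompletion` so it runs offline (in real use, `from litellm import acompletion` and set your API keys instead):

```python
import asyncio

# Stand-in stub for litellm.acompletion so this sketch runs offline;
# replace with `from litellm import acompletion` for real calls.
async def acompletion(model: str, messages: list) -> dict:
    await asyncio.sleep(0.01)  # simulate network latency
    return {"model": model, "choices": [{"message": {"content": "ok"}}]}

async def fan_out():
    models = ["openai/gpt-4o", "anthropic/claude-sonnet-4-20250514"]
    messages = [{"role": "user", "content": "Hello, how are you?"}]
    # One gather sends the same request to every model concurrently:
    return await asyncio.gather(*(acompletion(m, messages) for m in models))

results = asyncio.run(fan_out())
print([r["model"] for r in results])
# ['openai/gpt-4o', 'anthropic/claude-sonnet-4-20250514']
```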

Streaming (Docs)

LiteLLM supports streaming the model response back; pass `stream=True` to get a streaming iterator in the response. Streaming is supported for all models (Bedrock, Huggingface, TogetherAI, Azure, OpenAI, etc.)

```python
from litellm import completion

messages = [{"content": "Hello, how are you?", "role": "user"}]

# gpt-4o
response = completion(model="openai/gpt-4o", messages=messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "")

# claude sonnet 4
response = completion('anthropic/claude-sonnet-4-20250514', messages, stream=True)
for part in response:
    print(part)
```

Response chunk (OpenAI Format)

```json
{
    "id": "chatcmpl-fe575c37-5004-4926-ae5e-bfbc31f356ca",
    "created": 1751494808,
    "model": "claude-sonnet-4-20250514",
    "object": "chat.completion.chunk",
    "system_fingerprint": null,
    "choices": [
        {
            "finish_reason": null,
            "index": 0,
            "delta": {
                "provider_specific_fields": null,
                "content": "Hello",
                "role": "assistant",
                "function_call": null,
                "tool_calls": null,
                "audio": null
            },
            "logprobs": null
        }
    ],
    "provider_specific_fields": null,
    "stream_options": null,
    "citations": null
}
```
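
A common pattern is to accumulate the streamed deltas into the full reply. A sketch over sample chunks in the chunk format shown here (with litellm you would iterate the real `response` instead):

```python
# Sample chunks shaped like the OpenAI chunk format (offline sample data)
chunks = [
    {"choices": [{"delta": {"content": "Hello"}}]},
    {"choices": [{"delta": {"content": ", world"}}]},
    {"choices": [{"delta": {"content": None}}]},  # final chunks may carry no text
]

# `or ""` guards against None deltas, as in the streaming loop above
full_text = "".join(c["choices"][0]["delta"]["content"] or "" for c in chunks)
print(full_text)  # Hello, world
```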

Logging & Observability (Docs)

LiteLLM exposes pre-defined callbacks to send data to Lunary, MLflow, Langfuse, DynamoDB, S3 buckets, Helicone, Promptlayer, Traceloop, Athina, and Slack

```python
import os
import litellm
from litellm import completion

## set env variables for logging tools (when using MLflow, no API key set up is required)
os.environ["LUNARY_PUBLIC_KEY"] = "your-lunary-public-key"
os.environ["HELICONE_API_KEY"] = "your-helicone-auth-key"
os.environ["LANGFUSE_PUBLIC_KEY"] = ""
os.environ["LANGFUSE_SECRET_KEY"] = ""
os.environ["ATHINA_API_KEY"] = "your-athina-api-key"
os.environ["OPENAI_API_KEY"] = "your-openai-key"

# set callbacks
litellm.success_callback = ["lunary", "mlflow", "langfuse", "athina", "helicone"]  # log input/output to lunary, langfuse, supabase, athina, helicone etc

# openai call
response = completion(model="openai/gpt-4o", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
```
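
Beyond the pre-defined integrations, litellm's docs also describe passing plain Python functions as callbacks. A sketch of a usage/latency logger (the four-argument signature follows litellm's custom-callback docs; verify it against your installed version), exercised directly with sample data so it runs offline:

```python
import datetime

def track_usage(kwargs, completion_response, start_time, end_time):
    """Log model, token usage and latency for a completed call.
    Signature per litellm's custom-callback docs (verify for your version)."""
    tokens = completion_response["usage"]["total_tokens"]
    latency = (end_time - start_time).total_seconds()
    line = f"model={kwargs['model']} tokens={tokens} latency={latency:.2f}s"
    print(line)
    return line

# In real use you would register it: litellm.success_callback = [track_usage]
# Exercised directly here with sample data:
start = datetime.datetime(2024, 1, 1, 0, 0, 0)
end = start + datetime.timedelta(seconds=1, milliseconds=200)
logged = track_usage(
    {"model": "openai/gpt-4o"},
    {"usage": {"total_tokens": 52}},
    start,
    end,
)
# model=openai/gpt-4o tokens=52 latency=1.20s
```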

LiteLLM Proxy Server (LLM Gateway) - (Docs)

Track spend + Load Balance across multiple projects

Hosted Proxy

The proxy provides:

  1. Hooks for auth
  2. Hooks for logging
  3. Cost tracking
  4. Rate Limiting

📖 Proxy Endpoints - Swagger Docs

Quick Start Proxy - CLI

```shell
pip install 'litellm[proxy]'
```

Step 1: Start litellm proxy

```shell
$ litellm --model huggingface/bigcode/starcoder

# INFO: Proxy running on http://0.0.0.0:4000
```

Step 2: Make ChatCompletions Request to Proxy

```python
import openai  # openai v1.0.0+

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")  # set proxy to base_url

# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ],
)
print(response)
```

Proxy Key Management (Docs)

Connect the proxy with a Postgres DB to create proxy keys

```shell
# Get the code
git clone https://github.com/BerriAI/litellm

# Go to folder
cd litellm

# Add the master key - you can change this after setup
echo 'LITELLM_MASTER_KEY="sk-1234"' > .env

# Add the litellm salt key - you cannot change this after adding a model
# It is used to encrypt / decrypt your LLM API Key credentials
# We recommend - https://1password.com/password-generator/
# password generator to get a random hash for litellm salt key
echo 'LITELLM_SALT_KEY="sk-1234"' >> .env

source .env

# Start
docker compose up
```

UI is available at `/ui` on your proxy server

Set budgets and rate limits across multiple projects with `POST /key/generate`

Request

```shell
curl 'http://0.0.0.0:4000/key/generate' \
  --header 'Authorization: Bearer sk-1234' \
  --header 'Content-Type: application/json' \
  --data-raw '{"models": ["gpt-3.5-turbo", "gpt-4", "claude-2"], "duration": "20m", "metadata": {"user": "ishaan@berri.ai", "team": "core-infra"}}'
```

Expected Response

```
{
    "key": "sk-kdEXbIqZRwEeEiHwdg7sFA",  # Bearer token
    "expires": "2023-11-19T01:38:25.838000+00:00"  # datetime object
}
```
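
The `expires` field is an ISO-8601 timestamp, so a client can parse it with the standard library to decide when to rotate keys. A sketch over sample data in the documented shape:

```python
import datetime

# Sample /key/generate response in the documented shape
resp = {
    "key": "sk-kdEXbIqZRwEeEiHwdg7sFA",
    "expires": "2023-11-19T01:38:25.838000+00:00",
}

expires = datetime.datetime.fromisoformat(resp["expires"])
now = datetime.datetime.now(datetime.timezone.utc)
seconds_left = (expires - now).total_seconds()  # negative once the key has expired
print(expires.isoformat(), seconds_left < 0)
```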

Supported Providers (Website Supported Models | Docs)

Each provider below supports a subset of the following endpoints (see the docs for the per-provider matrix): `/chat/completions`, `/messages`, `/responses`, `/embeddings`, `/image/generations`, `/audio/transcriptions`, `/audio/speech`, `/moderations`, `/batches`, `/rerank`.
AI/ML API (aiml)
AI21 (ai21)
AI21 Chat (ai21_chat)
Aleph Alpha
Anthropic (anthropic)
Anthropic Text (anthropic_text)
Anyscale
AssemblyAI (assemblyai)
Auto Router (auto_router)
AWS - Bedrock (bedrock)
AWS - Sagemaker (sagemaker)
Azure (azure)
Azure AI (azure_ai)
Azure Text (azure_text)
Baseten (baseten)
Bytez (bytez)
Cerebras (cerebras)
Clarifai (clarifai)
Cloudflare AI Workers (cloudflare)
Codestral (codestral)
Cohere (cohere)
Cohere Chat (cohere_chat)
CometAPI (cometapi)
CompactifAI (compactifai)
Custom (custom)
Custom OpenAI (custom_openai)
Dashscope (dashscope)
Databricks (databricks)
DataRobot (datarobot)
Deepgram (deepgram)
DeepInfra (deepinfra)
Deepseek (deepseek)
ElevenLabs (elevenlabs)
Empower (empower)
Fal AI (fal_ai)
Featherless AI (featherless_ai)
Fireworks AI (fireworks_ai)
FriendliAI (friendliai)
Galadriel (galadriel)
GitHub Copilot (github_copilot)
GitHub Models (github)
Google - PaLM
Google - Vertex AI (vertex_ai)
Google AI Studio - Gemini (gemini)
GradientAI (gradient_ai)
Groq AI (groq)
Heroku (heroku)
Hosted VLLM (hosted_vllm)
Huggingface (huggingface)
Hyperbolic (hyperbolic)
IBM - Watsonx.ai (watsonx)
Infinity (infinity)
Jina AI (jina_ai)
Lambda AI (lambda_ai)
Lemonade (lemonade)
LiteLLM Proxy (litellm_proxy)
Llamafile (llamafile)
LM Studio (lm_studio)
Maritalk (maritalk)
Meta - Llama API (meta_llama)
Mistral AI API (mistral)
Moonshot (moonshot)
Morph (morph)
Nebius AI Studio (nebius)
NLP Cloud (nlp_cloud)
Novita AI (novita)
Nscale (nscale)
Nvidia NIM (nvidia_nim)
OCI (oci)
Ollama (ollama)
Ollama Chat (ollama_chat)
Oobabooga (oobabooga)
OpenAI (openai)
OpenAI-like (openai_like)
OpenRouter (openrouter)
OVHCloud AI Endpoints (ovhcloud)
Perplexity AI (perplexity)
Petals (petals)
Predibase (predibase)
Recraft (recraft)
Replicate (replicate)
Sagemaker Chat (sagemaker_chat)
Sambanova (sambanova)
Snowflake (snowflake)
Text Completion Codestral (text-completion-codestral)
Text Completion OpenAI (text-completion-openai)
Together AI (together_ai)
Topaz (topaz)
Triton (triton)
V0 (v0)
Vercel AI Gateway (vercel_ai_gateway)
VLLM (vllm)
Volcengine (volcengine)
Voyage AI (voyage)
WandB Inference (wandb)
Watsonx Text (watsonx_text)
xAI (xai)
Xinference (xinference)

Read the Docs

Run in Developer mode

Services

  1. Setup .env file in root
  2. Run dependent services: `docker-compose up db prometheus`

Backend

  1. (In root) create virtual environment: `python -m venv .venv`
  2. Activate virtual environment: `source .venv/bin/activate`
  3. Install dependencies: `pip install -e ".[all]"`
  4. Start proxy backend: `python litellm/proxy_cli.py`

Frontend

  1. Navigate to `ui/litellm-dashboard`
  2. Install dependencies: `npm install`
  3. Run `npm run dev` to start the dashboard

Enterprise

For companies that need better security, user management, and professional support.

Talk to founders

This covers:

  • Features under the LiteLLM Commercial License
  • Feature Prioritization
  • Custom Integrations
  • Professional Support - Dedicated Discord + Slack
  • Custom SLAs
  • Secure access with Single Sign-On

Contributing

We welcome contributions to LiteLLM! Whether you're fixing bugs, adding features, or improving documentation, we appreciate your help.

Quick Start for Contributors

This requires poetry to be installed.

```shell
git clone https://github.com/BerriAI/litellm.git
cd litellm
make install-dev    # Install development dependencies
make format         # Format your code
make lint           # Run all linting checks
make test-unit      # Run unit tests
make format-check   # Check formatting only
```

For detailed contributing guidelines, see CONTRIBUTING.md.

Code Quality / Linting

LiteLLM follows the Google Python Style Guide.

Our automated checks include:

  • Black for code formatting
  • Ruff for linting and code quality
  • MyPy for type checking
  • Circular import detection
  • Import safety checks

All these checks must pass before your PR can be merged.

Support / talk with founders

Why did we build this?

  • Need for simplicity: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI and Cohere.

Contributors
