algorithmicsuperintelligence/optillm

Optimizing inference proxy for LLMs

OptiLLM Logo

🚀 2-10x accuracy improvements on reasoning tasks with zero training


🤗 HuggingFace Space · 📓 Colab Demo · 💬 Discussions


OptiLLM is an OpenAI API-compatible optimizing inference proxy that implements 20+ state-of-the-art techniques to dramatically improve LLM accuracy and performance on reasoning tasks - without requiring any model training or fine-tuning.

By spending additional compute at inference time, these techniques make it possible to beat frontier models across diverse tasks. A good example of how to combine such techniques is the CePO approach from Cerebras.

✨ Key Features

  • 🎯 Instant Improvements: 2-10x better accuracy on math, coding, and logical reasoning
  • 🔌 Drop-in Replacement: Works with any OpenAI-compatible API endpoint
  • 🧠 20+ Optimization Techniques: From simple best-of-N to advanced MCTS and planning
  • 📦 Zero Training Required: Just proxy your existing API calls through OptiLLM
  • ⚡ Production Ready: Used in production by companies and researchers worldwide
  • 🌍 Multi-Provider: Supports OpenAI, Anthropic, Google, Cerebras, and 100+ models via LiteLLM

🚀 Quick Start

Get powerful reasoning improvements in 3 simple steps:

```bash
# 1. Install OptiLLM
pip install optillm

# 2. Start the server
export OPENAI_API_KEY="your-key-here"
optillm

# 3. Use with any OpenAI client - just change the model name!
```

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1")

# Add 'moa-' prefix for Mixture of Agents optimization
response = client.chat.completions.create(
    model="moa-gpt-4o-mini",  # This gives you GPT-4o performance from GPT-4o-mini!
    messages=[{"role": "user", "content": "Solve: If 2x + 3 = 7, what is x?"}]
)
```

Before OptiLLM: "x = 1" ❌
After OptiLLM: "Let me work through this step by step: 2x + 3 = 7, so 2x = 4, therefore x = 2" ✅

📊 Proven Results

OptiLLM delivers measurable improvements across diverse benchmarks:

| Technique | Base Model | Improvement | Benchmark |
|---|---|---|---|
| MARS | Gemini 2.5 Flash Lite | +30.0 points | AIME 2025 (43.3→73.3) |
| CePO | Llama 3.3 70B | +18.6 points | Math-L5 (51.0→69.6) |
| AutoThink | DeepSeek-R1-1.5B | +9.34 points | GPQA-Diamond (21.72→31.06) |
| LongCePO | Llama 3.3 70B | +13.6 points | InfiniteBench (58.0→71.6) |
| MOA | GPT-4o-mini | Matches GPT-4 | Arena-Hard-Auto |
| PlanSearch | GPT-4o-mini | +20% pass@5 | LiveCodeBench |

Full benchmark results below ⬇️

🏗️ Installation

Using pip

```
pip install optillm
optillm
2024-10-22 07:45:05,612 - INFO - Loaded plugin: privacy
2024-10-22 07:45:06,293 - INFO - Loaded plugin: memory
2024-10-22 07:45:06,293 - INFO - Starting server with approach: auto
```

Using docker

```
docker pull ghcr.io/codelion/optillm:latest
docker run -p 8000:8000 ghcr.io/codelion/optillm:latest
2024-10-22 07:45:05,612 - INFO - Loaded plugin: privacy
2024-10-22 07:45:06,293 - INFO - Loaded plugin: memory
2024-10-22 07:45:06,293 - INFO - Starting server with approach: auto
```

Available Docker image variants:

  • Full image (latest): Includes all dependencies for local inference and plugins
  • Proxy-only (latest-proxy): Lightweight image without local inference capabilities
  • Offline (latest-offline): Self-contained image with pre-downloaded models (spaCy) for fully offline operation
```bash
# Proxy-only (smallest)
docker pull ghcr.io/codelion/optillm:latest-proxy

# Offline (largest, includes pre-downloaded models)
docker pull ghcr.io/codelion/optillm:latest-offline
```

Install from source

Clone the repository with git and use pip install to set up the dependencies.

```bash
git clone https://github.com/codelion/optillm.git
cd optillm
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

🔒 SSL Configuration

OptiLLM supports configuring SSL certificate verification for working with self-signed certificates or corporate proxies.

Disable SSL verification (development only):

```bash
# Command line
optillm --no-ssl-verify

# Environment variable
export OPTILLM_SSL_VERIFY=false
optillm
```

Use custom CA certificate:

```bash
# Command line
optillm --ssl-cert-path /path/to/ca-bundle.crt

# Environment variable
export OPTILLM_SSL_CERT_PATH=/path/to/ca-bundle.crt
optillm
```

⚠️ Security Note: Disabling SSL verification is insecure and should only be used in development. For production environments with custom CAs, use --ssl-cert-path instead. See SSL_CONFIGURATION.md for details.

Implemented techniques

| Approach | Slug | Description |
|---|---|---|
| MARS (Multi-Agent Reasoning System) | mars | Multi-agent reasoning with diverse temperature exploration, cross-verification, and iterative improvement |
| Cerebras Planning and Optimization | cepo | Combines Best of N, Chain-of-Thought, Self-Reflection, Self-Improvement, and various prompting techniques |
| CoT with Reflection | cot_reflection | Implements chain-of-thought reasoning with <thinking>, <reflection> and <output> sections |
| PlanSearch | plansearch | Implements a search algorithm over candidate plans for solving a problem in natural language |
| ReRead | re2 | Implements rereading to improve reasoning by processing queries twice |
| Self-Consistency | self_consistency | Implements an advanced self-consistency method |
| Z3 Solver | z3 | Utilizes the Z3 theorem prover for logical reasoning |
| R* Algorithm | rstar | Implements the R* algorithm for problem-solving |
| LEAP | leap | Learns task-specific principles from few shot examples |
| Round Trip Optimization | rto | Optimizes responses through a round-trip process |
| Best of N Sampling | bon | Generates multiple responses and selects the best one |
| Mixture of Agents | moa | Combines responses from multiple critiques |
| Monte Carlo Tree Search | mcts | Uses MCTS for decision-making in chat responses |
| PV Game | pvg | Applies a prover-verifier game approach at inference time |
| Deep Confidence | N/A for proxy | Implements confidence-guided reasoning with multiple intensity levels for enhanced accuracy |
| CoT Decoding | N/A for proxy | Implements chain-of-thought decoding to elicit reasoning without explicit prompting |
| Entropy Decoding | N/A for proxy | Implements adaptive sampling based on the uncertainty of tokens during generation |
| Thinkdeeper | N/A for proxy | Implements the reasoning_effort param from OpenAI for reasoning models like DeepSeek R1 |
| AutoThink | N/A for proxy | Combines query complexity classification with steering vectors to enhance reasoning |

Implemented plugins

| Plugin | Slug | Description |
|---|---|---|
| System Prompt Learning | spl | Implements what Andrej Karpathy called the third paradigm for LLM learning; enables the model to acquire problem-solving knowledge and strategies |
| Deep Think | deepthink | Implements a Gemini-like Deep Think approach using inference-time scaling for reasoning LLMs |
| Long-Context Cerebras Planning and Optimization | longcepo | Combines planning and divide-and-conquer processing of long documents to enable infinite context |
| Majority Voting | majority_voting | Generates k candidate solutions and selects the most frequent answer through majority voting (default k=6) |
| MCP Client | mcp | Implements the Model Context Protocol (MCP) client, enabling you to use any LLM with any MCP server |
| Router | router | Uses the optillm-modernbert-large model to route requests to different approaches based on the user prompt |
| Chain-of-Code | coc | Implements a chain-of-code approach that combines CoT with code execution and LLM-based code simulation |
| Memory | memory | Implements a short-term memory layer, enabling you to use unbounded context length with any LLM |
| Privacy | privacy | Anonymizes PII data in the request and deanonymizes it back to the original values in the response |
| Read URLs | readurls | Reads all URLs found in the request, fetches the content at each URL and adds it to the context |
| Execute Code | executecode | Enables use of a code interpreter to execute Python code in requests and LLM-generated responses |
| JSON | json | Enables structured outputs using the outlines library; supports pydantic types and JSON schema |
| GenSelect | genselect | Generative Solution Selection - generates multiple candidates and selects the best based on quality criteria |
| Web Search | web_search | Performs Google searches using Chrome automation (Selenium) to gather search results and URLs |
| Deep Research | deep_research | Implements Test-Time Diffusion Deep Researcher (TTD-DR) for comprehensive research reports using iterative refinement |
| Proxy | proxy | Load balancing and failover across multiple LLM providers with health monitoring and round-robin routing |

We support all major LLM providers and models for inference. You need to set the correct environment variable and the proxy will pick the corresponding client.

| Provider | Required Environment Variables | Additional Notes |
|---|---|---|
| OptiLLM | OPTILLM_API_KEY | Uses the inbuilt local server for inference, supports logprobs and decoding techniques like cot_decoding & entropy_decoding |
| OpenAI | OPENAI_API_KEY | You can use this with any OpenAI compatible endpoint (e.g. OpenRouter) by setting the base_url |
| Cerebras | CEREBRAS_API_KEY | You can use this for fast inference with supported models, see docs for details |
| Azure OpenAI | AZURE_OPENAI_API_KEY, AZURE_API_VERSION, AZURE_API_BASE | - |
| Azure OpenAI (Managed Identity) | AZURE_API_VERSION, AZURE_API_BASE | Login required using az login, see docs for details |
| LiteLLM | depends on the model | See docs for details |

You can then run the optillm proxy as follows.

```
python optillm.py
2024-09-06 07:57:14,191 - INFO - Starting server with approach: auto
2024-09-06 07:57:14,191 - INFO - Server configuration: {'approach': 'auto', 'mcts_simulations': 2, 'mcts_exploration': 0.2, 'mcts_depth': 1, 'best_of_n': 3, 'model': 'gpt-4o-mini', 'rstar_max_depth': 3, 'rstar_num_rollouts': 5, 'rstar_c': 1.4, 'base_url': ''}
 * Serving Flask app 'optillm'
 * Debug mode: off
2024-09-06 07:57:14,212 - INFO - WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:8000
 * Running on http://192.168.10.48:8000
2024-09-06 07:57:14,212 - INFO - Press CTRL+C to quit
```

Usage

Once the proxy is running, you can use it as a drop-in replacement for an OpenAI client by setting the base_url to http://localhost:8000/v1.

```python
import os
from openai import OpenAI

OPENAI_KEY = os.environ.get("OPENAI_API_KEY")
OPENAI_BASE_URL = "http://localhost:8000/v1"
client = OpenAI(api_key=OPENAI_KEY, base_url=OPENAI_BASE_URL)

response = client.chat.completions.create(
    model="moa-gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "Write a Python program to build an RL model to recite text from any position that the user provides, using only numpy."
        }
    ],
    temperature=0.2
)
print(response)
```

The code above applies to both OpenAI and Azure OpenAI; just remember to populate the OPENAI_API_KEY env variable with the proper key. There are multiple ways to control the optimization techniques; they are applied in the following order of preference:

  • You can control the technique you use for optimization by prepending the slug to the model name as {slug}-model-name. E.g. in the above code we are using moa, or mixture of agents, as the optimization approach. In the proxy logs you will see the following, showing that moa is being used with the base model gpt-4o-mini.
```
2024-09-06 08:35:32,597 - INFO - Using approach moa, with gpt-4o-mini
2024-09-06 08:35:35,358 - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2024-09-06 08:35:39,553 - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2024-09-06 08:35:44,795 - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2024-09-06 08:35:44,797 - INFO - 127.0.0.1 - - [06/Sep/2024 08:35:44] "POST /v1/chat/completions HTTP/1.1" 200 -
```
  • Or, you can pass the slug in the optillm_approach field in the extra_body.
```python
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": ""}],
    temperature=0.2,
    extra_body={"optillm_approach": "bon|moa|mcts"}
)
```
  • Or, you can just mention the approach in either your system or user prompt, within <optillm_approach> </optillm_approach> tags.
```python
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "<optillm_approach>re2</optillm_approach> How many r's are there in strawberry?"}],
    temperature=0.2
)
```

Tip

You can also combine different techniques using the symbols & and |. With &, the techniques are processed from left to right in a pipeline, with the response from the previous stage used as the request to the next. With |, all the requests are run in parallel, generating multiple responses that are returned as a list.

Please note that the conventions described above work only when the optillm server has been started with the inference approach set to auto. Otherwise, the model attribute in the client request must be set to the model name only.
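For example, a pipelined request could prepend two plugin slugs joined with & (the same readurls&memory combination used in the FRAMES benchmark results below). This is a minimal sketch, assuming the proxy is running locally with approach auto and OPENAI_API_KEY set; the URL in the prompt is a placeholder.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1")

# '&' pipelines the stages: readurls fetches the linked pages first, then
# memory manages the enlarged context before the final call to the base model.
response = client.chat.completions.create(
    model="readurls&memory-gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Summarize the key points of https://example.com/article"
    }],
    temperature=0.2,
)
print(response.choices[0].message.content)
```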

We now support all LLM providers (by wrapping around the LiteLLM SDK). E.g. you can use the Gemini Flash model with moa by setting the API key in the environment variable os.environ['GEMINI_API_KEY'] and then calling the model moa-gemini/gemini-1.5-flash-002. In the output you will then see that LiteLLM is being used to call the base model.
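A minimal client sketch for this case, assuming the proxy is running locally with approach auto and GEMINI_API_KEY is set in the server's environment (the client-side api_key is just a placeholder):

```python
from openai import OpenAI

# GEMINI_API_KEY must be set where the optillm server runs; the proxy holds
# the provider keys, so the client api_key below is only a placeholder.
client = OpenAI(api_key="optillm", base_url="http://localhost:8000/v1")

response = client.chat.completions.create(
    model="moa-gemini/gemini-1.5-flash-002",
    messages=[{"role": "user", "content": "Explain the Monty Hall problem in two sentences."}],
    temperature=0.2,
)
print(response.choices[0].message.content)
```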

```
9:43:21 - LiteLLM:INFO: utils.py:2952 - LiteLLM completion() model= gemini-1.5-flash-002; provider = gemini
2024-09-29 19:43:21,011 - INFO - LiteLLM completion() model= gemini-1.5-flash-002; provider = gemini
2024-09-29 19:43:21,481 - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash-002:generateContent?key=[redacted] "HTTP/1.1 200 OK"
19:43:21 - LiteLLM:INFO: utils.py:988 - Wrapper: Completed Call, calling success_handler
2024-09-29 19:43:21,483 - INFO - Wrapper: Completed Call, calling success_handler
19:43:21 - LiteLLM:INFO: utils.py:2952 - LiteLLM completion() model= gemini-1.5-flash-002; provider = gemini
```

Tip

optillm is a transparent proxy and will work with any LLM API or provider that has an OpenAI API compatible chat completions endpoint; in turn, optillm also exposes the same OpenAI API compatible chat completions endpoint. This should allow you to integrate it into any existing tools or frameworks easily. If the LLM you want to use doesn't have an OpenAI API compatible endpoint (like Google or Anthropic), you can use the LiteLLM proxy server, which supports most LLMs.

The following sequence diagram illustrates how the request and responses go through optillm.

Sequence diagram showing optillm in use

In the diagram:

  • A is an existing tool (like oobabooga), framework (like patchwork), or your own code where you want to use the results from optillm. You can use it directly with any OpenAI client SDK.
  • B is the optillm service (running directly or in a docker container) that will send requests to the base_url.
  • C is any service providing an OpenAI API compatible chat completions endpoint.

Local inference server

We support loading any HuggingFace model or LoRA directly in optillm. To use the built-in inference server, set OPTILLM_API_KEY to any value (e.g. export OPTILLM_API_KEY="optillm") and then use the same value in your OpenAI client. You can pass any HuggingFace model in the model field. If it is a private model, make sure you set the HF_TOKEN environment variable with your HuggingFace key. We also support adding any number of LoRAs on top of the model by using the + separator.

E.g. the following code loads the base model meta-llama/Llama-3.2-1B-Instruct and then adds two LoRAs on top - patched-codes/Llama-3.2-1B-FixVulns and patched-codes/Llama-3.2-1B-FastApply. You can specify which LoRA to use via the active_adapter param in the extra_body field of the OpenAI SDK client. By default we will load the last specified adapter.

```python
OPENAI_BASE_URL = "http://localhost:8000/v1"
OPENAI_KEY = "optillm"

response = client.chat.completions.create(
    model="meta-llama/Llama-3.2-1B-Instruct+patched-codes/Llama-3.2-1B-FixVulns+patched-codes/Llama-3.2-1B-FastApply",
    messages=messages,
    temperature=0.2,
    logprobs=True,
    top_logprobs=3,
    extra_body={"active_adapter": "patched-codes/Llama-3.2-1B-FastApply"},
)
```

You can also use the alternate decoding techniques like cot_decoding and entropy_decoding directly with the local inference server.

```python
response = client.chat.completions.create(
    model="meta-llama/Llama-3.2-1B-Instruct",
    messages=messages,
    temperature=0.2,
    extra_body={
        "decoding": "cot_decoding",  # or "entropy_decoding"
        # CoT specific params
        "k": 10,
        "aggregate_paths": True,
        # OR Entropy specific params
        "top_k": 27,
        "min_p": 0.03,
    },
)
```

Starting the optillm proxy with an external server (e.g. llama.cpp or ollama)

  • Set the OPENAI_API_KEY env variable to a placeholder value
    • e.g. export OPENAI_API_KEY="sk-no-key"
  • Run ./llama-server -c 4096 -m path_to_model to start the server with the specified model and a context length of 4096 tokens
  • Run python3 optillm.py --base_url base_url to start the proxy
    • e.g. for llama.cpp, run python3 optillm.py --base_url http://localhost:8080/v1

Warning

The Anthropic API, llama.cpp-server, and ollama currently do not support sampling multiple responses from a model, which limits the available approaches to the following: cot_reflection, leap, plansearch, rstar, rto, self_consistency, re2, and z3. For models on HuggingFace, you can use the built-in local inference server as it supports multiple responses.
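As an illustration, a client call against a llama.cpp-backed proxy might look like the sketch below. It assumes llama-server is listening on port 8080, optillm was started with --base_url http://localhost:8080/v1, and uses re2, one of the approaches listed above; the model name is illustrative since llama.cpp serves whichever model was loaded.

```python
from openai import OpenAI

# Talk to the optillm proxy (port 8000), which forwards to llama-server on port 8080
client = OpenAI(api_key="sk-no-key", base_url="http://localhost:8000/v1")

response = client.chat.completions.create(
    model="re2-llama-3.2-1b-instruct",  # illustrative name; re2 prefix selects the approach
    messages=[{"role": "user", "content": "How many r's are there in strawberry?"}],
    temperature=0.2,
)
print(response.choices[0].message.content)
```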

MCP Plugin

The Model Context Protocol (MCP) plugin enables OptiLLM to connect with MCP servers, bringing external tools, resources, and prompts into the context of language models. This allows for powerful integrations with filesystem access, database queries, API connections, and more.

OptiLLM supports bothlocal andremote MCP servers through multiple transport methods:

  • stdio: Local servers (traditional)
  • SSE: Remote servers via Server-Sent Events
  • WebSocket: Remote servers via WebSocket connections

What is MCP?

TheModel Context Protocol (MCP) is an open protocol standard that allows LLMs to securely access tools and data sources through a standardized interface. MCP servers can provide:

  • Tools: Callable functions that perform actions (like writing files, querying databases, etc.)
  • Resources: Data sources for providing context (like file contents)
  • Prompts: Reusable prompt templates for specific use cases

Configuration

Setting up MCP Config

Note on Backwards Compatibility: Existing MCP configurations will continue to work unchanged. The transport field defaults to "stdio" when not specified, maintaining full backwards compatibility with existing setups.

  1. Create a configuration file at ~/.optillm/mcp_config.json with the following structure:

Local Server (stdio) - Traditional Method:

{"mcpServers": {"filesystem": {"transport":"stdio","command":"npx","args": ["-y","@modelcontextprotocol/server-filesystem","/path/to/allowed/directory1","/path/to/allowed/directory2"      ],"env": {},"description":"Local filesystem access"    }  },"log_level":"INFO"}

Legacy Format (still works):

{"mcpServers": {"filesystem": {"command":"npx","args": ["-y","@modelcontextprotocol/server-filesystem","/path/to/directory"],"env": {}    }  }}

Remote Server (SSE) - New Feature:

{"mcpServers": {"github": {"transport":"sse","url":"https://api.githubcopilot.com/mcp","headers": {"Authorization":"Bearer ${GITHUB_TOKEN}","Accept":"text/event-stream"      },"timeout":30.0,"sse_read_timeout":300.0,"description":"GitHub MCP server for repository access"    }  },"log_level":"INFO"}

Remote Server (WebSocket) - New Feature:

{"mcpServers": {"remote-ws": {"transport":"websocket","url":"wss://api.example.com/mcp","description":"Remote WebSocket MCP server"    }  },"log_level":"INFO"}

Mixed Configuration (Local + Remote):

{"mcpServers": {"filesystem": {"transport":"stdio","command":"npx","args": ["-y","@modelcontextprotocol/server-filesystem","/home/user/docs"],"description":"Local filesystem access"    },"github": {"transport":"sse","url":"https://api.githubcopilot.com/mcp","headers": {"Authorization":"Bearer ${GITHUB_TOKEN}"      },"description":"GitHub MCP server"    },"remote-api": {"transport":"websocket","url":"wss://api.company.com/mcp","description":"Company internal MCP server"    }  },"log_level":"INFO"}
Configuration Parameters

Common Parameters:

  • Server name: A unique identifier for the server (e.g., "filesystem", "github")
  • transport: Transport method - "stdio" (default), "sse", or "websocket"
  • description (optional): Description of the server's functionality
  • timeout (optional): Connection timeout in seconds (default: 5.0)

stdio Transport (Local Servers):

  • command: The executable to run the server
  • args: Command-line arguments for the server
  • env: Environment variables for the server process

sse Transport (Server-Sent Events):

  • url: The SSE endpoint URL
  • headers (optional): HTTP headers for authentication
  • sse_read_timeout (optional): SSE read timeout in seconds (default: 300.0)

websocket Transport (WebSocket):

  • url: The WebSocket endpoint URL

Environment Variable Expansion: Headers and other string values support environment variable expansion using ${VARIABLE_NAME} syntax. This is especially useful for API keys:

{"headers": {"Authorization":"Bearer ${GITHUB_TOKEN}","X-API-Key":"${MY_API_KEY}"  }}

Available MCP Servers

OptiLLM supports both local and remote MCP servers:

Local MCP Servers (stdio transport)

You can use any of the official MCP servers or third-party servers that run as local processes:

  • Filesystem: @modelcontextprotocol/server-filesystem - File operations
  • Git: mcp-server-git - Git repository operations
  • SQLite: @modelcontextprotocol/server-sqlite - SQLite database access
  • Brave Search: @modelcontextprotocol/server-brave-search - Web search capabilities
Remote MCP Servers (SSE/WebSocket transport)

Remote servers provide centralized access without requiring local installation:

  • GitHub MCP Server: https://api.githubcopilot.com/mcp - Repository management, issue tracking, and code analysis
  • Third-party servers: Any MCP server that supports SSE or WebSocket protocols
Example: Comprehensive Configuration
{"mcpServers": {"filesystem": {"transport":"stdio","command":"npx","args": ["-y","@modelcontextprotocol/server-filesystem","/home/user/documents"],"description":"Local file system access"    },"search": {"transport":"stdio","command":"npx","args": ["-y","@modelcontextprotocol/server-brave-search"],"env": {"BRAVE_API_KEY":"your-api-key-here"      },"description":"Web search capabilities"    },"github": {"transport":"sse","url":"https://api.githubcopilot.com/mcp","headers": {"Authorization":"Bearer ${GITHUB_TOKEN}","Accept":"text/event-stream"      },"description":"GitHub repository and issue management"    }  },"log_level":"INFO"}

Using the MCP Plugin

Once configured, the MCP plugin will automatically:

  1. Connect to all configured MCP servers
  2. Discover available tools, resources, and prompts
  3. Make these capabilities available to the language model
  4. Handle tool calls and resource requests

The plugin enhances the system prompt with MCP capabilities so the model knows which tools are available. When the model decides to use a tool, the plugin:

  1. Executes the tool with the provided arguments
  2. Returns the results to the model
  3. Allows the model to incorporate the results into its response

Example Queries

Here are some examples of queries that will engage MCP tools:

Local Server Examples:

  • "List all the Python files in my documents directory" (Filesystem)
  • "What are the recent commits in my Git repository?" (Git)
  • "Search for the latest information about renewable energy" (Search)
  • "Query my database for all users who registered this month" (Database)

Remote Server Examples:

  • "Show me the open issues in my GitHub repository" (GitHub MCP)
  • "Create a new branch for the feature I'm working on" (GitHub MCP)
  • "What are the most recent pull requests that need review?" (GitHub MCP)
  • "Get the file contents from my remote repository" (GitHub MCP)
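A sketch of how such a query could be sent through the proxy, assuming the mcp plugin is selected via the optillm_approach field like other approaches and the filesystem server from the configuration above is running; the model name is illustrative and OPENAI_API_KEY is assumed to be set.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1")  # assumes OPENAI_API_KEY is set

# The mcp plugin advertises the configured servers' tools to the model
# and executes any tool calls it makes before returning the final answer.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "List all the Python files in my documents directory"}],
    extra_body={"optillm_approach": "mcp"},
)
print(response.choices[0].message.content)
```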

Troubleshooting

Logs

The MCP plugin logs detailed information to:

~/.optillm/logs/mcp_plugin.log

Check this log file for connection issues, tool execution errors, and other diagnostic information.

Common Issues

Local Server Issues (stdio transport):

  1. Command not found: Make sure the server executable is available in your PATH, or use an absolute path in the configuration.

  2. Access denied: For filesystem operations, ensure the paths specified in the configuration are accessible to the process.

Remote Server Issues (SSE/WebSocket transport):

  1. Connection timeout: Remote servers may take longer to connect. Increase the timeout value in your configuration.

  2. Authentication failed: Verify your API keys and tokens are correct. For the GitHub MCP server, ensure your GITHUB_TOKEN environment variable is set with appropriate permissions.

  3. Network errors: Check your internet connection and verify the server URL is accessible.

  4. Environment variable not found: If using ${VARIABLE_NAME} syntax, ensure the environment variables are set before starting OptiLLM.

General Issues:

  1. Method not found: Some servers don't implement all MCP capabilities (tools, resources, prompts). Verify which capabilities the server supports.

  2. Transport not supported: Ensure you're using a supported transport: "stdio", "sse", or "websocket".

Example: Testing GitHub MCP Connection

To test if your GitHub MCP server configuration is working:

  1. Set your GitHub token: export GITHUB_TOKEN="your-github-token"
  2. Start OptiLLM and check the logs at ~/.optillm/logs/mcp_plugin.log
  3. Look for connection success messages and discovered capabilities

Available parameters

optillm supports various command-line arguments for configuration. When using Docker, these can also be set as environment variables prefixed with OPTILLM_.

| Parameter | Description | Default Value |
|---|---|---|
| --approach | Inference approach to use | "auto" |
| --simulations | Number of MCTS simulations | 2 |
| --exploration | Exploration weight for MCTS | 0.2 |
| --depth | Simulation depth for MCTS | 1 |
| --best-of-n | Number of samples for best_of_n approach | 3 |
| --model | OpenAI model to use | "gpt-4o-mini" |
| --base-url | Base URL for OpenAI compatible endpoint | "" |
| --rstar-max-depth | Maximum depth for rStar algorithm | 3 |
| --rstar-num-rollouts | Number of rollouts for rStar algorithm | 5 |
| --rstar-c | Exploration constant for rStar algorithm | 1.4 |
| --n | Number of final responses to be returned | 1 |
| --return-full-response | Return the full response including the CoT with tags | False |
| --port | Specify the port to run the proxy | 8000 |
| --optillm-api-key | Optional API key for client authentication to optillm | "" |
| --cepo_* | See CePO Parameters section below for detailed config options | Various |
CePO Parameters
| Parameter | Description | Default Value |
|---|---|---|
| --cepo_bestofn_n | Number of responses to be generated in best of n stage | 3 |
| --cepo_bestofn_temperature | Temperature for verifier in best of n stage | 0.1 |
| --cepo_bestofn_max_tokens | Maximum number of tokens for verifier in best of n stage | 4096 |
| --cepo_bestofn_rating_type | Type of rating in best of n stage ("absolute" or "pairwise") | "absolute" |
| --cepo_planning_n | Number of plans generated in planning stage | 3 |
| --cepo_planning_m | Number of attempts to generate n plans in planning stage | 6 |
| --cepo_planning_temperature_step1 | Temperature for generator in step 1 of planning stage | 0.55 |
| --cepo_planning_temperature_step2 | Temperature for generator in step 2 of planning stage | 0.25 |
| --cepo_planning_temperature_direct_resp | Temperature for generator after step 2 if planning fails and answer directly | 0.1 |
| --cepo_planning_temperature_step3 | Temperature for generator in step 3 of planning stage | 0.1 |
| --cepo_planning_temperature_step4 | Temperature for generator in step 4 of planning stage | 0 |
| --cepo_planning_max_tokens_step1 | Maximum number of tokens in step 1 of planning stage | 4096 |
| --cepo_planning_max_tokens_step2 | Maximum number of tokens in step 2 of planning stage | 4096 |
| --cepo_planning_max_tokens_direct_resp | Maximum number of tokens after step 2 if planning fails and answer directly | 4096 |
| --cepo_planning_max_tokens_step3 | Maximum number of tokens in step 3 of planning stage | 4096 |
| --cepo_planning_max_tokens_step4 | Maximum number of tokens in step 4 of planning stage | 4096 |
| --cepo_use_reasoning_fallback | Whether to fallback to lower levels of reasoning when higher level fails | False |
| --cepo_num_of_retries | Number of retries if llm call fails, 0 for no retries | 0 |
| --cepo_print_output | Whether to print the output of each stage | False |
| --cepo_config_file | Path to CePO configuration file | None |
| --cepo_use_plan_diversity | Use additional plan diversity step | False |
| --cepo_rating_model | Specify a model for rating step if different than for completion | None |

Running with Docker

optillm can optionally be built and run using Docker and the provided Dockerfile.

Using Docker Compose

  1. Make sure you have Docker and Docker Compose installed on your system.

  2. Either update the environment variables in the docker-compose.yaml file or create a .env file in the project root directory and add any environment variables you want to set. For example, to set the OpenAI API key, add the following line to the .env file:

    OPENAI_API_KEY=your_openai_api_key_here
  3. Run the following command to start optillm:

    docker compose up -d

    This will build the Docker image if it doesn't exist and start the optillm service.

  4. optillm will be available at http://localhost:8000.

When using Docker, you can set these parameters as environment variables. For example, to set the approach and model, you would use:

```bash
OPTILLM_APPROACH=mcts
OPTILLM_MODEL=gpt-4
```

To secure the optillm proxy with an API key, set the OPTILLM_API_KEY environment variable:

OPTILLM_API_KEY=your_secret_api_key

When the API key is set, clients must include it in their requests using the Authorization header:

Authorization: Bearer your_secret_api_key
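With the OpenAI Python SDK this just means passing the optillm key as the client's api_key, since the SDK sends it as a Bearer token. A minimal sketch, assuming the proxy runs on localhost:8000 with its own provider keys configured:

```python
from openai import OpenAI

# The SDK sends api_key as "Authorization: Bearer <key>", which optillm verifies
client = OpenAI(
    api_key="your_secret_api_key",  # must match OPTILLM_API_KEY on the server
    base_url="http://localhost:8000/v1",
)

response = client.chat.completions.create(
    model="moa-gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
```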

SOTA results on benchmarks with optillm

MARS on AIME 2025, IMO 2025, and LiveCodeBench (Oct 2025)

| Benchmark | Approach | Problems | Correct | Accuracy | Improvement |
|---|---|---|---|---|---|
| AIME 2025 | Baseline | 30 | 13 | 43.3% | - |
| AIME 2025 | MARS | 30 | 22 | 73.3% | +30.0pp (+69.2%) |
| IMO 2025 | Baseline | 6 | 1 | 16.7% | - |
| IMO 2025 | MARS | 6 | 2 | 33.3% | +16.7pp (+100%) |
| LiveCodeBench v5/v6 | Baseline | 105 | 41 | 39.05% | - |
| LiveCodeBench v5/v6 | MARS | 105 | 53 | 50.48% | +11.43pp (+29.3%) |

Model: google/gemini-2.5-flash-lite-preview-09-2025 via OpenRouter
Configuration: 3 agents, 2-pass verification, thinking tags disabled for proofs

AutoThink on GPQA-Diamond & MMLU-Pro (May 2025)

| Model | GPQA-Diamond Accuracy (%) | GPQA-Diamond Avg. Tokens | MMLU-Pro Accuracy (%) | MMLU-Pro Avg. Tokens |
|---|---|---|---|---|
| DeepSeek-R1-Distill-Qwen-1.5B | 21.72 | 7868.26 | 25.58 | 2842.75 |
| with Fixed Budget | 28.47 | 3570.00 | 26.18 | 1815.67 |
| with AutoThink | 31.06 | 3520.52 | 26.38 | 1792.50 |

LongCePO on LongBench v2 (Apr 2025)

| Model¹ | Context window | Short samples (up to 32K words) | Medium samples (32–128K words) |
|---|---|---|---|
| Llama 3.3 70B Instruct | 128K | 36.7 (45.0) | 27.0 (33.0) |
| LongCePO + Llama 3.3 70B Instruct | 8K | 36.8 ± 1.38 | 38.7 ± 2.574 (39.735)² |
| Mistral-Large-Instruct-2411 | 128K | 41.7 (46.1) | 30.7 (34.9) |
| o1-mini-2024-09-12 | 128K | 48.6 (48.9) | 33.3 (32.9) |
| Claude-3.5-Sonnet-20241022 | 200K | 46.1 (53.9) | 38.6 (41.9) |
| Llama-4-Maverick-17B-128E-Instruct | 524K | 32.22 (50.56) | 28.84 (41.86) |

¹ Performance numbers reported by LongBench v2 authors, except for LongCePO and Llama-4-Maverick results.

² Numbers in parentheses for LongCePO indicate accuracy of majority voting from 5 runs.

LongCePO on HELMET - InfiniteBench En.MC, 128K length (Apr 2025)

| Model | Accuracy (%) |
|---|---|
| Llama 3.3 70B Instruct (full context) | 58.0 |
| LongCePO + Llama 3.3 70B Instruct (8K context) | 71.6 ± 1.855 (73.0)¹ |
| o1-mini-2024-09-12 (full context) | 58.0 |
| gpt-4o-2024-08-06 (full context) | 74.0 |

¹ Numbers in parentheses for LongCePO indicate accuracy of majority voting from 5 runs.

CePO on math and code benchmarks (Sep 2025)

| Method | AIME 2024 | AIME 2025 | GPQA | LiveCodeBench |
|---|---|---|---|---|
| Qwen3 8B | 74.0 | 68.3 | 59.3 | 55.7 |
| CePO (using Qwen3 8B) | 86.7 | 80.0 | 62.5 | 60.5 |
| Qwen3 32B | 81.4 | 72.9 | 66.8 | 65.7 |
| CePO (using Qwen3 32B) | 90.7 | 83.3 | 70.0 | 71.9 |
| Qwen3 235B | 85.7 | 81.5 | 71.1 | 70.7 |
| DeepSeek R1 | 79.8 | 70.0 | 71.5 | 64.3 |
| OpenAI o3-mini | 79.6 | 74.8 | 76.8 | 66.3 |
| Grok3 Think | 83.9 | 77.3 | 80.2 | 70.6 |

CePO on math and code benchmarks (Mar 2025)

| Method | Math-L5 | MMLU-Pro (Math) | CRUX | LiveCodeBench (pass@1) | Simple QA |
|---|---|---|---|---|---|
| Llama 3.3 70B | 51.0 | 78.6 | 72.6 | 27.1 | 20.9 |
| Llama 3.1 405B | 49.8 | 79.2 | 73.0 | 31.8 | 13.5 |
| CePO (using Llama 3.3 70B) | 69.6 | 84.8 | 80.1 | 31.9 | 22.6 |
| QwQ 32B | 61.4 | 90.8 | 82.5 | 44.3 | 7.8 |
| CePO (using QwQ 32B) | 88.1 | 92.0 | 86.3 | 51.5 | 8.2 |
| DeepSeek R1 Llama | 83.1 | 82.0 | 84.0 | 47.3 | 14.6 |
| CePO (using DeepSeek R1 Llama) | 90.2 | 84.0 | 89.4 | 47.2 | 15.5 |

coc-claude-3-5-sonnet-20241022 on AIME 2024 pass@1 (Nov 2024)

| Model | Score |
|---|---|
| o1-mini | 56.67 |
| coc-claude-3-5-sonnet-20241022 | 46.67 |
| coc-gemini/gemini-exp-1121 | 46.67 |
| o1-preview | 40.00 |
| gemini-exp-1114 | 36.67 |
| claude-3-5-sonnet-20241022 | 20.00 |
| gemini-1.5-pro-002 | 20.00 |
| gemini-1.5-flash-002 | 16.67 |

readurls&memory-gpt-4o-mini on Google FRAMES Benchmark (Oct 2024)

| Model | Accuracy |
|---|---|
| readurls&memory-gpt-4o-mini | 61.29 |
| gpt-4o-mini | 50.61 |
| readurls&memory-Gemma2-9b | 30.1 |
| Gemma2-9b | 5.1 |
| Gemma2-27b | 30.8 |
| Gemini Flash 1.5 | 66.5 |
| Gemini Pro 1.5 | 72.9 |

plansearch-gpt-4o-mini on LiveCodeBench (Sep 2024)

| Model | pass@1 | pass@5 | pass@10 |
|---|---|---|---|
| plansearch-gpt-4o-mini | 44.03 | 59.31 | 63.5 |
| gpt-4o-mini | 43.9 | 50.61 | 53.25 |
| claude-3.5-sonnet | 51.3 | | |
| gpt-4o-2024-05-13 | 45.2 | | |
| gpt-4-turbo-2024-04-09 | 44.2 | | |

moa-gpt-4o-mini on Arena-Hard-Auto (Aug 2024)

Results showing Mixture of Agents approach using gpt-4o-mini on Arena Hard Auto Benchmark

optillm with Patchwork (July 2024)

Since optillm is a drop-in replacement for the OpenAI API, you can easily integrate it with existing tools and frameworks using the OpenAI client. We used optillm with patchwork, an open-source framework that automates development gruntwork like PR reviews, bug fixing, and security patching using workflows called patchflows. We saw huge performance gains across all the supported patchflows, as shown below, when using the mixture of agents approach (moa).

Results showing optillm mixture of agents approach used with patchflows

Testing

OptiLLM includes a comprehensive test suite to ensure reliability and compatibility.

Running Tests

The main test suite can be run from the project root:

```bash
# Test all approaches with default test cases
python tests/test.py

# Test specific approaches
python tests/test.py --approaches moa bon mcts

# Run a single test
python tests/test.py --single-test "Simple Math Problem"
```

Unit and Integration Tests

Additional tests are available in the tests/ directory:

```bash
# Run all tests (requires pytest)
./tests/run_tests.sh

# Run specific test modules
pytest tests/test_plugins.py -v
pytest tests/test_api_compatibility.py -v
```

CI/CD

All tests are automatically run on pull requests via GitHub Actions. The workflow tests:

  • Multiple Python versions (3.10, 3.11, 3.12)
  • Unit tests for plugins and core functionality
  • API compatibility tests
  • Integration tests with various approaches

See tests/README.md for more details on the test structure and how to write new tests.

🤝 Contributing

We ❤️ contributions! OptiLLM is built by the community, for the community.

Development Setup

```bash
git clone https://github.com/codelion/optillm.git
cd optillm
python -m venv .venv
source .venv/bin/activate  # or `.venv\Scripts\activate` on Windows
pip install -r requirements.txt
pip install -r tests/requirements.txt

# Run tests
python -m pytest tests/
```

References

Citation

If you use this library in your research, please cite:

```bibtex
@software{optillm,
  title = {OptiLLM: Optimizing inference proxy for LLMs},
  author = {Asankhaya Sharma},
  year = {2024},
  publisher = {GitHub},
  url = {https://github.com/codelion/optillm}
}
```

Ready to optimize your LLMs? Install OptiLLM and see the difference! 🚀

Star us on GitHub if you find OptiLLM useful!

