
llm-inference

Here are 1,079 public repositories matching this topic...

GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.

  • Updated May 27, 2025
  • C++
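
As a quick illustration of the local, on-device inference GPT4All provides, here is a minimal sketch using its Python bindings; the model filename is an example, and the bindings download it on first use.

```python
# Minimal on-device inference sketch with the gpt4all Python bindings.
# The model filename is an example; it is downloaded on first use.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # runs fully locally, no API key
with model.chat_session():
    reply = model.generate("Explain KV caching in one sentence.", max_tokens=128)
    print(reply)
```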

This project shares the technical principles behind large language models together with hands-on experience (LLM engineering and bringing LLM applications to production).

  • Updated Jul 10, 2025
  • HTML

20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.

  • Updated Jul 18, 2025
  • Python
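
This entry's description matches Lightning AI's LitGPT; assuming that project, a minimal load-and-generate sketch with its Python API follows. The checkpoint identifier is an example.

```python
# Load-and-generate sketch with LitGPT's Python API.
# The checkpoint identifier is an example; weights download on first load.
from litgpt import LLM

llm = LLM.load("microsoft/phi-2")
print(llm.generate("What is speculative decoding?", max_new_tokens=64))
```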

Run any open-source LLM, such as DeepSeek or Llama, as an OpenAI-compatible API endpoint in the cloud.

  • Updated Jul 14, 2025
  • Python
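
Whatever server provides it, an OpenAI-compatible endpoint can be called with the standard openai client. The sketch below assumes a local deployment; the base URL, port, and model name are all illustrative.

```python
# Querying an OpenAI-compatible endpoint with the standard openai client.
# Base URL, port, and model name are assumptions for a local deployment.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:3000/v1", api_key="na")
resp = client.chat.completions.create(
    model="llama3.1:8b",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```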

Official inference library for Mistral models

  • Updated Mar 20, 2025
  • Jupyter Notebook

High-speed Large Language Model Serving for Local Deployment

  • Updated Feb 19, 2025
  • C++
BentoML

The easiest way to serve AI apps and models - Build Model Inference APIs, Job queues, LLM apps, Multi-model pipelines, and more!

  • Updated Jul 18, 2025
  • Python
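
A minimal sketch of such a model inference API, assuming BentoML's 1.2+ service/API decorator style; the class, method name, and echoed response are illustrative placeholders rather than a real model call.

```python
# Minimal BentoML-style inference API sketch (names are illustrative).
import bentoml

@bentoml.service
class Echo:
    @bentoml.api
    def generate(self, prompt: str) -> str:
        # A real service would run a model here; we echo for brevity.
        return f"echo: {prompt}"
```

Running `bentoml serve` against this file exposes the method as an HTTP endpoint.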

🚀 A real-time conversational digital human for mobile, billed as the best-performing available. It supports local deployment and multimodal interaction (voice, text, facial expressions), responds in under 1.5 seconds, and targets livestreaming, education, customer service, finance, government, and other scenarios with strict privacy and latency requirements. Works out of the box and is developer-friendly.

  • Updated Jul 18, 2025
  • C++

LMDeploy is a toolkit for compressing, deploying, and serving LLMs.

  • Updated Jul 18, 2025
  • Python
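
A minimal serving sketch using LMDeploy's high-level pipeline API; the model identifier is an example, and the pipeline batches the listed prompts automatically.

```python
# Batched inference sketch with LMDeploy's pipeline API.
# The model identifier is an example; weights are fetched from the hub.
from lmdeploy import pipeline

pipe = pipeline("internlm/internlm2_5-7b-chat")
responses = pipe(["What is paged attention?", "Summarize LoRA in one sentence."])
for r in responses:
    print(r.text)
```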
Awesome-LLM-Inference

📚 A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc. 🎉

  • Updated Jul 14, 2025
  • Python

FlashInfer: Kernel Library for LLM Serving

  • Updated Jul 18, 2025
  • Cuda

Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs

  • Updated May 21, 2025
  • Python
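
This entry's description matches Predibase's LoRAX; assuming that project, the sketch below shows the core pattern of per-request LoRA adapter selection over one shared base model. The endpoint URL and adapter id are illustrative.

```python
# Multi-LoRA inference sketch: one shared base model, per-request adapters.
# Endpoint URL and adapter_id are assumptions for illustration.
from lorax import Client

client = Client("http://127.0.0.1:8080")
# Each request can target a different fine-tuned adapter:
out = client.generate("Classify: great movie!", adapter_id="acme/sentiment-lora")
print(out.generated_text)
```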

The edge and AI gateway for agents. Arch is an intelligent proxy server that handles the low-level work of building agents, such as applying guardrails, routing prompts to the right agent, and unifying access to any LLM. It's a framework-agnostic infrastructure layer that helps you build production-grade agents faster.

  • Updated Jul 17, 2025
  • Rust

Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.

  • Updated Jul 16, 2025
  • Jupyter Notebook


