mudler/LocalAI

🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed, P2P inference





LocalAI forks · LocalAI stars · LocalAI pull-requests

LocalAI Docker hub · LocalAI Quay.io

Follow LocalAI_API · Join LocalAI Discord Community

mudler/LocalAI | Trendshift

💡 Get help - ❓ FAQ · 💭 Discussions · 💬 Discord · 📖 Documentation website

💻 Quickstart · 🖼️ Models · 🚀 Roadmap · 🥽 Demo · 🌍 Explorer · 🛫 Examples · Try on Telegram

tests · Build and Release · build container images · Bump dependencies · Artifact Hub

LocalAI is the free, Open Source OpenAI alternative. LocalAI acts as a drop-in replacement REST API that is compatible with the OpenAI (as well as Elevenlabs, Anthropic, and other) API specifications for local AI inferencing. It allows you to run LLMs, generate images, audio (and more) locally or on-prem with consumer-grade hardware, supporting multiple model families. It does not require a GPU. It is created and maintained by Ettore Di Giacinto.
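In practice, "drop-in replacement" means that requests in the OpenAI API shape can simply be pointed at the local endpoint. A minimal sketch, assuming LocalAI is already running on the default port 8080 (see the Quickstart below) and that a model has been installed; the model name here is a placeholder for whichever model you use:

# Sketch: OpenAI-style chat completion against a local LocalAI instance.
# Assumes the server listens on localhost:8080 and a model named
# "llama-3.2-1b-instruct" (placeholder) is installed.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.2-1b-instruct",
    "messages": [{"role": "user", "content": "Hello, how are you?"}]
  }'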

📚🆕 Local Stack Family

🆕 LocalAI is now part of a comprehensive suite of AI tools designed to work together:

LocalAGI

A powerful Local AI agent management platform that serves as a drop-in replacement for OpenAI's Responses API, enhanced with advanced agentic capabilities.

LocalRecall

A REST-ful API and knowledge base management system that provides persistent memory and storage capabilities for AI agents.

Screenshots

Screenshots of the WebUI: Talk interface, audio generation, models overview, image generation, chat interface, home page, login, and the P2P (Swarm) dashboard.

💻 Quickstart

Run the installer script:

# Basic installation
curl https://localai.io/install.sh | sh

For more installation options, see Installer Options.

Or run with docker:

CPU only image:

docker run -ti --name local-ai -p 8080:8080 localai/localai:latest

NVIDIA GPU Images:

# CUDA 12.0
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12
# CUDA 11.7
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-11
# NVIDIA Jetson (L4T) ARM64
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-nvidia-l4t-arm64

AMD GPU Images (ROCm):

docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-gpu-hipblas

Intel GPU Images (oneAPI):

# Intel GPU with FP16 support
docker run -ti --name local-ai -p 8080:8080 --device=/dev/dri/card1 --device=/dev/dri/renderD128 localai/localai:latest-gpu-intel-f16
# Intel GPU with FP32 support
docker run -ti --name local-ai -p 8080:8080 --device=/dev/dri/card1 --device=/dev/dri/renderD128 localai/localai:latest-gpu-intel-f32

Vulkan GPU Images:

docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-vulkan

AIO Images (pre-downloaded models):

# CPU version
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
# NVIDIA CUDA 12 version
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-12
# NVIDIA CUDA 11 version
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-11
# Intel GPU version
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-gpu-intel-f16
# AMD GPU version
docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-aio-gpu-hipblas

For more information about the AIO images and pre-downloaded models, see Container Documentation.
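Once a container from any of the variants above is running, a quick sanity check is to query the OpenAI-compatible models endpoint. This is only a sketch and assumes the default port mapping (-p 8080:8080) used in the commands above:

# Verify the API is reachable and list the installed models
curl http://localhost:8080/v1/models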

To load models:

# From the model gallery (see available models with `local-ai models list`, in the WebUI from the model tab, or visiting https://models.localai.io)
local-ai run llama-3.2-1b-instruct:q4_k_m
# Start LocalAI with the phi-2 model directly from huggingface
local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
# Install and run a model from the Ollama OCI registry
local-ai run ollama://gemma:2b
# Run a model from a configuration file
local-ai run https://gist.githubusercontent.com/.../phi-2.yaml
# Install and run a model from a standard OCI registry (e.g., Docker Hub)
local-ai run oci://localai/phi-2:latest
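Models loaded this way are served through the OpenAI-compatible endpoints. As a further sketch (not the project's canonical example), an image-generation request follows the OpenAI images API shape; it assumes an image-generation model is already installed, and the model name below is only a placeholder:

# Sketch: OpenAI-style image generation against a local LocalAI instance.
# "stablediffusion" is a placeholder; substitute the image model you actually installed.
curl http://localhost:8080/v1/images/generations \
  -H "Content-Type: application/json" \
  -d '{
    "model": "stablediffusion",
    "prompt": "A cute baby sea otter",
    "size": "256x256"
  }'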

For more information, see 💻 Getting started.

📰 Latest project news

Roadmap items: List of issues

🔗 Community and integrations

Build and deploy custom containers:

WebUIs:

Model galleries

Other:

🔗 Resources

Citation

If you utilize this repository or its data in a downstream project, please consider citing it with:

@misc{localai,
  author = {Ettore Di Giacinto},
  title = {LocalAI: The free, Open source OpenAI alternative},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/go-skynet/LocalAI}},
}

❤️ Sponsors

Do you find LocalAI useful?

Support the project by becoming a backer or sponsor. Your logo will show up here with a link to your website.

A huge thank you to our generous sponsors who support this project by covering CI expenses, and to everyone on our Sponsor list:


🌟 Star history

LocalAI Star history Chart

📖 License

LocalAI is a community-driven project created by Ettore Di Giacinto.

MIT - Author Ettore Di Giacinto <mudler@localai.io>

🙇 Acknowledgements

LocalAI couldn't have been built without the help of great software already available from the community. Thank you!

🤗 Contributors

This is a community project, a special thanks to our contributors! 🤗
