Local AI API Platform


Docs · API Reference · Changelog · Issues · Community

Under Active Development - Expect rapid improvements!

Cortex is the open-source brain for robots: vision, speech, language, tabular, and action -- the cloud is optional.

Installation

| Platform | Installer |
|----------------|-------------|
| Windows | cortex.exe |
| macOS | cortex.pkg |
| Linux (Debian) | cortex.deb |

All other Linux distributions:

curl -s https://raw.githubusercontent.com/janhq/cortex/main/engine/templates/linux/install.sh | sudo bash

Start the Server

cortex start
Set log level to INFO
Host: 127.0.0.1 Port: 39281
Server started
API Documentation available at: http://127.0.0.1:39281

Full API docs.
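Once the server is up, you can probe it over HTTP. Below is a minimal Python sketch, assuming the server exposes an OpenAI-style `/v1/models` route at the address printed on startup (the route and port are assumptions; confirm them against the local API docs page):

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:39281"  # host/port printed by `cortex start`

def model_ids(models_response):
    """Extract model ids from an OpenAI-style /v1/models response body."""
    return [m["id"] for m in models_response.get("data", [])]

def list_models(base_url=BASE_URL):
    """Fetch and parse the model list from the running server."""
    with urllib.request.urlopen(f"{base_url}/v1/models") as resp:
        return model_ids(json.load(resp))
```

Calling `list_models()` with the server running should return the ids of the models you have pulled locally.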

Download Models

You can download models from the Hugging Face model hub using the `cortex pull` command:

cortex pull llama3.2
Downloaded models:
    llama3.1:8b-gguf-q4-km
    llama3.2:3b-gguf-q2-k
Available to download:
    1. llama3:8b-gguf
    2. llama3:8b-gguf-q2-k
    3. llama3:8b-gguf-q3-kl
    4. ...
Select a model (1-21):

Run Models

cortex run llama3.2
In order to exit, type `exit()`
>

You can also run it in detached mode: the model runs in the background, and you can use it via the API:

cortex run -d llama3.2:3b-gguf-q2-k
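With a model running in detached mode, requests can go through the OpenAI-compatible API. Here is a hedged Python sketch, assuming a `/v1/chat/completions` route and the default host/port shown by `cortex start` (both are assumptions; check the local API docs for the exact routes):

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:39281"  # host/port printed by `cortex start`

def chat_payload(model, prompt):
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(model, prompt, base_url=BASE_URL):
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(chat_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

# Example (requires a running server and a pulled model):
# chat("llama3.2:3b-gguf-q2-k", "Say hello in one word.")
```

The model name passed in the payload should match one of the downloaded models listed by `cortex pull` or `cortex ps`.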

Manage resources

cortex ps    # View active models
cortex stop  # Shutdown server

Why Cortex.cpp?

Local AI platform for running AI models with:

  • Multi-Engine Support - Start with llama.cpp or add your own
  • Hardware Optimized - Automatic GPU detection (NVIDIA/AMD/Intel)
  • OpenAI-Compatible API - Tools, Runs, and Multi-modal coming soon

Featured Models

| Model | Command | Min RAM |
|------------|----------------------|---------|
| Llama 3 8B | `cortex run llama3.1` | 8GB |
| Phi-4 | `cortex run phi-4` | 8GB |
| Mistral | `cortex run mistral` | 4GB |
| Gemma 2B | `cortex run gemma2` | 6GB |

View all supported models →

Advanced Features

See the Development Builds table below for the nightly binaries.

# Multiple quantizations
cortex-nightly pull llama3.2    # Choose from several quantization options

# Engine management (nightly)
cortex-nightly engines install llama-cpp -m

# Hardware control
cortex-nightly hardware detect
cortex-nightly hardware activate

Need Help?


For Contributors

Development Builds

| Version | Windows | macOS | Linux |
|---------|---------|-------|-------|
| Stable | exe | pkg | deb |
| Beta | exe | pkg | deb |
| Nightly | exe | pkg | deb |

Build from Source

See BUILDING.md

Uninstall Cortex

Windows

  1. Open the Windows Control Panel.
  2. Navigate to Add or Remove Programs.
  3. Search for cortexcpp and double-click to uninstall. (For beta and nightly builds, search for cortexcpp-beta and cortexcpp-nightly respectively.)

macOS/Linux

Run the uninstaller script:

sudo cortex-uninstall.sh

The uninstaller script ships with the binary and is installed to the /usr/local/bin/ directory. It is named cortex-uninstall.sh for stable builds, cortex-beta-uninstall.sh for beta builds, and cortex-nightly-uninstall.sh for nightly builds.

Contact Support

