menloresearch/cortex.cpp
This repository is no longer actively maintained.
Development has moved to menloresearch/llama.cpp.
Please contribute directly to llama.cpp moving forward.
Docs • API Reference • Changelog • Issues • Community
Under Active Development - Expect rapid improvements!
Cortex is the open-source brain for robots: vision, speech, language, tabular, and action -- the cloud is optional.
Platform | Installer
---|---
Windows | cortex.exe
macOS | cortex.pkg
Linux (Debian) | cortex.deb
All other Linux distributions:

```sh
curl -s https://raw.githubusercontent.com/menloresearch/cortex/main/engine/templates/linux/install.sh | sudo bash
```
```sh
cortex start
```

```
Set log level to INFO
Host: 127.0.0.1 Port: 39281
Server started
API Documentation available at: http://127.0.0.1:39281
```
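Once `cortex start` reports its host and port, you can programmatically confirm the server is reachable before issuing API calls. A minimal sketch using only Python's standard library; the host and port are the defaults shown above, so adjust them if your configuration differs:

```python
import socket

# Defaults printed by `cortex start`; change these if your server uses others.
HOST, PORT = "127.0.0.1", 39281

def server_is_up(host: str = HOST, port: int = PORT, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to the Cortex server can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("server reachable:", server_is_up())
```

This only checks TCP reachability; the API documentation page printed at startup confirms the server is actually serving requests.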
You can download models from the Hugging Face model hub using the `cortex pull` command:

```sh
cortex pull llama3.2
```
```
Downloaded models:
    llama3.1:8b-gguf-q4-km
    llama3.2:3b-gguf-q2-k

Available to download:
    1. llama3:8b-gguf
    2. llama3:8b-gguf-q2-k
    3. llama3:8b-gguf-q3-kl
    4. ...

Select a model (1-21):
```
```sh
cortex run llama3.2
```

To exit the chat session, type `exit()` at the `>` prompt.
You can also run a model in detached mode, i.e. in the background, and use it via the API:
```sh
cortex run -d llama3.2:3b-gguf-q2-k
```

```sh
cortex ps      # View active models
cortex stop    # Shut down the server
```
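With a model running in detached mode, the server can be called like any OpenAI-compatible endpoint. A minimal stdlib-only sketch, assuming the standard OpenAI-style `/v1/chat/completions` route and the default host/port; the exact routes are listed on the API documentation page the server prints at startup, and the model name below is just the one pulled in the example above:

```python
import json
from urllib import request  # used to send the payload to a live server

BASE_URL = "http://127.0.0.1:39281"  # defaults printed by `cortex start`

def build_chat_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def send_chat(payload: dict) -> dict:
    """POST the payload to the server; requires `cortex run -d <model>` to be active."""
    req = request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_payload("llama3.2:3b-gguf-q2-k", "Hello!")
print(json.dumps(payload, indent=2))
# Against a running server:
#   reply = send_chat(payload)
#   print(reply["choices"][0]["message"]["content"])
```

Because the API follows the OpenAI shape, existing OpenAI client libraries should also work by pointing their base URL at the local server.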
Local AI platform for running AI models with:
- Multi-Engine Support - Start with llama.cpp or add your own
- Hardware Optimized - Automatic GPU detection (NVIDIA/AMD/Intel)
- OpenAI-Compatible API - Tools, Runs, and Multi-modal coming soon
Model | Command | Min RAM |
---|---|---|
Llama 3.1 8B | cortex run llama3.1 | 8GB |
Phi-4 | cortex run phi-4 | 8GB |
Mistral | cortex run mistral | 4GB |
Gemma 2B | cortex run gemma2 | 6GB |
See the version table below for the nightly build binaries.
```sh
# Multiple quantizations
cortex-nightly pull llama3.2    # Choose from several quantization options

# Engine management (nightly)
cortex-nightly engines install llama-cpp -m

# Hardware control
cortex-nightly hardware detect
cortex-nightly hardware activate
```
- Quick troubleshooting: `cortex --help`
- Documentation
- Community Discord
- Report Issues
Version | Windows | macOS | Linux |
---|---|---|---|
Stable | exe | pkg | deb |
Beta | exe | pkg | deb |
Nightly | exe | pkg | deb |
See BUILDING.md.
- Open the Windows Control Panel.
- Navigate to Add or Remove Programs.
- Search for cortexcpp and double-click to uninstall. (For beta and nightly builds, search for cortexcpp-beta and cortexcpp-nightly respectively.)
Run the uninstaller script:

```sh
sudo cortex-uninstall.sh
```

The uninstall script ships with the binary and is installed to the /usr/local/bin/ directory. It is named cortex-uninstall.sh for stable builds, cortex-beta-uninstall.sh for beta builds, and cortex-nightly-uninstall.sh for nightly builds.
- For support, please file a GitHub ticket.
- For questions, join our Discord here.
- For long-form inquiries, please email hello@jan.ai.