Documentation - API Reference - Changelog - Bug reports - Discord
Cortex.cpp is currently in active development.
Cortex is a Local AI API Platform used to run and customize LLMs.
Key Features:
- Pull from Hugging Face, or Cortex Built-in Models
- Models stored in universal file formats (vs blobs)
- Swappable engines (default: llamacpp; future: ONNXRuntime, TensorRT-LLM)
- Cortex can be deployed as a standalone API server, or integrated into apps like Jan.ai
Coming soon; now available on cortex-nightly:
- Engines Management (install specific llama-cpp versions and variants)
- Hardware detection & activation (current: Nvidia; future: AMD, Intel, Qualcomm)

Cortex's roadmap is to implement the full OpenAI API, including Tools, Runs, Multi-modal, and Realtime APIs.
Cortex has a Local Installer that packages all required dependencies, so no internet connection is required during installation.
Cortex also has a Network Installer, which downloads the necessary dependencies from the internet during installation.
Windows: cortex.exe
MacOS (Silicon/Intel): cortex.pkg
Linux (Debian-based distros): cortex-linux-local-installer.deb
- For Linux: download the installer and run the following command in a terminal:

```bash
# Linux debian based distros
curl -s https://raw.githubusercontent.com/janhq/cortex/main/engine/templates/linux/install.sh | sudo bash -s -- --deb_local

# Other Linux distros
curl -s https://raw.githubusercontent.com/janhq/cortex/main/engine/templates/linux/install.sh | sudo bash -s
```

- The binary will be installed in the `/usr/bin/` directory.
After installation, you can run Cortex.cpp from the command line by typing `cortex --help`.
```bash
# Run a Model
cortex pull llama3.2
cortex pull bartowski/Meta-Llama-3.1-8B-Instruct-GGUF
cortex run llama3.2

# Resource Management
cortex ps                            # view active models & RAM/VRAM used
cortex models stop llama3.2

# Available on cortex-nightly:
cortex engines install llama-cpp -m  # lists versions and variants
cortex hardware list                 # hardware detection
cortex hardware activate

cortex stop
```

Refer to our Quickstart and CLI documentation for more details.
Cortex.cpp includes a REST API accessible at `localhost:39281`.
Refer to our API documentation for more details.
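For illustration, a chat request to the local server might be composed as in the sketch below. The `/v1/chat/completions` route and request fields are assumed from the OpenAI-compatible API described above; verify the exact paths against the API reference.

```shell
# Compose an OpenAI-style chat request (route and fields are assumptions
# based on the OpenAI-compatible API; check the API reference).
API_URL="http://localhost:39281/v1/chat/completions"
BODY='{"model":"llama3.2","messages":[{"role":"user","content":"Hello"}]}'
echo "POST $API_URL"
echo "$BODY"
# With the server and model running, send it with:
#   curl -s "$API_URL" -H "Content-Type: application/json" -d "$BODY"
```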
Cortex.cpp allows users to pull models from multiple Model Hubs, offering flexibility and extensive model access:
- Hugging Face: GGUF models, e.g. `author/Model-GGUF`
- Cortex Built-in Models
Once downloaded, the model.gguf and model.yml files are stored in `~\cortexcpp\models`.
Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 14B models, and 32 GB to run the 32B models.
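These figures follow from a common rule of thumb (an estimate, not a Cortex-specific formula): weight memory is roughly parameter count × bits per weight / 8, plus overhead for the KV cache and activations. A quick sketch for a 7B model at 4-bit quantization:

```shell
# Rule-of-thumb memory estimate (illustrative, not exact):
# bytes ~= params * bits_per_weight / 8, plus ~20% overhead
# for KV cache and activations.
PARAMS=7000000000   # 7B model
BITS=4              # e.g. a q4 quantization
BYTES=$(( PARAMS * BITS / 8 ))
TOTAL=$(( BYTES + BYTES / 5 ))   # +20% overhead
echo "Approx. memory: $(( TOTAL / 1000000000 )) GB"
```

At roughly 4 GB for a 4-bit 7B model, the 8 GB guidance above leaves headroom for the OS and longer contexts.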
| Model / Engine | llama.cpp | Command |
|---|---|---|
| phi-3.5 | ✅ | cortex run phi3.5 |
| llama3.2 | ✅ | cortex run llama3.2 |
| llama3.1 | ✅ | cortex run llama3.1 |
| codestral | ✅ | cortex run codestral |
| gemma2 | ✅ | cortex run gemma2 |
| mistral | ✅ | cortex run mistral |
| ministral | ✅ | cortex run ministral |
| qwen2.5 | ✅ | cortex run qwen2.5 |
| openhermes-2.5 | ✅ | cortex run openhermes-2.5 |
| tinyllama | ✅ | cortex run tinyllama |
View all Cortex Built-in Models.
Cortex supports multiple quantizations for each model.
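Each tag in the listing below encodes the model name, parameter size, file format, and quantization level (e.g. `llama3.2:3b-gguf-q4-km`). The `name:size-format-quant` field layout is inferred from the tags themselves, so treat this breakdown as illustrative:

```shell
# Split a model tag of the assumed form name:size-format-quant.
TAG="llama3.2:3b-gguf-q4-km"
NAME="${TAG%%:*}"                        # model name   -> llama3.2
REST="${TAG#*:}"                         # 3b-gguf-q4-km
SIZE="${REST%%-*}"                       # size         -> 3b
FORMAT="$(echo "$REST" | cut -d- -f2)"   # file format  -> gguf
QUANT="${REST#*-*-}"                     # quantization -> q4-km
echo "$NAME | $SIZE | $FORMAT | $QUANT"
```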
```
❯ cortex-nightly pull llama3.2
Downloaded models:
    llama3.2:3b-gguf-q2-k
Available to download:
    1. llama3.2:3b-gguf-q3-kl
    2. llama3.2:3b-gguf-q3-km
    3. llama3.2:3b-gguf-q3-ks
    4. llama3.2:3b-gguf-q4-km (default)
    5. llama3.2:3b-gguf-q4-ks
    6. llama3.2:3b-gguf-q5-km
    7. llama3.2:3b-gguf-q5-ks
    8. llama3.2:3b-gguf-q6-k
    9. llama3.2:3b-gguf-q8-0

Select a model (1-9):
```

Cortex.cpp is available with a Network Installer, which is a smaller installer but requires an internet connection during installation to download the necessary dependencies.
MacOS (Universal): cortex-mac-network-installer.pkg
Linux (Debian-based distros): cortex-linux-network-installer.deb
Cortex releases Beta and Nightly versions for advanced users who want to try new features (we appreciate your feedback!).
- Beta (early preview): CLI command: `cortex-beta`
- Nightly (released every night): CLI command: `cortex-nightly`
  - Nightly automatically pulls the latest changes from the upstream llama.cpp repo, creates a PR, and runs tests.
  - If all tests pass, the PR is automatically merged into our repo with the latest llama.cpp version.
| Version | Windows | MacOS | Linux debian based distros |
|---|---|---|---|
| Beta (Preview) | cortex.exe | cortex.pkg | cortex.deb |
| Nightly (Experimental) | cortex.exe | cortex.pkg | cortex.deb |
Cortex.cpp is also available with a Network Installer, which is a smaller installer but requires an internet connection during installation to download the necessary dependencies.
| Version Type | Windows | MacOS | Linux debian based distros |
|---|---|---|---|
| Stable (Recommended) | cortex.exe | cortex.pkg | cortex.deb |
| Beta (Preview) | cortex.exe | cortex.pkg | cortex.deb |
| Nightly (Experimental) | cortex.exe | cortex.pkg | cortex.deb |
On Windows:

- Clone the Cortex.cpp repository here.
- Navigate to the `engine` folder.
- Configure vcpkg:

```bash
cd vcpkg
./bootstrap-vcpkg.bat
vcpkg install
```

- Build Cortex.cpp inside the `engine/build` folder:

```bash
mkdir build
cd build
cmake .. -DBUILD_SHARED_LIBS=OFF -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder_in_cortex_repo/vcpkg/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows-static
cmake --build . --config Release
```

- Verify that Cortex.cpp is installed correctly by getting help information:

```bash
cortex -h
```
On MacOS:

- Clone the Cortex.cpp repository here.
- Navigate to the `engine` folder.
- Configure vcpkg:

```bash
cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
```

- Build Cortex.cpp inside the `engine/build` folder:

```bash
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder_in_cortex_repo/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
```

- Verify that Cortex.cpp is installed correctly by getting help information:

```bash
cortex -h
```
On Linux:

- Clone the Cortex.cpp repository here.
- Navigate to the `engine` folder.
- Configure vcpkg:

```bash
cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
```

- Build Cortex.cpp inside the `engine/build` folder:

```bash
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder_in_cortex_repo/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
```

- Verify that Cortex.cpp is installed correctly by getting help information:

```bash
cortex -h
```
- Open the Cortex.cpp repository in Codespaces or a local devcontainer:

```bash
devcontainer up --workspace-folder .
```

- Configure vcpkg in `engine/vcpkg`:

```bash
cd engine/vcpkg
export VCPKG_FORCE_SYSTEM_BINARIES="$([[ $(uname -m) == 'arm64' ]] && echo '1' || echo '0')"
./bootstrap-vcpkg.sh
```

- Build Cortex.cpp inside the `engine/build` folder:

```bash
cd engine
mkdir -p build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=$(realpath ..)/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j$(grep -c ^processor /proc/cpuinfo)
```

- Verify that Cortex.cpp is installed correctly by getting help information:

```bash
cd engine/build
./cortex -h
```

- Every time a rebuild is needed, just run the commands above as a one-liner:

```bash
npx -y runme run --filename README.md -t devcontainer -y
```
- Open the Windows Control Panel.
- Navigate to Add or Remove Programs.
- Search for `cortexcpp` and double-click to uninstall. (For beta and nightly builds, search for `cortexcpp-beta` and `cortexcpp-nightly` respectively.)
Run the uninstaller script:

```bash
sudo sh cortex-uninstall.sh
```

For MacOS, an uninstaller script comes with the binary and is added to the `/usr/local/bin/` directory. The script is named `cortex-uninstall.sh` for stable builds, `cortex-beta-uninstall.sh` for beta builds, and `cortex-nightly-uninstall.sh` for nightly builds.
```bash
sudo apt remove cortexcpp
```
- For support, please file a GitHub ticket.
- For questions, join our Discord here.
- For long-form inquiries, please email hello@jan.ai.