appdevwk/cortex.cpp: Local AI API Platform

 
 


Cortex cpp's Readme Banner


Documentation - API Reference - Changelog - Bug reports - Discord

Cortex.cpp is currently in active development.

Overview

Cortex is a Local AI API Platform for running and customizing LLMs.

Key Features:

  • Pull from Hugging Face, or Cortex Built-in Models
  • Models stored in universal file formats (vs blobs)
  • Swappable engines (default: llama.cpp; future: ONNX Runtime, TensorRT-LLM)
  • Cortex can be deployed as a standalone API server, or integrated into apps like Jan.ai

Coming soon; now available on cortex-nightly:

  • Engines management (install specific llama.cpp versions and variants)
  • Hardware detection & activation (current: Nvidia; future: AMD, Intel, Qualcomm)
  • Cortex's roadmap is to implement the full OpenAI API, including Tools, Runs, Multi-modal, and Realtime APIs.

Local Installation

Cortex has a Local Installer that packages all required dependencies, so no internet connection is required during the installation process.

Cortex also has a Network Installer, which downloads the necessary dependencies from the internet during installation.

Windows: cortex.exe

MacOS (Silicon/Intel): cortex.pkg

Linux (Debian-based distros): cortex-linux-local-installer.deb

  • For Linux: download the installer and run the following command in a terminal:

```sh
# Debian-based distros
curl -s https://raw.githubusercontent.com/janhq/cortex/main/engine/templates/linux/install.sh | sudo bash -s -- --deb_local

# Other Linux distros
curl -s https://raw.githubusercontent.com/janhq/cortex/main/engine/templates/linux/install.sh | sudo bash -s
```

  • The binary will be installed in the /usr/bin/ directory.

Usage

CLI

After installation, you can run Cortex.cpp from the command line by typing cortex --help.

```sh
# Run a model
cortex pull llama3.2
cortex pull bartowski/Meta-Llama-3.1-8B-Instruct-GGUF
cortex run llama3.2

# Resource management
cortex ps                             # view active models & RAM/VRAM used
cortex models stop llama3.2

# Available on cortex-nightly:
cortex engines install llama-cpp -m   # lists versions and variants
cortex hardware list                  # hardware detection
cortex hardware activate
cortex stop
```

Refer to our Quickstart and CLI documentation for more details.

API

Cortex.cpp includes a REST API accessible at localhost:39281.

Refer to our API documentation for more details.
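As a sketch of what talking to the local server can look like, the snippet below builds an OpenAI-style chat completion request body. The /v1/chat/completions route and the payload shape are assumptions based on the OpenAI-API roadmap mentioned above; consult the API documentation for the exact endpoints.

```python
import json

# Cortex's REST API listens here by default (see above).
BASE_URL = "http://localhost:39281"
CHAT_ENDPOINT = BASE_URL + "/v1/chat/completions"  # assumed route


def chat_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }


# POST this JSON to CHAT_ENDPOINT once a model is running.
print(json.dumps(chat_payload("llama3.2", "Hello!")))
```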

Models

Cortex.cpp allows users to pull models from multiple Model Hubs, offering flexibility and extensive model access:

  • Hugging Face: GGUF models, e.g. author/Model-GGUF
  • Cortex Built-in Models

Once downloaded, the model.gguf and model.yml files are stored in ~\cortexcpp\models.
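A small sketch of inspecting that directory. The layout assumed here (one subfolder per model containing a model.yml descriptor) is extrapolated from the sentence above; the demo runs against a temporary stand-in directory so a real install is never touched.

```python
from pathlib import Path
import tempfile


def list_models(models_dir: Path) -> list:
    """Return names of model folders that contain a model.yml descriptor."""
    return sorted(p.parent.name for p in models_dir.glob("*/model.yml"))


# Demo against a temporary stand-in for the cortexcpp/models directory.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    for name in ("llama3.2", "mistral"):
        (root / name).mkdir()
        (root / name / "model.yml").touch()
    print(list_models(root))  # ['llama3.2', 'mistral']
```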

Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 14B models, and 32 GB to run the 32B models.
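The note above can be expressed as a rough sizing helper. This is only a restatement of the three data points given; behavior for sizes between or above them is my own interpolation, not an official guideline.

```python
# Rule of thumb from the note above: 7B -> 8 GB, 14B -> 16 GB, 32B -> 32 GB.
RAM_GUIDE = {7: 8, 14: 16, 32: 32}


def min_ram_gb(params_billion: int) -> int:
    """Suggested minimum RAM (GB): the smallest listed class that fits."""
    for size in sorted(RAM_GUIDE):
        if params_billion <= size:
            return RAM_GUIDE[size]
    raise ValueError("the note gives no guidance above 32B")


print(min_ram_gb(7), min_ram_gb(14), min_ram_gb(32))  # 8 16 32
```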

Cortex Built-in Models & Quantizations

| Model (engine: llama.cpp) | Command |
|---|---|
| phi-3.5 | cortex run phi3.5 |
| llama3.2 | cortex run llama3.2 |
| llama3.1 | cortex run llama3.1 |
| codestral | cortex run codestral |
| gemma2 | cortex run gemma2 |
| mistral | cortex run mistral |
| ministral | cortex run ministral |
| qwen2 | cortex run qwen2.5 |
| openhermes-2.5 | cortex run openhermes-2.5 |
| tinyllama | cortex run tinyllama |

View all Cortex Built-in Models.

Cortex supports multiple quantizations for each model.

```sh
❯ cortex-nightly pull llama3.2
Downloaded models:
    llama3.2:3b-gguf-q2-k
Available to download:
    1. llama3.2:3b-gguf-q3-kl
    2. llama3.2:3b-gguf-q3-km
    3. llama3.2:3b-gguf-q3-ks
    4. llama3.2:3b-gguf-q4-km (default)
    5. llama3.2:3b-gguf-q4-ks
    6. llama3.2:3b-gguf-q5-km
    7. llama3.2:3b-gguf-q5-ks
    8. llama3.2:3b-gguf-q6-k
    9. llama3.2:3b-gguf-q8-0

Select a model (1-9):
```
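A hypothetical helper for splitting those tags apart. The name:size-format-quant layout is inferred from the listing above and may not cover every tag Cortex accepts.

```python
def parse_tag(tag: str) -> dict:
    """Split a tag like 'llama3.2:3b-gguf-q4-km' into its parts.

    Layout (model:size-format-quant) is inferred from the pull
    listing above; treat it as an illustration, not a spec.
    """
    name, _, rest = tag.partition(":")
    size, fmt, *quant = rest.split("-")
    return {"model": name, "size": size, "format": fmt, "quant": "-".join(quant)}


print(parse_tag("llama3.2:3b-gguf-q4-km"))
# {'model': 'llama3.2', 'size': '3b', 'format': 'gguf', 'quant': 'q4-km'}
```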

Advanced Installation

Network Installer (Stable)

Cortex.cpp is available with a Network Installer, which is a smaller installer but requires an internet connection during installation to download the necessary dependencies.

Linux (Debian-based distros): cortex-linux-network-installer.deb

Beta & Nightly Versions (Local Installer)

Cortex releases Beta and Nightly versions for advanced users to try new features (we appreciate your feedback!).

  • Beta (early preview): CLI command: cortex-beta
  • Nightly (released every night): CLI command: cortex-nightly
    • Nightly automatically pulls the latest changes from the upstream llama.cpp repo, creates a PR, and runs tests.
    • If all tests pass, the PR is automatically merged into our repo with the latest llama.cpp version.
| Version | Windows | MacOS | Linux (Debian-based distros) |
|---|---|---|---|
| Beta (Preview) | cortex.exe | cortex.pkg | cortex.deb |
| Nightly (Experimental) | cortex.exe | cortex.pkg | cortex.deb |

Network Installer

The Network Installer is smaller but requires an internet connection during installation to download the necessary dependencies.

| Version Type | Windows | MacOS | Linux (Debian-based distros) |
|---|---|---|---|
| Stable (Recommended) | cortex.exe | cortex.pkg | cortex.deb |
| Beta (Preview) | cortex.exe | cortex.pkg | cortex.deb |
| Nightly (Experimental) | cortex.exe | cortex.pkg | cortex.deb |

Build from Source

Windows

  1. Clone the Cortex.cpp repository here.
  2. Navigate to the engine folder.
  3. Configure vcpkg:

```sh
cd vcpkg
./bootstrap-vcpkg.bat
vcpkg install
```

  4. Build Cortex.cpp inside the engine/build folder:

```sh
mkdir build
cd build
cmake .. -DBUILD_SHARED_LIBS=OFF -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder_in_cortex_repo/vcpkg/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows-static
cmake --build . --config Release
```

  5. Verify that Cortex.cpp is installed correctly by getting help information:

```sh
cortex -h
```

MacOS

  1. Clone the Cortex.cpp repository here.
  2. Navigate to the engine folder.
  3. Configure vcpkg:

```sh
cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
```

  4. Build Cortex.cpp inside the engine/build folder:

```sh
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder_in_cortex_repo/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
```

  5. Verify that Cortex.cpp is installed correctly by getting help information:

```sh
cortex -h
```

Linux

  1. Clone the Cortex.cpp repository here.
  2. Navigate to the engine folder.
  3. Configure vcpkg:

```sh
cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
```

  4. Build Cortex.cpp inside the engine/build folder:

```sh
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder_in_cortex_repo/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
```

  5. Verify that Cortex.cpp is installed correctly by getting help information:

```sh
cortex -h
```

Devcontainer / Codespaces

  1. Open the Cortex.cpp repository in Codespaces or a local devcontainer:

     Open in GitHub Codespaces

```sh
devcontainer up --workspace-folder .
```

  2. Configure vcpkg in engine/vcpkg:

```sh
cd engine/vcpkg
export VCPKG_FORCE_SYSTEM_BINARIES="$([[ $(uname -m) == 'arm64' ]] && echo '1' || echo '0')"
./bootstrap-vcpkg.sh
```

  3. Build Cortex.cpp inside the engine/build folder:

```sh
cd engine
mkdir -p build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=$(realpath ..)/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j$(grep -c ^processor /proc/cpuinfo)
```

  4. Verify that Cortex.cpp is built correctly by getting help information:

```sh
cd engine/build
./cortex -h
```

  5. Every time a rebuild is needed, just run the commands above with a one-liner:

```sh
npx -y runme run --filename README.md -t devcontainer -y
```

Uninstallation

Windows

  1. Open the Windows Control Panel.
  2. Navigate toAdd or Remove Programs.
  3. Search for cortexcpp and double-click to uninstall. (For beta and nightly builds, search for cortexcpp-beta and cortexcpp-nightly respectively.)

MacOS

Run the uninstaller script:

sudo sh cortex-uninstall.sh

For MacOS, an uninstaller script comes with the binary and is added to the /usr/local/bin/ directory. The script is named cortex-uninstall.sh for stable builds, cortex-beta-uninstall.sh for beta builds, and cortex-nightly-uninstall.sh for nightly builds.

Linux

sudo apt remove cortexcpp

Contact Support
