Install MLC LLM Python Package¶
The MLC LLM Python package can be installed directly from a prebuilt developer package, or built from source.
Option 1. Prebuilt Package¶
We provide nightly-built pip wheels for MLC LLM. Select your operating system/compute platform and run the command in your terminal:
Note
❗ Whenever using Python, it is highly recommended to use conda to manage an isolated Python environment, which avoids missing dependencies, incompatible versions, and package conflicts. Please make sure your conda environment has Python and pip installed.
CPU:
conda activate your-environment
python -m pip install --pre -U -f https://mlc.ai/wheels mlc-llm-nightly-cpu mlc-ai-nightly-cpu

CUDA 12.2:
conda activate your-environment
python -m pip install --pre -U -f https://mlc.ai/wheels mlc-llm-nightly-cu122 mlc-ai-nightly-cu122

CUDA 12.3:
conda activate your-environment
python -m pip install --pre -U -f https://mlc.ai/wheels mlc-llm-nightly-cu123 mlc-ai-nightly-cu123

ROCm 6.1:
conda activate your-environment
python -m pip install --pre -U -f https://mlc.ai/wheels mlc-llm-nightly-rocm61 mlc-ai-nightly-rocm61

ROCm 6.2:
conda activate your-environment
python -m pip install --pre -U -f https://mlc.ai/wheels mlc-llm-nightly-rocm62 mlc-ai-nightly-rocm62
Vulkan is supported in all Linux packages. Run the following command to install the latest Vulkan loader and avoid "vulkan not found" issues:
conda install -c conda-forge gcc libvulkan-loader
Note
git-lfs is required on the system; you can install it via:
conda install -c conda-forge git-lfs
If you encounter "GLIBC not found" issues, please install the latest glibc in conda:
conda install -c conda-forge libgcc-ng
We also recommend using Python 3.11; if you are creating a new environment, you can use the following command:
conda create --name mlc-prebuilt python=3.11
conda activate your-environment
python -m pip install --pre -U -f https://mlc.ai/wheels mlc-llm-nightly-cpu mlc-ai-nightly-cpu
Note
Always check if conda is installed properly in macOS using the command below:
conda info | grep platform
It should return “osx-64” for Macs with an Intel chip, and “osx-arm64” for Macs with an Apple chip. git-lfs is also required on the system; you can install it via:
conda install -c conda-forge git-lfs
conda activate your-environment
python -m pip install --pre -U -f https://mlc.ai/wheels mlc-llm-nightly-cpu mlc-ai-nightly-cpu
Note
Please make sure your conda environment comes with Python and pip. Also install the following packages (the Vulkan loader, clang, git, and git-lfs) to enable proper automatic download and JIT compilation:
conda install -c conda-forge clang libvulkan-loader git-lfs git
If you encounter the error below:
FileNotFoundError: Could not find module 'path\to\site-packages\tvm\tvm.dll' (or one of its dependencies). Try using the full path with constructor syntax.
It is likely that zstd, a dependency of LLVM, is missing. Please use the command below to install it:
conda install zstd
Then you can verify installation in command line:
python -c "import mlc_llm; print(mlc_llm)"
# Prints out: <module 'mlc_llm' from '/path-to-env/lib/python3.11/site-packages/mlc_llm/__init__.py'>
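If you prefer to run this check from inside a script rather than the command line, a small helper like the following works for any package name. The `package_available` helper is our own sketch, not part of MLC LLM:

```python
import importlib.util

def package_available(name: str) -> bool:
    """Return True if `name` can be imported in the current environment."""
    return importlib.util.find_spec(name) is not None

# After a successful install, package_available("mlc_llm") should return True.
# Sanity check against a standard-library module:
print(package_available("json"))  # -> True
```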
Option 2. Build from Source¶
We also provide the option to build the MLC runtime libraries and mlc_llm from source. This step is useful when you want to make modifications to, or obtain a specific version of, the MLC runtime.
Step 1. Set up build dependency. To build from source, you need to ensure that the following build dependencies are satisfied:
CMake >= 3.24
Git
Rust and Cargo, required by Hugging Face’s tokenizer
One of the GPU runtimes:
CUDA >= 11.8 (NVIDIA GPUs)
Metal (Apple GPUs)
Vulkan (NVIDIA, AMD, Intel GPUs)
# make sure to start with a fresh environment
conda env remove -n mlc-chat-venv
# create the conda environment with build dependency
conda create -n mlc-chat-venv -c conda-forge \
    "cmake>=3.24" \
    rust \
    git \
    python=3.11
# enter the build environment
conda activate mlc-chat-venv
Note
The TVM Unity compiler is not a runtime dependency of the MLCChat CLI or the Python API. Only TVM's runtime is required, and it is automatically included in 3rdparty/tvm. However, if you would like to compile your own models, you need to follow the TVM Unity installation instructions.
Step 2. Configure and build. A standard git-based workflow is recommended to download MLC LLM, after which you can specify build requirements with our lightweight config generation tool:
# clone from GitHub
git clone --recursive https://github.com/mlc-ai/mlc-llm.git && cd mlc-llm/
# create build directory
mkdir -p build && cd build
# generate build configuration
python ../cmake/gen_cmake_config.py
# build mlc_llm libraries
cmake .. && cmake --build . --parallel $(nproc) && cd ..
Note
If you are using CUDA and your compute capability is above 80, then it is required to build with set(USE_FLASHINFER ON). Otherwise, you may run into a "Cannot find PackedFunc" issue at runtime.
To check your CUDA compute capability, you can use: nvidia-smi --query-gpu=compute_cap --format=csv
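The threshold check above can also be scripted. Below is a minimal sketch (the helper name `needs_flashinfer` is ours, not part of MLC LLM) that parses the `nvidia-smi` output, e.g. `8.6`, into the two-digit compute capability compared against 80:

```python
def needs_flashinfer(compute_cap: str) -> bool:
    """Parse a compute capability string like '8.6' (the nvidia-smi format)
    and report whether it is above 80, i.e. whether USE_FLASHINFER is needed."""
    major, minor = (int(part) for part in compute_cap.strip().split("."))
    return major * 10 + minor > 80

print(needs_flashinfer("8.6"))  # RTX 30-series class GPU -> True
print(needs_flashinfer("7.5"))  # T4 class GPU -> False
```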
Step 3. Install via Python. We recommend that you install mlc_llm as a Python package, giving you access to mlc_llm.compile, mlc_llm.MLCEngine, and the CLI. There are two ways to do so:
export MLC_LLM_SOURCE_DIR=/path-to-mlc-llm
export PYTHONPATH=$MLC_LLM_SOURCE_DIR/python:$PYTHONPATH
alias mlc_llm="python -m mlc_llm"

conda activate your-own-env
which python # make sure python is installed, expected output: path_to_conda/envs/your-own-env/bin/python
cd /path-to-mlc-llm/python
pip install -e .
Step 4. Validate installation. You can validate whether the MLC libraries and the mlc_llm CLI were compiled successfully using the following commands:
# expected to see `libmlc_llm.so` and `libtvm_runtime.so`
ls -l ./build/
# expected to see help message
mlc_llm chat -h
Finally, you can verify the installation on the command line. You should see the path you used to build from source with:
python -c "import mlc_llm; print(mlc_llm)"
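Once installed, you can go a step further and exercise mlc_llm.MLCEngine directly. The sketch below is hedged: it only runs the chat when mlc_llm is actually importable, and the model identifier is a placeholder you must replace with a real compiled model before it produces output:

```python
import importlib.util

def run_chat_demo(model: str) -> None:
    """Sketch: send one chat message through MLCEngine, if it is available."""
    if importlib.util.find_spec("mlc_llm") is None:
        print("mlc_llm is not installed; skipping the demo")
        return
    from mlc_llm import MLCEngine

    engine = MLCEngine(model)
    response = engine.chat.completions.create(
        messages=[{"role": "user", "content": "Hello!"}],
        model=model,
        stream=False,
    )
    print(response.choices[0].message.content)
    engine.terminate()

run_chat_demo("REPLACE-WITH-YOUR-MODEL")  # placeholder model identifier
```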