Installation Guide

XGBoost provides binary packages for some language bindings. The binary packages support the GPU algorithm (device=cuda:0) on machines with NVIDIA GPUs. Please note that training with multiple GPUs is only supported on Linux. See XGBoost GPU Support. We offer both stable releases and nightly builds; see below for how to install them. For building from source, visit this page.

Stable Release

Python

Pre-built binary wheels are uploaded to PyPI (Python Package Index) for each release. Supported platforms are Linux (x86_64, aarch64), Windows (x86_64) and MacOS (x86_64, Apple Silicon).

# Pip 21.3+ is required
pip install xgboost
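Since older pip versions cannot resolve the newer wheel tags, it can help to verify your pip version from Python before installing (a small sketch, not part of the official docs):

```python
# Verify that pip is new enough (>= 21.3) to select the current wheels.
from importlib.metadata import version

pip_version = tuple(int(p) for p in version("pip").split(".")[:2])
if pip_version >= (21, 3):
    print("pip", version("pip"), "is new enough")
else:
    print("pip", version("pip"), "is too old; run: pip install --upgrade pip")
```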

You might need to run the command with the --user flag or use virtualenv if you run into permission errors.

Note

Parts of the Python package now require glibc 2.28+

Starting from 2.1.0, the XGBoost Python package is distributed in two variants:

  • manylinux_2_28: for recent Linux distros with glibc 2.28 or newer. This variant comes with all features enabled.

  • manylinux2014: for old Linux distros with glibc older than 2.28. This variant does not support GPU algorithms or federated learning.

The pip package manager will automatically choose the correct variant depending on your system.
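To see which variant applies to your machine, you can inspect the glibc version from Python (a sketch using only the standard library):

```python
# pip selects manylinux_2_28 wheels only when glibc is 2.28 or newer;
# older glibc systems fall back to the manylinux2014 variant.
import platform

libc, version = platform.libc_ver()
if libc == "glibc":
    major, minor = (int(x) for x in version.split(".")[:2])
    variant = "manylinux_2_28" if (major, minor) >= (2, 28) else "manylinux2014"
    print(f"glibc {version}: pip will pick the {variant} wheel")
else:
    print("Not a glibc-based Linux system; this check does not apply")
```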

Starting from May 31, 2025, we will stop distributing the manylinux2014 variant and exclusively distribute the manylinux_2_28 variant. We made this decision so that our CI/CD pipeline won't have to depend on software components that have reached end-of-life (such as CentOS 7). We strongly encourage everyone to migrate to a recent Linux distro in order to use future versions of XGBoost.

Note: if you want to use GPU algorithms or federated learning on an older Linux distro, you have two alternatives:

  1. Upgrade to a recent Linux distro with glibc 2.28+. OR

  2. Build XGBoost from source.

Note

Windows users need to install Visual C++ Redistributable

XGBoost requires DLLs from the Visual C++ Redistributable in order to function, so make sure to install it. Exception: if you have Visual Studio installed, you already have access to the necessary libraries and thus don't need to install the Visual C++ Redistributable.

Capabilities of binary wheels for each platform:

| Platform            | GPU | Multi-Node-Multi-GPU |
|---------------------|-----|----------------------|
| Linux x86_64        | ✔   | ✔                    |
| Linux aarch64       | ✗   | ✗                    |
| MacOS x86_64        | ✗   | ✗                    |
| MacOS Apple Silicon | ✗   | ✗                    |
| Windows             | ✔   | ✗                    |

Minimal installation (CPU-only)

The default installation with pip will install the full XGBoost package, including support for the GPU algorithms and federated learning.

You may choose to reduce the size of the installed package and save disk space by opting to install xgboost-cpu instead:

pip install xgboost-cpu

The xgboost-cpu variant has a drastically smaller disk footprint, but does not provide some features, such as the GPU algorithms and federated learning.

Currently, the xgboost-cpu package is provided for the x86_64 (amd64) Linux and Windows platforms.

Conda

You may use the Conda package manager to install XGBoost:

conda install -c conda-forge py-xgboost

Conda should be able to detect the existence of a GPU on your machine and install the correct variant of XGBoost. If you run into issues, try indicating the variant explicitly:

# CPU variant
conda install -c conda-forge py-xgboost=*=cpu*
# GPU variant
conda install -c conda-forge py-xgboost=*=cuda*

To force the installation of the GPU variant on a machine that does not have an NVIDIA GPU, use the environment variable CONDA_OVERRIDE_CUDA, as described in "Managing Virtual Packages" in the conda docs.

export CONDA_OVERRIDE_CUDA="12.8"
conda install -c conda-forge py-xgboost=*=cuda*

You can install Conda from the following link: Download the conda-forge Installer.

R

  • From R Universe

install.packages('xgboost', repos = c('https://dmlc.r-universe.dev', 'https://cloud.r-project.org'))

Note

Using all CPU cores (threads) on Mac OSX

If you are using Mac OSX, you should first install the OpenMP library (libomp) by running

brew install libomp

and then run install.packages("xgboost"). Without OpenMP, XGBoost will only use a single CPU core, leading to suboptimal training speed.

  • We also provide an experimental pre-built binary with GPU support. With this binary, you will be able to use the GPU algorithm without building XGBoost from source. Download the binary package from the Releases page. The file name will be of the form xgboost_r_gpu_[os]_[version].tar.gz, where [os] is either linux or win64. (We build the binaries for 64-bit Linux and Windows.) Then install XGBoost by running:

    # Install dependencies
    R -q -e "install.packages(c('data.table', 'jsonlite'))"
    # Install XGBoost
    R CMD INSTALL ./xgboost_r_gpu_linux.tar.gz
  • From CRAN (outdated):

Warning

We are working on bringing the CRAN version of XGBoost up to date; in the meantime, please use packages from the R-universe.

install.packages("xgboost")


JVM

  • XGBoost4j-Spark

Maven
<properties>
  ...
  <!-- Specify Scala version in package name -->
  <scala.binary.version>2.12</scala.binary.version>
</properties>
<dependencies>
  ...
  <dependency>
    <groupId>ml.dmlc</groupId>
    <artifactId>xgboost4j-spark_${scala.binary.version}</artifactId>
    <version>latest_version_num</version>
  </dependency>
</dependencies>
sbt
libraryDependencies ++= Seq("ml.dmlc" %% "xgboost4j-spark" % "latest_version_num")
  • XGBoost4j-Spark-GPU

Maven
<properties>
  ...
  <!-- Specify Scala version in package name -->
  <scala.binary.version>2.12</scala.binary.version>
</properties>
<dependencies>
  ...
  <dependency>
    <groupId>ml.dmlc</groupId>
    <artifactId>xgboost4j-spark-gpu_${scala.binary.version}</artifactId>
    <version>latest_version_num</version>
  </dependency>
</dependencies>
sbt
libraryDependencies ++= Seq("ml.dmlc" %% "xgboost4j-spark-gpu" % "latest_version_num")

This will fetch the latest stable version from Maven Central.

For the latest release version number, please check the release page.

To enable the GPU algorithm (device='cuda'), use the artifact xgboost4j-spark-gpu_2.12 instead (note the gpu suffix).

Note

Windows not supported in the JVM package

Currently, XGBoost4J-Spark does not support the Windows platform, as the distributed training algorithm does not work on Windows. Please use Linux or MacOS.

Nightly Build

Python

Nightly builds are available. You can go to this page, find the wheel with the commit ID you want, and install it with pip:

pip install <url to the wheel>

The capabilities of the Python pre-built wheels are the same as for the stable release.

R

Other than the standard CRAN installation, we also provide experimental pre-built binaries with GPU support. You can go to this page, find the commit ID you want to install, and then locate the file xgboost_r_gpu_[os]_[commit].tar.gz, where [os] is either linux or win64. (We build the binaries for 64-bit Linux and Windows.) Download it and run the following commands:

# Install dependencies
R -q -e "install.packages(c('data.table', 'jsonlite', 'remotes'))"
# Install XGBoost
R CMD INSTALL ./xgboost_r_gpu_linux.tar.gz

JVM

  • XGBoost4j/XGBoost4j-Spark

Maven
<repository>
  <id>XGBoost4JSnapshotRepo</id>
  <name>XGBoost4JSnapshotRepo</name>
  <url>https://s3-us-west-2.amazonaws.com/xgboost-maven-repo/snapshot/</url>
</repository>
sbt
resolvers += "XGBoost4J Snapshot Repo" at "https://s3-us-west-2.amazonaws.com/xgboost-maven-repo/snapshot/"

Then add XGBoost4J-Spark as a dependency:

Maven
<properties>
  ...
  <!-- Specify Scala version in package name -->
  <scala.binary.version>2.12</scala.binary.version>
</properties>
<dependencies>
  <dependency>
    <groupId>ml.dmlc</groupId>
    <artifactId>xgboost4j-spark_${scala.binary.version}</artifactId>
    <version>latest_version_num-SNAPSHOT</version>
  </dependency>
</dependencies>
sbt
libraryDependencies ++= Seq("ml.dmlc" %% "xgboost4j-spark" % "latest_version_num-SNAPSHOT")
  • XGBoost4j-Spark-GPU

Maven
<properties>
  ...
  <!-- Specify Scala version in package name -->
  <scala.binary.version>2.12</scala.binary.version>
</properties>
<dependencies>
  <dependency>
    <groupId>ml.dmlc</groupId>
    <artifactId>xgboost4j-spark-gpu_${scala.binary.version}</artifactId>
    <version>latest_version_num-SNAPSHOT</version>
  </dependency>
</dependencies>
sbt
libraryDependencies ++= Seq("ml.dmlc" %% "xgboost4j-spark-gpu" % "latest_version_num-SNAPSHOT")

Look up the version field in pom.xml to get the correct version number.

The SNAPSHOT JARs are hosted by the XGBoost project. Every commit in the master branch automatically triggers generation of a new SNAPSHOT JAR. You can control how often Maven should upgrade your SNAPSHOT installation by specifying updatePolicy. See here for details.
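For example, the snapshot repository definition can be extended with an updatePolicy element (a sketch; Maven also accepts always, never, and interval:minutes as policy values):

```xml
<repository>
  <id>XGBoost4JSnapshotRepo</id>
  <name>XGBoost4JSnapshotRepo</name>
  <url>https://s3-us-west-2.amazonaws.com/xgboost-maven-repo/snapshot/</url>
  <snapshots>
    <enabled>true</enabled>
    <!-- re-check for a newer SNAPSHOT at most once a day -->
    <updatePolicy>daily</updatePolicy>
  </snapshots>
</repository>
```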

You can browse the file listing of the Maven repository at https://s3-us-west-2.amazonaws.com/xgboost-maven-repo/list.html.

To enable the GPU algorithm (device='cuda'), use the artifacts xgboost4j-gpu_2.12 and xgboost4j-spark-gpu_2.12 instead (note the gpu suffix).