XGBoost GPU Support

This page contains information about GPU algorithms supported in XGBoost.

Note

CUDA 12.0 and Compute Capability 5.0 are required. (See this list to look up the compute capability of your GPU card.)

CUDA Accelerated Tree Construction Algorithms

Most of the algorithms in XGBoost, including training, prediction, and evaluation, can be accelerated with CUDA-capable GPUs.

Usage

To enable GPU acceleration, specify the device parameter as cuda. In addition, the device ordinal (which GPU to use if you have multiple devices in the same node) can be specified using the cuda:<ordinal> syntax, where <ordinal> is an integer that represents the device ordinal. XGBoost defaults to 0 (the first device reported by the CUDA runtime).

The GPU algorithms currently work with CLI, Python, R, and JVM packages. SeeInstallation Guide for details.

Python example
import xgboost

# X and y are the training data, e.g. NumPy arrays.
params = dict()
params["device"] = "cuda"
params["tree_method"] = "hist"
Xy = xgboost.QuantileDMatrix(X, y)
xgboost.train(params, Xy)
With the Scikit-Learn interface
from xgboost import XGBRegressor

XGBRegressor(tree_method="hist", device="cuda")

GPU-Accelerated SHAP values

XGBoost makes use of GPUTreeShap as a backend for computing SHAP values when the GPU is used.

booster.set_param({"device": "cuda:0"})
shap_values = booster.predict(dtrain, pred_contribs=True)
shap_interaction_values = booster.predict(dtrain, pred_interactions=True)

See Use GPU to speedup SHAP value computation for a worked example.

Multi-node Multi-GPU Training

XGBoost supports fully distributed GPU training using Dask, Spark, and PySpark. To get started with Dask, see our tutorial Distributed XGBoost with Dask and the worked examples in XGBoost Dask Feature Walkthrough; the Dask API documentation provides a complete reference. For usage with Spark using Scala, see the XGBoost4J-Spark-GPU Tutorial. Lastly, for distributed GPU training with PySpark, see Distributed XGBoost with PySpark.
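
To make the Dask path concrete, below is a minimal single-node sketch, assuming the optional dask, distributed, and dask-cuda packages are installed; the random arrays are placeholders for a real dataset.

from dask import array as da
from dask.distributed import Client
from dask_cuda import LocalCUDACluster
from xgboost import dask as dxgb

# One Dask worker per visible GPU on this node.
with Client(LocalCUDACluster()) as client:
    # Random placeholder data; substitute a real dataset.
    X = da.random.random((100_000, 20), chunks=(10_000, 20))
    y = da.random.random(100_000, chunks=10_000)
    dtrain = dxgb.DaskQuantileDMatrix(client, X, y)
    output = dxgb.train(
        client,
        {"device": "cuda", "tree_method": "hist"},
        dtrain,
        num_boost_round=10,
    )
    booster = output["booster"]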

RMM integration

XGBoost provides optional support for RMM integration. See Using XGBoost with RAPIDS Memory Manager (RMM) plugin for more info.
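
As a rough sketch of what enabling it can look like (assuming an XGBoost build with the RMM plugin enabled and the rmm Python package installed):

import rmm
import xgboost

# Route device allocations through RMM's pool allocator (assumes an
# XGBoost build compiled with the RMM plugin).
rmm.reinitialize(pool_allocator=True)
xgboost.set_config(use_rmm=True)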

Memory usage

The following are some guidelines on the device memory usage of the hist tree method on GPU.

Memory inside XGBoost training is generally allocated for two reasons: storing the dataset and working memory.

The dataset itself is stored on device in a compressed ELLPACK format. The ELLPACK format is a type of sparse matrix that stores elements with a constant row stride. This format is convenient for parallel computation when compared to CSR because the row index of each element is known directly from its address in memory. The disadvantage of the ELLPACK format is that it becomes less memory efficient if the maximum row length is significantly greater than the average row length. Elements are quantised and stored as integers, and these integers are compressed to a minimum bit length: depending on the number of features, we usually don't need the full range of a 32-bit integer. The compressed, quantised ELLPACK format will commonly use 1/4 the space of a CSR matrix stored in floating point.
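
The following toy sketch illustrates the constant-stride idea only; it is not XGBoost's internal code, which stores quantised bin indices rather than raw values:

import numpy as np

# Toy illustration: every row is padded to a constant stride, so an
# element's row follows from its flat offset alone (row = offset // stride),
# unlike CSR, where a separate row-pointer lookup is needed.
X = [[10, 0, 0],
     [20, 30, 0],
     [0, 0, 40]]
stride = max(sum(v != 0 for v in row) for row in X)  # longest row
values = np.zeros((len(X), stride))
columns = np.full((len(X), stride), -1)  # -1 marks padding
for i, row in enumerate(X):
    for k, (j, v) in enumerate((j, v) for j, v in enumerate(row) if v != 0):
        values[i, k] = v
        columns[i, k] = j
print(values.ravel())  # element at flat offset k belongs to row k // stride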

Working memory is allocated inside the algorithm in proportion to the number of rows, to keep track of gradients, tree positions, and other per-row statistics. Memory is allocated for histogram bins in proportion to the number of bins, the number of features, and the number of nodes in the tree. For performance reasons, histograms from previous nodes in the tree are kept in memory; once a certain threshold of memory usage is passed, this caching stops in order to conserve memory, at some performance loss.
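
Since histogram memory scales with the number of bins, lowering max_bin is one practical lever; a hedged sketch, reusing the Xy QuantileDMatrix from the usage example above:

import xgboost

# Fewer bins shrink both the quantised dataset and the per-node histograms.
params = {
    "device": "cuda",
    "tree_method": "hist",
    "max_bin": 64,  # default is 256; lower values trade accuracy for memory
}
xgboost.train(params, Xy)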

If you are getting out-of-memory errors on a big dataset, try xgboost.QuantileDMatrix first. If you have access to NVLink-C2C devices, see the external memory version. In addition, inplace_predict() should be preferred over predict() when the data is already on the GPU. Both xgboost.QuantileDMatrix and inplace_predict() are automatically enabled if you are using the scikit-learn interface. Last but not least, using QuantileDMatrix with a data iterator as input is a great way to increase memory capacity; see Demo for using data iterator with Quantile DMatrix.
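
For example, a minimal sketch of inplace_predict() with data already on the device, assuming CuPy is installed and booster is an already-trained model:

import cupy as cp

# X_gpu stays on the device, so no intermediate host copy or DMatrix is made.
X_gpu = cp.random.rand(1000, 10)
booster.set_param({"device": "cuda"})
predt = booster.inplace_predict(X_gpu)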

CPU-GPU Interoperability

The model can be used on any device, regardless of the one it was trained on. For instance, a model trained using a GPU still works on a CPU-only machine, and vice versa. For more information about model serialization, see Introduction to Model IO.
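
A sketch of the round trip, with dtrain standing in for an existing DMatrix on each machine:

import xgboost

# On the GPU machine: train and serialize.
booster = xgboost.train({"device": "cuda", "tree_method": "hist"}, dtrain)
booster.save_model("model.json")

# Later, on a CPU-only machine: load and predict.
booster = xgboost.Booster()
booster.load_model("model.json")
booster.set_param({"device": "cpu"})
predt = booster.predict(dtrain)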

Developer notes

The application may be profiled with annotations by specifying USE_NVTX to CMake. Regions covered by the 'Monitor' class in CUDA code will automatically appear in the Nsight profiler when verbosity is set to 3.

References

Mitchell R, Frank E. (2017) Accelerating the XGBoost algorithm using GPU computing. PeerJ Computer Science 3:e127 https://doi.org/10.7717/peerj-cs.127

NVIDIA Parallel Forall: Gradient Boosting, Decision Trees and XGBoost with CUDA

Out-of-Core GPU Gradient Boosting

Contributors

Many thanks to the following contributors (alphabetical order):

  • Andrey Adinets

  • Jiaming Yuan

  • Jonathan C. McKinney

  • Matthew Jones

  • Philip Cho

  • Rong Ou

  • Rory Mitchell

  • Shankara Rao Thejaswi Nanditale

  • Sriram Chandramouli

  • Vinay Deshpande

Please report bugs to the XGBoost issues list.