
Using rmm on a single node device

import rmm
from sklearn.datasets import make_classification

import xgboost as xgb

# Initialize RMM pool allocator
rmm.reinitialize(pool_allocator=True)

# Optionally force XGBoost to use RMM for all GPU memory allocation, see ./README.md
# xgb.set_config(use_rmm=True)

X, y = make_classification(n_samples=10000, n_informative=5, n_classes=3)
dtrain = xgb.DMatrix(X, label=y)
params = {
    "max_depth": 8,
    "eta": 0.01,
    "objective": "multi:softprob",
    "num_class": 3,
    "tree_method": "hist",
    "device": "cuda",
}

# XGBoost will automatically use the RMM pool allocator
bst = xgb.train(params, dtrain, num_boost_round=100, evals=[(dtrain, "train")])
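Beyond the defaults above, rmm.reinitialize also accepts an explicit initial pool size, and XGBoost's config_context can scope the use_rmm flag to a single block instead of setting it process-wide. A minimal sketch of that variation (the 1 GiB initial_pool_size is an arbitrary example value, not a recommendation):

import rmm
from sklearn.datasets import make_classification

import xgboost as xgb

# Pre-allocate a 1 GiB pool up front instead of growing it on demand
# (the size here is an illustrative choice).
rmm.reinitialize(pool_allocator=True, initial_pool_size=2**30)

X, y = make_classification(n_samples=10000, n_informative=5, n_classes=3)
dtrain = xgb.DMatrix(X, label=y)
params = {
    "max_depth": 8,
    "eta": 0.01,
    "objective": "multi:softprob",
    "num_class": 3,
    "tree_method": "hist",
    "device": "cuda",
}

# Enable use_rmm only for the duration of this block
with xgb.config_context(use_rmm=True):
    bst = xgb.train(params, dtrain, num_boost_round=100, evals=[(dtrain, "train")])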
