XGBoost began as a research project by Tianqi Chen[12] as part of the Distributed (Deep) Machine Learning Community (DMLC) group at the University of Washington. It started as a terminal application that could be configured using a libsvm configuration file. It became well known in ML competition circles after its use in the winning solution of the Higgs Machine Learning Challenge. Soon after, Python and R packages were built, and XGBoost now has package implementations for Java, Scala, Julia, Perl, and other languages. This brought the library to more developers and contributed to its popularity in the Kaggle community, where it has been used for a large number of competitions.[11]
It was soon integrated with a number of other packages, making it easier to use in their respective communities. It has now been integrated with scikit-learn for Python users and with the caret package for R users. It can also be integrated into data flow frameworks like Apache Spark, Apache Hadoop, and Apache Flink using the abstracted Rabit[13] and XGBoost4J.[14] XGBoost is also available on OpenCL for FPGAs.[15] An efficient, scalable implementation of XGBoost has been published by Tianqi Chen and Carlos Guestrin.[16]
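As an illustration of the scikit-learn integration, the following minimal sketch trains the xgboost Python package's scikit-learn-compatible XGBClassifier on a toy dataset; the dataset and hyperparameter values are illustrative choices, not drawn from the sources above.

    from sklearn.datasets import load_breast_cancer
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from xgboost import XGBClassifier  # scikit-learn-compatible wrapper

    # Toy binary-classification data from scikit-learn
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Hyperparameter values here are illustrative, not recommendations
    model = XGBClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
    model.fit(X_train, y_train)
    print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))

Because the wrapper follows the scikit-learn estimator interface (fit/predict), it can be dropped into pipelines, cross-validation, and grid search in the same way as native scikit-learn models.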
While the XGBoost model often achieves higher accuracy than a single decision tree, it sacrifices the intrinsic interpretability of decision trees. For example, following the path that a decision tree takes to make its decision is trivial and self-explanatory, but following the paths of hundreds or thousands of trees is much harder.
XGBoost works as Newton–Raphson in function space, unlike gradient boosting, which works as gradient descent in function space; a second-order Taylor approximation of the loss function is used to make the connection to the Newton–Raphson method.
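As a sketch of that connection (the notation here is assumed for illustration, not fixed by the text above): expanding the loss around the current model F_{m-1} to second order in the new weak learner f_m gives

    L\bigl(y, F_{m-1}(x) + f_m(x)\bigr) \approx L\bigl(y, F_{m-1}(x)\bigr) + g(x)\,f_m(x) + \tfrac{1}{2}\,h(x)\,f_m(x)^2,
    \qquad
    g(x) = \left.\frac{\partial L(y, F)}{\partial F}\right|_{F = F_{m-1}(x)},
    \quad
    h(x) = \left.\frac{\partial^2 L(y, F)}{\partial F^2}\right|_{F = F_{m-1}(x)}.

Minimizing this quadratic in f_m(x) yields a Newton-type step proportional to -g(x)/h(x), whereas ordinary gradient boosting fits the weak learner to the first-order (negative gradient) term alone.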
A generic unregularized XGBoost algorithm is:
Input: a training set $\{(x_i, y_i)\}_{i=1}^{N}$, a differentiable loss function $L(y, F(x))$, a number of weak learners $M$, and a learning rate $\alpha$.
Note that this is the initialization of the model, and therefore a constant value is set for all inputs. So even though later iterations use optimization to find new functions, in step 0 we have to find the single value, equal for all inputs, that minimizes the loss function.
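In the notation assumed above, that initialization step can be written as

    \hat{f}_{0}(x) = \underset{\theta}{\arg\min} \sum_{i=1}^{N} L(y_i, \theta),

so the model starts from the best single constant prediction $\theta$ (for example, the mean of the targets under squared-error loss).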
Chen, Tianqi; Guestrin, Carlos (2016). "XGBoost: A Scalable Tree Boosting System". In Krishnapuram, Balaji; Shah, Mohak; Smola, Alexander J.; Aggarwal, Charu C.; Shen, Dou; Rastogi, Rajeev (eds.). Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13–17, 2016. ACM. pp. 785–794. arXiv:1603.02754. doi:10.1145/2939672.2939785. ISBN 9781450342322. S2CID 4650265.