Each line contains one instance and is terminated by a ‘\n’ character. The initial guess is optional. For two-class classification, the label is -1 or 1. For regression, the label is the target value and can be any real number. Feature indices start from 0; feature values can be any real number.
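As an illustration of the rules above (the exact layout is an assumption based on the common libsvm-style `Label Index:Value …` convention; check the data loader in your copy of the source), a two-class training file might look like:

```
1 0:1.2 3:-0.5 7:4.0
-1 0:0.3 2:2.1
1 1:-1.7 5:0.9
```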
Training Configuration
```cpp
class Configure {
 public:
  size_t number_of_feature;      // number of features
  size_t max_depth;              // max depth for each tree
  size_t iterations;             // number of trees in gbdt
  double shrinkage;              // shrinkage parameter
  double feature_sample_ratio;   // portion of features to be split
  double data_sample_ratio;      // portion of data to be fitted in each iteration
  size_t min_leaf_size;          // min number of nodes in leaf
  Loss loss;                     // loss type
  bool debug;                    // show debug info?
  double *feature_costs;         // manually set feature costs in order to tune the model
  bool enable_feature_tunning;   // when set true, `feature_costs' is used to tune the model
  bool enable_initial_guess;
  ...
};
```
Reference
Friedman, J. H. “Greedy Function Approximation: A Gradient Boosting Machine.” February 1999.
Friedman, J. H. “Stochastic Gradient Boosting.” March 1999.
Ye, J., et al. “Stochastic Gradient Boosted Distributed Decision Trees.” 2009. (distributed implementation)