Software authors: Brian Williamson, Jean Feng, and Charlie Wolock
Methodology authors: Brian Williamson, Peter Gilbert, Noah Simon, Marco Carone, Jean Feng
Python package: https://github.com/bdwilliamson/vimpy
In predictive modeling applications, it is often of interest to determine the relative contribution of subsets of features in explaining an outcome; this is often called variable importance. It is useful to consider variable importance as a function of the unknown, underlying data-generating mechanism rather than the specific predictive algorithm used to fit the data. This package provides functions that, given fitted values from predictive algorithms, compute algorithm-agnostic estimates of population variable importance, along with asymptotically valid confidence intervals for the true importance and hypothesis tests of the null hypothesis of zero importance.
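As an informal sketch of the estimand (this is a summary in the notation of the papers cited below, not a formal definition; see those papers for precise statements and regularity conditions), the importance of a feature subset $s$ is the difference in oracle predictiveness with and without those features:

```math
\psi_{0,s} = V(f_0, P_0) - V(f_{0,s}, P_0),
```

where $V$ is a measure of predictiveness (e.g., R-squared or AUC), $P_0$ is the true data-generating distribution, $f_0$ is the best possible ("oracle") prediction function using all features, and $f_{0,s}$ is the oracle prediction function that is not allowed to use the features with index in $s$.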
Specifically, the types of variable importance supported by `vimp` include:

* difference in population classification accuracy
* difference in population area under the receiver operating characteristic curve (AUC)
* difference in population deviance
* difference in population R-squared
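Each measure has a corresponding estimation function (e.g., `vimp_rsquared`, `vimp_auc`, `vimp_accuracy`, `vimp_deviance`), and the general `vim` function accepts the measure via a `type` argument. The sketch below illustrates the idea only; the exact function names and arguments reflect recent versions of the package and should be checked against the package documentation (`?vim`) before use.

```r
# A minimal, self-contained sketch (not taken from the package documentation):
# the same feature's importance can be estimated under different predictiveness measures.
library("vimp")
library("SuperLearner")

set.seed(1234)
n <- 200
x <- as.data.frame(replicate(2, runif(n, -1, 1)))
y_cont <- x[, 1] + rnorm(n)                # continuous outcome
y_bin <- rbinom(n, 1, plogis(2 * x[, 1]))  # binary outcome

# a small learner library for illustration (SL.glm and SL.mean ship with SuperLearner)
lib <- c("SL.glm", "SL.mean")

# R-squared-based importance of feature 1 (continuous outcome)
est_r2 <- vimp_rsquared(Y = y_cont, X = x, indx = 1, V = 2, SL.library = lib)

# AUC-based importance of feature 1 (binary outcome)
est_auc <- vimp_auc(Y = y_bin, X = x, indx = 1, V = 2, SL.library = lib)

# the general interface takes the measure as a `type` argument
est_acc <- vim(Y = y_bin, X = x, indx = 1, type = "accuracy", SL.library = lib)
```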
More detail may be found in our papers on R-squared-based variable importance, general variable importance, and general Shapley-based variable importance.
This method works on low-dimensional and high-dimensional data.
If you encounter any bugs or have any specific feature requests, please file an issue.
You may install a stable release of `vimp` from CRAN via `install.packages("vimp")`. You may also install a stable release of `vimp` from GitHub via `devtools` by running the following code (replace `v2.1.0` with the tag for the specific release you wish to install):

```r
## install.packages("devtools") # only run this line if necessary
devtools::install_github(repo = "bdwilliamson/vimp@v2.1.0")
```

You may install a development release of `vimp` from GitHub via `devtools` by running the following code:

```r
## install.packages("devtools") # only run this line if necessary
devtools::install_github(repo = "bdwilliamson/vimp")
```

This example shows how to use `vimp` in a simple setting with simulated data, using `SuperLearner` to estimate the conditional mean functions and specifying the importance measure of interest as the R-squared-based measure. For more examples and detailed explanation, please see the vignette.
```r
# load required functions and libraries
library("SuperLearner")
library("vimp")
library("xgboost")
library("glmnet")

# -------------------------------------------------------------
# problem setup
# -------------------------------------------------------------
# set up the data
n <- 100
p <- 2
s <- 1 # desire importance for X_1
x <- as.data.frame(replicate(p, runif(n, -1, 1)))
y <- (x[, 1])^2 * (x[, 1] + 7/5) + (25/9) * (x[, 2])^2 + rnorm(n, 0, 1)

# -------------------------------------------------------------
# get variable importance!
# -------------------------------------------------------------
# set up the learner library, consisting of the mean, boosted trees,
# elastic net, and random forest
learner.lib <- c("SL.mean", "SL.xgboost", "SL.glmnet", "SL.randomForest")

# get the variable importance estimate, SE, and CI
# I'm using only 2 cross-fitting folds to make things run quickly; in practice, you should use more
set.seed(20231213)
est <- vimp_rsquared(Y = y, X = x, indx = 1, V = 2, SL.library = learner.lib)
```
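The returned object contains the estimate along with its standard error, a confidence interval, and a p-value for the test of zero importance. A brief sketch of inspecting these results is below; the element names are an assumption based on recent versions of the package and should be checked against the package documentation (`?vim`).

```r
# print the estimated importance of X_1, along with the SE, CI, and p-value
print(est)

# individual components can also be extracted; these element names are an
# assumption based on recent package versions -- see ?vim if they differ
est$est      # point estimate of the R-squared-based importance
est$ci       # confidence interval for the true importance
est$p_value  # p-value for the test of the null of zero importance
```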
After using the `vimp` package, please cite the following (for R-squared-based variable importance):

```
@article{williamson2020,
  author  = {Williamson, BD and Gilbert, PB and Carone, M and Simon, NR},
  title   = {Nonparametric variable importance assessment using machine learning techniques},
  journal = {Biometrics},
  year    = {2020},
  doi     = {10.1111/biom.13392}
}
```

or the following (for general variable importance parameters):
```
@article{williamson2021,
  author  = {Williamson, BD and Gilbert, PB and Simon, NR and Carone, M},
  title   = {A general framework for inference on algorithm-agnostic variable importance},
  journal = {Journal of the American Statistical Association},
  year    = {2021},
  doi     = {10.1080/01621459.2021.2003200}
}
```

or the following (for Shapley-based variable importance):
```
@inproceedings{williamson2020b,
  title     = {Efficient nonparametric statistical inference on population feature importance using {S}hapley values},
  author    = {Williamson, BD and Feng, J},
  year      = {2020},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  volume    = {119},
  pages     = {10282--10291},
  series    = {Proceedings of Machine Learning Research},
  url       = {http://proceedings.mlr.press/v119/williamson20a.html}
}
```

The contents of this repository are distributed under the MIT license. See below for details:
MIT License

Copyright (c) [2018-present] [Brian D. Williamson]

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

The logo was created using hexSticker and lisa. Many thanks to the maintainers of these packages and the Color Lisa team.