# innsight

Interpretability methods to analyze the behavior and individual predictions of modern neural networks in R.
`innsight` is an R package that interprets the behavior and explains individual predictions of modern neural networks. Many methods for explaining individual predictions already exist, but hardly any of them are implemented or available in R. Most of these so-called feature attribution methods are only implemented in Python and are thus difficult to access or use for the R community. In this sense, the package `innsight` provides a common interface for various methods for the interpretability of neural networks and can therefore be considered an R analogue to iNNvestigate or Captum for Python.
This package implements several model-specific interpretability (feature attribution) methods based on neural networks in R, e.g.,
- Layer-wise Relevance Propagation (LRP)
  - Including propagation rules: $\varepsilon$-rule and $\alpha$-$\beta$-rule
- Deep Learning Important Features (DeepLift)
  - Including propagation rules for non-linearities: Rescale rule and RevealCancel rule
- DeepSHAP
- Gradient-based methods:
  - Vanilla Gradient, including Gradient x Input
  - Smoothed gradients (SmoothGrad), including SmoothGrad x Input
  - Integrated gradients
  - Expected gradients
- Connection Weights
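As a rough sketch of how the individual methods are invoked, assuming a converted model `converter`, input `data`, and the package's `run_*` helper functions (argument names such as `rule_name`, `rule_param`, and `times_input` follow the package documentation, but should be checked against your installed version):

```r
# LRP with the epsilon rule ('rule_param' is the value of epsilon)
res_lrp <- run_lrp(converter, data, rule_name = "epsilon", rule_param = 0.01)

# DeepLift with the RevealCancel rule for non-linearities
res_deeplift <- run_deeplift(converter, data, rule_name = "reveal_cancel")

# SmoothGrad multiplied by the input (SmoothGrad x Input)
res_smoothgrad <- run_smoothgrad(converter, data, times_input = TRUE)
```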
Example results for these methods on ImageNet with the pretrained network VGG19 can be found in Example 3: ImageNet with `keras`.
The package `innsight` aims to be as flexible as possible and independent of the specific deep learning package in which the passed network has been trained. Basically, a neural network from the libraries `torch`, `keras` and `neuralnet` can be passed, which is internally converted into a `torch` model with the special insights needed for interpretation. But it is also possible to pass an arbitrary net in the form of a named list (see the vignette for details).
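For example, here is a minimal conversion sketch for a plain `torch` model (the architecture below is made up purely for illustration; `input_dim` is required because `torch` modules do not store their expected input shape):

```r
library(innsight)
library(torch)

# A small sequential model (hypothetical architecture)
model <- nn_sequential(
  nn_linear(4, 16),
  nn_relu(),
  nn_linear(16, 3),
  nn_softmax(dim = 2)
)

# Convert to the internal innsight representation; 'input_dim' is the
# shape of a single input instance (without the batch dimension)
converter <- convert(model, input_dim = c(4))
```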
The package can be installed directly from CRAN, and the development version from GitHub, with the following commands (a successful installation of `devtools` is required):
```r
# Stable version
install.packages("innsight")

# Development version
devtools::install_github("bips-hb/innsight")
```
Internally, any passed model is converted to a `torch` model, thus the correct functionality of this package relies on a complete and correct installation of `torch`. For this reason, the following command must be run manually to install the missing libraries LibTorch and LibLantern:
```r
torch::install_torch()
```
📝 Note
Currently, this can lead to problems under Windows if the Visual Studio runtime is not pre-installed; see the corresponding issue on GitHub. For more information and other problems with installing `torch`, see the official installation vignette of `torch`.
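To check whether the backend is in place, `torch` ships a helper that reports whether the additional libraries were found:

```r
# Should return TRUE once LibTorch and LibLantern are installed
torch::torch_is_installed()
```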
Suppose you have a trained neural network `model` and your model input data `data`, and you want to interpret individual data points or the overall behavior using the methods from the package `innsight`. Then stick to the following pseudo code:
```r
# --------------- Step 0: Train your model -----------------
# 'model' has to be an instance of either torch::nn_sequential,
# keras::keras_model_sequential, keras::keras_model or neuralnet::neuralnet
model <- ...

# -------------- Step 1: Convert your model ----------------
# For keras and neuralnet
converter <- convert(model)
# For a torch model the argument 'input_dim' is required
converter <- convert(model, input_dim = model_input_dim)

# -------------- Step 2: Apply method ----------------------
# Apply global method
result <- run_method(converter) # no data argument is needed
# Apply local methods
result <- run_method(converter, data)

# -------------- Step 3: Get and plot results --------------
# Get the results as an array
res <- get_result(result)

# Plot individual results
plot(result)
# Plot an aggregated plot of all given data points in argument 'data'
plot_global(result)
boxplot(result) # alias of `plot_global` for tabular and signal data

# Interactive plots can also be created for both methods
plot(result, as_plotly = TRUE)
```
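To make the pseudo code concrete, here is a small end-to-end sketch on the iris data with a `neuralnet` model (the hyperparameters and the selected rows are arbitrary and only for illustration):

```r
library(innsight)
library(neuralnet)

set.seed(42)
# Step 0: Train a small network on two features of the iris data
model <- neuralnet(Species ~ Petal.Length + Petal.Width, data = iris,
                   hidden = 3, linear.output = FALSE)

# Step 1: Convert the model (no 'input_dim' needed for neuralnet)
converter <- convert(model)

# Step 2: Apply a local method, e.g., Gradient x Input on three instances
instances <- iris[c(1, 71, 111), c("Petal.Length", "Petal.Width")]
result <- run_grad(converter, instances, times_input = TRUE)

# Step 3: Get the results as an array and plot them
res <- get_result(result)
plot(result, output_idx = 1:3)
```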
For a more detailed high-level introduction, see the introduction vignette, and for a full in-depth explanation with all the possibilities, see the “In-depth explanation” vignette.
- Iris dataset with a `torch` model (numeric tabular data) → vignette
- Penguin dataset with a `torch` model trained with `luz` (numeric and categorical tabular data) → vignette
- ImageNet dataset with pre-trained models in `keras` (image data) → article
If you would like to contribute, please open an issue or submit a pull request.
This package becomes even more alive and valuable if people are using it for their analyses. Therefore, don’t hesitate to write me (niklas.koenen@gmail.com) or create a feature request if you are missing something for your analyses or have great ideas for extending this package. Currently, we are working on the following:
- GPU support
- More methods, e.g., Grad-CAM
- More examples and documentation (contact me if you have a non-trivial application for me)
If you use this package in your research, please cite it as follows:
```bibtex
@Article{,
  title   = {Interpreting Deep Neural Networks with the Package {innsight}},
  author  = {Niklas Koenen and Marvin N. Wright},
  journal = {Journal of Statistical Software},
  year    = {2024},
  volume  = {111},
  number  = {8},
  pages   = {1--52},
  doi     = {10.18637/jss.v111.i08},
}
```

This work is funded by the German Research Foundation (DFG) in the context of the Emmy Noether Grant 437611051.
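With `innsight` installed, the same entry can also be printed from within R using base R's `citation()` (assuming the package ships a CITATION file, as JSS packages typically do):

```r
# Prints the citation information shipped with the package
citation("innsight")
```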