Highly-cited and top-venue papers from recent years on the interpretability of deep neural network models (with code).

# oneTaken/awesome_deep_learning_interpretability


Papers from recent years on the interpretability of deep learning models.

For the list sorted by citation count, see 引用排序.

PDFs of 159 of the papers (two of which need to be found on Sci-Hub) have been uploaded to Tencent Weiyun.

Updated from time to time.

| Year | Publication | Paper | Citations | Code |
|------|-------------|-------|-----------|------|
| 2020 | CVPR | Explaining Knowledge Distillation by Quantifying the Knowledge | 81 | |
| 2020 | CVPR | High-frequency Component Helps Explain the Generalization of Convolutional Neural Networks | 289 | |
| 2020 | CVPRW | Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks | 414 | Pytorch |
| 2020 | ICLR | Knowledge consistency between neural networks and beyond | 28 | |
| 2020 | ICLR | Interpretable Complex-Valued Neural Networks for Privacy Protection | 23 | |
| 2019 | AI | Explanation in artificial intelligence: Insights from the social sciences | 3248 | |
| 2019 | NMI | Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead | 3505 | |
| 2019 | NeurIPS | Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift | 1052 | |
| 2019 | NeurIPS | This looks like that: deep learning for interpretable image recognition | 665 | Pytorch |
| 2019 | NeurIPS | A benchmark for interpretability methods in deep neural networks | 413 | |
| 2019 | NeurIPS | Full-gradient representation for neural network visualization | 155 | |
| 2019 | NeurIPS | On the (In)fidelity and Sensitivity of Explanations | 226 | |
| 2019 | NeurIPS | Towards Automatic Concept-based Explanations | 342 | Tensorflow |
| 2019 | NeurIPS | CXPlain: Causal explanations for model interpretation under uncertainty | 133 | |
| 2019 | CVPR | Interpreting CNNs via Decision Trees | 293 | |
| 2019 | CVPR | From Recognition to Cognition: Visual Commonsense Reasoning | 544 | Pytorch |
| 2019 | CVPR | Attention branch network: Learning of attention mechanism for visual explanation | 371 | |
| 2019 | CVPR | Interpretable and fine-grained visual explanations for convolutional neural networks | 116 | |
| 2019 | CVPR | Learning to Explain with Complemental Examples | 36 | |
| 2019 | CVPR | Revealing Scenes by Inverting Structure from Motion Reconstructions | 84 | Tensorflow |
| 2019 | CVPR | Multimodal Explanations by Predicting Counterfactuality in Videos | 26 | |
| 2019 | CVPR | Visualizing the Resilience of Deep Convolutional Network Interpretations | 2 | |
| 2019 | ICCV | U-CAM: Visual Explanation using Uncertainty based Class Activation Maps | 61 | |
| 2019 | ICCV | Towards Interpretable Face Recognition | 66 | |
| 2019 | ICCV | Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded | 163 | |
| 2019 | ICCV | Understanding Deep Networks via Extremal Perturbations and Smooth Masks | 276 | Pytorch |
| 2019 | ICCV | Explaining Neural Networks Semantically and Quantitatively | 49 | |
| 2019 | ICLR | Hierarchical interpretations for neural network predictions | 111 | Pytorch |
| 2019 | ICLR | How Important Is a Neuron? | 101 | |
| 2019 | ICLR | Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks | 56 | |
| 2018 | ICML | Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples | 169 | Pytorch |
| 2019 | ICML | Towards A Deep and Unified Understanding of Deep Neural Models in NLP | 80 | Pytorch |
| 2019 | ICAIS | Interpreting black box predictions using fisher kernels | 80 | |
| 2019 | ACM FAT* | Explaining explanations in AI | 558 | |
| 2019 | AAAI | Interpretation of neural networks is fragile | 597 | Tensorflow |
| 2019 | AAAI | Classifier-agnostic saliency map extraction | 23 | |
| 2019 | AAAI | Can You Explain That? Lucid Explanations Help Human-AI Collaborative Image Retrieval | 11 | |
| 2019 | AAAIW | Unsupervised Learning of Neural Networks to Explain Neural Networks | 28 | |
| 2019 | AAAIW | Network Transplanting | 4 | |
| 2019 | CSUR | A Survey of Methods for Explaining Black Box Models | 3088 | |
| 2019 | JVCIR | Interpretable convolutional neural networks via feedforward design | 134 | Keras |
| 2019 | ExplainAI | The (Un)reliability of saliency methods | 515 | |
| 2019 | ACL | Attention is not Explanation | 920 | |
| 2019 | EMNLP | Attention is not not Explanation | 667 | |
| 2019 | arXiv | Attention Interpretability Across NLP Tasks | 129 | |
| 2019 | arXiv | Interpretable CNNs | 2 | |
| 2018 | ICLR | Towards better understanding of gradient-based attribution methods for deep neural networks | 775 | |
| 2018 | ICLR | Learning how to explain neural networks: PatternNet and PatternAttribution | 342 | |
| 2018 | ICLR | On the importance of single directions for generalization | 282 | Pytorch |
| 2018 | ICLR | Detecting statistical interactions from neural network weights | 148 | Pytorch |
| 2018 | ICLR | Interpretable counting for visual question answering | 55 | Pytorch |
| 2018 | CVPR | Interpretable Convolutional Neural Networks | 677 | |
| 2018 | CVPR | Tell me where to look: Guided attention inference network | 454 | Chainer |
| 2018 | CVPR | Multimodal Explanations: Justifying Decisions and Pointing to the Evidence | 349 | Caffe |
| 2018 | CVPR | Transparency by design: Closing the gap between performance and interpretability in visual reasoning | 180 | Pytorch |
| 2018 | CVPR | Net2vec: Quantifying and explaining how concepts are encoded by filters in deep neural networks | 186 | |
| 2018 | CVPR | What have we learned from deep representations for action recognition? | 52 | |
| 2018 | CVPR | Learning to Act Properly: Predicting and Explaining Affordances from Images | 57 | |
| 2018 | CVPR | Teaching Categories to Human Learners with Visual Explanations | 64 | Pytorch |
| 2018 | CVPR | What do deep networks like to see? | 36 | |
| 2018 | CVPR | Interpret Neural Networks by Identifying Critical Data Routing Paths | 73 | Tensorflow |
| 2018 | ECCV | Deep clustering for unsupervised learning of visual features | 2056 | Pytorch |
| 2018 | ECCV | Explainable neural computation via stack neural module networks | 164 | Tensorflow |
| 2018 | ECCV | Grounding visual explanations | 184 | |
| 2018 | ECCV | Textual explanations for self-driving vehicles | 196 | |
| 2018 | ECCV | Interpretable basis decomposition for visual explanation | 228 | Pytorch |
| 2018 | ECCV | Convnets and imagenet beyond accuracy: Understanding mistakes and uncovering biases | 147 | |
| 2018 | ECCV | Vqa-e: Explaining, elaborating, and enhancing your answers for visual questions | 71 | |
| 2018 | ECCV | Choose Your Neuron: Incorporating Domain Knowledge through Neuron-Importance | 41 | Pytorch |
| 2018 | ECCV | Diverse feature visualizations reveal invariances in early layers of deep neural networks | 23 | Tensorflow |
| 2018 | ECCV | ExplainGAN: Model Explanation via Decision Boundary Crossing Transformations | 36 | |
| 2018 | ICML | Interpretability beyond feature attribution: Quantitative testing with concept activation vectors | 1130 | Tensorflow |
| 2018 | ICML | Learning to explain: An information-theoretic perspective on model interpretation | 421 | |
| 2018 | ACL | Did the Model Understand the Question? | 171 | Tensorflow |
| 2018 | FITEE | Visual interpretability for deep learning: a survey | 731 | |
| 2018 | NeurIPS | Sanity Checks for Saliency Maps | 1353 | |
| 2018 | NeurIPS | Explanations based on the missing: Towards contrastive explanations with pertinent negatives | 443 | Tensorflow |
| 2018 | NeurIPS | Towards robust interpretability with self-explaining neural networks | 648 | Pytorch |
| 2018 | NeurIPS | Attacks meet interpretability: Attribute-steered detection of adversarial samples | 142 | |
| 2018 | NeurIPS | DeepPINK: reproducible feature selection in deep neural networks | 125 | Keras |
| 2018 | NeurIPS | Representer point selection for explaining deep neural networks | 182 | Tensorflow |
| 2018 | NeurIPS Workshop | Interpretable convolutional filters with sincNet | 97 | |
| 2018 | AAAI | Anchors: High-precision model-agnostic explanations | 1517 | |
| 2018 | AAAI | Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients | 537 | Tensorflow |
| 2018 | AAAI | Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions | 396 | Tensorflow |
| 2018 | AAAI | Interpreting CNN Knowledge via an Explanatory Graph | 199 | Matlab |
| 2018 | AAAI | Examining CNN Representations with respect to Dataset Bias | 88 | |
| 2018 | WACV | Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks | 1459 | |
| 2018 | IJCV | Top-down neural attention by excitation backprop | 778 | |
| 2018 | TPAMI | Interpreting deep visual representations via network dissection | 252 | |
| 2018 | DSP | Methods for interpreting and understanding deep neural networks | 2046 | |
| 2018 | Access | Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI) | 3110 | |
| 2018 | JAIR | Learning Explanatory Rules from Noisy Data | 440 | Tensorflow |
| 2018 | MIPRO | Explainable artificial intelligence: A survey | 794 | |
| 2018 | BMVC | Rise: Randomized input sampling for explanation of black-box models | 657 | |
| 2018 | arXiv | Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation | 194 | |
| 2018 | arXiv | Manipulating and measuring model interpretability | 496 | |
| 2018 | arXiv | How convolutional neural network see the world - A survey of convolutional neural network visualization methods | 211 | |
| 2018 | arXiv | Revisiting the importance of individual units in cnns via ablation | 93 | |
| 2018 | arXiv | Computationally Efficient Measures of Internal Neuron Importance | 10 | |
| 2017 | ICML | Understanding Black-box Predictions via Influence Functions | 2062 | Pytorch |
| 2017 | ICML | Axiomatic attribution for deep networks | 3654 | Keras |
| 2017 | ICML | Learning Important Features Through Propagating Activation Differences | 2835 | |
| 2017 | ICLR | Visualizing deep neural network decisions: Prediction difference analysis | 674 | Caffe |
| 2017 | ICLR | Exploring LOTS in Deep Neural Networks | 34 | |
| 2017 | NeurIPS | A Unified Approach to Interpreting Model Predictions | 11511 | |
| 2017 | NeurIPS | Real time image saliency for black box classifiers | 483 | Pytorch |
| 2017 | NeurIPS | SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability | 473 | |
| 2017 | CVPR | Mining Object Parts from CNNs via Active Question-Answering | 29 | |
| 2017 | CVPR | Network dissection: Quantifying interpretability of deep visual representations | 1254 | |
| 2017 | CVPR | Improving Interpretability of Deep Neural Networks with Semantic Information | 118 | |
| 2017 | CVPR | MDNet: A Semantically and Visually Interpretable Medical Image Diagnosis Network | 307 | Torch |
| 2017 | CVPR | Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering | 1686 | |
| 2017 | CVPR | Knowing when to look: Adaptive attention via a visual sentinel for image captioning | 1392 | Torch |
| 2017 | CVPRW | Interpretable 3d human action analysis with temporal convolutional networks | 539 | |
| 2017 | ICCV | Grad-cam: Visual explanations from deep networks via gradient-based localization | 13006 | Pytorch |
| 2017 | ICCV | Interpretable Explanations of Black Boxes by Meaningful Perturbation | 1293 | Pytorch |
| 2017 | ICCV | Interpretable Learning for Self-Driving Cars by Visualizing Causal Attention | 323 | |
| 2017 | ICCV | Understanding and comparing deep neural networks for age and gender classification | 130 | |
| 2017 | ICCV | Learning to disambiguate by asking discriminative questions | 26 | |
| 2017 | IJCAI | Right for the right reasons: Training differentiable models by constraining their explanations | 429 | |
| 2017 | IJCAI | Understanding and improving convolutional neural networks via concatenated rectified linear units | 510 | Caffe |
| 2017 | AAAI | Growing Interpretable Part Graphs on ConvNets via Multi-Shot Learning | 67 | Matlab |
| 2017 | ACL | Visualizing and Understanding Neural Machine Translation | 179 | |
| 2017 | EMNLP | A causal framework for explaining the predictions of black-box sequence-to-sequence models | 192 | |
| 2017 | CVPR Workshop | Looking under the hood: Deep neural network visualization to interpret whole-slide image analysis outcomes for colorectal polyps | 47 | |
| 2017 | survey | Interpretability of deep learning models: a survey of results | 345 | |
| 2017 | arXiv | SmoothGrad: removing noise by adding noise | 1479 | |
| 2017 | arXiv | Interpretable & explorable approximations of black box models | 259 | |
| 2017 | arXiv | Distilling a neural network into a soft decision tree | 520 | Pytorch |
| 2017 | arXiv | Towards interpretable deep neural networks by leveraging adversarial examples | 111 | |
| 2017 | arXiv | Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models | 1279 | |
| 2017 | arXiv | Contextual Explanation Networks | 77 | Pytorch |
| 2017 | arXiv | Challenges for transparency | 142 | |
| 2017 | ACM SOSP | Deepxplore: Automated whitebox testing of deep learning systems | 1144 | |
| 2017 | CEURW | What does explainable AI really mean? A new conceptualization of perspectives | 518 | |
| 2017 | TVCG | ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models | 346 | |
| 2016 | NeurIPS | Synthesizing the preferred inputs for neurons in neural networks via deep generator networks | 659 | Caffe |
| 2016 | NeurIPS | Understanding the effective receptive field in deep convolutional neural networks | 1356 | |
| 2016 | CVPR | Inverting Visual Representations with Convolutional Networks | 626 | |
| 2016 | CVPR | Visualizing and Understanding Deep Texture Representations | 147 | |
| 2016 | CVPR | Analyzing Classifiers: Fisher Vectors and Deep Neural Networks | 191 | |
| 2016 | ECCV | Generating Visual Explanations | 613 | Caffe |
| 2016 | ECCV | Design of kernels in convolutional neural networks for image classification | 24 | |
| 2016 | ICML | Understanding and improving convolutional neural networks via concatenated rectified linear units | 510 | |
| 2016 | ICML | Visualizing and comparing AlexNet and VGG using deconvolutional layers | 126 | |
| 2016 | EMNLP | Rationalizing Neural Predictions | 738 | Pytorch |
| 2016 | IJCV | Visualizing deep convolutional neural networks using natural pre-images | 508 | Matlab |
| 2016 | IJCV | Visualizing Object Detection Features | 38 | Caffe |
| 2016 | KDD | Why should i trust you?: Explaining the predictions of any classifier | 11742 | |
| 2016 | TVCG | Visualizing the hidden activity of artificial neural networks | 309 | |
| 2016 | TVCG | Towards better analysis of deep convolutional neural networks | 474 | |
| 2016 | NAACL | Visualizing and understanding neural models in nlp | 650 | Torch |
| 2016 | arXiv | Understanding neural networks through representation erasure | 492 | |
| 2016 | arXiv | Grad-CAM: Why did you say that? | 398 | |
| 2016 | arXiv | Investigating the influence of noise and distractors on the interpretation of neural networks | 108 | |
| 2016 | arXiv | Attentive Explanations: Justifying Decisions and Pointing to the Evidence | 88 | |
| 2016 | arXiv | The Mythos of Model Interpretability | 3786 | |
| 2016 | arXiv | Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks | 317 | |
| 2015 | ICLR | Striving for Simplicity: The All Convolutional Net | 4645 | Pytorch |
| 2015 | CVPR | Understanding deep image representations by inverting them | 1942 | Matlab |
| 2015 | ICCV | Understanding deep features with computer-generated imagery | 156 | Caffe |
| 2015 | ICML Workshop | Understanding Neural Networks Through Deep Visualization | 2038 | Tensorflow |
| 2015 | AAS | Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model | 749 | |
| 2014 | ECCV | Visualizing and Understanding Convolutional Networks | 18604 | Pytorch |
| 2014 | ICLR | Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps | 6142 | Pytorch |
| 2013 | ICCV | Hoggles: Visualizing object detection features | 352 | |
- 论文talk (paper talks)
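The citation-sorted view mentioned in the introduction can be regenerated mechanically from the table. A minimal sketch, assuming the entries are kept as `(year, venue, title, citations)` tuples (the three sample rows below are taken from the table; everything else is illustrative):

```python
# Sketch: re-sort paper entries by citation count, descending,
# to produce a citation-ordered listing of the table above.
from operator import itemgetter

# Sample entries copied from the table; a real script would hold all rows.
papers = [
    (2017, "ICCV", "Grad-cam: Visual explanations from deep networks "
                   "via gradient-based localization", 13006),
    (2016, "KDD", "Why should i trust you?: Explaining the predictions "
                  "of any classifier", 11742),
    (2014, "ECCV", "Visualizing and Understanding Convolutional Networks", 18604),
]

# Sort by the citation field (tuple index 3), highest first.
by_citations = sorted(papers, key=itemgetter(3), reverse=True)

for year, venue, title, cites in by_citations:
    print(f"{cites:>6}  {year} {venue}  {title}")
```

With the three sample rows, the 2014 ECCV entry (18604 citations) prints first.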

