Authors: Takahiro Sannomiya and Kazuhiro Hotta
Affiliation: Meijo University, Nagoya, Japan
Keyword(s): Explainable AI, Neural Network Interpretability, Class Activation Map.
Abstract: Grad-CAM and Score-CAM are methods for interpreting CNNs, whose internal behaviour is otherwise opaque. These methods do not select which layer to use; they simply visualize the basis of the decision from the final layer. However, we questioned whether this is really appropriate, since information that is important for the prediction may be hidden in layers other than the final one. In the proposed method, the layer to visualize is selected based on the prediction probability of the model. In addition, by taking the difference between a model fine-tuned slightly to increase the confidence of the output class and the model before this training, the proposed method emphasizes the regions that contributed to the prediction and provides a higher-quality basis for the decision. Experimental results confirm that the proposed method outperforms existing methods on two evaluation metrics.
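The layer-selection idea in the abstract can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: it assumes that each candidate layer yields a class activation map (CAM), and that a scoring function returns the model's prediction probability for the target class when the input is masked by that CAM (as in Score-CAM-style masking). The CAM values, the `toy_score` function, and the highlighted region are all invented stand-ins for a real model.

```python
import numpy as np

def select_layer_by_confidence(cams, score_fn):
    """Pick the layer whose CAM yields the highest prediction
    probability for the target class (toy sketch of the idea)."""
    scores = [score_fn(cam) for cam in cams]
    best = int(np.argmax(scores))
    return best, scores[best]

# Toy stand-ins: three per-layer CAMs and a dummy "confidence" score.
# In the real method, score_fn would run the CNN on the masked input.
rng = np.random.default_rng(0)
cams = [rng.random((8, 8)) for _ in range(3)]
cams[1][3:5, 3:5] += 5.0  # layer 1 highlights the (assumed) object region

def toy_score(cam):
    cam = cam / cam.max()               # normalise the mask to [0, 1]
    return float(cam[3:5, 3:5].mean())  # dummy proxy for class probability

layer, score = select_layer_by_confidence(cams, toy_score)
print(layer)  # the layer concentrating mass on the object region wins
```

Under these toy assumptions, the layer whose map best covers the object region is chosen; the real method would instead compare actual class probabilities across layers.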