An accurate pixel-level explainable approach for CNNs and its application.
Convolutional neural networks (CNNs) are widely used for image classification. Unfortunately, the implicit decision knowledge carried by a trained CNN model is often difficult for humans to comprehend. A feasible way to make this knowledge understandable is to interpret the classification behaviour of the trained model. Many explainable methods have been proposed and have made substantial contributions; however, existing methods often suffer from insufficient interpretation accuracy. This paper presents a novel pixel-level explainable approach to address this lack of interpretation accuracy. A large number of experiments are conducted on CNN models published by the PyTorch team, and the results show that, in comparison with existing explainable methods, the presented approach interprets the classification basis of input images at the pixel level with 100% accuracy. In addition, a scheme to enhance the adversarial robustness of CNN models is designed on the basis of the presented explainable approach. Evaluation experiments show that the scheme provides an effective way to improve the adversarial robustness of CNN models and transfers across CNN models with different structures.
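The abstract does not describe the paper's method itself. As a rough, hypothetical illustration of what a "pixel-level classification basis" means, the sketch below computes gradient-times-input attributions for a simple linear scorer: with score s = w·x, pixel i contributes w[i]·x[i] to the decision, so the contributions both decompose the score exactly and rank pixels by influence. The function name and toy data are illustrative, not from the paper.

```python
def pixel_attributions(weights, pixels):
    """Per-pixel contribution to a linear score (gradient * input).

    For a linear model s = w . x, the gradient w.r.t. pixel i is w[i],
    so the gradient-times-input attribution of pixel i is w[i] * x[i].
    """
    return [w * x for w, x in zip(weights, pixels)]

# Toy 4-pixel "image" and weights for one class (illustrative values only).
weights = [0.5, -1.0, 0.0, 2.0]
pixels = [0.2, 0.8, 0.9, 0.4]

contribs = pixel_attributions(weights, pixels)
score = sum(contribs)  # the attributions sum exactly to the class score w . x

# Rank pixels by how strongly they support the class.
ranking = sorted(range(len(contribs)), key=lambda i: contribs[i], reverse=True)
print(contribs)  # [0.1, -0.8, 0.0, 0.8]
print(ranking)   # [3, 0, 2, 1] -> pixel 3 supports the class most, pixel 1 opposes it
```

For a deep CNN the same idea applies with the gradient of the class logit with respect to the input pixels (obtainable via automatic differentiation, e.g. PyTorch's autograd) in place of the fixed weight vector; the paper's actual technique may differ.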
(Copyright © 2025 Elsevier Ltd. All rights reserved.)
Declaration of competing interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.