PROB-POS: A FRAMEWORK FOR IMPROVING VISUAL EXPLANATIONS FROM CONVOLUTIONAL NEURAL NETWORKS FOR REMOTE SENSING IMAGE CLASSIFICATION


During the past decades, convolutional neural network (CNN)-based models have achieved notable success in remote sensing image classification due to their powerful feature representation ability. However, the lack of explainability in the decision-making process is a common criticism of these high-capacity networks. Local explanation methods that provide visual saliency maps have attracted increasing attention as a means to surmount the barrier of explainability.

However, the vast majority of research is conducted on the last convolutional layer, where the salient regions are unintelligible for some remote sensing images, especially scenes that contain plentiful small targets or resemble texture images. To address these issues, we propose a novel framework called Prob-POS, which consists of the class activation map based on the probe network (Prob-CAM) and the weighted probability of occlusion (wPO) selection strategy. The proposed probe network is a simple but effective architecture that generates elaborate explanation maps and can be applied to any layer of a CNN.
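The abstract does not spell out the probe architecture, so the sketch below is only one plausible reading: a lightweight 1x1-convolution classifier (the "probe") attached to the frozen features of an arbitrary CNN layer, whose class-specific responses double as the explanation map. The class names, method names, and design details (ProbeCAM, explanation_map, ReLU plus min-max normalisation) are assumptions for illustration, not the authors' exact Prob-CAM definition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProbeCAM(nn.Module):
    """Minimal sketch of a probe-style CAM head (illustrative, not the
    authors' exact architecture). A small classifier is attached to the
    feature maps of any chosen CNN layer; its class-specific responses
    are read out as an explanation map for that layer."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        # A 1x1 conv acts as a per-location linear classifier over channels.
        self.probe = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        # feats: (B, C, H, W) features taken from a frozen backbone layer.
        class_maps = self.probe(feats)            # (B, K, H, W) per-class responses
        logits = class_maps.mean(dim=(2, 3))      # global average pool -> (B, K)
        return logits, class_maps

    @torch.no_grad()
    def explanation_map(self, feats: torch.Tensor, target: int, size=None):
        _, class_maps = self.forward(feats)
        cam = F.relu(class_maps[:, target])       # keep positive evidence only
        if size is not None:                      # upsample to the input resolution
            cam = F.interpolate(cam.unsqueeze(1), size=size,
                                mode="bilinear", align_corners=False).squeeze(1)
        # Min-max normalise each map to [0, 1] for visualisation.
        cam = cam - cam.amin(dim=(1, 2), keepdim=True)
        cam = cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)
        return cam
```

Because the probe only adds a 1x1 convolution on top of frozen features, the same head can be trained on, and an explanation map produced from, any intermediate layer, which is the property the framework relies on when comparing layers.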

The wPO is a quantified metric that evaluates the explanation effectiveness of each layer for different categories, so that the optimal explanation layer can be picked out automatically. Variational weights are taken into account to highlight the high-scoring regions in the explanation map. Experimental results on two publicly available datasets and three prevalent networks demonstrate that Prob-POS improves the faithfulness and explainability of CNNs on remote sensing images.
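To make the layer-selection idea concrete, here is a rough sketch of an occlusion-based score in the spirit of wPO: patches of the input are occluded one at a time, the drop in the target-class probability is measured, and each drop is weighted by the explanation score of the occluded patch. The patch size, the zero-valued occluder, and the exact weighting scheme are assumptions for illustration and may differ from the paper's definition of wPO.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def weighted_occlusion_score(model, image, cam, target, patch=16):
    """Illustrative wPO-style score (weighting is assumed, not the paper's
    exact formula). image: (C, H, W) tensor; cam: (H, W) explanation map
    produced from one candidate layer; target: class index."""
    model.eval()
    base_prob = F.softmax(model(image.unsqueeze(0)), dim=1)[0, target]
    _, H, W = image.shape
    score, weight_sum = 0.0, 0.0
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            occluded = image.clone()
            occluded[:, y:y + patch, x:x + patch] = 0.0      # occlude one patch
            prob = F.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target]
            w = cam[y:y + patch, x:x + patch].mean().item()  # explanation weight
            score += w * (base_prob - prob).item()           # weighted probability drop
            weight_sum += w
    return score / max(weight_sum, 1e-8)
```

Under this reading, the candidate layer whose explanation map yields the largest weighted probability drop for a given category would be selected as the optimal explanation layer.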
