Grad-CAM Reveals the Why Behind Deep Learning Decisions
This example shows how to use the gradient-weighted class activation mapping (Grad-CAM) technique to understand why a deep learning network makes its classification decisions. Grad-CAM, invented by Selvaraju and coauthors, uses the gradient of the classification score with respect to the convolutional features determined by the network to understand which parts of an image are most important for classification. This example uses the pretrained GoogLeNet image classification network.
Grad-CAM is a generalization of the class activation mapping (CAM) technique. For activation mapping techniques on live webcam data, see Investigate Network Predictions Using Class Activation Mapping. Grad-CAM can also be applied to nonclassification examples such as regression or semantic segmentation. For an example showing how to use Grad-CAM to investigate the predictions of a semantic segmentation network, see Explore Semantic Segmentation Network Using Grad-CAM.
Load Pretrained Network
Load the GoogLeNet network.
net = googlenet;
Read the GoogLeNet image size.
inputSize = net.Layers(1).InputSize(1:2);
Load the image sherlock.jpg, an image of a golden retriever included with this example.
img = imread("sherlock.jpg");
Resize the image to the network input dimensions.
img = imresize(img,inputSize);
Classify the image and display it, along with its classification and classification score.
[classfn,score] = classify(net,img);
imshow(img);
title(sprintf("%s (%.2f)", classfn, score(classfn)));
GoogLeNet correctly classifies the image as a golden retriever. But why? What characteristics of the image cause the network to make this classification?
Grad-CAM Explains Why
The Grad-CAM technique utilizes the gradients of the classification score with respect to the final convolutional feature map to identify the parts of an input image that most impact the classification score. The places where this gradient is large are exactly the places where the final score depends most on the data.

The gradCAM function computes the importance map by taking the derivative of the reduction layer output for a given class with respect to a convolutional feature map. For classification tasks, the gradCAM function automatically selects suitable layers to compute the importance map for. You can also specify the layers with the 'FeatureLayer' and 'ReductionLayer' name-value arguments.
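In the standard formulation from the Grad-CAM paper, the gradients are averaged over the spatial dimensions to give one weight per feature channel, and the importance map is the ReLU of the channel-weighted sum of the feature maps. The following is a minimal sketch of that combination step, assuming you already have the activations and gradients in hand; the variable names featureMap and grads are illustrative, not part of any API.

```matlab
% Sketch of the Grad-CAM combination step. Assumes featureMap and grads
% are H-by-W-by-K arrays: the activations of the chosen convolutional
% layer, and the gradients of the class score with respect to them.
weights = mean(grads,[1 2]);          % 1-by-1-by-K channel weights (spatial average)
map = sum(featureMap .* weights,3);   % weighted sum over the K channels
map = max(map,0);                     % ReLU: keep only positive evidence for the class
map = rescale(map);                   % scale to [0,1] for display
map = imresize(map,inputSize);        % upsample to the network input size
```

The ReLU matters: channels whose gradients point away from the class would otherwise darken regions that actually support the prediction.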
Compute the Grad-CAM map.
map = gradCAM(net,img,classfn);
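If you want to probe a specific layer rather than let gradCAM choose one, you can name it with the 'FeatureLayer' name-value argument. A hedged sketch: 'inception_5b-output' is the final depth concatenation layer in GoogLeNet, but confirm layer names for your own network by inspecting net.Layers or running analyzeNetwork(net).

```matlab
% Specify the convolutional feature layer explicitly instead of letting
% gradCAM select it. The layer name below is GoogLeNet's last depth
% concatenation layer; check net.Layers to confirm before relying on it.
map = gradCAM(net,img,classfn,'FeatureLayer','inception_5b-output');
```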
Show the Grad-CAM map on top of the image by using an 'AlphaData' value of 0.5. The 'jet' colormap has deep blue as the lowest value and deep red as the highest.
imshow(img);
hold on;
imagesc(map,'AlphaData',0.5);
colormap jet
hold off;
title("Grad-CAM");
Clearly, the upper face and ear of the dog have the greatest impact on the classification.
References

Selvaraju, R. R., M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra. "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization." In IEEE International Conference on Computer Vision (ICCV), 2017, pp. 618–626. Available at Grad-CAM on the Computer Vision Foundation Open Access website.
Related Topics

- Interpret Deep Learning Time-Series Classifications Using Grad-CAM
- Explore Semantic Segmentation Network Using Grad-CAM
- Investigate Network Predictions Using Class Activation Mapping
- Deep Learning Visualization Methods
- Explore Network Predictions Using Deep Learning Visualization Techniques
- Understand Network Predictions Using LIME