
The (Un)reliability of Saliency methods – Google Research

by Massimo

Feature importance evaluation is a fundamental problem in deep model interpretation and feature visualization. Saliency methods, among the most popular approaches to it, assign each input feature an importance score measuring how useful that feature is for the model's prediction; a large importance score for a feature means a large performance drop when that feature is unavailable. Saliency methods are therefore one way to explain deep learning behaviour. A Google Research paper, however, demonstrates that many of these methods can be unreliable.
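
To make the idea concrete, here is a minimal sketch of one common saliency method, "gradient × input", in PyTorch. The toy network, its layer sizes, and the random input are hypothetical placeholders for illustration, not taken from the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative toy classifier; the architecture is an arbitrary assumption.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))

x = torch.randn(1, 4, requires_grad=True)
logits = model(x)

# Backpropagate the score of the predicted class to the input features.
score = logits[0, logits.argmax(dim=1).item()]
score.backward()

# "Gradient * input": each feature's importance is its gradient
# scaled by its value.
saliency = (x.grad * x).detach()
print(saliency)
```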

Indeed, saliency methods aim to explain the predictions of deep neural networks, but they lack reliability when the explanation is sensitive to factors that do not contribute to the model's prediction. The paper frames this as an input invariance requirement: a transformation of the input that has no effect on the model's prediction (for example, a constant mean shift absorbed by the first-layer bias) should have no effect on the attribution either. Saliency methods that do not satisfy input invariance produce misleading attributions.
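
This failure is easy to reproduce. Below is a hedged sketch of the invariance check described above: two networks constructed to make identical predictions on mean-shifted copies of the same input, for which gradient × input nonetheless returns different attributions. The architecture, the shift value, and the helper function are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
shift = 2.0  # constant mean shift added to every input feature (assumed value)

net1 = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
net2 = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
net2.load_state_dict(net1.state_dict())
with torch.no_grad():
    # Fold the shift into the first-layer bias so that
    # net2(x + shift) == net1(x) for every x.
    net2[0].bias -= net2[0].weight @ (shift * torch.ones(4))

def grad_times_input(net, x):
    # Hypothetical helper: "gradient * input" attribution for the top class.
    x = x.clone().requires_grad_(True)
    logits = net(x)
    logits[0, logits.argmax(dim=1).item()].backward()
    return (x.grad * x).detach()

x = torch.randn(1, 4)
# The two networks agree on their (shifted) inputs...
assert torch.allclose(net1(x), net2(x + shift), atol=1e-5)
# ...yet the attributions differ, because the gradients are identical
# while the inputs they are multiplied by are not.
print(grad_times_input(net1, x))
print(grad_times_input(net2, x + shift))
```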

Source: The (Un)reliability of Saliency methods – Google Research

