
contributor author: Antonios Mamalakis
contributor author: Elizabeth A. Barnes
contributor author: Imme Ebert-Uphoff
date accessioned: 2023-04-12T18:52:29Z
date available: 2023-04-12T18:52:29Z
date copyright: 2022/10/01
date issued: 2022
identifier other: AIES-D-22-0012.1.pdf
identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4290396
description abstract: Convolutional neural networks (CNNs) have recently attracted great attention in geoscience because of their ability to capture nonlinear system behavior and extract predictive spatiotemporal patterns. Given their black-box nature, however, and the importance of prediction explainability, methods of explainable artificial intelligence (XAI) are gaining popularity as a means to explain the CNN decision-making strategy. Here, we establish an intercomparison of some of the most popular XAI methods and investigate their fidelity in explaining CNN decisions for geoscientific applications. Our goal is to raise awareness of the theoretical limitations of these methods and to gain insight into the relative strengths and weaknesses to help guide best practices. The considered XAI methods are first applied to an idealized attribution benchmark, in which the ground truth of explanation of the network is known a priori, to help objectively assess their performance. Second, we apply XAI to a climate-related prediction setting, namely, to explain a CNN that is trained to predict the number of atmospheric rivers in daily snapshots of climate simulations. Our results highlight several important issues of XAI methods (e.g., gradient shattering, inability to distinguish the sign of attribution, and ignorance to zero input) that have previously been overlooked in our field and, if not considered cautiously, may lead to a distorted picture of the CNN decision-making strategy. We envision that our analysis will motivate further investigation into XAI fidelity and will help toward a cautious implementation of XAI in geoscience, which can lead to further exploitation of CNNs and deep learning for prediction problems.
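For readers unfamiliar with the attribution methods the abstract discusses, below is a minimal, illustrative sketch (not the authors' code) of two common gradient-based XAI attributions for a CNN, assuming PyTorch; the tiny network and random input field are placeholders for a trained geoscience model. It shows concretely why absolute-gradient saliency cannot distinguish the sign of an attribution, and why input-times-gradient assigns zero attribution to zero-valued inputs.

```python
# Illustrative sketch only (hypothetical model and input, not the paper's setup).
import torch
import torch.nn as nn

# Placeholder CNN: one conv layer, global pooling, and a linear head.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 1),
)
model.eval()

x = torch.randn(1, 1, 32, 32, requires_grad=True)  # stand-in "climate map"
output = model(x)
output.backward()  # d(output)/d(input), populated into x.grad

# 1) Saliency as absolute gradient: taking |grad| discards whether a grid
#    point pushed the prediction up or down, i.e., the sign of attribution.
saliency = x.grad.abs()

# 2) Input * gradient: keeps the sign, but any grid point whose input value
#    is exactly zero receives zero attribution regardless of its gradient,
#    the "ignorance to zero input" issue noted in the abstract.
input_x_gradient = x.detach() * x.grad
```

In the idealized attribution benchmark described in the abstract, behaviors like these can be checked objectively, because the ground-truth explanation of the network is known a priori.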
publisher: American Meteorological Society
title: Investigating the Fidelity of Explainable Artificial Intelligence Methods for Applications of Convolutional Neural Networks in Geoscience
type: Journal Paper
journal volume: 1
journal issue: 4
journal title: Artificial Intelligence for the Earth Systems
identifier doi: 10.1175/AIES-D-22-0012.1
tree: Artificial Intelligence for the Earth Systems; 2022; volume 1; issue 4
contenttype: Fulltext

