YaBeSH Engineering and Technology Library

    This Looks Like That There: Interpretable Neural Networks for Image Tasks When Location Matters

    Source: Artificial Intelligence for the Earth Systems, 2022, Volume 1, Issue 3
    Authors: Elizabeth A. Barnes, Randal J. Barnes, Zane K. Martin, Jamin K. Rader
    DOI: 10.1175/AIES-D-22-0001.1
    Publisher: American Meteorological Society
    Abstract: We develop and demonstrate a new interpretable deep learning model specifically designed for image analysis in Earth system science applications. The neural network is designed to be inherently interpretable, rather than explained via post hoc methods. This is achieved by training the network to identify parts of training images that act as prototypes for correctly classifying unseen images. The new network architecture extends the interpretable prototype architecture of a previous study in computer science to incorporate absolute location. This is useful for Earth system science where images are typically the result of physics-based processes, and the information is often geolocated. Although the network is constrained to only learn via similarities to a small number of learned prototypes, it can be trained to exhibit only a minimal reduction in accuracy relative to noninterpretable architectures. We apply the new model to two Earth science use cases: a synthetic dataset that loosely represents atmospheric high and low pressure systems, and atmospheric reanalysis fields to identify the state of tropical convective activity associated with the Madden–Julian oscillation. In both cases, we demonstrate that considering absolute location greatly improves testing accuracies when compared with a location-agnostic method. Furthermore, the network architecture identifies specific historical dates that capture multivariate, prototypical behavior of tropical climate variability.
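    The core mechanism the abstract describes can be illustrated with a short sketch: a prototype network scores an input by its similarity to a small set of learned prototype vectors, and the location-aware variant scores each prototype only at its own fixed grid position instead of at the best-matching position anywhere in the image. The NumPy sketch below is a minimal illustration under assumed names and shapes (patch_similarities, the (H, W, C) feature-map layout, and the log-ratio similarity are all illustrative assumptions, not the authors' implementation):

        import numpy as np

        def patch_similarities(feature_map, prototype):
            """Similarity of one prototype vector (C,) to every (row, col)
            position of an (H, W, C) feature map; high where the squared
            distance is small (log-ratio form often used in prototype nets)."""
            d2 = ((feature_map - prototype) ** 2).sum(axis=-1)  # (H, W)
            return np.log((d2 + 1.0) / (d2 + 1e-4))

        def location_agnostic_score(feature_map, prototype):
            # "This looks like that": the prototype may match anywhere.
            return patch_similarities(feature_map, prototype).max()

        def location_aware_score(feature_map, prototype, row, col):
            # "This looks like that THERE": the prototype is scored only at
            # its own absolute grid position, preserving geolocated structure.
            return patch_similarities(feature_map, prototype)[row, col]

        rng = np.random.default_rng(0)
        fmap = rng.normal(size=(8, 16, 4))   # e.g. coarse lat x lon grid, 4 channels
        proto = fmap[2, 5].copy()            # stand-in for a learned prototype
        print(location_agnostic_score(fmap, proto))     # high: match found somewhere
        print(location_aware_score(fmap, proto, 2, 5))  # high: match at the right place
        print(location_aware_score(fmap, proto, 0, 0))  # lower: wrong location

    On this hypothetical example, the two scoring rules agree only when the matching pattern sits at the prototype's own position, which is why tying prototypes to absolute location helps when, as the abstract notes, the images are geolocated physics-based fields.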

    URI: http://yetl.yabesh.ir/yetl1/handle/yetl/4290389
    Collections:
    • Artificial Intelligence for the Earth Systems

    Full item record

    contributor author: Elizabeth A. Barnes
    contributor author: Randal J. Barnes
    contributor author: Zane K. Martin
    contributor author: Jamin K. Rader
    date accessioned: 2023-04-12T18:52:18Z
    date available: 2023-04-12T18:52:18Z
    date copyright: 2022/07/01
    date issued: 2022
    identifier other: AIES-D-22-0001.1.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4290389
    publisher: American Meteorological Society
    title: This Looks Like That There: Interpretable Neural Networks for Image Tasks When Location Matters
    type: Journal Paper
    journal volume: 1
    journal issue: 3
    journal title: Artificial Intelligence for the Earth Systems
    identifier doi: 10.1175/AIES-D-22-0001.1
    tree: Artificial Intelligence for the Earth Systems; 2022; Volume 1; Issue 3
    contenttype: Fulltext