YaBeSH Engineering and Technology Library


    Summary Verification Measures and Their Interpretation for Ensemble Forecasts

    Source: Monthly Weather Review, 2010, Volume 139, Issue 9, Page 3075
    Author: Bradley, A. Allen; Schwartz, Stuart S.
    DOI: 10.1175/2010MWR3305.1
    Publisher: American Meteorological Society
    Abstract: Ensemble prediction systems produce forecasts that represent the probability distribution of a continuous forecast variable. Most often, the verification problem is simplified by transforming the ensemble forecast into probability forecasts for discrete events, where the events are defined by one or more threshold values. Then, skill is evaluated using the mean-square error (MSE; i.e., Brier) skill score for binary events, or the ranked probability skill score (RPSS) for multicategory events. A framework is introduced that generalizes this approach, by describing the forecast quality of ensemble forecasts as a continuous function of the threshold value. Viewing ensemble forecast quality this way leads to the interpretation of the RPSS and the continuous ranked probability skill score (CRPSS) as measures of the weighted-average skill over the threshold values. It also motivates additional measures, derived to summarize other features of a continuous forecast quality function, which can be interpreted as descriptions of the function's geometric shape. The measures can be computed not only for skill, but also for skill score decompositions, which characterize the resolution, reliability, discrimination, and other aspects of forecast quality. Collectively, they provide convenient metrics for comparing the performance of an ensemble prediction system at different locations, lead times, or issuance times, or for comparing alternative forecasting systems.
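    The relationship the abstract describes — threshold-based Brier scores on one hand, and the continuous ranked probability score (CRPS) as their integral over all thresholds — can be illustrated with a short sketch. This is not the authors' code; it is a minimal NumPy example using the standard empirical-CDF Brier score and the well-known "energy" form of the CRPS, with a synthetic Gaussian ensemble standing in for a real forecast.

    ```python
    import numpy as np

    def brier_score(ensemble, obs, threshold):
        """Brier score for the event 'value <= threshold', using the
        ensemble fraction at or below the threshold as the forecast
        probability (the empirical CDF evaluated at the threshold)."""
        p = np.mean(ensemble <= threshold)   # forecast probability
        o = float(obs <= threshold)          # binary outcome
        return (p - o) ** 2

    def crps(ensemble, obs):
        """CRPS of a single ensemble forecast in its 'energy' form:
        E|X - y| - (1/2) E|X - X'|, with expectations over members."""
        m = len(ensemble)
        term1 = np.mean(np.abs(ensemble - obs))
        term2 = np.sum(np.abs(ensemble[:, None] - ensemble[None, :])) / (2 * m**2)
        return term1 - term2

    # The CRPS equals the Brier score integrated over all thresholds,
    # so a fine-grid numerical integral of the Brier-score curve should
    # closely reproduce the closed-form value.
    rng = np.random.default_rng(0)
    ens = rng.normal(0.0, 1.0, size=50)   # synthetic 50-member ensemble
    y = 0.3                               # synthetic verifying observation

    grid = np.linspace(-10.0, 10.0, 20001)
    bs_curve = np.array([brier_score(ens, y, t) for t in grid])
    integral = np.sum(0.5 * (bs_curve[:-1] + bs_curve[1:]) * np.diff(grid))

    print(crps(ens, y), integral)   # the two values agree closely
    ```

    Plotting `bs_curve` against `grid` gives exactly the kind of continuous forecast-quality function the paper's framework works with; the CRPS is its (unweighted) area, which is why the CRPSS can be read as a weighted-average skill over thresholds.
    
    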

    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/4213160
    Collections
    • Monthly Weather Review


    contributor author: Bradley, A. Allen
    contributor author: Schwartz, Stuart S.
    date accessioned: 2017-06-09T16:37:56Z
    date available: 2017-06-09T16:37:56Z
    date copyright: 2011/09/01
    date issued: 2010
    identifier issn: 0027-0644
    identifier other: ams-71285.pdf
    identifier uri: http://onlinelibrary.yabesh.ir/handle/yetl/4213160
    description abstract: Ensemble prediction systems produce forecasts that represent the probability distribution of a continuous forecast variable. Most often, the verification problem is simplified by transforming the ensemble forecast into probability forecasts for discrete events, where the events are defined by one or more threshold values. Then, skill is evaluated using the mean-square error (MSE; i.e., Brier) skill score for binary events, or the ranked probability skill score (RPSS) for multicategory events. A framework is introduced that generalizes this approach, by describing the forecast quality of ensemble forecasts as a continuous function of the threshold value. Viewing ensemble forecast quality this way leads to the interpretation of the RPSS and the continuous ranked probability skill score (CRPSS) as measures of the weighted-average skill over the threshold values. It also motivates additional measures, derived to summarize other features of a continuous forecast quality function, which can be interpreted as descriptions of the function's geometric shape. The measures can be computed not only for skill, but also for skill score decompositions, which characterize the resolution, reliability, discrimination, and other aspects of forecast quality. Collectively, they provide convenient metrics for comparing the performance of an ensemble prediction system at different locations, lead times, or issuance times, or for comparing alternative forecasting systems.
    publisher: American Meteorological Society
    title: Summary Verification Measures and Their Interpretation for Ensemble Forecasts
    type: Journal Paper
    journal volume: 139
    journal issue: 9
    journal title: Monthly Weather Review
    identifier doi: 10.1175/2010MWR3305.1
    journal first page: 3075
    journal last page: 3089
    tree: Monthly Weather Review, 2010, Volume 139, Issue 9
    contenttype: Fulltext
    DSpace software copyright © 2002-2015  DuraSpace
    The "DSpace" digital library software, localized into Persian by Yabesh for Iranian libraries | Contact Yabesh
     