Summary Verification Measures and Their Interpretation for Ensemble Forecasts
Source: Monthly Weather Review, 2011, Volume 139, Issue 9, Pages 3075–3089
DOI: 10.1175/2010MWR3305.1
Publisher: American Meteorological Society
Abstract: Ensemble prediction systems produce forecasts that represent the probability distribution of a continuous forecast variable. Most often, the verification problem is simplified by transforming the ensemble forecast into probability forecasts for discrete events, where the events are defined by one or more threshold values. Skill is then evaluated using the mean-square error (MSE; i.e., Brier) skill score for binary events, or the ranked probability skill score (RPSS) for multicategory events. A framework is introduced that generalizes this approach by describing the forecast quality of ensemble forecasts as a continuous function of the threshold value. Viewing ensemble forecast quality this way leads to the interpretation of the RPSS and the continuous ranked probability skill score (CRPSS) as measures of weighted-average skill over the threshold values. It also motivates additional measures, derived to summarize other features of a continuous forecast quality function, which can be interpreted as descriptions of the function's geometric shape. The measures can be computed not only for skill, but also for skill score decompositions, which characterize the resolution, reliability, discrimination, and other aspects of forecast quality. Collectively, they provide convenient metrics for comparing the performance of an ensemble prediction system at different locations, lead times, or issuance times, or for comparing alternative forecasting systems.
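To make the framework described in the abstract concrete, below is a minimal numerical sketch in Python. It is not the authors' code; the data are synthetic stand-ins and all variable names are illustrative assumptions. The sketch evaluates the Brier score of an ensemble forecast as a continuous function of the event threshold, integrates that curve to recover the mean CRPS (using the identity that the CRPS is the integral of the Brier score over all thresholds), and forms the CRPSS as the corresponding weighted average of the threshold-wise Brier skill scores.

```python
import numpy as np

# Synthetic stand-in data (not from the paper): 500 forecast cases sharing a
# predictable signal, 20 ensemble members each, and verifying observations.
rng = np.random.default_rng(42)
signal = rng.normal(size=500)
observed = signal + 0.5 * rng.normal(size=500)
ensemble = signal[:, None] + 0.5 * rng.normal(size=(500, 20))

def brier_score(ens, obs, threshold):
    """Mean Brier score for the binary event {x <= threshold}, taking the
    forecast probability as the ensemble relative frequency."""
    p = (ens <= threshold).mean(axis=1)     # forecast probabilities
    o = (obs <= threshold).astype(float)    # 1 where the event occurred
    return np.mean((p - o) ** 2)

# Forecast quality as a continuous function of the threshold value.
thresholds = np.linspace(-4.0, 4.0, 161)
bs_curve = np.array([brier_score(ensemble, observed, t) for t in thresholds])

# Integrating the Brier score curve over thresholds (trapezoid rule)
# recovers the mean CRPS, since CRPS = integral of BS(t) dt.
dt = np.diff(thresholds)
crps = np.sum(0.5 * (bs_curve[:-1] + bs_curve[1:]) * dt)

# Threshold-wise skill versus climatology: BSS(t) = 1 - BS(t)/BS_clim(t),
# where climatology forecasts the observed base rate at every threshold.
base_rate = np.array([(observed <= t).mean() for t in thresholds])
bs_clim = base_rate * (1.0 - base_rate)
with np.errstate(divide="ignore", invalid="ignore"):
    bss_curve = np.where(bs_clim > 0, 1.0 - bs_curve / bs_clim, np.nan)

# CRPSS = 1 - CRPS/CRPS_clim, i.e., a weighted average of BSS(t) with
# weights proportional to the climatological Brier score at each threshold.
crps_clim = np.sum(0.5 * (bs_clim[:-1] + bs_clim[1:]) * dt)
crpss = 1.0 - crps / crps_clim

print(f"mean CRPS (integrated Brier scores): {crps:.4f}")
print(f"BSS at the central threshold (t=0):  {bss_curve[len(thresholds) // 2]:.4f}")
print(f"CRPSS vs. climatology:               {crpss:.4f}")
```

Because averaging over forecast cases and integrating over thresholds commute, the integrated Brier score curve reproduces the mean CRPS up to quadrature error, and the CRPSS emerges as a weighted-average skill over threshold values, which is the interpretation the abstract highlights.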
contributor author | Bradley, A. Allen
contributor author | Schwartz, Stuart S.
date accessioned | 2017-06-09T16:37:56Z
date available | 2017-06-09T16:37:56Z
date copyright | 2011/09/01
date issued | 2011
identifier issn | 0027-0644
identifier other | ams-71285.pdf
identifier uri | http://onlinelibrary.yabesh.ir/handle/yetl/4213160
publisher | American Meteorological Society
title | Summary Verification Measures and Their Interpretation for Ensemble Forecasts
type | Journal Paper
journal volume | 139
journal issue | 9
journal title | Monthly Weather Review
identifier doi | 10.1175/2010MWR3305.1
journal firstpage | 3075
journal lastpage | 3089
tree | Monthly Weather Review; 2011; Volume 139; Issue 9
contenttype | Fulltext