Sensitivity of Ensemble Forecast Verification to Model Bias
Source: Monthly Weather Review, 2018, Volume 146, Issue 3, page 781
DOI: 10.1175/MWR-D-17-0223.1
Publisher: American Meteorological Society
Abstract: This study demonstrates how model bias can adversely affect the quality assessment of an ensemble prediction system (EPS) by verification metrics. A regional EPS [Global and Regional Assimilation and Prediction Enhanced System-Regional Ensemble Prediction System (GRAPES-REPS)] was verified over a period of one month over China. Three variables (500-hPa and 2-m temperatures, and 250-hPa wind) are selected to represent "strong" and "weak" bias situations. Ensemble spread and probabilistic forecasts are compared before and after a bias correction. The results show that the conclusions drawn from ensemble verification about the EPS are dramatically different with or without model bias. This is true for both ensemble spread and probabilistic forecasts. The GRAPES-REPS is severely underdispersive before the bias correction but becomes calibrated afterward, although the improvement in the spread's spatial structure is much less; the spread-skill relation is also improved. The probabilities become much sharper and almost perfectly reliable after the bias is removed. Therefore, it is necessary to remove forecast biases before an EPS can be accurately evaluated, since an EPS deals only with random error, not systematic error. Only when an EPS has little or no forecast bias can ensemble verification metrics reliably reveal its true quality without a prior bias correction. An implication is that EPS developers should not be expected to introduce methods that dramatically increase ensemble spread (whether by perturbation method or statistical calibration) to achieve reliability. Instead, the preferred solution is to reduce model bias through prediction system development and to focus on the quality of spread rather than the quantity of spread. Forecast products should also be produced from the debiased ensemble, not the raw ensemble.
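The abstract's central point, that bias correction leaves ensemble spread unchanged while shrinking ensemble-mean error, so the spread-skill relation improves, can be illustrated with a minimal sketch. This is not the paper's verification procedure; it is a toy mean-bias correction on synthetic data (all array shapes, the bias value, and the noise levels are hypothetical):

```python
import numpy as np

# Toy setup (hypothetical): 10 members, 30 times, 50 grid points,
# with a constant systematic error shared by all members.
rng = np.random.default_rng(0)
n_members, n_times, n_points = 10, 30, 50

truth = rng.normal(0.0, 1.0, size=(n_times, n_points))
bias = 2.0  # systematic (model) error
forecast = (truth[None, :, :] + bias
            + rng.normal(0.0, 0.5, size=(n_members, n_times, n_points)))

# Simple mean-bias correction: estimate the time/space-mean error of the
# ensemble mean and subtract it from every member.
mean_error = (forecast.mean(axis=0) - truth).mean()
debiased = forecast - mean_error

def spread_and_rmse(fcst):
    """Mean ensemble spread (std about the ensemble mean) and
    RMSE of the ensemble mean against truth."""
    ens_mean = fcst.mean(axis=0)
    spread = fcst.std(axis=0, ddof=1).mean()
    rmse = np.sqrt(((ens_mean - truth) ** 2).mean())
    return spread, rmse

s_raw, r_raw = spread_and_rmse(forecast)
s_db, r_db = spread_and_rmse(debiased)

# A constant shift cannot change spread, but it removes the systematic
# part of the error, so the spread/RMSE ratio moves toward 1: the raw
# ensemble looks "underdispersive" only because of the bias.
print(f"raw:      spread={s_raw:.3f}  rmse={r_raw:.3f}  ratio={s_raw/r_raw:.2f}")
print(f"debiased: spread={s_db:.3f}  rmse={r_db:.3f}  ratio={s_db/r_db:.2f}")
```

The design point mirrors the abstract: inflating spread to match the biased RMSE would "calibrate" the wrong quantity, whereas removing the systematic error lets the existing spread be judged on its own merits.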
contributor author | Wang, Jingzhuo | |
contributor author | Chen, Jing | |
contributor author | Du, Jun | |
contributor author | Zhang, Yutao | |
contributor author | Xia, Yu | |
contributor author | Deng, Guo | |
date accessioned | 2019-09-19T10:04:18Z | |
date available | 2019-09-19T10:04:18Z | |
date copyright | 2018-02-06 | |
date issued | 2018 | |
identifier other | mwr-d-17-0223.1.pdf | |
identifier uri | http://yetl.yabesh.ir/yetl1/handle/yetl/4261208 | |
publisher | American Meteorological Society | |
title | Sensitivity of Ensemble Forecast Verification to Model Bias | |
type | Journal Paper | |
journal volume | 146 | |
journal issue | 3 | |
journal title | Monthly Weather Review | |
identifier doi | 10.1175/MWR-D-17-0223.1 | |
journal firstpage | 781 | |
journal lastpage | 796 | |
tree | Monthly Weather Review; 2018; Volume 146; Issue 3 |
contenttype | Fulltext |