
contributor author: Michaelsen, Joel
date accessioned: 2017-06-09T14:02:04Z
date available: 2017-06-09T14:02:04Z
date copyright: 1987/11/01
date issued: 1987
identifier issn: 0733-3021
identifier other: ams-11263.pdf
identifier uri: http://onlinelibrary.yabesh.ir/handle/yetl/4146472
description abstract: Cross-validation is a statistical procedure that produces an estimate of forecast skill which is less biased than the usual hindcast skill estimates. The cross-validation method systematically deletes one or more cases in a dataset, derives a forecast model from the remaining cases, and tests it on the deleted case or cases. The procedure is nonparametric and can be applied to any automated model-building technique. It can also provide important diagnostic information about influential cases in the dataset and the stability of the model. Two experiments were conducted using cross-validation to estimate forecast skill in different predictive models of North Pacific sea surface temperatures (SSTs). The results indicate that bias, or artificial predictability (defined here as the difference between the usual hindcast skill and the forecast skill estimated by cross-validation), increases with each decision drawn from the data, whether screening potential predictors or fixing the value of a coefficient. Bias introduced by variable screening depends on the size of the pool of potential predictors, while bias produced by fitting coefficients depends on the number of coefficients. The results also indicate that winter SSTs are predictable with a skill of about 20%–25%. Several models were compared. More flexible models, which allow the data to guide variable selection, generally show poorer skill than relatively inflexible models that use a priori variable selection. The cross-validation estimates of artificial skill were compared with estimates derived from other methods. Davis and Chelton's method showed close agreement with the cross-validation results for a priori models. However, Monte Carlo estimates and cross-validation estimates do not agree well in the case of predictor-screening models.
The results of this study indicate that the amount of artificial skill depends on the amount of true skill, so Monte Carlo techniques which assume no true skill cannot be expected to perform well when there is some true skill.
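The leave-one-out procedure described in the abstract, and the associated definition of artificial skill as hindcast skill minus cross-validated skill, can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's actual models; all function and variable names are my own, and ordinary least squares stands in for whichever forecast model is being validated.

```python
import numpy as np

def cross_validated_skill(X, y, fit, predict):
    """Leave-one-out cross-validation: for each case, delete it,
    refit the model on the remaining cases, and forecast the
    deleted case. Skill is the fraction of variance explained."""
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i           # delete case i
        coef = fit(X[mask], y[mask])       # refit on the rest
        preds[i] = predict(X[i], coef)     # forecast the deleted case
    return 1.0 - np.sum((y - preds) ** 2) / np.sum((y - y.mean()) ** 2)

def ols_fit(X, y):
    # ordinary least squares with an intercept column
    A = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def ols_predict(x, coef):
    return coef[0] + np.dot(np.atleast_1d(x), coef[1:])

# synthetic example: one predictor with some true signal plus noise
rng = np.random.default_rng(0)
x = rng.normal(size=40)
y = 0.5 * x + rng.normal(scale=1.0, size=40)
X = x[:, None]

skill = cross_validated_skill(X, y, ols_fit, ols_predict)

# hindcast skill: fit once on all cases and score in-sample
coef = ols_fit(X, y)
fitted = np.array([ols_predict(xi, coef) for xi in X])
hindcast_skill = 1.0 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)

# artificial skill: the bias the abstract defines
artificial_skill = hindcast_skill - skill
```

The in-sample hindcast skill exceeds the cross-validated estimate, and the gap (artificial skill) grows as more data-driven decisions, such as predictor screening, enter the model-building procedure.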
publisher: American Meteorological Society
title: Cross-Validation in Statistical Climate Forecast Models
type: Journal Paper
journal volume: 26
journal issue: 11
journal title: Journal of Climate and Applied Meteorology
identifier doi: 10.1175/1520-0450(1987)026<1589:CVISCF>2.0.CO;2
journal firstpage: 1589
journal lastpage: 1600
tree: Journal of Climate and Applied Meteorology; 1987; Volume 026; Issue 011
contenttype: Fulltext

