
contributor.author: Gilleland, Eric
contributor.author: Hering, Amanda S.
contributor.author: Fowler, Tressa L.
contributor.author: Brown, Barbara G.
date.accessioned: 2019-09-19T10:04:32Z
date.available: 2019-09-19T10:04:32Z
date.copyright: 4/9/2018
date.issued: 2018
identifier.other: mwr-d-17-0295.1.pdf
identifier.uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4261246
description.abstract: Which of two competing continuous forecasts is better? This question is often asked in forecast verification, as well as climate model evaluation. Traditional statistical tests seem to be well suited to the task of providing an answer. However, most such tests do not account for some of the special underlying circumstances that are prevalent in this domain. For example, model output is seldom independent in time, and the models being compared are geared to predicting the same state of the atmosphere, and thus they could be contemporaneously correlated with each other. These types of violations of the assumptions of independence required for most statistical tests can greatly impact the accuracy and power of these tests. Here, this effect is examined on simulated series for many common testing procedures, including two-sample and paired t and normal approximation z tests, the z test with a first-order variance inflation factor applied, and the newer Hering–Genton (HG) test, as well as several bootstrap methods. While it is known how most of these tests will behave in the face of temporal dependence, it is less clear how contemporaneous correlation will affect them. Moreover, it is worthwhile knowing just how badly the tests can fail so that if they are applied, reasonable conclusions can be drawn. It is found that the HG test is the most robust to both temporal dependence and contemporaneous correlation, as well as the specific type and strength of temporal dependence. Bootstrap procedures that account for temporal dependence stand up well to contemporaneous correlation and temporal dependence, but require large sample sizes to be accurate.
publisher: American Meteorological Society
title: Testing the Tests: What Are the Impacts of Incorrect Assumptions When Applying Confidence Intervals or Hypothesis Tests to Compare Competing Forecasts?
type: Journal Paper
journal.volume: 146
journal.issue: 6
journal.title: Monthly Weather Review
identifier.doi: 10.1175/MWR-D-17-0295.1
journal.firstpage: 1685
journal.lastpage: 1703
tree: Monthly Weather Review; 2018; volume 146; issue 006
contenttype: Fulltext
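The abstract describes how tests that assume independent observations can misbehave when forecast errors are temporally dependent, and mentions a z test with a first-order variance inflation factor as one correction. The following is a minimal sketch of that idea, not the paper's actual experimental setup: two contemporaneously correlated AR(1) "forecast error" series are simulated, a naive paired z statistic is computed on their loss differential, and the statistic is then deflated by a first-order variance inflation factor based on the lag-1 autocorrelation. All parameter values and helper names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def ar1_series(n, phi, innovations):
    """Build an AR(1) series x[t] = phi * x[t-1] + innovation[t]."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + innovations[t]
    return x

n, phi, rho = 500, 0.7, 0.5  # illustrative sample size, AR(1) coefficient, correlation

# Contemporaneously correlated innovations for the two "competing model" error series.
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
e1 = ar1_series(n, phi, z1)
e2 = ar1_series(n, phi, z2)

# Loss differential of absolute errors; here neither model is truly better.
d = np.abs(e1) - np.abs(e2)

# Naive paired z statistic that (wrongly) assumes independent observations.
z_naive = d.mean() / (d.std(ddof=1) / np.sqrt(n))

# First-order variance inflation factor from the lag-1 autocorrelation of d.
r1 = np.corrcoef(d[:-1], d[1:])[0, 1]
vif = (1 + r1) / (1 - r1)
z_vif = z_naive / np.sqrt(vif)

print(f"naive z = {z_naive:.2f}, lag-1 autocorr = {r1:.2f}, inflated z = {z_vif:.2f}")
```

With positive temporal dependence the lag-1 autocorrelation of the loss differential is positive, so the inflation factor exceeds one and shrinks the naive statistic toward zero, reducing the chance of falsely declaring one model better.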

