YaBeSH Engineering and Technology Library


    Testing the Tests: What Are the Impacts of Incorrect Assumptions When Applying Confidence Intervals or Hypothesis Tests to Compare Competing Forecasts?

    Source: Monthly Weather Review, 2018, Volume 146, Issue 6, Page 1685
    Author: Gilleland, Eric; Hering, Amanda S.; Fowler, Tressa L.; Brown, Barbara G.
    DOI: 10.1175/MWR-D-17-0295.1
    Publisher: American Meteorological Society
    Abstract: Which of two competing continuous forecasts is better? This question is often asked in forecast verification, as well as climate model evaluation. Traditional statistical tests seem to be well suited to the task of providing an answer. However, most such tests do not account for some of the special underlying circumstances that are prevalent in this domain. For example, model output is seldom independent in time, and the models being compared are geared to predicting the same state of the atmosphere, and thus they could be contemporaneously correlated with each other. These types of violations of the assumptions of independence required for most statistical tests can greatly impact the accuracy and power of these tests. Here, this effect is examined on simulated series for many common testing procedures, including two-sample and paired t and normal approximation z tests, the z test with a first-order variance inflation factor applied, and the newer Hering-Genton (HG) test, as well as several bootstrap methods. While it is known how most of these tests will behave in the face of temporal dependence, it is less clear how contemporaneous correlation will affect them. Moreover, it is worthwhile knowing just how badly the tests can fail so that if they are applied, reasonable conclusions can be drawn. It is found that the HG test is the most robust to both temporal dependence and contemporaneous correlation, as well as the specific type and strength of temporal dependence. Bootstrap procedures that account for temporal dependence stand up well to contemporaneous correlation and temporal dependence, but require large sample sizes to be accurate.
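    The abstract describes comparing two forecasts with paired t / z tests and bootstrap methods when the error series are temporally dependent and contemporaneously correlated. The Python sketch below is not taken from the paper; it only illustrates that setup under assumed values (series length, AR(1) coefficient, cross-correlation, and block length are all hypothetical): it simulates two correlated, autocorrelated error series, applies a naive paired t-test to the loss differential, and contrasts it with a circular block bootstrap interval that respects the temporal dependence.

        # Minimal sketch (not the authors' code) of a paired comparison of two
        # forecasts whose errors are autocorrelated and contemporaneously correlated.
        # All parameter values below are illustrative assumptions.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n = 500        # series length (assumed)
        phi = 0.6      # AR(1) coefficient giving temporal dependence (assumed)
        rho = 0.8      # contemporaneous correlation of the two error series (assumed)

        # Simulate two AR(1) forecast-error series with correlated innovations.
        cov = np.array([[1.0, rho], [rho, 1.0]])
        innov = rng.multivariate_normal(np.zeros(2), cov, size=n)
        err = np.zeros((n, 2))
        for t in range(1, n):
            err[t] = phi * err[t - 1] + innov[t]

        # Loss differential: squared error of forecast 1 minus forecast 2.
        d = err[:, 0] ** 2 - err[:, 1] ** 2

        # Naive paired t-test, written as a one-sample test on the loss
        # differential; it ignores both kinds of dependence.
        t_stat, p_naive = stats.ttest_1samp(d, 0.0)

        # Circular block bootstrap of the mean loss differential, which keeps
        # blocks of consecutive values together to preserve temporal dependence.
        block = 25                      # block length (assumed)
        n_boot = 2000
        means = np.empty(n_boot)
        for b in range(n_boot):
            starts = rng.integers(0, n, size=int(np.ceil(n / block)))
            idx = np.concatenate([(s + np.arange(block)) % n for s in starts])[:n]
            means[b] = d[idx].mean()
        ci = np.percentile(means, [2.5, 97.5])

        print(f"naive paired t-test p-value: {p_naive:.3f}")
        print(f"block-bootstrap 95% CI for mean loss differential: [{ci[0]:.3f}, {ci[1]:.3f}]")

    Because the two error series track the same atmosphere, the loss differential is the natural quantity to test; the contrast between the naive p-value and the bootstrap interval is the kind of effect the paper quantifies systematically.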
    • Download: (1.455 MB)

    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/4261246
    Collections
    • Monthly Weather Review

    Full item record

    contributor author: Gilleland, Eric
    contributor author: Hering, Amanda S.
    contributor author: Fowler, Tressa L.
    contributor author: Brown, Barbara G.
    date accessioned: 2019-09-19T10:04:32Z
    date available: 2019-09-19T10:04:32Z
    date copyright: 4/9/2018 12:00:00 AM
    date issued: 2018
    identifier other: mwr-d-17-0295.1.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4261246
    publisher: American Meteorological Society
    title: Testing the Tests: What Are the Impacts of Incorrect Assumptions When Applying Confidence Intervals or Hypothesis Tests to Compare Competing Forecasts?
    type: Journal Paper
    journal volume: 146
    journal issue: 6
    journal title: Monthly Weather Review
    identifier doi: 10.1175/MWR-D-17-0295.1
    journal firstpage: 1685
    journal lastpage: 1703
    tree: Monthly Weather Review; 2018; Volume 146; Issue 006
    contenttype: Fulltext