YaBeSH Engineering and Technology Library


    Comparing Forecast Skill

    Source: Monthly Weather Review, 2014, Volume 142, Issue 12, page 4658
    Author: DelSole, Timothy; Tippett, Michael K.
    DOI: 10.1175/MWR-D-14-00045.1
    Publisher: American Meteorological Society
    Abstract: A basic question in forecasting is whether one prediction system is more skillful than another. Some commonly used statistical significance tests cannot answer this question correctly if the skills are computed on a common period or using a common set of observations, because these tests do not account for correlations between sample skill estimates. Furthermore, the results of these tests are biased toward indicating no difference in skill, a fact that has important consequences for forecast improvement. This paper shows that the magnitude of bias is characterized by a few parameters such as sample size and correlation between forecasts and their errors, which, surprisingly, can be estimated from data. The bias is substantial for typical seasonal forecasts, implying that familiar tests may wrongly judge that differences in seasonal forecast skill are insignificant. Four tests that are appropriate for assessing differences in skill over a common period are reviewed. These tests are based on the sign test, the Wilcoxon signed-rank test, the Morgan-Granger-Newbold test, and a permutation test. These techniques are applied to ENSO hindcasts from the North American Multimodel Ensemble and reveal that the Climate Forecast System, version 2, and the Canadian Climate Model, version 3 (CanCM3), outperform other models in the sense that their squared error is less than that of other single models more frequently. It should be recognized that while certain models may be superior in a certain sense for a particular period and variable, combinations of forecasts are often significantly more skillful than a single model alone. In fact, the multimodel mean significantly outperforms all single models.
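    The paired tests the abstract reviews all compare two forecast systems case by case on the same verification data, rather than comparing two independently computed skill scores. A minimal stdlib-only sketch of the simplest of the four, the sign test, is below; the synthetic observations and model names are invented for illustration and are not from the paper.

    ```python
    import math
    import random

    def sign_test(err_a, err_b):
        """Two-sided sign test on paired squared errors.

        Counts the cases in which system A's squared error is smaller than
        system B's on the same verification case, and asks whether that count
        is consistent with a fair coin (the null of equal skill). Pairing the
        errors case by case is what accounts for the correlation induced by
        the common observations.
        """
        diffs = [a - b for a, b in zip(err_a, err_b) if a != b]  # drop ties
        n = len(diffs)
        k = sum(d < 0 for d in diffs)  # cases where A beats B
        tail = min(k, n - k)
        # exact two-sided binomial p-value with success probability 0.5
        p = 2 * sum(math.comb(n, i) for i in range(tail + 1)) / 2**n
        return k, n, min(1.0, p)

    # toy example: two hypothetical systems verified on a common period
    random.seed(0)
    obs = [random.gauss(0, 1) for _ in range(30)]
    fc_a = [o + random.gauss(0, 0.5) for o in obs]  # hypothetical system A
    fc_b = [o + random.gauss(0, 0.8) for o in obs]  # hypothetical, noisier system B
    err_a = [(f - o) ** 2 for f, o in zip(fc_a, obs)]
    err_b = [(f - o) ** 2 for f, o in zip(fc_b, obs)]
    wins, n, p = sign_test(err_a, err_b)
    print(f"A beats B in {wins}/{n} cases, two-sided p = {p:.3f}")
    ```

    The same pairing idea underlies the Wilcoxon signed-rank and permutation tests the paper reviews; they differ only in how the paired error differences are weighted and resampled.
    
    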
    • Download: (1.715Mb)
    • Show Full Metadata
    • Item Order
    • Go To Publisher
    • Price: 5000 Rial
    • Statistics


    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/4230443
    Collections
    • Monthly Weather Review

    Show full item record

    contributor author: DelSole, Timothy
    contributor author: Tippett, Michael K.
    date accessioned: 2017-06-09T17:32:00Z
    date available: 2017-06-09T17:32:00Z
    date copyright: 2014/12/01
    date issued: 2014
    identifier issn: 0027-0644
    identifier other: ams-86841.pdf
    identifier uri: http://onlinelibrary.yabesh.ir/handle/yetl/4230443
    description abstract: A basic question in forecasting is whether one prediction system is more skillful than another. Some commonly used statistical significance tests cannot answer this question correctly if the skills are computed on a common period or using a common set of observations, because these tests do not account for correlations between sample skill estimates. Furthermore, the results of these tests are biased toward indicating no difference in skill, a fact that has important consequences for forecast improvement. This paper shows that the magnitude of bias is characterized by a few parameters such as sample size and correlation between forecasts and their errors, which, surprisingly, can be estimated from data. The bias is substantial for typical seasonal forecasts, implying that familiar tests may wrongly judge that differences in seasonal forecast skill are insignificant. Four tests that are appropriate for assessing differences in skill over a common period are reviewed. These tests are based on the sign test, the Wilcoxon signed-rank test, the Morgan-Granger-Newbold test, and a permutation test. These techniques are applied to ENSO hindcasts from the North American Multimodel Ensemble and reveal that the Climate Forecast System, version 2, and the Canadian Climate Model, version 3 (CanCM3), outperform other models in the sense that their squared error is less than that of other single models more frequently. It should be recognized that while certain models may be superior in a certain sense for a particular period and variable, combinations of forecasts are often significantly more skillful than a single model alone. In fact, the multimodel mean significantly outperforms all single models.
    publisher: American Meteorological Society
    title: Comparing Forecast Skill
    type: Journal Paper
    journal volume: 142
    journal issue: 12
    journal title: Monthly Weather Review
    identifier doi: 10.1175/MWR-D-14-00045.1
    journal firstpage: 4658
    journal lastpage: 4678
    tree: Monthly Weather Review; 2014; Volume 142; Issue 12
    contenttype: Fulltext
    DSpace software copyright © 2002-2015 DuraSpace
    The "DSpace" digital library software was localized into Persian by YaBeSH for Iranian libraries | Contact YaBeSH
     