On “Field Significance” and the False Discovery Rate
Source: Journal of Applied Meteorology and Climatology; 2006; volume 45; issue 9; page 1181
Author: Wilks, D. S.
DOI: 10.1175/JAM2404.1
Publisher: American Meteorological Society
Abstract: The conventional approach to evaluating the joint statistical significance of multiple hypothesis tests (i.e., “field,” or “global,” significance) in meteorology and climatology is to count the number of individual (or “local”) tests yielding nominally significant results and then to judge the unusualness of this integer value in the context of the distribution of such counts that would occur if all local null hypotheses were true. The sensitivity (i.e., statistical power) of this approach is potentially compromised both by the discrete nature of the test statistic and by the fact that the approach ignores the confidence with which locally significant tests reject their null hypotheses. An alternative global test statistic that has neither of these problems is the minimum p value among all of the local tests. Evaluation of field significance using the minimum local p value as the global test statistic, which is also known as the Walker test, has strong connections to the joint evaluation of multiple tests in a way that controls the “false discovery rate” (FDR, or the expected fraction of local null hypothesis rejections that are incorrect). In particular, using the minimum local p value to evaluate field significance at a level αglobal is nearly equivalent to the slightly more powerful global test based on the FDR criterion. An additional advantage shared by Walker’s test and the FDR approach is that both are robust to spatial dependence within the field of tests. The FDR method not only provides a more broadly applicable and generally more powerful field significance test than the conventional counting procedure but also allows better identification of locations with significant differences, because fewer than αglobal × 100% (on average) of apparently significant local tests will have resulted from local null hypotheses that are true.
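To make the two global tests in the abstract concrete, here is a minimal sketch (illustrative, not code from the paper) contrasting the Walker test, which rejects the global null hypothesis when the smallest of K local p values is sufficiently extreme, with a Benjamini-Hochberg-style FDR criterion applied to the same field of p values. The function names, the K = 100 synthetic p values, and αglobal = 0.05 are assumptions for illustration.

```python
# Minimal sketch (illustrative, not code from the paper) of the two global
# tests described in the abstract, applied to a field of K local p values.
import numpy as np

def walker_test(p_local, alpha_global=0.05):
    """Walker test: reject the global null hypothesis if the smallest local
    p value satisfies p_min <= 1 - (1 - alpha_global)**(1/K)."""
    K = p_local.size
    p_walker = 1.0 - (1.0 - alpha_global) ** (1.0 / K)
    return p_local.min() <= p_walker

def fdr_rejections(p_local, alpha_global=0.05):
    """FDR (Benjamini-Hochberg) criterion: sort the local p values, find the
    largest rank j with p_(j) <= alpha_global * j / K, and reject every local
    null hypothesis whose p value is at or below that threshold."""
    K = p_local.size
    order = np.argsort(p_local)
    sorted_p = p_local[order]
    below = sorted_p <= alpha_global * np.arange(1, K + 1) / K
    reject = np.zeros(K, dtype=bool)
    if below.any():
        j_star = np.nonzero(below)[0].max()    # largest qualifying rank (0-based)
        reject[order[: j_star + 1]] = True     # reject all p values up to that rank
    return reject

# Hypothetical field of K = 100 local tests, a few with genuinely small p values.
rng = np.random.default_rng(0)
p_field = rng.uniform(size=100)
p_field[:5] *= 1e-3
print("Walker test rejects global null:", walker_test(p_field))
local_rejections = fdr_rejections(p_field)
print("Number of FDR local rejections:", local_rejections.sum())
print("Field significant under FDR criterion:", local_rejections.any())
```

Under this sketch, field significance under the FDR criterion corresponds to at least one local rejection, while the set of rejected locations itself gives the improved identification of significant grid points that the abstract describes.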
| contributor author | Wilks, D. S. |
| date accessioned | 2017-06-09T16:47:59Z |
| date available | 2017-06-09T16:47:59Z |
| date copyright | 2006/09/01 |
| date issued | 2006 |
| identifier issn | 1558-8424 |
| identifier other | ams-74337.pdf |
| identifier uri | http://onlinelibrary.yabesh.ir/handle/yetl/4216551 |
| publisher | American Meteorological Society |
| title | On “Field Significance” and the False Discovery Rate |
| type | Journal Paper |
| journal volume | 45 |
| journal issue | 9 |
| journal title | Journal of Applied Meteorology and Climatology |
| identifier doi | 10.1175/JAM2404.1 |
| journal firstpage | 1181 |
| journal lastpage | 1189 |
| tree | Journal of Applied Meteorology and Climatology; 2006; volume 45; issue 9 |
| contenttype | Fulltext |