Strategies for Evaluating Quality Assurance Procedures
Source: Journal of Applied Meteorology and Climatology, 2008, Volume 47, Issue 6, Page 1785
DOI: 10.1175/2007JAMC1706.1
Publisher: American Meteorological Society
Abstract: The evaluation strategies outlined in this paper constitute a set of tools beneficial to the development and documentation of robust automated quality assurance (QA) procedures. Traditionally, thresholds for the QA of climate data have been based on target flag rates or statistical confidence limits. However, these approaches do not necessarily quantify a procedure's effectiveness at detecting true errors in the data. Rather, as illustrated by way of an "extremes check" for daily precipitation totals, information on the performance of a QA test is best obtained through a systematic manual inspection of samples of flagged values combined with a careful analysis of geographical and seasonal patterns of flagged observations. Such an evaluation process not only helps to document the effectiveness of each individual test, but, when applied repeatedly throughout the development process, it also aids in choosing the optimal combination of QA procedures and associated thresholds. In addition, the approach described here constitutes a mechanism for reassessing system performance whenever revisions are made following initial development.
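To make the contrast between threshold tuning and manual evaluation concrete, the sketch below implements a simple fixed-threshold extremes check for daily precipitation totals, computes the resulting flag rate, and draws a random sample of flagged values for inspection. This is an illustrative Python sketch under assumptions, not the procedure from the paper: the function names, the 300 mm threshold, and the synthetic gamma-distributed data are all invented for demonstration.

```python
import numpy as np

def extremes_check(precip_mm, threshold_mm):
    """Flag daily precipitation totals exceeding a fixed threshold (mm).
    Returns a boolean mask of flagged observations."""
    precip = np.asarray(precip_mm, dtype=float)
    return precip > threshold_mm

def flag_rate(flags):
    """Fraction of observations flagged -- the traditional tuning target."""
    flags = np.asarray(flags, dtype=bool)
    return flags.mean() if flags.size else 0.0

def sample_for_manual_review(values, flags, n=20, seed=0):
    """Draw a random sample of flagged values for systematic manual
    inspection, the evaluation step the abstract recommends over
    tuning to flag rates alone."""
    rng = np.random.default_rng(seed)
    flagged = np.flatnonzero(np.asarray(flags, dtype=bool))
    chosen = rng.choice(flagged, size=min(n, flagged.size), replace=False)
    return [(int(i), float(values[i])) for i in sorted(chosen)]

# Example: one synthetic station-year of daily totals (mm);
# the 900 mm value plays the role of a gross error.
rng = np.random.default_rng(42)
daily = rng.gamma(shape=0.4, scale=8.0, size=365)
daily[100] = 900.0
flags = extremes_check(daily, threshold_mm=300.0)
print(f"flag rate: {flag_rate(flags):.4f}")
print("sample for review:", sample_for_manual_review(daily, flags))
```

Tuning only the printed flag rate treats every flag alike; the sampled values are what a systematic manual inspection would actually examine to decide whether the flags mark true errors.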
contributor author | Durre, Imke
contributor author | Menne, Matthew J.
contributor author | Vose, Russell S.
date accessioned | 2017-06-09T16:18:19Z
date available | 2017-06-09T16:18:19Z
date copyright | 2008/06/01
date issued | 2008
identifier issn | 1558-8424
identifier other | ams-65389.pdf
identifier uri | http://onlinelibrary.yabesh.ir/handle/yetl/4206608
publisher | American Meteorological Society
title | Strategies for Evaluating Quality Assurance Procedures
type | Journal Paper
journal volume | 47
journal issue | 6
journal title | Journal of Applied Meteorology and Climatology
identifier doi | 10.1175/2007JAMC1706.1
journal firstpage | 1785
journal lastpage | 1791
tree | Journal of Applied Meteorology and Climatology; 2008; Volume 47; Issue 6
contenttype | Fulltext