YaBeSH Engineering and Technology Library


    Measuring the Performance of Data Validators

    Source: Bulletin of the American Meteorological Society; 1988; Volume 069, Issue 012; page 1448
    Author: Guttman, N.; Karl, C.; Reek, T.; Shuler, V.
    DOI: 10.1175/1520-0477(1988)069<1448:MTPODV>2.0.CO;2
    Publisher: American Meteorological Society
    Abstract: The National Climatic Data Center is committed to archiving and disseminating data of high quality. Automated screening of data has proven to be very effective in isolating suspect and erroneous values in large meteorological data sets. However, manual review by validators is required to judge the validity of, and correct, the data rejected by the screens. Since the judgment of the validators affects the quality of the data, the efficacy of their actions is of paramount importance. Techniques have been developed to measure whether data validators make the proper decision when editing data. Measurement is accomplished by replacing valid data with known errors (so-called "seeds") and then monitoring the validator's decisions. Procedural details and examples are given. The measurement program has several benefits: (1) validator performance is quantitatively evaluated; (2) limited inferences about data quality can be made; (3) feedback to the validators identifies training requirements and operational procedures that could be improved; and (4) errors of omission as well as of commission are found. It is important to recognize that seeding does not detect errors inserted into the data by validators. Thus, seeding is but one aspect of a comprehensive surveillance mechanism.
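    The seeding procedure the abstract describes can be sketched in a few lines: plant known errors in otherwise valid data, keep an answer key of where they went, and score each validator's edits against that key. This is a minimal illustration, not the NCDC's actual procedure; the function names, the fixed 50-unit error offset, and the set-based scoring are assumptions made for the sketch.

    ```python
    import random

    def seed_errors(records, n_seeds, offset=50.0, rng=None):
        """Replace n_seeds randomly chosen valid values with known errors.

        Returns the seeded copy and the set of seeded indices (the answer key
        kept by the evaluator, hidden from the validator).
        """
        rng = rng or random.Random(0)
        seeded = list(records)
        seed_idx = set(rng.sample(range(len(records)), n_seeds))
        for i in seed_idx:
            # An implausible shift the automated screens should flag for review.
            seeded[i] = records[i] + offset
        return seeded, seed_idx

    def score_validator(seed_idx, edited_idx):
        """Score a validator's edit decisions against the seeded answer key.

        Seeds the validator failed to correct are errors of omission; edits
        made to values that were never seeded are potential errors of
        commission.
        """
        detected = seed_idx & edited_idx
        omissions = seed_idx - edited_idx
        commissions = edited_idx - seed_idx
        return {
            "detection_rate": len(detected) / len(seed_idx),
            "omissions": len(omissions),
            "commissions": len(commissions),
        }
    ```

    For example, seeding 3 errors into a series and then comparing the validator's edited indices against the key yields a detection rate plus counts of omissions and commissions. As the abstract notes, this scores decisions on the seeds only; errors a validator introduces into unseeded, unedited data are invisible to the method.
    
    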

    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/4160907
    Collections
    • Bulletin of the American Meteorological Society

    Full item record

    contributor author: Guttman, N.
    contributor author: Karl, C.
    contributor author: Reek, T.
    contributor author: Shuler, V.
    date accessioned: 2017-06-09T14:40:37Z
    date available: 2017-06-09T14:40:37Z
    date copyright: 1988/12/01
    date issued: 1988
    identifier issn: 0003-0007
    identifier other: ams-24255.pdf
    identifier uri: http://onlinelibrary.yabesh.ir/handle/yetl/4160907
    publisher: American Meteorological Society
    title: Measuring the Performance of Data Validators
    type: Journal Paper
    journal volume: 69
    journal issue: 12
    journal title: Bulletin of the American Meteorological Society
    identifier doi: 10.1175/1520-0477(1988)069<1448:MTPODV>2.0.CO;2
    journal firstpage: 1448
    journal lastpage: 1452
    tree: Bulletin of the American Meteorological Society; 1988; Volume 069, Issue 012
    contenttype: Fulltext
    DSpace software copyright © 2002-2015  DuraSpace
    The "DSpace" digital library software, localized into Persian by YaBeSH for Iranian libraries | Contact YaBeSH