YaBeSH Engineering and Technology Library



    Automatic Ground-Truth Image Labeling for Deep Neural Network Training and Evaluation Using Industrial Robotics and Motion Capture

    Source: Journal of Verification, Validation and Uncertainty Quantification, 2024, Vol. 9, Issue 2, p. 021001-1
    Authors: Helmich, Harrison F.; Doherty, Charles J.; Costello, Donald H.; Kutzer, Michael D. M.
    DOI: 10.1115/1.4064311
    Publisher: The American Society of Mechanical Engineers (ASME)
    Abstract: The United States Navy (USN) intends to increase the number of uncrewed aircraft in a carrier air wing. To support this increase, carrier-based uncrewed aircraft will be required to have some level of autonomy, as there will be situations where a human cannot be in or on the loop. However, there is no existing and approved method to certify autonomy within Naval Aviation. In support of generating certification evidence for autonomy, the United States Naval Academy (USNA) has created a training and evaluation system (TES) to provide quantifiable metrics for feedback performance in autonomous systems. The preliminary use case for this work focuses on autonomous aerial refueling. Prior demonstrations of autonomous aerial refueling have leveraged a deep neural network (DNN) for processing visual feedback to approximate the relative position of an aerial refueling drogue. The training and evaluation system proposed in this work simulates the relative motion between the aerial refueling drogue and feedback camera system using industrial robotics. Ground-truth measurements of the pose between the camera and drogue are made using a commercial motion capture system. Preliminary results demonstrate calibration methods providing ground-truth measurements with millimeter precision. Leveraging this calibration, the proposed system is capable of providing large-scale datasets for DNN training and evaluation against a precise ground truth.
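    The abstract describes recovering the ground-truth pose between the camera and the drogue from motion-capture measurements. The paper's actual calibration pipeline is not reproduced here; the following is only a minimal sketch of the underlying transform algebra, assuming the motion capture system reports each body's pose as a 4x4 homogeneous transform in a common world frame (all function names are illustrative, not from the paper):

    ```python
    import numpy as np

    def pose_inverse(T):
        """Invert a 4x4 homogeneous transform [R t; 0 1] analytically."""
        R, t = T[:3, :3], T[:3, 3]
        Ti = np.eye(4)
        Ti[:3, :3] = R.T          # inverse rotation is the transpose
        Ti[:3, 3] = -R.T @ t      # inverse translation
        return Ti

    def camera_to_drogue(T_world_camera, T_world_drogue):
        """Ground-truth pose of the drogue expressed in the camera frame:
        T_camera_drogue = inv(T_world_camera) @ T_world_drogue."""
        return pose_inverse(T_world_camera) @ T_world_drogue

    # Illustrative example: camera at x=1 m, drogue at x=3 m in the world
    # frame, both unrotated -> drogue sits 2 m ahead of the camera.
    T_wc = np.eye(4); T_wc[:3, 3] = [1.0, 0.0, 0.0]
    T_wd = np.eye(4); T_wd[:3, 3] = [3.0, 0.0, 0.0]
    T_cd = camera_to_drogue(T_wc, T_wd)
    print(T_cd[:3, 3])  # relative drogue position in the camera frame
    ```

    Each such relative pose, paired with the corresponding camera image, yields one automatically labeled training sample; the millimeter-level calibration the abstract reports is what makes these labels usable as ground truth.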


    URI: http://yetl.yabesh.ir/yetl1/handle/yetl/4302725
    Collections
    • Journal of Verification, Validation and Uncertainty Quantification

    Full item record

    Contributor authors: Helmich, Harrison F.; Doherty, Charles J.; Costello, Donald H.; Kutzer, Michael D. M.
    Date accessioned: 2024-12-24T18:46:33Z
    Date available: 2024-12-24T18:46:33Z
    Date copyright: 2024-06-21
    Date issued: 2024
    ISSN: 2377-2158
    Other identifier: vvuq_009_02_021001.pdf
    URI: http://yetl.yabesh.ir/yetl1/handle/yetl/4302725
    Publisher: The American Society of Mechanical Engineers (ASME)
    Title: Automatic Ground-Truth Image Labeling for Deep Neural Network Training and Evaluation Using Industrial Robotics and Motion Capture
    Type: Journal Paper
    Journal title: Journal of Verification, Validation and Uncertainty Quantification
    Journal volume: 9
    Journal issue: 2
    DOI: 10.1115/1.4064311
    First page: 021001-1
    Last page: 021001-10
    Page count: 10
    Tree: Journal of Verification, Validation and Uncertainty Quantification; 2024; Volume 9; Issue 2
    Content type: Fulltext
    DSpace software copyright © 2002-2015  DuraSpace
    The DSpace digital library software has been localized into Persian by YaBeSH for Iranian libraries | Contact YaBeSH