YaBeSH Engineering and Technology Library



    Vision-Based Body Pose Estimation of Excavator Using a Transformer-Based Deep-Learning Model

    Source: Journal of Computing in Civil Engineering, 2025, Vol. 39, Issue 2, p. 04024064-1
    Authors: Ankang Ji, Hongqin Fan, Xiaolong Xue
    DOI: 10.1061/JCCEE5.CPENG-6079
    Publisher: American Society of Civil Engineers
    Abstract: To support safety, efficiency, and productivity management on construction sites, this research proposes a deep-learning method, the transformer-based mechanical equipment pose network (TransMPNet), for effective and efficient image-based body pose estimation of excavators. TransMPNet comprises data processing, an ensemble model coupled with DenseNet201, an improved transformer module, a loss function, and evaluation metrics, which together perform feature processing and learning for accurate results. To verify the effectiveness and efficiency of the method, a publicly available image database of excavator body poses is adopted for experimental testing and validation. The results indicate that TransMPNet performs strongly, with a mean-square error (MSE) of 218.626, a root-mean-square error (RMSE) of 14.786, an average normalized error (NE) of 26.289×10⁻³, and an average area under the curve (AUC) of 74.487×10⁻³, and that it significantly outperforms other state-of-the-art methods such as the cascaded pyramid network (CPN) and the stacked hourglass network (SHG) on these metrics. Accordingly, TransMPNet contributes to excavator body pose estimation, providing more effective and accurate results with strong potential for practical application in on-site construction management.
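    The abstract names the main components (a DenseNet201-coupled backbone, an improved transformer module, and MSE-based evaluation) without architectural detail. The minimal PyTorch sketch below shows one plausible arrangement of those pieces, assuming a DenseNet201 feature extractor whose spatial features are refined by a transformer encoder and regressed into per-keypoint heatmaps; the keypoint count, layer sizes, and heatmap head are illustrative assumptions, not the authors' published TransMPNet configuration.

# Minimal sketch, assuming DenseNet201 features -> transformer encoder -> heatmaps.
# All sizes and the head design are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn
from torchvision.models import densenet201


class ExcavatorPoseSketch(nn.Module):
    def __init__(self, num_keypoints=6, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        # DenseNet201 convolutional features: (B, 1920, H/32, W/32).
        self.backbone = densenet201(weights=None).features
        self.proj = nn.Conv2d(1920, d_model, kernel_size=1)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # One low-resolution heatmap per keypoint.
        self.head = nn.Conv2d(d_model, num_keypoints, kernel_size=1)

    def forward(self, images):
        feats = self.proj(self.backbone(images))       # (B, C, h, w)
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)      # (B, h*w, C)
        tokens = self.encoder(tokens)                  # global self-attention
        feats = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.head(feats)                        # (B, K, h, w)


if __name__ == "__main__":
    model = ExcavatorPoseSketch()
    images = torch.randn(2, 3, 256, 256)               # dummy image batch
    target = torch.rand(2, 6, 8, 8)                    # dummy ground-truth heatmaps
    loss = nn.MSELoss()(model(images), target)         # MSE, as in the reported metrics
    print(loss.item())

    Treating pose estimation as heatmap regression with an MSE loss is a common design choice consistent with the MSE/RMSE figures reported above; the paper's actual loss function and ensemble details may differ.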
    • Download: (6.777 MB)
    • Price: 5000 Rial


    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/4304124
    Collections
    • Journal of Computing in Civil Engineering

    Full item record

    contributor author: Ankang Ji
    contributor author: Hongqin Fan
    contributor author: Xiaolong Xue
    date accessioned: 2025-04-20T10:10:04Z
    date available: 2025-04-20T10:10:04Z
    date copyright: 12/31/2024 12:00:00 AM
    date issued: 2025
    identifier other: JCCEE5.CPENG-6079.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4304124
    publisher: American Society of Civil Engineers
    title: Vision-Based Body Pose Estimation of Excavator Using a Transformer-Based Deep-Learning Model
    type: Journal Article
    journal volume: 39
    journal issue: 2
    journal title: Journal of Computing in Civil Engineering
    identifier doi: 10.1061/JCCEE5.CPENG-6079
    journal first page: 04024064-1
    journal last page: 04024064-20
    pages: 20
    tree: Journal of Computing in Civil Engineering, 2025, Vol. 39, Issue 2
    content type: Fulltext
    DSpace software copyright © 2002-2015 DuraSpace
    The "DSpace" digital library software was localized into Persian by Yabesh for Iranian libraries | Contact Yabesh