YaBeSH Engineering and Technology Library



    Vision-Guided Autonomous Block Loading in a Dual-Robot Collaborative Handling Framework

    Source: Journal of Construction Engineering and Management, 2025, Volume 151, Issue 5, Page 04025035-1
    Author: Zhiyuan Chen, Tiemin Li, Lichang Qin, Yao Jiang
    DOI: 10.1061/JCEMD4.COENG-15847
    Publisher: American Society of Civil Engineers
    Abstract: The construction industry is rapidly evolving and increasingly requires automation in material handling. While robotic solutions have been introduced for transportation and unloading, the loading phase remains largely dependent on manual labor. Blocks, a fundamental building material in construction, lack automated loading solutions due to the unstructured nature of construction sites and the need for high precision. This paper presents a vision-based collaborative robotic system designed for automated block loading. The proposed system integrates a novel three-stage visual localization pipeline that employs a coarse-to-fine hierarchical mechanism for object localization. Stage I utilizes deep vision networks to detect and localize the target block, enabling autonomous robotic grasping. Stage II addresses grasping inaccuracies using binocular stereo-vision models to measure the in-hand block’s pose. Advanced deep learning techniques handle detection complexities and uncertainties, while traditional model-based methods ensure precision. Stage III is used for autonomous placement, employing marker-based metrology to quickly establish a local reference frame, thus mitigating cumulative stacking errors. A highly automated pipeline for generating large-scale, labeled simulation datasets is also developed to train neural networks. Laboratory and field experiments demonstrate the system’s effectiveness, achieving a 95.8% success rate and continuous stacking accuracy of 2.95 mm. This study contributes to the existing body of knowledge by introducing a novel robotic solution for autonomous block loading, offering a three-stage visual localization approach that ensures high success rates and precision. Furthermore, this study advances the understanding of the accuracy assurance mechanism. It demonstrates the effectiveness of multirobot collaboration and visual localization algorithms in construction automation. 
Block loading is a critical and frequent task in construction, traditionally reliant on manual labor due to the challenges of ensuring precise robotic operation in unstructured environments. This paper introduces a collaborative handling framework utilizing multimobile robots to enhance automation in building material handling. The proposed three-stage visual localization pipeline significantly improves the precision of robotic block handling by dividing the localization process into grasping and in-hand phases. This segmentation reduces the accuracy demands during initial grasping while compensating for any errors in the process. The robotic system is expected to decrease labor reliance, increase productivity, and streamline resource and process coordination within construction environments. The findings of this study provide a foundation for expanding the application of automated robots to handle a wider range of building materials, potentially transforming construction practices in the future.
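The three-stage, coarse-to-fine localization pipeline described in the abstract can be sketched as follows. This is a minimal illustration only: the planar-pose simplification, all function names, and all numeric values are assumptions for demonstration, not the paper's implementation (the actual system uses deep vision networks, binocular stereo models, and marker-based metrology).

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Planar pose used for illustration: x, y in metres, yaw in radians."""
    x: float
    y: float
    yaw: float

def stage1_coarse_localize(detections):
    """Stage I (sketch): take the top-scoring block from a hypothetical
    deep-vision detector's output as the coarse grasp target."""
    return max(detections, key=lambda d: d["score"])["pose"]

def stage2_in_hand_offset(commanded_grasp, measured_in_hand):
    """Stage II (sketch): a stereo rig measures where the block actually
    sits in the gripper; the residual between commanded and measured
    grasp is carried forward to correct the placement."""
    return Pose(measured_in_hand.x - commanded_grasp.x,
                measured_in_hand.y - commanded_grasp.y,
                measured_in_hand.yaw - commanded_grasp.yaw)

def stage3_place_target(marker_origin, slot_in_marker_frame, in_hand_offset):
    """Stage III (sketch): the target slot is expressed in a marker-defined
    local reference frame (resetting accumulated stacking error) and
    compensated by the Stage II in-hand offset."""
    return Pose(marker_origin.x + slot_in_marker_frame.x - in_hand_offset.x,
                marker_origin.y + slot_in_marker_frame.y - in_hand_offset.y,
                marker_origin.yaw + slot_in_marker_frame.yaw - in_hand_offset.yaw)

# Illustrative run with made-up detections and measurements.
detections = [{"score": 0.62, "pose": Pose(1.00, 2.00, 0.10)},
              {"score": 0.91, "pose": Pose(0.50, 0.40, 0.00)}]
grasp = stage1_coarse_localize(detections)              # Pose(0.50, 0.40, 0.00)
offset = stage2_in_hand_offset(grasp, Pose(0.51, 0.38, 0.005))
target = stage3_place_target(Pose(2.0, 3.0, 0.0), Pose(0.2, 0.0, 0.0), offset)
```

The key design idea this mirrors is the error hand-off: Stage I only needs to be accurate enough to grasp, because Stage II measures the residual grasp error and Stage III subtracts it in a marker-anchored local frame, so stacking errors do not accumulate.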


    URI: http://yetl.yabesh.ir/yetl1/handle/yetl/4307276
    Collections: Journal of Construction Engineering and Management

    contributor.author: Zhiyuan Chen
    contributor.author: Tiemin Li
    contributor.author: Lichang Qin
    contributor.author: Yao Jiang
    date.accessioned: 2025-08-17T22:40:27Z
    date.available: 2025-08-17T22:40:27Z
    date.copyright: 2025-05-01
    date.issued: 2025
    identifier.other: JCEMD4.COENG-15847.pdf
    identifier.uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4307276
    publisher: American Society of Civil Engineers
    title: Vision-Guided Autonomous Block Loading in a Dual-Robot Collaborative Handling Framework
    type: Journal Article
    journal.volume: 151
    journal.issue: 5
    journal.title: Journal of Construction Engineering and Management
    identifier.doi: 10.1061/JCEMD4.COENG-15847
    journal.firstpage: 04025035-1
    journal.lastpage: 04025035-21
    pages: 21
    tree: Journal of Construction Engineering and Management, 2025, Volume 151, Issue 5
    contenttype: Fulltext
    DSpace software copyright © 2002-2015 DuraSpace
    The "DSpace" digital library software was localized into Persian for Iranian libraries by YaBeSH | Contact YaBeSH