YaBeSH Engineering and Technology Library



    Robotic Cross-Platform Sensor Fusion and Augmented Visualization for Large Indoor Space Reality Capture

Source: Journal of Computing in Civil Engineering, 2022, Volume 36, Issue 6, Page 04022036
Authors: Fang Xu, Pengxiang Xia, Hengxu You, Jing Du
    DOI: 10.1061/(ASCE)CP.1943-5487.0001047
    Publisher: ASCE
Abstract: Advances in sensors, robotics, and artificial intelligence have enabled methods such as simultaneous localization and mapping (SLAM), semantic segmentation, and point cloud registration to support the reality capture process. Completely investigating an unknown indoor space, that is, obtaining both a general spatial comprehension and a detailed scene reconstruction for a digital twin model, requires deeper insight into the characteristics of different ranging sensors and into techniques for combining data from distinct systems. This paper discusses the necessity and workflow of using two distinct types of scanning sensors, a depth camera and a light detection and ranging (LiDAR) sensor, paired with a quadrupedal ground robot to obtain spatial data of a large, complex indoor space. A digital twin model was built in real time with two SLAM methods and then consolidated with the geometric feature extraction method of fast point feature histograms (FPFH) and fast global registration. Finally, the reconstructed scene was streamed to a HoloLens 2 headset to create an illusion of seeing through walls. Results showed that both the depth camera and the LiDAR could handle large-space reality capture with the required coverage and fidelity of textural information. The proposed workflow and analytical pipeline thus provide a hierarchical data fusion strategy that integrates the advantages of distinct sensing methods to carry out a complete indoor investigation, and they validate the feasibility of robot-assisted reality capture in larger spaces.
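The consolidation step named in the abstract, FPFH feature extraction followed by fast global registration, can be sketched with the open-source Open3D library. The following is a minimal illustration under assumed parameters (voxel size, search radii) and hypothetical file names for the two SLAM outputs; it is not the authors' implementation:

        import open3d as o3d

        VOXEL = 0.05  # assumed downsampling resolution in meters

        def preprocess(pcd):
            # Downsample, estimate normals, and compute FPFH descriptors.
            down = pcd.voxel_down_sample(VOXEL)
            down.estimate_normals(
                o3d.geometry.KDTreeSearchParamHybrid(radius=2 * VOXEL, max_nn=30))
            fpfh = o3d.pipelines.registration.compute_fpfh_feature(
                down,
                o3d.geometry.KDTreeSearchParamHybrid(radius=5 * VOXEL, max_nn=100))
            return down, fpfh

        # Hypothetical file names for the depth-camera and LiDAR SLAM maps.
        depth_map = o3d.io.read_point_cloud("depth_camera_map.pcd")
        lidar_map = o3d.io.read_point_cloud("lidar_map.pcd")

        src, src_fpfh = preprocess(depth_map)
        tgt, tgt_fpfh = preprocess(lidar_map)

        # Fast global registration aligns the two maps from FPFH correspondences.
        result = o3d.pipelines.registration.registration_fgr_based_on_feature_matching(
            src, tgt, src_fpfh, tgt_fpfh,
            o3d.pipelines.registration.FastGlobalRegistrationOption(
                maximum_correspondence_distance=1.5 * VOXEL))

        # Apply the estimated transform and merge the maps into one model.
        fused = src.transform(result.transformation) + tgt

In a pipeline like the one described, this coarse feature-based alignment would typically precede a local refinement step such as ICP before the fused model is streamed to the headset.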
    • Download: (3.938 MB)
    • Show Full Metadata
    • Get RIS
    • Item Order
    • Go To Publisher
    • Price: 5000 Rial
    • Statistics


    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/4287561
    Collections
    • Journal of Computing in Civil Engineering

Full item record

contributor author: Fang Xu
contributor author: Pengxiang Xia
contributor author: Hengxu You
contributor author: Jing Du
date accessioned: 2022-12-27T20:33:19Z
date available: 2022-12-27T20:33:19Z
date issued: 2022/11/01
identifier other: (ASCE)CP.1943-5487.0001047.pdf
identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4287561
publisher: ASCE
title: Robotic Cross-Platform Sensor Fusion and Augmented Visualization for Large Indoor Space Reality Capture
type: Journal Article
journal volume: 36
journal issue: 6
journal title: Journal of Computing in Civil Engineering
identifier doi: 10.1061/(ASCE)CP.1943-5487.0001047
journal firstpage: 04022036
journal lastpage: 04022036_15
page: 15
tree: Journal of Computing in Civil Engineering; 2022; Volume 36; Issue 6
contenttype: Fulltext
    DSpace software copyright © 2002-2015  DuraSpace
The "DSpace" digital library software was localized into Persian by Yabesh for Iranian libraries | Contact Yabesh