YaBeSH Engineering and Technology Library

    Three-Dimensional Object Detection with Deep Neural Networks for Automatic As-Built Reconstruction

    Source: Journal of Construction Engineering and Management, 2021, Volume 147, Issue 9, Page 04021098-1
    Author: Yongzhi Xu, Xuesong Shen, Samsung Lim, Xuesong Li
    DOI: 10.1061/(ASCE)CO.1943-7862.0002003
    Publisher: ASCE
    Abstract: Automatic three-dimensional (3D) as-built reconstruction for non-Manhattan structures and multiroom buildings remains an industrywide challenge due to complex building environments and high demands for generating volumetric and object-level models. Conventional approaches are based on multiple separate steps extracting geometric and semantic features independently that cannot fully exploit object-level features. This paper aims to develop an end-to-end, fully automatic, and object-level reconstruction approach to converting point clouds of non-Manhattan and multiroom buildings into 3D models. A two-stage 3D object-detection method is proposed using region-based convolutional neural networks (R-CNN). Feature fusion between sparse 3D and two-dimensional (2D) bird’s eye view (BEV) feature maps is investigated to improve the generality and efficiency of modeling building primitives. In order to address the difficulties of training label generation caused by largely overlapped building objects, a dual-channel network is developed with one channel detecting walls and the other channel detecting remaining categories. The experimental results achieved an overall detection accuracy of 85.79% and localization accuracy of 79.03%, which have increased by 12.75% and 5.71% over the latest benchmarks, respectively. It took an average of 4.75 s to reconstruct a single-story building with a mean footprint of 471.936 m². The resulting computing efficiency outweighs a majority of existing as-built modeling approaches and thus holds significant potential for future industrial applications.
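    To make the dual-channel idea in the abstract concrete, the following Python (PyTorch) snippet is a minimal, hypothetical sketch: one detection branch for walls and a second branch for the remaining categories, both reading a shared (fused) bird's-eye-view feature map. This is an illustration only, not the authors' implementation; the class name DualChannelBEVHead, all layer sizes, the number of non-wall categories, and the 200x200 BEV grid are assumptions, and the two-stage R-CNN proposal/refinement structure is omitted entirely.

    # Illustrative sketch (assumption), not the paper's code: a dual-channel
    # detection head over a fused BEV feature map, one branch for walls and one
    # for the remaining categories. All names and sizes are hypothetical.
    import torch
    import torch.nn as nn


    class DualChannelBEVHead(nn.Module):
        def __init__(self, in_channels: int = 128, num_other_classes: int = 4):
            super().__init__()
            # Shared encoder standing in for the fused sparse-3D + 2D BEV features.
            self.shared = nn.Sequential(
                nn.Conv2d(in_channels, 128, kernel_size=3, padding=1),
                nn.BatchNorm2d(128),
                nn.ReLU(inplace=True),
            )
            # Channel 1: wall objectness score + 4 box-regression values per BEV cell.
            self.wall_head = nn.Conv2d(128, 1 + 4, kernel_size=1)
            # Channel 2: class scores for the remaining categories + box regression.
            self.other_head = nn.Conv2d(128, num_other_classes + 4, kernel_size=1)

        def forward(self, bev_features: torch.Tensor):
            x = self.shared(bev_features)
            return self.wall_head(x), self.other_head(x)


    if __name__ == "__main__":
        bev = torch.randn(1, 128, 200, 200)  # fake fused BEV feature map
        walls, others = DualChannelBEVHead()(bev)
        print(walls.shape, others.shape)  # [1, 5, 200, 200] and [1, 8, 200, 200]

    Keeping walls in their own branch loosely mirrors the stated motivation: wall labels overlap other building objects heavily, so separating them avoids a single head having to resolve those conflicting training labels.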

    URI: http://yetl.yabesh.ir/yetl1/handle/yetl/4271952
    Collections
    • Journal of Construction Engineering and Management

    Full item record

    contributor author: Yongzhi Xu
    contributor author: Xuesong Shen
    contributor author: Samsung Lim
    contributor author: Xuesong Li
    date accessioned: 2022-02-01T21:44:47Z
    date available: 2022-02-01T21:44:47Z
    date issued: 9/1/2021
    identifier other: (ASCE)CO.1943-7862.0002003.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4271952
    publisher: ASCE
    title: Three-Dimensional Object Detection with Deep Neural Networks for Automatic As-Built Reconstruction
    type: Journal Paper
    journal volume: 147
    journal issue: 9
    journal title: Journal of Construction Engineering and Management
    identifier doi: 10.1061/(ASCE)CO.1943-7862.0002003
    journal first page: 04021098-1
    journal last page: 04021098-11
    page: 11
    tree: Journal of Construction Engineering and Management; 2021; Volume 147; Issue 9
    contenttype: Fulltext