YaBeSH Engineering and Technology Library


    Integrating Inverse Photogrammetry and a Deep Learning–Based Point Cloud Segmentation Approach for Automated Generation of BIM Models

    Source: Journal of Construction Engineering and Management, 2023, Volume 149, Issue 9, Page 04023074-1
    Authors: Zhongming Xiang, Abbas Rashidi, Ge Ou
    DOI: 10.1061/JCEMD4.COENG-13020
    Publisher: ASCE
    Abstract: Automatically converting three-dimensional (3D) point clouds into building information models (BIMs) has been an active research area in recent years. However, existing solutions suffer from two limitations: prior knowledge-based approaches cannot cover all design scenarios, and 3D deep learning-based approaches require large point cloud training data sets that are difficult to collect. To address this issue, we propose a fused system that automatically develops as-built BIMs from photogrammetric point clouds. A series of images is captured to generate a high-quality point cloud, which is then preprocessed by removing noise and downsampling points. Meanwhile, a two-dimensional (2D) deep learning method, DeepLab, semantically segments elements (e.g., walls, slabs, and columns) in the collected images. Subsequently, an inverse photogrammetric pipeline recognizes element categories in the point cloud by projecting the isolated 3D planes into the 2D images and assigning the identified elements to those planes. Finally, Industry Foundation Classes (IFC) are used to create as-built BIMs from the segmented point clouds. To evaluate the performance of the proposed system, we selected six cases with various elements as a testbed. The results reveal that (1) the system provides a highly automated solution for developing as-built BIMs; and (2) 39 of 45 elements across the six cases were successfully recognized in the point clouds.
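    The core of the inverse photogrammetric step described above is projecting an isolated 3D plane back into a segmented 2D image and reading off a class label. The sketch below is only an illustration of that idea, not the authors' implementation: it assumes a pinhole camera with known intrinsics `K` and pose `(R, t)`, and assigns each plane the majority class of the segmentation-mask pixels its points project onto (`project_points` and `label_plane` are hypothetical helper names).

    ```python
    import numpy as np

    def project_points(points_3d, K, R, t):
        """Project Nx3 world-frame points to (u, v) pixel coordinates
        with a pinhole camera model (intrinsics K, extrinsics R, t)."""
        cam = R @ points_3d.T + t.reshape(3, 1)   # world -> camera frame
        uvw = K @ cam                             # camera -> homogeneous pixels
        return (uvw[:2] / uvw[2]).T               # perspective divide -> Nx2

    def label_plane(points_3d, seg_mask, K, R, t):
        """Assign one semantic class to a 3D plane segment by majority
        vote over the 2D segmentation labels its projections land on."""
        uv = np.round(project_points(points_3d, K, R, t)).astype(int)
        h, w = seg_mask.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        labels = seg_mask[uv[inside, 1], uv[inside, 0]]  # index as (row=v, col=u)
        if labels.size == 0:
            return None                           # plane not visible in this view
        vals, counts = np.unique(labels, return_counts=True)
        return int(vals[np.argmax(counts)])
    ```

    In practice a system like the one described would aggregate such votes across many views and many planes before writing the result out as IFC entities; the single-view majority vote here is the simplest possible variant.
    
    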


    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/4293419
    Collections
    • Journal of Construction Engineering and Management

    Full item record

    contributor author: Zhongming Xiang
    contributor author: Abbas Rashidi
    contributor author: Ge Ou
    date accessioned: 2023-11-27T23:15:20Z
    date available: 2023-11-27T23:15:20Z
    date issued: 2023-06-22
    identifier other: JCEMD4.COENG-13020.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4293419
    publisher: ASCE
    title: Integrating Inverse Photogrammetry and a Deep Learning–Based Point Cloud Segmentation Approach for Automated Generation of BIM Models
    type: Journal Article
    journal volume: 149
    journal issue: 9
    journal title: Journal of Construction Engineering and Management
    identifier doi: 10.1061/JCEMD4.COENG-13020
    journal firstpage: 04023074-1
    journal lastpage: 04023074-22
    page: 22
    tree: Journal of Construction Engineering and Management; 2023; Volume 149; Issue 9
    contenttype: Fulltext
    DSpace software copyright © 2002-2015 DuraSpace
    The "DSpace" digital library software was localized into Persian by Yabesh for Iranian libraries | Contact Yabesh