YaBeSH Engineering and Technology Library


    Photogrammetric Point Cloud Segmentation and Object Information Extraction for Creating Virtual Environments and Simulations

    Source: Journal of Management in Engineering, 2020, Volume 36, Issue 2
    Author: Meida Chen, Andrew Feng, Ryan McAlinden, Lucio Soibelman
    DOI: 10.1061/(ASCE)ME.1943-5479.0000737
    Publisher: ASCE
    Abstract: Photogrammetric techniques have improved dramatically over the last few years, enabling the creation of visually compelling three-dimensional (3D) meshes from unmanned aerial vehicle imagery. These high-quality 3D meshes have attracted the attention of both academics and industry practitioners developing virtual environments and simulations. However, photogrammetrically generated point clouds and meshes do not support user-level or system-level interaction because they lack the semantic information needed to distinguish between objects. Segmenting the generated point clouds and meshes and extracting the associated object information is therefore a necessary step. This paper presents a framework for point cloud and mesh classification and segmentation. The proposed framework was designed with photogrammetric data-quality issues in mind and provides a novel way of extracting object information, including (1) individual tree locations and related features and (2) building footprints. Experiments were conducted to rank different point descriptors and to evaluate supervised machine-learning algorithms for segmenting photogrammetrically generated point clouds. The proposed framework was validated using data collected at the University of Southern California (USC) and the Muscatatuck Urban Training Center (MUTC).
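    The supervised segmentation step the abstract describes — labeling each point of a photogrammetric point cloud from per-point descriptors — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic features (height, "greenness"), the three classes, and the nearest-centroid classifier are all assumptions made for the example, standing in for the ranked point descriptors and supervised machine-learning algorithms the paper actually evaluates.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic per-point features (height above ground, "greenness") standing in
    # for real point descriptors; the classes and value ranges are illustrative.
    ground   = np.column_stack([rng.normal(0.2, 0.1, 100), rng.normal(0.3, 0.05, 100)])
    tree     = np.column_stack([rng.normal(5.0, 1.0, 100), rng.normal(0.8, 0.05, 100)])
    building = np.column_stack([rng.normal(12.0, 1.0, 100), rng.normal(0.2, 0.05, 100)])

    X = np.vstack([ground, tree, building])
    y = np.repeat([0, 1, 2], 100)  # 0 = ground, 1 = tree, 2 = building

    # Train a nearest-centroid classifier: one mean feature vector per class.
    centroids = np.array([X[y == c].mean(axis=0) for c in range(3)])

    def classify(points):
        """Assign each point the label of its nearest class centroid."""
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        return d.argmin(axis=1)

    accuracy = (classify(X) == y).mean()
    print(f"training accuracy: {accuracy:.2f}")
    ```

    In practice a framework like the one described would use richer descriptors and stronger classifiers, then post-process the labeled points — for example, clustering tree-labeled points into individual tree locations and tracing building-labeled points into footprints.
    
    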

    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/4266055
    Collections
    • Journal of Management in Engineering

    Full item record

    contributor author: Meida Chen
    contributor author: Andrew Feng
    contributor author: Ryan McAlinden
    contributor author: Lucio Soibelman
    date accessioned: 2022-01-30T19:49:58Z
    date available: 2022-01-30T19:49:58Z
    date issued: 2020
    identifier other: (ASCE)ME.1943-5479.0000737.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4266055
    publisher: ASCE
    title: Photogrammetric Point Cloud Segmentation and Object Information Extraction for Creating Virtual Environments and Simulations
    type: Journal Paper
    journal volume: 36
    journal issue: 2
    journal title: Journal of Management in Engineering
    identifier doi: 10.1061/(ASCE)ME.1943-5479.0000737
    page: 04019046
    tree: Journal of Management in Engineering, 2020, Volume 36, Issue 2
    contenttype: Fulltext
    DSpace software copyright © 2002-2015  DuraSpace
    The "DSpace" digital library software was localized into Persian by YaBeSH for Iranian libraries | Contact YaBeSH