YaBeSH Engineering and Technology Library



    Dynamic Rendering of Remote Indoor Environments Using Real-Time Point Cloud Data

    Source: Journal of Computing and Information Science in Engineering, 2018, Volume 18, Issue 3, Page 031006
    Authors: Lesniak, Kevin; Tucker, Conrad S.
    DOI: 10.1115/1.4039472
    Publisher: The American Society of Mechanical Engineers (ASME)
    Abstract: Modern color and depth (RGB-D) sensing systems are capable of reconstructing convincing virtual representations of real-world environments. These virtual reconstructions can serve as the foundation for virtual reality (VR) and augmented reality environments due to their high-quality visualizations. However, a key limitation of modern virtual reconstruction methods is the time it takes to incorporate new data and update the reconstruction. This delay prevents the reconstruction from accurately rendering dynamic objects or portions of the environment (such as an engineer inspecting machinery or a laboratory space). The authors propose a multisensor method to dynamically capture objects in an indoor environment. The method automatically aligns the sensors using modern image homography techniques, leverages graphics processing units (GPUs) to process the large number of independent RGB-D data points, and renders them in real time. Incorporating and aligning multiple sensors allows a larger area to be captured from multiple angles, providing a more complete virtual representation of the physical space. Performing the processing on GPUs exploits the large number of available processing cores to minimize the delay between data capture and rendering. A case study using commodity RGB-D sensors, computing hardware, and standard transmission control protocol (TCP) internet connections is presented to demonstrate the viability of the proposed method.
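    The abstract mentions aligning sensors via image homography. As a minimal sketch of the underlying idea (not the paper's actual pipeline), the following NumPy code estimates a planar homography from point correspondences with the direct linear transform (DLT). The function names, and the use of exact correspondences without RANSAC-style outlier rejection, are illustrative assumptions.

    ```python
    import numpy as np

    def estimate_homography(src, dst):
        """Estimate the 3x3 homography H mapping src -> dst via the DLT.

        src, dst: (N, 2) arrays of corresponding pixel coordinates, N >= 4.
        """
        src = np.asarray(src, dtype=float)
        dst = np.asarray(dst, dtype=float)
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            # Each correspondence contributes two linear constraints on H.
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        A = np.asarray(rows)
        # The solution is the right singular vector for the smallest
        # singular value of A (the null space of the constraint matrix).
        _, _, vt = np.linalg.svd(A)
        H = vt[-1].reshape(3, 3)
        return H / H[2, 2]  # fix the overall scale

    def apply_homography(H, pts):
        """Map (N, 2) points through H using homogeneous coordinates."""
        pts = np.asarray(pts, dtype=float)
        ph = np.hstack([pts, np.ones((len(pts), 1))])
        mapped = ph @ H.T
        return mapped[:, :2] / mapped[:, 2:3]
    ```

    With four or more non-collinear correspondences between two sensors' views of a shared planar region, the recovered H relates their image planes; in a multisensor rig this kind of estimate can bootstrap the relative alignment before rendering.
    
    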
    • Download: (1.274Mb)


    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/4253820
    Collections
    • Journal of Computing and Information Science in Engineering

    Full item record

    Contributor authors: Lesniak, Kevin; Tucker, Conrad S.
    Date accessioned: 2019-02-28T11:12:23Z
    Date available: 2019-02-28T11:12:23Z
    Date copyright: 2018-06-12
    Date issued: 2018
    ISSN: 1530-9827
    Other identifier: jcise_018_03_031006.pdf
    URI: http://yetl.yabesh.ir/yetl1/handle/yetl/4253820
    Abstract: (see above)
    Publisher: The American Society of Mechanical Engineers (ASME)
    Title: Dynamic Rendering of Remote Indoor Environments Using Real-Time Point Cloud Data
    Type: Journal Paper
    Journal volume: 18
    Journal issue: 3
    Journal title: Journal of Computing and Information Science in Engineering
    DOI: 10.1115/1.4039472
    First page: 031006
    Last page: 031006-11
    Tree: Journal of Computing and Information Science in Engineering; 2018; Volume 18; Issue 3
    Content type: Fulltext
    DSpace software copyright © 2002-2015 DuraSpace
    The "DSpace" digital library software, localized into Persian by YaBeSH for Iranian libraries | Contact YaBeSH