YaBeSH Engineering and Technology Library


    Concise and Effective Network for 3D Human Modeling From Orthogonal Silhouettes

    Source: Journal of Computing and Information Science in Engineering, 2022, Vol. 22, Issue 5, page 51004-1
    Author: Liu, Bin; Liu, Xiuping; Yang, Zhixin; Wang, Charlie C. L.
    DOI: 10.1115/1.4054001
    Publisher: The American Society of Mechanical Engineers (ASME)
    Abstract: In this article, we revisit the problem of 3D human modeling from two orthogonal silhouettes of individuals (i.e., front and side views). Different from our previous work (Wang et al., 2003, “Virtual Human Modeling From Photographs for Garment Industry,” Comput. Aided Des., 35, pp. 577–589), a supervised learning approach based on a convolutional neural network (CNN) is investigated to solve the problem by establishing a mapping function that can effectively extract features from the two silhouettes and fuse them into coefficients in the shape space of human bodies. A new CNN structure is proposed in our work to extract not only the discriminative features of the front and side views but also their mixed features for the mapping function. 3D human models with high accuracy are synthesized from the coefficients generated by the mapping function. Existing CNN approaches for 3D human modeling usually learn a large number of parameters (from 8.5 M to 355.4 M) from two binary images. In contrast, we investigate a new network architecture and use samples taken on the silhouettes as the input. As a consequence, more accurate models can be generated by our network with only 2.4 M parameters. The training of our network is conducted on samples obtained by augmenting a publicly accessible dataset. Transfer learning with datasets containing a smaller number of scanned models is applied to our network to enable the generation of results with gender-oriented (or geographical) patterns.
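    The abstract above outlines the approach at a high level: two view-specific CNN branches extract discriminative features from the front and side silhouettes, a further path extracts their mixed features, and the fused features are mapped to coefficients in a human shape space. The record does not include the network details, so the following PyTorch-style sketch is only an illustration of that idea under stated assumptions: the layer sizes, the 50-dimensional coefficient output, and the use of binary silhouette images as input are hypothetical choices (the paper reportedly feeds samples taken on the silhouettes rather than raw binary images), and this is not the authors' published architecture.

    # Minimal, hypothetical sketch of a two-branch silhouette-to-shape-coefficient
    # network, loosely following the idea in the abstract (front/side feature
    # extraction plus a mixed-feature path). Layer sizes and input format are
    # illustrative assumptions, not the architecture published in the paper.
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        # A small convolution + downsampling unit shared by all branches.
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    class TwoViewShapeRegressor(nn.Module):
        def __init__(self, num_coeffs=50):
            super().__init__()
            # Separate branches learn view-specific (discriminative) features.
            self.front_branch = nn.Sequential(
                conv_block(1, 16), conv_block(16, 32), conv_block(32, 64))
            self.side_branch = nn.Sequential(
                conv_block(1, 16), conv_block(16, 32), conv_block(32, 64))
            # A joint branch on the stacked views learns mixed features.
            self.mixed_branch = nn.Sequential(
                conv_block(2, 16), conv_block(16, 32), conv_block(32, 64))
            self.pool = nn.AdaptiveAvgPool2d(1)
            # Fuse the three feature vectors into shape-space coefficients.
            self.head = nn.Sequential(
                nn.Linear(64 * 3, 256), nn.ReLU(inplace=True),
                nn.Linear(256, num_coeffs))

        def forward(self, front, side):
            # front, side: (batch, 1, H, W) binary silhouette images.
            f = self.pool(self.front_branch(front)).flatten(1)
            s = self.pool(self.side_branch(side)).flatten(1)
            m = self.pool(self.mixed_branch(torch.cat([front, side], 1))).flatten(1)
            return self.head(torch.cat([f, s, m], 1))

    if __name__ == "__main__":
        net = TwoViewShapeRegressor(num_coeffs=50)
        front = torch.zeros(1, 1, 256, 128)   # dummy front-view silhouette
        side = torch.zeros(1, 1, 256, 128)    # dummy side-view silhouette
        coeffs = net(front, side)             # -> (1, 50) shape-space coefficients
        print(coeffs.shape)

    A 3D body mesh would then be reconstructed by feeding such coefficients into whatever statistical shape model defines the shape space; that reconstruction step is outside this sketch.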
    • Download: (1.130Mb)
    • Price: 5000 Rial


    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/4285250
    Collections
    • Journal of Computing and Information Science in Engineering

    Full item record

    contributor author: Liu, Bin
    contributor author: Liu, Xiuping
    contributor author: Yang, Zhixin
    contributor author: Wang, Charlie C. L.
    date accessioned: 2022-05-08T09:32:06Z
    date available: 2022-05-08T09:32:06Z
    date copyright: 3/31/2022 12:00:00 AM
    date issued: 2022
    identifier issn: 1530-9827
    identifier other: jcise_22_5_051004.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4285250
    publisher: The American Society of Mechanical Engineers (ASME)
    title: Concise and Effective Network for 3D Human Modeling From Orthogonal Silhouettes
    type: Journal Paper
    journal volume: 22
    journal issue: 5
    journal title: Journal of Computing and Information Science in Engineering
    identifier doi: 10.1115/1.4054001
    journal firstpage: 51004-1
    journal lastpage: 51004-11
    page: 11
    tree: Journal of Computing and Information Science in Engineering; 2022; Volume 22; Issue 5
    contenttype: Fulltext