YaBeSH Engineering and Technology Library


    Attention-Enhanced Multimodal Learning for Conceptual Design Evaluations

Source: Journal of Mechanical Design; 2023; Volume 145; Issue 4; Page 41410-1
Author: Song, Binyang; Miller, Scarlett; Ahmed, Faez
    DOI: 10.1115/1.4056669
    Publisher: The American Society of Mechanical Engineers (ASME)
Abstract: Conceptual design evaluation is an indispensable component of innovation in the early stage of engineering design. Properly assessing the effectiveness of conceptual design requires a rigorous evaluation of the outputs. Traditional methods to evaluate conceptual designs are slow, expensive, and difficult to scale because they rely on human expert input. An alternative approach is to use computational methods to evaluate design concepts. However, most existing methods have limited utility because they are constrained to unimodal design representations (e.g., texts or sketches). To overcome these limitations, we propose an attention-enhanced multimodal learning (AEMML)-based machine learning (ML) model to predict five design metrics: drawing quality, uniqueness, elegance, usefulness, and creativity. The proposed model utilizes knowledge from large external datasets through transfer learning (TL), simultaneously processes text and sketch data from early-phase concepts, and effectively fuses the multimodal information through a mutual cross-attention mechanism. To study the efficacy of multimodal learning (MML) and attention-based information fusion, we compare (1) the baseline MML model with its unimodal counterparts and (2) the attention-enhanced models with their baseline counterparts, in terms of their explanatory power for the variability of the design metrics. The results show that MML improves the model's explanatory power by 0.05–0.12, and that the mutual cross-attention mechanism further increases it by 0.05–0.09, yielding the highest explanatory power of 0.44 for drawing quality, 0.60 for uniqueness, 0.45 for elegance, 0.43 for usefulness, and 0.32 for creativity. Our findings highlight the benefit of using multimodal representations for design metric assessment.
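
The mutual cross-attention fusion the abstract describes can be illustrated with a short sketch. Below is a minimal PyTorch example, assuming each modality (text, sketch) has already been encoded into a sequence of fixed-dimension embeddings; the class name, layer sizes, pooling choice, and hyperparameters are illustrative assumptions, not the paper's actual implementation.

# Minimal sketch of mutual cross-attention fusion for two modalities.
# Assumes pre-encoded text and sketch features; names and dimensions
# are hypothetical, not taken from the paper.
import torch
import torch.nn as nn

class MutualCrossAttentionFusion(nn.Module):
    def __init__(self, embed_dim: int = 256, num_heads: int = 4):
        super().__init__()
        # "Mutual" cross-attention: each stream queries the other.
        self.text_to_sketch = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.sketch_to_text = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Regression head for the five design metrics.
        self.head = nn.Linear(2 * embed_dim, 5)

    def forward(self, text_feats: torch.Tensor, sketch_feats: torch.Tensor) -> torch.Tensor:
        # Text queries attend over sketch keys/values, and vice versa.
        text_attended, _ = self.text_to_sketch(text_feats, sketch_feats, sketch_feats)
        sketch_attended, _ = self.sketch_to_text(sketch_feats, text_feats, text_feats)
        # Mean-pool each fused sequence, then concatenate the two streams.
        fused = torch.cat(
            [text_attended.mean(dim=1), sketch_attended.mean(dim=1)], dim=-1
        )
        # Outputs: drawing quality, uniqueness, elegance, usefulness, creativity.
        return self.head(fused)

# Example usage with random stand-in features.
model = MutualCrossAttentionFusion()
text = torch.randn(8, 32, 256)    # batch of 8, 32 text-token embeddings
sketch = torch.randn(8, 49, 256)  # batch of 8, 49 sketch-patch embeddings
print(model(text, sketch).shape)  # torch.Size([8, 5])

The key design point this sketch captures is that attention runs in both directions, so each modality's representation is conditioned on the other before fusion, rather than the two streams being concatenated independently.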


    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/4292372
    Collections
    • Journal of Mechanical Design

Full item record

contributor author: Song, Binyang
contributor author: Miller, Scarlett
contributor author: Ahmed, Faez
date accessioned: 2023-08-16T18:43:06Z
date available: 2023-08-16T18:43:06Z
date copyright: 2/3/2023
date issued: 2023
identifier issn: 1050-0472
identifier other: md_145_4_041410.pdf
identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4292372
publisher: The American Society of Mechanical Engineers (ASME)
title: Attention-Enhanced Multimodal Learning for Conceptual Design Evaluations
type: Journal Paper
journal volume: 145
journal issue: 4
journal title: Journal of Mechanical Design
identifier doi: 10.1115/1.4056669
journal firstpage: 41410-1
journal lastpage: 41410-12
page: 12
tree: Journal of Mechanical Design; 2023; Volume 145; Issue 4
content type: Fulltext