YaBeSH Engineering and Technology Library


    Evaluating Large Language Models for Material Selection

Source: Journal of Computing and Information Science in Engineering, 2024, Volume 25, Issue 2, Page 21004-1
Authors: Grandi, Daniele; Jain, Yash Patawari; Groom, Allin; Cramer, Brandon; McComb, Christopher
    DOI: 10.1115/1.4066730
    Publisher: The American Society of Mechanical Engineers (ASME)
Abstract: Material selection is a crucial step in conceptual design due to its significant impact on the functionality, aesthetics, manufacturability, and sustainability of the final product. This study investigates the use of large language models (LLMs) for material selection in the product design process and compares the performance of LLMs against expert choices for various design scenarios. By collecting a dataset of expert material preferences, the study provides a basis for evaluating how well LLMs can align with expert recommendations through prompt engineering and hyperparameter tuning. The divergence between LLM and expert recommendations is measured across different model configurations, prompt strategies, and temperature settings. This approach allows for a detailed analysis of factors influencing the LLMs' effectiveness in recommending materials. The results from this study highlight two failure modes: the low variance of recommendations across different design scenarios and the tendency toward overestimating material appropriateness. Parallel prompting is identified as a useful prompt-engineering method when using LLMs for material selection. The findings further suggest that, while LLMs can provide valuable assistance, their recommendations often vary significantly from those of human experts. This discrepancy underscores the need for further research into how LLMs can be better tailored to replicate expert decision-making in material selection. This work contributes to the growing body of knowledge on how LLMs can be integrated into the design process, offering insights into their current limitations and potential for future improvements.
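    To make the abstract's two central ideas concrete (parallel prompting, where a single prompt asks the model to score every candidate material at once, and a divergence measure between LLM and expert scores), a minimal sketch follows. This is illustrative only and not the authors' pipeline: the prompt wording, material list, and all scores below are hypothetical placeholders, and mean absolute difference is just one plausible way to quantify disagreement.

    ```python
    # Illustrative sketch only -- NOT the paper's code. The materials,
    # prompt wording, and all scores are placeholder assumptions; the
    # actual study's models, prompts, and expert data differ.

    from statistics import mean

    MATERIALS = ["steel", "aluminum", "titanium", "glass", "wood", "thermoplastic"]

    def build_parallel_prompt(design: str, criterion: str) -> str:
        """One prompt scoring ALL candidate materials at once (parallel
        prompting), instead of one prompt per material (serial)."""
        listing = ", ".join(MATERIALS)
        return (
            f"Design: {design}. Criterion: {criterion}.\n"
            f"Rate each material ({listing}) from 0-10 for suitability. "
            f"Answer as 'material: score' lines only."
        )

    def parse_scores(reply: str) -> dict[str, float]:
        """Parse 'material: score' lines from a model's reply."""
        scores = {}
        for line in reply.splitlines():
            if ":" in line:
                name, _, value = line.partition(":")
                try:
                    scores[name.strip().lower()] = float(value)
                except ValueError:
                    continue  # skip malformed lines
        return scores

    def divergence(llm: dict[str, float], expert: dict[str, float]) -> float:
        """Mean absolute difference over materials rated by both sources --
        one simple measure of LLM/expert disagreement."""
        shared = llm.keys() & expert.keys()
        return mean(abs(llm[m] - expert[m]) for m in shared)

    if __name__ == "__main__":
        # Placeholder reply and expert scores, purely to exercise the functions.
        fake_reply = "steel: 8\naluminum: 7\ntitanium: 9\nglass: 2\nwood: 3\nthermoplastic: 6"
        llm_scores = parse_scores(fake_reply)
        expert_scores = {"steel": 6, "aluminum": 8, "titanium": 5,
                         "glass": 1, "wood": 2, "thermoplastic": 7}
        print(build_parallel_prompt("bicycle frame", "lightweight"))
        print("divergence:", round(divergence(llm_scores, expert_scores), 2))
    ```

    In an actual evaluation along the abstract's lines, the prompt built here would be sent to a chat model at several temperature settings, the parsed scores compared against the expert dataset, and the divergence aggregated across design scenarios and model configurations.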

    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/4306579
    Collections
    • Journal of Computing and Information Science in Engineering

Full item record

    contributor author: Grandi, Daniele
    contributor author: Jain, Yash Patawari
    contributor author: Groom, Allin
    contributor author: Cramer, Brandon
    contributor author: McComb, Christopher
    date accessioned: 2025-04-21T10:37:41Z
    date available: 2025-04-21T10:37:41Z
    date copyright: 2024-11-14
    date issued: 2024
    identifier issn: 1530-9827
    identifier other: jcise_25_2_021004.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4306579
    description abstract: [see abstract above]
    publisher: The American Society of Mechanical Engineers (ASME)
    title: Evaluating Large Language Models for Material Selection
    type: Journal Paper
    journal volume: 25
    journal issue: 2
    journal title: Journal of Computing and Information Science in Engineering
    identifier doi: 10.1115/1.4066730
    journal firstpage: 21004-1
    journal lastpage: 21004-12
    page: 12
    tree: Journal of Computing and Information Science in Engineering; 2024; Volume 25; Issue 2
    contenttype: Fulltext
    DSpace software copyright © 2002-2015 DuraSpace
    DSpace digital library software localized into Persian by Yabesh for Iranian libraries | Contact Yabesh