Evaluating Large Language Models for Material Selection

Source: Journal of Computing and Information Science in Engineering; 2024; volume 25; issue 2; page 21004-1
Authors: Grandi, Daniele; Jain, Yash Patawari; Groom, Allin; Cramer, Brandon; McComb, Christopher
DOI: 10.1115/1.4066730
Publisher: The American Society of Mechanical Engineers (ASME)

Abstract: Material selection is a crucial step in conceptual design due to its significant impact on the functionality, aesthetics, manufacturability, and sustainability of the final product. This study investigates the use of large language models (LLMs) for material selection in the product design process and compares the performance of LLMs against expert choices for various design scenarios. By collecting a dataset of expert material preferences, the study provides a basis for evaluating how well LLMs can align with expert recommendations through prompt engineering and hyperparameter tuning. The divergence between LLM and expert recommendations is measured across different model configurations, prompt strategies, and temperature settings. This approach allows for a detailed analysis of factors influencing the LLMs' effectiveness in recommending materials. The results from this study highlight two failure modes: the low variance of recommendations across different design scenarios and the tendency toward overestimating material appropriateness. Parallel prompting is identified as a useful prompt-engineering method when using LLMs for material selection. The findings further suggest that, while LLMs can provide valuable assistance, their recommendations often vary significantly from those of human experts. This discrepancy underscores the need for further research into how LLMs can be better tailored to replicate expert decision-making in material selection. This work contributes to the growing body of knowledge on how LLMs can be integrated into the design process, offering insights into their current limitations and potential for future improvements.
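The abstract names two methodological ideas that can be made concrete: a divergence measure between LLM and expert appropriateness ratings, and "parallel prompting", i.e., issuing one independent prompt per candidate material rather than rating all materials in a single context. The sketch below is a minimal, illustrative reading, not the paper's actual instrument: the 0-10 rating scale, the design scenarios, the material shortlist, the z-score normalization with a mean-absolute-difference metric, and the `llm_score` stub are all assumptions introduced here for illustration.

```python
# Illustrative sketch only: the scale, scenarios, materials, metric, and
# llm_score stub are assumptions, not the paper's actual protocol.
from concurrent.futures import ThreadPoolExecutor
from statistics import mean, stdev

designs = ["kitchen utensil grip", "bicycle frame"]   # hypothetical scenarios
materials = ["aluminum", "steel", "ABS", "titanium"]  # hypothetical shortlist

def llm_score(design: str, material: str, temperature: float = 0.7) -> float:
    """Placeholder for one LLM call. A real implementation would send a
    prompt such as 'How appropriate is {material} for a {design}?
    Answer 0-10.' and parse the number; here we return a dummy rating."""
    return float(hash((design, material)) % 11)

def zscores(ratings: dict[str, float]) -> dict[str, float]:
    """Normalize one rater's scores so scale-use bias (e.g., an LLM that
    rates every material highly) does not dominate the comparison."""
    mu, sigma = mean(ratings.values()), stdev(ratings.values())
    if sigma == 0:  # degenerate case: identical scores for every material
        return {m: 0.0 for m in ratings}
    return {m: (v - mu) / sigma for m, v in ratings.items()}

def divergence(llm: dict[str, float], expert: dict[str, float]) -> float:
    """Mean absolute difference of z-scored ratings over shared materials."""
    lz, ez = zscores(llm), zscores(expert)
    return mean(abs(lz[m] - ez[m]) for m in llm.keys() & expert.keys())

def score_design(design: str) -> dict[str, float]:
    """Parallel prompting: one independent prompt per material, so each
    rating cannot anchor on the others in a shared context window."""
    with ThreadPoolExecutor() as pool:
        return dict(zip(materials,
                        pool.map(lambda m: llm_score(design, m), materials)))

if __name__ == "__main__":
    expert = {"aluminum": 8.0, "steel": 5.0,
              "ABS": 7.0, "titanium": 4.0}  # made-up expert ratings
    llm = score_design(designs[0])
    print(f"divergence for '{designs[0]}': {divergence(llm, expert):.2f}")
```

Normalizing per rater before comparing is one way to account for the "overestimating material appropriateness" failure mode the abstract reports: an LLM that scores everything high differs from experts mainly in offset, and z-scoring isolates disagreement in the relative ranking of materials.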
Full item record
contributor author: Grandi, Daniele
contributor author: Jain, Yash Patawari
contributor author: Groom, Allin
contributor author: Cramer, Brandon
contributor author: McComb, Christopher
date accessioned: 2025-04-21T10:37:41Z
date available: 2025-04-21T10:37:41Z
date copyright: 2024-11-14
date issued: 2024
identifier issn: 1530-9827
identifier other: jcise_25_2_021004.pdf
identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4306579
publisher: The American Society of Mechanical Engineers (ASME)
title: Evaluating Large Language Models for Material Selection
type: Journal Paper
journal volume: 25
journal issue: 2
journal title: Journal of Computing and Information Science in Engineering
identifier doi: 10.1115/1.4066730
journal first page: 21004-1
journal last page: 21004-12
page: 12
tree: Journal of Computing and Information Science in Engineering; 2024; volume 25; issue 2
contenttype: Fulltext