DesignQA: A Multimodal Benchmark for Evaluating Large Language Models’ Understanding of Engineering Documentation

Source: Journal of Computing and Information Science in Engineering, 2024, Volume 25, Issue 2, Page 21009-1
Authors: Doris, Anna C.; Grandi, Daniele; Tomich, Ryan; Alam, Md Ferdous; Ataei, Mohammadmehdi; Cheong, Hyunmin; Ahmed, Faez
DOI: 10.1115/1.4067333
Publisher: The American Society of Mechanical Engineers (ASME)
Abstract: This research introduces DesignQA, a novel benchmark aimed at evaluating the proficiency of multimodal large language models (MLLMs) in comprehending and applying engineering requirements in technical documentation. Developed with a focus on real-world engineering challenges, DesignQA uniquely combines multimodal data—including textual design requirements, CAD images, and engineering drawings—derived from the Formula SAE student competition. Unlike many existing MLLM benchmarks, DesignQA contains document-grounded visual questions where the input image and the input document come from different sources. The benchmark features automatic evaluation metrics and is divided into segments—Rule Comprehension, Rule Compliance, and Rule Extraction—based on tasks that engineers perform when designing according to requirements. We evaluate state-of-the-art models (at the time of writing) like GPT-4o, GPT-4, Claude-Opus, Gemini-1.0, and LLaVA-1.5 against the benchmark, and our study uncovers existing gaps in MLLMs’ abilities to interpret complex engineering documentation. The MLLMs tested, while promising, struggle to reliably retrieve relevant rules from the Formula SAE documentation, face challenges in recognizing technical components in CAD images, and encounter difficulty in analyzing engineering drawings. These findings underscore the need for multimodal models that can better handle the multifaceted questions characteristic of designing according to technical documentation. This benchmark sets a foundation for future advancements in AI-supported engineering design processes. DesignQA is publicly available online.
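The abstract describes the benchmark's organization: items grouped into three segments (Rule Comprehension, Rule Compliance, Rule Extraction), mixed text/image inputs, and automatic scoring. As a rough illustration only, a DesignQA-style item and evaluation loop might be organized as in the sketch below; the field names, segment labels, and the query_mllm helper are hypothetical assumptions for illustration, not the benchmark's published schema or code.

```python
# A minimal, hypothetical sketch of a DesignQA-style evaluation loop.
# Field names, segment labels, and query_mllm are illustrative assumptions,
# not the benchmark's actual schema or API.
from dataclasses import dataclass

@dataclass
class BenchmarkItem:
    segment: str            # e.g. "rule_comprehension", "rule_compliance", "rule_extraction"
    question: str           # question posed to the model
    document: str           # excerpt from the FSAE rules document
    image_path: str | None  # CAD image or engineering drawing, if the question is visual
    answer: str             # ground-truth answer used by the automatic metric

def query_mllm(item: BenchmarkItem) -> str:
    """Placeholder for a call to a multimodal model (GPT-4o, Claude, etc.)."""
    raise NotImplementedError

def exact_match_accuracy(items: list[BenchmarkItem]) -> float:
    """One simple automatic metric: normalized exact match over all items."""
    correct = sum(
        query_mllm(item).strip().lower() == item.answer.strip().lower()
        for item in items
    )
    return correct / len(items)
```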
Full item record
contributor author | Doris, Anna C.
contributor author | Grandi, Daniele
contributor author | Tomich, Ryan
contributor author | Alam, Md Ferdous
contributor author | Ataei, Mohammadmehdi
contributor author | Cheong, Hyunmin
contributor author | Ahmed, Faez
date accessioned | 2025-04-21T10:08:52Z
date available | 2025-04-21T10:08:52Z
date copyright | 12/23/2024
date issued | 2024
identifier issn | 1530-9827
identifier other | jcise_25_2_021009.pdf
identifier uri | http://yetl.yabesh.ir/yetl1/handle/yetl/4305596
publisher | The American Society of Mechanical Engineers (ASME)
title | DesignQA: A Multimodal Benchmark for Evaluating Large Language Models’ Understanding of Engineering Documentation
type | Journal Paper
journal volume | 25
journal issue | 2
journal title | Journal of Computing and Information Science in Engineering
identifier doi | 10.1115/1.4067333
journal firstpage | 21009-1
journal lastpage | 21009-17
page | 17
tree | Journal of Computing and Information Science in Engineering, 2024, Volume 25, Issue 2
contenttype | Fulltext