YaBeSH Engineering and Technology Library


    Large Language Models for Computer-Aided Design Fine Tuned: Dataset and Experiments

Source: Journal of Mechanical Design, 2025, Volume 147, Issue 4, Page 41710-1
Author: Sun, Yuewan; Li, Xingang; Sha, Zhenghui
    DOI: 10.1115/1.4067713
    Publisher: The American Society of Mechanical Engineers (ASME)
    Abstract: Despite the power of large language models (LLMs) in various cross-modal generation tasks, their ability to generate 3D computer-aided design (CAD) models from text remains underexplored due to the scarcity of suitable datasets. Additionally, there is a lack of multimodal CAD datasets that include both reconstruction parameters and text descriptions, which are essential for the quantitative evaluation of the CAD generation capabilities of multimodal LLMs. To address these challenges, we developed a dataset of CAD models, sketches, and image data for representative mechanical components such as gears, shafts, and springs, along with natural language descriptions collected via Amazon Mechanical Turk. By using CAD programs as a bridge, we facilitate the conversion of textual output from LLMs into precise 3D CAD designs. To enhance the text-to-CAD generation capabilities of GPT models and demonstrate the utility of our dataset, we developed a pipeline to generate fine-tuning training data for GPT-3.5. We fine-tuned four GPT-3.5 models with various data sampling strategies based on the length of a CAD program. We evaluated these models using parsing rate and intersection over union (IoU) metrics, comparing their performance to that of GPT-4 without fine-tuning. The new knowledge gained from the comparative study on the four different fine-tuned models provided us with guidance on the selection of sampling strategies to build training datasets in fine-tuning practices of LLMs for text-to-CAD generation, considering the trade-off between part complexity, model performance, and cost.
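The abstract evaluates generated CAD models with an intersection over union (IoU) metric. As a minimal sketch of what such a metric computes, the snippet below assumes the generated and ground-truth 3D models have already been voxelized into boolean occupancy grids of equal shape; the voxelization step is omitted, and the function name is illustrative, not taken from the paper.

```python
import numpy as np

def voxel_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU of two boolean occupancy grids: |A ∩ B| / |A ∪ B|."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    if union == 0:  # both grids empty: treat the shapes as identical
        return 1.0
    return float(intersection / union)

# Toy example: two overlapping 2x2x2 blocks inside a 4x4x4 grid.
a = np.zeros((4, 4, 4), dtype=bool)
b = np.zeros((4, 4, 4), dtype=bool)
a[0:2, 0:2, 0:2] = True  # 8 occupied voxels
b[1:3, 0:2, 0:2] = True  # 8 occupied voxels, 4 shared with a
print(voxel_iou(a, b))   # 4 / 12 -> 0.3333...
```

A higher IoU means the generated geometry occupies more of the same space as the reference part; the paper pairs this with a parsing rate (the fraction of generated CAD programs that execute without error) to capture both validity and geometric accuracy.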

    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/4308337
    Collections
    • Journal of Mechanical Design

    Show full item record

contributor author: Sun, Yuewan
contributor author: Li, Xingang
contributor author: Sha, Zhenghui
date accessioned: 2025-08-20T09:28:26Z
date available: 2025-08-20T09:28:26Z
date copyright: 2/27/2025 12:00:00 AM
date issued: 2025
identifier issn: 1050-0472
identifier other: md-24-1560.pdf
identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4308337
description abstract: Despite the power of large language models (LLMs) in various cross-modal generation tasks, their ability to generate 3D computer-aided design (CAD) models from text remains underexplored due to the scarcity of suitable datasets. Additionally, there is a lack of multimodal CAD datasets that include both reconstruction parameters and text descriptions, which are essential for the quantitative evaluation of the CAD generation capabilities of multimodal LLMs. To address these challenges, we developed a dataset of CAD models, sketches, and image data for representative mechanical components such as gears, shafts, and springs, along with natural language descriptions collected via Amazon Mechanical Turk. By using CAD programs as a bridge, we facilitate the conversion of textual output from LLMs into precise 3D CAD designs. To enhance the text-to-CAD generation capabilities of GPT models and demonstrate the utility of our dataset, we developed a pipeline to generate fine-tuning training data for GPT-3.5. We fine-tuned four GPT-3.5 models with various data sampling strategies based on the length of a CAD program. We evaluated these models using parsing rate and intersection over union (IoU) metrics, comparing their performance to that of GPT-4 without fine-tuning. The new knowledge gained from the comparative study on the four different fine-tuned models provided us with guidance on the selection of sampling strategies to build training datasets in fine-tuning practices of LLMs for text-to-CAD generation, considering the trade-off between part complexity, model performance, and cost.
publisher: The American Society of Mechanical Engineers (ASME)
title: Large Language Models for Computer-Aided Design Fine Tuned: Dataset and Experiments
type: Journal Paper
journal volume: 147
journal issue: 4
journal title: Journal of Mechanical Design
identifier doi: 10.1115/1.4067713
journal first page: 41710-1
journal last page: 41710-15
pages: 15
tree: Journal of Mechanical Design; 2025; Volume 147; Issue 4
content type: Fulltext
    DSpace software copyright © 2002-2015  DuraSpace
"DSpace" digital library software localized into Persian by YaBeSH for Iranian libraries | Contact YaBeSH