YaBeSH Engineering and Technology Library



    Goal-Directed Design Agents: Integrating Visual Imitation With One-Step Lookahead Optimization for Generative Design

    Source: Journal of Mechanical Design, 2021, Vol. 143, Issue 12, page 0124501-1
    Authors: Raina, Ayush; Puentes, Lucas; Cagan, Jonathan; McComb, Christopher
    DOI: 10.1115/1.4051013
    Publisher: The American Society of Mechanical Engineers (ASME)
    Abstract: Engineering design problems often involve large state and action spaces along with highly sparse rewards. Since an exhaustive search of those spaces is not feasible, humans utilize relevant domain knowledge to condense the search space. Deep learning agents (DLAgents) were previously introduced to use visual imitation learning to model design domain knowledge. This note builds on DLAgents and integrates them with one-step lookahead search to develop goal-directed agents capable of enhancing learned strategies for sequentially generating designs. Goal-directed DLAgents can employ human strategies learned from data along with optimizing an objective function. The visual imitation network from DLAgents is composed of a convolutional encoder–decoder network, acting as a rough planning step that is agnostic to feedback. Meanwhile, the lookahead search identifies the fine-tuned design action guided by an objective. These design agents are trained on an unconstrained truss design problem modeled as a sequential, action-based configuration design problem. The agents are then evaluated on two versions of the problem: the original version used for training and an unseen constrained version with an obstructed construction space. The goal-directed agents outperform the human designers used to train the network as well as the previous feedback-agnostic versions of the agent in both scenarios. This illustrates a design agent framework that can efficiently use feedback to not only enhance learned design strategies but also adapt to unseen design problems.
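
    The abstract describes a two-stage decision loop: an imitation-trained encoder–decoder proposes design actions without seeing any feedback, and a one-step lookahead scores those proposals against an objective before committing to one. A minimal Python sketch of that loop follows; the candidate generator, the toy objective, and apply_action are hypothetical placeholders, since the paper's trained network and truss simulator are not part of this record.

        import numpy as np

        # Hypothetical stand-ins for the paper's components: the trained
        # encoder-decoder and the truss evaluator are not public here.

        def imitation_policy(state, n_candidates=8):
            """Rough planning step: propose candidate actions. In the paper
            this is a convolutional encoder-decoder trained by visual
            imitation; here, random (x1, y1, x2, y2) member placements."""
            rng = np.random.default_rng()
            return [tuple(rng.uniform(0.0, 1.0, size=4))
                    for _ in range(n_candidates)]

        def objective(state, action):
            """Feedback signal (e.g., strength-to-weight ratio of the truss
            after the action). Toy surrogate: prefer members of length 0.5."""
            x1, y1, x2, y2 = action
            return -abs(np.hypot(x2 - x1, y2 - y1) - 0.5)

        def apply_action(state, action):
            """Advance the sequential design by one action (no-op here; the
            real version would update the truss configuration/image)."""
            return state

        def goal_directed_step(state):
            """One-step lookahead: score each imitation-proposed action with
            the objective and commit to the best, rather than taking the
            network's raw, feedback-agnostic output."""
            candidates = imitation_policy(state)
            scores = [objective(state, a) for a in candidates]
            best = candidates[int(np.argmax(scores))]
            return apply_action(state, best), best

        state = np.zeros((64, 64))   # e.g., a rasterized design canvas
        for _ in range(10):          # generate a design sequentially
            state, chosen = goal_directed_step(state)

    The division of labor is the point: the imitation network condenses the large action space cheaply, while the objective supplies the feedback the network itself never sees, which is what allows the same agent to adapt to the unseen constrained problem.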
    • Download: (445.6 KB)
    • Price: 5000 Rial

    URI: http://yetl.yabesh.ir/yetl1/handle/yetl/4278706
    Collections: Journal of Mechanical Design

    Full item record

    contributor author: Raina, Ayush
    contributor author: Puentes, Lucas
    contributor author: Cagan, Jonathan
    contributor author: McComb, Christopher
    date accessioned: 2022-02-06T05:45:49Z
    date available: 2022-02-06T05:45:49Z
    date copyright: 6/9/2021 12:00:00 AM
    date issued: 2021
    identifier issn: 1050-0472
    identifier other: md_143_12_124501.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4278706
    description abstract: Engineering design problems often involve large state and action spaces along with highly sparse rewards. Since an exhaustive search of those spaces is not feasible, humans utilize relevant domain knowledge to condense the search space. Deep learning agents (DLAgents) were previously introduced to use visual imitation learning to model design domain knowledge. This note builds on DLAgents and integrates them with one-step lookahead search to develop goal-directed agents capable of enhancing learned strategies for sequentially generating designs. Goal-directed DLAgents can employ human strategies learned from data along with optimizing an objective function. The visual imitation network from DLAgents is composed of a convolutional encoder–decoder network, acting as a rough planning step that is agnostic to feedback. Meanwhile, the lookahead search identifies the fine-tuned design action guided by an objective. These design agents are trained on an unconstrained truss design problem modeled as a sequential, action-based configuration design problem. The agents are then evaluated on two versions of the problem: the original version used for training and an unseen constrained version with an obstructed construction space. The goal-directed agents outperform the human designers used to train the network as well as the previous feedback-agnostic versions of the agent in both scenarios. This illustrates a design agent framework that can efficiently use feedback to not only enhance learned design strategies but also adapt to unseen design problems.
    publisher: The American Society of Mechanical Engineers (ASME)
    title: Goal-Directed Design Agents: Integrating Visual Imitation With One-Step Lookahead Optimization for Generative Design
    type: Journal Paper
    journal volume: 143
    journal issue: 12
    journal title: Journal of Mechanical Design
    identifier doi: 10.1115/1.4051013
    journal firstpage: 0124501-1
    journal lastpage: 0124501-6
    page: 6
    tree: Journal of Mechanical Design, 2021, Volume 143, Issue 12
    contenttype: Fulltext