YaBeSH Engineering and Technology Library

    Offline Reinforcement Learning for Adaptive Control in Manufacturing Processes: A Press Hardening Case Study

    Source: Journal of Computing and Information Science in Engineering, 2024, Volume 25, Issue 1, page 11004-1
    Authors: Nievas, Nuria; Espinosa-Leal, Leonardo; Pagès-Bernaus, Adela; Abio, Albert; Echeverria, Lluís; Bonada, Francesc
    DOI: 10.1115/1.4066999
    Publisher: The American Society of Mechanical Engineers (ASME)
    Abstract: This paper explores the application of offline reinforcement learning in batch manufacturing, with a specific focus on press hardening processes. Offline reinforcement learning presents a viable alternative to traditional control and reinforcement learning methods, which often rely on impractical real-world interactions or complex simulations and iterative adjustments to bridge the gap between simulated and real-world environments. We demonstrate how offline reinforcement learning can improve control policies by leveraging existing data, thereby streamlining the training pipeline and reducing reliance on high-fidelity simulators. Our study evaluates the impact of varying data exploration rates by creating five datasets with exploration rates ranging from ε=0 to ε=0.8. Using the conservative Q-learning algorithm, we train and assess policies against both a dynamic baseline and a static industry-standard policy. The results indicate that while offline reinforcement learning effectively refines behavior policies and enhances supervised learning methods, its effectiveness is heavily dependent on the quality and exploratory nature of the initial behavior policy.
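
    The abstract describes a concrete pipeline: collect batch datasets under ε-greedy behavior policies at five exploration rates (ε = 0 to 0.8), then train a conservative Q-learning (CQL) policy offline on each dataset and compare it against the behavior policy. As a rough illustration of that pipeline (not the authors' code), the Python sketch below substitutes a hypothetical toy MDP for the press-hardening simulator; the state/action sizes, reward, and all function names are assumptions made for this example.

        # Minimal tabular sketch of the abstract's pipeline: ε-greedy data
        # collection followed by offline conservative Q-learning (CQL).
        # The MDP here is a toy stand-in, NOT the paper's press-hardening model.
        import numpy as np

        rng = np.random.default_rng(0)
        N_STATES, N_ACTIONS, GAMMA = 10, 4, 0.95

        def step(s, a):
            """Hypothetical process cycle: stochastic transition and reward."""
            s_next = (s + a + rng.integers(0, 2)) % N_STATES
            reward = -abs(s_next - N_STATES // 2)  # stay near a target state
            return s_next, reward

        def collect_dataset(behavior_q, epsilon, n_steps=5000):
            """Roll out an ε-greedy behavior policy, logging (s, a, r, s')."""
            data, s = [], 0
            for _ in range(n_steps):
                if rng.random() < epsilon:
                    a = int(rng.integers(N_ACTIONS))
                else:
                    a = int(np.argmax(behavior_q[s]))
                s_next, r = step(s, a)
                data.append((s, a, r, s_next))
                s = s_next
            return data

        def train_cql(data, alpha=1.0, lr=0.1, epochs=50):
            """Offline Q-learning plus CQL's conservative penalty
            alpha * (logsumexp_a Q(s, a) - Q(s, a_data)): for a tabular Q its
            gradient pushes down all actions via a softmax and pushes the
            logged action back up, penalizing out-of-dataset actions."""
            q = np.zeros((N_STATES, N_ACTIONS))
            for _ in range(epochs):
                for s, a, r, s_next in data:
                    td_target = r + GAMMA * np.max(q[s_next])
                    q[s, a] += lr * (td_target - q[s, a])  # standard TD step
                    soft = np.exp(q[s] - q[s].max())
                    soft /= soft.sum()
                    q[s] -= lr * alpha * soft              # push down all actions
                    q[s, a] += lr * alpha                  # except the logged one
            return q

        # One dataset per exploration rate, mirroring the paper's five ε values;
        # a zero-initialized behavior Q plays the role of the static policy.
        for eps in (0.0, 0.2, 0.4, 0.6, 0.8):
            dataset = collect_dataset(np.zeros((N_STATES, N_ACTIONS)), eps)
            q = train_cql(dataset)
            print(f"eps={eps}: greedy policy {np.argmax(q, axis=1)}")

    The ε sweep mirrors the study's main variable: with ε = 0 the dataset covers only the behavior policy's actions, leaving the conservative learner little to improve on, while larger ε values add the action coverage that offline RL needs.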

    URI: http://yetl.yabesh.ir/yetl1/handle/yetl/4306369
    Collections
    • Journal of Computing and Information Science in Engineering

    Full item record

    contributor author: Nievas, Nuria
    contributor author: Espinosa-Leal, Leonardo
    contributor author: Pagès-Bernaus, Adela
    contributor author: Abio, Albert
    contributor author: Echeverria, Lluís
    contributor author: Bonada, Francesc
    date accessioned: 2025-04-21T10:31:23Z
    date available: 2025-04-21T10:31:23Z
    date copyright: 11/12/2024
    date issued: 2024
    identifier issn: 1530-9827
    identifier other: jcise_25_1_011004.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4306369
    publisher: The American Society of Mechanical Engineers (ASME)
    title: Offline Reinforcement Learning for Adaptive Control in Manufacturing Processes: A Press Hardening Case Study
    type: Journal Paper
    journal volume: 25
    journal issue: 1
    journal title: Journal of Computing and Information Science in Engineering
    identifier doi: 10.1115/1.4066999
    journal firstpage: 11004-1
    journal lastpage: 11004-11
    pages: 11
    tree: Journal of Computing and Information Science in Engineering; 2024; Volume 25; Issue 1
    contenttype: Fulltext
    DSpace software copyright © 2002-2015 DuraSpace
    The "DSpace" digital library software was localized into Persian by YaBeSH for Iranian libraries | Contact YaBeSH