YaBeSH Engineering and Technology Library


    Knowledge Acquisition of Self-Organizing Systems With Deep Multiagent Reinforcement Learning

    Source: Journal of Computing and Information Science in Engineering, 2021, Volume 22, Issue 2, Page 21010-1
    Author: Ji, Hao; Jin, Yan
    DOI: 10.1115/1.4052800
    Publisher: The American Society of Mechanical Engineers (ASME)
    Abstract: Self-organizing systems (SOS) can perform complex tasks in unforeseen situations with adaptability. Previous work has introduced field-based approaches and rule-based social structuring for individual agents to not only comprehend the task situations but also take advantage of the social rule-based agent relations to accomplish their tasks without a centralized controller. Although the task fields and social rules can be predefined for relatively simple task situations, when the task complexity increases and the task environment changes, having a priori knowledge about these fields and rules may not be feasible. In this paper, a multiagent reinforcement learning (RL) based model is proposed as a design approach to solving the rule generation problem with complex SOS tasks. A deep multiagent reinforcement learning algorithm was devised as a mechanism to train SOS agents for knowledge acquisition of the task field and social rules. Learning stability, functional differentiation, and robustness properties of this learning approach were investigated with respect to changing team sizes and task variations. Through computer simulation studies of a box-pushing problem, the results show that there is an optimal range of the number of agents that achieves good learning stability; agents in a team learn to differentiate from other agents as team sizes and box dimensions change; and the robustness of the learned knowledge is stronger against external noise than against changing task constraints.
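    The abstract describes a decentralized deep multiagent RL scheme in which each agent learns task-field and social-rule knowledge from a shared box-pushing reward. The following is a minimal illustrative sketch, assuming PyTorch, independent per-agent Q-networks, and a toy one-dimensional box-pushing environment; the environment, reward, network sizes, and hyperparameters are assumptions for illustration only, not the setup used in the paper.

    # Hypothetical sketch: independent deep Q-learning agents pushing a box
    # along a line toward a goal. Names, rewards, and hyperparameters are
    # illustrative assumptions, not the paper's implementation.
    import random
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    N_AGENTS, GOAL, EPISODES, HORIZON = 2, 10.0, 200, 50
    ACTIONS = [-1.0, 0.0, 1.0]                      # push left, stay, push right

    def step(box, pushes):
        new_box = box + sum(pushes)                          # box moves with the net push
        reward = abs(GOAL - box) - abs(GOAL - new_box)       # shared reward: progress toward goal
        return new_box, reward

    # One small Q-network per agent: decentralized policies, common reward.
    nets = [nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, len(ACTIONS)))
            for _ in range(N_AGENTS)]
    opts = [torch.optim.Adam(net.parameters(), lr=1e-3) for net in nets]

    def act(net, obs, eps):
        if random.random() < eps:                   # epsilon-greedy exploration
            return random.randrange(len(ACTIONS))
        with torch.no_grad():
            return int(net(torch.tensor([[obs]])).argmax())

    for ep in range(EPISODES):
        box, eps = 0.0, max(0.05, 1.0 - ep / EPISODES)
        for _ in range(HORIZON):
            choices = [act(net, box, eps) for net in nets]
            new_box, r = step(box, [ACTIONS[c] for c in choices])
            for net, opt, c in zip(nets, opts, choices):
                q = net(torch.tensor([[box]]))[0, c]             # Q-value of the taken action
                with torch.no_grad():                            # one-step TD target
                    target = r + 0.95 * net(torch.tensor([[new_box]])).max()
                opt.zero_grad()
                F.mse_loss(q, target).backward()
                opt.step()
            box = new_box
            if abs(GOAL - box) < 0.5:                            # box reached the goal
                break

    In this toy setting, varying N_AGENTS or the task geometry gives a rough way to probe, in spirit, the learning-stability and differentiation questions the abstract raises.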
     


    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/4285198
    Collections
    • Journal of Computing and Information Science in Engineering

    Full item record

    contributor author: Ji, Hao
    contributor author: Jin, Yan
    date accessioned: 2022-05-08T09:29:28Z
    date available: 2022-05-08T09:29:28Z
    date copyright: 12/9/2021 12:00:00 AM
    date issued: 2021
    identifier issn: 1530-9827
    identifier other: jcise_22_2_021010.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4285198
    description abstract: (as above)
    publisher: The American Society of Mechanical Engineers (ASME)
    title: Knowledge Acquisition of Self-Organizing Systems With Deep Multiagent Reinforcement Learning
    type: Journal Paper
    journal volume: 22
    journal issue: 2
    journal title: Journal of Computing and Information Science in Engineering
    identifier doi: 10.1115/1.4052800
    journal first page: 21010-1
    journal last page: 21010-12
    pages: 12
    tree: Journal of Computing and Information Science in Engineering; 2021; Volume 22; Issue 2
    content type: Fulltext