Dynamic Resource Allocation in Systems-of-Systems Using a Heuristic-Based Interpretable Deep Reinforcement Learning
Source: Journal of Mechanical Design; 2022; Volume 144; Issue 9; Page 091711
Author: Chen, Qiliang; Heydari, Babak
DOI: 10.1115/1.4055057
Publisher: The American Society of Mechanical Engineers (ASME)
Abstract: Systems-of-systems (SoS) often include multiple agents that interact in both cooperative and competitive modes. Moreover, they involve multiple resources, including energy, information, and bandwidth. When these resources are limited, agents must decide how to share them cooperatively to reach the system-level goal while autonomously performing the tasks assigned to them. This paper takes a step toward addressing these challenges by proposing a dynamic two-tier learning framework, based on deep reinforcement learning, that enables dynamic resource allocation while acknowledging the autonomy of the system constituents. The two-tier learning framework, which decouples the learning process of the SoS constituents from that of the resource manager, ensures that the autonomy and learning of the SoS constituents are not compromised by interventions executed by the resource manager. We apply the proposed two-tier learning framework to a customized OpenAI Gym environment and compare its results against baseline resource-allocation methods, showing the superior performance of the two-tier learning scheme across a range of key SoS parameters. We then use the results of this experiment and apply our heuristic inference method to interpret the decisions of the resource manager for a range of environment and agent parameters.
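No code accompanies this record, so the sketch below is only a rough, hypothetical illustration of the two-tier idea described in the abstract: a resource manager that observes system-level feedback and splits a limited budget among autonomous constituent agents, each of which keeps its own decision making. All class and function names are invented for illustration and are not taken from the paper; the placeholder heuristics stand in for the learned deep RL policies.

```python
import numpy as np

# Hypothetical two-tier sketch (names invented, not from the paper):
# tier 1 = constituent agents that act autonomously,
# tier 2 = a resource manager that only decides how the budget is split.

rng = np.random.default_rng(0)


class ConstituentAgent:
    """Autonomous SoS constituent with its own (placeholder) policy."""

    def __init__(self, efficiency):
        self.efficiency = efficiency  # how well resources convert to task progress

    def act(self, allocation):
        # Placeholder policy: progress depends on the agent's own decision
        # making plus the resources granted by the manager.
        effort = rng.uniform(0.5, 1.0)
        return self.efficiency * allocation * effort


class ResourceManager:
    """Tier-2 controller that splits a fixed budget across agents."""

    def __init__(self, n_agents, budget):
        self.budget = budget
        # Placeholder weights standing in for a learned manager policy.
        self.weights = np.ones(n_agents)

    def allocate(self, observations):
        # Simple heuristic: favor agents that reported less progress last step.
        scores = self.weights / (1.0 + observations)
        return self.budget * scores / scores.sum()


def run_episode(n_agents=3, budget=10.0, steps=20):
    """Gym-style loop: manager allocates, agents act, progress is fed back."""
    agents = [ConstituentAgent(e) for e in rng.uniform(0.5, 1.5, n_agents)]
    manager = ResourceManager(n_agents, budget)
    progress = np.zeros(n_agents)  # observation returned to the manager
    total_reward = 0.0
    for _ in range(steps):
        allocation = manager.allocate(progress)
        progress = np.array([a.act(r) for a, r in zip(agents, allocation)])
        total_reward += progress.sum()  # system-level objective
    return total_reward


if __name__ == "__main__":
    print(f"episode return: {run_episode():.2f}")
```

In the paper's setup, both tiers are trained with deep reinforcement learning in a customized OpenAI Gym environment; here both policies are replaced by simple placeholders solely to show how the two decision layers interact without the manager overriding the agents' own actions.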
| Field | Value |
| --- | --- |
| contributor author | Chen, Qiliang; Heydari, Babak |
| date accessioned | 2022-12-27T23:17:55Z |
| date available | 2022-12-27T23:17:55Z |
| date copyright | 2022-08-08 |
| date issued | 2022 |
| identifier issn | 1050-0472 |
| identifier other | md_144_9_091711.pdf |
| identifier uri | http://yetl.yabesh.ir/yetl1/handle/yetl/4288328 |
| publisher | The American Society of Mechanical Engineers (ASME) |
| title | Dynamic Resource Allocation in Systems-of-Systems Using a Heuristic-Based Interpretable Deep Reinforcement Learning |
| type | Journal Paper |
| journal volume | 144 |
| journal issue | 9 |
| journal title | Journal of Mechanical Design |
| identifier doi | 10.1115/1.4055057 |
| journal firstpage | 91711 |
| journal lastpage | 91711_14 |
| page | 14 |
| tree | Journal of Mechanical Design; 2022; Volume 144; Issue 9 |
| contenttype | Fulltext |