Adaptive Network Intervention for Complex Systems: A Hierarchical Graph Reinforcement Learning Approach
Source: Journal of Computing and Information Science in Engineering; 2025; Volume 25; Issue 6; page 61006-1
DOI: 10.1115/1.4068483
Publisher: The American Society of Mechanical Engineers (ASME)
Abstract: Effective governance and steering of behavior in complex multiagent systems (MAS) are essential for managing system-wide outcomes, particularly in environments where interactions are structured by dynamic networks. In many applications, the goal is to promote pro-social behavior among agents, where network structure plays a pivotal role in shaping these interactions. This article introduces a hierarchical graph reinforcement learning (HGRL) framework that governs such systems through targeted interventions in the network structure. Operating within the constraints of limited managerial authority, the HGRL framework demonstrates superior performance across a range of environmental conditions, outperforming established baseline methods. Our findings highlight the critical influence of agent-to-agent learning (social learning) on system behavior: under low social learning, the HGRL manager preserves cooperation, forming robust core-periphery networks dominated by cooperators. In contrast, high social learning accelerates defection, leading to sparser, chain-like networks. Additionally, the study underscores the importance of the system manager’s authority level in preventing system-wide failures, such as agent rebellion or collapse, positioning HGRL as a powerful tool for dynamic network-based governance.
Show full item record
contributor author | Chen, Qiliang | |
contributor author | Heydari, Babak | |
date accessioned | 2025-08-20T09:35:27Z | |
date available | 2025-08-20T09:35:27Z | |
date copyright | 2025-04-30 | |
date issued | 2025 | |
identifier issn | 1530-9827 | |
identifier other | jcise-24-1571.pdf | |
identifier uri | http://yetl.yabesh.ir/yetl1/handle/yetl/4308525 | |
description abstract | Effective governance and steering of behavior in complex multiagent systems (MAS) are essential for managing system-wide outcomes, particularly in environments where interactions are structured by dynamic networks. In many applications, the goal is to promote pro-social behavior among agents, where network structure plays a pivotal role in shaping these interactions. This article introduces a hierarchical graph reinforcement learning (HGRL) framework that governs such systems through targeted interventions in the network structure. Operating within the constraints of limited managerial authority, the HGRL framework demonstrates superior performance across a range of environmental conditions, outperforming established baseline methods. Our findings highlight the critical influence of agent-to-agent learning (social learning) on system behavior: under low social learning, the HGRL manager preserves cooperation, forming robust core-periphery networks dominated by cooperators. In contrast, high social learning accelerates defection, leading to sparser, chain-like networks. Additionally, the study underscores the importance of the system manager’s authority level in preventing system-wide failures, such as agent rebellion or collapse, positioning HGRL as a powerful tool for dynamic network-based governance. | |
publisher | The American Society of Mechanical Engineers (ASME) | |
title | Adaptive Network Intervention for Complex Systems: A Hierarchical Graph Reinforcement Learning Approach | |
type | Journal Paper | |
journal volume | 25 | |
journal issue | 6 | |
journal title | Journal of Computing and Information Science in Engineering | |
identifier doi | 10.1115/1.4068483 | |
journal firstpage | 61006-1 | |
journal lastpage | 61006-13 | |
page | 13 | |
tree | Journal of Computing and Information Science in Engineering; 2025; Volume 25; Issue 6 | |
contenttype | Fulltext |