Emergence and Resilience in Multi-Agent Reinforcement Learning

Title details

Phan, Thomy:
Emergence and Resilience in Multi-Agent Reinforcement Learning.
München: Ludwig-Maximilians-Universität, 2023. - XIV, 69 pp.
(Dissertation, 2023, Ludwig-Maximilians-Universität München)
DOI: https://doi.org/10.5282/edoc.31981

Full text

Link to full text (external URL): Volltext

Project details

Official project title: Innovationszentrum Mobiles Internet (InnoMI)
Project ID: not specified

Project funding: Bayerisches Staatsministerium für Wirtschaft, Infrastruktur, Verkehr und Technologie

Abstract

Our world represents an enormous multi-agent system (MAS), consisting of a plethora of agents that make decisions under uncertainty to achieve certain goals. The interaction of agents constantly affects our world in various ways, leading to the emergence of interesting phenomena like life forms and civilizations that can last for many years while withstanding various kinds of disturbances. Building artificial MAS that are able to adapt and survive similarly to natural MAS is a major goal in artificial intelligence, as a wide range of potential real-world applications like autonomous driving, multi-robot warehouses, and cyber-physical production systems can be straightforwardly modeled as MAS. Multi-agent reinforcement learning (MARL) is a promising approach to building such systems and has achieved remarkable progress in recent years. However, state-of-the-art MARL commonly assumes very idealized conditions to optimize performance in best-case scenarios while neglecting further aspects that are relevant to the real world. In this thesis, we address emergence and resilience in MARL, which are important aspects of building artificial MAS that adapt and survive as effectively as natural MAS do. We first focus on emergent cooperation from local interaction of self-interested agents and introduce a peer incentivization approach based on mutual acknowledgments. We then propose to exploit emergent phenomena to further improve coordination in large cooperative MAS via decentralized planning or hierarchical value function factorization. To maintain multi-agent coordination in the presence of partial changes, similar to classic distributed systems, we present adversarial methods to improve and evaluate resilience in MARL. Finally, we briefly cover a selection of further topics that are relevant to advance MARL towards real-world applicability.
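To illustrate the general idea of peer incentivization via mutual acknowledgments mentioned in the abstract, the following minimal Python sketch shows two self-interested learning agents in a prisoner's-dilemma-like matrix game that exchange acknowledgment rewards whenever a peer's behavior improves their own outcome. The payoff matrix, the acknowledgment rule, and all names are illustrative assumptions for this sketch only, not the thesis's actual algorithm.

import random

# Illustrative two-agent matrix game: both agents are self-interested,
# but mutual cooperation yields the highest joint payoff.
PAYOFF = {  # (action_a, action_b) -> (reward_a, reward_b); 0 = defect, 1 = cooperate
    (0, 0): (1, 1), (0, 1): (4, 0), (1, 0): (0, 4), (1, 1): (3, 3),
}

class Agent:
    def __init__(self, eps=0.1, lr=0.1):
        self.q = [0.0, 0.0]   # action-value estimates for the two actions
        self.eps, self.lr = eps, lr
        self.baseline = 0.0   # running average of own environment reward

    def act(self):
        if random.random() < self.eps:
            return random.randrange(2)
        return max((0, 1), key=lambda a: self.q[a])

    def acknowledge(self, own_reward):
        # Hypothetical rule: send a positive acknowledgment token to the peer
        # when the current outcome beats the agent's own running baseline.
        token = 1.0 if own_reward >= self.baseline else 0.0
        self.baseline += 0.05 * (own_reward - self.baseline)
        return token

    def update(self, action, shaped_reward):
        self.q[action] += self.lr * (shaped_reward - self.q[action])

a, b = Agent(), Agent()
for _ in range(5000):
    act_a, act_b = a.act(), b.act()
    r_a, r_b = PAYOFF[(act_a, act_b)]
    # Each agent learns from its own reward plus the acknowledgment sent by its peer.
    a.update(act_a, r_a + b.acknowledge(r_b))
    b.update(act_b, r_b + a.acknowledge(r_a))

print("Q-values A:", a.q, "Q-values B:", b.q)

The design choice in this sketch is that acknowledgments act as additional shaped rewards received from peers rather than from the environment, so cooperation can emerge from purely local interaction without a centralized reward signal.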

Further details

Publication type: Dissertation
Keywords: artificial intelligence; multi-agent system; reinforcement learning; emergence; resilience
University institutions: Faculties > Fakultät für Mathematik, Physik und Informatik > Institut für Informatik
Title originated at UBT: No
DDC subject areas: 000 Computer science, information & general works > 004 Data processing, computer science
Deposited on: 17 Nov 2025 07:58
Last modified: 17 Nov 2025 07:58
URI: https://eref.uni-bayreuth.de/id/eprint/95259