Adaptive Anytime Multi-Agent Path Finding Using Bandit-Based Large Neighborhood Search

Citation details

Phan, Thomy; Huang, Taoan; Dilkina, Bistra; Koenig, Sven:
Adaptive Anytime Multi-Agent Path Finding Using Bandit-Based Large Neighborhood Search.
In: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38 (2024), No. 16, pp. 17514-17522.
ISSN 2159-5399
DOI: https://doi.org/10.1609/aaai.v38i16.29701

Full text

Link to full text (external URL): Full text

Project details

Project funding: Other
National Science Foundation (NSF) under grant numbers 1817189, 1837779, 1935712, 2121028, 2112533, and 2321786, as well as a gift from Amazon Robotics.

Abstract

Anytime multi-agent path finding (MAPF) is a promising approach to scalable path optimization in large-scale multi-agent systems. State-of-the-art anytime MAPF is based on Large Neighborhood Search (LNS), where a fast initial solution is iteratively optimized by destroying and repairing a fixed number of parts, i.e., the neighborhood of the solution, using randomized destroy heuristics and prioritized planning. Despite their recent success in various MAPF instances, current LNS-based approaches lack exploration and flexibility due to greedy optimization with a fixed neighborhood size, which can lead to low-quality solutions in general. So far, these limitations have been addressed with extensive prior effort in tuning or offline machine learning beyond actual planning. In this paper, we focus on online learning in LNS and propose Bandit-based Adaptive LArge Neighborhood search Combined with Exploration (BALANCE). BALANCE uses a bi-level multi-armed bandit scheme to adapt the selection of destroy heuristics and neighborhood sizes on the fly during search. We evaluate BALANCE on multiple maps from the MAPF benchmark set and empirically demonstrate performance improvements of at least 50% compared to state-of-the-art anytime MAPF in large-scale scenarios. We find that Thompson Sampling performs particularly well compared to alternative multi-armed bandit algorithms.
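The abstract describes a bi-level multi-armed bandit scheme that adapts the choice of destroy heuristic and neighborhood size during LNS. The snippet below is a minimal illustrative Python sketch of such a scheme with Thompson Sampling, not the authors' implementation: the arm names, the binary "did this iteration improve the solution" reward, the Beta(1, 1) priors, and the destroy_and_repair placeholder are assumptions made for illustration only.

```python
# Illustrative sketch (not the paper's code): a bi-level Thompson Sampling
# bandit that picks a destroy heuristic and then a neighborhood size for one
# LNS iteration. Reward is a hypothetical binary "improved the solution" signal.
import random

DESTROY_HEURISTICS = ["random", "agent-based", "map-based"]   # assumed arm names
NEIGHBORHOOD_SIZES = [4, 8, 16, 32]                           # assumed size options


class ThompsonArm:
    """Beta-Bernoulli arm: counts improving vs. non-improving LNS iterations."""

    def __init__(self):
        self.alpha = 1.0  # prior pseudo-count of improvements
        self.beta = 1.0   # prior pseudo-count of non-improvements

    def sample(self):
        return random.betavariate(self.alpha, self.beta)

    def update(self, improved):
        if improved:
            self.alpha += 1.0
        else:
            self.beta += 1.0


class BiLevelBandit:
    """First level picks a destroy heuristic; second level picks a
    neighborhood size conditioned on that heuristic."""

    def __init__(self):
        self.heuristic_arms = {h: ThompsonArm() for h in DESTROY_HEURISTICS}
        self.size_arms = {
            h: {n: ThompsonArm() for n in NEIGHBORHOOD_SIZES}
            for h in DESTROY_HEURISTICS
        }

    def select(self):
        heuristic = max(self.heuristic_arms,
                        key=lambda h: self.heuristic_arms[h].sample())
        size = max(self.size_arms[heuristic],
                   key=lambda n: self.size_arms[heuristic][n].sample())
        return heuristic, size

    def update(self, heuristic, size, improved):
        self.heuristic_arms[heuristic].update(improved)
        self.size_arms[heuristic][size].update(improved)


def anytime_lns(initial_cost, iterations, destroy_and_repair):
    # Hypothetical anytime loop: destroy_and_repair stands in for the actual
    # MAPF neighborhood destruction and prioritized-planning repair step.
    bandit, best_cost = BiLevelBandit(), initial_cost
    for _ in range(iterations):
        heuristic, size = bandit.select()
        new_cost = destroy_and_repair(heuristic, size, best_cost)
        improved = new_cost < best_cost
        best_cost = min(best_cost, new_cost)
        bandit.update(heuristic, size, improved)
    return best_cost
```

In this sketch, the top-level arm proposes a destroy heuristic, a second bandit conditioned on that choice proposes a neighborhood size, and both levels are updated with the outcome of the destroy-and-repair step, so the search gradually concentrates on combinations that keep improving the solution.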

Further details

Publication type: Journal article
Peer-reviewed: Yes
Keywords: Multiagent Planning; Motion and Path Planning; Coordination and Collaboration; Heuristic Search
University institutions: Faculties > Fakultät für Mathematik, Physik und Informatik > Institut für Informatik
Title produced at UBT: No
DDC subject areas: 000 Computer science, information & general works > 004 Computer science
Deposited on: 17 Nov 2025 13:22
Last modified: 17 Nov 2025 13:22
URI: https://eref.uni-bayreuth.de/id/eprint/95263