Tutorial


Tutorial I

Human-Centric Cooperative Multi-Agent RL with LLM Reasoning

Yali Du
King's College London, London, UK


From collaborative industrial robots to personal AI assistants, the integration of AI into daily life underscores the critical need for effective and reliable coordination, both among autonomous agents and between agents and humans. Achieving multi-agent cooperation goes beyond individual interactions to encompass broader societal considerations, including aligning with human values and intentions. In this talk, I will explore two paradigms for leveraging large language models (LLMs) to transfer human knowledge into cooperative decision making. I will discuss how human feedback can address the sparse reward problem in reinforcement learning, and how learning from tutorial books, without real-world interaction, can enhance agents' capabilities.



Biosketch

Yali Du is a Senior Lecturer in AI at King's College London and a Turing Fellow at The Alan Turing Institute. She leads the Cooperative AI Lab. Her research aims to enable machines to demonstrate cooperative and safe behaviour in intelligent decision-making tasks, encompassing multi-agent cooperation, human-AI coordination, and value alignment. She received the AAAI New Faculty Highlights award in 2023 and was named a Rising Star in AI in the same year. She has also delivered tutorials on cooperative multi-agent learning at ACML 2022 and AAAI 2023. She serves as an associate editor for the Journal of Autonomous Agents and Multi-Agent Systems (JAAMAS) and IEEE Transactions on Artificial Intelligence, as an Area Chair for NeurIPS 2024, and as a Senior Programme Committee member for AAAI 2022 and ECAI 2024. She also serves on the organising committees of NeurIPS 2024 and AAMAS 2023. Her research is supported by the EPSRC and the UK AI Safety Institute.