LLM-Coordination: Evaluating and Analyzing Multi-agent Coordination Abilities in Large Language Models

University of California, Santa Cruz

The LLM-Coordination Benchmark consists of two tasks: Agentic Coordination, which studies the ability of LLMs to act, and Coordination QA, which studies their ability to reason.

Abstract

The emergent reasoning and Theory of Mind (ToM) abilities demonstrated by Large Language Models (LLMs) make them promising candidates for developing coordination agents. In this study, we introduce a new LLM-Coordination Benchmark aimed at a detailed analysis of LLMs within the context of Pure Coordination Games, where participating agents must cooperate for the greatest mutual gain. The benchmark evaluates LLMs through two distinct tasks: (1) Agentic Coordination, where LLMs act as proactive participants in 4 pure coordination games; and (2) Coordination Question Answering (QA), where LLMs answer 198 multiple-choice questions drawn from the 4 games to evaluate three key reasoning abilities: Environment Comprehension, ToM Reasoning, and Joint Planning. Furthermore, to enable LLMs for multi-agent coordination, we introduce a Cognitive Architecture for Coordination (CAC) framework that can easily integrate different LLMs as plug-and-play modules for pure coordination games. Our findings indicate that LLM agents equipped with GPT-4-turbo achieve performance comparable to state-of-the-art reinforcement learning methods in games that require commonsense actions based on the environment. Moreover, zero-shot coordination experiments reveal that, unlike RL methods, LLM agents are robust to new, unseen partners. However, results on Coordination QA show substantial room for improvement in the Theory of Mind reasoning and joint planning abilities of LLMs. The analysis also sheds light on how an LLM's understanding of its environment and its partner's beliefs and intentions shapes its ability to plan for coordination.

Cognitive Architecture for Coordination (CAC)


We present the Cognitive Architecture for Coordination (CAC), a framework that lets LLMs interact with game environments in a plug-and-play manner. CAC translates game elements into textual descriptions and leverages auxiliary LLMs for improved coordination, enabling effective multi-agent collaboration.
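
As a rough illustration of this plug-and-play idea, the sketch below shows an agent loop that converts a game observation into text, asks a planner LLM for the next action, and uses an auxiliary LLM to repair invalid answers. The names (describe_state, choose_action, the toy observation fields) are hypothetical and are not the released CAC code or prompts.

from typing import Callable, List

# Hypothetical sketch of a CAC-style agent loop; the real framework's
# modules and prompts differ. It only illustrates translating game state
# to text and delegating action choice to plug-and-play LLMs.

LLM = Callable[[str], str]  # any function mapping a prompt to a completion


def describe_state(observation: dict, legal_actions: List[str]) -> str:
    """Convert a structured game observation into a textual prompt."""
    lines = [f"{k}: {v}" for k, v in observation.items()]
    lines.append("Available actions: " + ", ".join(legal_actions))
    lines.append("Choose exactly one action and reply with its name only.")
    return "\n".join(lines)


def choose_action(planner: LLM, verifier: LLM, observation: dict,
                  legal_actions: List[str]) -> str:
    """Ask the planner LLM for an action; repair via the auxiliary LLM if needed."""
    prompt = describe_state(observation, legal_actions)
    proposal = planner(prompt).strip()
    if proposal in legal_actions:
        return proposal
    # Auxiliary LLM repairs invalid or free-form answers.
    repair = verifier(prompt + f"\nThe previous answer '{proposal}' is invalid. "
                      "Pick one action from the list.").strip()
    return repair if repair in legal_actions else legal_actions[0]


if __name__ == "__main__":
    # Toy stand-in LLMs so the sketch runs without an API key.
    planner = lambda p: "deliver soup"          # pretend planner output
    verifier = lambda p: "place onion in pot"   # pretend repair output
    obs = {"pot": "2 onions", "partner": "holding onion"}
    print(choose_action(planner, verifier, obs, ["place onion in pot", "deliver soup"]))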

CAC Demo Video


Coordination QA

LLMs achieve their best results on the Environment Comprehension questions: the best-performing LLM, GPT-4-turbo, answers more than 80% of them correctly. Performance across LLMs drops on the more challenging Theory of Mind reasoning questions, though GPT-4-turbo remains competent, reaching 54% accuracy. Accuracy on Joint Planning questions remains weak, with even the best LLM scoring below 40%, indicating substantial room for improvement in LLMs' coordination reasoning.
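
Since Coordination QA is multiple-choice, per-category accuracy like the figures above can be computed with a short scoring loop. The sketch below assumes a JSON file of questions with "category", "answer", and "prediction" fields; the file name and schema are illustrative assumptions, not the benchmark's actual format.

import json
from collections import defaultdict

# Hypothetical scoring sketch for a Coordination QA-style evaluation.
def category_accuracy(path: str = "coordination_qa.json") -> dict:
    """Compute per-category accuracy from stored model predictions."""
    with open(path) as f:
        questions = json.load(f)  # list of {"category", "answer", "prediction"}

    correct, total = defaultdict(int), defaultdict(int)
    for q in questions:
        cat = q["category"]  # e.g. "Environment Comprehension", "ToM Reasoning", "Joint Planning"
        total[cat] += 1
        correct[cat] += int(q["prediction"] == q["answer"])
    return {cat: correct[cat] / total[cat] for cat in total}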

BibTeX

@misc{agashe2023evaluating,
      title={Evaluating Multi-Agent Coordination Abilities in Large Language Models}, 
      author={Saaket Agashe and Yue Fan and Xin Eric Wang},
      year={2023},
      eprint={2310.03903},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}