Instructing LLMs to Negotiate using Reinforcement Learning with Verifiable Rewards
Shuze Daniel Liu, Claire Chen, Jiabao Sean Xiao, Lei Lei, Yuheng Zhang, Yisong Yue, David Simchi-Levi
Why It Matters
By effectively training LLMs to negotiate using verifiable rewards, this research advances the potential for autonomous agents in complex economic interactions, which could have wide-ranging applications in commerce and automated decision-making.
Contributions
- Introduction of a framework for training LLMs in negotiation using RLVR, with the trained agent outperforming much larger frontier models in surplus extraction.
Insights
- The four-phase strategic evolution of the agent highlights the complexity of learning negotiation tactics in LLMs.
Limitations
- The study focuses on a specific buyer-seller scenario and may not generalize to all negotiation contexts or types of agents.
Tags
- agent
- alignment
- llm
Abstract
arXiv:2604.09855v1
The recent advancement of Large Language Models (LLMs) has established their potential as autonomous interactive agents. However, they often struggle in strategic games of incomplete information, such as bilateral price negotiation. In this paper, we investigate whether Reinforcement Learning with Verifiable Rewards (RLVR) can effectively teach LLMs to negotiate. Specifically, we explore the strategic behaviors that emerge during the learning process. We introduce a framework that trains a mid-sized buyer agent against a regulated LLM seller across a wide distribution of real-world products. By grounding reward signals directly in the maximization of economic surplus and strict adherence to private budget constraints, we reveal a novel four-phase strategic evolution. The agent progresses from naive bargaining to using aggressive starting prices, moves through a phase of deadlock, and ultimately develops sophisticated persuasive skills. Our results demonstrate that this verifiable training allows a 30B agent to significantly outperform frontier models over ten times its size in extracting surplus. Furthermore, the trained agent generalizes robustly to stronger counterparties unseen during training and remains effective even when facing hostile, adversarial seller personas.
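The abstract describes a reward grounded in economic surplus and a private budget constraint. A minimal sketch of what such a verifiable reward might look like is below; the function name, the penalty value, and the normalization are all illustrative assumptions, not details taken from the paper.

```python
def negotiation_reward(final_price: float, budget: float,
                       deal_reached: bool, price_floor: float = 0.0) -> float:
    """Hypothetical surplus-based verifiable reward for a buyer agent.

    Surplus is the gap between the buyer's private budget and the agreed
    price; agreeing to a price above budget is penalized to enforce the
    constraint. The -1.0 penalty and [0, 1] normalization are illustrative.
    """
    if not deal_reached:
        return 0.0  # no deal: no surplus extracted
    if final_price > budget:
        return -1.0  # hard penalty for violating the private budget
    # normalize surplus so rewards are comparable across products
    return (budget - final_price) / max(budget - price_floor, 1e-9)
```

For example, closing a deal at 80 against a budget of 100 yields a normalized surplus of 0.2, while any deal above budget is punished regardless of how small the overshoot is. Because the reward is computed directly from the transcript's outcome, it is verifiable without a learned reward model.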