General-purpose LLMs as Models of Human Driver Behavior: The Case of Simplified Merging

Samir H. A. Mohammad, Wouter Mooi, Arkady Zgonnikov

Tags

  • cs.AI
  • cs.RO

Abstract

arXiv:2604.09609v1

Human behavior models are essential as behavior references and for simulating human agents in virtual safety assessment of automated vehicles (AVs), yet current models face a trade-off between interpretability and flexibility. General-purpose large language models (LLMs) offer a promising alternative: a single model potentially deployable without parameter fitting across diverse scenarios. However, what LLMs can and cannot capture about human driving behavior remains poorly understood. We address this gap by embedding two general-purpose LLMs (OpenAI o3 and Google Gemini 2.5 Pro) as standalone, closed-loop driver agents in a simplified one-dimensional merging scenario and comparing their behavior against human data using quantitative and qualitative analyses. Both models reproduce human-like intermittent operational control and tactical dependencies on spatial cues. However, neither consistently captures the human response to dynamic velocity cues, and safety performance diverges sharply between models. A systematic prompt ablation study reveals that prompt components act as model-specific inductive biases that do not transfer across LLMs. These findings suggest that general-purpose LLMs could potentially serve as standalone, ready-to-use human behavior models in AV evaluation pipelines, but future research is needed to better understand their failure modes and ensure their validity as models of human driving behavior.