
How LLMs Might Think

Joseph Gottlieb, Ethan Kemp, Matthew Trager

Score: 7.500
LLM: n/a
Embedding: 0.528
Recency: n/a


Why It Matters

Understanding the nature of LLM cognition is crucial for developing more effective AI systems and addressing philosophical questions about machine intelligence.

Contributions

  • Introduces the concept of arational, associative thinking in LLMs as a potential mode of cognition.

Insights

  • The distinction between rational and arational thinking in LLMs could reshape our understanding of AI cognition.

Limitations

  • The argument may lack empirical support and relies heavily on philosophical reasoning.

Tags

  • llm
  • other
  • reasoning

Abstract

arXiv:2604.09674v1. Do large language models (LLMs) think? Daniel Stoljar and Zhihe Vincent Zhang have recently developed an argument from rationality for the claim that LLMs do not think. We contend, however, that the argument from rationality not only falters, but leaves open an intriguing possibility: that LLMs engage only in arational, associative forms of thinking, and have purely associative minds. Our positive claim is that if LLMs think at all, they likely think precisely in this manner.