19 August 2025
[PDG 449] Context Rot: How Increasing Input Tokens Impacts LLM Performance
In this AI Paper Discussion Group (PDG) session, we will discuss the paper "Context Rot: How Increasing Input Tokens Impacts LLM Performance." The paper examines how the performance of Large Language Models (LLMs) degrades as input length grows, showing that these models do not process their context uniformly. We will go through the findings of the study, which assessed 18 state-of-the-art LLMs, including GPT-4.1, Claude 4, Gemini 2.5, and Qwen3. Everyone is welcome to join, whether you have worked through the paper in detail or simply want to listen in. The discussion will be held in German or English, depending on the participants.