The Capability-Reliability Split in Agent Systems
Why frontier agents reach the state of the art on one run and fail at the same task on the next
A frontier agent can occasionally surpass a published research baseline and, in another run on the same task, fail to make any meaningful progress. The pattern recurs often enough across recent evaluations that researchers have started to treat it as a structural feature of agent systems rather than a quirk of any single implementation. Capability asks whether a model can perform a task in principle. Reliability asks whether it does so consistently, across repeated attempts, across small perturbations, and across tasks that take dozens or hundreds of steps to complete. Recent evidence suggests these two properties drift apart faster than benchmark headlines make visible.
The split has practical stakes. An agent system, in this context, refers to a large language model (LLM, the underlying neural network that processes text) coupled with a scaffold (the surrounding software that decides when to call the model, what tools to invoke, and how to handle errors). When the same agent passes a benchmark on Monday and breaks on a near-identical task on Tuesday, the deployment question is no longer whether the technology can do the work. The question becomes how often it does.
When the Same Agent Both Wins and Fails
ResearchGym, a benchmark that places agents inside containerized research environments rebuilt from accepted papers at ICML, ICLR, and ACL, captures the split with unusual clarity. In a controlled evaluation of an agent powered by GPT-5, the system improved over the provided baselines in only 1 of 15 evaluations, an improvement rate of 6.7%, and completed only 26.5% of sub-tasks on average across 39 sub-tasks total [1]. In a single run, the same agent surpassed the solution from an ICML 2025 Spotlight paper, evidence that the underlying capability is real even when the reliability is not. Proprietary scaffolds built on Claude Code (Opus-4.5) and Codex (GPT-5.2) displayed a similar gap.
Across Long Horizons
HORIZON, a cross-domain diagnostic benchmark released in April 2026, looked at the same problem from a different angle. Across more than 3,100 trajectories collected from frontier models in the GPT-5 and Claude families, the authors documented a horizon-dependent degradation pattern. Agents that performed strongly on short tasks broke down on long-horizon work that required extended, interdependent action sequences [2].
Across Many Models
The Holistic Agent Leaderboard (HAL), introduced by a group at Princeton, ran 21,730 agent rollouts spanning nine models, nine benchmarks, and four domains, comparing models, scaffolds, and benchmarks side by side and bringing the cost of large-scale agent evaluation down by roughly an order of magnitude [3]. One counterintuitive finding from that data is worth pausing on. Higher reasoning effort, the practice of allocating more inference-time compute to deliberation, reduced accuracy in the majority of runs.
A move that should obviously help did not. Bigger headline numbers and steadier behavior are not the same thing, even when the same lever is being pulled.
Why Standard Benchmarks Miss the Gap
Part of the reliability story is methodological. Most agent evaluations report pass@1, the probability that an agent succeeds on a single attempt. A 2026 study collected 60,000 agentic trajectories on SWE-Bench-Verified, a software engineering benchmark, across three models and two scaffolds, and found that single-run pass@1 estimates vary by 2.2 to 6.0 percentage points depending on which run is selected, with standard deviations exceeding 1.5 percentage points even at temperature 0, the setting that should produce the most deterministic behavior [4]. Reported improvements of 2 to 3 percentage points, the kind that often headline a new release, may reflect evaluation noise rather than genuine progress. Trajectories diverged early, often within the first few percent of generated tokens (a token is the unit of text the model processes, roughly a word or word fragment), and these small differences cascaded into entirely different solution strategies.
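To see why single-run numbers mislead, it helps to make the measurement concrete. The sketch below estimates pass@1 separately for each of several repeated runs and reports the spread between them; the run data and task names are invented for illustration, not drawn from the study.

```python
import statistics

# Invented results for illustration: each run records, per task, whether the
# agent succeeded. The cited study repeats this across thousands of trajectories.
runs = [
    {"task-001": True,  "task-002": False, "task-003": True,  "task-004": True},
    {"task-001": True,  "task-002": True,  "task-003": False, "task-004": True},
    {"task-001": False, "task-002": False, "task-003": True,  "task-004": True},
]

# Single-run pass@1: the fraction of tasks solved in one particular run.
per_run_pass_at_1 = [sum(r.values()) / len(r) for r in runs]

# The spread across runs is exactly what one reported pass@1 number hides.
spread = max(per_run_pass_at_1) - min(per_run_pass_at_1)

print("per-run pass@1:", [round(p, 3) for p in per_run_pass_at_1])
print(f"mean: {statistics.mean(per_run_pass_at_1):.3f}")
print(f"std dev across runs: {statistics.stdev(per_run_pass_at_1):.3f}")
print(f"best-to-worst spread: {spread:.3f}")
```

Reporting the mean and the spread across runs, rather than a single draw, is the cheapest defense against mistaking noise for progress.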
A Reliability Science for Agents
Just as a cockpit instrument panel separates altitude, airspeed, and fuel into independent gauges so a pilot can see when one is failing, a reliability science framework released in March 2026 splits agent performance into separate dimensions tracked over time. The authors evaluated 10 models across 23,392 episodes on a 396-task benchmark that varied task duration and domain, and proposed four metrics including a Reliability Decay Curve, which tracks how success rate falls as tasks lengthen, and a Variance Amplification Factor, which measures how variability in outcomes grows with horizon [5]. Capability and reliability rankings diverged substantially, with multi-rank inversions at long horizons. A model ranked first on short tasks could fall to fourth or fifth once tasks stretched out. Frontier models showed the highest meltdown rates, up to 19%, because they attempted ambitious multi-step strategies that sometimes spiraled into failure.
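The two metrics are easier to picture with a small sketch. The following shows one plausible reading of a decay curve and a variance amplification factor computed from per-episode outcomes grouped by horizon; it is not the paper's reference implementation, and the horizons and outcomes are invented.

```python
from collections import defaultdict
import statistics

# Invented episodes: (horizon_in_steps, succeeded). In the cited framework,
# horizons and domains vary across a 396-task benchmark.
episodes = [
    (10, True), (10, True), (10, False), (50, True), (50, False),
    (50, False), (200, False), (200, True), (200, False), (200, False),
]

# Group outcomes by horizon.
by_horizon = defaultdict(list)
for horizon, ok in episodes:
    by_horizon[horizon].append(1.0 if ok else 0.0)

# Reliability decay curve (one reading): success rate as a function of horizon.
decay_curve = {h: statistics.mean(v) for h, v in sorted(by_horizon.items())}

# Variance amplification (one reading): outcome variance at each horizon,
# normalized by the variance at the shortest horizon.
base_var = statistics.pvariance(by_horizon[min(by_horizon)])
amplification = {
    h: (statistics.pvariance(v) / base_var if base_var else float("nan"))
    for h, v in sorted(by_horizon.items())
}

print("success rate by horizon:", decay_curve)
print("variance amplification by horizon:", amplification)
```

A flat decay curve and an amplification factor near 1 would describe an agent that is as dependable at step 200 as at step 10; the divergence the paper reports is the opposite picture.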
A March 2025 survey of agent evaluation methods, updated through 2026, identified the same pattern at a higher level. Cost-efficiency, safety, and robustness remain underassessed in most agent benchmarks [6].
The Mechanics of Long-Horizon Failure
The next question is mechanical. What is actually breaking when an agent that performs well on short tasks falls apart on long ones? A January 2026 analysis frames the answer as a mismatch between reasoning and planning. Step-wise reasoning, the chain-of-thought pattern that has driven much of the recent progress in LLMs, induces what the authors call a step-wise greedy policy [7]. The agent picks the locally best next action without modeling delayed consequences. Over short horizons this often suffices. Over long horizons, early myopic commitments compound and become difficult to recover from. The proposed fix, FLARE (Future-aware Lookahead with Reward Estimation), pushes value propagation back through the trajectory so that downstream outcomes can shape early decisions. Across multiple benchmarks, FLARE often allowed a smaller open-source model to outperform a larger frontier model running standard step-by-step reasoning. The argument draws a clearer line between reasoning, the local manipulation of intermediate steps, and planning, the explicit consideration of how early choices constrain later ones.
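A toy example makes the distinction visible. The sketch below is not FLARE; it only contrasts a greedy policy that scores each step in isolation with a lookahead policy that propagates downstream value back to the first decision. The environment, actions, and reward values are all invented.

```python
# Toy environment: each state maps to available actions, each with an immediate
# reward and a next state. Values are invented to make the greedy trap visible.
TRANSITIONS = {
    "start": {"shortcut": (5.0, "dead_end"), "setup": (1.0, "prepared")},
    "dead_end": {"retry": (0.0, "dead_end")},
    "prepared": {"finish": (10.0, "done")},
    "done": {},
}

def greedy_action(state):
    """Step-wise greedy: pick the action with the best immediate reward."""
    actions = TRANSITIONS[state]
    return max(actions, key=lambda a: actions[a][0]) if actions else None

def lookahead_value(state, depth):
    """Best achievable cumulative reward within `depth` steps from `state`."""
    if depth == 0 or not TRANSITIONS[state]:
        return 0.0
    return max(
        reward + lookahead_value(nxt, depth - 1)
        for reward, nxt in TRANSITIONS[state].values()
    )

def lookahead_action(state, depth):
    """Pick the action whose downstream value, not just its immediate reward, is best."""
    actions = TRANSITIONS[state]
    return max(
        actions,
        key=lambda a: actions[a][0] + lookahead_value(actions[a][1], depth - 1),
    )

print("greedy first move:   ", greedy_action("start"))        # shortcut: 5.0 now, stuck later
print("lookahead first move:", lookahead_action("start", 2))  # setup: 1.0 now, 10.0 later
```

The greedy policy takes the tempting first step and strands itself; the lookahead policy accepts a smaller immediate reward because the downstream value is higher. Per the summary above, the cited work aims for an analogous effect in real agent trajectories rather than a toy state machine.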
ResearchGym catalogs the same phenomenon from the failure side. Across runs, the recurring problems were impatience, poor time and resource management, overconfidence in weak hypotheses, difficulty coordinating parallel experiments, and hard limits imposed by context length, the maximum number of tokens an LLM can consider at once [1]. None of these are pure capability failures. An agent that knows what a good experiment looks like can still abandon it too early, commit to the wrong hypothesis with too much confidence, or simply run out of working memory before the task ends. The capabilities the model has in isolation do not translate cleanly into behavior under sustained pressure.
What Helps, and What Surprisingly Does Not
Mitigation research has clustered around test-time scaling, the practice of allocating more compute at inference time to improve outcomes without retraining. The first systematic study of test-time scaling for language agents, published in mid-2025, found that scaling helps, that knowing when to reflect matters, that list-wise verification methods, which evaluate a set of candidates jointly rather than comparing them pair by pair, outperform alternatives, and that diversifying rollouts has a positive effect on task performance [8]. A 2026 framework called ARTIS extended these ideas to settings where actions touch external systems and cannot be undone, by decoupling exploration from commitment through simulated interactions before real-world execution [9]. The authors flag a less obvious finding. Naive LLM-based simulators struggle to capture rare but high-impact failure modes, which means simulators have to be deliberately trained to be honest about how things go wrong, not only how they go right.
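To make the list-wise verification point concrete: the contrast with pairwise ranking is one joint judgment over all candidates versus one judgment per pair. The sketch below uses an invented `judge` stub in place of the verifier (in practice an LLM call); it illustrates the structure of the two approaches, not the study's implementation.

```python
from itertools import combinations

# Invented candidate solutions produced by diversified rollouts.
candidates = ["patch: add bounds check", "patch: rewrite loop", "patch: widen timeout"]

def judge_listwise(options):
    """Hypothetical verifier: sees every candidate at once and returns the index
    of the preferred one. In a real scaffold this would be a single LLM call."""
    return max(range(len(options)), key=lambda i: len(options[i]))  # stand-in scoring

def judge_pairwise(a, b):
    """Hypothetical verifier: compares two candidates; a full ranking needs one
    call per pair, i.e. n*(n-1)/2 calls."""
    return a if len(a) >= len(b) else b  # stand-in scoring

# List-wise selection: one joint pass over all candidates.
listwise_winner = candidates[judge_listwise(candidates)]

# Pairwise selection: compare every pair, then tally wins.
wins = {c: 0 for c in candidates}
for a, b in combinations(candidates, 2):
    wins[judge_pairwise(a, b)] += 1
pairwise_winner = max(wins, key=wins.get)

n = len(candidates)
print(f"list-wise winner: {listwise_winner!r} (1 verifier call)")
print(f"pairwise winner:  {pairwise_winner!r} ({n * (n - 1) // 2} verifier calls)")
```

Beyond call count, the study's finding is that the joint view tends to select better candidates; the stub above does not capture why, only the shape of the comparison.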
What Helps
For long-horizon coding agents specifically, a 2026 study argued that test-time scaling is fundamentally a problem of representation, selection, and reuse rather than one of simply generating more attempts [10]. By converting each rollout into a structured summary of hypotheses, progress, and failure modes, then using methods like Recursive Tournament Voting and Parallel-Distill-Refine to select among candidates, the authors moved Claude-4.5-Opus from 70.9% to 77.6% on SWE-Bench Verified and from 46.9% to 59.1% on Terminal-Bench v2.0.
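As a rough illustration of tournament-style selection over rollout summaries (not the paper's implementation), the sketch below repeatedly groups candidates, advances one voted winner per group, and recurses until a single summary remains. The summaries and the `vote` stub are invented; in practice the vote would be an LLM judgment over the structured summaries.

```python
# Invented rollout summaries: in the cited study each rollout is distilled into
# hypotheses, progress, and failure modes; here short strings stand in.
summaries = [
    "hypothesis: race condition; progress: repro found; failure: fix untested",
    "hypothesis: stale cache; progress: fix passes 3/5 tests; failure: flaky CI",
    "hypothesis: stale cache; progress: fix passes 5/5 tests; failure: none seen",
    "hypothesis: config drift; progress: none; failure: wrong file edited",
]

def vote(group):
    """Stand-in for a judge (in practice an LLM) that picks the most promising
    summary in a small group. Here: prefer the one reporting the most passing tests."""
    def score(s):
        return 1.0 if "5/5" in s else (0.5 if "3/5" in s else 0.0)
    return max(group, key=score)

def tournament(pool, group_size=2):
    """Recursively reduce the pool: split into small groups, advance one winner
    per group, repeat until a single candidate remains."""
    if len(pool) == 1:
        return pool[0]
    winners = [vote(pool[i:i + group_size]) for i in range(0, len(pool), group_size)]
    return tournament(winners, group_size)

print("selected rollout:", tournament(summaries))
```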
What Hurts
The same reliability framework that documented divergence between capability and reliability also reported a counterintuitive negative result. Across all 10 models tested, memory scaffolds, the systems designed to give agents persistent context across turns, universally hurt long-horizon performance [5]. The default assumption that more memory is always better appears to be wrong in this regime, at least for the scaffolds and tasks studied. The HAL finding that higher reasoning effort can reduce accuracy points in a similar direction. More of a thing is not always more useful.
What This Might Mean
The picture that emerges, while still incomplete, points toward a few useful adjustments rather than a single fix. The field appears to be moving toward treating reliability as a first-class evaluation dimension rather than a footnote to capability. Multi-run pass@1, statistical power analysis, and pessimistic bounds like pass^k are entering the conversation precisely because the cost of mistaking noise for progress is now visible. The design assumption that more compute, more memory, or more reasoning effort always helps is being tested empirically and sometimes failing. The gap between "the agent did this once" and "the agent does this when it matters" remains the gap that separates impressive demos from production deployments.
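The distinction between an optimistic bound like pass@k and a pessimistic bound like pass^k is worth spelling out: the first asks whether at least one of k attempts succeeds, the second whether all k do. Assuming independent attempts with a per-attempt success rate p (an idealization, and the value below is invented), the two diverge quickly as k grows.

```python
# Assumed, independent per-attempt success probability; 0.7 is an invented figure.
p = 0.7

for k in (1, 2, 5, 10):
    pass_at_k = 1 - (1 - p) ** k   # optimistic: one success out of k is enough
    pass_hat_k = p ** k            # pessimistic: all k attempts must succeed
    print(f"k={k:>2}  pass@k={pass_at_k:.3f}  pass^k={pass_hat_k:.3f}")
```

A deployment that can retry until something works cares about the first column; a deployment that cannot afford a single failed run cares about the second.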
For organizations evaluating agent systems, the implication is straightforward enough to state without overstatement. A single high score on a benchmark suggests what the system can sometimes do. It does not, on its own, describe what the system will do under repetition, perturbation, or duration. The evidence from late 2025 and early 2026 suggests treating these as different questions, and budgeting evaluation accordingly. One open question is whether the next generation of agent improvements will close the split or widen it.
References
- A. Garikaparthi et al., "ResearchGym: Evaluating Language Model Agents on Real-World AI Research," arXiv, 2026, [Online]
- X. J. Wang et al., "The Long-Horizon Task Mirage? Diagnosing Where and Why Agentic Systems Break," arXiv, 2026, [Online]
- S. Kapoor et al., "Holistic Agent Leaderboard: The Missing Infrastructure for AI Agent Evaluation," arXiv, 2025, [Online]
- B. Bjarnason et al., "On Randomness in Agentic Evals," arXiv, 2026, [Online]
- A. Khanal et al., "Beyond pass@1: A Reliability Science Framework for Long-Horizon LLM Agents," arXiv, 2026, [Online]
- A. Yehudai et al., "Survey on Evaluation of LLM-based Agents," arXiv, 2025, [Online]
- Z. Wang et al., "Why Reasoning Fails to Plan: A Planning-Centric Analysis of Long-Horizon Decision Making in LLM Agents," arXiv, 2026, [Online]
- K. Zhu et al., "Scaling Test-time Compute for LLM Agents," arXiv, 2025, [Online]
- X. Zeng et al., "ARTIS: Agentic Risk-Aware Test-Time Scaling via Iterative Simulation," arXiv, 2026, [Online]
- J. Kim et al., "Scaling Test-Time Compute for Agentic Coding," arXiv, 2026, [Online]