Summary:
- The developers of Terminal-Bench have released version 2.0 alongside the Harbor framework for testing AI agents.
- Terminal-Bench 2.0 offers a more challenging task set with improved task quality and reliability.
- Initial results show a GPT-5-powered Codex CLI agent leading the Terminal-Bench 2.0 leaderboard in task success.
Article:
The creators of Terminal-Bench, a benchmark suite designed to evaluate the performance of autonomous AI agents on real-world terminal-based tasks, have introduced version 2.0 alongside a new framework called Harbor. This dual release aims to address longstanding challenges in testing and optimizing AI agents, especially those operating autonomously in realistic developer environments.
Terminal-Bench 2.0 replaces the previous version as the standard for assessing the capabilities of cutting-edge models. The updated suite features 89 tasks that have undergone extensive validation to ensure they are solvable, realistic, and well-defined. Tasks like ‘download-youtube’ have been removed or revamped due to their reliance on unstable third-party APIs.
Harbor, the accompanying runtime framework, allows developers and researchers to scale evaluations across thousands of cloud containers. It supports various agent architectures, scalable supervised fine-tuning, reinforcement learning pipelines, and seamless integration with Terminal-Bench 2.0.
Early results from the Terminal-Bench 2.0 leaderboard show OpenAI’s Codex CLI, powered by GPT-5, in the lead with a 49.6% success rate. Other GPT-5 variants and Claude Sonnet 4.5-based agents also perform strongly, highlighting the active competition among top models.
Users can test or submit agents by installing Harbor and running benchmarks with a few CLI commands, as sketched below. Terminal-Bench 2.0 is already being integrated into research workflows focused on agentic reasoning, code generation, and tool use. Together, Terminal-Bench 2.0 and Harbor mark a step toward standardized, scalable agent-evaluation infrastructure in the AI ecosystem.
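As a rough illustration of that workflow, the sketch below shows what an evaluation run might look like from the command line. The package name, the `run` subcommand, and all flag names are assumptions for illustration, not confirmed details of the Harbor CLI; the official documentation should be consulted for the exact commands.

```
# Hypothetical workflow; package name, subcommands, and flags are assumptions.
pip install harbor                 # install the Harbor runtime framework

# Run an agent against the Terminal-Bench 2.0 task set.
# --dataset, --agent, --model, and --n-concurrent are illustrative flag names.
harbor run \
  --dataset terminal-bench-2.0 \
  --agent codex-cli \
  --model gpt-5 \
  --n-concurrent 8                 # fan evaluations out across containers
```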