

AI coding agents are becoming more capable, but evaluating them is harder than it looks. Most benchmarks focus on a single dimension of agent capabilities; for instance, the popular SWE-Bench benchmark focuses only on fixing issues in open-source Python repositories. Real-world software engineering involves fixing bugs, of course, but it is far more multifaceted: in any single week a software developer may also debug complex issues, build a new greenfield script or app, improve test coverage, fix bugs in a frontend repo, or research unfamiliar APIs; the list goes on.
The OpenHands Index addresses this with a much broader benchmark that evaluates language models across five distinct categories: Issue Resolution (fixing bugs), Greenfield Development (building new applications), Frontend Development (UI tasks requiring visual understanding), Testing (generating tests to reproduce bugs), and Information Gathering (research and documentation tasks). This diversity matters because no single benchmark can capture the full range of what developers actually need from AI assistants.
To date, we’ve evaluated many models, including both commercial APIs and open-weights models, across these five categories. All results, including complete agent trajectories, are published openly on the site. Here are five key findings.
1. Open Models Achieve Strong Performance at an Order of Magnitude Lower Cost
The most expensive models don’t always deliver proportionally better results. Across all five benchmark categories, the performance spread between models is often narrower than their cost differences.
Top-tier commercial models achieve average scores in the 55–65% range across all categories. Meanwhile, more economical options, including some open-weights models, achieve 45–55% at a fraction of the per-task cost. For a typical development workflow involving hundreds of agent invocations per month, this cost difference compounds quickly.
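To make the compounding concrete, here is a back-of-the-envelope sketch in Python. The per-task prices and monthly volume below are illustrative assumptions, not figures from the Index:

```python
# Back-of-the-envelope monthly cost comparison for agent usage.
# All numbers below are illustrative assumptions, not OpenHands Index data.

PREMIUM_COST_PER_TASK = 2.00  # assumed $/task for a top-tier commercial model
BUDGET_COST_PER_TASK = 0.20   # assumed $/task for an economical open-weights model
TASKS_PER_MONTH = 500         # "hundreds of agent invocations per month"

premium_monthly = PREMIUM_COST_PER_TASK * TASKS_PER_MONTH
budget_monthly = BUDGET_COST_PER_TASK * TASKS_PER_MONTH

print(f"Premium model:   ${premium_monthly:,.2f}/month")  # $1,000.00/month
print(f"Budget model:    ${budget_monthly:,.2f}/month")   # $100.00/month
print(f"Monthly savings: ${premium_monthly - budget_monthly:,.2f}")
```

Even with these rough numbers, a 10x per-task price difference turns into hundreds of dollars per developer per month once agent use becomes routine.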
The takeaway: Teams can start with the most capable models to establish the feasibility of incorporating AI agents into their workflow, but if cost becomes a concern, there are plenty of competitive options at a fraction of the price.
2. Locally Deployable Models Now Compete with Commercial APIs
Related to the above, the gap between open-weights and commercial models has narrowed significantly. In our latest evaluations, several open-weights models achieved average scores within a few percentage points of leading commercial offerings across all benchmark categories.
In addition to cost, this matters for organizations with specific requirements around data privacy, on-premises deployment, or customization. Open-weights models can be fine-tuned for specific codebases, integrated with internal tooling, and deployed on dedicated hardware—options not available with API-only services.
The takeaway: Open-weights alternatives are now viable for production use cases, not just experimentation.
3. No Single Model Dominates All Categories
Performance varies substantially across task types. A model that leads in bug fixing (SWE-Bench) may rank mid-pack for greenfield development (Commit0) or information gathering (GAIA).
In our evaluations, the top performer in issue resolution scored only 56% on application building tasks. Conversely, the leader in information gathering achieved 80% on that benchmark but ranked fourth on bug fixing.
The takeaway: Model selection should be driven by your team’s actual task distribution. The OpenHands Index can serve as an initial guide to which models are worth a look; from there, you can run “vibe checks”, systematic evaluations, or A/B tests with the top contenders.
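One simple way to act on this is to weight each model’s per-category score by how often your team actually hits that category. The sketch below illustrates the idea; the category scores and task distribution are hypothetical, so substitute real numbers from the Index:

```python
# Rank models by expected usefulness for *your* task mix rather than a single
# aggregate score. All scores and weights below are hypothetical.

task_distribution = {  # fraction of your team's work in each category
    "issue_resolution": 0.50,
    "greenfield": 0.15,
    "frontend": 0.15,
    "testing": 0.10,
    "info_gathering": 0.10,
}

model_scores = {  # per-category accuracy (hypothetical)
    "model_a": {"issue_resolution": 0.70, "greenfield": 0.56, "frontend": 0.35,
                "testing": 0.60, "info_gathering": 0.65},
    "model_b": {"issue_resolution": 0.62, "greenfield": 0.60, "frontend": 0.40,
                "testing": 0.55, "info_gathering": 0.80},
}

def expected_score(scores: dict[str, float]) -> float:
    """Weight each category score by how often your team hits that category."""
    return sum(task_distribution[cat] * s for cat, s in scores.items())

for name, scores in sorted(model_scores.items(),
                           key=lambda kv: -expected_score(kv[1])):
    print(f"{name}: {expected_score(scores):.1%}")
```

With the weights above, a model that trails on the headline average can still win for a team that spends half its time on issue resolution.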
4. Multimodal Tasks Remain Challenging
Frontend development tasks, where agents must interpret screenshots, mockups, and visual requirements, show the widest performance variance across models.
On SWE-Bench Multimodal, scores range from 22% to 42%, with most models clustering in the 27–36% range. Even top-performing models struggle with tasks requiring visual understanding combined with code generation.
The takeaway: Multimodal capabilities are still maturing. Teams working heavily on frontend development should expect more iteration cycles when using AI agents.
5. Transparent Benchmarking Catches Issues That Aggregate Scores Miss
Comprehensive evaluation reveals failure modes invisible in single-number scores. By publishing full agent trajectories, we’ve identified cases where models achieved correct outcomes through unintended shortcuts.
One recent example: analysis of our Commit0 (application building) results revealed that some models were retrieving code from git history rather than implementing it from scratch. After identifying this behavior through trajectory analysis, we updated the benchmark methodology, and several models’ scores dropped by 10–30 percentage points.
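For intuition, here is a simplified sketch of the kind of trajectory scan that can surface this behavior. The JSONL trajectory format, field names, and command patterns are assumptions for illustration, not the actual OpenHands trajectory schema:

```python
import json
import re

# Simplified sketch: scan an agent trajectory for git commands that could
# recover reference code from history rather than implementing it from
# scratch. The JSONL format and "command" field are assumptions, not the
# actual OpenHands trajectory schema.

SUSPICIOUS = re.compile(r"git\s+(log|show|checkout|diff|reflog|cat-file)\b")

def flag_trajectory(path: str) -> list[str]:
    """Return the suspicious shell commands found in one trajectory file."""
    hits = []
    with open(path) as f:
        for line in f:
            event = json.loads(line)
            command = event.get("command", "")
            if SUSPICIOUS.search(command):
                hits.append(command)
    return hits

# Example usage with a hypothetical trajectory file:
# for cmd in flag_trajectory("trajectories/commit0/model_a/task_042.jsonl"):
#     print("suspicious:", cmd)
```

A scan like this only flags candidates; a human still has to read the surrounding trajectory to decide whether the model actually shortcut the task.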
The takeaway: Transparent, reproducible benchmarks enable continuous improvement. Single-number leaderboards can obscure important details about how models actually perform.
Methodology
The OpenHands Index evaluates models across five benchmark categories:
- SWE-Bench Verified – Fixing real GitHub issues from Python repositories
- Commit0 – Building applications from specifications
- SWE-Bench Multimodal – Frontend tasks requiring visual understanding
- SWT-Bench – Generating tests to reproduce bugs
- GAIA – Information gathering and research tasks
Each model runs in a sandboxed environment with access to standard developer tools. We measure accuracy (task completion rate), cost per task, and average runtime. All evaluation code is open source at github.com/OpenHands/benchmarks, and complete results—including agent trajectories—are published at github.com/OpenHands/openhands-index-results.
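To show how the three reported metrics relate, here is a minimal aggregation sketch. The JSON result shape is an assumption for illustration; see github.com/OpenHands/benchmarks for the actual evaluation harness:

```python
import json
from statistics import mean

# Minimal sketch of aggregating accuracy, cost per task, and average runtime
# from per-task result records. The JSON shape is assumed for illustration,
# not the actual format used by the OpenHands benchmarks repository.

def summarize(results_path: str) -> dict[str, float]:
    with open(results_path) as f:
        # assumed: list of {"resolved": bool, "cost_usd": float, "runtime_s": float}
        tasks = json.load(f)
    return {
        "accuracy": mean(1.0 if t["resolved"] else 0.0 for t in tasks),
        "avg_cost_usd": mean(t["cost_usd"] for t in tasks),
        "avg_runtime_s": mean(t["runtime_s"] for t in tasks),
    }

# Example usage with a hypothetical results file:
# print(summarize("results/swe_bench_verified/model_a.json"))
```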
Explore the full results at index.openhands.dev.