Google’s Next Coding Agent Could Change How Developers Think About Their Work

Jules was just the beginning. Google’s internally referenced “Jitro” project signals a bigger shift — from task execution to outcome-driven development.

Most AI coding agents work the same way. A developer spots a problem, writes a prompt, and watches the agent execute. It’s fast. It’s useful. But it still puts the developer in the driver’s seat for every single decision.

Google appears to be rethinking that model entirely.

The company is reportedly building the next generation of Jules, its autonomous coding agent, under an internal project name: Jitro. While the current Jules experiment has seen little visible progress in recent months, evidence points to a parallel effort focused on a completely new version that moves beyond the prompt-and-execute model that defines most coding agents today.

If the early signals are accurate, this isn’t just a feature update. It’s a different way of thinking about what a coding agent actually does.

Where Jules Stands Today

To understand where Jitro is headed, it helps to know what Jules already does.

Jules is an agentic coding assistant that integrates directly with existing repositories. It operates asynchronously, allowing developers to focus on other tasks while it works in the background. Upon completion, it presents its plan, its reasoning, and a diff of the changes made.

Running in its own virtual machine, Jules stands apart from popular AI coding tools like Cursor, Windsurf, and Lovable, all of which operate synchronously and require users to wait for output after each prompt.

That asynchronous model has been a genuine differentiator. During the beta, thousands of developers tackled tens of thousands of tasks, resulting in over 140,000 code improvements shared publicly.

Jules is now out of beta and available across free and paid tiers, integrated into Google AI Pro and Ultra subscriptions. It’s a solid tool. But Jitro, if it ships as described, would be something meaningfully different.

From Prompts to Goals

Rather than asking developers to manually instruct an agent on what to build or fix, Jitro appears designed around high-level goal-setting: KPI-driven development, where the agent autonomously identifies what needs to change in a codebase to move a metric in the right direction.

That’s a significant shift. Instead of telling the agent what to do, a developer would define the desired outcome — better test coverage, lower error rates, improved accessibility compliance — and the agent figures out the path to get there.
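To make the contrast concrete, here is a minimal sketch of the difference between a prompt-driven task and an outcome-driven goal. Everything in it is hypothetical: Google has not published any Jitro API, so the `Goal` class and its fields are illustrative stand-ins, not real interfaces.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: these names are illustrative, not Jitro's
# actual (unpublished) API.

@dataclass
class Goal:
    metric: str                 # e.g. "line_coverage" or "error_rate"
    target: float               # desired value, e.g. 0.85
    direction: str = "up"       # whether higher or lower is better
    constraints: list[str] = field(default_factory=list)

    def satisfied_by(self, measured: float) -> bool:
        """Check whether a measured metric value meets the goal."""
        if self.direction == "up":
            return measured >= self.target
        return measured <= self.target

# Prompt-driven: the developer names the specific task.
task = "Add unit tests for the payments module"

# Goal-driven: the developer names the outcome; the agent chooses the tasks.
coverage_goal = Goal(
    metric="line_coverage",
    target=0.85,
    constraints=["no changes to public APIs"],
)

print(coverage_goal.satisfied_by(0.78))  # False: more work for the agent
print(coverage_goal.satisfied_by(0.91))  # True: goal met
```

The point of the sketch is the inversion of responsibility: in the goal-driven model, the work items are an output of the agent, not an input from the developer.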

A dedicated workspace for the agent suggests Google envisions Jitro as a persistent collaborator rather than a one-shot tool. Early signals point to a workspace where developers can list goals, track insights, and configure tool integrations — a layer of continuity that current coding agents don’t offer.

This would mark a departure from the task-level paradigm used by competitors like GitHub Copilot, Cursor, and even OpenAI’s Codex agent, all of which still rely on developers to define specific work items.

According to Mitch Ashley, VP and practice lead for software lifecycle engineering at The Futurum Group, “Goal-driven agent execution changes the observability requirement for development teams. When agents pursue outcomes autonomously across production codebases, understanding what the agent was optimizing for, the reasoning it applied, and the constraints it evaluated becomes the foundation for trust.”

“Intent is the first step in understanding the agent execution process. Engineering teams that cannot instrument goal-level agent behavior will find adoption bounded by risk tolerance. Organizations that observe the complete decision cycle, from objective through reasoning to outcome, will be the ones that expand agent authority into consequential work.”

The Trust Problem

Outcome-driven development sounds compelling, but it raises a real question: how much do engineering teams trust an agent to pursue goals autonomously across a production codebase?

The risk, of course, is that an agent pursuing goals on its own can introduce unpredictable changes, which makes trust the key barrier to adoption.

This is where Google’s existing Jules framework could actually help. Jules already surfaces its plan and reasoning before making changes, and developers can steer the work mid-execution. If Jitro inherits that transparency layer and extends it into goal-level visibility, it could give teams more confidence in what the agent is doing and why.

The early indication is that Jitro won’t be a fire-and-forget system. It looks more like a structured workflow — set a goal, review the agent’s approach, approve the direction — which puts enough guardrails around autonomy to make it practical for real teams.
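The set-a-goal, review-the-approach, approve-the-direction loop described above can be sketched as a simple human approval gate. This is purely an illustration of the workflow pattern, assuming nothing about Jitro's real design; the class and function names are invented for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch of a "set a goal, review the approach, approve" loop.
# None of these names come from Google; they illustrate the pattern only.

@dataclass
class ProposedPlan:
    goal: str
    steps: list[str]
    approved: bool = False

def review(plan: ProposedPlan, approve: bool) -> ProposedPlan:
    """The human gate: a developer accepts or rejects the agent's approach."""
    plan.approved = approve
    return plan

def execute(plan: ProposedPlan) -> str:
    """The agent may only act on a plan a human has approved."""
    if not plan.approved:
        raise PermissionError("plan not approved; agent may not proceed")
    return f"executing {len(plan.steps)} steps toward: {plan.goal}"

plan = ProposedPlan(
    goal="raise line coverage to 85%",
    steps=["identify untested modules", "generate tests", "open a draft PR"],
)
print(execute(review(plan, approve=True)))
```

The guardrail here is structural rather than advisory: an unapproved plan cannot be executed at all, which is the property that would make goal-level autonomy palatable to cautious teams.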

Timing and Context

The timing is notable. Google I/O 2026 kicks off May 19, and this is exactly the kind of showcase-ready feature Google would want to unveil alongside its broader Gemini ecosystem updates.

Google has been steadily expanding its AI developer tooling through Gemini integrations in Android Studio, Firebase, and Cloud, and a goal-oriented coding agent would fit neatly into that strategy — particularly for enterprise teams that care more about outcomes than individual pull requests.

The launch is expected to roll out under a waitlist, suggesting Google is taking a measured approach rather than a broad release. That tracks. Goal-driven agents operating across large codebases need careful onboarding, not a splash-and-scale approach.

What It Means for Engineering Teams

If Jitro ships as described, the developers most likely to benefit are those managing large, mature codebases — teams where incremental, compounding improvements in areas like performance, test coverage, and security posture matter more than one-off feature builds.

For smaller teams or individual developers already comfortable with Jules, Jitro may represent more capability than they immediately need. But for enterprise engineering organizations trying to measure software quality at scale, a goal-setting agent with a persistent workspace could change how they approach planning and execution.

The current generation of coding agents has made developers faster. The next one might make them think differently about what they’re optimizing for in the first place.
