Accelerating .NET MAUI Development with AI Agents

This is a guest blog from Syncfusion. Learn more about the free, open-source Syncfusion Toolkit for .NET MAUI.

As a proud partner of the .NET MAUI team, Syncfusion is excited to share how custom-built AI agents are dramatically improving the development workflow and contributor experience for the entire .NET MAUI community.

The Traditional Contributor Challenge

Contributing to .NET MAUI has historically required significant time investment for even straightforward bug fixes. Our team identified key bottlenecks in the contribution workflow:

  • Issue Reproduction – Setting up the Sandbox app and reproducing platform-specific issues (30-60 minutes)
  • Root Cause Analysis – Debugging across multiple platforms and handlers (1-3 hours)
  • Fix Implementation – Writing and testing the fix (30-120 minutes)
  • Test Creation – Developing comprehensive test coverage (1-2 hours)

For community contributors new to the repository, this could easily extend to days of effort, creating a significant barrier to entry.

Our Solution: Custom-Built AI Agents and Skills for .NET MAUI

The .NET MAUI team has developed a suite of specialized agents and skills that work together to streamline the entire contribution lifecycle. Syncfusion’s team has been leveraging these to dramatically accelerate our .NET MAUI contributions.

pr-review skill: Intelligent Issue Resolution with Built-In Quality Assurance

The pr-review skill implements a systematic 4-phase workflow that handles the complete pull request lifecycle:

Phase 1: Pre-Flight Analysis

The skill begins by conducting a comprehensive issue analysis:

  • Reads the GitHub issue and extracts reproduction steps
  • Analyzes the codebase to understand affected components
  • Identifies platform-specific considerations (Android, iOS, Windows, Mac Catalyst)

Phase 2: Gate – Test Verification

Before any fix is attempted, the skill verifies that tests exist and correctly catch the issue:

  • Checks if tests exist for the issue/PR
  • If tests are missing, notifies the user to create them first using write-tests-agent
  • Validates that existing tests actually fail without a fix (proving they catch the bug)

Note

The recommended workflow is to use write-tests-agent first to create tests, then use the pr-review skill to verify and work on the fix.
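The gate's decision logic can be sketched as a small function. This is a hypothetical simplification, not the skill's published implementation; `tests_exist` and `run_tests` stand in for the actual repository checks and test runs:

```python
def gate(tests_exist, run_tests):
    """Phase 2 gate: only proceed to a fix once failing tests prove the bug."""
    if not tests_exist():
        return "blocked: create tests first with write-tests-agent"
    if run_tests() == "pass":
        return "blocked: tests pass without a fix, so they don't catch the bug"
    return "cleared: tests fail as expected, proceed to try-fix"
```

The key design point is the middle check: a test suite that passes against the buggy codebase proves nothing, so the gate treats it the same as having no tests at all.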

Phase 3: Try-Fix – Multi-Attempt Problem Solving

This is where the skill’s intelligence shines. Invoking the try-fix skill across 4 AI models, it:

  • Proposes independent fix approaches – Up to 4 different strategies, each taking a unique angle
  • Applies and tests empirically – Runs the test suite after each fix attempt
  • Records detailed results for comparison

Example try-fix workflow:

Attempt 1: Handler-level fix in CollectionViewHandler → Tests pass on iOS, fail on Android

Attempt 2: Platform-specific fix in Items2 → Tests pass on all platforms, but causes regression

Attempt 3: Core control fix with platform guards → All tests pass, no regressions

Attempt 3 selected as optimal solution

Phase 4: Report Generation

The skill produces a comprehensive summary including:

  • Fix description and approach rationale
  • Test results (before/after comparison)
  • Alternative approaches attempted and why they weren’t selected
  • Recommendation (approve PR or request changes)

write-tests-agent: Intelligent Test Strategy Selection

The write-tests-agent acts as a test strategist that determines the optimal testing approach for each scenario.

Multi-Strategy Test Creation

The agent analyzes the issue and selects appropriate test types:

For UI Interaction Bugs:

  • Invokes write-ui-tests skill
  • Creates Appium-based tests in TestCases.HostApp and TestCases.Shared.Tests
  • Adds proper AutomationId attributes for element location
  • Implements platform-appropriate assertions

For XAML Parsing and Compilation Bugs:

  • Invokes write-xaml-tests skill
  • Creates tests in Controls.Xaml.UnitTests project
  • Tests XAML parsing, XamlC compilation, and source generation
  • Validates markup extensions and binding syntax
  • Tests across all three XAML inflators (Runtime, XamlC, SourceGen)

Future Test Types:

  • Unit Tests (API behavior, logic, calculations)
  • Device Tests (platform-specific API testing)
  • Integration Tests (end-to-end scenarios)

Test Verification: Fail → Pass Validation

A critical feature of write-tests-agent is its use of the verify-tests-fail-without-fix skill:

Mode 1: Verify Failure Only (Test Creation — no fix yet)

Use when writing tests before a fix exists:

  1. Run tests against the current codebase (which still has the bug)
  2. Verify tests FAIL (proving they correctly detect the bug)
  3. ✓ Tests confirmed to reproduce the issue

No files are reverted or modified. This is a single test run that validates your tests actually catch the bug.

Mode 2: Full Verification (Fix + Test Validation)

Use when a PR contains both a fix and tests:

  1. Revert fix files to pre-fix state (test files remain unchanged throughout)
  2. Run tests → Should FAIL (bug is present without fix)
  3. Restore fix files
  4. Run tests → Should PASS (fix resolves bug)
  5. ✓ Both tests and fix verified

This verification step ensures test quality — we avoid the common problem of tests that pass regardless of whether the bug is fixed.
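The two modes reduce to a short verification routine. The sketch below is a hypothetical simplification of the verify-tests-fail-without-fix skill (its internals aren't published); `revert_fix`, `restore_fix`, and `run_tests` stand in for the actual file operations and test runner:

```python
def verify_mode1(run_tests):
    """Mode 1: a single run against the buggy codebase; tests must FAIL."""
    return run_tests() == "fail"

def verify_mode2(revert_fix, restore_fix, run_tests):
    """Mode 2: tests FAIL without the fix, then PASS once it is restored."""
    revert_fix()                                 # remove fix files; test files untouched
    failed_without_fix = run_tests() == "fail"
    restore_fix()                                # bring the fix back
    passed_with_fix = run_tests() == "pass"
    return failed_without_fix and passed_with_fix
```

Mode 2 only succeeds when both runs behave as expected, which is exactly the fail-then-pass contract that rules out tests passing regardless of the fix.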

Comprehensive Coverage Through Multiple Test Types

When appropriate, write-tests-agent creates layered test coverage:

Example: CollectionView Scrolling Bug

  • UI Test – Appium test that scrolls and verifies visual positioning
  • XAML Test – Validates that the ItemTemplate XAML compiles correctly across all inflators (Runtime, XamlC, SourceGen)

This dual-layer approach provides both behavioral validation (does the scrolling work?) and structural validation (does the XAML compile correctly?).

Note

As more test type skills are added (device tests, unit tests), the agent will be able to provide even more comprehensive coverage across different levels of the stack.

sandbox-agent: Manual Testing and Validation

The sandbox-agent complements automated testing with manual validation capabilities:

  • Creates test scenarios in the Controls.Sample.Sandbox app
  • Builds and deploys to iOS simulators, Android emulators, or Mac Catalyst
  • Generates Appium test scripts for automated interaction

When to use sandbox-agent:

  • Functional validation of PR fixes before merge
  • Reproducing complex user-reported issues
  • Visual verification of layout and rendering bugs
  • Testing scenarios that are difficult to automate

learn-from-pr agent: Continuous Improvement

The learn-from-pr agent analyzes completed PRs to extract lessons learned and applies improvements to instruction files, skills, and documentation — creating a feedback loop that makes the entire system smarter over time.

How to Use These Tools: Prompt Examples

Using the pr-review skill to Fix an Issue

When you want to create a fix for a GitHub issue, use the pr-review skill to guide you through the entire workflow.

Tip

These prompts are typed directly in the GitHub Copilot CLI terminal while inside the cloned .NET MAUI repository. The skill reads your local repository files, runs builds and tests on your machine, and interacts with GitHub APIs.

Basic fix invocation:

Fix issue #67890

With additional context for complex scenarios:

Fix issue #67890. The issue appears related to async lifecycle events
during CollectionView item recycling. Previous attempts may have failed
because they didn't account for view recycling on Android.

For alternative fix exploration:

Fix issue #67890. Try a handler-level approach first. If that doesn't work,
consider modifying the core control with platform guards.

The skill will:

  • Analyze the issue and codebase (Pre-Flight)
  • Check if tests exist; if not, notify you to create them with write-tests-agent (Gate)
  • Verify tests fail without fix and pass with fix (Validation)
  • Try up to 4 different fix approaches across 4 AI models (Try-Fix)

Using the pr-review skill to Review a Pull Request

When reviewing an existing PR (yours or someone else’s), use the pr-review skill to validate the fix and ensure quality:

Basic PR review:

Review PR #12345

With focus areas:

Review PR #12345. Focus on thread safety in the async handlers
and ensure Android platform-specific code follows conventions.

For test coverage validation:

Review PR #12345. Verify that the tests actually reproduce the bug
and cover all affected platforms (iOS and Mac Catalyst).

The skill will:

  • Analyze the PR changes and linked issue
  • Check if tests exist; if not, notify you to create them with write-tests-agent (Gate)
  • Verify tests fail without the fix and pass with it (Validation)
  • Provide a detailed review report

Important

Fix issue #XXXXX creates a new fix from scratch. Review PR #XXXXX validates and improves an existing PR. The skill adapts its workflow based on whether you’re creating or reviewing.

Writing Tests with write-tests-agent

Simple invocation:

Write tests for issue #12345

Specifying test type:

Write UI tests for issue #12345 that reproduce the button click behavior.

The agent analyzes the issue, selects appropriate test types, and creates comprehensive coverage. If you provide hints about reproduction steps or failure conditions, it incorporates them into the test strategy.

Testing with sandbox-agent

Basic testing:

Test PR #12345 in Sandbox

Platform-specific testing:

Test PR #12345 on iOS 18.5. Focus on the layout changes in SafeArea handling.

Reproducing user-reported issues:

Reproduce issue #12345 in Sandbox on Android. The user reported it happens
when rotating the device while a dialog is open.

Multi-Model Architecture for Quality Assurance

The pr-review skill leverages 4 AI models sequentially in Phase 3 (Try-Fix) to provide comprehensive solution exploration:

Order | Model | Purpose
1 | Claude Opus 4.6 | First fix attempt – deep analysis and reasoning
2 | Claude Sonnet 4.6 | Second attempt – balanced speed and quality
3 | GPT-5.3-Codex | Third attempt – code-specialized model
4 | Gemini 3 Pro Preview | Fourth attempt – different model family perspective

Why sequential, not parallel?

  • Only one Appium session can control a device or emulator at a time; parallel runs would interfere with each other’s test execution
  • All try-fix runs modify the same source files — simultaneous changes would overwrite each other’s code and corrupt the working tree
  • Each model runs in a completely separate context with zero visibility into what other models are doing, ensuring every fix attempt is genuinely independent and uninfluenced
  • Before each new model starts, a mandatory cleanup restores the working tree to a clean state — reverting any files the previous attempt modified, ensuring every model begins from an identical baseline
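Under those constraints, the sequential driver amounts to a simple loop. This is a hypothetical sketch, not the skill's actual code; `clean_worktree` and `attempt_fix` stand in for the real cleanup and per-model attempt:

```python
def run_sequential(models, clean_worktree, attempt_fix):
    """Run each model's attempt in isolation, resetting state between runs."""
    results = []
    for model in models:
        clean_worktree()                    # mandatory cleanup: identical baseline
        results.append(attempt_fix(model))  # fresh context, no cross-visibility
    return results
```

Because the cleanup runs before every attempt rather than after, each model is guaranteed to start from the same baseline even if a previous attempt crashed partway through.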

Cross-Pollination Rounds:

The 4 models don’t just run once — they participate in multiple rounds of cross-pollination:

  1. Round 1: Each model independently proposes and tests one fix approach (4 attempts total)
  2. Round 2: Each model reviews all Round 1 results and decides:
    • “NO NEW IDEAS” — Confirms exploration is exhausted for this model
    • “NEW IDEA: [description]” — Proposes a new approach that hasn’t been tried
  3. Round 3 (if needed): Repeat until all 4 models confirm “NO NEW IDEAS” (max 3 rounds)

This ensures comprehensive exploration — models see what failed, why it failed, and what succeeded, allowing them to propose fundamentally different approaches.
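The round structure described above can be sketched as follows. Again, this is a hypothetical simplification; `propose` stands in for a model reviewing the shared history and either returning a new idea or `None` for “NO NEW IDEAS”:

```python
def cross_pollinate(models, propose, max_rounds=3):
    """Run proposal rounds until every model answers NO NEW IDEAS."""
    history = []                              # all (model, idea) pairs so far
    for _ in range(max_rounds):
        round_ideas = []
        for model in models:
            idea = propose(model, history)    # None means "NO NEW IDEAS"
            if idea is not None:
                round_ideas.append((model, idea))
        history.extend(round_ideas)
        if not round_ideas:                   # exploration exhausted this round
            break
    return history
```

Passing the full `history` into each proposal call is what lets later rounds build on earlier failures instead of re-proposing approaches that already lost.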

This multi-model approach ensures:

  • Diverse solution exploration — Each model brings different problem-solving patterns
  • Comprehensive fix coverage — 4 independent attempts with different AI architectures
  • Learning from failures — Later models see why earlier attempts failed
  • Reduced hallucination — Multiple models must independently solve the problem
  • Best fix selection — Data-driven comparison across 4 different approaches

The try-fix skill benefits most from this architecture — each model proposes an independent fix, tests it empirically, and records detailed results for comparison.

Measurable Impact on Team Productivity

Since implementing these agents, we’ve observed significant improvements across our team:

Task | Before (Manual) | After (Agents) | Time Saved
Issue reproduction | 30-60 min | 5-10 min | ~50 min
Root cause analysis | 1-3 hours | 20-40 min | ~1.5 hours
Implementing fix | 30-120 min | Automated | ~1 hour
Writing tests | 1-2 hours | 10-20 min | ~1.5 hours
Exploring alternatives | ❌ Not feasible | ✅ Built-in | Priceless
Total per issue | 4-8 hours | 45 min – 2.5 hours | ~4-5 hours

That’s a 50-70% time reduction per issue. Our team can now address 2-3x more issues per week while maintaining higher quality standards.

Quality Improvements

Beyond time savings, we’ve seen measurable quality improvements:

  • Test Coverage: 95%+ of PRs now include comprehensive test coverage (up from ~60%)
  • First-Time Fix Rate: 80% of fixes work correctly on first attempt (up from ~50%)
  • Code Review Cycles: Reduced back-and-forth during review

The Skills Ecosystem: Composable Capabilities

These agents are built on a foundation of reusable skills — modular capabilities that can be composed together for different workflows.

Core Skills

try-fix

  • Proposes ONE independent fix approach per invocation
  • Applies fix, runs tests, captures results
  • Records failure analysis for learning
  • Iterated up to 3 times per model if errors occur
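The retry behavior in that last bullet can be sketched as a loop. This is a hypothetical simplification of try-fix; `propose`, `apply_patch`, and `run_tests` stand in for the real model call, file edits, and test run:

```python
def try_fix(propose, apply_patch, run_tests, max_retries=3):
    """One independent fix attempt; retry on build/test errors up to 3 times."""
    for attempt in range(1, max_retries + 1):
        apply_patch(propose())
        outcome = run_tests()                # "pass", "fail", or "error"
        if outcome != "error":               # pass and fail both end the loop
            return {"attempt": attempt, "outcome": outcome}
    return {"attempt": max_retries, "outcome": "error"}
```

Note that only infrastructure errors trigger a retry; a clean test failure is a valid, recorded result for later comparison, not something to retry away.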

write-ui-tests

  • Creates test pages in TestCases.HostApp/Issues/
  • Generates Appium tests in TestCases.Shared.Tests/Tests/Issues/
  • Adds AutomationIds for element location
  • Implements platform-appropriate assertions

write-xaml-tests

  • Creates XAML test files in Controls.Xaml.UnitTests/Issues/
  • Tests across Runtime, XamlC, and SourceGen inflators
  • Validates XAML parsing, compilation, and code generation
  • Handles special file extensions (.rt.xaml, .rtsg.xaml) for invalid code generation cases

verify-tests-fail-without-fix

  • Mode 1 (Failure Only): Run tests once to verify they FAIL, proving they catch the bug
  • Mode 2 (Full Verification): Revert fix files → tests FAIL → restore fix → tests PASS
  • Test files are never reverted — only fix files are manipulated
  • Ensures test quality by proving tests detect the bug

Supporting Skills

azdo-build-investigator

  • Queries Azure DevOps for PR build information
  • Retrieves failed job details
  • Downloads Helix test logs for investigation
  • Identifies build failures and test failures

run-device-tests

  • Executes device tests locally on iOS/Android/Mac Catalyst
  • Supports test filtering by category
  • Manages device/simulator lifecycle
  • Captures test results and logs

pr-finalize

  • Verifies PR title and description match implementation
  • Performs final code review for best practices
  • Used before merging to ensure quality and documentation

Why Skills Matter

Skills provide:

  • Reusability — Same skill used across multiple agents and workflows
  • Testability — Each skill can be tested and improved independently
  • Composability — Agents combine skills to create complex workflows

Impact on Open Source Community

These agents aren’t just improving our internal team productivity — they’re transforming the contributor experience for the entire .NET MAUI community.

Lowering the Barrier to Entry

Before: New contributors faced a steep learning curve:

  • Understanding multi-platform handler architecture
  • Knowing which test type is appropriate
  • Following undocumented platform-specific conventions
  • Navigating complex build and test infrastructure

Now: Agents automatically:

  • Generate platform-appropriate code patterns
  • Select and create correct test types
  • Follow repository conventions automatically
  • Handle build and test infrastructure complexity

Improving Contribution Quality

Every PR now benefits from:

  • ✅ Comprehensive test coverage — Multiple test types covering different scenarios
  • ✅ Alternative fix exploration — Data-driven comparison of approaches
  • ✅ Automated code review — Catches common issues before human review

Accelerating the Contribution Cycle

Maintainer perspective:

  • Fewer back-and-forth review cycles
  • Less time requesting test coverage
  • Reduced need to explain platform-specific conventions
  • Higher confidence in community PRs

Contributor perspective:

  • Faster feedback through automated validation
  • Clear guidance when fixes don’t work
  • Learning repository best practices through agent interactions
  • Greater confidence in submitting PRs

Getting Started as a Contributor

We encourage the community to leverage these agents when contributing to .NET MAUI.

Step 1: Set Up GitHub Copilot CLI

Install GitHub Copilot CLI and authenticate it. See the GitHub Copilot CLI Documentation for setup instructions.

Step 2: Find an Issue to Work On

Browse the dotnet/maui issue tracker on GitHub for contribution opportunities.

Step 3: Use the Agents and Skills

For issue fixes:

First, write tests:

Write tests for issue #12345

Then, implement the fix:

Fix issue #12345

For PR review and improvement:

Review PR #12345

The workflow:

  1. write-tests-agent creates tests (UI, XAML) and verifies they catch the bug
  2. pr-review skill verifies tests exist, explores fix alternatives, compares approaches
  3. Human reviews and refines the output

Note

If you run “Fix issue #12345” without tests, the pr-review skill will notify you to create them first using write-tests-agent.

Step 4: Review and Refine

The agents produce high-quality output, but human review is essential:

  • Verify the fix addresses the root cause
  • Check that tests cover edge cases
  • Ensure code follows .NET MAUI conventions
  • Add additional context where needed

Step 5: Submit Your PR

With agent assistance, your PR will typically include:

  • Working fix with clear rationale
  • Comprehensive test coverage
  • Proper commit messages and PR description
  • Validation that tests prove the fix works

This significantly increases merge rates and reduces review cycles.

Hypothetical Example: From Issue to Merged PR

Let’s walk through a typical contribution workflow with agents:

Issue #12345: CollectionView items disappear on iOS when scrolling rapidly

Traditional Workflow (4-6 hours):

  1. Set up Sandbox app with CollectionView (30 min)
  2. Try to reproduce on iOS simulator (45 min)
  3. Debug handler code to find root cause (2 hours)
  4. Implement fix in Items2/iOS/ (1 hour)

Agent-Assisted Workflow:

Step 1: Create tests first

Write tests for issue #12345

Step 2: Fix the issue

Fix issue #12345

The pr-review skill executes:

  • Pre-Flight (5-10 min) — Reads issue, identifies iOS-specific CollectionView scrolling bug, analyzes Items2/iOS/ handler code, identifies potential causes: view recycling, async loading
  • Gate — Verifies tests from Step 1 catch the bug
  • Try-Fix (20-40 min) — Tries up to 4 fix approaches across 4 models, tests each empirically
  • Report — Compares all approaches, selects optimal solution

Result: High-quality PR ready in under an hour instead of half a day.

Conclusion

The introduction of custom-built AI agents has fundamentally changed how our team approaches .NET MAUI development. By automating the mechanical aspects of issue resolution — reproduction, testing, fix exploration — we can focus on what matters most: understanding the problem, reviewing solutions, and ensuring quality.

Key Takeaways

  • 50-70% reduction in time per issue
  • 2-3x increase in issues addressed per week
  • 95%+ test coverage on new PRs (up from ~60%)
  • Lower barrier to community contribution
  • Higher quality through multi-model fix exploration

We invite the .NET community to experience this new workflow. Your contributions make .NET MAUI better for millions of developers worldwide, and our agents are here to make that contribution process as smooth as possible.

Resources


We’d love to hear about your experience using these agents. Share your success stories, challenges, and suggestions in the dotnet/maui discussions or on social media with #dotnetMAUI.

Happy coding! 🚀

The post Accelerating .NET MAUI Development with AI Agents appeared first on .NET Blog.
