Here’s a scenario every developer knows too well: your AI coding assistant writes a beautiful chunk of code, the compiler gives you a green light, and you feel like a productivity superhero — until you actually run the app and realize the “Add to Cart” button has floated off the edge of the screen on every Android device smaller than a tablet. The AI that wrote the code? It had no idea. It never actually looked at what it built.
That gap between “it compiles” and “it actually works” has been one of the most frustrating blind spots in AI-assisted development. But a new pairing between Google’s Antigravity IDE and the Uno Platform App MCP is closing that gap in a genuinely interesting way. For the first time, your AI agent can launch your app, poke around the live UI, take screenshots, and tell you whether that button is actually where it’s supposed to be — all without you lifting a finger.
Let’s dig into what this means and why it matters.
Wait, What Is Antigravity Again?
If you haven’t been keeping up with Google’s developer tooling moves, Antigravity might sound like a physics experiment gone rogue. It’s actually Google’s agent-first development platform, built on top of VS Code, that goes well beyond the typical “autocomplete on steroids” approach of most AI coding assistants.
The core idea is straightforward: instead of an AI that only helps you write code, Antigravity gives you agents that can plan, execute, and verify tasks across your editor, terminal, and even a browser. Think of it as a “Mission Control” for AI agents — you can dispatch multiple agents to work on different tasks simultaneously, and each one can autonomously work through multi-step problems.
Antigravity ships with Gemini 3 Pro and also supports Claude Sonnet 4.5 and OpenAI’s GPT-OSS. It’s currently available in public preview at no cost for individuals. But the real magic isn’t in the model selection — it’s in how the platform lets agents interact with the actual running software, not just the source code.
And What Exactly Is Uno Platform App MCP?
The Uno Platform is already well-known in the .NET world as a way to write a single C#/XAML codebase that runs on Windows, Android, iOS, macOS, WebAssembly, and Linux. With their Studio 2.0 release, the team introduced something called App MCP — a local runtime service that gives AI agents direct access to your live, running application.
Here’s what the App MCP can actually do:
- Take screenshots of your running app at any point
- Dump the visual tree — that’s the hierarchical structure of every UI element on screen — as a machine-readable snapshot
- Simulate pointer clicks at specific coordinates
- Type text and press keys just like a real user would
- Invoke automation peer actions on UI elements
- Read the DataContext of any element to see what data is actually bound to your controls
In plain English: it gives the AI agent eyes, hands, and the ability to read the app’s internal state. The agent can see what the app looks like, interact with it, and understand what’s happening under the hood — all while the app is running on any supported platform.
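Under the hood, these tools are exposed over the Model Context Protocol, which speaks JSON-RPC 2.0. As a rough sketch of the wire shape — the tool names are the ones documented above, but the exact argument schemas here are assumptions — an agent's request to simulate a click might look like this:

```python
import json

def mcp_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize a JSON-RPC 2.0 'tools/call' request, the envelope MCP uses."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# A simulated click at specific coordinates, as the agent would issue it
# (coordinate argument names are illustrative, not the documented schema):
request = mcp_tool_call("uno_app_pointer_click", {"x": 240, "y": 96})
print(request)
```

In practice you never write these requests by hand; the agent host builds and dispatches them. The sketch is only to demystify what "the agent clicks a button" means at the protocol level.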
Why “It Compiles” Was Never Good Enough
Let’s be honest about the state of AI coding assistants in 2025. They’re remarkably good at generating syntactically correct code. They can write entire CRUD controllers, suggest complex LINQ queries, and scaffold a new page with proper bindings. But here’s the thing: UI is fundamentally a runtime problem.
A button can exist perfectly in your XAML, with all the right bindings and event handlers wired up, and still be completely invisible to the user because a margin value pushed it offscreen on a certain screen size. A dialog can have the correct layout on Windows but overlap with the navigation bar on iOS. A dark mode toggle can compile without errors but produce unreadable white-on-white text because a style wasn’t applied correctly at runtime.
None of these issues show up at compile time. Traditional AI assistants, which work purely at the code level, are structurally incapable of catching them. They’re essentially writing code while blindfolded — they can tell you the syntax is correct, but they can’t tell you whether the result looks right.
This is why most teams still maintain separate QA processes, manual testers, and UI test suites written in frameworks like Selenium or Appium. The irony? You’re using an AI assistant to reduce the amount of code you need to write, and then writing even more code to test what the AI wrote.
How Antigravity + App MCP Actually Work Together
When you pair Antigravity with the Uno Platform App MCP, the workflow looks something like this:
Step 1: The agent gets a task. You might say something like “Make sure the Save button stays enabled after a network error” or “Add a settings page with three toggle switches and verify they’re bound correctly.”
Step 2: The agent writes the code. This is the part AI assistants already do well — generating the XAML and C# needed for the feature.
Step 3: The app builds and launches. Antigravity can trigger the build and launch the app under the App MCP harness, targeting whatever platform you need — Android emulator, WebAssembly in a browser, or a Windows desktop.
Step 4: The agent actually looks at the app. Using uno_app_get_screenshot, the agent captures what the user would actually see. Using uno_app_visualtree_snapshot, it gets a detailed breakdown of every UI element, including each one's position, size, and state.
Step 5: The agent interacts with the app. It can click buttons with uno_app_pointer_click, type text with uno_app_type_text, and trigger automation actions. It’s essentially running through the same steps a human tester would.
Step 6: Everything gets recorded. Antigravity’s artifact system stores screenshots, visual tree dumps, logs, and step-by-step timelines. Anyone on the team can go back and review exactly what the agent did and what it found.
The result is a closed feedback loop: the AI writes code, runs it, sees the result, and can determine whether the result matches expectations — all without human intervention. When something doesn’t look right, the agent has the actual evidence (screenshots, visual tree state, DataContext values) to diagnose the problem rather than guessing.
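That closed loop is, at its core, a retry-with-evidence pattern. Here is a minimal, self-contained sketch of the control flow; the `run_check` callable stands in for a real build-launch-screenshot-inspect cycle, which is far too heavy to reproduce here:

```python
def verify_until_pass(run_check, max_attempts=3):
    """Closed feedback loop: run a verification step, keep the evidence, retry.

    `run_check` is a stand-in for a real build -> launch -> screenshot ->
    inspect cycle; here it is just a callable so the sketch stays runnable.
    """
    evidence = []
    for attempt in range(1, max_attempts + 1):
        result = run_check(attempt)
        evidence.append(result)  # every attempt is recorded, pass or fail
        if result["passed"]:
            return {"passed": True, "attempts": attempt, "evidence": evidence}
    return {"passed": False, "attempts": max_attempts, "evidence": evidence}

# Stubbed check that only passes on the second attempt, as if the agent
# fixed the layout after seeing the first screenshot:
outcome = verify_until_pass(lambda n: {"passed": n >= 2, "screenshot": f"shot_{n}.png"})
print(outcome["passed"], outcome["attempts"])  # prints: True 2
```

The important property is that failed attempts are kept, not discarded: the artifact trail is what lets a human audit the agent's reasoning afterward.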
Real Scenarios Where This Changes the Game
Catching DPI-Specific Layout Bugs
That “Add to Cart” button that disappears on low-DPI Android devices? You can now tell the agent: “Run the app on a 320 DPI emulator, take a screenshot of the home screen, and verify the Add to Cart button is visible in the visual tree.” The agent spins up the emulator, captures the evidence, and either confirms the button is there or shows you exactly where things went wrong — with a screenshot attached.
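The check itself is mechanical once you have the visual tree. As a sketch, assuming a simplified snapshot shape (hypothetical: the real App MCP schema will differ), "is the button actually on screen" reduces to a bounds comparison:

```python
# Hypothetical, simplified snapshot shape -- not the App MCP's actual schema.
snapshot = {
    "name": "HomePage",
    "bounds": {"x": 0, "y": 0, "width": 320, "height": 568},
    "children": [
        {"name": "AddToCartButton",
         "bounds": {"x": 340, "y": 480, "width": 120, "height": 40},
         "children": []},
    ],
}

def find_node(node, name):
    """Depth-first search for a named element in the visual tree."""
    if node["name"] == name:
        return node
    for child in node["children"]:
        found = find_node(child, name)
        if found is not None:
            return found
    return None

def is_on_screen(node, screen_width, screen_height):
    """True only if the element's bounds lie fully inside the viewport."""
    b = node["bounds"]
    return (b["x"] >= 0 and b["y"] >= 0
            and b["x"] + b["width"] <= screen_width
            and b["y"] + b["height"] <= screen_height)

button = find_node(snapshot, "AddToCartButton")
print(is_on_screen(button, 320, 568))  # prints: False (overflows the right edge)
```

The agent pairs a check like this with the screenshot, so the report carries both the machine verdict and the human-readable evidence.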
Debugging Silent Binding Failures
XAML binding failures are famously quiet. A button renders on screen but nothing happens when you tap it, because the Command binding path doesn’t match the actual property name on the ViewModel. With App MCP, the agent can call uno_app_get_element_datacontext on the problematic button, see that the Command property is null, compare it against the DataContext’s actual properties, and identify the mismatch. No more staring at output windows hoping for a clue.
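The diagnosis step is essentially a set comparison plus a fuzzy match. A minimal sketch, assuming trimmed-down shapes for what the element and its DataContext report (the real App MCP payloads will differ):

```python
import difflib

# Hypothetical shapes: a trimmed-down view of what the element and its
# DataContext might report back to the agent.
element = {"name": "SaveButton", "bindings": {"Command": "SaveComand"}}  # typo'd path
datacontext_properties = ["SaveCommand", "CancelCommand", "Title"]

def diagnose_bindings(element, properties):
    """Flag binding paths that don't exist on the DataContext and suggest a fix."""
    issues = []
    for target, path in element["bindings"].items():
        if path not in properties:
            close = difflib.get_close_matches(path, properties, n=1)
            issues.append({"target": target, "broken_path": path,
                           "did_you_mean": close[0] if close else None})
    return issues

print(diagnose_bindings(element, datacontext_properties))
# [{'target': 'Command', 'broken_path': 'SaveComand', 'did_you_mean': 'SaveCommand'}]
```

Instead of "the button does nothing", the agent can report "the Command binding points at SaveComand, but the ViewModel exposes SaveCommand" — a fix, not just a symptom.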
Verifying Accessibility Compliance
You can ask the agent to toggle large text settings, run the app, and inspect the visual tree for proper AutomationProperties.Name attributes on every interactive element. The resulting screenshots and tree dumps become an accessibility audit artifact you can hand directly to your compliance reviewer.
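The audit the agent performs is a straightforward tree walk. As an illustrative sketch over a hypothetical snapshot shape (the set of "interactive" types and the node layout are assumptions):

```python
# Hypothetical tree shape; the real snapshot schema will be richer.
tree = {
    "type": "StackPanel", "children": [
        {"type": "Button", "AutomationProperties.Name": "Save", "children": []},
        {"type": "Button", "children": []},     # interactive, but unnamed
        {"type": "TextBlock", "children": []},  # not interactive, so exempt
    ],
}

INTERACTIVE_TYPES = {"Button", "ToggleSwitch", "CheckBox", "TextBox"}

def audit_accessibility(node, path="root"):
    """Collect paths of interactive elements missing an automation name."""
    issues = []
    if node["type"] in INTERACTIVE_TYPES and not node.get("AutomationProperties.Name"):
        issues.append(path)
    for i, child in enumerate(node.get("children", [])):
        issues += audit_accessibility(child, f"{path}/{child['type']}[{i}]")
    return issues

print(audit_accessibility(tree))  # prints: ['root/Button[1]']
```

Because the walk emits paths rather than a pass/fail bit, the output doubles as a work list: each entry tells the agent (or a human) exactly which element to fix.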
Cross-Platform Consistency Checks
Since Uno Platform targets multiple platforms from one codebase, you can ask the agent to run the same interaction on Windows, Android, and WebAssembly, then compare the visual trees. Any platform-specific discrepancy — a missing margin on iOS, a font rendering difference on WebAssembly — surfaces immediately with visual evidence.
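Comparing two platforms' visual trees amounts to flattening each tree into comparable paths and diffing the measurements. A sketch under the same hypothetical-schema caveat as before:

```python
# Hypothetical per-platform snapshots, trimmed to element type + measured size.
windows_tree = {"type": "Grid", "width": 400, "height": 50, "children": [
    {"type": "Button", "width": 120, "height": 40, "children": []}]}
wasm_tree = {"type": "Grid", "width": 400, "height": 50, "children": [
    {"type": "Button", "width": 120, "height": 44, "children": []}]}  # taller on WASM

def flatten(node, prefix=""):
    """Flatten a visual tree into {path: (width, height)} for easy comparison."""
    path = f"{prefix}/{node['type']}"
    entries = {path: (node["width"], node["height"])}
    for child in node.get("children", []):
        entries.update(flatten(child, path))
    return entries

a, b = flatten(windows_tree), flatten(wasm_tree)
diffs = {p: {"windows": a[p], "wasm": b[p]}
         for p in a.keys() & b.keys() if a[p] != b[p]}
print(diffs)  # only the Button's measured height differs between platforms
```

A real comparison would also tolerate expected per-platform differences (font metrics, safe areas), so in practice you would diff against a threshold rather than exact equality.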
Automated Bug Reproduction
A tester files a bug: “The app crashes after I tap Refresh twice.” You hand that description to the agent. It launches the app, simulates two taps on the Refresh button, captures the crash log and a screenshot of the UI just before the crash. Now you have a fully reproducible, machine-generated bug report complete with stack trace and visual context.
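Conceptually, the agent turns the prose bug report into an ordered list of tool invocations and then renders the executed steps back out as a repro script. A sketch (tool names are the App MCP's; the coordinates and report format are made up):

```python
# Hypothetical repro plan the agent might derive from the bug description.
repro_steps = [
    ("uno_app_get_screenshot", {}),                   # baseline before any input
    ("uno_app_pointer_click", {"x": 200, "y": 640}),  # first tap on Refresh
    ("uno_app_pointer_click", {"x": 200, "y": 640}),  # second tap (reported crash point)
    ("uno_app_get_screenshot", {}),                   # UI state captured as evidence
]

def to_bug_report(steps):
    """Render the executed steps as a numbered, human-readable repro section."""
    return "\n".join(
        f"{i}. {name} {args if args else ''}".strip()
        for i, (name, args) in enumerate(steps, 1)
    )

print(to_bug_report(repro_steps))
```

Because the steps are data rather than prose, the same list can be replayed verbatim against a candidate fix to confirm the crash no longer reproduces.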
How This Compares to Traditional UI Testing
If you’re already using Selenium, Appium, or Cypress, you might be wondering what this adds. Here’s a practical comparison:
| Feature | Antigravity + Uno App MCP | Traditional UI Test Suites (Selenium/Appium/Cypress) |
|---|---|---|
| How tests are created | Agent generates actions from natural language prompts | Developers hand-write test scripts in code |
| Artifact output | Screenshots + visual tree JSON + step logs, automatically stored | Usually only logs; screenshots require manual setup |
| Cross-platform coverage | One shared codebase targets 6+ platforms via Uno Platform | Separate test suites and drivers per platform |
| AI integration | Native — agents verify their own code changes before you merge | No built-in AI hook; requires custom wrapper |
| Setup effort | Requires Antigravity + App MCP harness configuration | Driver installation + test runner configuration |
| Ecosystem maturity | Growing, still relatively new | Mature, extensive plugin ecosystem |
The key difference isn’t that one replaces the other — it’s that Antigravity + App MCP adds a verification layer inside the development loop itself. Traditional test suites run after development, often in a separate CI pipeline. This approach lets the AI verify its work during development, before the code ever reaches a pull request.
What to Watch Out For
No tool is without trade-offs, and this combination is no exception.
CI performance impact. Running an app inside the MCP harness takes time, especially when targeting multiple platforms. If you’re running these checks on every commit for Android, iOS, and WebAssembly, your CI pipeline will feel it. Antigravity supports parallelizing these runs, but that means more compute resources.
Flaky environment issues. UI tests have always been sensitive to environment differences — a missing font on a headless Linux runner, a slightly different emulator configuration — and this approach inherits those challenges. The advantage is that the artifact system gives you concrete visual evidence to distinguish real problems from environmental noise.
Learning curve. The App MCP exposes a detailed API surface (visual tree queries, pointer simulation, DataContext inspection). Getting comfortable with the JSON schemas and understanding how Antigravity structures its “missions” takes a day or two of hands-on experimentation.
Privacy considerations. Since Antigravity stores screenshots and logs as artifacts, any sensitive data visible in the app’s UI (user names, email addresses, financial information) could end up in those records. Best practice is to run verification against test accounts with sanitized data.
The Bigger Picture: From “Suggest” to “Validate”
What’s genuinely exciting here isn’t just a new way to catch broken buttons. It’s a conceptual shift in what AI coding assistants are capable of.
Until now, the development workflow with AI has been essentially one-directional: you ask the AI for code, it generates code, and then it’s your job to verify whether that code works. The feedback loop is human-dependent. You’re the eyes. You’re the tester. You’re the quality gate.
With runtime verification baked into the agent’s workflow, that loop starts to close. The AI writes the code, runs it, sees the result, and evaluates it — all before presenting the final output to you. Imagine a future where you ask an AI to add dark mode support to a settings page and it comes back with not just the code changes, but also a set of screenshots proving the contrast ratios meet WCAG AA standards, along with visual tree diffs showing the before and after states.
We’re not fully there yet — you still need to define what “correct” looks like and structure the verification missions appropriately. But the infrastructure for self-validating code generation is now real, and that’s a significant step forward for the entire industry.
Getting Started
If you want to try this yourself, the entry points are straightforward. Antigravity is available as a free public preview from Google, downloadable for macOS, Windows, and Linux. The Uno Platform App MCP is included in Uno Platform Studio’s Community Edition, with additional tools available in the Pro version. During the current launch period, AI features in Uno Platform Studio are running without credit limits.
The Uno Platform team has published detailed setup guides for configuring the App MCP in both VS Code and Visual Studio environments, and their blog includes a series of “Tech Bites” — short tutorials walking through specific agent-driven development scenarios.
Whether you’re building a cross-platform .NET app and want smarter AI assistance, or you’re just curious about what “agents that can actually see your UI” looks like in practice, this is worth exploring. The days of AI coders working with their eyes closed are, finally, starting to end.