At the 2025 Google I/O developer conference, Google introduced Stitch, an AI-powered tool that helps design the front end of web and mobile apps by generating the necessary UI elements and code.
Stitch can generate an application UI from just a few words or even an image, providing HTML and CSS markup for the designs it creates. Users can choose between Google's Gemini 2.5 Pro and Gemini 2.5 Flash models to power Stitch's code and interface generation.
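For a sense of what that output can look like, here is a minimal, hypothetical sketch of the kind of HTML and CSS a prompt like "a card component for a book-tracking app" might yield. The markup, class names, and styles are illustrative assumptions, not actual Stitch output.

```html
<!-- Hypothetical sketch of the kind of HTML/CSS markup a UI prompt might produce.
     Class names, colors, and layout here are illustrative, not real Stitch output. -->
<div class="book-card">
  <img class="book-card__cover" src="cover.jpg" alt="Book cover" />
  <div class="book-card__info">
    <h3 class="book-card__title">The Name of the Wind</h3>
    <p class="book-card__author">Patrick Rothfuss</p>
    <button class="book-card__cta">Add to shelf</button>
  </div>
</div>

<style>
  .book-card {
    display: flex;
    gap: 16px;
    padding: 16px;
    border-radius: 12px;
    box-shadow: 0 2px 8px rgba(0, 0, 0, 0.1);
  }
  .book-card__cover { width: 72px; border-radius: 8px; }
  .book-card__title { margin: 0; font-size: 1.1rem; }
  .book-card__author { margin: 4px 0 12px; color: #666; }
  .book-card__cta {
    padding: 8px 14px;
    border: none;
    border-radius: 8px;
    background: #1a73e8;
    color: #fff;
  }
</style>
```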
Stitch arrives amid the rise of so-called vibe coding, the practice of using AI models to generate code. A number of well-funded startups are chasing this burgeoning market, including Anysphere (the maker of Cursor), Cognition, and Windsurf. Just last week, OpenAI launched Codex, a new AI coding agent, and yesterday, at its Build 2025 conference, Microsoft announced a series of updates to its GitHub Copilot coding assistant.
Compared to some other vibe coding products, Stitch is somewhat limited in what it can do, but it offers a fair amount of customization. Designs can be exported directly to Figma, and the generated code can be shared so it can be refined and polished in an IDE. Stitch also lets users fine-tune individual elements of the app designs it produces.
In a demonstration for TechCrunch, Google product manager Kathy Korevec showcased two projects created using Stitch: a responsive mobile UI design for bookworms and a beekeeping web dashboard.
"[Stitch is] a place where you can complete your initial iterations and then move forward from there," Korevec said. "What we want to do is make it easier and more accessible for people to engage in the next phase of design thinking or software building."
Shortly after I/O, Google plans to add a feature allowing users to modify UI designs by taking a screenshot of the object they wish to adjust and annotating their desired changes, according to Korevec. She added that while Stitch is fairly powerful, it’s not intended to be a full-fledged design platform like Figma or Adobe XD.
Beyond Stitch, Google has expanded access to Jules, an AI agent designed to help developers fix code bugs. Currently in public beta, the tool helps developers understand complex code, create pull requests on GitHub, and handle certain backlog items and programming tasks.
In another demo, Korevec showed how Jules upgraded a website running the deprecated Node.js version 16 to Node.js 22. Jules cloned the website's codebase into a clean virtual machine and shared an upgrade "plan," which Korevec was prompted to approve. Once the upgrade was complete, Korevec asked Jules to verify that the site still functioned correctly, and it did.
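For context, a Node.js 16-to-22 upgrade of this sort generally involves switching the runtime, bumping the project's version pins, reinstalling dependencies, and re-running the tests. The shell sketch below illustrates those steps under assumed file names and tooling (nvm and npm); it is a hypothetical outline, not a transcript of Jules' actual plan.

```sh
# Hypothetical sketch of the kind of steps a Node 16 -> 22 upgrade involves;
# the repository URL and project layout are illustrative, not from Google's demo.
git clone https://github.com/example/legacy-site.git && cd legacy-site
nvm install 22 && nvm use 22         # switch the runtime from Node 16 to Node 22
npm pkg set engines.node=">=22.0.0"  # bump the version pin in package.json
npm install                          # reinstall dependencies against the new runtime
npm test                             # verify the site still builds and runs correctly
```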
Jules currently uses Gemini 2.5 Pro, but Korevec told TechCrunch that users will be able to switch between different models in the future.