Google announced the launch of Gemini 3 on Tuesday, describing it as another major milestone on the path toward Artificial General Intelligence (AGI).
In a statement, Google CEO Sundar Pichai said, “It achieves state-of-the-art reasoning capabilities, designed to grasp depth and nuance—whether capturing subtle cues in creative ideas or unpacking the overlapping layers of complex problems.”
Google stated that Gemini 3 is being integrated into its core products, including Search. The model is already live in Search’s AI mode, offering enhanced reasoning and a new dynamic user experience.
Users can now access the model through the Gemini app, while developers can leverage it via AI Studio, Vertex AI, and Google’s newly introduced agent-centric development platform, Google Antigravity.
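For developers wondering what that access looks like in practice, the sketch below uses the Google Gen AI Python SDK (the google-genai package) to send a prompt to the model. The model identifier shown is an assumption for illustration only; preview model names can change, so the current ID should be confirmed in AI Studio or Vertex AI.

```python
# Minimal sketch: calling a Gemini model through the Google Gen AI Python SDK
# (pip install google-genai). The model ID below is an assumed placeholder for
# the Gemini 3 Pro preview; check AI Studio or Vertex AI for the current name.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # API key issued via Google AI Studio

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed preview identifier, may differ
    contents="Summarize the key trade-offs between breadth-first and depth-first search.",
)

print(response.text)  # the model's text reply
```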
Demis Hassabis, CEO of Google DeepMind, and Koray Kavukcuoglu, Chief Technology Officer and Chief AI Architect, said in a joint statement that Gemini 3 Pro is now available in preview.
“We’re entering the Gemini 3 era,” they said, noting that the model is gradually being rolled out across Search, Workspace, the Gemini app, and various developer platforms.
According to Google, Gemini 3 Pro outperforms Gemini 2.5 Pro, OpenAI’s GPT-5.1, and Anthropic’s Claude Sonnet 4.5 across key AI benchmarks, including LMArena, Humanity’s Last Exam, GPQA Diamond, and MathArena Apex.
The company highlighted significant improvements in multimodal capabilities, citing scores of 81% on MMMU-Pro (Massive Multi-discipline Multimodal Understanding, Pro), 87.6% on Video-MMMU, and 72.1% on SimpleQA Verified, a benchmark measuring factual accuracy.
The launch also introduces Gemini 3 Deep Think, an optimized reasoning mode. Google reports it scored 41% on Humanity’s Last Exam, 93.8% on GPQA Diamond, and 45.1% on ARC-AGI-2 (with code execution enabled). “Deep Think pushes the boundaries of intelligence even further,” the company stated.
With broader multimodal input support, extended context handling, and new planning features, Gemini 3 can assist with diverse tasks such as analyzing research papers, translating handwritten family recipes, generating data visualizations, and evaluating athletic performance. In Search, the AI mode now supports generative UI elements and interactive simulations.
For developers, Google unveiled Google Antigravity—an agent-focused platform built around Gemini 3. The company claims Antigravity enables agents to “autonomously plan and execute complex end-to-end software tasks” with direct access to editors, terminals, and browsers. Gemini 3 also integrates with tools like Google AI Studio, Vertex AI, Gemini CLI, Cursor, GitHub, JetBrains, and Replit.
Another key advancement is the model’s long-horizon planning capability. Google noted that Gemini 3 Pro tops the Vending-Bench 2 leaderboard, demonstrating consistent decision-making over simulated one-year operational cycles.
Subscribers to Google AI Ultra can access these agent capabilities through the Gemini Agent feature in the Gemini app.
Google also emphasized its expanded safety evaluations, stating that Gemini 3 has undergone its most comprehensive assessment to date, with external partners including Apollo, Vaultis, and Dreadnode.
“Gemini 3 is our safest model yet,” the company asserted, adding that it shows significantly reduced sycophancy, improved resistance to prompt injection attacks, and stronger safeguards against misuse.