Gemini 3
Our most intelligent AI model that brings any idea to life
Gemini 3.1 Pro
Best for complex tasks and bringing creative concepts to life
Gemini 3 Flash
Our latest Gemini 3 model that helps you bring any idea to life, faster.
Google Antigravity
Build with our new agentic development platform.
Nano Banana Pro
Create and edit images with studio-quality levels of precision and control.
Introducing our most intelligent model yet, with state-of-the-art reasoning to help you learn, build, and plan anything.
Models
Whether you're completing everyday tasks or solving complex problems, discover the right model for what you need.
Capabilities
Pushes the boundaries of intelligence, delivering a significant upgrade to Gemini 3’s specialized reasoning mode to help you solve the most complex technical problems.
Gemini 3 Deep Think helps tackle real-world problems that require rigor, breakthrough creativity, and intelligence. Available for Google AI Ultra subscribers.
| Benchmark | Notes | Gemini 3 Deep Think (Feb 2026) | Gemini 3 Pro Preview, Thinking (High) | Opus 4.6, Thinking (Max) | GPT-5.2, Thinking (xhigh) |
|---|---|---|---|---|---|
| ARC-AGI-2: Abstract reasoning puzzles | ARC Prize Verified | 84.6% | 31.1% | 68.8% | 52.9% |
| Humanity's Last Exam: Academic reasoning (full set, text + MM) | No tools | 48.4% | 37.5% | 40.0% | 34.5% |
| | Search + code execution | 53.4% | 45.8% | 53.1% | 45.5% |
| MMMU-Pro: Multimodal understanding and reasoning | No tools | 81.5% | 81.0% | 73.9% | 79.5% |
| International Math Olympiad 2025: Mathematics | | 81.5% | 14.3% | — | 71.4% |
| Codeforces: Coding and algorithms | No tools, Elo | 3455 | 2512 | 2352 | — |
| International Physics Olympiad 2025 (theory): Physics | | 87.7% | 76.3% | 71.6% | 70.5% |
| CMT-Benchmark: Condensed matter theory | | 50.5% | 39.5% | 17.1% | 41.0% |
| International Chemistry Olympiad 2025 (theory): Chemistry | | 82.8% | 69.6% | — | 72.0% |
Gemini 1 introduced native multimodality and long context to help AI understand the world. Gemini 2 added thinking, reasoning and tool use to create a foundation for agents.
Now, Gemini 3 brings these capabilities together – so you can bring any idea to life.
Get started
Build with Gemini 3
Hands-on
Explore what you can do with Gemini 3
Showcase
“Gemini 3 Pro brings a new level of multimodal understanding, planning, and tool-calling that transforms how Box AI interprets and applies your institutional knowledge. The result is content actively working for you to deliver faster decisions and execute across mission-critical workflows, from sales and marketing to legal and finance.”
Ben Kus, CTO, Box
“Gemini 3 has been a game-changer for Cline. We're using it to handle complex, long-horizon coding tasks that require deep context understanding across entire codebases. The model uses long context far more effectively than Gemini 2.5 Pro and has solved problems that stumped other leading models... This is a massive leap.”
Nik Pash, Head of AI, Cline
“We’re excited to partner with Google to launch Gemini 3 in Cursor! Gemini 3 Pro shows noticeable improvements in frontend quality, and works well for solving the most ambitious tasks.”
Sualeh Asif, Co-founder and Chief Product Officer, Cursor
“With Gemini 3 Pro in Figma Make, teams have a strong foundation to explore and steer their ideas with code-backed prototypes. The model translates designs with precision and generates a wide, inventive range of styles, layouts, and interactions. As foundation models get better, Figma gets better — and I’m excited to see how Gemini 3 Pro helps our community unlock new creative possibilities.”
Loredana Crisan, Chief Design Officer, Figma
“By bringing Gemini 3 Pro to GitHub Copilot, we’re seeing promising gains in how quickly and confidently developers can move from idea to code. In our early testing in VS Code, Gemini 3 Pro demonstrated 35% higher accuracy in resolving software engineering challenges than Gemini 2.5 Pro. That's the kind of potential that translates to developers solving real-world problems with more speed and effectiveness.”
Joe Binder, VP of Product, GitHub
“At JetBrains, we pride ourselves on code quality, so we challenged Gemini 3 Pro with demanding frontline tasks: from generating thousands of lines of front-end code to even simulating an operating-system interface from a single prompt. The new Gemini 3 Pro model advances the depth, reasoning, and reliability of AI in developer tools, showing more than a 50% improvement over Gemini 2.5 Pro in the number of solved benchmark tasks. In collaboration with Google, we’re now integrating Gemini 3 Pro into Junie and AI Assistant, to deliver smarter, more context-aware experiences to millions of developers worldwide.”
Vladislav Tankov, Director of AI, JetBrains
“We’ve observed even stronger performance in the model’s reasoning and problem-solving capabilities. Many of Manus’ recent advancements—such as Wide Research and the web-building capabilities introduced in Manus 1.5—have become significantly more powerful with Gemini 3’s support.”
Tao Zhang, Co-Founder and Chief Product Officer, Manus AI
“Gemini 3 represents a significant advancement in multimodal AI... From accurately transcribing 3-hour multilingual meetings with superior speaker identification, to extracting structured data from poor-quality document photos, outperforming baseline models by over 50%, it showcased impressive capabilities that redefine enterprise potential.”
Yusuke Kaji, General Manager, AI for Business, Rakuten Group, Inc.
“Gemini 3 Pro truly stands out for its design capabilities, offering an unprecedented level of flexibility while creating apps. Like a skilled UI designer, it can range from well-organized wireframes to stunning high-fidelity prototypes.”
Michele Catasta, President & Head of AI, Replit
“Gemini 3 is a major leap forward for agentic AI. It follows complex instructions with minimal prompt tuning and reliably calls tools, which are critical capabilities to build truly helpful agents.”
Mikhail Parakhin, Chief Technology Officer, Shopify
“Our early evaluations indicate that Gemini 3 is delivering state-of-the-art reasoning with depth and nuance. We have observed measurable and significant progress in both legal reasoning and complex contract understanding.”
Joel Hron, Chief Technology Officer, Thomson Reuters
“At Wayfair, we’ve been piloting Google’s Gemini 3 Pro to turn complex partner support SOPs into clear, data-accurate infographics for our field associates. Compared with Gemini 2.5 Pro, it’s a clear step forward in handling structured business tasks that require precision and consistency — helping our teams grasp key information faster and support partners more effectively.”
Fiona Tan, CTO, Wayfair
Performance
Gemini 3 is state-of-the-art across a wide range of benchmarks
Our most intelligent model yet sets a new bar for AI model performance
| Benchmark | Notes | Gemini 3.1 Pro, Thinking (High) | Gemini 3 Pro, Thinking (High) | Sonnet 4.6, Thinking (Max) | Opus 4.6, Thinking (Max) | GPT-5.2, Thinking (xhigh) | GPT-5.3-Codex, Thinking (xhigh) |
|---|---|---|---|---|---|---|---|
| Humanity's Last Exam: Academic reasoning (full set, text + MM) | No tools | 44.4% | 37.5% | 33.2% | 40.0% | 34.5% | — |
| | Search (blocklist) + Code | 51.4% | 45.8% | 49.0% | 53.1% | 45.5% | — |
| ARC-AGI-2: Abstract reasoning puzzles | ARC Prize Verified | 77.1% | 31.1% | 58.3% | 68.8% | 52.9% | — |
| GPQA Diamond: Scientific knowledge | No tools | 94.3% | 91.9% | 89.9% | 91.3% | 92.4% | — |
| Terminal-Bench 2.0: Agentic terminal coding | Terminus-2 harness | 68.5% | 56.9% | 59.1% | 65.4% | 54.0% | 64.7% |
| | Other best self-reported harness | — | — | — | — | 62.2% (Codex) | 77.3% (Codex) |
| SWE-Bench Verified: Agentic coding | Single attempt | 80.6% | 76.2% | 79.6% | 80.8% | 80.0% | — |
| SWE-Bench Pro (Public): Diverse agentic coding tasks | Single attempt | 54.2% | 43.3% | — | — | 55.6% | 56.8% |
| LiveCodeBench Pro: Competitive coding problems from Codeforces, ICPC, and IOI | Elo | 2887 | 2439 | — | — | 2393 | — |
| SciCode: Scientific research coding | | 59% | 56% | 47% | 52% | 52% | — |
| APEX-Agents: Long horizon professional tasks | | 33.5% | 18.4% | — | 29.8% | 23.0% | — |
| GDPval-AA Elo: Expert tasks | | 1317 | 1195 | 1633 | 1606 | 1462 | — |
| τ2-bench: Agentic and tool use | Retail | 90.8% | 85.3% | 91.7% | 91.9% | 82.0% | — |
| | Telecom | 99.3% | 98.0% | 97.9% | 99.3% | 98.7% | — |
| MCP Atlas: Multi-step workflows using MCP | | 69.2% | 54.1% | 61.3% | 59.5% | 60.6% | — |
| BrowseComp: Agentic search | Search + Python + Browse | 85.9% | 59.2% | 74.7% | 84.0% | 65.8% | — |
| MMMU-Pro: Multimodal understanding and reasoning | No tools | 80.5% | 81.0% | 74.5% | 73.9% | 79.5% | — |
| MMMLU: Multilingual Q&A | | 92.6% | 91.8% | 89.3% | 91.1% | 89.6% | — |
| MRCR v2 (8-needle): Long context performance | 128k (average) | 84.9% | 77.0% | 84.9% | 84.0% | 83.8% | — |
| | 1M (pointwise) | 26.3% | 26.3% | Not supported | Not supported | Not supported | — |
Safety
Building with responsibility at the core
As we develop these new technologies, we recognize the responsibility they entail, and we aim to prioritize safety and security in all our efforts.
For developers
Build with cutting-edge generative AI models and tools to make AI helpful for everyone
Gemini’s advanced thinking, native multimodality, and massive context window empower developers to build next-generation experiences.
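For a first hands-on test, the sketch below shows a minimal call with the google-genai Python SDK (`pip install google-genai`). The model ID `gemini-3-pro-preview` and the use of a `GEMINI_API_KEY` environment variable are assumptions for illustration; check Google AI Studio or the Gemini API docs for the exact model identifiers and key setup available to your account.

```python
# Minimal sketch: one text prompt to a Gemini 3 model via the google-genai SDK.
# Assumes an API key is set in the environment (e.g. GEMINI_API_KEY) and that
# "gemini-3-pro-preview" is an available model ID for your account.
from google import genai

client = genai.Client()  # picks up the API key from the environment

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # hypothetical ID; substitute a model you have access to
    contents="Outline a three-step plan for prototyping a web app with an AI pair programmer.",
)

print(response.text)  # the model's text reply
```

The same `generate_content` call accepts multimodal inputs (images, audio, documents) alongside text, which is how the long-context and multimodal capabilities described above are exercised in practice.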

