Google's latest AI model, Gemini 2.0, is here to usher in the 'agentic era'

The Gemini 2.0 graphic. (Image credit: Google)

What you need to know

  • Google is releasing Gemini 2.0 starting today, which will power what it calls the "agentic era."
  • Today, Gemini 2.0 Flash Experimental is available to developers through the Gemini API in Google AI Studio and Vertex AI.
  • Gemini 2.0 Flash Experimental is also available in a chat-optimized form in the Gemini web client starting today, and the full lineup of Gemini 2.0 models will make their way to more Google products and services next year.

Roughly 10 months after Google's current generation of AI models was released to the public, the company is previewing the future: Gemini 2.0. Starting today, the smaller Gemini 2.0 Flash Experimental model is available to developers and Gemini users, with more sizes and implementations to come next year.

Google views the next step forward for AI as the "agentic era," and in a blog post, the company outlined how Gemini 2.0 models are purpose-built to power AI agents. This will be the foundation for Google's most ambitious projects, from multimodal helpers to Chrome extensions that can do your browsing for you.

"Over the last year, we have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision," wrote Google CEO Sundar Pichai. "With new advances in multimodality — like native image and audio output — and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant."

Video: Project Astra | Exploring the future capabilities of a universal AI assistant (YouTube)

Although only an experimental version of Google's smallest Gemini 2.0 model is available today, there are a few key reasons to be excited. Notably, Gemini 2.0 Flash not only outperforms Gemini 1.5 Flash, but also beats the larger and more powerful Gemini 1.5 Pro model in certain benchmarks. Gemini 2.0 Flash now supports both multimodal inputs and outputs, meaning the model can natively generate images, text, speech, or a combination of the three.

Gemini 2.0 Flash Experimental is available globally as an option in the Gemini web client today, and will come to the Gemini mobile app soon. Additionally, developers can try out the experimental version of Gemini 2.0 Flash using the Gemini API in Google AI Studio or Vertex AI.
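For developers who want to try it, the snippet below is a minimal sketch of querying the experimental model through the Gemini API from Python. It assumes the google-generativeai SDK and the "gemini-2.0-flash-exp" model id; the environment variable name is a placeholder for your own key, so treat the details as illustrative rather than official sample code.

    # Minimal sketch: calling Gemini 2.0 Flash Experimental via the Gemini API.
    # Assumes the google-generativeai Python SDK (pip install google-generativeai);
    # the model id and environment variable name below are illustrative.
    import os
    import google.generativeai as genai

    # Authenticate with an API key generated in Google AI Studio.
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])

    # Point the client at the experimental Gemini 2.0 Flash model.
    model = genai.GenerativeModel("gemini-2.0-flash-exp")

    # Send a plain text prompt and print the text portion of the response.
    response = model.generate_content("Explain what an 'agentic' AI model is.")
    print(response.text)

Vertex AI uses its own client libraries, so the setup there differs, but the same experimental model is exposed through both routes.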

Eventually, Gemini 2.0 will power most or all of Google's AI-equipped features. The models are specifically tailored for AI agents like Project Astra, which was previewed at Google I/O 2024: a multimodal AI helper that can take in what's happening around you and answer questions with the context of your environment. There are also new projects, like Project Mariner, a Chrome extension being tested as a research prototype that can handle browsing tasks on your behalf.

Google's agentic vision is ambitious, and it extends to several more applications. There's Jules, an AI helper for developers that integrates directly into GitHub workflows. Another proof of concept is the company's collaboration with Supercell, which explores how AI agents can be applied to strategy and simulation games.

Finally, there's a new Deep Research mode in Gemini Advanced that uses long context windows and advanced reasoning to function as a research assistant.

Many of Google's Gemini 2.0-powered ideas aren't available yet but are in active development; Project Astra, for example, is being tested externally through Google's trusted tester program. Some are available starting today, however, like Gemini 2.0 Flash Experimental in Gemini and Gemini 2.0 in AI Overviews, which is in limited testing and will roll out more widely next year.

Brady Snyder
Contributor

Brady is a tech journalist for Android Central, with a focus on news, phones, tablets, audio, wearables, and software. He has spent the last three years reporting and commenting on all things related to consumer technology for various publications. Brady graduated from St. John's University with a bachelor's degree in journalism. His work has been published in XDA, Android Police, Tech Advisor, iMore, Screen Rant, and Android Headlines. When he isn't experimenting with the latest tech, you can find Brady running or watching Big East basketball.