Google is launching a new “AI Mode” experimental feature in Search that looks to take on popular services like Perplexity AI and OpenAI’s ChatGPT Search. The tech giant announced on Wednesday that the new mode is designed to allow users to ask complex, multi-part questions and follow-ups to dig deeper on a topic directly within Google Search.
AI Mode is rolling out to Google One AI Premium subscribers starting this week and is accessible via Search Labs, Google’s experimental arm.
The feature uses a custom version of Gemini 2.0 and, thanks to its advanced reasoning, thinking, and multimodal capabilities, is particularly helpful for questions that require further exploration and comparison.
For instance, you could ask: “What’s the difference in sleep tracking features between a smart ring, smartwatch, and tracking mat?”
AI Mode can then give you a detailed comparison of what each product offers, along with links to articles that it’s pulling the information from. You could then ask a follow-up question, such as: “What happens to your heart rate during deep sleep?” to continue your search.

Google says that in the past, it would have taken multiple queries to compare detailed options or explore a new concept through traditional searches.
With AI Mode, you can not only access web content but also tap into real-time sources like the Knowledge Graph, info about the real world, and shopping data for billions of products.
“What we’re seeing in testing is people are asking questions that are about twice the query length of traditional search, and they’re also following up and asking follow up questions about a quarter of the time,” Robby Stein, VP of Product at Google Search, told TechCrunch in an interview. “And so they’re really getting at these maybe harder questions, ones that need more back and forth, and we think, it creates an expanded opportunity to do more with Google search, and that’s what we’re really excited about.”
Stein noted that as Google has rolled out AI Overviews, a feature that displays a snapshot of information at the top of the results page, it has heard that users want a way to get these sorts of AI-powered answers for even more of their searches, which is why the company is introducing AI Mode.
AI Mode uses a “query fan-out” technique: it issues multiple related searches concurrently across different data sources, then brings those results together in an easy-to-understand response.
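At a high level, the fan-out pattern Google describes can be sketched as follows. This is a minimal illustration, not Google's actual implementation: the sub-queries, source names, and `search_source` function are hypothetical stand-ins, and the real system derives sub-queries with a model rather than hard-coding them.

```python
# Illustrative "query fan-out" sketch: issue several related sub-queries
# concurrently across multiple sources, then merge the results into one
# list for a model to synthesize into a response.
from concurrent.futures import ThreadPoolExecutor


def search_source(source: str, query: str) -> list[str]:
    # Placeholder for a real backend call (web index, Knowledge Graph,
    # shopping data, etc.).
    return [f"{source} result for '{query}'"]


def fan_out(question: str) -> list[str]:
    # A production system would derive related sub-queries from the
    # user's question; these are hard-coded for illustration.
    sub_queries = [
        "smart ring sleep tracking",
        "smartwatch sleep tracking",
        "sleep tracking mat accuracy",
    ]
    sources = ["web", "knowledge_graph", "shopping"]
    tasks = [(src, q) for q in sub_queries for src in sources]

    # Run every (source, sub-query) pair concurrently.
    with ThreadPoolExecutor() as pool:
        batches = pool.map(lambda t: search_source(*t), tasks)

    # Flatten and de-duplicate before handing off for synthesis.
    merged: list[str] = []
    for batch in batches:
        for item in batch:
            if item not in merged:
                merged.append(item)
    return merged
```

The point of the pattern is latency: because the sub-queries run in parallel, the total wait is roughly that of the slowest single lookup rather than the sum of all of them.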

“The model has learned to really prioritize factuality and backing up what it says through information that can be verified, and that’s really important, and it pays extra attention to really sensitive areas,” Stein said. “So this might be health, as an example, and where it’s not confident, it actually might just respond with a list of web links and web URLs, because that’s most helpful in the moment. It’s going to do its best and just be most helpful given the context of the information available and how confident it can be in the reply. This does not mean it will never make mistakes. It is very likely that it will make mistakes, as with every new kind of new and cutting edge AI technology that’s released.”
Since this is an early experiment, Google notes that it will continue to refine the user experience and expand functionality. For instance, the company plans to make the experience more visual and to surface information from a wider range of sources, such as user-generated content. Google is also teaching the model to determine when to add a hyperlink in a response (e.g., booking tickets) or when to prioritize images or video (e.g., how-to queries).
Google One AI Premium subscribers can access AI Mode by opting into Search Labs and then entering a question in the Search bar and tapping the “AI Mode” tab. Or, they can navigate directly to google.com/aimode to access the feature. On mobile, they can open the Google app and tap the “AI Mode” icon below the Search bar on the home screen.
As part of today’s announcement, Google also shared that it’s launched Gemini 2.0 for AI Overviews in the U.S. The company says AI Overviews will now be able to help with harder questions, starting with coding, advanced math, and multimodal queries. Plus, Google announced that users no longer need to sign in to access AI Overviews, and the feature is now being rolled out to teen users as well.
Aisha is a consumer news reporter at TechCrunch. Prior to joining the publication in 2021, she was a telecom reporter at MobileSyrup. Aisha holds an honours bachelor’s degree from the University of Toronto and a master’s degree in journalism from Western University.