News

The company says Gemini 2.5 Deep Think achieved new performance records—for AI, at least—on common knowledge and reasoning tests.
The duration for this one was about 13 minutes. Unfortunately, Gemini's automatic task chip won't let you adjust the length or conversational depth of the audio overview.
The Gemini 2.0 Live API supports bidirectional, low-latency interaction, letting an application stream voice, text, and video to the model and receive responses in the same open session.
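A minimal sketch of what a bidirectional session could look like with the google-genai Python SDK; the model name, config fields, and API key placeholder are assumptions for illustration, based on the preview release of the Live API:

```python
# Sketch of a live, bidirectional session using the google-genai SDK.
# Model name, config values, and the API key are placeholders/assumptions.
import asyncio
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

async def main():
    # Open a live session; text in, text out for simplicity (audio/video
    # modalities are configured the same way via response_modalities).
    config = {"response_modalities": ["TEXT"]}
    async with client.aio.live.connect(
        model="gemini-2.0-flash-exp", config=config
    ) as session:
        # Send one client turn, then stream back the model's reply.
        await session.send(input="Hello, Gemini!", end_of_turn=True)
        async for message in session.receive():
            if message.text:
                print(message.text, end="")

asyncio.run(main())
```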
Audio Overview is coming to Google’s AI chatbot Gemini, and I think it will change the way we use it for good. You can use Audio Overview to turn documents, slides, and even Deep Research ...
Gemini is not a one-size-fits-all model; it comes in three sizes: Google Gemini Ultra, Pro, and Nano. The Pro version, which is the focus of this article, is now accessible via the Gemini API.
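For reference, a short sketch of calling the Pro model through the Gemini API with the google-generativeai package; the API key and prompt are placeholders, not values from the announcement:

```python
# Sketch: calling Gemini Pro via the Gemini API (google-generativeai package).
# The API key and prompt are placeholders for illustration.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-pro")

response = model.generate_content(
    "Summarize the key differences between Gemini Ultra, Pro, and Nano."
)
print(response.text)
```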
As a reminder, Google updated Deep Research a few days ago so that the Gemini 2.0 Flash model now powers the feature. Audio Overviews in Gemini will work just like they do in NotebookLM.
Both Canvas and Audio Overview are available for free to Gemini users worldwide as of Tuesday. Canvas’ code preview feature is only on the web for now, however, and Audio Overview summaries are ...
Starting today, developers using Google's Gemini API and Google AI Studio to build AI-based services and bots will be able to ground responses in data from Google Search.
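One way this could look with the google-genai Python SDK is sketched below; the model name, prompt, and API key are assumptions for illustration, and the tool wiring follows the SDK's types module:

```python
# Sketch: grounding a prompt with Google Search via the google-genai SDK.
# Model name, prompt, and API key are placeholders for illustration.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Who won the most recent Formula 1 race?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(response.text)
# When grounding is used, source attribution is attached to the candidate.
print(response.candidates[0].grounding_metadata)
```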
Other announcements today: Gemini 2.0 Flash has hit general availability (GA) for developers building apps and features with Google’s API. Pricing details are available here.