
Flash 1.5, Gemma 2 and Project Astra



1.5 Flash excels at summarization, chat applications, image and video captioning, data extraction from long documents and tables, and more. This is because it’s been trained by 1.5 Pro through a process called “distillation,” where the most essential knowledge and skills from a larger model are transferred to a smaller, more efficient model.
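Google hasn't published the exact recipe used to distill 1.5 Flash from 1.5 Pro, but the general idea of distillation can be illustrated with a generic logit-distillation loss. The sketch below (in PyTorch, purely illustrative; the function name, temperature and mixing weight are assumptions, not the 1.5 Flash training code) shows how a smaller student model is trained to match a larger teacher's softened output distribution while still fitting the ground-truth labels:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    # Soften both distributions with a temperature so the student learns from
    # the teacher's full output distribution, not just its top prediction.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd_term = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    # Standard cross-entropy against the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)
    # Blend the "mimic the teacher" term with the hard-label term.
    return alpha * kd_term + (1 - alpha) * ce_term
```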

Read more about 1.5 Flash in our updated Gemini 1.5 technical report and on the Gemini technology page, and learn about 1.5 Flash’s availability and pricing.

Significantly improving 1.5 Pro

Over the last few months, we’ve significantly improved 1.5 Pro, our best model for general performance across a wide range of tasks.

Beyond extending its context window to 2 million tokens, we’ve enhanced its code generation, logical reasoning and planning, multi-turn conversation, and audio and image understanding through data and algorithmic advances. We see strong improvements on public and internal benchmarks for each of these tasks.

1.5 Pro can now follow increasingly complex and nuanced instructions, including ones that specify product-level behavior involving role, format and style. We’ve improved control over the model’s responses for specific use cases, like crafting the persona and response style of a chat agent or automating workflows through multiple function calls. And we’ve enabled users to steer model behavior by setting system instructions.
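As a minimal sketch of what steering the model with a system instruction looks like in practice, here is an example using the google-generativeai Python SDK (the API key, persona text and user message are placeholders):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# The persona and response style are set once via a system instruction
# and then apply to every turn of the conversation.
model = genai.GenerativeModel(
    model_name="gemini-1.5-pro",
    system_instruction=(
        "You are a concise, friendly support agent for an online bookstore. "
        "Always answer in two sentences or fewer."
    ),
)

chat = model.start_chat()
print(chat.send_message("Where can I track my recent order?").text)
```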

We added audio understanding in the Gemini API and Google AI Studio, so 1.5 Pro can now reason across image and audio for videos uploaded in Google AI Studio. And we’re now integrating 1.5 Pro into Google products, including Gemini Advanced and Workspace apps.
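A minimal sketch of sending audio to 1.5 Pro through the Gemini API, again using the google-generativeai Python SDK (the API key and the file name "meeting.mp3" are placeholders):

```python
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Upload a local recording via the File API.
audio = genai.upload_file(path="meeting.mp3")

# Uploaded media (especially video) may need a moment of server-side processing.
while audio.state.name == "PROCESSING":
    time.sleep(5)
    audio = genai.get_file(audio.name)

# Pass the uploaded file alongside a text prompt in one request.
model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content([audio, "Summarize the main points of this recording."])
print(response.text)
```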

Read more about 1.5 Pro in our updated Gemini 1.5 technical report and on the Gemini technology page.

Gemini Nano understands multimodal inputs

Gemini Nano is expanding beyond text-only inputs to include images as well. Starting with Pixel, applications using Gemini Nano with Multimodality will be able to understand the world the way people do — not just through text, but also through sight, sound and spoken language.

Read more about Gemini 1.0 Nano on Android.



