1.5 Flash, Gemma 2 and Project Astra

1.5 Flash excels at summarization, chat applications, image and video captioning, data extraction from long documents and tables, and more. This is because it’s been trained by 1.5 Pro through a process called “distillation,” where the most essential knowledge and skills from a larger model are transferred to a smaller, more efficient model.
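
To make the idea concrete, here is a minimal, illustrative sketch of a distillation objective in PyTorch. This is not Google’s training code; the teacher/student framing, temperature value and loss scaling are generic assumptions about how distillation is typically implemented.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student output distributions.

    The temperature flattens both distributions so the student also learns
    from the teacher's relative preferences among non-top answers.
    """
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, soft_targets,
                    reduction="batchmean") * temperature ** 2
```

In practice, a term like this is usually combined with a standard cross-entropy loss on the ground-truth labels.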

Read more about 1.5 Flash in our updated Gemini 1.5 technical report and on the Gemini technology page, and learn more about 1.5 Flash’s availability and pricing.

Significantly improving 1.5 Pro

Over the last few months, we’ve significantly improved 1.5 Pro, our best model for general performance across a wide range of tasks.

Beyond extending its context window to 2 million tokens, we’ve enhanced its code generation, logical reasoning and planning, multi-turn conversation, and audio and image understanding through data and algorithmic advances. We see strong improvements on public and internal benchmarks for each of these tasks.

1.5 Pro can now follow increasingly complex and nuanced instructions, including ones that specify product-level behavior involving role, format and style. We’ve improved control over the model’s responses for specific use cases, like crafting the persona and response style of a chat agent or automating workflows through multiple function calls. And we’ve enabled users to steer model behavior by setting system instructions.
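
As a minimal sketch of system instructions, assuming the google-generativeai Python SDK (the model name and prompts here are illustrative):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# The system instruction fixes the agent's persona and response style
# for the whole session, separately from individual user messages.
model = genai.GenerativeModel(
    model_name="gemini-1.5-pro",
    system_instruction="You are a terse support agent. Reply in bullet points.",
)

chat = model.start_chat()
response = chat.send_message("How do I reset my password?")
print(response.text)
```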

We added audio understanding in the Gemini API and Google AI Studio, so 1.5 Pro can now reason across image and audio for videos uploaded in Google AI Studio. And we’re now integrating 1.5 Pro into Google products, including Gemini Advanced and in Workspace apps.
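
A hedged sketch of what that looks like through the API, again assuming the google-generativeai Python SDK; the file name is a placeholder:

```python
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Upload the video via the File API, then wait for server-side processing
# so both its audio and visual tracks are available to the model.
video = genai.upload_file(path="product_demo.mp4")
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    [video, "Summarize what is said and what is shown on screen."]
)
print(response.text)
```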

Read more about 1.5 Pro in our updated Gemini 1.5 technical report and on the Gemini technology page.

Gemini Nano understands multimodal inputs

Gemini Nano is expanding beyond text-only inputs to include images as well. Starting with Pixel, applications using Gemini Nano with Multimodality will be able to understand the world the way people do — not just through text, but also through sight, sound and spoken language.

Read more about Gemini 1.0 Nano on Android.


