Identify content made with Google’s AI tools

Advances in generative AI are making it possible for people to create content in entirely new ways, from text to high-quality audio, images and videos. As these capabilities advance and become more broadly available, questions of authenticity, context and verification emerge.

Today we’re announcing SynthID Detector, a verification portal to quickly and efficiently identify AI-generated content made with Google AI. The portal offers detection capabilities across different modalities in one place, providing essential transparency in the rapidly evolving landscape of generative media. It can also highlight which parts of the content are more likely to have been watermarked with SynthID.

When we launched SynthID — a state-of-the-art tool that embeds imperceptible watermarks and enables the identification of AI-generated content — our aim was to provide a suite of novel technical solutions to help minimise misinformation and misattribution.

SynthID not only preserves the content’s quality, it acts as a robust watermark that remains detectable even when the content is shared or undergoes a range of transformations. While originally focused on AI-generated imagery only, we’ve since expanded SynthID to cover AI-generated text, audio and video content, including content generated by our Gemini, Imagen, Lyria and Veo models across Google. Over 10 billion pieces of content have already been watermarked with SynthID.

How SynthID Detector works

When you upload an image, audio track, video or piece of text created using Google’s AI tools, the portal will scan the media for a SynthID watermark. If a watermark is detected, the portal will highlight specific portions of the content most likely to be watermarked.

For audio, the portal pinpoints specific segments where a SynthID watermark is detected, and for images, it indicates areas where a watermark is most likely.
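
The portal itself is a web interface, but the flow it describes — submit media, scan for a SynthID watermark, and surface the segments most likely to carry it — can be sketched in code. The sketch below is a hypothetical illustration of that flow only: the `Segment` type, the per-segment scores and the 0.9 threshold are assumptions for demonstration, not Google's actual detector or API.

```python
# Hypothetical sketch of the detector flow described above:
# scan results arrive as per-segment watermark scores, and the
# portal highlights the segments whose score is high enough.
# The scores and threshold here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Segment:
    start: float   # segment start (seconds, for audio or video)
    end: float     # segment end
    score: float   # hypothetical watermark likelihood in [0, 1]


def highlight_watermarked(segments: list[Segment], threshold: float = 0.9) -> list[Segment]:
    """Return the segments most likely to be watermarked, mirroring how
    the portal highlights specific portions of the uploaded content."""
    return [seg for seg in segments if seg.score >= threshold]


if __name__ == "__main__":
    # Fabricated example output standing in for a real scan of an audio track.
    scan = [
        Segment(0.0, 5.0, 0.12),    # weak signal: likely not watermarked
        Segment(5.0, 10.0, 0.97),   # strong signal: likely watermarked
        Segment(10.0, 15.0, 0.95),  # strong signal: likely watermarked
    ]
    for seg in highlight_watermarked(scan):
        print(f"Watermark likely between {seg.start:.1f}s and {seg.end:.1f}s (score {seg.score:.2f})")
```

In practice the per-segment scores would come from the watermark detector itself; this sketch only shows the highlighting step the portal performs once those scores exist.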


