As AI-generated videos become increasingly sophisticated and pervasive, the line between real and synthetic content blurs, creating a pressing need for verification tools. In a significant step to address this "AI slop" problem, Google has expanded the capabilities of its Gemini AI assistant. The app can now analyze uploaded videos to detect if they were created or edited using Google's own AI models. This move follows a similar feature for images launched in November and represents a growing effort by tech giants to bring transparency to AI-generated media. However, the tool's effectiveness is inherently tied to the adoption of Google's specific watermarking technology, highlighting a broader industry challenge.
Google Expands Gemini's Verification to Video Content
Google has officially rolled out video verification capabilities within its Gemini app, allowing users to upload a video file and ask, "Was this generated using Google AI?" The system then scans both the visual and audio tracks of the video for an imperceptible digital watermark known as SynthID. This proprietary technology is embedded into content created or significantly edited by Google's AI models, such as those within its Nano Banana family. When detected, Gemini provides a detailed response, pinpointing specific timestamps where the watermark appears in the audio or visuals, rather than just giving a simple yes-or-no answer. This granular feedback aims to offer users clearer insight into how AI was used in the content's creation.
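Google exposes this check only as a conversational reply, not as a developer API, but the shape of that timestamped reply is easy to model. The sketch below is purely illustrative; the class and field names are hypothetical, not Google's:

```python
from dataclasses import dataclass, field

@dataclass
class WatermarkHit:
    """One SynthID detection inside an uploaded video (illustrative only)."""
    track: str            # "visual" or "audio"
    start_seconds: float
    end_seconds: float

@dataclass
class VerificationReport:
    """Shape of Gemini's answer, modeled from the article's description.

    Google only returns this conversationally; these names are hypothetical.
    """
    synthid_detected: bool
    hits: list[WatermarkHit] = field(default_factory=list)

    def summary(self) -> str:
        if not self.synthid_detected:
            # Absence of SynthID does not prove the video is not AI-generated.
            return "No SynthID watermark found."
        spans = ", ".join(
            f"{h.track} {h.start_seconds:.0f}-{h.end_seconds:.0f}s" for h in self.hits
        )
        return f"SynthID watermark detected at: {spans}"

# Example: a clip carrying a watermark in its audio track from 0-12 seconds.
report = VerificationReport(True, [WatermarkHit("audio", 0.0, 12.0)])
print(report.summary())
```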
Gemini Video Verification Specifications:
- Maximum File Size: 100 MB
- Maximum Duration: 90 seconds
- Detection Target: Google's SynthID watermark in visual and audio tracks
- Availability: All languages and countries where the Gemini app is available
- User Prompt: "Was this generated using Google AI?"
Technical Specifications and Global Availability
The video verification feature is designed with practical limitations in mind. Gemini can process video files up to 100 megabytes in size and with a maximum duration of 90 seconds. This scope covers a wide range of short-form content commonly shared on social media platforms. Google has made the feature available globally, meaning it is accessible in every language and country where the Gemini app itself is supported. The rollout was announced in conjunction with other AI model updates, and its widespread availability underscores Google's commitment to deploying these transparency tools at scale, even as the underlying technology continues to evolve.
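For anyone preparing clips in bulk, both published limits can be pre-checked locally before upload. A minimal sketch, assuming ffprobe (bundled with FFmpeg) is on the PATH; the function name is our own:

```python
import json
import os
import subprocess

MAX_BYTES = 100 * 1024 * 1024  # 100 MB cap (assuming binary MB; Google does not specify)
MAX_SECONDS = 90               # 90-second duration cap

def within_gemini_limits(path: str) -> bool:
    """Return True if a local video fits Gemini's published upload limits."""
    if os.path.getsize(path) > MAX_BYTES:
        return False
    # ffprobe prints container metadata as JSON; format.duration is in seconds.
    probe = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", path],
        capture_output=True, text=True, check=True,
    )
    duration = float(json.loads(probe.stdout)["format"]["duration"])
    return duration <= MAX_SECONDS
```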
The Central Challenge: A Fragmented Watermarking Ecosystem
While the new feature is a technological advancement, its major limitation is one of ecosystem fragmentation. Gemini's detector is exclusively tuned to find Google's own SynthID watermark. It cannot identify content generated by other popular AI tools from companies like OpenAI, Midjourney, or Stability AI, which may use different watermarking methods or none at all. This creates a significant blind spot. As noted in recent reports, the general lack of a coordinated, industry-standard tagging system across social media platforms allows AI-generated deepfakes and misinformation to spread undetected. Google, along with partners like NVIDIA and Hugging Face, is advocating for SynthID, but widespread adoption is not guaranteed, as different companies may have competing incentives or technical approaches.
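That blind spot creates an asymmetry worth spelling out: a SynthID match is informative, but its absence proves nothing, because non-Google generators simply never carry the watermark. A hypothetical sketch of the only decision logic such a single-vendor detector can support:

```python
from enum import Enum

class Verdict(Enum):
    GOOGLE_AI = "SynthID found: made or edited with Google AI"
    UNKNOWN = "No SynthID: could be human-made OR from a non-Google AI tool"

def interpret(synthid_found: bool) -> Verdict:
    # The check is one-sided: a positive result confirms Google provenance,
    # but a negative result says nothing about generators that never embed
    # SynthID in the first place.
    return Verdict.GOOGLE_AI if synthid_found else Verdict.UNKNOWN
```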
Context on SynthID & Ecosystem:
- SynthID is Google's proprietary, "imperceptible" watermark for AI-generated content.
- The feature was first launched for images in November 2025.
- Known partners using or supporting SynthID include NVIDIA and Hugging Face.
- A key limitation is that the tool cannot detect AI media generated by tools from other companies (e.g., OpenAI's Sora), which may not use SynthID.
The Imperceptible Watermark and Its Uncertain Future
Google describes its SynthID watermark as "imperceptible" to human senses and designed to resist attempts to remove it. This robustness matters: visible watermarks used by other services have proven easy to crop or edit out. However, the long-term resilience of SynthID against sophisticated removal techniques remains untested in the wild. Furthermore, it is unclear how platforms like YouTube, Instagram, or TikTok will make use of SynthID detection. For the tool to have maximum impact, these platforms would need to automatically detect the watermark and label content accordingly, a level of integration that has not yet been announced. The success of this initiative therefore depends not only on the strength of the watermark but also on its adoption across the entire digital content lifecycle.
A Step Forward in an Ongoing Battle Against AI Misinformation
The expansion of Gemini's detection capabilities from images to videos marks a necessary and logical step in the fight against AI-generated misinformation. By providing users with a direct tool for verification, Google is empowering individuals to question the media they encounter online. Yet, the launch also starkly illustrates the current state of play: solutions are emerging, but they are siloed and incomplete. The ultimate goal—a universal, reliable method for identifying AI-generated content—requires unprecedented cooperation between competitors. Until that happens, tools like Gemini's video checker will serve as valuable but partial answers to a problem that is growing faster than the solutions designed to contain it.
