As AI-generated media becomes increasingly sophisticated and difficult to distinguish from reality, the race for detection tools is heating up. Google is making its latest move in this arena, expanding the capabilities of its Gemini AI to analyze videos. This new feature, however, comes with a significant and intentional limitation that highlights the fragmented state of AI transparency.
Gemini Expands Its Verification Toolkit to Video
Following the rollout of image verification in November 2025, Google has now extended its content transparency tools to include video analysis. The process is straightforward: users can upload a video file directly into the Gemini app or web interface and ask questions like "Was this created by Google AI?" or "Is this AI-generated?" Gemini then scans the uploaded content, searching for specific digital fingerprints. This expansion represents a logical next step for Google as it attempts to build a suite of tools to help users navigate an increasingly synthetic media landscape. The feature is now available globally in all languages and regions where the Gemini app is supported.
Feature Availability & Specifications
- Launch Date: Rolled out globally on December 18-19, 2025.
- File Limits: Maximum file size of 100 MB; maximum duration of 90 seconds.
- Access: Available in the Gemini app (web and mobile) in all supported languages and countries.
- Detection Method: Scans for the imperceptible SynthID watermark in audio and visual tracks.
- Primary Limitation: Can only detect content generated by Google AI tools.
The Core Technology: Scanning for Invisible Watermarks
The detection capability hinges on SynthID, Google's proprietary watermarking technology. When Google's AI models, such as those powering video generation in Gemini, create or significantly edit content, they embed an imperceptible SynthID watermark into both the visual and audio tracks. This watermark is designed to be robust, surviving common edits like cropping, filtering, or compression. When a user submits a video for analysis, Gemini scans the file for this specific watermark. The AI then provides a contextual response, specifying if and where the watermark was detected. For instance, it might report, "SynthID detected in the visuals between 5-10 seconds. No SynthID detected within the audio," offering a granular look at which parts of the media originated from Google's AI.
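Google has not published a public API for SynthID video scanning, so the sketch below is purely illustrative: the function names, the fixed five-second scan window, and the stubbed detector are assumptions used only to make the per-track, time-ranged reporting described above concrete.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    track: str       # "visuals" or "audio"
    start_s: float   # segment start, in seconds
    end_s: float     # segment end, in seconds

def segment_has_synthid(track: str, start_s: float, end_s: float) -> bool:
    """Hypothetical stand-in for the real watermark decoder."""
    # A production detector would decode the imperceptible signal embedded in
    # the pixels or audio samples; here one hit is hard-coded for illustration.
    return track == "visuals" and start_s >= 5.0 and end_s <= 10.0

def scan_video(duration_s: float, step_s: float = 5.0) -> list[Detection]:
    """Scan the visual and audio tracks in fixed windows and record hits."""
    hits: list[Detection] = []
    for track in ("visuals", "audio"):
        t = 0.0
        while t < duration_s:
            end = min(t + step_s, duration_s)
            if segment_has_synthid(track, t, end):
                hits.append(Detection(track, t, end))
            t = end
    return hits

if __name__ == "__main__":
    for hit in scan_video(duration_s=30.0):
        print(f"SynthID detected in the {hit.track} "
              f"between {hit.start_s:.0f}-{hit.end_s:.0f} seconds")
```

Running the sketch prints "SynthID detected in the visuals between 5-10 seconds," mirroring the kind of granular, time-ranged report the article describes.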
The Major Limitation: A Walled Garden of Detection
The most critical caveat of this new tool is its narrow scope. Gemini can only definitively identify videos generated or edited by Google's own AI tools. If a video is created using AI models from other companies like OpenAI, Midjourney, or Stability AI, Gemini will not find a SynthID watermark. In such cases, the response will state, "The video was not made with Google AI. However, the tool was unable to determine if it was generated with other AI tools." This limitation is not a bug but a fundamental constraint of the underlying technology, which relies on a watermark only Google controls. It underscores a broader industry challenge: without a universal, cross-platform standard for watermarking or labeling AI content, detection tools remain siloed and incomplete.
How the Detection Works
A user uploads a video to Gemini and asks "Is this AI-generated?" Gemini scans the file and provides one of three core responses (a brief illustrative sketch of this decision logic follows the list):
- Positive Detection: "SynthID detected in the visuals between [X-Y] seconds and audio between [A-B] seconds."
- No Google Watermark: "The video was not made with Google AI. However, the tool was unable to determine if it was generated with other AI tools."
- Analytical Guess (upon user prompting): Gemini can list common AI video artifacts (e.g., unnatural motion, texture flaws) to suggest if content is likely synthetic, even without a detectable watermark.
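To make the three outcomes concrete, here is a minimal sketch of the decision logic, assuming the scan results from the earlier example. The function name, argument shapes, and any wording outside the quoted response phrases are assumptions, not Google's implementation.

```python
def summarize(detections, user_asked_for_artifact_analysis: bool = False) -> str:
    """Map hypothetical scan results to the three response patterns above.

    `detections` is a list of (track, start_s, end_s) tuples, e.g.
    [("visuals", 5, 10)]. The wording mirrors the article's examples; the
    logic itself is an illustrative assumption.
    """
    if detections:
        ranges = " and ".join(
            f"{track} between {start:.0f}-{end:.0f} seconds"
            for track, start, end in detections
        )
        return f"SynthID detected in the {ranges}."
    if user_asked_for_artifact_analysis:
        return ("No SynthID watermark found, but artifacts such as unnatural "
                "motion or texture flaws suggest the clip may be synthetic.")
    return ("The video was not made with Google AI. However, the tool was "
            "unable to determine if it was generated with other AI tools.")

# Example: one visual hit and one audio hit.
print(summarize([("visuals", 5, 10), ("audio", 0, 5)]))
```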
Practical Constraints and Workarounds
Beyond its Google-only scope, the feature has practical usage limits. Uploaded video files must be under 100 MB in size and no longer than 90 seconds in duration, which restricts analysis to short clips. Despite its inability to detect non-Google AI, Gemini can still be a useful investigative partner. When presented with a video of unknown origin, users can prompt it to analyze the content based on known visual hallmarks of AI generation. Gemini can reason about inconsistencies in physics, unnatural textures, odd lighting, or bizarre anatomical details, all common flaws in AI-generated videos, and offer a reasoned assessment of whether a clip is likely synthetic, even without a definitive watermark.
The Bigger Picture in the Fight Against Misinformation
Google's rollout of video verification is a step toward greater digital media literacy, but it is a partial solution. It effectively creates a "trusted source" verification for content originating from its own ecosystem, which is valuable for users interacting with Google's AI products. However, for the vast majority of AI-generated content online, the tool offers no definitive answer. This development highlights the urgent need for industry-wide collaboration on standards for watermarking and content provenance. Until such standards are adopted, users will need to rely on a combination of specialized tools like Gemini's detector and critical thinking to assess the media they encounter online. The battle to discern real from synthetic is just beginning, and the tools are still catching up.
