
Google’s AI detection tool is now available for anyone to try

Google announced via a post on X (formerly Twitter) on Wednesday that SynthID is now available to anybody who wants to try it. The authentication system for AI-generated content embeds imperceptible watermarks into generated images, video, and text, enabling users to verify whether a piece of content was made by humans or machines.

“We’re open-sourcing our SynthID Text watermarking tool,” the company wrote. “Available freely to developers and businesses, it will help them identify their AI-generated content.”


SynthID debuted in 2023 as a means to watermark AI-generated images, audio, and video. It was initially integrated into Imagen, and the company subsequently announced its incorporation into the Gemini chatbot this past May at I/O 2024.

The system embeds imperceptible watermarks during text generation by working at the level of tokens, the foundational chunks of data (a single character, a word, or part of a phrase) that a generative AI uses to understand a prompt and predict the next word in its reply. According to a May DeepMind blog post, it does so by “introducing additional information in the token distribution at the point of generation by modulating the likelihood of tokens being generated.”

By comparing a passage’s word choices and their “adjusted probability scores” against the patterns expected for watermarked and unwatermarked text, SynthID can detect whether an AI wrote it.
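The general idea described above (nudge certain tokens to be more likely at generation time, then test for that statistical bias at detection time) can be illustrated with a toy sketch. To be clear, this is not SynthID’s actual algorithm; the vocabulary, keyed “green set” scheme, and bias strength below are all invented for illustration:

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(50)]

def green_set(context: str, key: str, fraction: float = 0.5) -> set:
    # Seed a PRNG from a keyed hash of the recent context, then mark a
    # pseudorandom subset of the vocabulary as "green" (to be boosted).
    digest = hashlib.sha256((key + context).encode()).digest()
    rng = random.Random(int.from_bytes(digest[:8], "big"))
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate(n_tokens: int, key: str, bias: float = 4.0, seed: int = 0) -> list:
    # Toy "language model": uniform preferences over the vocabulary, with
    # green-token likelihoods boosted before sampling. This is the
    # "modulating the likelihood of tokens" step.
    rng = random.Random(seed)
    out = []
    for _ in range(n_tokens):
        context = " ".join(out[-4:])
        greens = green_set(context, key)
        weights = [bias if tok in greens else 1.0 for tok in VOCAB]
        out.append(rng.choices(VOCAB, weights=weights)[0])
    return out

def green_fraction(tokens: list, key: str) -> float:
    # Detection: recompute the keyed green set at each position and count
    # how often the chosen token falls inside it. Watermarked text scores
    # well above the ~50% expected of unwatermarked text.
    hits = 0
    for i, tok in enumerate(tokens):
        context = " ".join(tokens[max(0, i - 4) : i])
        if tok in green_set(context, key):
            hits += 1
    return hits / len(tokens)
```

Because the detector only counts how often token choices land in the keyed set, this toy also shows why short passages are harder to judge: with only a handful of tokens, the counts are too noisy to distinguish watermarked from unwatermarked text.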

This process does not affect the response’s accuracy, quality, or speed, according to a study published in Nature on Wednesday, nor can it be easily bypassed. Unlike standard metadata, which can simply be stripped out, SynthID’s watermark reportedly survives even if the content is cropped, edited, or otherwise modified.

“Achieving reliable and imperceptible watermarking of AI-generated text is fundamentally challenging, especially in scenarios where [large language model] outputs are near deterministic, such as factual questions or code generation tasks,” Soheil Feizi, an associate professor at the University of Maryland, told MIT Technology Review, noting that its open-source nature “allows the community to test these detectors and evaluate their robustness in different settings, helping to better understand the limitations of these techniques.”

The system is not foolproof, however. Although it resists tampering, SynthID’s watermarks can be removed if the text is run through a translation app or is heavily rewritten. It is also less effective with short passages and at determining whether a reply to a factual question was AI-generated. For example, there’s only one right answer to the prompt “What is the capital of France?” and both humans and AI will tell you that it’s Paris.

If you’d like to try SynthID yourself, it can be downloaded from Hugging Face as part of Google’s updated Responsible GenAI Toolkit.

Andrew Tarantola
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…
Google expands its AI search function, incorporates ads into Overviews on mobile

Google announced on Thursday that it is "taking another big leap forward" with an expansive round of AI-empowered updates for Google Search and AI Overview.
Earlier this year, Google incorporated generative AI into its existing Lens app, which lets users identify objects in a photograph and search the web for more information about them; instead of a list of potentially relevant websites, the app now returns an AI Overview based on what it sees. At its I/O conference in May, Google promised to extend that capability to video clips.
With Thursday's update, "you can use Lens to search by taking a video, and asking questions about the moving objects that you see," Google's announcement reads. The company suggests that the app could be used to, for example, provide personalized information about specific fish at an aquarium simply by taking a video and asking your question.
Whether this works on more complex subjects, like analyzing your favorite NFL team’s previous play, or on fast-moving objects, like identifying the makes and models of cars in traffic, remains to be seen. If you want to try the feature yourself, it’s available globally (though only in English) through the Google app on iOS and Android. Navigate to Search Labs and enroll in the “AI Overviews and more” experiment to get access.

You won't necessarily have to type out your question either. Lens now supports voice questions, which allows you to simply speak your query as you take a picture (or capture a video clip) rather than fumbling across your touchscreen in a dimly lit room. 
Your Lens-based shopping experience is also being updated. In addition to the links to visually similar products from retailers that Lens already provides, it will begin displaying "dramatically more helpful results," per the announcement. Those include reviews of the specific product you're looking at, price comparisons from across the web, and information on where to buy the item. 

Google’s Gemini Live now speaks nearly four dozen languages

Google announced Thursday that it is making Gemini Live available in more than 40 languages, allowing global users (no longer just English speakers) to access the conversational AI feature, as well as enabling the full Gemini AI to connect with additional Google apps in more languages.

Gemini Live is Google’s answer to OpenAI’s Advanced Voice Mode and Meta’s Voice Interactions. The feature lets users converse with the AI as if it were another person, eliminating the need for text-based prompts. Gemini Live debuted in May at the company’s I/O 2024 event and was initially released to Gemini Advanced subscribers in August before being made available to all users (on Android, at least) in September.

Microsoft Copilot now has a voice and can ‘see what you see’ on the internet

You might want to start treating your web browser like you're always at work, at least if you want to use Microsoft's new Copilot Vision feature. The feature, which is natively built into Microsoft Edge, is able to "see what you see, and hear what you hear" as you navigate your browser, according to Microsoft's Executive Vice President Yusuf Mehdi.

All of this AI snooping isn't for nothing. Copilot Vision looks at what you're doing online to answer questions, provide recommendations, and summarize content. It can work with the new Copilot Voice feature, for example. Microsoft demoed the capabilities on Rotten Tomatoes, showing a user chatting with Copilot while browsing the website and looking for movie recommendations. Ultimately, Copilot settled on an Australian comedy for the Australian speaker, saying it made the choice because, "well, you're Australian." I guess that's taking personal context into account.
