As deepfakes, misinformation, and AI-assisted cheating continue to spread online and in classrooms, Google DeepMind introduced the SynthID detector on Tuesday. This new tool scans images, audio, video, and text to identify invisible watermarks embedded by Google's growing range of AI models.
Designed to work across multiple formats in one place, the SynthID detector aims to enhance transparency by identifying AI-generated content created with Google's AI tools, including NotebookLM, Lyria, and the Imagen image generator. It also highlights the sections of a file most likely to contain a watermark.
"For text, SynthID examines which words are generated and adjusts the probability of word selection without affecting the overall quality or utility of the text," Google explained during a demonstration.
"If a piece of text contains more instances of preferred word choices, SynthID will detect that it has been watermarked," the company added.
SynthID subtly alters word choice probabilities during text generation, embedding an invisible watermark that does not affect the meaning or readability of the output. This watermark can later be used to identify content generated by Google’s Gemini app or web-based tools.
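Google does not detail the exact algorithm here, but the "preferred word choices" idea resembles a keyed green-list scheme from the watermarking research literature: at each generation step, a secret key plus the preceding context selects a subset of the vocabulary, the sampler slightly boosts that subset's probability, and a detector later counts how often the text landed in it. The Python sketch below illustrates that general principle only; the key, function names, and bias values are hypothetical, and Google's production method is more sophisticated.

```python
import hashlib
import random

SECRET_KEY = "demo-key"  # stand-in for a private watermarking key (hypothetical)

def preferred_words(prev_word: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Derive a keyed 'preferred' subset of the vocabulary from the previous word."""
    seed = int(hashlib.sha256(f"{SECRET_KEY}|{prev_word}".encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def watermarked_sample(prev_word: str, vocab: list[str],
                       probs: list[float], bias: float = 2.0) -> str:
    """Sample the next word after nudging probability mass toward the preferred subset."""
    green = preferred_words(prev_word, vocab)
    weights = [p * (bias if w in green else 1.0) for w, p in zip(vocab, probs)]
    return random.choices(vocab, weights=weights)[0]

def watermark_score(words: list[str], vocab: list[str]) -> float:
    """Fraction of words that fall in the preferred subset for their context.
    Unwatermarked text hovers near 0.5; watermarked text scores noticeably higher."""
    hits = sum(1 for prev, w in zip(words, words[1:])
               if w in preferred_words(prev, vocab))
    return hits / max(len(words) - 1, 1)
```

Because the boost is small and spread across many words, any single sentence reveals little; statistical detectors of this kind generally need longer passages before they can flag a watermark with confidence.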
Google first launched SynthID in August 2023 as a tool for watermarking and identifying AI-generated images. With the release of the SynthID detector, that functionality now extends to audio, video, and text.
Currently, the SynthID detector is available in limited release, with a waitlist open to journalists, educators, designers, and researchers who want to try the program.
As generative AI tools become more widespread, educators are finding it increasingly difficult to determine whether students' work is original, even in assignments meant to reflect personal experiences.
AI-Assisted Cheating
A recent report by New York Magazine highlighted this growing issue.
A technology ethics professor at Santa Clara University assigned a personal reflection essay, only to discover that one student had used ChatGPT to complete the task.
At the University of Arkansas at Little Rock, another professor found students relying on AI to write course introduction essays and class objectives.
Despite the rise in students using AI models to cheat in classrooms, OpenAI discontinued its AI text classifier in 2023, citing its low rate of accuracy.
"We recognize that identifying AI-written text is an important discussion point among educators, but it is equally critical to acknowledge the limitations and impacts of AI text classifiers in classroom settings," OpenAI stated at the time.
Compounding the issue of AI-assisted cheating are new tools like Cluely, an application developed by former Columbia University student Roy Lee that circumvents AI detection at the desktop level.
Lee raised $5.3 million to develop the app, which is marketed as a way to cheat during exams and interviews.
"After I posted a video of myself using it during an Amazon interview, it quickly went viral," Lee previously told Decrypt. "While using it, I realized how engaging the user experience was—no one had explored this seemingly transparent screen overlay that could see your screen, hear your audio, and act like a second player on your computer."
Despite promising tools like SynthID, many current AI detection methods remain unreliable.