Researcher Jane Manchun Wong has uncovered evidence suggesting that Waymo is testing the integration of Google's Gemini AI chatbot into its autonomous taxis, positioning it as an in-vehicle companion designed to interact with passengers and respond to their inquiries.
In her blog post, Wong stated, “While analyzing the codebase of Waymo’s mobile application, I came across a complete, unreleased system prompt for Gemini integration. Internally referred to as the ‘Waymo Ride Assistant Meta Prompt,’ this document spans over 1,200 lines and serves as a detailed specification outlining how the AI assistant should behave inside Waymo vehicles.”
This feature has not yet appeared in any public release. However, Wong emphasized that the depth and structure of the system prompt indicate the project goes far beyond a basic conversational agent. According to the prompt, the assistant can answer questions, manage select in-cabin functions such as climate control, and even help calm passengers when necessary.
Julia Ilina, a spokesperson for Waymo, told TechCrunch: “While we don’t have details to share at this time, our team continuously explores new features to make riding with Waymo enjoyable, seamless, and helpful. Some of these may eventually be included in our passenger experience; others may not.”
This isn’t the first time Gemini has been linked to Alphabet’s self-driving initiatives. Waymo previously confirmed using Gemini’s “world knowledge” to train its autonomous driving systems, particularly for navigating complex, rare, or high-risk traffic scenarios.
According to Wong, the assistant is instructed to maintain a clear identity: “a friendly and helpful AI companion embedded within Waymo’s driverless vehicles,” with the primary goal of “enhancing the passenger experience by providing useful information and support in a safe, reassuring, and non-intrusive manner.” It is programmed to use plain, accessible language—avoiding technical jargon—and to keep responses concise, ideally between one and three sentences.
The system prompt reveals that once activated via the in-car display, Gemini can choose from a set of pre-approved greetings and personalize them using the passenger’s name. It also has access to contextual data, such as how frequently a rider has used Waymo services.
Current specifications allow Gemini to access and adjust cabin settings including temperature, lighting, and music playback. Notably absent from the list are volume controls, route modifications, seat adjustments, and window operations. Wong observed that if users request functionality outside its control, the AI is scripted to respond with phrases like, “That’s not something I can do right now.”
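Based on Wong's description, the control surface amounts to an allow-list of cabin functions paired with a scripted fallback line. A minimal sketch of that pattern, in Python, might look as follows (every name here is hypothetical and illustrative; only the fallback phrase comes from the reported prompt, and none of this reflects Waymo's actual code):

```python
# Hypothetical illustration of an allow-list control gate with a
# scripted fallback. Function and variable names are invented for
# this sketch and do not come from Waymo's codebase.

ALLOWED_CONTROLS = {"temperature", "lighting", "music_playback"}
FALLBACK_REPLY = "That's not something I can do right now."

def handle_control_request(control: str) -> str:
    """Confirm supported cabin controls; return the scripted fallback otherwise."""
    if control in ALLOWED_CONTROLS:
        return f"Sure, adjusting the {control.replace('_', ' ')} now."
    return FALLBACK_REPLY
```

Under this scheme, a request touching anything outside the allow-list, such as the windows or the route, would simply yield the scripted refusal rather than an attempt at the action.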
Interestingly, the assistant is explicitly directed to distinguish itself from the actual driving system. For instance, when asked questions like “How do you see the road?”, it must avoid claiming ownership of sensor data and instead reply, “The Waymo Driver uses a combination of sensors…” rather than saying “I use…”
The meta prompt includes numerous nuanced directives—such as how to handle questions about competitors like Tesla or the now-defunct Cruise service, and which trigger words would cause the assistant to stop speaking altogether.
The AI is also instructed not to speculate, confirm, deny, or comment on real-time driving maneuvers or specific driving events. For example, if a passenger asks about a viral video showing a Waymo vehicle hitting an object, the assistant is trained to deflect rather than engage directly.
The prompt clearly states: “You are not a spokesperson for the driving system’s performance, and you must not adopt a defensive or apologetic tone.”
The onboard assistant is capable of answering general knowledge queries—ranging from weather forecasts and the height of the Eiffel Tower to local Trader Joe’s closing times or the winner of the most recent World Series. However, it cannot perform real-world actions such as placing food orders, making reservations, or handling emergency situations.