Google LLC has announced the official launch of its Gemini AI model on Google Distributed Cloud, extending its most advanced AI capabilities to enterprise and government data centers.
The release, currently in preview, makes Gemini available in an isolated configuration on GDC. For organizations with strict data residency and compliance requirements, this means generative AI can be deployed without giving up control over sensitive information.
By bringing the model on-premises, Google addresses a long-standing challenge for regulated industries: having to choose between adopting modern AI tools and maintaining full data sovereignty.
Early adopters include Singapore's Strategic Centre for Communications and Information Technology, the Government Technology Agency of Singapore, the Home Team Science and Technology Agency of Singapore, Japanese telecom provider KDDI Corporation, and Liquid C2 LLC. According to Google, these organizations value the ability to innovate with AI securely while meeting local compliance standards.
The integration provides access to Gemini's multimodal capabilities, covering text, images, audio, and video. Google states that this opens up a range of use cases, including multilingual collaboration, automated document summarization, intelligent chatbots, and AI-assisted code generation.
The release also includes built-in security tools that help enterprises enhance compliance, detect harmful content, and enforce policy adherence. Google emphasizes that securely delivering these capabilities requires more than just the model itself, positioning GDC as a complete AI platform that combines infrastructure, model libraries, and prebuilt agents, such as the preview of Agentspace search.
At the infrastructure level, GDC runs on NVIDIA’s Hopper and Blackwell GPUs, paired with automated load balancing and zero-touch updates for high availability. Confidential computing is supported on both CPUs and GPUs, keeping sensitive data encrypted even during processing. Customers also gain audit logs and fine-grained access controls for end-to-end visibility into their AI workloads.
In addition to Gemini 2.5 Flash and Pro, the platform supports Vertex AI’s task-specific models and Google’s open-source Gemma family. Enterprises can also deploy their own open-source or proprietary models on managed virtual machines and Kubernetes clusters as part of a unified environment.
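For teams bringing their own models, deployment on GDC's managed Kubernetes clusters would follow standard Kubernetes patterns. The manifest below is an illustrative sketch only, not a GDC-specific API: the container image, model path, and volume claim name are hypothetical placeholders, and the GPU resource key is the standard NVIDIA device-plugin convention.

```yaml
# Illustrative sketch: image, model path, and claim name are hypothetical
# placeholders, not documented GDC values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-model-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: custom-model-server
  template:
    metadata:
      labels:
        app: custom-model-server
    spec:
      containers:
        - name: server
          image: registry.example.com/llm-server:latest  # hypothetical image
          args: ["--model-path", "/models/my-model"]     # hypothetical flag
          resources:
            limits:
              nvidia.com/gpu: 1  # standard NVIDIA device-plugin resource key
          volumeMounts:
            - name: model-weights
              mountPath: /models
      volumes:
        - name: model-weights
          persistentVolumeClaim:
            claimName: model-weights-pvc  # hypothetical claim
```

The point of the sketch is that a proprietary model on GDC is, operationally, an ordinary Kubernetes workload: replicas, GPU limits, and storage are declared the same way they would be on any conformant cluster.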
According to Google, early customer feedback has been positive. Chee Wee Ang, Chief AI Officer at the Home Team Science and Technology Agency of Singapore, said, “The ability to deploy Gemini on Google Distributed Cloud will allow us to bridge the gap between on-premises data and the latest advancements in AI. Google Distributed Cloud provides us with a secure and manageable platform to innovate with AI without compromising our strict data residency and compliance requirements.”