Following the Linux Foundation’s establishment of the Agentic AI Foundation (AAIF) and its designation of the Model Context Protocol (MCP) as a core standard, Google has moved with remarkable speed. The company has announced full support for MCP and introduced fully managed remote MCP servers, enabling developers to offload the burden of maintaining their own connection layers. As a result, AI agents can now operate Google Maps, BigQuery, and even manage cloud infrastructure such as Google Compute Engine and GKE without developers having to maintain bespoke integrations.
Proposed by Anthropic in 2024, MCP is an open protocol that defines a standardized interface for how external tools should communicate with AI models. Because it provides a unified, plug-and-play format—allowing models to invoke APIs and databases as effortlessly as connecting a USB-C device—the industry has taken to calling it “the USB-C of AI.”
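That standardized interface is concrete: MCP messages are JSON-RPC 2.0, and every tool invocation is a `tools/call` request. The sketch below builds a minimal such request; the tool name `get_travel_time` and its arguments are hypothetical, chosen only to illustrate the message shape.

```python
import json

# Minimal MCP tool invocation as a JSON-RPC 2.0 request.
# "get_travel_time" and its arguments are invented for illustration;
# real tool names come from the server's tools/list response.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_travel_time",
        "arguments": {
            "origin": "Taipei Main Station",
            "destination": "Taoyuan Airport",
        },
    },
}

# Serialize to the wire format an MCP client would actually send.
wire = json.dumps(request)
print(wire)
```

Because every tool, regardless of vendor, is invoked through this same envelope, a model that can emit one `tools/call` can drive Maps, BigQuery, or any other MCP server without bespoke glue code, which is exactly the USB-C analogy.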
Google noted that developers previously relied on community-maintained MCP servers for Google services, often hosting and maintaining them locally—a process that increased integration costs and made reliability difficult to guarantee. With Google’s fully managed offering, developers simply direct their AI agents to Google’s MCP endpoints to interact seamlessly with Google Cloud services.
The first wave of supported services covers data querying, geospatial intelligence, and infrastructure operations:
- Google Maps (Grounding Lite): Supplies authoritative information such as location data, weather forecasts, and travel times—helping AI agents answer travel-planning and local-recommendation queries while dramatically reducing hallucinations.
- BigQuery: Grants AI agents native schema awareness and query execution capabilities. Crucially, data never needs to be injected into a model’s context window; analysis occurs in place, preserving data governance and security.
- Google Compute Engine (GCE): Empowers AI to autonomously manage infrastructure—from initial provisioning to Day-2 operations—dynamically adjusting resources based on workload demand.
- Google Kubernetes Engine (GKE): Provides a structured interface that allows AI to interact directly with Kubernetes APIs. Developers no longer need their models to interpret complex CLI output; agents can diagnose container issues, resolve faults, and optimize costs within a consistent operational environment.
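On the agent side, "directing an agent to Google's MCP endpoints" amounts to mapping each task domain to a hosted server URL. The toy dispatcher below sketches that idea for the four services above; the URLs and category names are placeholders invented for illustration, not Google's published endpoints.

```python
# Toy agent-side dispatcher mapping task domains to managed MCP servers.
# All URLs are hypothetical placeholders, not Google's actual endpoints.
ENDPOINTS = {
    "geospatial": "https://example.googleapis.com/maps/mcp",
    "data": "https://example.googleapis.com/bigquery/mcp",
    "infra": "https://example.googleapis.com/gce/mcp",
    "k8s": "https://example.googleapis.com/gke/mcp",
}

def route(task_domain: str) -> str:
    """Return the MCP endpoint an agent would target for a task domain."""
    try:
        return ENDPOINTS[task_domain]
    except KeyError:
        raise ValueError(f"no managed MCP server for domain: {task_domain!r}")

print(route("data"))
```

The point of the fully managed model is that this table is the entire integration surface: there is no community server to host, patch, or keep online.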
Google offered a concrete application example: using the Agent Development Kit (ADK), a developer could build an agent powered by Gemini 3 Pro. Through MCP, the agent could query sales data from BigQuery to forecast revenue, use Google Maps to analyze nearby commercial activity and delivery routes, and then synthesize these inputs into a retail site-selection recommendation—all autonomously coordinated.
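The coordination pattern in that example can be sketched in a few lines. In the toy version below, the two helper functions stand in for MCP calls to BigQuery and Google Maps; the region names, figures, and scoring logic are all invented purely to show how an agent would combine the two signals into a recommendation.

```python
# Toy sketch of the site-selection flow: two stand-ins for MCP tool calls
# plus a combining step. All names and numbers are invented for illustration.

def query_sales_forecast(region: str) -> float:
    """Stand-in for a BigQuery MCP call forecasting revenue for a region."""
    historical = {"Xinyi": 1.20, "Banqiao": 0.95}  # invented figures ($M)
    return historical.get(region, 0.0) * 1.1  # naive +10% growth assumption

def score_location(region: str) -> float:
    """Stand-in for a Google Maps MCP call scoring commercial activity
    and delivery-route quality on a 0-1 scale."""
    scores = {"Xinyi": 0.9, "Banqiao": 0.7}  # invented desirability scores
    return scores.get(region, 0.0)

def recommend_site(regions: list[str]) -> str:
    """Combine both signals and pick the strongest candidate region."""
    return max(regions, key=lambda r: query_sales_forecast(r) * score_location(r))

best = recommend_site(["Xinyi", "Banqiao"])
print(best)
```

In a real ADK agent the model, not hand-written code, would decide when to invoke each tool and how to weigh the results; the sketch only shows the shape of the multi-tool synthesis.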
On the security front, Google emphasized centralized tool governance via Cloud API Registry and Apigee API Hub, coupled with Cloud IAM for access control. Model Armor provides protection against indirect prompt injection, ensuring enterprise-grade security.
Google added that Cloud Run, Cloud Storage, Cloud SQL, and other core services will be brought under MCP support in the coming months. This move clearly signals Google’s intent to claim a foundational role in the age of agentic AI. By embracing a universal standard, Google not only lowers the barrier for developers building agents on its cloud platform, but also propels AI from mere content generation into the realm of actionable autonomy. When an AI assistant can start and stop VMs or query databases on your behalf, the gravitational pull of Google Cloud becomes all the stronger.