Google makes an interesting choice with its new agent-building tool for enterprises

At a glance:

  • Google unveiled Gemini Enterprise Agent Platform at Google Cloud Next, aimed at IT and technical teams.
  • The platform works alongside the Gemini Enterprise app for business users to create scheduling, automation and file‑editing agents.
  • It taps the Gemini LLM, the Nano Banana 2 image generator and Anthropic’s Claude models, including the newly released Claude Opus 4.7.

Google announces Gemini Enterprise Agent Platform

On April 22, 2026, Google CEO Sundar Pichai opened the Google Cloud Next conference with a video that introduced the Gemini Enterprise Agent Platform. The service is billed as a turnkey solution for building, deploying and managing AI agents at scale inside large organisations. Google positions the platform as a direct competitor to Amazon’s Bedrock AgentCore and Microsoft’s Foundry, both of which target similar enterprise‑automation use cases.

The announcement highlighted two distinct product tracks. The Agent Platform is tailored for IT and technical teams that need to create agents capable of handling complex, code‑centric tasks such as automated deployments, monitoring, and troubleshooting. In parallel, Google re‑emphasised the Gemini Enterprise app, launched in the fall, which lets business users build or consume agents for everyday workflows like meeting scheduling, trigger‑based processes, shortcut creation and cross‑app file editing.

How the platform fits into Google’s AI portfolio

Google stressed that the new platform is built on its own Gemini large language model (LLM) and the Nano Banana 2 image generation engine, both of which have been in private beta for several months. By bundling these proprietary models with Anthropic’s Claude family, Google offers a tiered stack that spans high‑performance reasoning (Claude Opus), balanced capability (Claude Sonnet) and low‑cost inference (Claude Haiku). Notably, the latest Claude Opus 4.7, released just a week before the conference, is included out of the box.

This multi‑model approach gives enterprises the flexibility to match cost and latency requirements to specific agent workloads. For example, a security‑focused agent that parses logs might run on Gemini for its deep contextual understanding, while a design‑oriented agent that creates mock‑ups could call Nano Banana 2 for rapid image synthesis.
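Google has not published an SDK for this routing logic, but the workload-to-model matching described above can be sketched in a few lines. Everything below is hypothetical: the tier names, the `AgentWorkload` fields and the `pick_model` helper are illustrative assumptions, not part of any announced API.

```python
from dataclasses import dataclass

# Hypothetical model tiers mirroring the stack described in the article.
# None of these identifiers come from a published Google SDK.
MODEL_TIERS = {
    "deep-reasoning": "gemini",          # contextual tasks such as log analysis
    "image-synthesis": "nano-banana-2",  # mock-up and image generation
    "high-performance": "claude-opus",
    "balanced": "claude-sonnet",
    "low-cost": "claude-haiku",
}

@dataclass
class AgentWorkload:
    """Illustrative description of an agent workload's requirements."""
    name: str
    needs_images: bool = False
    latency_sensitive: bool = False
    complex_reasoning: bool = False

def pick_model(w: AgentWorkload) -> str:
    """Route a workload to a model tier by its cost/latency/capability needs."""
    if w.needs_images:
        return MODEL_TIERS["image-synthesis"]
    if w.complex_reasoning:
        return MODEL_TIERS["deep-reasoning"]
    if w.latency_sensitive:
        return MODEL_TIERS["low-cost"]
    return MODEL_TIERS["balanced"]

# The two examples from the article: a log-parsing security agent
# and a design agent that generates mock-ups.
print(pick_model(AgentWorkload("log-parser", complex_reasoning=True)))  # gemini
print(pick_model(AgentWorkload("mockup-bot", needs_images=True)))       # nano-banana-2
```

The point of the sketch is the design choice itself: routing happens per workload, so one deployment can mix expensive reasoning models with cheap, low-latency ones instead of standardising on a single model.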

Competitive landscape and differentiation

Amazon’s Bedrock AgentCore and Microsoft’s Foundry both provide cloud‑native agent orchestration, but Google’s differentiator lies in its tighter integration with the broader Google Workspace ecosystem. Agents built on the Gemini platform can invoke native Google Calendar, Docs, Drive and Gmail APIs without additional connectors, reducing integration friction for IT teams.

Furthermore, Google’s emphasis on security—citing “real‑world enterprise concerns”—suggests that the platform includes built‑in policy controls, data residency options, and audit logging that are often add‑ons in rival services. While Amazon and Microsoft have announced similar compliance features, Google’s early focus on IT‑centric use cases may attract organisations that already rely heavily on Google Cloud and Workspace.

Potential impact on enterprise workflows

If adoption mirrors early interest in Google Cloud’s AI offerings, the Gemini Enterprise Agent Platform could accelerate the shift from point‑solution bots to centrally managed, reusable agents. Technical teams would be able to publish a library of vetted agents that business units can consume via the Gemini Enterprise app, fostering a “self‑service AI” culture.
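A "publish vetted agents, consume via the app" model amounts to a central registry with a gate at publication time. The minimal sketch below is an assumption about how such a library might be structured; `Agent`, `AgentRegistry` and the vetting flag are invented for illustration and do not reflect any announced Google interface.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Agent:
    """Illustrative record for a published agent (names are hypothetical)."""
    name: str
    version: str
    owner: str           # publishing technical team
    vetted: bool = False

class AgentRegistry:
    """Central library: IT publishes vetted agents, business units consume them."""

    def __init__(self) -> None:
        self._agents: dict[str, Agent] = {}

    def publish(self, agent: Agent) -> None:
        # The governance gate: only vetted agents enter the shared library.
        if not agent.vetted:
            raise ValueError(f"{agent.name} must pass vetting before publication")
        self._agents[agent.name] = agent

    def consume(self, name: str) -> Agent:
        return self._agents[name]

registry = AgentRegistry()
registry.publish(Agent("meeting-scheduler", "1.0.0", "it-platform", vetted=True))
print(registry.consume("meeting-scheduler").version)  # 1.0.0
```

Versioning on the record is deliberate: the monitoring, versioning and rollback features discussed below all presuppose that consumers reference a specific published version rather than a mutable agent definition.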

However, the platform’s success will hinge on how quickly Google can deliver robust monitoring, versioning and rollback capabilities—features that enterprises consider mandatory for production‑grade automation. The inclusion of Anthropic’s Claude models also raises questions about data sharing between Google and third‑party providers, a topic that compliance officers will scrutinise.

What’s next for Google and its rivals

Google has not disclosed pricing, but industry analysts expect a usage‑based model similar to Bedrock and Foundry, with discounts for committed spend. The company hinted at a broader rollout beyond the initial beta, targeting additional regions later in 2026. Competitors are likely to respond with tighter Workspace integrations or new pricing tiers to protect their market share.

Stakeholders should watch for updates on model latency, especially for the newly released Claude Opus 4.7, and for any announced partnerships that could extend the platform’s reach into sectors such as finance, healthcare and manufacturing. As AI agents become more embedded in day‑to‑day operations, the balance between agility and governance will define the next wave of enterprise AI adoption.

Editorial note: SiliconFeed is an automated feed. Facts are checked against sources; copy is normalized and lightly edited for readers.

FAQ

What are the two main products Google introduced for enterprise AI agents?
Google launched the Gemini Enterprise Agent Platform for IT and technical teams to build and manage agents at scale, and the Gemini Enterprise app for business users to create or use agents for tasks like scheduling, automation and cross‑app file editing.
Which language models does the Gemini platform support?
The platform integrates Google’s own Gemini large language model, the Nano Banana 2 image generator, and Anthropic’s Claude family—including Claude Opus, Claude Sonnet, Claude Haiku and the newly released Claude Opus 4.7.
How does Google’s offering compare to Amazon Bedrock AgentCore and Microsoft Foundry?
Google differentiates itself with tighter native integration to Google Workspace services, built‑in security controls, and a multi‑model stack that lets enterprises choose between high‑performance and low‑cost models, whereas Amazon and Microsoft focus more on generic cloud‑agnostic agent orchestration.
