Microsoft tests OpenClaw‑style AI bots for Copilot
At a glance:
- Microsoft is experimenting with OpenClaw‑style autonomous agents inside its 365 Copilot suite.
- The always‑on feature could monitor Outlook inboxes and calendars, then suggest daily tasks.
- Microsoft plans to demo the capability at its Build conference starting June 2.
What Microsoft is testing
Microsoft’s corporate vice‑president Omar Shahine confirmed to The Information that the company is “exploring the potential of technologies like OpenClaw in an enterprise context.” The test focuses on embedding OpenClaw‑inspired agents into Microsoft 365 Copilot so the assistant can run continuously, handling routine work without user prompting. According to the report, the goal is to create a “safer” version of the open‑source platform that can operate locally on a user’s device while still being managed centrally for enterprise compliance.
The proposed always‑on Copilot would watch a user’s Outlook inbox and calendar, then surface a curated list of suggested tasks each day. By keeping the agent’s permissions limited to specific functions—such as marketing, sales, or accounting—the system aims to silo access and reduce the attack surface that earlier OpenClaw deployments exposed.
How OpenClaw works and the security debate
OpenClaw is an open‑source framework that lets developers build AI‑powered agents that run locally rather than in the cloud. Its popularity surged earlier this year because it promised privacy‑first, on‑device processing. However, security researchers quickly flagged risks: agents could be granted broad system permissions, potentially exfiltrating data or executing unwanted actions.
Microsoft’s approach is to strip down the platform, removing unnecessary privileges and adding enterprise‑grade controls. Sources close to the project say the company believes it can deliver a “safer” incarnation that satisfies corporate IT policies while retaining the convenience of autonomous assistance.
Role‑specific agents and practical use cases
The Information notes that Microsoft is prototyping role‑focused agents. A marketing‑oriented bot might draft campaign copy, pull performance metrics, and schedule posts, while a sales agent could update CRM entries and generate follow‑up reminders. An accounting‑specific assistant would reconcile invoices and flag anomalies, all without needing access to unrelated files or services.
By compartmentalising each agent, Microsoft hopes to limit the permissions each bot requires, thereby reducing the risk of cross‑department data leakage. This granular model also aligns with compliance frameworks that demand strict data segregation across business units.
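The compartmentalised permission model described above can be pictured as a deny-by-default capability allowlist per role. The sketch below is purely illustrative: the class, role names, and action names are hypothetical and do not reflect any actual Microsoft API.

```python
# Illustrative sketch of per-role capability scoping. All names here are
# hypothetical; nothing below reflects a real Copilot or OpenClaw API.

ROLE_PERMISSIONS = {
    "marketing": {"draft_copy", "read_metrics", "schedule_posts"},
    "sales": {"update_crm", "create_reminders"},
    "accounting": {"reconcile_invoices", "flag_anomalies"},
}

class RoleScopedAgent:
    """An agent that can only invoke actions allowed for its role."""

    def __init__(self, role: str):
        self.role = role
        self.allowed = ROLE_PERMISSIONS.get(role, set())

    def perform(self, action: str) -> str:
        # Deny by default: anything outside the role's allowlist is
        # refused, so a marketing bot can never touch CRM or invoice data.
        if action not in self.allowed:
            raise PermissionError(
                f"{self.role} agent may not perform {action!r}"
            )
        return f"{self.role} agent performed {action}"

agent = RoleScopedAgent("marketing")
print(agent.perform("draft_copy"))  # allowed for this role
# agent.perform("update_crm")       # would raise PermissionError
```

Under this kind of model, each bot holds only the capabilities its role requires, which is the data-segregation property compliance frameworks typically demand.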
Implications for Copilot and the broader market
If the demo at Build (kicking off on June 2) showcases a polished, always‑on Copilot, Microsoft could regain momentum lost to rivals that already offer AI‑driven workflow helpers. Last year, Anthropic integrated its Claude chatbot and Claude Cowork tools into Microsoft 365, enabling multi‑step task completion. OpenClaw‑style capabilities would add a locally run, continuously active layer, differentiating Copilot from cloud‑only competitors.
Analysts see this move as part of Microsoft’s broader strategy to embed AI deeper into the productivity stack, turning everyday apps into proactive assistants. Success could spur other enterprise software vendors to adopt similar on‑device agent architectures, reshaping how businesses think about AI‑augmented work.
FAQ
What kind of tasks will the always‑on Copilot be able to perform?
How does Microsoft plan to address the security concerns raised about OpenClaw?
When and where will Microsoft showcase these new Copilot capabilities?
Prepared by the editorial stack from public data and external sources.