
It’s 10PM. Do You Know Where Your AI Agents Are?

01.05.2026

AI agents run amok. Hints at sparkling new physics. How Adobe is using vibe coding. Why you should feed a cold. All that and more in this week’s edition of The Prototype. To get it in your inbox, sign up here.

That’s how long it took an AI agent to wipe out data vendor PocketOS’s entire company database, and all of its backups, according to its founder Jer Crane. The deletion had cascading effects, Crane wrote: the company provides data services to car rental companies, so customer reservations, signups and other operations were all hit. (The data was eventually restored, Crane says, but not before a serious outage.)

When queried, the agent acknowledged that its actions violated the guardrails it was supposed to be programmed with. Crane also details the other failures that made the incident possible, and his account is worth reading in full. The bottom line, he wrote: “This isn’t a story about one bad agent or one bad API. It’s about an entire industry building AI-agent integrations into production infrastructure faster than it’s building the safety architecture to make those integrations safe.”

This isn’t an isolated incident of a runaway AI agent; a simple Google search turns up plenty of similar anecdotes. And the issues are more systemic than one-off failures: a new report from cybersecurity company Okta highlights multiple security vulnerabilities that arise when AI agents are given access to critical systems. Though the research focused on the popular agent software OpenClaw, its findings point to the danger of giving any agent too much access.

“As an AI agent gains more permissions and context, its capability increases, but so does its potential risk,” the Okta research team wrote. The report found that although sometimes safety guardrails prevailed, in other test scenarios, “agents revealed sensitive data, including secrets found in prompts or configuration files.”

A key way to rein in this behavior, the researchers concluded, is to have stricter governance controls: “As agents take on more work, they act as identities inside enterprise systems. That means they need the same kind of control plane and governance policies already in use for people and service accounts. At minimum, agent access should be limited. Long-lived tokens should be avoided. Secret storage should be centralized and secure.”
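
To make that concrete, here is a minimal sketch in Python of what that pattern might look like, assuming a hypothetical ControlPlane class and made-up scope names. It illustrates the researchers’ recommendations (agent identities, limited access, short-lived tokens, centralized secrets), not Okta’s or any vendor’s actual implementation.

```python
# A minimal sketch of the governance pattern described above, assuming a
# hypothetical control plane: agent identities get short-lived, narrowly
# scoped tokens, and secrets are served from one central store instead of
# living in prompts or config files. None of this is Okta's API.

import secrets
import time
from dataclasses import dataclass


@dataclass
class AgentToken:
    agent_id: str
    scopes: frozenset[str]  # least privilege: only the actions explicitly granted
    expires_at: float       # short-lived: no long-lived credentials

    def is_valid(self) -> bool:
        return time.time() < self.expires_at


class ControlPlane:
    """Issues and checks tokens for agent identities, the same way a
    control plane would for human users or service accounts."""

    def __init__(self, ttl_seconds: int = 300):
        self.ttl = ttl_seconds
        self._tokens: dict[str, AgentToken] = {}
        self._secrets: dict[str, str] = {}  # stand-in for a real secret vault

    def issue_token(self, agent_id: str, scopes: set[str]) -> str:
        token = secrets.token_urlsafe(16)
        self._tokens[token] = AgentToken(
            agent_id=agent_id,
            scopes=frozenset(scopes),
            expires_at=time.time() + self.ttl,
        )
        return token

    def authorize(self, token: str, action: str) -> bool:
        entry = self._tokens.get(token)
        return bool(entry and entry.is_valid() and action in entry.scopes)

    def get_secret(self, token: str, name: str) -> str:
        # Secrets never appear in prompts or config files; every read is
        # scope-checked against the requesting agent's token.
        if not self.authorize(token, f"secret:read:{name}"):
            raise PermissionError(f"token not authorized to read {name!r}")
        return self._secrets[name]


# Usage: a reporting agent can read the database but not delete from it.
cp = ControlPlane(ttl_seconds=300)
cp._secrets["db_password"] = "hunter2"  # seeding the demo vault directly

token = cp.issue_token("report-agent", {"db:read"})
assert cp.authorize(token, "db:read")
assert not cp.authorize(token, "db:delete")  # destructive action denied
try:
    cp.get_secret(token, "db_password")      # out of scope, raises
except PermissionError as err:
    print(err)
```

The point of the short TTL and explicit scopes is that even a misbehaving agent, like the one in Crane’s story, would be denied the delete call outright and would lose its credentials within minutes.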

In other news, the Department of Defense has reached agreements with seven tech companies to use their AI tools to “augment warfighter decision-making in complex operational environments.”

I’m sure it’ll be fine.

Discovery of the Week: New Findings Might (Finally) Break The Standard Model

The Standard Model of…

© Forbes