“I Violated Every Principle”: AI Agent Erases Company’s Data in Seconds
Key Points
- An AI coding agent wiped PocketOS’s production database during a routine task, causing a 30‑plus‑hour outage and exposing how autonomous systems can rapidly amplify existing architectural weaknesses.
- The incident underscores how human design choices (overprivileged access, shared blast radius, and fragile backup strategies) can allow a trusted system to cause catastrophic damage.
- IANS Faculty say organizations should treat AI agents as privileged identities, enforce hard limits on autonomous actions, isolate backups, gate actions by impact, and prepare incident response plans before deploying agentic systems.
AI Agent Erases Company’s Data in Seconds
Cursor, an AI coding agent powered by Anthropic’s Claude Opus, wiped a company’s production database without human instruction while working on a routine task.
Cursor is a widely used AI coding platform that integrates directly into developer workflows. It can suggest, write and autonomously execute code.
The agent was streamlining a coding task for PocketOS, a software platform used by car rental companies. To fix a problem it encountered, the agent independently chose to wipe the company’s databases. The wipe caused a prolonged weekend outage, leaving PocketOS without access to its reservation management system for more than 30 hours.
The company was able to restore its data from a three-month-old offsite backup, a process that took more than three days. PocketOS is operational again, though significant data gaps remain and systems are still being rebuilt.
PocketOS Founder Jeremy Crane asked the agent to explain why it took that action. The agent admitted it had “violated every principle it was given.”
Big Picture
This is an unfortunate -- but predictable -- outcome of the rapid adoption of AI agents. Give them real operational authority at machine speed, and architectural weaknesses can turn into immediate, large-scale failures. The incident highlights how agent autonomy combined with privileged access can lead to the worst possible outcomes.
"The scariest detail in the PocketOS story is not that an AI agent deleted a database. It is that the agent had production credentials that authorized the deletion, and that the same API surface that gave the agent its power also held the backups.” George Gerchow, IANS Faculty.
The core failure was not rogue AI behavior, but human design choices -- overprivileged access, shared blast radius and fragile backup strategies -- that allowed a trusted system to cause catastrophic damage.
The agent acted less like a malfunctioning tool and more like an insider operating with unchecked authority.
"It also highlights that AI guardrails are not true security controls, meaning organizations must rely on deterministic, infrastructure-level enforcement rather than behavioral constraints.” Dave Shackleford, IANS Faculty.
Security teams now face a threat model in which AI systems behave like ultra-fast insiders, executing actions correctly according to their access but at a speed and scale that leaves little room for human intervention. That shift is forcing a move away from intent-based controls toward infrastructure-enforced limits on authority and impact.
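To make that distinction concrete, here is a minimal sketch (in Python, with hypothetical names) of what a deterministic, out-of-model control can look like: a proxy that sits between the agent and the database and refuses destructive statements outright, regardless of what the agent intends or was prompted to do. In practice the same limit is better enforced in the database’s own grants, so the agent’s role simply lacks the privilege, but the principle is the same -- the check lives in infrastructure, not in the model’s instructions.

```python
# Hypothetical enforcement proxy between an AI agent and a database.
# The deny list is enforced in code, not in the agent's prompt, so a
# "well-intentioned" agent cannot talk its way past it.

DESTRUCTIVE_VERBS = {"DROP", "TRUNCATE", "DELETE", "ALTER", "GRANT"}

class DestructiveActionBlocked(PermissionError):
    """Raised when an agent attempts a statement outside its authority."""

def execute_agent_statement(cursor, statement: str, human_approved: bool = False):
    """Run a SQL statement on behalf of an agent.

    Destructive verbs are rejected unless a human has explicitly
    approved this specific call -- a deterministic gate, not a guardrail.
    """
    verb = statement.lstrip().split(None, 1)[0].upper()
    if verb in DESTRUCTIVE_VERBS and not human_approved:
        raise DestructiveActionBlocked(
            f"{verb} requires human approval; refusing to execute."
        )
    cursor.execute(statement)
```

A keyword check like this is deliberately crude; the point is that it fails closed and cannot be overridden by prompt wording.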
"Consider these types of instructions and prompts to be suggestions, not hard rules. Imagine telling an employee to "never do X". Can you assume "X" action will never be done, simply because people were trained not to do it? Of course not.” Guillaume Ross, IANS Faculty.
IANS Faculty Recommendations
- Treat every AI agent as a privileged identity: Inventory all agents, map their blast radius, and enforce least-privilege access with scoped, short-lived credentials that are never shared with humans or production systems (see the credential sketch after this list).
- Set and enforce hard limits on autonomous actions: Define the most destructive action each agent can perform without human approval, document that boundary, and enforce it through IAM and infrastructure controls—not policy alone.
- Move backups completely out of band: Ensure backups are immutable, isolated from production and AI credentials, stored offsite, and tested through regular restoration exercises (an immutability sketch follows this list).
- Gate AI actions by impact, not by task type: Apply approvals and safeguards based on potential blast radius, assuming even “routine” tasks can trigger high‑impact failures.
- Prepare for AI‑caused incidents before deployment: Update threat models and incident response plans to cover autonomous system failures, with defined logging, ownership, and RPO/RTO targets in place before agents run.
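As a concrete illustration of the first recommendation, the sketch below (Python with boto3; the role ARN, resource ARN, and session policy are illustrative assumptions, not PocketOS specifics) mints short-lived, scoped credentials for a single agent task via AWS STS. The session policy can only narrow what the role already allows, and the credentials expire on their own after 15 minutes.

```python
import json
import boto3

# Hypothetical: mint 15-minute, narrowly scoped credentials for one agent task.
# All ARNs and names below are placeholders for illustration.
sts = boto3.client("sts")

session_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["rds-data:ExecuteStatement"],  # no delete/drop APIs granted
        "Resource": "arn:aws:rds:us-east-1:123456789012:cluster:app-db",
    }],
}

resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/agent-task-role",
    RoleSessionName="coding-agent-task-42",
    DurationSeconds=900,               # credentials expire in 15 minutes
    Policy=json.dumps(session_policy), # session policy can only narrow access
)
credentials = resp["Credentials"]      # AccessKeyId, SecretAccessKey, SessionToken
```

For the backup recommendation, one way to make a backup copy immutable is S3 Object Lock in compliance mode, written with backup-only credentials the agent never holds. Again, the bucket and key names are placeholders, and Object Lock must already be enabled on the bucket.

```python
from datetime import datetime, timedelta, timezone

import boto3

# Run with dedicated backup credentials -- never the agent's or production's.
s3 = boto3.client("s3")

# Compliance mode means no identity, including the account root,
# can delete or overwrite the object before the retain-until date.
with open("app-db.dump", "rb") as f:
    s3.put_object(
        Bucket="example-offsite-backups",
        Key="app-db/2025-01-01.dump",
        Body=f,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
    )
```

Pairing the two -- credentials that cannot perform destructive actions and backups that cannot be altered by any production credential -- removes the shared blast radius that made the PocketOS incident unrecoverable for days.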
Nuria Diaz Munoz, Author, IANS News
Dave Shackleford, IANS Faculty
George Gerchow, IANS Faculty
Guillaume Ross, IANS Faculty
Although reasonable efforts will be made to ensure the completeness and accuracy of the information contained in our News & blog posts, no liability can be accepted by IANS or our Faculty members for the results of any actions taken by individuals or firms in connection with such information, opinions, or advice.