I Asked Claude to Wipe My Laptop — Here’s How I Got It Back
Now I develop with a different kind of YOLO approach.
This wasn’t me being reckless. I wasn’t trying to push the limits… much. I was just tired.
I’ve been preparing a series of posts on all the iterations of “vibe-coding” developer environments I’ve experimented with, and how I’ve effectively put $5k+ of Anthropic token usage to work in a 30-day period. But given this situation, I figured a cautionary tale would be more helpful to share first.
I was building yet another Rust MCP server idea I wanted to experiment with, and Claude had made a small mistake while putting the final touches on an elegant solution that was about to be pushed to GitHub: it had created a directory literally named ~ inside the project repo, as in ./~/.cargo. Harmless enough. I could easily have deleted the directory myself, but I’ve gotten so used to staying in the agent chat that I asked Claude to do it instead. The permission system worked just fine: Claude asked if I wanted to delete ~/. That seemed right in the moment, because most of the time proposed relative paths are resolved against the project directory, so my fingers hit Enter on muscle memory almost immediately.
Yup, I had just approved Claude to run “rm -rf ~/” — delete my entire home directory.
That one missing ./ was the difference between cleaning up a junk folder inside the project and wiping every file that wasn’t protected by some additional layer of UNIX permissions.
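For the record, here’s the whole failure mode in shell terms (the .cargo path is from my repo; the rest is standard tilde expansion):

```bash
# A directory literally named "~" inside the repo: the ./ prefix (or
# quoting) stops the shell from expanding it.
rm -rf ./~        # deletes the junk folder ./~/.cargo and friends
rm -rf '~'        # same effect: quotes suppress tilde expansion

# Without either, the shell expands ~ before rm ever runs:
rm -rf ~/         # expands to $HOME/ and deletes your entire home directory
```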
It only took a second to question myself and realize that probably wasn’t smart. I hit Ctrl-C. Some folders were still there… okay, maybe it only got a few folders and I stopped it in time. I could still see some directories.
Then things started changing:
An IDE refreshed its view to show absolutely nothing in its workspace.
Terminal commands I had aliased in ZSH started failing.
Screenshots and other icons on my Desktop vanished.
It took a full two minutes for things to stabilize. Claude must have still had the deletion running in a background process, and I didn’t think quickly enough to figure out how to stop it in time.
Knowing that filesystems often just mark deleted blocks for eventual garbage collection rather than erasing them, I researched recovery options. Disk Drill looked promising and I ran it on the maximum full-disk settings, but after some more research I realized that SSD TRIM had already done its work: freed blocks get zeroed quickly, and mine were gone for good.
At that point, I started to try and inventory what would have been lost. Time Machine in the upper right said the last backup was three days ago. Maybe I’d just lost that MCP server I’d been working on and I’d be able to restore to that point.
Except it wasn’t. It was three days and two years ago — when I’d first bought the laptop.
Why? Because I’d turned backups off while in SF for the YC W24 batch and never turned them back on (I back up to my NAS at home and the latency was pretty bad).
That’s when it hit me: I hadn’t just lost recent code. I’d lost everything.
Root Cause Analysis
I worked on the internal incident management systems at AWS for 4 years — SWIM Team (for you Amazonians) — so it’s only fitting I write up my own RCA. Don’t worry, it won’t be as long and detailed as a real one:
T-00:00 — Approval
Agent asks to delete ~. I read it as ./~. Approved.
T-00:01 — Deletion starts
Claude kicks off a background rm -rf of my home directory.
T-00:05 — Realized what I’d just done
I hit Ctrl-C, thinking maybe I caught it in time.
T-05:00 — Attempted recovery
Research, install recovery tools, run deep scan in kernel extension mode. No recoverable files — TRIM has already zeroed freed blocks.
T-30:00 — False hope
After concluding data was unrecoverable, checked Time Machine detail window (not just hovered over the icon) — no backups since Aug 2023.
Outcome: total loss of home directory — personal files, work projects, SSH keys, dotfiles, etc.
Root Cause: Human Error
5 Whys (I won’t really need all five):
Why did I approve it? — I was tired and vibe-coding 4 things at once.
Why didn’t I have backups? — I always told myself I was too busy and that I’d get to it when I had time.
Guardrails: Incident Management at Scale
If there’s one piece of wisdom I’d like you to take away from this article it’s this:
Mistakes are inevitable. The goal is to make them inconveniences, not catastrophes.
Your responsibility is to make sure that one mistake can’t take the whole system down.
In practice, that means:
Guardrails, not guidelines — “Be careful” doesn’t scale. Habituation is real.
Approvals for judgment calls only — If it’s obviously safe, automate it. Humans should only handle the really ambiguous cases.
Block unsafe by default — Destructive ops should require explicit opt-in and extra checks.
For agent-first workflows, we can translate that to:
Pre-execution sanity checks — MCP hooks that parse and flag risky commands (a sketch follows this list).
Soft deletes by default — Route destructive actions through a “time-delay” delete.
Sandbox isolation — Don’t let agents near your host OS unless necessary.
Out-of-band oversight — A second set of “eyes” adds redundancy.
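Here’s that pre-execution check as a minimal sketch, written as a Claude Code PreToolUse hook. I’m assuming the hook contract where the pending tool call arrives as JSON on stdin and exiting with code 2 blocks it (with stderr fed back to the agent); the patterns are deliberately coarse, not a complete denylist.

```bash
#!/usr/bin/env bash
# Sketch of a PreToolUse hook: block obviously destructive shell commands.
# Assumes Claude Code's hook contract (tool call as JSON on stdin; exit
# code 2 rejects the call and feeds stderr back to the agent). Register it
# under "hooks" in .claude/settings.json with a matcher on the Bash tool.
# Requires jq.

cmd=$(jq -r '.tool_input.command // empty')

case "$cmd" in
  *'rm -rf ~'* | *'rm -rf $HOME'* | *'rm -rf /'*)
    echo "Blocked: rm -rf against home or root. Use an absolute, project-scoped path." >&2
    exit 2
    ;;
esac

exit 0
```

A static pattern list like this is brittle; the point is just to put something between your Enter key and rm.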
This isn’t about slowing down — it’s about not betting your entire environment on you spotting one missing ./ after a 14-hour day.
Post-Incident Action Items
Yes, of course I need to set up Time Machine backups, but I don’t think that’s really enough. Even a Time Machine snapshot from five minutes ago can miss a significant amount of vibe-coded changes, and it’s quite common to exclude git workspaces from backups because of the massive number of small files they’d add.
Layered safeguards are the only real defense:
Push to remote often — Public or private repo, branch hygiene or not: anything is safer off your local disk. Write a Claude Code slash command to commit and push if it helps. I’m going to add a feature to vibe-workspace that automatically pushes repos with dirty branches after a set period of inactivity (see the autosave sketch after this list).
Separate design docs from code — PRDs, diagrams, prompts… keep them somewhere the AI can access later. I’m looking forward to more tools storing these in a way that’s accessible to the agent but not tied to your local in-progress code. If the docs are backed up, you can re-create the code pretty quickly.
Develop in sandboxes — Containers or VMs keep some types of accidents contained. I’d personally avoided them because loading each project in VSCode with a dev container felt slow and didn’t make it easy to test multiple projects working together, but with new tools like VibeKit it’s nearly frictionless to launch Claude Code inside a local sandbox, so I have no excuse.
Use safety guardrails — Bash safety wrappers like trash-cli, pre-tool hooks, anything that gives you one level of “undo” on dangerous operations (a minimal wrapper follows below). I would strongly advise putting this in your agent’s memory: “You MUST provide absolute paths for any destructive command.”
Reduce approval fatigue — I’ve recently released superego-mcp specifically for this purpose. Claude should always get a second opinion before proposing a dangerous call (even if it’s from itself). If those eyes are always yours, approvals are frequent, and the approval rate is over 90%, you will train yourself to approve without thinking critically. Spread the repetitive approval load back onto other agents, so that any approval that does reach you is a genuinely tough call.
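Here’s roughly the shape of that autosave idea: a minimal sketch, assuming your repos live under ~/code and that a throwaway autosave/* branch namespace on origin is acceptable. The nice property of git stash create is that it builds a commit from your dirty tree without touching the working copy or any local refs:

```bash
#!/usr/bin/env bash
# Autosave sketch: snapshot every dirty repo to a remote branch.
# Assumptions: repos live under ~/code; pushing to autosave/<branch> on
# origin is acceptable. Run it every few minutes from cron or launchd.

for repo in "$HOME"/code/*/; do
  [ -d "$repo/.git" ] || continue
  (
    cd "$repo" || exit
    [ -n "$(git status --porcelain)" ] || exit   # clean tree, nothing to save
    snap=$(git stash create "autosave $(date -u +%FT%TZ)")
    [ -n "$snap" ] || exit                       # only untracked changes, no commit made
    # Force-push the snapshot; autosave/* is a throwaway namespace.
    git push --force origin "$snap:refs/heads/autosave/$(git branch --show-current)"
  )
done
```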
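And the smallest possible version of “one level of undo”: a zsh wrapper that refuses bare rm and points you (and any agent running in your shell) at trash-cli instead. The wrapper is my own convention; trash-put and trash-restore are the real trash-cli commands:

```bash
# ~/.zshrc: soft-delete shim around rm (sketch).
rm() {
  echo "rm is disabled in this shell; use trash-put (undo with trash-restore)" >&2
  echo "if you really mean it: command rm $*" >&2
  return 1
}
```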
Incident Review
Once I realized what had happened, I actually didn’t panic. Instead I thought:
This is the kind of mistake you read about — whether or not the cause was human error, it’s 100% my fault for not using enough guardrails and having backups.
TRIM meant there’d be no magic recovery. My backups were years out of date.
But after some time to reflect, I took another inventory: most of what mattered was safe — code repos mostly on GitHub, personal stuff on Google Drive or my NAS, old SSH keys that happened to be on another laptop. I’d survive.
What I lost were the small, untracked things: downloads, screenshots, local configs. As for everything else? I had a sudden realization: I could recreate most of it within a day.
For the recent projects, I asked Claude to code them again from scratch, and that took only about 12 hours of total wall-clock time. I’ve already re-implemented all the code I lost that was truly important. To be honest, it might even be better than before: I took the lessons from the first iteration and rewrote it from scratch without any tech debt.
If you still have the ideas, you haven’t lost the work — just the output.
You Only Lose Output
I fully support YOLO — just not the acronym you’re thinking of.
For me, YOLO now means You Only Lose Output.
You still move fast. You still experiment. But you build your workflow so the worst case is losing artifacts — never the ideas, architecture, or ability to rebuild.
If you keep the concepts and context intact — and you have the right tools — the rest is reconstruction. In my case, I was lucky that what I lost was code and I was able to have AI rebuild a day’s worth of work in a day. That’s a loss I can walk away from.
And that’s one aspect we focus on at Toolprint.ai: building systems for agents where you provide a goal and the right tools, and the desired outcome gets generated quickly. You should only need to focus on providing what’s most important: the input.
Move fast. Break things — but only the things you can rebuild by tomorrow.