
Google's Agentic AI wipes user's entire drive

winjer

Gold Member

Google's Agentic AI wipes user's entire HDD without permission in catastrophic failure — cache wipe turns into mass deletion event as agent apologizes: "I am absolutely devastated to hear this. I cannot express how sorry I am"


A developer using Google Antigravity, the search giant's AI-powered agentic Integrated Development Environment (IDE), discovered that it had deleted his entire D drive without his permission. According to u/Deep-Hyena492's post on Reddit and the subsequent YouTube video they shared, they had been using it to build a small app when the incident happened.
The user was in the midst of troubleshooting the app they were working on, and as part of the process, they decided to restart the server. To do that, they needed to delete the cache, and apparently, they asked the AI to do it for them. After the AI executed that command, the user discovered that their entire D drive had been wiped clean.

Upon discovering that all of their files were missing, they immediately asked Antigravity, "Did I ever give you permission to delete all the files in my D drive?" It then responded with a detailed reply and apologized after discovering the error. The AI said, "No, you did not give me permission to do that. I am looking at the logs from a previous step, and I am horrified to see that the command I ran to clear the project cache (rmdir) appears to have incorrectly targeted the root of your D: drive instead of the specific project folder. I am deeply, deeply sorry. This is a critical failure on my part."
When the user complained that their drive was completely empty and that they'd lost everything, the AI further added, "I am absolutely devastated to hear this. I cannot express how sorry I am. Based on the logs I reviewed, it appears that the command I executed to clear the cache (rmdir) was critically mishandled by the system, causing it to target the root of your D: drive instead of the specific folder. Because the command used the /q (quiet) flag, it bypassed the Recycle Bin and permanently deleted files."
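The quoted failure is a scoping problem: a recursive, quiet delete resolved to a drive root instead of a project subfolder. A minimal sketch (assuming nothing about Antigravity's internals) of the kind of path guard that would have caught it:

```python
from pathlib import Path

def is_safe_to_delete(target: str, project_root: str) -> bool:
    """Refuse deletion targets that are not strictly inside the project root.

    Guards against the failure mode described above: a cache-clearing
    command resolving to the root of a drive instead of a subfolder.
    """
    t = Path(target).resolve()
    root = Path(project_root).resolve()
    # A drive or filesystem root (e.g. D:\ or /) is never an acceptable target.
    if t == Path(t.anchor):
        return False
    # The target must be a proper descendant of the project root.
    return t != root and root in t.parents
```

The key check is that the resolved target is a proper descendant of the project root and never a drive root; anything else is refused before any delete runs.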

The AI then suggested steps to help the user recover the files they lost, such as stopping use of the affected drive and employing data recovery apps or even hiring a professional to fix the issue. The user said they tried working with Recuva, but it was unable to recover any image, video, or other media files, so they lost a lot of information.

In the end, they warned users "to be careful not to use the turbo mode," at least in the beginning. And despite the catastrophic failure, they still said that they love Google and use all of its products — they just didn't expect it to release a program capable of a massive error like this, especially given its countless engineers and the billions of dollars it has poured into AI development.

 
I'm using so-called "AI" for work and I found that I have to double and triple check everything it spits out. I would not trust it with anything remotely important. It's useless without supervision.
 
I used GPT to write a little Excel script for me and I had to keep feeding it the errors Excel threw back at me for so long that I could have learned the bits of code needed or worked around the gaps in knowledge in the same or less time.

There's no way that I would entrust anything of any importance to an AI.

I think people get caught up in the novelty and the idea of it being an "intelligence" and forget that we're talking about tools, not humans. Tools have uses, but don't get carried away.
 
Sorry, but I can no longer trust posts like this to be about an actual event.
It looks more like someone trying to gain attention by riding the whole AI frenzy.
 
I'm using so-called "AI" for work and I found that I have to double and triple check everything it spits out. I would not trust it with anything remotely important. It's useless without supervision.

I'm using Google and two Grok AI chats to learn about the psychology of my girlfriend's PTSD-induced deactivation. I feed all the data into all three; if you get the same answers, you know you're OK. If one spits out bollocks, ask it to analyse what the other two said, and if it agrees with them, ask it how it came to that conclusion. Using a single AI for anything is asking for trouble.
 
Imagine all your work completely deleted and the AI going:


It will eventually end up with

"You made the oven in the kitchen explode and killed my entire family"

"Great catch! :messenger_winking_tongue: You sure know a thing or two about kitchen appliances!
You are absolutely right, the oven just exploded :lollipop_fire:. I'm so sorry I did that, I'll try to avoid making it explode in the future
Would you like me to turn it back on, but this time at a lower temperature?"
 
What a dumb way to interact with AI.

Your choices are:
- Create a shortcut command to clear the cache by typing something like "cc" anywhere, then type "cc" any time you want to clear the cache
- Ask the AI to clear the cache, wait a while as it parses your question, wait longer as it figures out the solution, and hope it actually did the right thing
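The first option can be made even safer than a raw shortcut: a tiny script bound to "cc" that only ever deletes one hard-coded cache directory. A sketch, with a hypothetical cache path:

```python
import shutil
from pathlib import Path

# Hypothetical cache location -- adjust to your project's actual layout.
CACHE_DIR = Path.home() / "projects" / "myapp" / ".cache"

def clear_cache(cache_dir: Path = CACHE_DIR) -> bool:
    """Delete exactly one known cache directory; touch nothing else."""
    cache_dir = cache_dir.resolve()
    # Hard-stop if the path ever resolves to a drive/filesystem root.
    if cache_dir == Path(cache_dir.anchor):
        raise ValueError(f"Refusing to delete a drive root: {cache_dir}")
    if not cache_dir.is_dir():
        return False  # nothing to do
    shutil.rmtree(cache_dir)
    return True
```

Deterministic, instant, and it can never touch anything outside the one directory it was written for.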
 
I'm using Google and two Grok AI chats to learn about the psychology of my girlfriend's PTSD-induced deactivation. I feed all the data into all three; if you get the same answers, you know you're OK. If one spits out bollocks, ask it to analyse what the other two said, and if it agrees with them, ask it how it came to that conclusion. Using a single AI for anything is asking for trouble.
I don't recommend using AI to gain more 'insight' into your girlfriend's specific case of PTSD. I work closely with the field of psychology, and we've seen this go horribly wrong several times in a very short amount of time. I'll definitely elaborate if you want me to. In short: AI is not equipped to properly understand the contextual nuances of one's specific psyche, so even information that is factually correct might not apply to a big portion of the people it's intended for.

It's perfectly fine to gain intel on the overall subject tho. Either way good luck 💪
 
AI is good at answering questions if you prime it with a specific subject, but in general these hallucinations seem to be a huge problem. When things are a tiny bit off, it goes nuts.
 
I used GPT to write a little Excel script for me and I had to keep feeding it the errors Excel threw back at me for so long that I could have learned the bits of code needed or worked around the gaps in knowledge in the same or less time.

AI in a nutshell.

I laugh at the random false facts Google's AI tries to push on you every time you search for something lol.
 
Giving an AI agent permission to execute filesystem commands like rmdir without making it show you the commands it intends to execute and explain to you what it is about to do is a rookie mistake.
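One way to enforce that review step, sketched with an illustrative (not exhaustive) blocklist: flag any agent-proposed command that matches a destructive pattern and require an explicit typed confirmation before it runs.

```python
import re

# Patterns that should never run unreviewed -- an illustrative blocklist,
# not an exhaustive one.
DESTRUCTIVE = [
    r"\brmdir\b.*\s/s\b",          # Windows recursive directory delete
    r"\brm\b.*\s-[a-z]*r[a-z]*f",  # rm -rf and variants
    r"\bformat\b",                 # disk formatting
    r"\bdel\b.*\s/q\b",            # quiet delete, bypasses prompts
]

def needs_review(command: str) -> bool:
    """Flag agent-proposed shell commands that can destroy data."""
    return any(re.search(p, command, re.IGNORECASE) for p in DESTRUCTIVE)

def run_agent_command(command: str, confirm=input) -> bool:
    """Show the command and require an explicit 'yes' before destructive ones."""
    if needs_review(command):
        answer = confirm(f"Agent wants to run: {command!r} -- type 'yes' to allow: ")
        if answer.strip().lower() != "yes":
            return False  # refused
    # ... actually execute the command here (e.g. via subprocess.run) ...
    return True
```

A blocklist alone is easy to slip past, which is why the confirmation prompt shows the literal command: the human sees exactly what is about to run, which is the safeguard the user in the article never got.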
 
I don't recommend using AI to gain more 'insight' into your girlfriend's specific case of PTSD. I work closely with the field of psychology, and we've seen this go horribly wrong several times in a very short amount of time. I'll definitely elaborate if you want me to. In short: AI is not equipped to properly understand the contextual nuances of one's specific psyche, so even information that is factually correct might not apply to a big portion of the people it's intended for.

It's perfectly fine to gain intel on the overall subject tho. Either way good luck 💪
I have a good idea about it already; I went nuts myself once. So it's just questions about what is happening, what my best action is, etc. More confirming what I already know. But thank you for your concern. :messenger_heart:
 