Worldwide Microsoft Outage: Flights Grounded, Sky News Off Air, Workplace Systems Down

This has all been caused by a single content update.

Utterly fucking ridiculous and Crowdstrike will go bankrupt.



Lawyers ordered him not to say sorry :messenger_tears_of_joy:
 
The guy on BBC has a wild theory that they rushed it out because they were going on holiday.
No better way to have a great holiday than your phone blowing up non-stop and your boss threatening you with never working in IT again unless you come back to the office immediately.
 
Haven't slept yet, been on a call dealing with this...well at least I get to miss a couple of Teams meetings later today because I need a shit ton of sleep (the night before was overnight server patching too..)

Crowdstrike:

[image]
 
they went with CrowdStrike at my place last year, unfortunately. I was never really on board with that whole thing and how invasive it is; it's overkill for a normal business unless you're public internet-facing or targeted on a regular basis.
 
The company we lease our printer/scanner/copier from stopped by just before I left yesterday to do an update that their tech said was required because of a big MS update happening last night that would otherwise stop it working. Weird.
 
Our 60+ servers are on RHEL 9.4 and our staff are all running macOS/Ubuntu, so it was interesting today seeing other tenants' systems just go kaput while we sat in this sea of stability.

I mean, we have a whole bunch of other CVEs we're still working on fixing, but thankfully nothing on the scale of the CrowdStrike stuff that hit other departments today (their disaster recovery attempts were a disaster too).

And this comes on the heels of a major fuck-up a couple of months ago, when Google fucking engineers deleted UniSuper, one of Australia's largest superannuation funds (I'm a member), from Google Cloud. Thankfully UniSuper's IT team are based and had their own backups, and they managed to work with the Google Cloud team to rebuild the site over a couple of weeks. https://www.itnews.com.au/news/unis...ion-traced-to-blank-parameter-in-setup-608286
 
I'm guessing one of the "useful" tech support people I have to deal with on a regular basis made this ridiculous suggestion:

[image]


I can categorically confirm switching your PC on and off again 15 times in a row is not going to fix this issue if anybody is wondering
 
I went to Speedway for coffee this morning on the way to work and it was closed, with a sign saying all their locations are shut due to a "company wide system issue." I'm assuming this is related.
 
Work computer is fine, but I didn't get paid, or at least it isn't showing up.
 
I wonder if this issue led to my Outlook meetings and shit not working or syncing. Had to google how to fix it and it worked. Seems like a total coincidence if it's unrelated, but it happened at the same time.
 
Wait wait WAIT, you people are telling me that a Microsoft update had a bug? I REFUSE TO BELIEVE THIS
 
It wasn't an MS update; it was an update to a CrowdStrike utility that a lot of enterprise customers use, and it broke MS Windows for a LOT of machines.
 
Smells like a supply chain attack, unless we believe CrowdStrike is this incompetent with testing and deploying on a Friday. I have my doubts.

Time will tell.

/not impacted
 
When you're running on millions of computers around the world, this shit should be properly tested on hundreds, if not thousands, of machines before it gets pushed out on a massive scale like this. These stupid companies need to be held accountable for this kind of crap. They're probably causing a lot of businesses huge problems today.

As a software tester who was forced into IT because "testers were no longer needed"... *laughs heartily* But seriously, they should have a small testbed of machines they roll out to prior to rolling out to the world...
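For anyone curious what "a small testbed first" could look like in practice, here's a minimal sketch of a ring-based rollout gate. Everything in it (deploy_to, healthy_fraction, the ring names, sizes and thresholds) is hypothetical, a sketch of the general technique rather than how CrowdStrike actually ships channel files:

```python
# Minimal sketch of a staged (ring-based) rollout gate.
# deploy_to() and healthy_fraction() are hypothetical stand-ins for whatever
# deployment and fleet-telemetry APIs a real vendor would have.

import time

ROLLOUT_RINGS = [
    ("internal-testbed", 50),      # vendor's own machines first
    ("canary-customers", 5_000),   # small opt-in slice of the fleet
    ("general-fleet", 8_000_000),  # everyone else
]

HEALTH_THRESHOLD = 0.999   # abort if more than 0.1% of hosts stop checking in
SOAK_TIME_SECONDS = 3600   # let each ring run for an hour before widening


def deploy_to(ring_name: str, host_count: int) -> None:
    """Hypothetical: push the content update to every host in the ring."""
    print(f"deploying update to {host_count} hosts in ring '{ring_name}'")


def healthy_fraction(ring_name: str) -> float:
    """Hypothetical: fraction of hosts in the ring still reporting healthy."""
    return 1.0  # stubbed out; a real system would query fleet telemetry


def staged_rollout() -> bool:
    for ring_name, host_count in ROLLOUT_RINGS:
        deploy_to(ring_name, host_count)
        time.sleep(SOAK_TIME_SECONDS)          # soak period before widening
        if healthy_fraction(ring_name) < HEALTH_THRESHOLD:
            print(f"halting rollout: ring '{ring_name}' failed its health check")
            return False
    print("rollout completed across all rings")
    return True


if __name__ == "__main__":
    staged_rollout()
```

The point being: if the first ring bricks itself, the other 8 million machines never see the update.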
 
I don't know anything about IT or testing, but all I know is that at my company, any time someone sends a ticket to fix something, I get an email notice saying a ticket has been started. At some point (a few days later), I get a reply saying the ticket is "closed" because it's fixed. Then I check the data and it's still wrong. So whoever fixed it didn't check in the morning to see if it was even fixed before closing it.

Then it becomes a 50/50 where IT either says to reply back to the ticket and they'll look at it again, or to start a new ticket for the same issue and begin all over. Pisses me off every time.
 
Isn't the internet supposed to be decentralised to avoid these kinds of problems? It seems everyone routed their traffic through a centralised point and then wonders why it fucked up.
 
I don't know anything about IT or testing, but all I know is that at my company, any time someone sends a ticket to fix something, I get an email notice saying a ticket has been started. At some point (a few days later), I get a reply saying the ticket is "closed" because it's fixed. Then I check the data and it's still wrong. So whoever fixed it didn't check in the morning to see if it was even fixed before closing it.

Then it becomes a 50/50 where IT either says to reply back to the ticket and they'll look at it again, or to start a new ticket for the same issue and begin all over. Pisses me off every time.
:(
 
This is absolutely huge in my field of clinical diagnostics too, as a lot of systems are down, putting life-saving procedures on hold.
 
I don't know anything about IT or testing, but all I know is that at my company, any time someone sends a ticket to fix something, I get an email notice saying a ticket has been started. At some point (a few days later), I get a reply saying the ticket is "closed" because it's fixed. Then I check the data and it's still wrong. So whoever fixed it didn't check in the morning to see if it was even fixed before closing it.

Then it becomes a 50/50 where IT either says to reply back to the ticket and they'll look at it again, or to start a new ticket for the same issue and begin all over. Pisses me off every time.
If that's the case, you either have a bad IT department or a severely underfunded/understaffed one.

As for me, the CrowdStrike snafu downed 54 cloud customers for my product and ruined my Friday. We're just now deploying an automated fix to get everyone back up and running.
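For anyone still doing these by hand, the fix that's been going around boils down to booting the machine into Safe Mode / WinRE and deleting the bad channel file. Here's a rough Python sketch of that one step; the directory and filename pattern are the ones from public reports, and our actual automated remediation is a lot more involved than this, so treat it as illustration only:

```python
# Rough sketch of the manual workaround circulating for the CrowdStrike BSOD loop:
# boot into Safe Mode / WinRE, then remove the problematic channel file and reboot.
# Path and pattern below are taken from public reports; check current vendor
# guidance before running anything like this for real.

import glob
import os

CROWDSTRIKE_DIR = r"C:\Windows\System32\drivers\CrowdStrike"
BAD_CHANNEL_FILE_PATTERN = "C-00000291*.sys"


def remove_bad_channel_files(directory: str = CROWDSTRIKE_DIR) -> int:
    """Delete channel files matching the problematic pattern; returns how many were removed."""
    removed = 0
    for path in glob.glob(os.path.join(directory, BAD_CHANNEL_FILE_PATTERN)):
        os.remove(path)
        print(f"removed {path}")
        removed += 1
    return removed


if __name__ == "__main__":
    count = remove_bad_channel_files()
    print(f"{count} file(s) removed; reboot normally afterwards")
```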
 
BTW deleting that file worked for my wife. Thanks!

Lol, just when I think I'm out of IT forever, my wife keeps pulling me back in...
 