IT administrators are struggling to deal with the ongoing fallout from the faulty CrowdStrike update. One spoke to The Register to share what it is like at the coalface.

Speaking on condition of anonymity, the administrator, who is responsible for a fleet of devices, many of which are used within warehouses, told us: “It is very disturbing that a single AV update can take down more machines than a global denial of service attack. I know some businesses that have hundreds of machines down. For me, it was about 25 percent of our PCs and 10 percent of servers.”

He isn’t alone. An administrator on Reddit said 40 percent of servers were affected, and 70 percent of client computers, approximately 1,000 endpoints, were stuck in a boot loop.

Sadly for our administrator, and many others, things are less than ideal.

Another Redditor posted: "They sent us a patch but it required we boot into safe mode.

"We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t log in to because our AD is down."

  • Boozilla@lemmy.world:

    If you have EC2 instances running Windows on AWS, here is a trick that works in many (not all) cases. It has recovered a few instances for us:

    • Shut down the affected instance.
    • Detach the boot volume.
    • Attach the boot volume to a working instance in the same availability zone (us-east-1a or whatever).
    • Remove the file(s) recommended by CrowdStrike:
      • Navigate to the C:\Windows\System32\drivers\CrowdStrike directory.
      • Locate the file(s) matching “C-00000291*.sys” and delete them (unless they have already been fixed by CrowdStrike).
    • Detach the volume and reattach it to the original instance.
    • Boot the original instance.

    Alternatively, you can restore from a snapshot taken before CrowdStrike shipped the bad update. But that is not always ideal.

    • Defaced@lemmy.world:

      A word of caution: I’ve done this over a dozen times today, and I did have one server where the bootloader was wiped after I attached its volume to another EC2 instance. Always make a snapshot before doing the work, just in case.
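For admins facing this cleanup at scale, the file-removal step in the procedure above can be sketched in Python. This is a minimal sketch, not CrowdStrike's own tooling: it assumes the affected boot volume has already been detached and attached to a working instance, and that `volume_root` points at the mount point of that volume (for example `D:/` on Windows). The function name and return value are illustrative; the path and filename pattern come from CrowdStrike's published guidance.

```python
from pathlib import Path

def remove_crowdstrike_channel_files(volume_root):
    """Delete the faulty channel files (C-00000291*.sys) from an attached
    Windows boot volume. `volume_root` is the drive root of the detached
    volume mounted on a working instance, e.g. 'D:/' (an assumption here).
    Returns the list of deleted file paths."""
    drivers_dir = (
        Path(volume_root) / "Windows" / "System32" / "drivers" / "CrowdStrike"
    )
    deleted = []
    if not drivers_dir.is_dir():
        # Volume layout differs or Falcon isn't installed; nothing to do.
        return deleted
    for channel_file in drivers_dir.glob("C-00000291*.sys"):
        channel_file.unlink()
        deleted.append(str(channel_file))
    return deleted
```

Deleting only files matching the `C-00000291*.sys` pattern, rather than the whole `CrowdStrike` directory, leaves the sensor installed so it can pick up the fixed channel file once the machine boots. Per the caution above, snapshot the volume before touching it.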