NotPetya: The Most Destructive Cyberattack in History | Cybersecurity Deep Dive

On the morning of June 27, 2017, IT teams across Ukraine started getting calls. Computers were rebooting unexpectedly. Some came back up showing a ransom screen asking for $300 in Bitcoin. Others just stopped responding entirely.

At first, it looked like a ransomware outbreak. Pay or restore from backup, get back to work.

But within a few hours, it became clear this was something different. The infection wasn’t staying inside Ukraine. Maersk, the world’s largest shipping company, started losing systems at its Copenhagen headquarters. FedEx’s TNT Express division went dark. Pharmaceutical giant Merck lost access to manufacturing systems. A Mondelez candy factory in Tasmania, about as far from Kyiv as you can get, stopped working.

By the end of the day, researchers were describing the ransom screen as fake. There was no working decryption mechanism. No one who paid got their data back. The malware had one real purpose: permanent destruction.

That attack was NotPetya. It caused an estimated $10 billion in damage and remains the most economically destructive cyberattack ever recorded.

This post is a detailed look at what it was, how it worked technically, why it spread so fast, and what we should still be learning from it today.


Background: What Was Going on Before This

To understand why NotPetya existed, it helps to understand the context.

Ukraine had been under sustained cyberattack from Russian-affiliated groups since at least 2014. The country’s power grid was taken offline twice — once in December 2015 and again in December 2016. Those attacks were attributed to a group called Sandworm, which security researchers and Western intelligence agencies linked to Russia’s GRU (military intelligence). Ukrainian government networks, election infrastructure, and media companies were also repeatedly targeted.

Russia was using Ukraine as a test environment for offensive cyber operations.

At the same time, a separate but related story was unfolding. A group called the Shadow Brokers had been leaking what appeared to be NSA hacking tools. In April 2017, they dumped a particularly significant batch. One of those tools was EternalBlue.

EternalBlue exploited a vulnerability in SMBv1 that allowed remote code execution on unpatched machines with no user interaction. Microsoft had released a patch for it (MS17-010) in March 2017, about a month before the dump. Many organisations hadn’t applied it yet.

Two months after EternalBlue went public, NotPetya launched. It used EternalBlue as its primary propagation engine.


What NotPetya Actually Was

NotPetya was visually similar to the ransomware strain known as Petya. Petya overwrote the Master Boot Record with its own bootloader, encrypted the Master File Table, showed a skull screen, and demanded payment. It was real ransomware: pay up and, at least in theory, you’d get a key.

NotPetya reused Petya’s aesthetic, ransom screen, Bitcoin wallet, and fake “support” email, but the internals were completely different. When researchers analysed the code, they found that the “installation ID” shown on the victim’s screen was randomly generated. It had no relationship to any encryption key. Even if someone had wanted to give you a decryption key, they couldn’t; the information required to reconstruct one was never captured.

In any case, the hosting provider shut down the fake “support” email address within hours of the attack going public.

Researchers at Comae Technologies and Kaspersky Lab concluded independently: this was a wiper, not ransomware. The ransom demand was there to create plausible deniability and slow down the initial response, not to generate revenue.

Why You Couldn’t Recover

NotPetya attacked at multiple levels:

File encryption: It encrypted files with 65 specific extensions — documents, spreadsheets, source code, databases, archives. It used RSA and AES correctly, with no implementation bugs that might allow recovery. This wasn’t sloppy ransomware written by a script kiddie.

MBR overwrite: The Master Boot Record is the small section of a disk that tells the computer how to boot. NotPetya replaced it with a custom bootloader that showed the ransom screen. On reboot, the machine couldn’t load Windows at all.

MFT encryption: The Master File Table is the NTFS index that tracks where every file is stored on a drive. NotPetya encrypted this too. Even if you could fix the MBR, you’d have a disk full of data with no map to find any of it.

The combination meant the machine was essentially unrecoverable without a full reimage from a clean backup. If your backups were on a network share that was also encrypted, you had nothing.


How It Got In: The M.E.Doc Supply Chain Attack

NotPetya didn’t arrive in a phishing email. No one had to click on anything.

M.E.Doc is Ukrainian accounting software that most businesses operating in Ukraine are legally required to use for tax reporting. It’s the kind of software that runs quietly in the background on thousands of machines, with auto-updates enabled, because that’s just how you keep it compliant.

The attackers, almost certainly Sandworm, had quietly compromised M.E.Doc’s development infrastructure months before June 27. The exact initial access method was never publicly confirmed, but they had sufficient access to modify the software’s update mechanism. On June 27, a legitimate-looking M.E.Doc update was pushed to users. It contained a backdoor. When installed, it pulled down and executed NotPetya.

This is what’s called a supply-chain attack. Instead of attacking your target directly, you compromise something they already trust and use that as the delivery vehicle.

The reason this is so difficult to defend against is that every security control you have is oriented toward preventing unauthorised access. A supply-chain attack bypasses that entirely because the software running on your machine is whitelisted by your IT team. It has the permissions it needs. It probably runs as a service with elevated privileges. And it’s getting updates from a vendor you’ve already decided to trust.

The M.E.Doc attack was particularly effective because the software was so widely deployed and legally mandatory. If you had a Ukrainian subsidiary or operated in Ukraine at all, there was a good chance you had it.


Technical Deep Dive: How NotPetya Spread

This is the part that makes NotPetya stand out from most malware. Getting into the first machine via M.E.Doc was just the starting point. What happened next is why one infected machine in Ukraine could take down a multinational company in 45 minutes.

EternalBlue: Spreading Without Credentials

Once running on an initial machine, NotPetya started scanning the local network for other machines reachable on port 445 (SMB). For any unpatched Windows machine it found, it used EternalBlue to gain remote code execution with SYSTEM privileges — the highest level of access on the machine.

No username. No password. No user interaction on the target machine. Just a crafted SMB packet.

The patch (MS17-010) had been out for three months. But in large enterprise environments, patch lag is real. Not every machine gets patched on schedule, especially legacy systems, OT-adjacent infrastructure, or anything where downtime is expensive. NotPetya found every unpatched machine it could reach and spread to all of them simultaneously.

It also used EternalRomance, a second NSA-derived SMB exploit, to cover machines that EternalBlue couldn’t reach for whatever reason.
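
As a rough illustration of the attack surface involved, here is a minimal sketch of the reachability check that precedes any SMB exploitation: which hosts on a subnet accept connections on TCP 445. It assumes you have permission to scan, uses 10.0.0.0/24 as a placeholder subnet, and treats an open port 445 only as a proxy for SMB exposure, not as evidence of a missing MS17-010 patch.

    # Sketch: inventory which hosts on a subnet expose TCP 445 (SMB).
    # Assumptions: run only on networks you own; 10.0.0.0/24 is a placeholder.
    import ipaddress
    import socket

    SUBNET = "10.0.0.0/24"   # hypothetical internal subnet
    TIMEOUT = 0.5            # seconds per connection attempt

    def smb_reachable(ip: str) -> bool:
        """Return True if TCP 445 accepts a connection on this host."""
        try:
            with socket.create_connection((ip, 445), timeout=TIMEOUT):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        exposed = [str(ip) for ip in ipaddress.ip_network(SUBNET).hosts()
                   if smb_reachable(str(ip))]
        print(f"{len(exposed)} hosts expose SMB on {SUBNET}")
        for ip in exposed:
            print("  ", ip)

Anything that shows up in a list like this and doesn’t strictly need SMB reachable from that network segment is blast radius waiting to be used.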

Mimikatz: Stealing Credentials from Memory

EternalBlue alone couldn’t explain the full extent of NotPetya’s reach. Plenty of organisations had applied MS17-010. On those machines, the exploit failed. But NotPetya kept spreading anyway — because it had a second approach.

NotPetya included a modified version of Mimikatz, an open-source post-exploitation tool that extracts credentials from Windows memory. Specifically, it targets LSASS (Local Security Authority Subsystem Service) — the Windows process that handles authentication and caches credentials in memory to support features like single sign-on across network resources.

If any user, especially an admin, had authenticated on the infected machine recently, their credentials were sitting in LSASS. NotPetya scraped them: plaintext passwords where available, NTLM hashes otherwise, Kerberos tickets if it could find them.

It built up a pool of stolen credentials and used them to reach machines that EternalBlue couldn’t touch.

PsExec and WMIC: Moving Laterally with Valid Credentials

With real credentials in hand, NotPetya used two legitimate Windows administration tools to spread:

PsExec is a Microsoft Sysinternals tool. Admins use it to run commands on remote machines over the network. It’s completely normal in most enterprise environments.

WMIC (Windows Management Instrumentation Command-line) is a built-in Windows tool for remote administration. Also completely normal.

NotPetya used these tools with the stolen credentials to authenticate to remote machines and deploy copies of itself. From a monitoring perspective, this looks like an admin doing admin things. PsExec connecting to machines and running a command is not inherently suspicious — it happens hundreds of times a day in most enterprise environments.

This technique is called “living off the land.” It’s effective specifically because it’s hard to distinguish from legitimate activity without good behavioural baselines.
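
A behavioural baseline doesn’t have to be sophisticated to be useful. The sketch below is a minimal illustration, not a production detection: it assumes you already aggregate remote-execution events (service installs, remote process creation) per source host per day, and the host names and counts are invented for the example. It flags hosts whose activity jumps well above their own history.

    # Sketch: flag hosts whose remote-execution activity jumps far above
    # their own historical baseline. Field names and sample data are
    # hypothetical; real counts would come from SIEM or EDR telemetry.
    from statistics import mean, pstdev

    def flag_anomalies(history, today, min_events=10, sigma=3.0):
        """history: {host: [daily counts]}, today: {host: today's count}."""
        flagged = []
        for host, count in today.items():
            past = history.get(host, [0])
            baseline = mean(past)
            spread = pstdev(past) or 1.0   # avoid a zero-width baseline
            if count >= min_events and count > baseline + sigma * spread:
                flagged.append((host, count, baseline))
        return flagged

    if __name__ == "__main__":
        history = {"ws-042": [2, 1, 3, 2, 2], "admin-01": [40, 35, 50, 45, 38]}
        today = {"ws-042": 180, "admin-01": 44}   # ws-042 suddenly very busy
        for host, count, baseline in flag_anomalies(history, today):
            print(f"{host}: {count} remote-exec events today vs ~{baseline:.1f}/day baseline")

An admin host that normally runs PsExec forty times a day doesn’t trip this; an accounting workstation that has never run it and suddenly does so two hundred times in an hour does.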

Why This Combination Was So Effective

The two-track propagation model is what made NotPetya so fast and so total:

  • Unpatched machines: vulnerable to EternalBlue regardless of credentials
  • Patched machines: vulnerable to credential reuse via PsExec/WMIC if any admin had ever touched them

In a typical Active Directory environment, domain admins log in to many different machines. Their credentials end up cached in LSASS across dozens or hundreds of systems. Compromising one admin account on one machine can theoretically unlock every machine the admin has ever touched.

The result was that once NotPetya got into a network, it spread to essentially every reachable Windows machine within minutes. You couldn’t outrun it manually. By the time an alert fired, it had already moved.

The Final Payload

After spreading, each infected machine executed the destructive payload:

  1. A Task Scheduler entry was created to trigger a reboot
  2. The MBR was overwritten with the custom bootloader
  3. File and MFT encryption began
  4. On reboot, the machine came up to the ransom screen and stopped there

The whole sequence from initial infection to unbootable machine took about an hour.


The Full Attack Chain

Step 1 — Initial Access
  M.E.Doc update server compromised by Sandworm
  Backdoor inserted into a legitimate software update
  Update delivered to ~1 million M.E.Doc installations in Ukraine

Step 2 — Execution
  Infected update runs → backdoor contacts C2 server
  NotPetya DLL downloaded and executed via rundll32.exe

Step 3 — Credential Harvesting
  Mimikatz-like module reads LSASS memory
  Extracts passwords, NTLM hashes, Kerberos tickets
  Credential pool built for use in lateral movement

Step 4 — Network Scanning
  Scans local subnets for machines on port 445
  Also identifies machines reachable via WMI/admin shares

Step 5 — Lateral Movement (two parallel tracks)
  Track A: EternalBlue/EternalRomance against unpatched machines
  Track B: PsExec/WMIC with stolen credentials against patched machines
  Each newly infected machine immediately starts Steps 3–5

Step 6 — Persistence
  Task Scheduler entry created for system reboot in ~1 hour
  Propagation continues during countdown

Step 7 — Destruction
  On reboot: MBR overwritten, MFT encrypted, files encrypted
  Machine displays ransom screen
  Machine is unrecoverable without full reimage from clean backup

The recursive nature of Step 5 is key. Every machine infected by NotPetya immediately started scanning and spreading. The infection didn’t move sequentially from machine to machine — it exploded outward from every infected point simultaneously.
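
A toy model makes the difference between sequential and parallel spread concrete. The sketch below assumes an idealised flat network where every infected machine successfully reaches a fixed number of new vulnerable hosts per propagation round; the numbers are illustrative, not measurements from the real attack.

    # Sketch: toy model of worm fan-out where every infected machine spreads
    # in parallel each round. Parameters are illustrative, not measured.
    def rounds_to_saturate(total_hosts=10_000, spread_per_host=5):
        infected, rounds = 1, 0
        while infected < total_hosts:
            infected = min(total_hosts, infected * (1 + spread_per_host))
            rounds += 1
            print(f"round {rounds}: ~{infected} machines infected")
        return rounds

    if __name__ == "__main__":
        # Even if each machine only reaches 5 new hosts per round, 10,000
        # machines fall within 6 rounds. A strictly sequential infection
        # would need 10,000 steps to cover the same ground.
        rounds_to_saturate()

That growth curve is why “we’ll isolate the infected machine when the alert comes in” was never a viable containment strategy here.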


Why Traditional Security Tools Didn’t Stop It

A question that came up a lot after this: “Where was the antivirus?”

The honest answer is that most endpoint security tools of the era weren’t equipped to stop this:

Signature-based detection: The NotPetya binary was novel enough that AV signatures didn’t catch it immediately. By the time vendors updated signatures, the damage was done.

Fileless execution: Some stages of the attack ran entirely in memory, leaving nothing on disk for scanners to find.

Legitimate tool abuse: PsExec and WMIC are whitelisted by default in most environments. Security tools don’t alert on them because they’re Microsoft-signed tools used constantly for legitimate purposes.

Speed: The time from initial compromise to full network encryption was too short to allow a reactive response. You’d get an alert, and by the time someone looked at it, 10,000 machines were already encrypted.

This is one of the reasons the industry has moved toward behavioural detection and EDR (Endpoint Detection and Response) rather than relying on signatures. Detecting that “LSASS is being read by an unexpected process” or “this machine just scanned 500 hosts on port 445 in 30 seconds” is more useful than waiting for a known-bad hash to show up.
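
As a deliberately simplified example of what that looks like in practice, the sketch below filters EDR-style process-access events for anything opening a handle to lsass.exe that isn’t on a short allowlist. The event shape, the allowlist contents, and the file paths are assumptions made for illustration; real telemetry (Sysmon’s ProcessAccess event, or a commercial EDR’s equivalent) carries richer fields and ships with its own detections.

    # Sketch: flag processes opening lsass.exe that are not on a small
    # allowlist. Event structure, allowlist, and paths are hypothetical.
    EXPECTED_LSASS_READERS = {
        r"C:\Windows\System32\wininit.exe",
        r"C:\Windows\System32\csrss.exe",
        # ...plus whatever AV/EDR agents actually run in your environment
    }

    def suspicious_lsass_access(events):
        """events: iterable of dicts with 'source_image' and 'target_image'."""
        for ev in events:
            if (ev["target_image"].lower().endswith("lsass.exe")
                    and ev["source_image"] not in EXPECTED_LSASS_READERS):
                yield ev

    if __name__ == "__main__":
        sample = [
            {"source_image": r"C:\Windows\System32\wininit.exe",
             "target_image": r"C:\Windows\System32\lsass.exe"},
            {"source_image": r"C:\Users\Public\updater.dll",   # hypothetical
             "target_image": r"C:\Windows\System32\lsass.exe"},
        ]
        for ev in suspicious_lsass_access(sample):
            print("ALERT: unexpected LSASS access by", ev["source_image"])

The hard part isn’t the rule; it’s maintaining the allowlist and deciding what happens when the alert fires at 2am.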


What It Actually Destroyed: Company by Company

Organization | Sector | Reported Financial Impact | What Happened
Maersk | Shipping | ~$300M | 45,000 PCs, 1,000 servers destroyed. 76 port terminals offline.
Merck | Pharmaceutical | ~$870M | Subsidiaries across 67 countries affected.
FedEx / TNT Express | Logistics | ~$400M | Operations disrupted for weeks. Some customer data lost permanently.
Mondelez | Food & Beverage | ~$188M | 1,700 servers and 24,000 laptops destroyed.
Saint-Gobain | Building Materials | ~$384M | Manufacturing and supply chain disrupted.
Reckitt Benckiser | Consumer Goods | ~$129M | Manufacturing and supply chain disrupted.

Total estimated global damage: $10 billion+

Beyond the named companies, thousands of smaller Ukrainian businesses were completely wiped out. Hospitals lost access to patient records. Pharmacies couldn’t process prescriptions. Government services went offline. The Ukrainian central bank issued a warning that the financial system itself was under attack.

The collateral damage outside Ukraine was largely incidental. The weapon didn’t distinguish between its intended target and anyone else running vulnerable Windows infrastructure connected to the same networks.


The Maersk Story

Maersk deserves its own section because what happened to them is one of the most well-documented examples of what a company-wide infrastructure disaster actually looks like from the inside.

What They Lost

Maersk’s M.E.Doc connection was through a Ukrainian subsidiary. Once NotPetya was on that local network, it found Maersk’s global corporate network and spread across it.

By the end of June 27:

  • 45,000 PCs destroyed
  • 1,000+ server applications gone
  • 3,500 servers needing rebuild or replacement
  • 76 APM terminal locations offline globally, including Newark, one of the busiest US ports

The terminal software Maersk used to track containers, berth ships, and coordinate logistics — all offline. Ships were being turned away at ports. Cargo sat on docks with no tracking. Customs clearances stalled. The disruption started hitting global supply chains within 24 hours.

The communications situation was also bad. Most internal communication tools were down. People were using personal phones and WhatsApp to coordinate a response across 130 countries.

The Active Directory Problem

The single biggest technical obstacle to recovery was Active Directory.

Maersk’s global Active Directory domain managed authentication for the entire organisation. Every domain controller had been wiped. Every single one.

Without a domain controller, you can’t rebuild the domain. You’d have to start from scratch, which in a global enterprise would take months.

Except that one domain controller had survived.

The Ghana Server

In Maersk’s Accra, Ghana, office, the local domain controller was offline due to a power outage at the exact moment NotPetya swept through the network. It never got infected.

When Maersk engineers figured this out, they flew a team to Ghana. The server was physically removed, flown to the UK, and used as the seed to rebuild the entire global domain.

This is not a dramatic exaggeration. Without that one offline server, Maersk’s recovery timeline would have been measured in months, not days.

The Recovery Operation

What followed was one of the largest and fastest IT rebuild operations anyone in the industry had seen:

  • 10 days to restore core infrastructure: 45,000 PCs, 3,500 servers, 1,000 applications
  • 45,000 PCs borrowed from HP — HP’s entire available EMEA inventory
  • Staff working 24-hour shifts across multiple time zones
  • IT personnel flying between offices carrying USB drives with clean OS images
  • Coordination handled primarily over personal mobile phones

The port disruptions cost an estimated $300M in lost revenue. Customers who depended on Maersk’s logistics had to make alternative arrangements. Some shipments were simply delayed for weeks.

Maersk’s executives later spoke publicly about the experience. The consistent theme was: we thought we were resilient. We weren’t. We had no idea how bad the blast radius would be until it happened.


Who Did This and Why

Attribution in cybersecurity is complicated and sometimes contested. In this case, the evidence that accumulated pointed clearly in one direction.

Code overlap: NotPetya shared significant code with tools previously attributed to Sandworm — a GRU-linked threat actor behind the Ukrainian power grid attacks in 2015 and 2016.

Target selection: Over 80% of infections were in Ukraine, delivered via software specifically designed for Ukraine. This wasn’t a random global attack — Ukraine was the target. Everyone else was collateral damage.

Operational complexity: Gaining persistent access to M.E.Doc’s build infrastructure, staying hidden for months, and deploying at a chosen moment is not something a criminal ransomware gang does. This required intelligence agency resources and patience.

Timing: The attack was launched on Ukrainian Constitution Day, a national holiday, when IT staffing across Ukraine was reduced.

In February 2018, the US, UK, Australia, Canada, and New Zealand formally attributed NotPetya to the Russian GRU. The White House described it as “the most destructive and costly cyberattack in history.” Russia denied involvement, as expected.

The intent was to damage Ukraine’s economy and infrastructure as part of the ongoing conflict. The ransomware disguise was meant to make it look like criminal activity and create deniability. But $10 billion in global damage is hard to spin as a garden-variety cybercrime.

One uncomfortable implication of this: the US and its allies issued a formal condemnation and attribution, and that was essentially the full response. No proportionate retaliation. No meaningful deterrence. That precedent hasn’t been lost on anyone.


What We Should Have Learned (and Still Need to)

1. Patch Management Is Table Stakes

MS17-010 was released 106 days before NotPetya hit. Most of the organisations that took the worst damage hadn’t applied it. That’s not a cutting-edge security failure — it’s a maintenance failure.

Critical patches need to be applied quickly. The window between a patch release and an exploit being actively used in attacks keeps getting shorter. Weeks-long patch cycles aren’t viable for critical vulnerabilities. This should be a board-level conversation about acceptable risk, not just a ticket in the IT backlog.

Also: disable SMBv1. It’s been deprecated for years, and there’s no good reason to have it running. This single change would have significantly reduced EternalBlue’s effectiveness.

2. Flat Networks Are Dangerous

One reason NotPetya spread as far as it did: too many organisations had flat or nearly flat internal networks where everything could reach everything else.

A Ukrainian accounting workstation should not have been able to reach a Danish domain controller. The fact that it could — because the corporate network connected everyone for operational convenience — is what let NotPetya cross continents.

Network segmentation, done properly, limits blast radius. If the Ukrainian accounting subnet can’t reach the core infrastructure segment, the damage stays in the accounting subnet. This isn’t a new idea, but it’s genuinely hard to retrofit into existing environments, and it requires ongoing maintenance as networks evolve.

Zero Trust architecture takes this further: don’t trust any connection by default, require explicit authentication for every resource access, and treat internal network traffic with the same scepticism you’d apply to external traffic.

3. Offline, Tested Backups Are Non-Negotiable

A large portion of NotPetya’s damage came from organisations whose backups were useless: they either didn’t exist, hadn’t been tested, or were stored on network shares that were also encrypted.

The 3-2-1 rule: three copies, two different media types, one offsite. But the rule that matters just as much is that backups must be air-gapped or immutable. A backup on a network share that’s accessible from the infected machine is not a backup in this scenario.

And test your restores. A backup you’ve never actually restored from is a theory, not a capability. Run restore drills. Know your actual RTO (Recovery Time Objective). Maersk’s 10-day rebuild was considered extraordinary; most organisations don’t have the resources or preparation to pull that off. Know where you stand before you need to find out.
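
Part of that drill can be automated. The sketch below is a minimal illustration under a few stated assumptions: each backup set ships with a manifest of SHA-256 hashes, a drill has already restored the set to a scratch location, and the paths, manifest format, and 24-hour age threshold are placeholders rather than recommendations. It verifies file integrity and backup freshness; it does not replace actually booting services from the restored data.

    # Sketch: sanity-check a restored backup set against a hash manifest
    # and an age threshold. Paths, manifest format, and thresholds are
    # placeholders; this supplements a full restore drill.
    import hashlib
    import json
    import os
    import time

    MANIFEST = "backup_manifest.json"   # {"created": <epoch>, "files": {"rel/path": "sha256hex"}}
    RESTORE_DIR = "/mnt/restore-test"   # where the drill restored the backup
    MAX_AGE_HOURS = 24

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def check_restore(manifest_path, restore_dir):
        with open(manifest_path) as f:
            manifest = json.load(f)
        ok = True
        age_hours = (time.time() - manifest["created"]) / 3600
        if age_hours > MAX_AGE_HOURS:
            print(f"WARNING: newest backup is {age_hours:.0f}h old")
            ok = False
        for rel_path, expected in manifest["files"].items():
            full = os.path.join(restore_dir, rel_path)
            if not os.path.exists(full) or sha256_of(full) != expected:
                print("missing or corrupted:", rel_path)
                ok = False
        return ok

    if __name__ == "__main__":
        print("restore drill passed" if check_restore(MANIFEST, RESTORE_DIR) else "restore drill FAILED")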

4. Credential Hygiene Matters More Than Most People Think

NotPetya’s lateral movement relied on credentials sitting in LSASS memory on infected machines. The reason those credentials were there: admins had logged into those machines, or services were running under domain accounts with broad permissions, and Windows caches those credentials by default.

Practical steps that reduce this exposure:

  • LAPS (Local Administrator Password Solution): Gives each machine a unique local admin password, so one stolen hash doesn’t unlock everything.
  • Protected Users security group: Prevents NTLM authentication and credential caching for members. Useful for privileged accounts.
  • Tiered administration model: Domain admins only authenticate to domain controllers, never to workstations. Workstation admins don’t have access to servers. Cross-tier authentication just doesn’t happen.
  • MFA on everything privileged: A stolen password hash is significantly less useful if MFA is required.

The principle is: assume your network will eventually be compromised somewhere, and design your identity infrastructure so that one compromised machine can’t become the key to everything else.

5. Supply Chain Attacks Are a Real and Growing Threat

The M.E.Doc vector is the same fundamental attack as SolarWinds in 2020 and the 3CX compromise in 2023. The pattern keeps repeating: attackers gain access to a vendor’s build pipeline and use legitimate software updates as the delivery mechanism.

Your security posture is only as strong as your weakest trusted vendor. This means:

  • Know what third-party software is running in your environment with elevated privileges
  • Review vendor security practices as part of procurement
  • Use a software bill of materials (SBOM) where possible to track dependencies
  • Monitor for unexpected network traffic from trusted applications
  • Treat unexpected software behaviour — even from trusted tools — as suspicious

You can’t fully eliminate supply-chain risk, but you can reduce the blast radius by limiting what software can do and monitoring what it does.
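
As one concrete way to act on the SBOM and monitoring points above, the sketch below reads a CycloneDX-style SBOM (a common JSON format with a top-level "components" list of name/version entries) and flags components that appear on a blocklist of versions you’ve decided not to run. The file name and blocklist contents are hypothetical; a real pipeline would pull advisories from a vulnerability feed rather than a hard-coded set.

    # Sketch: cross-check a CycloneDX-style SBOM against a blocklist of
    # known-bad component versions. File name and blocklist are hypothetical.
    import json

    BLOCKLIST = {
        ("example-updater", "5.6.0"),   # hypothetical compromised release
    }

    def flag_components(sbom_path):
        with open(sbom_path) as f:
            sbom = json.load(f)
        for comp in sbom.get("components", []):
            key = (comp.get("name"), comp.get("version"))
            if key in BLOCKLIST:
                yield key

    if __name__ == "__main__":
        for name, version in flag_components("sbom.cdx.json"):
            print(f"blocked component present in SBOM: {name} {version}")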

6. Incident Response Needs to Be Planned in Advance

Maersk’s recovery team did impressive work, but they were largely improvising. The playbooks didn’t exist. The out-of-band communication plan didn’t exist. They figured it out under pressure.

Before an incident happens, you need:

  • A communication plan that doesn’t depend on the network that might be down
  • Pre-authorised decision frameworks: who can authorise taking down production systems to contain the spread?
  • Documented recovery procedures that have actually been tested
  • Pre-established relationships with IR firms (calling cold at 2am during an active incident is not the time to start vetting vendors)
  • Tabletop exercises that practice realistic scenarios, including “everything is down”

The Ghana domain controller saved Maersk months of recovery time, but it was pure luck. A proper DR architecture would have had isolated backup domain controllers as part of the design.

7. Detection Has to Be Behavioural, Not Just Signature-Based

NotPetya’s use of PsExec and WMIC was largely invisible to the security tools of its era because those tools are legitimate. Signatures don’t help if the malware is mostly living off the land.

The detections that would have caught NotPetya’s behaviour:

  • LSASS being read by an unexpected process (classic Mimikatz indicator)
  • A single machine scanning hundreds of hosts on port 445 in a short window (sketched below)
  • PsExec invoked from an unusual source or at an unusual rate
  • Task Scheduler entries created by non-admin processes
  • Mass file extension changes on a workstation (encryption indicator)

None of these requires exotic tooling. A decent EDR and a SIEM with reasonable rules would surface all of them. The challenge is tuning them to alert at useful rates without alert fatigue drowning out the signal.
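
For the port-scan signal specifically, the underlying rule is simple enough to sketch: count the distinct hosts each source contacts on port 445 inside a sliding window and alert past a threshold. The sketch assumes you already have time-ordered connection logs as (timestamp, source, destination, destination port) tuples; the 30-second window and 100-host threshold are illustrative values, not tuned recommendations.

    # Sketch: alert when one source contacts many distinct hosts on TCP 445
    # within a short window. Log format, window, and threshold are illustrative.
    from collections import defaultdict, deque

    WINDOW_SECONDS = 30
    THRESHOLD = 100   # distinct destinations on port 445 within the window

    def detect_smb_scans(events):
        """events: time-ordered (timestamp, src_ip, dst_ip, dst_port) tuples."""
        recent = defaultdict(deque)   # src -> deque of (ts, dst)
        alerted = set()
        for ts, src, dst, port in events:
            if port != 445:
                continue
            q = recent[src]
            q.append((ts, dst))
            while q and ts - q[0][0] > WINDOW_SECONDS:
                q.popleft()
            if src not in alerted and len({d for _, d in q}) >= THRESHOLD:
                alerted.add(src)
                yield ts, src, len(q)

    if __name__ == "__main__":
        # Synthetic burst: one workstation touching 150 hosts within seconds.
        burst = [(i * 0.1, "10.0.5.23", f"10.0.5.{100 + i}", 445) for i in range(150)]
        for ts, src, hits in detect_smb_scans(burst):
            print(f"ALERT at t={ts:.1f}s: {src} hit {hits} hosts on 445 within {WINDOW_SECONDS}s")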


If NotPetya Happened Today

The specific exploit chain would be different — EternalBlue isn’t nearly as effective now because SMBv1 is mostly gone and MS17-010 is widely patched. But the underlying principles are just as relevant.

In cloud environments, instead of stealing NTLM hashes, an equivalent attack would target cloud credentials — AWS IAM keys, GCP service account JSON files, Azure managed identity tokens. These often show up in environment variables, CI/CD pipelines, or developer workstations. A compromised build server with cloud credentials could do enormous damage.
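
One narrow but useful control here is scanning for long-lived cloud keys in places they shouldn’t be. The sketch below looks for the AWS access key ID pattern (the string AKIA followed by 16 uppercase letters or digits) in the current process environment and in a couple of example file paths; the paths are hypothetical, and real secret scanners cover far more credential formats and locations than this.

    # Sketch: look for long-lived AWS access key IDs (AKIA + 16 chars) in
    # the environment and a few config files. Paths are hypothetical; real
    # secret scanners cover many more formats and locations.
    import os
    import re

    AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")
    CANDIDATE_FILES = [
        os.path.expanduser("~/.aws/credentials"),
        ".env",   # hypothetical CI/CD or application config
    ]

    def scan_environment():
        for name, value in os.environ.items():
            if AWS_KEY_RE.search(value):
                yield f"environment variable {name}"

    def scan_files():
        for path in CANDIDATE_FILES:
            try:
                with open(path, errors="ignore") as f:
                    if AWS_KEY_RE.search(f.read()):
                        yield path
            except OSError:
                continue

    if __name__ == "__main__":
        findings = list(scan_environment()) + list(scan_files())
        for where in findings:
            print("possible long-lived AWS key in:", where)
        if not findings:
            print("no AKIA-style keys found in the checked locations")

Finding a key this way is the easy part; the harder discipline is rotating it and replacing it with a short-lived credential mechanism.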

In Kubernetes clusters, overly permissive RBAC lets a compromised pod access secrets, query the API server, and potentially pivot to cloud provider credentials. Container-to-container lateral movement is a real attack surface.

In CI/CD pipelines, the M.E.Doc attack was a supply-chain attack targeting a build system. The same attack model maps directly to compromised GitHub Actions, poisoned npm packages, or malicious container images in a private registry. The same pattern showed up in the XZ Utils backdoor discovered in 2024.

In SaaS-heavy environments, OAuth tokens and API integrations mean a compromise in one SaaS platform can cascade through connected systems in ways that aren’t obvious until they’re being exploited.

The threat model hasn’t changed. The infrastructure has.


Key Technical Summary

The exploit:

  • EternalBlue (CVE-2017-0144) and EternalRomance targeted SMBv1 with no authentication required
  • The MS17-010 patch was available 106 days before the attack; many organisations hadn’t applied it
  • Exploitation gave SYSTEM-level access on the remote machine automatically

Lateral movement:

  • Mimikatz-style LSASS dumping extracted credentials from any machine where an admin had recently authenticated
  • PsExec and WMIC used stolen credentials to spread to patched machines
  • Both tracks ran in parallel, simultaneously, from every infected machine

The payload:

  • MBR overwritten via scheduled reboot
  • MFT encrypted, making file recovery impossible even with MBR repair
  • File-level encryption using properly implemented RSA+AES — no crypto bugs to exploit
  • No working decryption mechanism ever existed

Detection:

  • Behavioural signals: anomalous LSASS access, mass port 445 scanning, unusual PsExec/WMIC activity, bulk file modifications
  • The speed of propagation made reactive response nearly impossible without pre-positioned controls

Recovery:

  • Full reimage required for every infected machine
  • Active Directory reconstruction was the critical path
  • Clean, isolated, tested backups were the deciding factor between days and months of recovery

Conclusion

NotPetya happened because many things went wrong at once — an unpatched vulnerability, a flat network architecture, credentials cached in memory, backups on network shares, and no tested recovery plan. None of these failures was unusual. They’re the default state of most enterprise environments.

The attack was deliberately designed to maximise damage to Ukraine, but the design — worm-like propagation through SMB and credential reuse — didn’t care about borders. It spread to whoever was reachable and undefended.

The $10 billion damage figure is often what gets quoted, but the more important number might be this one: 106 days. That’s how long the patch had been available before NotPetya hit. For most of the organisations that took the worst damage, this wasn’t a sophisticated zero-day attack on hardened infrastructure. It was an old, publicly known vulnerability with a patch that hadn’t been applied.

NotPetya didn’t require extraordinary attacker capability to find victims. It just needed ordinary defenders who hadn’t done ordinary maintenance.

The attack is now eight years in the past. EternalBlue isn’t the threat it was. But supply-chain attacks, credential theft, lateral movement through legitimate tools, and destruction masquerading as ransomware — all of that is still happening, more frequently than before, against more complex infrastructure.

The question isn’t whether an attack at this scale is possible again. It’s whether the work we’ve done since 2017 would actually contain it.


Lessons at a Glance

Area | What to Do | Why It Matters for NotPetya-Style Attacks
Patch Management | Critical patches applied within 30 days; disable SMBv1 | EternalBlue was patched 3 months before the attack
Network Segmentation | Enforce isolation between segments; no flat internal networks | Would have limited the blast radius across geographies
Backups | 3-2-1 rule, air-gapped or immutable, tested regularly | Clean, tested backups were the difference between days and months of recovery
Credential Hygiene | LAPS, Protected Users, tiered admin model, MFA | Mimikatz-style harvesting drove most lateral movement
Supply-Chain Security | Vendor assessments, SBOM, monitor trusted software behaviour | M.E.Doc update was the initial infection vector
Incident Response | Playbooks, out-of-band comms, pre-authorised decisions | Maersk improvised; a plan would have helped
Detection Engineering | Behavioural rules: LSASS access, port scans, PsExec anomalies | Signature detection failed; behavioural detection wouldn’t have
Disaster Recovery | Defined RTO/RPO, isolated backup DCs, tested restore procedures | One offline DC in Ghana saved Maersk months
Zero Trust | Explicit auth for every resource; don’t trust internal traffic | Would have limited credential reuse across the network
DevSecOps | Secrets management, dependency scanning, image signing | Directly addresses supply-chain compromise vectors

Attack Timeline

Date | Event
Early 2017 (est.) | M.E.Doc update infrastructure compromised, months before the attack
March 14, 2017 | Microsoft releases MS17-010 (EternalBlue patch)
April 14, 2017 | Shadow Brokers publish EternalBlue publicly
May 12, 2017 | WannaCry uses EternalBlue; global awareness of the vulnerability increases
June 27, 2017 | NotPetya deployed via M.E.Doc update, ~10:30 AM Kyiv time
June 27, 2017 | Maersk, Merck, FedEx/TNT, Mondelez impacted within hours
June 27–28, 2017 | Security researchers confirm the malware is a wiper; ransom mechanism declared non-functional
~July 7, 2017 | Maersk completes core infrastructure rebuild (approx. 10 days)
February 2018 | US, UK, Australia, Canada, and New Zealand formally attribute NotPetya to the Russian GRU
2018–2019 | Total damage estimates reach $10 billion+
