What Actually Happens When Ransomware Hits Your Backup Server
Most organizations have a backup strategy. What they don't always have is one that survives a targeted attack, where the attacker knows your backup infrastructure exists, knows where it lives, and goes after it deliberately before triggering encryption. This article walks through that chain step by step: what the attacker does, what stops them, and what doesn't.
Why Attackers Target Backup Infrastructure First
Encrypting production data alone doesn't guarantee payment. If you have clean, recent backups, you restore and move on; no ransom needed. Attackers figured this out years ago. The extortion only works when recovery is off the table.
So modern ransomware groups spend time inside your network before triggering encryption. The more organized ones, the ransomware-as-a-service operations, treat it like a job. Current data puts the median dwell time between 5 and 10 days. That's enough time to find your backup infrastructure, understand it, and gut it before you even know they're in.
The 2024 Veeam Ransomware Trends Report, based on 1,200 organizations that experienced an attack in 2023, found that backup repositories were targeted in 96% of attacks. Attackers successfully breached them in 76% of those cases. That's not an edge case. That's what a normal ransomware incident looks like now.
The reason backup infrastructure is such an easy target is structural. Backup servers need broad network access: they have to reach everything they protect. They run with elevated credentials. And most organizations still treat them as internal IT systems rather than as security-sensitive targets. That assumption is exactly what gets exploited.
The Scenario: A Mid-Size Organization With "Good" Backups
Here's the target. This isn't an organization that ignored backup. By conventional measures, they were doing it right:
- Veeam Backup & Replication running on a Windows Server VM
- Daily backups of ~200 VMs to a local backup repository on a dedicated Windows file server (SMB share)
- Weekly offsite copies to a cloud storage bucket
- Backup jobs running reliably, retention set to 30 days
- The VBR service account has local admin on the backup repository server
- RDP is enabled on the VBR server for remote administration; access is "restricted to internal network"
- No hardened repository, no immutability, no MFA on the VBR web console
This is a competent team. They've prioritized availability. They just haven't treated the backup environment as a security surface yet. Here's what that costs them.
The Attack Chain, Step by Step
Step 1: Initial Access - Phishing Email with Malicious Macro
Someone in finance gets a convincing invoice email with an Excel attachment. The macro drops a Cobalt Strike beacon that calls back to the attacker's C2 infrastructure. Endpoint protection flags it. The user clicks through the warning. The attacker now has a shell running under a domain account.
Step 2: Reconnaissance - Mapping the Environment (Days 1-5)
The attacker goes quiet for a few days. They run BloodHound to map Active Directory, find privileged accounts, and chart a path to domain admin. They scan for RDP and SMB, enumerate shares, and look for backup software. They find VBR on vbr-server.corp.local and the repository at \\backup-repo\backups\. Both go on the list.
Step 3: Privilege Escalation - Credential Dumping to Domain Admin
An unpatched vulnerability in an internal service gets them local admin on the finance workstation. They run Mimikatz against LSASS and pull cached credentials from an IT admin who logged in recently. Domain admin. Every system in the environment is now reachable.
Step 4: Backup Targeting - The Kill Shot Before Encryption (Days 6-8)
Domain admin in hand, the attacker RDPs directly to the VBR server. They disable all backup jobs: no new restore points from this moment forward. Then they connect to the repository and delete the entire backup file tree. 30 days of restore points, gone in minutes. They delete the VBR configuration database backup too. Then they disable the Veeam services so nothing can auto-restart. Finally, they pull the cloud storage credentials out of VBR's configuration and delete the offsite copies as well. Every recovery option is gone. The visible attack hasn't started yet.
Step 5: Pre-Encryption - Volume Shadow Copy Deletion
Before the encryptor runs, `vssadmin delete shadows /all /quiet` hits every reachable machine. Volume Shadow Copies, the last-resort quick-recovery option a lot of teams don't realize they're relying on, are gone. A Group Policy script disables Windows Defender and clears Event Logs across the domain. The environment is now blind.
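These shadow-copy destruction commands are well-documented, high-fidelity precursor indicators, and EDR and SIEM rules commonly alert on exactly these strings. As a toy illustration (the command patterns are real and widely published; the function and log format are hypothetical), a detector can be as simple as a substring match over collected process command lines:

```python
# Known shadow-copy destruction patterns. Any hit is a strong ransomware
# precursor signal. The pattern list is real; everything else is a sketch.
SUSPICIOUS = (
    "vssadmin delete shadows",
    "wmic shadowcopy delete",
    "wbadmin delete catalog",
)

def flag_shadow_copy_tampering(command_lines):
    """Return every command line matching a known destruction pattern."""
    return [c for c in command_lines if any(p in c.lower() for p in SUSPICIOUS)]

log = [
    "svchost.exe -k netsvcs",
    "vssadmin delete shadows /all /quiet",
    "WMIC shadowcopy delete",
]
flag_shadow_copy_tampering(log)
# -> ['vssadmin delete shadows /all /quiet', 'WMIC shadowcopy delete']
```

In practice this lives in an EDR or SIEM rule rather than a script, but the point stands: by the time this fires, the attacker is already deep inside, so it should page someone immediately.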
Step 6: Encryption - Simultaneous Domain-Wide Deployment
The ransomware payload deploys via Group Policy to every domain-joined system at once. Encryption starts on all of them simultaneously: file servers, application servers, domain controllers, the VBR server itself. Within 90 minutes the environment is gone. Ransom notes on every desktop. The note specifically calls out that the backups and cloud copies have been deleted, and quotes a price.
❌ Cloud offsite copies - deleted in Step 4 (credentials were in VBR config)
❌ Volume Shadow Copies - deleted in Step 5
❌ VBR configuration database backup - deleted in Step 4
? Tape copies - the organization doesn't have any
? Cloud backup vendor snapshots - cloud provider may have object versioning enabled (unverified)
RESULT: Full environment encrypted, no verified recovery path. Negotiating ransom.
Step 4 is where this was over, not Step 6. By the time the ransom notes appeared, there was nothing to recover from. The backup infrastructure was destroyed days before the encryption ran. Every alert, every EDR flag, every SIEM notification triggered by encryption was already irrelevant. The fight was already lost.
Defense Mapping: What Stopped What
Here's every defense this organization had, mapped against each phase:
| Attack Phase | Defense Present | Result | Why It Failed |
|---|---|---|---|
| Phishing / initial access | Endpoint protection, email filtering | ⚠ Partial | User clicked through the warning. User awareness training is not a reliable control. |
| AD reconnaissance | None (BloodHound runs as a normal domain user) | ❌ Exposed | AD over-permissioning is endemic. Most orgs don't detect BloodHound collection. |
| Privilege escalation | Patch management (incomplete) | ❌ Exposed | One unpatched internal service was enough. Mimikatz succeeded because credential caching wasn't disabled. |
| RDP access to VBR server | "Restricted to internal network" | ❌ Exposed | With domain admin credentials the attacker was already "on the internal network." Network perimeter means nothing after lateral movement. |
| Backup deletion | None | ❌ Exposed | The VBR service account had full write access to the repository. No immutability, no MFA, no second-account approval. Direct deletion took minutes. |
| Cloud copy deletion | None | ❌ Exposed | Cloud credentials were stored in VBR config. The attacker read them from the already-compromised VBR server and deleted the offsite copies. |
| VSS deletion | None | ❌ Exposed | VSS has no protection against a domain admin running vssadmin. |
| Ransomware encryption | EDR, Windows Defender (disabled in Step 5) | ❌ Exposed | Defenses were neutralized before the encryptor ran. Encryption was the last step, not the first. |
Every single defense assumed the attacker was coming from outside the perimeter. Once they had domain admin (which triggered no alerts), the perimeter model was irrelevant. They were already everywhere.
How Zero Trust Data Resilience Changes the Outcome
Same attack. Same phishing email, same initial foothold, same BloodHound recon, same Mimikatz dump, same domain admin. This time the organization is running Veeam's Zero Trust Data Resilience architecture: a VSA for the management plane, VIA nodes as proxies, and a Veeam Hardened Repository for backup storage.
Steps 1 through 3 play out identically. Step 4 is where it goes differently.
Attacker Reaches the VBR Server
Domain admin in hand, the attacker reaches the VBR server. They disable backup jobs. Then they go looking for the repository, but there's no Windows SMB share to connect to. The repository is a JeOS Linux appliance. SSH is off by default. RDP doesn't exist. It doesn't speak Windows credentials at all.
💡 VHR: No Windows Credential Surface, MFA Required
Domain admin gets them nothing here. Accessing the JeOS appliance requires the veeamadmin account credentials plus a valid TOTP token, a second factor that lives on a phone, not on any domain-joined system. No SSH port. No RDP. The only network-accessible surfaces are port 10443 (Web UI) and the Veeam application port VBR uses for management. Both require independent authentication with MFA.
Attacker Attempts to Delete Backups Through VBR Console
They have VBR admin access. They try removing the Hardened Repository from VBR's inventory, hoping that triggers a deletion of the contents. VBR submits the request. The VHR rejects it. Immutability is enforced at the XFS filesystem level, by the kernel rather than by Veeam software. A compromised management server can't override a filesystem flag.
💡 VHR: Backup Data Intact - Immutability Holds
The backup files within the retention window stay exactly where they are. Removing the VHR from VBR's inventory is just deleting a database record. The data on disk is untouched. Even if VBR is completely wiped and reinstalled from scratch, it can re-import the backup files from the VHR and restore from them.
Attacker Attempts to Enable SSH on the VHR to Get a Shell
Say they somehow got the veeamadmin password. They hit the VHR Web UI on port 10443. MFA is mandatory, and they don't have the TOTP seed. Authentication fails. But even if they cleared that hurdle, enabling SSH on the VHR requires Security Officer (veeamso) approval: a separate account, with its own TOTP, on a different person's phone.
💡 Four-Eyes: One Account Isn't Enough
You can't enable SSH from a single compromised account, no matter how privileged. The Security Officer role exists for exactly this reason: destructive or access-expanding actions need independent approval from a second account with its own separate authentication. Compromise the backup admin account entirely and you still can't act unilaterally on the things that matter.
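The control reduces to a simple invariant: a destructive action executes only after two distinct accounts, each passing its own MFA check, have approved it. A toy sketch of that logic (the account names are borrowed from the article; the class and its methods are illustrative, not a Veeam API):

```python
class FourEyesGate:
    """Toy four-eyes authorization: an action is allowed only after
    approval from two *distinct* accounts, each with valid MFA."""

    def __init__(self, admin: str, security_officer: str):
        self.required = {admin, security_officer}
        self.approvals: set[str] = set()

    def approve(self, account: str, mfa_ok: bool) -> None:
        # An approval only counts from a required account with valid MFA.
        if account in self.required and mfa_ok:
            self.approvals.add(account)

    def can_execute(self) -> bool:
        return self.approvals == self.required

gate = FourEyesGate("veeamadmin", "veeamso")
gate.approve("veeamadmin", mfa_ok=True)
gate.can_execute()   # False: the Security Officer hasn't approved
gate.approve("veeamso", mfa_ok=False)
gate.can_execute()   # still False: approval without MFA doesn't count
```

Because the required set holds two distinct identities, fully owning one account, including the most privileged one, can never satisfy the gate on its own.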
⚠ Backup jobs - disabled by attacker before encryption
⚠ Last 2 days of incremental backups - potentially incomplete due to disabled jobs
✅ Hardened Repository backup data - immutable, intact. 30 days of restore points.
✅ Offsite cloud copies - separate credentials, not accessible from compromised VBR
✅ VBR can be reinstalled and repository re-imported without data loss
RESULT: VBR reinstalled from scratch (~2 hours). Backup data re-imported. Full restore initiated within 4 hours of incident declaration. Ransom not paid.
What ZTDR Still Doesn't Protect Against
ZTDR changes the outcome of most ransomware attacks in a real way. But it has limits, and it's worth being straight about them:
Job disabling still works. Once attackers are in VBR with admin credentials, they can kill all running jobs. From that moment, no new restore points are created. Your most recent recovery point freezes at whatever time the attacker hit disable, which could be days before encryption fires. The VHR protects everything already written to it. It can't retroactively capture backups that never ran.
Immutability is time-bounded. If your retention window is 7 days and the attacker sat quietly for 10 days before triggering encryption, your oldest restore points already aged out before you needed them. Current data puts median ransomware dwell time at 5-10 days, which means a 7-day window is cutting it close. A 30-day immutability window is a safer baseline for most environments.
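The arithmetic is worth making explicit. If jobs are disabled at the start of the dwell period, the restore history still inside the immutability window when encryption fires is roughly the window minus the dwell time (a deliberate simplification that ignores job schedules and copy intervals):

```python
def usable_restore_days(immutability_days: int, dwell_days: int) -> int:
    """Days of restore history still immutable when encryption fires,
    assuming backup jobs were disabled at the start of the dwell period.
    Simplified model: ignores job schedules and partial days."""
    return max(0, immutability_days - dwell_days)

usable_restore_days(immutability_days=7, dwell_days=10)   # -> 0: all aged out
usable_restore_days(immutability_days=30, dwell_days=10)  # -> 20 days remain
```

The takeaway is the shape of the formula: the immutability window has to exceed plausible dwell time by a comfortable margin, or the protection expires exactly when you need it.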
XFS immutability can't be bypassed by software on a compromised management server, but it can be bypassed by someone with physical access to the machine, or by a hypervisor admin who mounts the VM's disk directly. ZTDR assumes the physical layer and hypervisor are trusted. If they're not, the protection breaks at a layer Veeam doesn't control.
ZTDR protects your ability to recover, not production itself. Servers still get encrypted. Applications still go down. Users still can't work. The difference is that you can restore without paying. Plan for a recovery window measured in hours to days for a full environment. What you do during that window, how you keep the business running, still matters.
Practical Takeaways
The ransom notes, the encrypted desktops, the helpdesk calls: that's the announcement. The decision about whether you'll recover was made silently, days earlier, in your backup environment. By the time the visible incident started, this was already over.
The Architectural Changes That Actually Matter
- Move repositories off Windows SMB shares. A Windows file share with domain credentials is trivially accessible to anyone holding domain admin. A JeOS Linux appliance with MFA and no Windows credential surface is not in the same threat category.
- Immutability has to live below the application layer. An application-level delete, including one issued from a compromised VBR server, can't override an XFS filesystem flag. The protection needs to be independent of Veeam software being intact or trustworthy.
- Don't store cloud credentials in VBR config. If the VBR server is compromised, anything in its configuration is compromised with it. Use IAM roles where the provider supports them, or a separate credential store that's outside the management plane entirely.
- MFA on backup consoles is not optional. Domain admin alone shouldn't be enough to access backup management. A second factor that doesn't live on any domain-joined system is what breaks the credential chain after lateral movement.
- Configure the Security Officer role and assign it to someone in security, not IT. Single-account control over destructive operations is the structural weakness. Four-eyes authorization means one compromised account can't unilaterally undo your protection.
- Test recovery, not just backup jobs. Jobs showing green doesn't mean you can restore. In this scenario, everything looked healthy right up until it wasn't. Automated recovery verification (booting the backups and confirming they actually work) is what gives you real confidence.
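Veeam ships its own automated verification (SureBackup), but the core idea is product-independent: restore into an isolated environment, then compare what came back against checksums captured at backup time. A minimal sketch of the comparison step (everything here is illustrative, not a Veeam API):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Content hash of a single restored file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_restore(restore_dir: Path, manifest: dict[str, str]) -> list[str]:
    """Return relative paths that are missing from the restore, or whose
    content no longer matches the checksum recorded at backup time."""
    failures = []
    for rel_path, expected in manifest.items():
        candidate = restore_dir / rel_path
        if not candidate.is_file() or sha256_of(candidate) != expected:
            failures.append(rel_path)
    return failures
```

An empty failure list from a scheduled test restore is a far stronger signal than a green backup job: it proves the data is actually recoverable, not just that a job completed.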
Nobody is promising ransomware prevention. The goal is to make paying the ransom unnecessary. If you can restore, the encryption of production becomes an expensive, painful incident โ not a crisis you negotiate out of. That's the whole point of building the backup environment this way.
The companion article to this one maps the 3-2-1-1-0 rule to specific Veeam v13 products: what each layer is, which product satisfies it, and how they all connect into a complete architecture.