The 3-2-1-1-0 Rule in Practice: Building It With Veeam v13


📅 March 5, 2026 ⏱ ~15 min read 🏷 3-2-1-1-0 · VBR v13 · VHR · SOBR · SureBackup

Most backup engineers can recite the 3-2-1 rule without thinking. It's been the baseline for over two decades. But it was written when ransomware wasn't in the threat model, and it's been extended twice since then for good reasons. This article maps the current 3-2-1-1-0 rule to specific Veeam v13 products - what each number means, which product satisfies it, and why it matters.

How the Rule Evolved - and Why

Peter Krogh coined the original 3-2-1 rule in 2005, in the context of digital photography - three copies, two different media types, one copy offsite. It covers the three most common failure modes: single-copy loss, media-class failure, and site-level disaster. For roughly twelve years, that was the whole conversation.

| Rule | Era | New Protection Added | What Threat Forced It |
|---|---|---|---|
| 3-2-1 | ~2005 | Multiple copies, multiple media, geographic separation | Hardware failure, fire, flood, theft |
| 3-2-1-1 | ~2017 - 2020 | One copy that is offline, immutable, or air-gapped | Ransomware targeting networked backup repositories and cloud copies |
| 3-2-1-1-0 | ~2022 - present | Zero unverified restore points: automated recoverability testing | Backup corruption going undetected; organizations discovering bad backups only during incidents |

Each extension was driven by real failures at scale. The second "1" came when ransomware groups started routinely targeting and deleting networked backup copies before triggering encryption - exactly what the ransomware attack chain article covers in detail. The "0" came when organizations discovered, during actual incidents, that backups they thought were healthy had been silently corrupted or untested for months.

โ„น๏ธ Veeam's Own Articulation

Veeam formally adopted the 3-2-1-1-0 rule as part of their best practices guidance. In their framing, the "0" specifically means zero errors verified by SureBackup - not zero job failures, but zero VMs that failed automated boot-and-application testing in an isolated lab environment.

The Complete 3-2-1-1-0 Rule at a Glance

3 Copies · 2 Media Types · 1 Offsite · 1 Offline / Air-gapped · 0 Unverified

Here's each number, what problem it's actually solving, and which Veeam v13 product covers it.

The 3: Three Copies of Your Data

3 · Three total copies of your data · 1 primary (production) + 2 backup copies

Three total copies - and the live production data counts as one. You need at least two backup copies beyond production. One backup is a single point of failure. Two means one has to be destroyed or corrupted before you're in real trouble.

VSA (VBR Server) · VIA (Backup Proxy)

In a Veeam v13 environment: the VSA (Veeam Software Appliance) hosts the VBR server that orchestrates everything. The VIA (Veeam Infrastructure Appliance) nodes are the backup proxies - they read from the VMware or Hyper-V infrastructure and write to the repositories. Those jobs produce the two copies beyond production.

Copy 1 is a local backup repository - fast recovery, shorter retention. Copy 2 is a secondary target: a remote repository, object storage, or the Hardened Repository you'll see under the second "1." Backup Copy jobs in VBR run on an independent schedule from the primary job, which matters - a copy job that runs right after the primary isn't real separation if both get caught in the same event.

💡 Production Data Counts as One Copy

A common misread is that you need three backup copies. You don't - production counts as one. Production + local backup + offsite backup = 3. If you're running two independent backup jobs to two separate targets, you're at 4, which is fine. The rule is a floor, not a ceiling.

The 2: Two Different Storage Media Types

2 · Two different storage media types · Protects against media-class failure or vulnerability

The "2" protects against failures that take out an entire media type at once - a firmware bug that bricks all drives of a specific model, a ransomware variant that specifically targets a storage platform, or the common situation where both "copies" are actually on the same SAN under different folder paths. That last one is more common than people admit.

VHR (Hardened Repository, local disk, XFS) · Object Storage (S3/Wasabi/Azure Blob) · SOBR (Scale-Out Backup Repository)

In practice with Veeam v13: local disk - the VHR or a standard repository - is your first media type. Cloud object storage (Wasabi, S3, Azure Blob, Backblaze B2) is your second. No local ransomware strain, drive firmware bug, or datacenter event reaches both at the same time.

The Scale-Out Backup Repository (SOBR) is what ties this together cleanly. You define a performance tier (fast local storage for recent backups) and a capacity tier (object storage for older data), and VBR handles the tiering automatically based on age. New backups land on local storage. Older ones migrate to cloud. You get both media types under a single repository policy without managing the movement manually.
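The policy SOBR automates is easy to reason about: restore points younger than the operational window stay on fast local storage, everything older moves to object storage. A minimal sketch of that age-based decision — the threshold and function names are illustrative, not Veeam's API:

```python
from datetime import datetime, timedelta

# Illustrative model of SOBR age-based tiering -- not Veeam's actual API.
OPERATIONAL_WINDOW_DAYS = 14  # restore points younger than this stay local

def tier_for(restore_point_time: datetime, now: datetime) -> str:
    """Decide which tier a restore point belongs on, by age alone."""
    age = now - restore_point_time
    if age <= timedelta(days=OPERATIONAL_WINDOW_DAYS):
        return "performance"  # fast local disk (e.g. the VHR)
    return "capacity"         # cloud object storage

now = datetime(2026, 3, 5)
assert tier_for(datetime(2026, 3, 1), now) == "performance"  # 4 days old
assert tier_for(datetime(2026, 1, 10), now) == "capacity"    # ~8 weeks old
```

The point of the sketch is that the operator sets one threshold per policy; the mover logic, retention, and monitoring stay inside the SOBR rather than in separate jobs.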

The First 1: One Copy Offsite

1 · One copy in a geographically separate location · Survives site-level disasters - fire, flood, power grid failure

If all your copies are in the same building - or even the same city - a site-level event takes everything. Fire, flood, power grid failure, a building you can't access. The offsite copy is the one that survives when the primary site doesn't.

Object Storage (S3/Wasabi/Azure/B2) · SOBR Capacity Tier

The cleanest way to handle this in Veeam v13 is the SOBR capacity tier pointed at cloud object storage. Data moves from the local performance tier to cloud automatically based on age - no separate Backup Copy job to configure or schedule, no manual movement. It's part of the SOBR policy.

If you want explicit control, a Backup Copy job targeting a cloud repository gives you an independently scheduled copy operation. That's worth considering if you have tight RPO requirements for the offsite copy - SOBR capacity tiering isn't real-time. It moves data when it ages out of the local tier, not immediately after the backup job runs.

One thing worth calling out: don't store the cloud credentials in VBR's configuration. If VBR is compromised - and the ransomware article shows exactly how that happens - those credentials go with it. Use IAM roles where the provider supports them, or store credentials somewhere outside the management plane.

The Second 1: One Copy Offline or Air-Gapped

1 · One copy that is offline, immutable, or air-gapped · Survives a fully compromised management plane

This is the number that got added because of ransomware. Production, local backup, and offsite copy are all network-accessible to a sufficiently privileged attacker - and the ransomware article demonstrates exactly how that privilege gets obtained. This copy has to be protected in a way that doesn't depend on Veeam software, Windows credentials, or network access controls remaining intact.

VHR - Veeam Hardened Repository

The Veeam Hardened Repository is built specifically for this. Backup files written to the VHR get an XFS immutability flag enforced at the Linux kernel level. They can't be deleted or modified - by any software, any user, or any process, including a fully compromised VBR server - for the duration of the retention period. The filesystem rejects the delete. It doesn't matter what VBR instructs.

This is different from immutability settings in object storage or in VBR's own configuration. Those are software-level controls - overridable by software with sufficient permissions. XFS immutability flags are not. They can only be bypassed by someone with physical machine access running OS-level tools, which is outside the threat model for most ransomware attacks.
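The behavior the kernel flag enforces can be modeled as a repository that refuses deletes until a per-file retention clock expires, regardless of who asks. This is a conceptual sketch of the policy, not how the VHR is actually implemented:

```python
from datetime import datetime, timedelta

class ImmutableRepo:
    """Toy model of write-once retention: deletes fail until expiry."""
    def __init__(self, retention_days: int):
        self.retention = timedelta(days=retention_days)
        self.expiry = {}  # filename -> datetime after which delete is allowed

    def write(self, name: str, now: datetime) -> None:
        self.expiry[name] = now + self.retention

    def delete(self, name: str, now: datetime) -> None:
        if now < self.expiry[name]:
            # Mirrors the kernel rejecting the unlink: the caller's
            # privilege inside the backup software is irrelevant.
            raise PermissionError(f"{name} is immutable until {self.expiry[name]}")
        del self.expiry[name]

repo = ImmutableRepo(retention_days=30)
t0 = datetime(2026, 3, 5)
repo.write("backup.vbk", t0)
try:
    repo.delete("backup.vbk", t0 + timedelta(days=10))  # attacker at day 10
    raise AssertionError("delete should have been rejected")
except PermissionError:
    pass
repo.delete("backup.vbk", t0 + timedelta(days=31))  # allowed after expiry
```

The distinction in the text maps onto where this check lives: for Object Lock it lives in the provider's software, for the VHR it lives in the filesystem itself.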

Tape also satisfies this number - a cartridge physically removed from the library is genuinely air-gapped, full stop. Organizations with existing tape infrastructure often keep it as the offline copy alongside the VHR, which gives them the air-gap guarantee of tape combined with the recovery speed of disk.

โš ๏ธ Object Storage Object Lock โ‰  The Same Thing

Object Lock (S3 Object Lock, Azure immutable storage) is often cited as satisfying this requirement. It provides real protection - but it's worth being clear about the difference. Object Lock is enforced by the cloud provider's software and APIs. That's strong in practice, but it depends on the cloud account not being compromised and the provider's systems working correctly. XFS immutability on a local VHR is enforced by a kernel. No API, no account, no provider. Both are valid layers of defense; they're not the same thing.

The 0: Zero Unverified Backups

0 · Zero unverified restore points · Automated recoverability testing - not just job success indicators

This is the number that separates organizations that have backups from organizations that can actually restore from them. A job showing "Success" in VBR tells you the backup completed without errors. It does not tell you the VM will boot, the application will start, the database will be consistent, or that the backup files haven't silently corrupted over the past three months.

SureBackup - Automated Recovery Verification

Veeam's SureBackup is how you get to zero. It takes backup images, boots the VMs in a completely isolated virtual lab - no connectivity to production - and runs automated tests: does the VM power on, does the OS boot, do the specified services start, do application-level checks pass. For VMs with defined roles like domain controllers or database servers, Veeam runs role-specific tests automatically.

If a VM fails, you get an alert. You find out during the scheduled test - not at 2 AM on the first night of a ransomware recovery when you discover the backup is unusable. The difference in outcome between those two moments of discovery is enormous.

The isolated lab is created and torn down automatically - no risk of bleeding into production. You configure which VMs to test, how often, and what tests to run. For the most critical systems - domain controllers, primary database servers, anything that would block a broader restore - a weekly SureBackup run is a reasonable starting point.
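The verification loop described above can be sketched as a pipeline of ordered checks per VM, where the "0" means every tested restore point passed every check. The VM names and check functions here are invented for illustration - SureBackup's real tests run against VMs booted in the isolated lab:

```python
# Illustrative verification harness -- each "check" stands in for a real
# boot, service, or application-level test SureBackup would run.
def verify_restore_point(vm_name, checks):
    """Run ordered checks (boot, services, app-level); stop at first failure."""
    for check_name, check in checks:
        if not check():
            return (vm_name, "FAILED", check_name)
    return (vm_name, "VERIFIED", None)

def run_surebackup_style_job(vms):
    results = [verify_restore_point(name, checks) for name, checks in vms]
    failures = [r for r in results if r[1] == "FAILED"]
    return results, failures  # the "0" in 3-2-1-1-0 means failures == []

vms = [
    ("dc01",  [("boot", lambda: True), ("ad-services", lambda: True)]),
    ("sql01", [("boot", lambda: True), ("db-consistency", lambda: False)]),
]
results, failures = run_surebackup_style_job(vms)
assert failures == [("sql01", "FAILED", "db-consistency")]
```

A job status of "Success" corresponds only to the backup having been written; it is the per-check results in a harness like this that tell you the VM actually restores.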

💡 SureBackup Doesn't Have to Test Everything

You don't have to test every VM. The storage and compute overhead of booting everything would be significant. Start with the systems that matter most: domain controllers, primary SQL servers, anything that would block a broader restore. Get that working reliably, then expand. Some verified coverage is vastly better than none.

Putting It All Together: The Full Architecture

Veeam v13 - 3-2-1-1-0 Reference Architecture

SOURCE
Production
VMware / Hyper-V VMs
Copy #1 of your data
Management
VSA - VBR Server
Orchestration only, no data
Transport
VIA - Backup Proxies
Data mover, no storage
↓ Backup jobs write to ↓
LOCAL
Copy #2 · Immutable · Media type 1
VHR - Hardened Repository
XFS immutability · Fast recovery · SOBR performance tier
↓ SOBR capacity tiering / Backup Copy jobs ↓
CLOUD
Copy #3 · Offsite · Media type 2
Object Storage
S3 / Wasabi / Azure Blob / B2 · With Object Lock
↓ SureBackup automated testing ↓
VERIFY
Verification · The "0"
SureBackup Jobs
Isolated virtual lab · Boot testing · App-level checks
Rule coverage: 3 copies ✓ 2 media types (disk + object) ✓ 1 offsite (cloud) ✓ 1 immutable (VHR) ✓ 0 unverified (SureBackup) ✓
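The coverage summary above is mechanically checkable: count the copies, count the distinct media types, and confirm the offsite, immutable, and verified properties each hold somewhere. A small audit sketch over a hypothetical inventory of copies (the data model is invented for illustration):

```python
# Illustrative 3-2-1-1-0 audit over a hypothetical inventory of copies.
def audit_32110(copies):
    """Each copy is a dict with media/offsite/immutable/verified flags."""
    return {
        "3_copies":     len(copies) >= 3,
        "2_media":      len({c["media"] for c in copies}) >= 2,
        "1_offsite":    any(c["offsite"] for c in copies),
        "1_immutable":  any(c["immutable"] for c in copies),
        # The "0": every backup copy (production excluded) is verified.
        "0_unverified": all(c["verified"] for c in copies if c["is_backup"]),
    }

copies = [
    {"media": "disk",   "offsite": False, "immutable": False,
     "verified": True,  "is_backup": False},  # production
    {"media": "disk",   "offsite": False, "immutable": True,
     "verified": True,  "is_backup": True},   # VHR
    {"media": "object", "offsite": True,  "immutable": False,
     "verified": True,  "is_backup": True},   # SOBR capacity tier
]
assert all(audit_32110(copies).values())  # the reference architecture passes
```

Dropping any one element - say, never running verification on the VHR copy - flips exactly one key to False, which is a useful way to see which number a given gap violates.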

Practical Notes on Implementation

Start with the second "1"

If you're building this from scratch, deploy the Hardened Repository first. In a ransomware scenario, the VHR is the single component that determines whether you recover without paying or whether you're negotiating. The offsite copy and SureBackup are important, but they can come in subsequent phases. Getting the VHR deployed and receiving backup data is the highest-value first step.

Set the immutability window longer than feels necessary

Current data puts median ransomware dwell time - the gap between initial access and encryption - at around 5 to 10 days. A 7-day immutability window is cutting it close if the attacker sat quietly for 8 days before triggering. A 30-day window is a safer baseline for most environments. Yes, it costs more storage. It costs less than a ransom.
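The arithmetic behind that recommendation: a clean restore point has to both predate the intrusion and still be inside its immutability window when encryption triggers, so the window must outlast the dwell time. A quick sketch of the check:

```python
def clean_immutable_backup_survives(dwell_days: int, window_days: int) -> bool:
    """True if a pre-intrusion backup is still immutable at encryption time.

    Backups taken during the dwell may already be tainted; the newest
    clean restore point is ~dwell_days old when encryption triggers, so
    it is still protected only if the window outlasts the dwell.
    """
    return window_days > dwell_days

assert clean_immutable_backup_survives(dwell_days=8, window_days=7) is False
assert clean_immutable_backup_survives(dwell_days=8, window_days=30) is True
```

With an 8-day dwell and a 7-day window, every still-immutable restore point was taken while the attacker was already inside - which is exactly the failure mode the 30-day baseline guards against.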

The SOBR removes most of the operational complexity from two-tier storage

Without a SOBR, satisfying "2 media types" means separate backup copy jobs, independent retention configuration, and monitoring two distinct systems. With a SOBR, you configure one logical repository - VHR as the performance tier, object storage as the capacity tier - and VBR handles tiering, retention, and monitoring as a single policy. For most environments that's a significant reduction in ongoing operational work.

SureBackup alerts are only useful if someone responds to them

Running SureBackup and ignoring the alerts is worse than not running it. You end up with false confidence and a backlog of unresolved failures nobody looked at. Before you turn it on: define who gets the alerts, what the response process is for a failed test, and what your target is for resolving a failed VM. Without that, you have an alert system generating noise that nobody acts on until an actual incident surfaces the problem.

โš ๏ธ The Rule Describes a Minimum, Not a Target

3-2-1-1-0 is a floor, not a finished product. If you have critical data, tight RTOs, or regulatory obligations around recovery objectives, treat this as the baseline to build from - not the destination. Meeting the rule means you've covered the basics. What you add on top depends on your environment and what you can't afford to lose.

Veeam v13 - What Covers Each Number

  • 3 copies - VSA (VBR Server) orchestrates jobs · VIA proxies transport data · Two independent backup targets
  • 2 media types - VHR on local disk (XFS) + Object storage (S3/Wasabi/Azure) via SOBR capacity tier
  • 1 offsite - SOBR capacity tiering to cloud object storage, or explicit Backup Copy jobs to cloud repository
  • 1 immutable - Veeam Hardened Repository · XFS immutability enforced at kernel level · Independent of VBR software
  • 0 unverified - SureBackup automated recovery testing · Isolated virtual lab · Boot + application-level verification
Veeam Backup & Replication v13 · Backup Strategy · 3-2-1-1-0 Reference Architecture
