The "Invisible" Repository: Building a Budget Air-Gap for SMB Stacks

veeam v13 - immutability - smb infrastructure - air gap

What you will get out of this

A production-viable architecture for a disconnected, immutable Veeam repository using commodity hardware. You will see exactly why one path is lab-only and the other is legitimate production, and you will walk away with a step-by-step build for the machine that actually works.

💡 Why the 3-2-1-1-0 rule needs a physical air gap

The 3-2-1 rule grew two extra digits. Veeam's 3-2-1-1-0 framework requires at least one copy that is offline, air-gapped, or immutable, plus zero errors verified by SureBackup. S3 Object Lock handles the cloud immutability case. A Veeam Hardened Repository handles on-prem immutability. But neither of those is truly air-gapped if they sit on your production network.

True air-gap means the backup target has no active network path to your production environment during normal operation. It receives data during a scheduled backup window, then goes dark. That requires a physical machine you control. Not a cloud bucket. Not a NAS on VLAN 99. For SMB stacks where cloud egress costs are real and a dedicated tape library is overkill, a purpose-built low-power x86 machine hits the sweet spot: cheap, local, auditable, and genuinely disconnected between windows.

This is how you get the offline copy without a second data center, a second lease, or a $15k appliance budget.

⛔ The hard constraint: block storage only

Before choosing hardware, you need to understand the immutability mechanism. Veeam's Hardened Repository uses the Linux chattr +i command to set immutable flags on backup files at the filesystem level. This requires a filesystem that supports immutable files and extended attributes, specifically XFS or ext4. Veeam recommends XFS for the additional benefit of fast cloning via reflinks during synthetic full operations.

Hard stop: The Veeam Hardened Repository requires block storage. You cannot use an NFS-mounted volume or an SMB/CIFS-mounted volume as the repository path. Veeam's Help Center is explicit: NFS and SMB file systems do not support immutable files. If your hardware forces you toward a network share, you do not have a Hardened Repository. You have a standard Linux repository with no immutability.

This eliminates several otherwise attractive architectures. You cannot run the hardened repo on a NAS and mount it as NFS. You cannot use a shared SMB path. The disk must be locally attached: internal SATA, NVMe, or USB 3.x direct-attach. That constraint shapes everything about the hardware selection.
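You can test this constraint directly on any candidate mount point. Below is a minimal probe, assuming root privileges; the helper name and the throwaway-file approach are mine, not a Veeam utility:

```shell
#!/bin/bash
# Probe whether a mount point supports the immutable attribute (run as root).
# Hypothetical helper, not a Veeam tool: creates a temp file, tries chattr +i,
# verifies the flag with lsattr, then cleans up after itself.
supports_immutable() {
    local probe
    probe=$(mktemp -p "$1" .immutable-probe.XXXXXX) || return 2
    if chattr +i "$probe" 2>/dev/null && lsattr "$probe" 2>/dev/null | cut -d' ' -f1 | grep -q i; then
        chattr -i "$probe"
        rm -f "$probe"
        echo "OK: $1 supports immutable files"
        return 0
    fi
    chattr -i "$probe" 2>/dev/null   # in case +i stuck but the check failed
    rm -f "$probe"
    echo "FAIL: $1 does not support immutable files (NFS/SMB mount?)"
    return 1
}

# Usage: supports_immutable /backup
```

On an NFS or SMB mount the chattr call fails, which is exactly the signal that the path cannot host a Hardened Repository.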

🥧 Option A: Raspberry Pi 5, an honest assessment

The RPi 5 is genuinely impressive hardware. PCIe 2.0 x1 exposed via the FFC connector, USB 3.0, a real RTC, a 64-bit ARM Cortex-A76 CPU. For a Veeam Hardened Repository, it has two problems that are not fixable by creative configuration.

The ARM architecture problem

Veeam's Linux agent and Data Mover are compiled for x86-64. The official support matrix is explicit: ARM is not a supported architecture. The community has gotten Veeam binaries running on RPi hardware using box64, a userspace x86-64 emulation layer for aarch64. The throughput numbers from those projects are respectable. It is genuinely impressive engineering.

The problem is not whether it works. The problem is what happens when it stops working. Veeam support will not help you. An OS update can break the box64 shim. A Veeam version bump can introduce a binary that does not play nicely with the wrapper. You are running unsupported code on a platform that was not designed for it, protecting data that may be your only recovery path after a ransomware incident. That risk profile does not hold up for anything you call production.

The storage connectivity problem

Even with the software running cleanly, the RPi 5's storage options are constrained. microSD is unsuitable for a backup target: write endurance is poor and I/O bandwidth is inadequate. USB 3.0 to an external drive is workable but introduces enclosure failure modes and limited power headroom for spinning drives. The PCIe 2.0 x1 interface, via an M.2 HAT adapter, can host an NVMe SSD, but capacity at a reasonable price point is limited. None of this is fatal for lab work. All of it matters for production.

Lab use only: Raspberry Pi 5 as a Veeam Hardened Repository is a great learning exercise. You will understand the hardened repo mechanics deeply by getting it working under box64. Do not deploy this for production backups. The unsupported software stack creates a liability that negates the value of the immutability layer you are trying to build.

💻 Option B: N100 mini-PC, the right answer for production

The Intel N100 (Alder Lake-N, 2023) is a 6W TDP quad-core processor that ships in dozens of fanless and near-silent mini-PC form factors from Beelink, Minisforum, CWWK, and others. Under $200 barebones. It replaced the aging J-series Celeron as the go-to platform for low-power x86 builds. The community confirmed years ago that SFF x86 Celeron-class machines work well for exactly this use case. The N100 is a significant step up from that baseline.

It hits every checkbox the RPi 5 misses. x86-64 native architecture means full Veeam support with no workarounds. A 2.5" SATA bay or M.2 NVMe slot means local block storage that satisfies the Hardened Repository requirement. A 2.5GbE NIC, common on N100 platforms, means the backup window is not bottlenecked by the link. At 6W idle and roughly 12W under sustained backup load, running a 4-hour nightly window costs essentially nothing.

Hardware selection checklist

Not all N100 mini-PCs are equal for this build. Confirm the unit has an internal 2.5" SATA bay or M.2 NVMe slot. Units with only eMMC storage are not suitable as the primary backup drive. For the backup volume itself, a 4TB or 8TB 2.5" HDD is the typical choice: capacity matters more than IOPS for a sequential write workload, and HDDs offer cost-effective density. If the enclosure supports both M.2 and 2.5", pair an SSD for the OS with an HDD for backup data. Verify 2.5GbE NIC Linux driver support before buying.
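Much of the checklist can be verified from a live Linux USB stick before you commit the unit. A sketch that walks sysfs to show which block devices the machine actually exposes (device names and sizes will differ per machine):

```shell
#!/bin/bash
# List physical block devices with size and media type via sysfs.
# Sketch only: interpret the output against the checklist above.
# rotational=1 means a spinning HDD; 0 means SSD/NVMe/eMMC.
for dev in /sys/block/*; do
    [ -e "$dev" ] || continue
    name=$(basename "$dev")
    case "$name" in loop*|ram*|zram*|dm-*) continue ;; esac  # skip virtual devices
    [ -r "$dev/size" ] || continue
    size_gb=$(( $(cat "$dev/size") * 512 / 1000000000 ))     # 512-byte sectors -> GB
    rot=$(cat "$dev/queue/rotational")
    printf '%-10s %6d GB  rotational=%s\n' "$name" "$size_gb" "$rot"
done
```

A unit that shows only an mmcblk device (eMMC) fails the checklist; you want to see a SATA or NVMe disk listed.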

🔍 Head-to-head comparison

| Attribute | Raspberry Pi 5 | N100 Mini-PC |
| --- | --- | --- |
| Architecture | ARM64 (aarch64) | x86-64 |
| Veeam agent support | Unsupported | Officially supported |
| Workaround required | box64 emulation layer | None |
| Block storage options | USB 3.0 DAS or M.2 via PCIe hat | 2.5" SATA + M.2 NVMe internal |
| Network | 1GbE onboard | 2.5GbE typical |
| Idle power | ~4W | ~6-8W |
| Under backup load | ~8-10W | ~12-15W |
| Typical hardware cost | $80-100 board plus accessories | $150-220 barebones plus RAM plus storage |
| Suitable for production | No | Yes |
| Best use case | Lab learning | SMB air-gap production repository |

🛠 Building the N100 hardened repository

Ubuntu 22.04 LTS is the recommended OS. Well-supported by Veeam, long support lifecycle, mature package ecosystem. Rocky Linux 9 is a solid alternative if your environment is RHEL-aligned.

Storage layout

If the unit has both M.2 and 2.5" bays, use the SSD for the OS and the HDD for backup data. A 120-240GB SSD for the OS is more than adequate. Format the backup volume as XFS with the reflink option enabled at format time. This is what enables fast clone for synthetic full operations.

mkfs.xfs -m reflink=1,crc=1 /dev/sda1

Mount at a fixed path and add to /etc/fstab with noatime to reduce unnecessary write amplification on the HDD.

/dev/sda1 /backup xfs defaults,noatime 0 2
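Two quick checks on the repository host confirm that the settings above actually took effect. They assume the volume is mounted at /backup as shown, so they will only succeed on the real machine:

```shell
# Confirm reflink was enabled at format time (expect a line containing reflink=1)
xfs_info /backup | grep reflink

# Confirm the filesystem type and mount options (expect: xfs ...noatime...)
findmnt -no FSTYPE,OPTIONS /backup
```

If reflink shows 0, fast clone will silently not happen and synthetic fulls will consume full space; reformat before putting data on the volume.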

User and privilege configuration

Create a non-root service account that Veeam uses for initial deployment. This account must not be in the sudo group. During server registration, VBR uses these credentials once to deploy the Data Mover, then discards them. Even a compromised VBR server cannot subsequently authenticate to the repository because the credentials are gone.

Step 1

Create the Veeam service account: useradd -m -s /bin/bash veeamsvc && passwd veeamsvc. Do not add to sudo group.

Step 2

Verify no sudo membership: run groups veeamsvc. The output should list only the veeamsvc group.

Step 3

Set ownership on the backup path: chown veeamsvc:veeamsvc /backup

Step 4

Configure firewall: allow TCP 6160, 6162, and 2500-3300 inbound from your VBR server IP only. Allow TCP 22 for initial setup. Deny all other inbound.
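Step 4 translates directly into ufw rules on Ubuntu. A sketch, with 192.0.2.10 standing in for your VBR server's address (substitute your own):

```shell
# Default deny inbound, then punch holes only for the VBR server.
# 192.0.2.10 is a placeholder; replace with your VBR server IP.
ufw default deny incoming
ufw default allow outgoing
ufw allow proto tcp from 192.0.2.10 to any port 6160,6162   # Veeam transport services
ufw allow proto tcp from 192.0.2.10 to any port 2500:3300   # dynamic data range
ufw allow proto tcp from 192.0.2.10 to any port 22          # SSH, initial setup only
ufw enable                                                  # confirm the prompt
```

Once SSH is disabled after setup, the port 22 rule can be deleted as well.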

SSH: needed once, then disabled

SSH is required for initial server registration in VBR. Veeam uses SSH to deploy the Data Mover components on first contact. After that initial setup is complete, SSH is no longer needed for backup operations. Disable it. The Data Mover communicates over ports 6160, 6162, and the dynamic data range 2500-3300. SSH running on a hardened repo is extra attack surface you do not need.

systemctl disable --now ssh

Keep this in mind: SSH is required again if you upgrade VBR and need to push updated Data Mover components to the repository. When you do a major VBR upgrade, re-enable SSH temporarily, complete the upgrade, and disable it again. This is a known operational pattern for hardened repo management.

Recovery access planning: With SSH disabled, management access is physical console only. For remote access without SSH, a USB KVM extender keeps you out-of-band without exposing a shell over the production network.
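The upgrade-time pattern from the note above, run at the physical console:

```shell
# Temporarily reopen SSH so VBR can push updated Data Mover components
systemctl enable --now ssh

# ...complete the VBR upgrade and let it update repository components...

# Close the door again immediately afterwards
systemctl disable --now ssh
```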

🔒 Network isolation and the air-gap discipline

The Hardened Repository configuration gives you immutability. It does not give you a true air gap. That requires deliberate network architecture. A machine sitting on your production LAN, accessible from any host on the segment, is not air-gapped even if the files are immutable. An attacker who compromises your network can still reach the machine. They cannot delete the immutable backups, but they can encrypt new incoming data, fill the disk, or map out your protected assets.

Practical air-gap discipline means the repository machine is isolated from the production network except during the approved backup window. Two approaches work well for SMB. The simpler one uses a managed switch port on a dedicated VLAN, routed only to the VBR server, with ACLs blocking everything else. The backup window opens and closes via switch policy. The stronger approach is physical isolation: VBR gets a dedicated second NIC on a separate subnet, and the repository connects only to that segment. No default route to the internet on the repository host. No DNS beyond what VBR needs.
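On the repository host itself, the no-default-route posture can be expressed in netplan on Ubuntu. A sketch with placeholder interface name and addressing; the point is the deliberate absence of any gateway or nameserver configuration:

```yaml
# /etc/netplan/01-backup-net.yaml (sketch; enp1s0 and the subnet are placeholders)
network:
  version: 2
  ethernets:
    enp1s0:
      addresses:
        - 192.168.99.10/24
      # No routes: and no nameservers: sections on purpose.
      # The host can reach the VBR backup segment and nothing else.
```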

💾 Connecting to VBR and configuring immutability

Add the machine under Backup Infrastructure as a Managed Server. Select Single-use credentials for hardened repository and enter the veeamsvc account credentials. VBR deploys the Data Mover, uses those credentials once, and discards them. Authentication is certificate-based from that point forward.

When adding the Backup Repository on this server, select the /backup path and enable two settings. First, enable Use fast cloning on XFS volumes for the reflink synthetic full optimization. Second, enable Make recent backups immutable for and set a retention period. The immutability window should be slightly longer than your backup chain retention. If you keep 14 days of retention, set 16-18 days of immutability. This prevents the immutability window from expiring before the backup chain gets cleaned up by retention logic.

Backup copy jobs and GFS: Immutability for backup copy jobs requires the GFS (Grandfather-Father-Son) retention policy on the copy job. Standard non-GFS copy jobs do not support immutability on a Hardened Repository. If you are targeting this repository via a copy job, which is the recommended pattern, enable GFS retention on that job.
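The sizing rule for the immutability window is simple enough to write down. A hypothetical helper, not a Veeam tool, that adds a safety buffer to the chain retention:

```shell
#!/bin/bash
# Immutability window (days) = chain retention + safety buffer.
# Hypothetical helper illustrating the sizing rule above; not a Veeam tool.
immutability_days() {
    local retention_days=$1
    local buffer_days=${2:-3}   # buffer keeps immutability alive past retention cleanup
    echo $(( retention_days + buffer_days ))
}

immutability_days 14    # 14-day chain -> prints 17
```

The buffer matters because retention cleanup runs after the job, not at the stroke of the retention boundary; without it, the immutability flag can expire while the chain still needs the file.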

⚙️ Operational realities: power, scheduling, monitoring

Scheduled power management via Wake-on-LAN

Here is something that does not get enough attention: you can schedule the machine to be completely powered off between backup windows. An N100 with Wake-on-LAN enabled can be brought up before the backup window opens, accept the job, and shut down when the window closes. A machine with no active NIC is about as air-gapped as you can get short of physically disconnecting cables.

WOL timing and job scheduling need coordination. A pre-job script that sends the WOL packet and waits 60-90 seconds before the backup job starts is the reliable pattern. The community has documented this approach well on N100 hardware.
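A sketch of that pre-job pattern for a Linux helper host. The wakeonlan package, the MAC, the IP, and the port-poll helper below are assumptions, not a Veeam-supplied script; adapt them to your environment:

```shell
#!/bin/bash
# Pre-job sketch: wake the repository, then poll the Data Mover port before
# letting the backup job start. MAC and IP are placeholders.
REPO_MAC="aa:bb:cc:dd:ee:ff"
REPO_IP="192.168.99.10"

# Poll a TCP port using bash's built-in /dev/tcp; returns 0 once it opens.
# Each failed attempt waits 5 s, so 18 tries gives roughly a 90-second window.
wait_for_port() {
    local host=$1 port=$2 tries=${3:-18}
    local i
    for ((i = 0; i < tries; i++)); do
        if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
            return 0
        fi
        sleep 5
    done
    return 1
}

# In the real pre-job script (requires the wakeonlan package):
#   wakeonlan "$REPO_MAC"                    # send the magic packet
#   wait_for_port "$REPO_IP" 6160 || exit 1  # Data Mover port; abort if no wake
```

Exiting nonzero from the pre-job script makes VBR fail the job rather than run it against a target that never woke up, which is the behavior you want.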

Verifying immutability flags are actually set

After your first successful backup copy job, verify the immutability flags at the physical console:

lsattr /backup/YourBackupJobName/*.vbk

Files protected by immutability will show the i attribute flag. The .vbm metadata file will not have this flag. That is by design: Veeam must update it on every job pass. Verify once after initial setup and again after any VBR version upgrade.

What you end up with: A fully built N100 hardened repository: Ubuntu 22.04, XFS with reflinks, SSH disabled post-setup, isolated VLAN, WOL-managed power window. Under $250 in hardware. 10-15W during backup windows. Fully supported Veeam configuration. No cloud dependency. No ongoing subscription cost. For an SMB stack protecting 1-10TB of production VMs, this is the most cost-effective legitimate air-gap architecture available.
