Setting Up Veeam v13 with Proxmox VE: A Complete Integration Guide

Veeam v13 · Proxmox VE · Integration Series
📅 March 2026  ·  ⏱ ~15 min read  ·  By Eric Black

The VMware Exodus and What It Means for Backup

If you've spent the last couple of years watching VMware licensing costs climb after the Broadcom acquisition, you probably already know where a lot of those workloads landed. Proxmox VE absorbed a huge chunk of the VMware refugee population, and honestly, for good reason. It's open source, it's mature, it runs KVM under the hood, and it does the job without a per-socket subscription renewing on you every year.

The backup question always comes right after the migration question. You moved your VMs. Now how do you protect them? If you're already running Veeam for Windows, Linux, or any remaining VMware infrastructure, the answer in v13 is cleaner than it's ever been. Proxmox VE is a native platform in VBR now. Same console, same job wizard, same repository targets. This guide walks through the complete setup from scratch.

Fair warning up front: the Proxmox integration has some real architectural differences from what you might be used to with VMware or Nutanix, and a longer limitations list than either of those platforms. I'll call out the important ones as we go so you're not caught off guard after you're already mid-deployment.

How It All Fits Together

As with the Nutanix AHV integration, Veeam uses a worker VM model for Proxmox. A worker is a lightweight Linux VM that Veeam deploys onto your Proxmox host to handle the actual data movement. VBR coordinates the jobs and manages the infrastructure side of things. The worker is what reads VM data and transfers it to the repository.

The most important thing to understand about Proxmox workers is that they are not persistent. Veeam powers the worker on when a backup or restore session starts and shuts it down automatically when the session ends. On AHV, workers stay running between jobs. On Proxmox, they spin up, do the work, and shut back down every time. This is by design. The worker VM will appear in your Proxmox inventory during backup windows and sit stopped between sessions. That's normal, not a problem.

Changed Block Tracking is supported for Proxmox VMs and uses QEMU Dirty Bitmaps. During the first full backup, Veeam creates a bitmap for each disk attached to the VM. On subsequent runs, it reads those bitmaps to find only what changed since the last session, which is how you get fast incrementals instead of full reads every time.

There's a real gotcha here you need to know about: for VMs with disks in RAW or VMDK format, QEMU automatically removes the bitmaps whenever those VMs are powered off or restarted. If a VM rebooted between backups, Veeam has no bitmap to read and falls back to processing the full disk for that session. This is a Proxmox technical limitation, not a Veeam issue. It affects disk format decisions and is worth factoring into your design if you have VMs that reboot regularly.
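If you want a quick way to spot VMs exposed to this, here's a rough heuristic sketch: on a real node you'd pipe `qm config <vmid>` output through it and look for disk entries whose volume name or format hints at RAW or VMDK. The sample config lines and the pattern matching are assumptions for illustration; formats are not always spelled out in the config (LVM-backed volumes are raw implicitly, for example), so treat any output as a prompt to check, not a verdict.

```shell
# Rough heuristic: flag disk lines whose volume name or explicit format
# suggests RAW or VMDK, the formats whose QEMU dirty bitmaps vanish when
# the VM powers off or restarts.
flag_volatile_disks() {
  grep -E '^(scsi|virtio|sata|ide)[0-9]+:' | \
    grep -Ei '\.raw|\.vmdk|format=(raw|vmdk)' || true
}

# Sample lines standing in for real `qm config 101` output (hypothetical):
sample='scsi0: local:101/vm-101-disk-0.qcow2,size=32G
scsi1: local:101/vm-101-disk-1.raw,size=8G
virtio0: local:101/vm-101-disk-2.vmdk,size=16G'

# Flags the scsi1 (.raw) and virtio0 (.vmdk) lines, not the qcow2 disk.
printf '%s\n' "$sample" | flag_volatile_disks
```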

VBR connects to each Proxmox node directly over SSH on port 22. There is no cluster-level management endpoint in Proxmox the way Prism Central works in the Nutanix world. Every node is its own connection. If you're running a Proxmox cluster, you add each node individually to VBR.

Before You Start: Requirements and Hard Stops

Work through this checklist before you open any wizard. A few of these are hard stops, and it's much cheaper to hit them now than halfway through setup.

  • VBR version: Veeam Backup & Replication v13, build 13.0.1 or later.
  • Proxmox version: Proxmox VE 8.2, 8.3, 8.4, or 9.0. Must be installed from the official Proxmox ISO; no exceptions to the ISO requirement.
  • Shell: Veeam uses /bin/bash for all management operations on the Proxmox server. Standard ISO installs have this by default.
  • Credentials: root account or root-equivalent SSH access. Standard Proxmox web UI roles are not sufficient. SSH private key credentials are not supported; it must be username and password. MFA accounts are not supported.
  • Server name: must not contain an FQDN. Use a hostname or IP address. Each node must also have a unique Proxmox system UUID before you add it.
  • Network connectivity: VBR to each Proxmox node on TCP 22; workers to the VBR server on TCP 10006. Proxmox nodes must have a direct IP path back to the backup server. NAT gateways are not supported.
  • Cluster nodes: every node must be added to VBR individually. Cluster endpoints are not supported. VMs on unregistered nodes will be skipped by jobs.
  • VM networking: Open vSwitch (OVS) networking is not supported. Linux Bridge networking is required.
  • VM storage: BTRFS and custom storage types are not supported for VM disks. All other Proxmox VE storage types are supported. S3 buckets connected as Proxmox storage are also not supported.
  • Worker storage: the default local storage must be enabled on each host where a worker is deployed, and the storage hosting worker system files must support Proxmox snapshots.
  • License: Veeam Universal License (VUL) required, with per-VM instance consumption. Socket-based licensing does not cover Proxmox workloads.
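A few of these items are easy to verify from a shell before you touch the VBR console. Here's a hypothetical pre-flight sketch you could run on each node; the script name and structure are my own, and the `pveversion` check only produces useful output on an actual Proxmox host:

```shell
#!/bin/sh
# Hypothetical pre-flight sketch: run on each Proxmox node before adding it
# to VBR, covering the requirements that are cheap to check from a shell.

# 1. Veeam drives the node through /bin/bash: confirm root's login shell.
awk -F: '$1=="root"{print "root shell: " $7}' /etc/passwd

# 2. Each node needs a unique system UUID; compare this value across nodes.
if [ -r /sys/class/dmi/id/product_uuid ]; then
  echo "system uuid: $(cat /sys/class/dmi/id/product_uuid)"
else
  echo "system uuid: not readable (run as root on the node)"
fi

# 3. Confirm a supported PVE version (8.2-8.4 or 9.0) on a real node.
if command -v pveversion >/dev/null 2>&1; then
  pveversion
else
  echo "pveversion: not found (not a Proxmox node?)"
fi
```

Run it on every node and eyeball the three lines; two nodes printing the same system UUID is the kind of problem you want to find today, not after a restore fails.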
⚠️ Official ISO Install Is Non-Negotiable
If your Proxmox environment was built from a custom image, a cloud template, or anything other than the official Proxmox ISO, Veeam does not support it. It doesn't matter what PVE version is running. Verify your install method before you go any further with this setup.
⚠️ Do Not Rename the Cluster After Adding Nodes to VBR
After you add Proxmox cluster nodes to VBR, changing the cluster name in the Proxmox admin portal breaks the integration. You'll need to remove and re-add every node. Name your cluster before you start and don't touch it afterward.

A Word on Licensing

Same deal as Nutanix: Proxmox workloads require Veeam Universal License (VUL). Each protected VM consumes one instance. If you're running a mixed environment with some VMware VMs on socket-based licensing and new Proxmox VMs alongside them, those Proxmox VMs need VUL instances. The socket license doesn't cover them.

For shops that came from VMware specifically, this is usually the first licensing conversation. Your old VMware socket licenses don't convert to VUL automatically. Talk to your Veeam account team before assuming you're already covered.

Step 1: Add the Proxmox VE Server to VBR

Unlike Nutanix where you add a single Prism Central connection and get visibility across all clusters, Proxmox requires adding each node individually. Three-node cluster means three additions. Budget the time accordingly.

Step 1.1

Navigate to Managed Servers

Open the VBR console and switch to Backup Infrastructure. Click Add Server in the ribbon. From the dropdown, select Proxmox VE.

Step 1.2

Enter the server address

In the New Proxmox VE Server wizard, enter the hostname or IP address of the Proxmox node. Use the actual node address. No NAT gateway addresses, no FQDNs. The connection must be direct node to VBR server.

Step 1.3

Enter SSH credentials

Provide root credentials or credentials for an account with root-equivalent SSH access. Click Add to store new credentials or select saved ones from the dropdown. Username and password only. SSH private key credentials and MFA accounts are not supported. If your Proxmox nodes require key-only SSH, add a dedicated password-based account specifically for Veeam.

ℹ️ Use a Dedicated Service Account
Root access is required, but that doesn't mean handing Veeam your actual root password. Create a dedicated service account on each node with root or sudo access and store those credentials in VBR's credential manager. Makes auditing cleaner and password rotation much easier down the road.
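One common pattern for that service account, sketched below under assumptions: the account name `veeam-svc` is hypothetical, the commands are standard Linux run as root on each node, and passwordless sudo is one way to satisfy the root-equivalent requirement (check Veeam's documented permission requirements for non-root accounts before settling on this):

```shell
# Hypothetical sketch: create a dedicated password-based service account
# with root-equivalent sudo on a Proxmox node. Key-only and MFA logins
# won't work with VBR, so this must stay a plain username/password login.
SVC=veeam-svc
if [ "$(id -u)" -eq 0 ]; then
  useradd -m -s /bin/bash "$SVC" 2>/dev/null || true
  # Passwordless sudo so Veeam's root-equivalent operations can succeed.
  echo "$SVC ALL=(ALL) NOPASSWD:ALL" > "/etc/sudoers.d/$SVC"
  chmod 440 "/etc/sudoers.d/$SVC"
  echo "created $SVC; set its password interactively with: passwd $SVC"
else
  echo "re-run as root to create $SVC"
fi
```

Store the resulting username and password in VBR's credential manager once per environment, and rotation becomes a one-node-at-a-time `passwd` plus a credential update in the console.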
Step 1.4

Finish and repeat for all cluster nodes

VBR connects via SSH, verifies access, and adds the node to the Managed Servers inventory. If the node is part of a Proxmox cluster, VBR detects cluster membership automatically. Repeat for every node in the cluster. Any VM on an unregistered node is invisible to backup jobs.

ℹ️ Inventory Changes Take Up to 15 Minutes to Sync
After changes in your Proxmox environment, like migrating a VM between cluster nodes, it can take up to 15 minutes for those changes to appear in VBR. If you need to see it now, right-click the server in the inventory tree and select Rescan.

Step 2: Configure Your Backup Repository

Proxmox backup jobs support the same repository types as other VBR workloads. Standard Windows and Linux repositories, CIFS shares, deduplication appliances, and Scale-Out Backup Repositories all work. Two things worth calling out before you pick your target:

HPE Cloud Bank Storage cannot be used as a primary backup job target for Proxmox VMs. It can be used for backup copies. If that's part of your architecture, route primary Proxmox jobs to a standard repository and use a backup copy job to move data there.

Repository-level encryption is supported. Job-level encryption is not supported for Proxmox backup jobs. Set encryption at the repository and it applies to everything landing there automatically.

Also think about the network path between your Proxmox nodes and your repository. Workers run directly on Proxmox hosts and transfer data straight to the repository target. The throughput on that path determines whether you hit your backup window.
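A back-of-envelope calculation is enough to sanity-check that path before the first full run. The helper below is a sketch with assumed example numbers (2 TB of data, an 8-hour window, 1 GbE), and it deliberately ignores compression, dedup, and protocol overhead, so treat the result as a floor, not a forecast:

```shell
# Rough backup-window math: can the worker-to-repository path move the
# data in time? data_gb = expected backup size, window_h = window in
# hours, link_gbps = usable link speed. Integer MB/s, floor rounding.
window_check() {
  data_gb=$1; window_h=$2; link_gbps=$3
  required=$(( data_gb * 1024 / (window_h * 3600) ))   # MB/s needed
  available=$(( link_gbps * 1000 / 8 ))                # rough wire MB/s
  echo "need ~${required} MB/s, link gives ~${available} MB/s"
}

# Example: 2 TB full backup over 8 hours on 1 GbE.
window_check 2000 8 1   # need ~71 MB/s, link gives ~125 MB/s
```

If "need" lands anywhere near "gives", remember that incrementals shrink the daily number dramatically but your periodic fulls (and restores) still live with the raw figure.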

Step 3: Deploy Worker VMs

Workers on Proxmox are ephemeral per session. Veeam powers them on when a job starts and shuts them down when it ends. The worker VM sits in your Proxmox inventory as a stopped VM between sessions. That's normal behavior, not a deployment failure.

Step 3.1

Open the worker deployment wizard

In the Backup Infrastructure view, find a Proxmox node in the Managed Servers tree. Right-click the node and select Add Worker, or use the ribbon button. The New Worker wizard opens.

Step 3.2

Select the host and storage

Select the Proxmox node where the worker will live. Choose the storage for the worker VM disk. The default local storage must be enabled on the host, and the storage you pick must support Proxmox snapshots. BTRFS storage cannot be used here. Name the worker something that makes it obvious which node it belongs to.

Step 3.3

Configure network settings

Select the network the worker will use for backup traffic. Keeping that traffic off your production VM network is good practice where your architecture allows it. Click Advanced to set the update behavior. By default the worker checks for updates online each time it powers on. If your environment has no direct internet access or you want to control update timing yourself, uncheck "Check for updates online" here.

⚠️ VLAN-Tagged Environments Need Manual Assignment
If your Proxmox environment uses VLAN tagging, the VLAN ID must be manually assigned to worker VMs after they are deployed or redeployed. This is not handled automatically by Veeam. If workers can't communicate after deployment and you're running VLANs, that's the first thing to check.
Step 3.4

Set max concurrent tasks and deploy

Set the Max concurrent tasks value. The default worker configuration supports up to 4 concurrent tasks. Each additional task beyond that requires 1 additional vCPU and 1 GB RAM allocated to the worker VM. Start at the default and tune based on what you observe during backup windows. Click Apply. VBR deploys the worker VM to the selected Proxmox node and manages its lifecycle automatically going forward.
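The sizing rule is simple enough to express as a one-liner. This sketch only computes the resources added on top of the default worker spec, because the baseline vCPU/RAM of the default worker depends on your deployment and isn't restated here:

```shell
# Sizing rule from the text: the default worker handles up to 4 concurrent
# tasks; each task beyond 4 costs +1 vCPU and +1 GB RAM on top of the
# default worker spec.
worker_extra() {
  tasks=$1
  extra=$(( tasks > 4 ? tasks - 4 : 0 ))
  echo "tasks=${tasks} extra_vcpu=${extra} extra_ram_gb=${extra}"
}

worker_extra 4   # tasks=4 extra_vcpu=0 extra_ram_gb=0
worker_extra 8   # tasks=8 extra_vcpu=4 extra_ram_gb=4
```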

💡 Deploy at Least One Worker Per Node
VBR selects the worker local to the VM being backed up. If there's no worker on the node where a VM lives, performance suffers. In clustered environments where VMs can move between nodes with HA, having a worker on every node is the right call.
ℹ️ Firewall Rules Are Managed Automatically
Veeam creates the firewall rules needed between the Proxmox node, the worker, and the backup server automatically. You don't need to open ports on the worker VM manually. You do need TCP 22 open from VBR to Proxmox nodes, and TCP 10006 open from workers to the VBR server.
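Those two paths are easy to smoke-test from the relevant machines before the first job run. The addresses below are placeholders (203.0.113.x is documentation address space); substitute your actual node and VBR addresses, and run the first check from the VBR server and the second from a Proxmox node:

```shell
# Hedged sketch: probe the two TCP paths Veeam needs, using bash's
# /dev/tcp. Placeholder addresses; replace with your own before running.
check_port() {
  host=$1; port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} reachable"
  else
    echo "${host}:${port} UNREACHABLE"
  fi
}

check_port 203.0.113.10 22      # VBR -> Proxmox node (placeholder)
check_port 203.0.113.20 10006   # worker/node -> VBR server (placeholder)
```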

Step 4: Create a Backup Job

Servers added, workers deployed. The job wizard should feel familiar at this point.

Step 4.1

Start a new Proxmox VE backup job

In the Home view, click Backup Job in the ribbon. Select Virtual machine then Proxmox VE from the sub-menu. Give the job a name that makes it immediately clear what it's protecting.

Step 4.2

Add VMs to the job

Click Add to open the object browser. Expand your Proxmox nodes down to individual VMs, or add at the node or cluster level to pick up new VMs automatically as they appear. Just make sure every node those VMs could live on is registered with VBR first.

⚠️ Several VM Types Are Not Supported
LXC containers cannot be backed up. VM templates cannot be backed up. VMs created as linked clones from templates are not supported (full clones are fine). VMs with the same BIOS UUID as another VM cannot be backed up. Know what's in your environment before you assume everything in scope will process cleanly.
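The duplicate-UUID case is the sneaky one, because it usually comes from cloning and nothing in the Proxmox UI flags it. Here's a sketch of how you might hunt for it: the `dup_uuids` helper just finds repeated second fields, and the commented `qm config` loop is a hypothetical way to build the "vmid uuid" pairs on a real node (the UUID lives in the smbios1 setting):

```shell
# Sketch: find duplicated VM UUIDs from "vmid uuid" pairs on stdin.
# On a real node you might build the pairs with something like
# (hypothetical):
#   for id in $(qm list | awk 'NR>1{print $1}'); do
#     echo "$id $(qm config "$id" | grep -o 'uuid=[0-9a-f-]*')"
#   done
dup_uuids() {
  awk '{print $2}' | sort | uniq -d
}

# Demo with sample pairs: VMs 101 and 103 share a UUID (cloning artifact).
printf '101 uuid=aaaa-1111\n102 uuid=bbbb-2222\n103 uuid=aaaa-1111\n' \
  | dup_uuids   # -> uuid=aaaa-1111
```

Any UUID this prints belongs to at least two VMs, and only one of them will back up cleanly until you regenerate the duplicate.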
Step 4.3

Select the backup repository

Choose your target. HPE Cloud Bank Storage cannot be a primary target for Proxmox backup jobs. Use a standard repository or SOBR. If repository-level encryption is configured, it applies automatically to everything landing there.

Step 4.4

Configure guest processing

Enable Guest Processing and provide guest OS credentials if you want application-aware backups for SQL, Active Directory, Exchange, or similar workloads inside your VMs. Windows VMs need an administrator account. Linux VMs need root or root-equivalent. Guest processing needs a direct network path from the VBR server to each guest VM's IP, not just to the Proxmox node.

Step 4.5

Set the schedule and finish

Set your schedule and retention, then click Finish. Run the job immediately for the first pass. The first run is always a full backup. Veeam creates QEMU dirty bitmaps for each disk during this session. Subsequent runs use those bitmaps to back up only what changed.

Watch for this on VMs with disks in RAW or VMDK format: QEMU removes their bitmaps whenever the VM is powered off or restarted. If a VM rebooted between job runs, Veeam can't use CBT for that session and processes the full disk. That's a Proxmox technical limitation and not something Veeam can work around. If you see unexpected full processing on incremental runs, check whether the VM was restarted and what disk format it's using.

Step 5: Verify and Monitor

After the first job run, open Home > Last 24 Hours and work through this:

  • Job status: Success or Warning. Read every warning on the first run. Most first-run warnings point to guest processing credential issues or VMs on nodes without workers deployed.
  • CBT: from run two onward, confirm VMs are processing incrementally. If you see full backup mode on a VM that didn't reboot, dig into the job log to understand why the bitmap wasn't retained.
  • Restore points: confirm they appear in Backups > Disk for all protected VMs.
  • Worker state: the worker VM should show as stopped (not missing) between backup windows. A completely absent worker VM means the deployment failed and needs to be redeployed.

Set up email notifications in Options > E-mail Settings if you haven't already. In the first few weeks of a new Proxmox integration, you want to be watching every job result closely while credentials, scope, and storage behaviors settle in.

Known Limitations Worth Knowing

These are all confirmed from the official Veeam v13 documentation. Read through them before you finalize your protection design:

  • IPv6 is not supported. Everything communicates over IPv4.
  • Open vSwitch networking is not supported. Linux Bridge is required. Workers and guest processing will fail in OVS environments.
  • BTRFS storage is not supported for VM disks or worker deployment.
  • Custom storage types are not supported. All other Proxmox VE storage types are supported for VM protection.
  • S3 buckets connected as Proxmox storage are not supported. VMs residing on S3-connected Proxmox storage cannot be backed up.
  • SSH private key credentials are not supported. Username and password only.
  • MFA accounts cannot be used for the Proxmox server connection.
  • NAT gateway connections are not supported. VBR needs a direct path to each node.
  • FQDNs in the server name field are not supported. Use hostname or IP.
  • Renaming the Proxmox cluster after adding nodes to VBR breaks the integration. All nodes must be removed and re-added.
  • OCSP certificates are not supported for accessing the Proxmox VE server.
  • Job-level encryption is not supported. Set encryption at the repository level.
  • LXC containers, VM templates, and linked clones cannot be backed up. Full clones are supported.
  • iSCSI disks attached to VMs are automatically skipped. Passthrough (directly attached) disks are also skipped.
  • VMs with duplicate BIOS UUIDs cannot be backed up.
  • VM permissions (user, group, API token grants) are not backed up.
  • VM replication is not supported for Proxmox VE workloads in the current release.
  • QEMU dirty bitmaps for RAW and VMDK disks are removed on VM power off or restart. Veeam falls back to a full read for that session when bitmaps are gone.
  • Concurrent backup operations per storage are limited to 4 by default. Contact Veeam Support to raise this limit.
  • VLAN-tagged environments require manual VLAN ID assignment on worker VMs after deployment or redeployment.
  • HPE Cloud Bank Storage cannot be used as a primary backup job target for Proxmox VMs. Backup copies to Cloud Bank Storage are supported.

Closing Thoughts

The Proxmox integration in Veeam v13 is genuinely solid. But the limitations list is longer than what you'll see with Nutanix or VMware, and that's worth being honest about. Some of those limitations, like the RAW/VMDK CBT behavior and the per-storage concurrency cap, have real operational implications that you want to discover before deployment rather than during your first major backup window.

The three questions that determine whether this works in your environment at all: was Proxmox installed from the official ISO, are you running Linux Bridge or OVS, and what storage types are your VMs sitting on. Get clear answers to those first. Everything else in the setup is straightforward once the prerequisites are confirmed.

And if you're running a mixed environment with VMware still in play alongside Proxmox, VBR handles both from the same console without issue. That consolidated management is genuinely one of the best things about the platform, and it works exactly as advertised.

What You've Covered

  • Architecture understood: ephemeral workers, per-node registration, QEMU dirty bitmap CBT and the RAW/VMDK caveat
  • Prerequisites confirmed: PVE version, ISO install, Linux Bridge, storage types, unique system UUID, credential format
  • All cluster nodes added to VBR individually with dedicated service accounts
  • Backup repository configured with supported type, encryption at repository level
  • Workers deployed per node with correct storage, network, VLAN awareness, and update settings
  • First backup job run, bitmaps created on run one, incremental confirmed from run two
  • Full limitations list reviewed and factored into protection design
