Adding and Configuring Backup Proxies (VMware and Hyper-V)

Veeam v13 - Installation Series
📅 March 2026  ·  ⏱ ~15 min read  ·  By Eric Black
Backup Proxy · VMware · Hyper-V · VBR v13 · Transport Mode · HotAdd

What the Backup Proxy Does

The backup proxy is the component that does the actual data movement work. It reads VM data from the source datastore, applies deduplication and compression, and sends the processed data to the backup repository. The backup server coordinates and schedules - the proxy is where the CPU and I/O work happens.

By default, the VBR backup server itself acts as the default proxy. This is fine for small deployments or testing, but in any production environment with more than a handful of VMs, you deploy dedicated proxies. Dedicated proxies distribute the I/O load, allow processing closer to the source data, and let you scale throughput independently of the backup server.

Veeam distributes backup workload across all available proxies automatically. When a backup job runs, Veeam evaluates which proxies can reach the source data and assigns each VM disk to the least busy eligible proxy. You do not need to pin VMs to specific proxies - the automatic load balancing handles it unless you have a specific reason to override it.

VMware and Hyper-V use different proxy architectures. VMware uses VMware backup proxies with four transport modes. Hyper-V uses on-host proxies (built into the Hyper-V host) and off-host backup proxies for more demanding environments.

VMware Transport Modes Explained

Transport mode determines how the VMware backup proxy reads data from the source VM's virtual disks. Veeam selects the mode automatically based on proxy configuration and datastore connectivity, but understanding the modes helps you place proxies where the best available mode gets selected.

Direct Storage Access (Direct SAN or Direct NFS) is the fastest and most efficient mode. The proxy reads VM data directly from the storage, bypassing the ESXi host entirely. For SAN datastores, the proxy must be a physical machine or a VM with RDM access to the SAN LUNs. For NFS datastores, the proxy must have NFS client access and root permissions to the NFS export. When direct storage access is available, Veeam prioritizes it above all other modes.

Virtual Appliance (HotAdd) is the recommended mode for virtual proxy machines on vSphere. The proxy runs as a VM with access to the datastores of the VMs it protects - it does not need to sit on the same ESXi host, only in the same vCenter inventory with access to the source datastores. Veeam uses VMware's HotAdd disk capability to attach the source VM's virtual disks to the proxy VM as temporary SCSI devices, reads the data through the ESXi I/O stack, then detaches the disks. This keeps backup traffic on the storage fabric rather than the network and works with all datastore types that support HotAdd.

Network (NBD/NBDSSL) reads VM data from the ESXi host over the network using VMware's NBD protocol. It works everywhere but is the slowest mode - backup traffic travels over the management network, putting load on both the ESXi host CPU and the network. NBDSSL adds TLS encryption to the transfer at an additional CPU cost on the ESXi host. Use Network mode only when Direct Storage Access and HotAdd are not available for a given proxy/datastore combination.

Veeam's automatic selection priority is: Direct Storage Access, then Virtual Appliance, then Network. For proxies configured with Automatic selection, Veeam scans the proxy's connectivity and picks the best available mode. You can override this per proxy if needed, but Automatic selection is the right default.

Transport Mode              | Proxy Type                         | Best For                                    | Limitations
Direct Storage Access (SAN) | Physical or VM with SAN RDM access | FC/iSCSI SAN datastores, highest throughput | Requires SAN zoning/masking to the proxy
Direct Storage Access (NFS) | Windows or Linux with NFS access   | NFS datastores, avoids ESXi host CPU load   | Linux proxy needs nfs-common/nfs-utils installed
Virtual Appliance (HotAdd)  | VM on vSphere cluster              | All shared datastores (SAN, NFS, VSAN)      | Proxy VM and source VM must be in same vCenter scope
Network (NBD)               | Any Windows or Linux machine       | Local datastores, fallback mode             | Slowest, adds load to ESXi host and network
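The selection priority described above (Direct Storage Access, then Virtual Appliance, then Network) can be sketched as a small decision function. This is an illustrative sketch, not Veeam code - the function name and its yes/no inputs are hypothetical stand-ins for the connectivity checks Veeam actually performs:

```shell
#!/bin/sh
# Hypothetical sketch of Veeam's transport mode selection priority:
# Direct Storage Access > Virtual Appliance (HotAdd) > Network (NBD).
pick_mode() {
  direct="$1"   # does the proxy have direct SAN/NFS access to the datastore?
  hotadd="$2"   # is the proxy a VM that can HotAdd the source disks?
  if [ "$direct" = "yes" ]; then
    echo "Direct Storage Access"
  elif [ "$hotadd" = "yes" ]; then
    echo "Virtual Appliance (HotAdd)"
  else
    echo "Network (NBD)"   # universal fallback - works everywhere
  fi
}

pick_mode yes yes   # Direct Storage Access wins even when HotAdd is possible
pick_mode no yes    # falls to HotAdd
pick_mode no no     # NBD as last resort
```

The point of the sketch: Network mode is never chosen when a better mode is reachable, which is why proxy placement (not job configuration) is what determines your effective transport mode.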

Adding a VMware Backup Proxy

VMware backup proxies are added in Backup Infrastructure > Backup Proxies. Right-click and select Add VMware Backup Proxy. The machine must be added as a managed server first.

Step 1

Select the Server

Choose the managed server to assign the proxy role to from the dropdown. Provide a description. The default description includes the user who added the proxy and the date - replace this with something more useful like the proxy's intended role or location.

On this screen, set the number of upload streams if you are also assigning WAN acceleration capabilities, and configure the transport mode. Leave Automatic selection unless you have a specific reason to pin the mode. If you select a specific mode manually, you are overriding Veeam's ability to fall back to a working mode if the primary mode fails.

Step 2

Configure Transport Mode and Connected Datastores

If using Direct Storage Access, click Choose next to Connected Datastores and switch from Automatic Detection to Manual Selection. Add the specific datastores this proxy can access directly via SAN or NFS. This tells Veeam exactly which datastores this proxy can reach in Direct Storage Access mode, enabling more accurate mode selection and load balancing.

If using Virtual Appliance (HotAdd) mode, ensure the proxy VM is already deployed on the vSphere cluster it will protect. The proxy VM must be in the same vCenter inventory as the source VMs. For VSAN environments, deploy one proxy VM per ESXi host in the cluster to minimize cross-host backup traffic and reduce VSAN network load during backup windows.

For Network mode, you can optionally enable NBDSSL by checking Enable host to proxy traffic encryption in Network mode. This encrypts the backup data stream at the cost of additional ESXi host CPU usage.

Step 3

Set Maximum Concurrent Tasks

Configure the number of tasks (VM disks) this proxy can process simultaneously. Veeam creates one task per VM disk, not per VM - a VM with four disks uses four concurrent task slots. The recommended calculation is two tasks per CPU core: a 4-core proxy handles up to 8 concurrent tasks, an 8-core proxy handles up to 16. Reduce this on proxies that share roles with other services.
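The two-tasks-per-core rule is simple arithmetic; a quick shell sketch (the helper function is hypothetical, and `nproc` is the standard way to read the core count on a Linux proxy):

```shell
#!/bin/sh
# Recommended concurrent task count per the rule above: 2 tasks per CPU core.
recommended_tasks() {
  cores="$1"
  echo $(( cores * 2 ))
}

recommended_tasks 4    # prints 8  (4-core proxy)
recommended_tasks 8    # prints 16 (8-core proxy)

# On the proxy machine itself you could feed in the real core count:
# recommended_tasks "$(nproc)"
```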

Step 4

Configure Traffic Rules (Optional)

The Traffic Rules step lets you configure bandwidth throttling between this proxy and specific backup repositories. If your proxy and repository are separated by a WAN link or a limited-bandwidth network segment, configure a rule that limits how much bandwidth the proxy uses for that path. Leave this at default (no throttling) for same-site proxy-to-repository traffic.

Step 5

Review and Apply

Review the summary and click Apply. Veeam deploys the Veeam Data Mover Service on the proxy machine. Once deployment completes, the proxy appears in the Backup Proxies list and is immediately available for job assignment.

💡 Tip

If you check Failover to network mode if primary mode fails or is unavailable, Veeam automatically falls back to NBD if Direct Storage Access or HotAdd fails during a job session. This is enabled by default and should stay enabled for most environments. Disabling it means a proxy mode failure causes job failure rather than a graceful fallback. Only disable it if you need strict enforcement of a specific transport mode for compliance or performance reasons.

Proxy Sizing and Concurrent Tasks

Proxy sizing depends on the transport mode in use and the number of concurrent tasks expected. For Virtual Appliance and Network mode proxies, the bottleneck is typically CPU and network throughput. For Direct Storage Access proxies, storage I/O bandwidth and CPU are the constraints.

A practical starting point for a virtual proxy in HotAdd mode serving a mid-size vSphere environment: 4 vCPU, 8 GB RAM, 8 concurrent tasks. Scale up CPU and tasks as you add VMs or increase backup frequency. Monitor actual CPU utilization during backup windows and adjust task count based on observed headroom.

For environments with multiple backup windows running simultaneously - for example, a large environment where backups run in waves throughout the day - deploy multiple proxies rather than one large proxy. Multiple proxies give you redundancy in addition to throughput. If one proxy goes offline, the others continue processing. A single large proxy with high task count is a single point of failure for backup throughput.

Hyper-V On-Host Backup

In Microsoft Hyper-V environments, the default backup mode is on-host backup. The Hyper-V host itself acts as the backup proxy - Veeam deploys a transport service on the Hyper-V host, and VM data is read directly on the host and sent to the repository. No separate proxy machine is needed for on-host backup.

On-host backup uses VSS (Volume Shadow Copy Service) to create a consistent snapshot of the VM, reads the data directly from the Hyper-V host's storage, and streams it to the backup repository. This works for all Hyper-V configurations and is the path of least resistance for most Hyper-V deployments.

The tradeoff with on-host backup is that backup processing load lands on the Hyper-V host alongside the production VM workload. For hosts that are already heavily utilized, this can cause performance impact during backup windows. In those cases, off-host backup proxies move the processing off the host.

Adding a Hyper-V Off-Host Backup Proxy

Off-host backup proxies for Hyper-V are dedicated Windows machines that handle backup processing independently of the Hyper-V hosts. They require access to the same shared storage that the Hyper-V hosts use - typically iSCSI or FC SAN, or SMB 3.0 storage accessible by both the host and the proxy.

Off-host backup works by having the proxy directly access the same VM data that lives on the shared storage, without going through the Hyper-V host's CPU and I/O stack. The Hyper-V host still handles the VSS snapshot creation to get a consistent point-in-time, but the actual data reading happens on the proxy against the storage directly.

Step 1

Prepare the Off-Host Proxy Machine

The off-host proxy must be a Windows Server machine added to the Veeam managed server list. It must have access to the same shared storage as the Hyper-V hosts - iSCSI initiator configured and connected to the same targets, FC HBA zoned to the same LUNs, or SMB 3.0 access to the same file shares. The machine does not need to be a Hyper-V host itself.

Step 2

Add the Off-Host Proxy in the Console

In Backup Infrastructure > Backup Proxies, right-click and select Add Hyper-V Off-Host Backup Proxy. Select the managed server. Specify the shared storage paths this proxy has access to. Veeam maps the storage visibility to determine which VMs this proxy can process in off-host mode.

You can also add off-host proxies via the VBR web UI in v13, which follows the same steps through a browser-based wizard.

Step 3

Configure Concurrent Tasks

Set the number of concurrent tasks as with VMware proxies. The task count for off-host Hyper-V proxies should account for the storage I/O throughput available on the shared storage path between the proxy and the SAN or file share. Do not exceed the storage's read throughput capability with the configured task count.
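One way to reason about this cap: take the lower of the CPU-based task count and what the shared storage read path can feed. The per-task throughput figure below is an assumption you would replace with a measured value from your own environment:

```shell
#!/bin/sh
# Cap concurrent tasks at whichever is lower: the CPU rule (2 tasks per core)
# or what the shared storage read path can sustain. The per-task MB/s figure
# is an assumed/measured value for your environment, not a Veeam constant.
cap_tasks() {
  cores="$1"; storage_mbps="$2"; per_task_mbps="$3"
  cpu_cap=$(( cores * 2 ))
  io_cap=$(( storage_mbps / per_task_mbps ))
  if [ "$io_cap" -lt "$cpu_cap" ]; then echo "$io_cap"; else echo "$cpu_cap"; fi
}

# 8-core proxy, 1000 MB/s iSCSI path, ~150 MB/s assumed per task:
cap_tasks 8 1000 150   # prints 6 - here storage, not CPU, is the limit
```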

ℹ️ Note

When a job runs and no off-host proxy is available or eligible for a given VM, Veeam automatically falls back to on-host backup for that VM. Off-host proxy failure does not cause job failure - Veeam degrades gracefully to on-host mode. Monitor job session logs to confirm whether jobs are actually using the off-host proxy or falling back to on-host mode.

Linux-Based VMware Proxies

In v13, Linux machines can serve as VMware backup proxies, including the hardened Linux repository machines. This is particularly useful when you want to assign the proxy role to a Linux machine that also serves as a hardened repository, providing combined proxy and immutable storage in a single Linux footprint.

Linux proxy limitations in v13 to be aware of: Linux proxies can only use Network (NBD) mode when added with single-use credentials (as is required for hardened repositories). Other transport modes - Direct SAN with iSCSI access, Direct NFS, Virtual Appliance HotAdd - require the persistent presence of Veeam's transport service, which is not compatible with the single-use credential model. If you need Direct NFS access on a Linux proxy, that proxy must be added with full credentials, not single-use.

Linux proxies cannot be used with VMware Cloud on AWS. For VMware Cloud on AWS environments, use Windows-based proxies or the VMware Cloud-specific proxy deployment approach documented at helpcenter.veeam.com.

💡 Tip

For Linux proxies using Direct NFS access, ensure the NFS client package is installed before assigning the proxy role: nfs-common on Debian/Ubuntu systems, nfs-utils on RHEL/CentOS systems. If the package is missing, the proxy will be assigned the role but NFS-based Direct Storage Access will fail silently and fall back to Network mode.
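A quick pre-flight check along those lines - the distro-to-package mapping is the one from the tip above, and probing for `mount.nfs` is a common way to confirm the client is actually present:

```shell
#!/bin/sh
# Map a distro ID (as reported in /etc/os-release) to its NFS client package.
nfs_pkg_for() {
  case "$1" in
    debian|ubuntu)               echo "nfs-common" ;;
    rhel|centos|rocky|almalinux) echo "nfs-utils" ;;
    *)                           echo "unknown" ;;
  esac
}

nfs_pkg_for ubuntu   # prints nfs-common
nfs_pkg_for rhel     # prints nfs-utils

# On the proxy itself, confirm the NFS client binary is installed before
# assigning the proxy role - a missing client means silent fallback to NBD:
command -v mount.nfs >/dev/null 2>&1 || \
  echo "NFS client missing - install it before assigning the proxy role"
```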

Deploying Proxies on the Infrastructure Appliance

In v13, VMware backup proxies can be deployed on the Veeam Infrastructure Appliance - the same JeOS-based hardened Linux appliance used for the VBR server and Enterprise Manager. Appliance-based proxies use certificate-based authentication, receive centrally managed updates from the VBR server, and have a minimal attack surface compared to a general-purpose OS.

To deploy a proxy on the Infrastructure Appliance, add the appliance as a managed component in Backup Infrastructure, selecting the Infrastructure Appliance option. During appliance setup, select the backup proxy role. The transport service is pre-configured on the appliance, and the proxy appears in the Backup Proxies list after deployment completes.

The Infrastructure Appliance proxy supports the same transport modes as a standard Linux proxy: Direct NFS, Virtual Appliance (HotAdd), and Network. For Direct SAN access, you need either the iSCSI initiator configured on the appliance or a physical machine with FC connectivity - the appliance does not change the underlying storage access requirements.
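Configuring the software iSCSI initiator comes down to the standard open-iscsi workflow. A hedged command sketch - the portal address and target IQN below are placeholders to substitute with your own, and the steps assume open-iscsi is available on the machine:

```shell
#!/bin/sh
# Sketch: connect a Linux software iSCSI initiator to the SAN so Direct SAN
# access becomes possible. Portal IP and IQN are placeholders - replace them.
command -v iscsiadm >/dev/null 2>&1 || { echo "iscsiadm not available"; exit 0; }

# 1. Discover targets exposed by the storage portal (placeholder address):
iscsiadm -m discovery -t sendtargets -p 192.0.2.50

# 2. Log in to the discovered target (placeholder IQN):
iscsiadm -m node -T iqn.2003-01.example:storage.lun1 -p 192.0.2.50 --login

# 3. Confirm an active session - the SAN LUNs should then appear as block devices:
iscsiadm -m session
lsblk
```

Once the LUNs are visible as block devices, add them as connected datastores on the proxy so Veeam can factor them into transport mode selection.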

Upgrading from v12

Existing proxy configurations carry forward from v12 to v13 without changes. The backup server upgrade process updates the Veeam Data Mover Service on all managed proxy machines as part of the upgrade sequence. After the backup server is upgraded, Veeam pushes updated transport components to all registered proxies automatically.

The Infrastructure Appliance proxy deployment is the primary new option in v13 for proxy infrastructure. If your v12 environment uses Windows-based or standard Linux proxies and you want to migrate to appliance-based proxies, the process is: deploy new appliance-based proxies, verify they are working correctly, then remove the old proxy assignments from the existing machines. Data migration is not required - the new proxies start handling new job sessions immediately.

Decision Reference

Scenario                                              | Recommended Proxy Configuration
VMware with shared SAN (FC or iSCSI)                  | Physical proxy or VM with SAN RDM, Direct SAN transport mode
VMware with NFS datastores                            | Linux proxy with NFS client package, Direct NFS transport mode
VMware with VSAN                                      | Virtual proxy VM per ESXi host, Virtual Appliance (HotAdd) mode
VMware with local datastores only                     | Physical proxy with NBD, Network mode (only option for local storage)
Hyper-V, lightly loaded hosts                         | On-host backup (default, no proxy needed)
Hyper-V, heavily loaded hosts with shared SAN storage | Windows off-host backup proxy with shared storage access
Greenfield v13 VMware deployment                      | Infrastructure Appliance proxy for minimal attack surface and centralized management

What You've Completed
  • Added VMware backup proxies with transport mode configured for your datastore types (SAN, NFS, VSAN, or local)
  • Configured concurrent task limits based on CPU core count and expected workload
  • Verified that Automatic selection picks the correct transport mode by checking job session logs after the first backup run
  • For Hyper-V: confirmed on-host backup is working, and optionally deployed off-host proxies for heavily loaded hosts
  • For Linux proxies on NFS: confirmed NFS client package is installed and Direct NFS access is working
  • Deployed proxies close to the source data to minimize backup traffic crossing slow or limited network paths

Proxy deployment is where backup performance is either won or lost. Proxies configured for the wrong transport mode - or placed on machines that cannot reach the source data in the ideal mode - will fall back to NBD and push backup traffic across the management network. Check the transport mode in job session statistics after the first few backup runs. If you see NBD where you expected HotAdd or Direct SAN, look at connectivity from the proxy to the storage. The session log shows exactly which mode was used for each VM disk and why.
