Veeam v13: WAN Accelerator Deployment
What WAN Acceleration Actually Does
WAN acceleration is Veeam's technology for reducing how much data actually travels across the wire during off-site backup copy jobs and replication jobs. It is not a general-purpose network optimizer - it specifically sits in the data path for those two job types and uses global deduplication plus compression to shrink what gets sent over the WAN link.
The mechanism requires a pair of WAN accelerators: one on the source site, one on the target site. The source-side accelerator analyzes outgoing data blocks and compares them against a digest store of previously transferred blocks. Blocks that have already been sent are referenced rather than retransmitted. The target-side accelerator maintains a global cache of received data that all source accelerators feeding into it can draw from. The net effect is that after the first full transfer, subsequent cycles send only genuinely changed data blocks - not just changed blocks relative to the previous job run, but changed blocks relative to everything that accelerator pair has ever transferred for similar workloads.
This is different from incremental backup, which already only captures changed blocks. WAN acceleration adds a second layer: cross-VM, cross-job global deduplication across everything the accelerator pair has processed. In environments where VMs share common OS base images, common application binaries, or common data patterns, the savings can be substantial.
Do You Actually Need It?
WAN acceleration is worth deploying when your off-site backup copy or replication jobs are constrained by WAN bandwidth, and when your backup data contains a significant amount of similar content across VMs. The more homogeneous your VM population - shared OS images, common application stacks - the better the dedup ratios. In environments with highly diverse or incompressible data, the overhead of running WAN accelerators may not pay off.
The Veeam documentation draws a clear line at 100 Mbps: connections faster than that should use High bandwidth mode, which disables global cache and relies only on cross-restore-point deduplication. For connections below 100 Mbps where WAN capacity is the real constraint, Low bandwidth mode with full global cache is the right choice.
Skip WAN acceleration if: your WAN link is fast enough that backup copy jobs complete within their window without it, your data is highly incompressible (databases with encryption, already-compressed media), or you are using Veeam Cloud Connect where the service provider manages WAN acceleration on their side.
WAN acceleration is only applicable to off-site backup copy jobs and replication jobs. It does not apply to initial backup jobs, restore operations, or any other job type. If you are only running primary backups to a local repository, WAN acceleration does not factor into your design at all.
How It Works: Low Bandwidth vs High Bandwidth Mode
Veeam WAN acceleration operates in two distinct modes, and the mode affects both disk space requirements and what deduplication actually happens.
Low bandwidth mode is the full deduplication path. The source accelerator maintains a digest file for every data block it has ever processed. On each job run, it computes digests for outgoing blocks and compares them against the digest store. Matching blocks are not retransmitted - instead, the target accelerator pulls the block from its global cache. This mode requires the target accelerator to maintain a global cache on disk sized at roughly 10 GB per distinct OS type present across all processed VMs. Source disk requirements scale with the number of VMs and their provisioned size.
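The digest-matching mechanism can be modeled in a few lines. This Python sketch is an illustration only - Veeam's actual block format, digest algorithm, and wire protocol are internal to the product. Here SHA-256 stands in for the digest, a dict stands in for the target global cache, and a set stands in for the source digest store.

```python
import hashlib


class TargetCache:
    """Simplified stand-in for the target-side global cache."""

    def __init__(self):
        self.blocks = {}  # digest -> block data

    def store(self, digest, block):
        self.blocks[digest] = block

    def fetch(self, digest):
        return self.blocks[digest]


def transfer(blocks, seen_digests, cache):
    """Model one source-side pass: send only blocks whose digest is new.

    Blocks already in the digest store are referenced, and the target
    pulls them from its cache instead of receiving them again.
    Returns (bytes_sent_over_wan, reconstructed_data).
    """
    sent = 0
    out = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest in seen_digests:
            out.append(cache.fetch(digest))  # reference, not retransmission
        else:
            seen_digests.add(digest)
            cache.store(digest, block)
            sent += len(block)
            out.append(block)
    return sent, b"".join(out)
```

A second "VM" that shares a base-image block with the first sends only its unique blocks, which is the cross-VM global deduplication described above.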
High bandwidth mode drops the global cache entirely and performs deduplication only by referencing previously copied restore points that already exist on the target repository. There is no global cache on the target, so no additional disk space is needed beyond the backup data itself. Deduplication ratios are lower than Low bandwidth mode, but so is the processing overhead. Veeam recommends High bandwidth mode for WAN connections faster than 100 Mbps. Both the source and target WAN accelerators must have High bandwidth mode enabled for it to take effect - if either side has it disabled, the pair falls back to Low bandwidth mode.
If you switch an accelerator pair between modes - from High bandwidth to Low bandwidth or vice versa - Veeam deletes the digest data for the previous mode and rebuilds it for the new mode. During that rebuild cycle, the first job run after the mode switch will behave like a new transfer with no deduplication benefit. Plan mode changes accordingly and do not make them immediately before a large scheduled job window.
Requirements and Sizing
WAN accelerators can run on Windows or Linux machines. In v13, they can also be deployed on the Veeam Infrastructure Appliance (the same JeOS-based appliance used for backup proxies and repositories), which gives you certificate-based authentication and centrally managed updates. The machine must already be added to the Veeam console as a managed server before you can assign it the WAN accelerator role.
The minimum RAM recommendation is 8 GB. WAN acceleration operations are CPU and memory intensive - the source accelerator in particular requires significant resources during digest computation. Do not put a source WAN accelerator on a machine that is already under heavy load from other roles.
| Component | Requirement / Formula | Notes |
|---|---|---|
| OS | 64-bit Windows or Linux. 32-bit not supported. | Can also use Veeam Infrastructure Appliance (JeOS) in v13. |
| RAM | 8 GB minimum recommended | Source accelerator is compute-intensive. More is better for large environments. |
| Source disk (Low bandwidth) | Provisioned VM size x 0.02 = required digest space (GB) | Example: 10 VMs totaling 2 TB provisioned = 40 GB for digest data. |
| Source disk (High bandwidth) | Provisioned VM size x 0.01 = required digest space (GB) | Example: 10 VMs totaling 2 TB provisioned = 20 GB. |
| Target global cache (Low bandwidth) | 10 GB per distinct OS type across all processed VMs | Default allocation is 100 GB. Adjust based on actual OS type count. |
| Target disk (High bandwidth) | No global cache needed | High bandwidth mode does not use global cache. No extra disk required. |
Veeam recommends sizing disk space as if you plan to use Low bandwidth mode, even if you intend to use High bandwidth mode. If conditions change and you switch modes later, you will not need to scramble to add disk. Configure the larger allocation upfront and treat it as insurance.
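The sizing formulas in the table are easy to script. This Python helper is a planning aid based on the figures above (2% of provisioned size for Low bandwidth digests, 1% for High bandwidth, 10 GB of target cache per OS type); the function name and the 1 TB = 1000 GB convention are this guide's assumptions, not Veeam's.

```python
def wan_accelerator_sizing(provisioned_tb, os_type_count, low_bandwidth=True):
    """Estimate WAN accelerator disk needs in GB.

    provisioned_tb: total provisioned size of all VMs the pair will process.
    os_type_count:  distinct OS types across those VMs.
    Returns (source_digest_gb, target_cache_gb). Planning estimates only.
    """
    provisioned_gb = provisioned_tb * 1000
    # Source digest store: 2% of provisioned size (Low), 1% (High).
    source_digest_gb = provisioned_gb * (0.02 if low_bandwidth else 0.01)
    # Target global cache: 10 GB per OS type; High bandwidth mode uses none.
    target_cache_gb = 10 * os_type_count if low_bandwidth else 0
    return source_digest_gb, target_cache_gb
```

Per the recommendation above, size with `low_bandwidth=True` even if you plan to run High bandwidth mode, so a later mode switch does not require adding disk.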
Adding WAN Accelerators
You deploy WAN accelerators through the Backup Infrastructure view in the VBR console. The process is the same for both source and target accelerators - what differs is the machine you point the wizard at and the cache size you configure.
Add the Server to the Managed Server List
Before assigning the WAN accelerator role, the machine must be added as a managed server. In the VBR console, go to Backup Infrastructure > Managed Servers and add the Windows or Linux machine you plan to use. Provide credentials with administrator rights on the target machine. Veeam deploys its transport service during this step.
If you are using the Veeam Infrastructure Appliance, the appliance is added differently - via the Add Component workflow in Backup Infrastructure, selecting the Infrastructure Appliance option. The appliance uses certificate-based authentication rather than username and password.
Launch the New WAN Accelerator Wizard
In the Backup Infrastructure view, right-click WAN Accelerators in the inventory pane and select Add WAN Accelerator. The New WAN Accelerator wizard opens. Select the managed server you added earlier from the server list.
On this same screen, configure the number of upload streams. More streams increase throughput but also increase CPU load on the accelerator. The default is typically adequate for most environments - adjust if you have measured throughput limitations and have spare CPU headroom on the accelerator machine.
Configure Cache Location and Size
Specify the folder where the WAN accelerator will store its data (the VeeamWAN folder). This path must be on a disk with adequate free space per the sizing formulas above. For a source accelerator, this folder holds digest files. For a target accelerator, this folder holds the global cache data.
For the target accelerator, set the global cache size in GB. The default is 100 GB. Adjust this based on how many distinct OS types exist across all VMs you plan to process. If you have Windows Server 2019, Windows Server 2022, and Ubuntu 22.04 VMs, that is three OS types, so 30 GB minimum. Give it headroom beyond the minimum - cache eviction under pressure degrades dedup ratios.
Enable High Bandwidth Mode (Optional)
If your WAN link exceeds 100 Mbps and you want to use High bandwidth mode, check the High bandwidth mode option in the accelerator properties. Remember: both the source and target accelerators in the pair must have this enabled. If only one side has it on, the pair uses Low bandwidth mode.
High bandwidth mode can be configured at any time by editing the accelerator properties. Changing it triggers the digest rebuild process on the next job run - plan accordingly.
Review Components and Apply
The wizard shows the Veeam components that will be installed on the machine. Review and click Apply. Veeam deploys the WAN Accelerator Service on the selected machine. Repeat this entire process for the other site to deploy the paired accelerator.
The WAN accelerator role can be assigned to a machine that is already serving as a backup proxy or backup repository. Sharing roles is supported but not recommended for production environments where WAN acceleration is expected to handle significant throughput. The CPU and memory demands of digest computation can interfere with proxy or repository performance on the same machine.
Configuring a Backup Copy Job to Use WAN Acceleration
WAN acceleration is enabled per job at the job configuration level, not globally. Both new and existing backup copy jobs can be configured to use accelerators.
Open or Create the Backup Copy Job
In the VBR console, go to Home > Backup Copy and either edit an existing job or create a new one using the Backup Copy Job wizard. Navigate to the Target step of the wizard.
Enable WAN Acceleration on the Target Step
On the Target step, check the Enable WAN acceleration checkbox. Two dropdown menus appear: Source WAN accelerator and Target WAN accelerator. Select the WAN accelerators you configured in the previous section - the source accelerator at the site holding the source backup data, the target accelerator at the remote repository site.
Complete and Save the Job
Finish the remaining job wizard steps normally. Save the job. On the next scheduled run, data transfer will route through the WAN accelerator pair instead of the direct path. Check the job session statistics after the first accelerated run - the session log shows data read, data transferred, and the deduplication ratio achieved. The first run will show a low ratio because the global cache is being populated. Subsequent runs show the actual steady-state savings.
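The deduplication ratio in those statistics is simply data read divided by data transferred. A minimal Python sketch for interpreting the figures (the parameter names are illustrative; take the actual values from the job session log):

```python
def dedup_ratio(data_read_gb, data_transferred_gb):
    """Ratio of data read at source to data actually sent over the WAN.

    A ratio near 1.0 on the first accelerated run is expected while the
    global cache populates; judge by the steady-state value afterwards.
    """
    if data_transferred_gb <= 0:
        raise ValueError("data_transferred_gb must be positive")
    return data_read_gb / data_transferred_gb
```

For example, 500 GB read with 100 GB transferred is a 5:1 ratio; a steady-state ratio below roughly 1.1:1 suggests the data is not deduplicating as expected.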
Do not assign a single source WAN accelerator to multiple jobs that run simultaneously. The source accelerator does not process multiple tasks in parallel and requires significant CPU and RAM when active. If you have multiple backup copy jobs that need to run concurrently to the same remote site, either stagger their schedules or create one job that covers all the VMs rather than multiple jobs. The target WAN accelerator, by contrast, can serve multiple source accelerators in parallel.
Cache Management
The global cache on the target accelerator requires occasional maintenance. Veeam populates and manages the cache automatically during normal operations, but there are two situations where you will need to intervene manually.
The first is cache corruption. If a cache becomes corrupt - which can happen after an unclean shutdown or storage failure - the safest path is to clear it and let Veeam repopulate it from the next job run. Corrupted cache data causes failed or degraded transfers. Clear it rather than troubleshoot it.
The second is when you switch workloads. If you retire a set of VMs and replace them with VMs running entirely different OS types, the existing cache data is no longer useful. Clearing the cache removes data for OS types that no longer exist in your environment, freeing space and allowing the cache to be rebuilt with data relevant to the new workload.
To clear the cache: open Backup Infrastructure, click WAN Accelerators in the inventory pane, right-click the target accelerator, and select Clear cache. The cache is cleared immediately. The next job run repopulates it from scratch, meaning the first run after clearing will have no deduplication benefit on the target side.
Many-to-One WAN Acceleration
A single target WAN accelerator can serve multiple source accelerators from different sites simultaneously. This is the standard architecture for hub-and-spoke deployments where multiple branch offices or remote sites all replicate back to a central data center.
The target accelerator maintains a shared global cache that all source accelerators can benefit from. When source accelerator A sends a block of data, that block goes into the global cache. When source accelerator B later sends the same block - from a different source site but with the same underlying data - it is pulled from cache rather than transmitted again. This cross-site global deduplication is the main advantage of the many-to-one topology.
The practical constraint is that the target accelerator's CPU and RAM must be scaled to handle the concurrent load from all connected source accelerators. For each additional source site, add compute capacity to the target accelerator. Monitor CPU and memory utilization on the target accelerator during peak job windows and scale accordingly.
In a many-to-one topology, the global cache size on the target should account for all OS types across all connected source sites, not just one. If three source sites each have three distinct OS types but share some in common, count unique OS types across all sites to calculate the target cache size requirement.
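Counting unique OS types across sites is a set-union calculation. A small Python helper (the function name and the 10 GB per-OS default are assumptions carried over from the sizing table above):

```python
def target_cache_size_gb(os_types_per_site, gb_per_os=10):
    """Size the shared target global cache for a many-to-one topology.

    os_types_per_site: iterable of sets, one set of OS type labels per
    source site. OS types shared between sites count once, because the
    target cache is shared across all sources.
    """
    unique_os_types = set().union(*os_types_per_site)
    return len(unique_os_types) * gb_per_os
```

Three sites with overlapping OS mixes may need far less cache than naively summing per-site counts would suggest.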
Upgrading from v12
If you have existing WAN accelerators from a v12 deployment, they carry over to v13 without reconfiguration. The accelerator role remains assigned to the same machines, existing cache data is preserved, and backup copy jobs continue using the same accelerator pair after the VBR upgrade completes.
The new capability in v13 is the Infrastructure Appliance deployment option for WAN accelerators. If you want to migrate an existing WAN accelerator running on a Windows or Linux machine to the JeOS appliance, this requires deploying a new appliance, adding the new WAN accelerator, updating your backup copy jobs to reference the new accelerator, and then removing the old accelerator from the infrastructure. There is no in-place migration from an existing OS install to the appliance - it is a new deployment with job reconfiguration.
For most teams upgrading from v12, leaving the existing WAN accelerators in place on their current machines is the right call. The appliance option is a consideration for new greenfield deployments or when you are already rebuilding accelerator infrastructure for other reasons.
Decision Reference
Use this table to match your scenario to the right WAN accelerator configuration.
| Scenario | Recommendation | Notes |
|---|---|---|
| WAN link under 100 Mbps, homogeneous VMs | Deploy WAN accelerators, use Low bandwidth mode | Maximum dedup benefit. Size global cache at 10 GB per OS type. |
| WAN link over 100 Mbps | Deploy WAN accelerators, enable High bandwidth mode on both sides | No global cache overhead. Dedup from previous restore points only. |
| Multiple branch sites copying to one central repo | One target WAN accelerator, one source per site | Target global cache is shared across all sources. Scale target compute for concurrent load. |
| Single source WAN accelerator, multiple backup copy jobs | Stagger job schedules, or consolidate VMs into one job | Source accelerator handles one task at a time; concurrent jobs queue rather than run in parallel. |
| Highly incompressible or already-encrypted data | Skip WAN acceleration | Dedup ratios will be negligible. Overhead not worth the benefit. |
| Veeam Cloud Connect target | Configure per service provider instructions | SP manages target accelerator. Tenant configures source only. |
Deployment Checklist
- Added a source WAN accelerator at the primary site using the New WAN Accelerator wizard
- Added a target WAN accelerator at the remote site with global cache sized for your OS type count
- Configured High bandwidth or Low bandwidth mode based on your WAN link speed
- Enabled WAN acceleration on backup copy jobs by selecting the source and target accelerator pair
- Verified deduplication ratios in job session statistics after the first accelerated run
- Understood cache management and when to clear global cache on the target accelerator
WAN acceleration pays off when the data going across your WAN link is genuinely similar across jobs and VMs, and when your WAN link is the bottleneck. The first run after deploying accelerators will not show dramatic savings - global cache is empty and digests are being built. Give it two or three job cycles before evaluating the actual dedup ratio. Once the cache is populated, the steady-state ratio is the number that matters. If it is consistently below 1.1:1 on data that should be deduplicable, review whether your data is compressible and whether the VM population is as homogeneous as you expected.