The End of the Windows Proxy? Performance Scaling on Linux

Veeam v13 - Proxy Architecture and Security

For most of Veeam's history, a Windows server was the default choice for a backup proxy - familiar, well-documented, easy to justify. v13 changes that calculus. The Veeam Infrastructure Appliance delivers a fully Linux-based proxy on hardened JeOS, speaking gRPC instead of RPC and WMI, and scaling NBD throughput with multi-TCP NFS. The question is not whether Linux proxies can perform. It is whether there is still a reason to put a Windows server in your DR site data path.


What Changed in v13 - The Protocol Layer

The most architecturally significant change in v13 is not the appliance form factor - it is the transport protocol. v13 eliminates Microsoft RPC and WMI for communication between backup infrastructure components and replaces them with gRPC. This is not an incremental improvement. RPC and WMI are Windows-native protocols that required Windows on both ends of the communication path, carried a wide port exposure (RPC uses dynamic high port ranges above 49152 in addition to port 135 for endpoint mapping), and have a long history of exploitation in lateral movement attacks. NTLM authentication goes with them - deprecated in v13 in favor of Kerberos.

gRPC runs over HTTP/2, uses TLS by default, and operates on a single well-defined port. It is cross-platform - the same protocol runs identically on a Linux proxy, a Windows proxy, or the VSA itself. The practical result is that a Linux proxy in v13 is a first-class infrastructure component with the same communication model as everything else in the stack, not a second-class citizen patched into a Windows-centric architecture.

Why This Matters for DR Sites Specifically

Your DR site proxy is in the data path for replication traffic and instant recovery. It is also typically a more loosely managed environment than primary infrastructure - DR sites accumulate technical debt. A Windows proxy in a DR site is a Windows server that needs patching, AV management, Windows licensing, and has RPC and WMI exposed on the internal network. A VIA-based Linux proxy has none of that surface. SSH is off by default. The attack surface reduction is not marginal.

The Windows Proxy Attack Surface - What You Are Actually Running

A Windows-based backup proxy in a pre-v13 environment carries a substantial attack surface that most backup engineers do not think about explicitly because it is just part of the background noise of Windows infrastructure management.

RPC dynamic port ranges require broad firewall rules between the backup server and proxy - either an open high-port range or a stateful inspection rule set that tracks RPC port negotiation. WMI runs on top of RPC and adds its own exposure. NTLM authentication is active by default and is vulnerable to relay attacks. The Windows proxy needs a service account with local administrator rights on the proxy itself. It needs AV exclusions configured correctly or backup performance degrades. It needs Windows Update managed so patches do not break proxy operation mid-backup window. It needs the Windows Firewall configured to allow Veeam's port list. Each of those is a management surface and a potential misconfiguration.
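The difference in firewall exposure can be made concrete with a quick port count. The ranges below are the defaults discussed above (RPC endpoint mapper on 135, dynamic high ports 49152-65535, Veeam transport 2500-3300); the gRPC control port is shown as a single placeholder entry, and exact values vary per environment, so treat this as an illustrative sketch rather than a firewall template.

```python
# Illustrative comparison of inbound TCP ports a firewall must allow
# for each proxy model. Ranges are the defaults discussed in the text;
# actual deployments may differ.

def port_count(ranges):
    """Total number of TCP ports covered by a list of (start, end) ranges."""
    return sum(end - start + 1 for start, end in ranges)

# Pre-v13 Windows proxy: RPC endpoint mapper + dynamic high ports + Veeam transport
windows_rules = [(135, 135), (49152, 65535), (2500, 3300)]

# v13 VIA Linux proxy: Veeam transport range plus one gRPC control port
# (the control port number here is a placeholder, not a documented value)
linux_rules = [(2500, 3300), (10006, 10006)]

print(port_count(windows_rules))  # 1 + 16384 + 801 = 17186 allowed ports
print(port_count(linux_rules))    # 801 + 1 = 802 allowed ports
```

The absolute numbers matter less than the shape: one model needs a 16,000-port dynamic range that cannot be meaningfully audited, the other a fixed range you can read off a single rule.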

None of this makes Windows proxies inherently insecure - well-managed Windows infrastructure is fine. The point is that the management overhead and attack surface are real costs that accrue at every site where you deploy a proxy, and your DR site is typically the least-managed site in your environment.

| Surface | Windows Proxy (pre-v13 model) | VIA Linux Proxy (v13) |
| --- | --- | --- |
| Transport protocol | RPC / WMI (dynamic ports) | gRPC over HTTP/2, TLS, single port |
| Authentication | NTLM (deprecated in v13) / Kerberos | Kerberos / certificate-based |
| Remote shell exposure | RDP, WinRM enabled by default | SSH disabled by default |
| Firewall rule complexity | Dynamic RPC port range + Veeam ports | Veeam transport ports only (2500-3300 TCP) |
| OS patch management | Windows Update, manual or WSUS | Automated via VIA update mechanism |
| AV management required | Yes - exclusions required for performance | No - minimal package set, no AV needed |
| Windows licensing cost | Windows Server license per proxy | None |
| SELinux / kernel hardening | N/A | SELinux enforcing mode, JeOS baseline |

Performance: What the Numbers Actually Say

The performance story in v13 is driven by two changes that apply equally to Windows and Linux proxies, plus one Linux-specific improvement that changes the NBD scaling equation.

  • 30% - CPU reduction on proxies and agents from BLAKE3 hashing vs. the previous MD5-based algorithm
  • 2x - Agent backup throughput improvement on the same hardware
  • ~25% - Linux proxy NFS v3 backup speed improvement via multi-TCP connections
  • 50% - Instant Recovery I/O throughput improvement - VMs running directly from backup

The BLAKE3 and agent throughput gains are platform-agnostic - a Windows proxy gets them too. The 30% CPU reduction matters most when the proxy is CPU-bound during deduplicated backup streams, which is common in environments running many concurrent tasks on undersized proxy hardware.

The ~25% NFS v3 improvement via multi-TCP is Linux-specific. It applies to NBD mode proxy operations where the proxy is pulling VM data over the network from ESXi hosts using NFS. In environments where NBD is the primary transport mode - which is the majority of deployments that cannot use HotAdd or Direct SAN - this is a direct throughput improvement for every backup job running through the Linux proxy.
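Why multiple TCP connections help can be pictured with a toy throughput model: when a single TCP stream to the datastore tops out below link speed (per-stream window limits, a single receive queue), adding streams raises aggregate throughput until the NIC saturates. The per-stream and NIC figures below are illustrative assumptions, not Veeam measurements.

```python
# Toy model: aggregate NFS throughput vs. number of TCP connections.
# per_stream_cap and nic_cap are illustrative assumptions, not measured values.

def aggregate_throughput(connections, per_stream_cap=400, nic_cap=1250):
    """Aggregate MB/s: each TCP stream tops out at per_stream_cap,
    and the total is capped by the NIC (~1250 MB/s for 10 GbE)."""
    return min(connections * per_stream_cap, nic_cap)

for n in (1, 2, 4):
    print(n, aggregate_throughput(n))  # 1 -> 400, 2 -> 800, 4 -> 1250 (NIC-bound)
```

The real-world gain depends on where the single-stream bottleneck sits, which is why the documented improvement is a measured ~25% rather than a clean multiple.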

The proxy sizing formula also changed between v12 and v13. v13 Linux proxies need 1 core per 2 concurrent tasks, where v12 required 1 core per task. The same hardware runs twice as many concurrent tasks at the proxy level. Combined with the BLAKE3 CPU reduction, a proxy that was running at 80% CPU utilization under v12 workloads may have significant headroom under v13 without any hardware change.
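The sizing change can be expressed directly. A minimal sketch of the per-task core math, using the 1-core-per-task (v12) and 1-core-per-2-tasks (v13) ratios stated above:

```python
import math

def proxy_cores(concurrent_tasks, version=13):
    """Cores needed at the proxy for a given concurrent task count.
    v12: 1 core per task; v13: 1 core per 2 tasks."""
    tasks_per_core = 2 if version == 13 else 1
    return math.ceil(concurrent_tasks / tasks_per_core)

print(proxy_cores(8, version=12))  # 8 cores under v12
print(proxy_cores(8, version=13))  # 4 cores under v13 - same work, half the cores
```

Put the other way around: the same 8-core proxy that handled 8 concurrent tasks under v12 can be registered for 16 under v13 before the per-core rule becomes the constraint.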

Scaling the Linux Proxy Pool at the DR Site

The VIA form factor changes how you think about proxy scaling at a DR site. A Windows proxy required planning - you provision a Windows VM, license it, harden it, configure AV exclusions, add it to the domain or configure a local service account, then register it in VBR. The lead time is hours to days depending on your provisioning process. A VIA proxy deploys from an OVA or ISO, boots into a preconfigured JeOS baseline, and registers in VBR with no post-deployment OS configuration. The provisioning time is minutes.

That changes the scaling model. Instead of sizing a single large Windows proxy VM and hoping it handles peak replication and restore concurrency at the DR site, you can deploy multiple smaller VIA proxies and let Veeam's built-in load balancing distribute tasks across them. Veeam dispatches backup and restore tasks to proxies using an automatic load balancing algorithm - adding a proxy to the pool increases available concurrent task capacity immediately. Adding a second VIA proxy at the DR site to handle instant recovery load during a failover event is a ten-minute operation, not a provisioning project.
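Veeam's actual dispatch logic weighs more factors than slot counts (transport mode, datastore reachability), but the scale-out effect can be sketched with a simple least-busy dispatcher: adding a proxy to the pool immediately adds usable task slots. The proxy names and slot counts below are hypothetical.

```python
# Conceptual sketch of least-busy task dispatch across a proxy pool.
# This illustrates the scale-out effect only - it is NOT Veeam's algorithm,
# which also considers transport mode and datastore locality.

def dispatch(tasks, proxies):
    """Assign each task to the proxy with the most free slots.
    proxies: dict of proxy name -> max concurrent task slots."""
    load = {name: 0 for name in proxies}
    assignment = {}
    for task in tasks:
        # Pick the proxy with the largest remaining capacity
        name = max(load, key=lambda p: proxies[p] - load[p])
        if load[name] >= proxies[name]:
            raise RuntimeError("pool at capacity - add another proxy")
        load[name] += 1
        assignment[task] = name
    return assignment

pool = {"via-proxy-1": 4, "via-proxy-2": 4}   # two small VIA proxies
jobs = [f"restore-{i}" for i in range(8)]
print(dispatch(jobs, pool))  # 8 tasks spread 4/4 across the pool
```

Registering a third proxy is just another entry in the pool dict here; in VBR it is the same idea - new capacity is dispatchable as soon as the proxy registers.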

# VIA proxy sizing reference - v13
# Minimum: 2 vCPU, 4GB RAM
# Per concurrent task: +0.5 vCPU, +500MB RAM (1 core per 2 tasks)

# Example: DR site proxy handling 8 concurrent restore tasks
vCPU: 6        # 2 base + 4 (8 tasks / 2)
RAM: 8GB       # 4GB base + 4GB (8 tasks * 500MB)

# Scale-out alternative: 2x VIA proxies at 4 tasks each
# Same total capacity, better fault tolerance, faster to provision
proxy_1: 4 vCPU / 6GB    # 4 concurrent tasks
proxy_2: 4 vCPU / 6GB    # 4 concurrent tasks
# VBR load balances across both automatically

The DR Site Security Argument

DR sites are where backup infrastructure security gets sloppy. The primary site has change management, regular patch cycles, and security team visibility. The DR site has infrastructure that "just works" and gets touched twice a year for DR tests. The Windows proxy installed three years ago may be running an outdated patch level, have RDP enabled, and be using a service account password that has not been rotated since initial deployment.

Ransomware operators understand this. DR infrastructure is a valuable lateral movement target - compromise the backup proxy and you are in the data path for replication and recovery. A VIA-based Linux proxy with SSH disabled, gRPC-only communication, automated patching, and a JeOS baseline that contains nothing except what Veeam needs is a fundamentally harder target than a general-purpose Windows server.

The gRPC transport layer removes the RPC dynamic port exposure from your DR site firewall rules. Instead of maintaining a stateful RPC inspection rule or a broad high-port allow, you have a specific TCP range for Veeam transport (2500-3300) and the gRPC control port. That is a firewall rule set you can audit, explain to a security team, and maintain across DR test cycles without it drifting.

The VMware vSphere Guest Interaction Proxy Requirement

v13 added Linux support for the guest interaction proxy role across most platforms - but for VMware vSphere specifically, the guest interaction proxy must still be a Windows machine. If your environment runs vSphere workloads with application-aware processing enabled, you need at least one Windows machine in the guest interaction proxy role for those jobs. Hyper-V and other platforms can use a Linux guest interaction proxy. For mixed environments with vSphere in scope, a hybrid model (VIA proxies for transport, a single minimal Windows guest interaction proxy for vSphere guest processing) remains the practical architecture.

The Migration Path

Moving from Windows proxies to VIA proxies does not require a cutover event. VBR supports mixed proxy pools - Windows and Linux proxies coexist and both receive tasks from the load balancer. The migration path is additive: deploy VIA proxies alongside existing Windows proxies, validate they pick up tasks correctly in the load balancing pool, then decommission Windows proxies once you have confirmed the Linux proxies are handling the workload. No backup window disruption, no job reconfiguration.

At the DR site specifically, the case for leading with VIA on new deployments is straightforward. Any proxy you add to the DR site today should be a VIA proxy. The Windows proxy that is already there can stay in the pool until it comes up for refresh, at which point it does not get replaced with another Windows server.

Key Takeaways
  • v13 replaces RPC and WMI with gRPC across the entire backup infrastructure communication layer. gRPC runs over HTTP/2 with TLS on a single port - a fundamentally smaller firewall footprint and attack surface than the RPC dynamic port model.
  • NTLM is deprecated in v13 in favor of Kerberos. Combined with the protocol change, the Windows proxy's inherited authentication exposure is eliminated on the Linux path.
  • Performance improvements are real and documented: up to 30% CPU reduction from BLAKE3 hashing, approximately 25% NBD throughput improvement on Linux proxies via multi-TCP NFS, and a proxy task density improvement (1 core per 2 tasks vs. 1 core per task in v12).
  • The VIA proxy deploys from OVA or ISO with no post-deployment OS configuration. Adding proxy capacity at the DR site is a minutes-long operation, not a Windows provisioning project.
  • The one remaining vSphere-specific Windows requirement: for VMware vSphere workloads, the guest interaction proxy role must be a Windows machine. v13 added Linux guest interaction proxy support for other platforms. Mixed vSphere environments need one Windows machine in that role - everything else in the proxy pool can be VIA.
  • DR sites accumulate security debt. A VIA proxy with SSH disabled, automated patching, and gRPC-only transport is a structurally harder target than a general-purpose Windows server that has been running untouched for two years.
