Veeam + HPE Morpheus VM Essentials: Multi-VLAN Architecture and Integration Guide

You finished installing HPE Morpheus VM Essentials and now you're staring at a network diagram with VLANs drawn all over it. VBR lives on one segment. StoreOnce sits on another with a dedicated data IP. HVM clusters are on a third. The question is not whether Veeam can protect HVM workloads -- the plug-in went GA in March 2026 and the integration is clean. The question is whether your inter-VLAN routing and firewall rules will let the right traffic flow between the right components. That's what this article is actually about.

What the Veeam Plug-in for HPE Morpheus VM Essentials Actually Does

The Veeam plug-in for HPE Morpheus VM Essentials went GA with VBR 13.0.1. It provides agentless, host-based, image-level backup for VMs running on the HVM hypervisor -- the KVM-based engine that powers all HPE Morpheus tiers from VM Essentials through Private Cloud Enterprise. No guest agents. No separate virtual appliance. The plug-in installs natively on your VBR server and communicates directly with the HPE Morpheus Manager control plane to enumerate VMs, coordinate checkpoints, and drive backup and restore workflows.

Two components do the actual work:

Plug-in Module (on VBR Server)
Registers HVM infrastructure with VBR
Orchestrates backup and restore jobs
Syncs application-consistent checkpoints
Manages the worker lifecycle via Morpheus Manager API
HVM Workers (ephemeral Linux VMs)
Launched per-host for the duration of a job
Run Veeam Data Mover services
Handle data path to the repository
Configured as backup proxies in VBR

Workers are the key concept to understand before you think about multi-VLAN design. Each worker is a temporary Linux VM that VBR spins up on an HVM cluster node for the duration of a backup or restore job. Workers are spawned through the Morpheus Manager API -- not directly via SSH -- and they need to reach VBR on your management VLAN and reach the backup repository on your data VLAN simultaneously. That dual-network requirement is where most multi-VLAN deployments get tripped up.

Version Requirement

The HPE Morpheus plug-in requires VBR 13.0.1 or later. If you're still on 13.0, update first. The plug-in is not backported and is installed separately from the main VBR installer.

Planning Your Multi-VLAN Architecture

The scenario Subhadip raised in the Veeam community -- VBR in one VLAN, StoreOnce on a separate data network, HVM clusters on a third -- is a realistic and common enterprise layout. Here's how to think through it before you touch a single firewall rule.

Mapping Your Components to VLANs

Start by placing each component. This is the reference architecture you're building toward:

Management VLAN (e.g., VLAN 10)
VBR Server
HPE Morpheus Manager
Worker management NIC
Backup Data VLAN (e.g., VLAN 20)
HPE StoreOnce Data IP
Worker data NIC
StoreOnce Catalyst store
HVM Cluster VLAN (e.g., VLAN 30)
HVM hypervisor hosts
Production VM traffic
Worker spawn point
VM Workload VLAN(s)
Guest VMs being protected
Application traffic only
No backup infra here
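Before writing any firewall rules, it can help to sanity-check a layout like this as data. The sketch below models the reference placement above and verifies two invariants: workers end up dual-homed (management plus data VLAN) and no backup infrastructure lands on a workload VLAN. The VLAN IDs and component names are illustrative values from this article, not anything VBR or Morpheus Manager reads.

```python
# Sketch: model the reference VLAN placement and check two invariants.
# VLAN IDs and component names are illustrative, matching the example above.

PLACEMENT = {
    10: {"name": "management",  "components": ["vbr-server", "morpheus-manager", "worker-mgmt-nic"]},
    20: {"name": "backup-data", "components": ["storeonce-data-ip", "worker-data-nic", "catalyst-store"]},
    30: {"name": "hvm-cluster", "components": ["hvm-hosts", "worker-spawn-point"]},
    40: {"name": "vm-workload", "components": ["guest-vms"]},
}

def worker_vlans(placement):
    """Return the set of VLAN IDs that carry a worker NIC."""
    return {vlan for vlan, seg in placement.items()
            if any(c.startswith("worker-") and c.endswith("-nic")
                   for c in seg["components"])}

def workload_is_clean(placement):
    """True if no backup infrastructure sits on any VM workload VLAN."""
    backup_infra = {"vbr-server", "storeonce-data-ip", "catalyst-store",
                    "worker-mgmt-nic", "worker-data-nic"}
    return all(not backup_infra & set(seg["components"])
               for seg in placement.values()
               if seg["name"] == "vm-workload")
```

A worker must appear on exactly the management and backup data VLANs -- if `worker_vlans` returns anything else, the dual-NIC design described below isn't reflected in your plan.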
Critical Design Point

Workers are spawned inside the HVM cluster but need to talk outward on two separate paths: back to VBR for control traffic, and forward to StoreOnce for data. This means each worker VM requires two network interfaces -- one connected to the management VLAN and one connected to the backup data VLAN. VBR lets you configure multiple NICs per worker and assign VLAN IDs directly in the worker settings. Don't skip this step.

How VBR Configures Workers for Multi-VLAN

When you add an HVM worker in VBR under Backup Infrastructure > Backup Proxies > Add Proxy > HPE Morpheus VM Essentials Worker, the network configuration step lets you add multiple network interfaces to each worker template. For each interface you can specify:

  1. The HVM network to attach the interface to (this maps to your VLAN-backed virtual network in Morpheus Manager).
  2. A VLAN ID if the target network is trunked rather than access-mode at the hypervisor layer.
  3. Static IP configuration if DHCP isn't available on that segment -- and on dedicated backup data VLANs, it often isn't.
  4. Interface order, which controls the routing preference when the worker has multiple paths available.

For the StoreOnce scenario, attach the first worker NIC to the management VLAN (so the plug-in can communicate with the worker on TCP 443 and 19000), and the second to the backup data VLAN (so data flows directly from the worker to the StoreOnce data IP without hairpinning through management).
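The wizard settings above amount to an ordered list of interfaces per worker template. Here's a sketch of the two-NIC configuration for the StoreOnce scenario; the field names and addresses are illustrative, not the plug-in's actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WorkerNic:
    """One worker interface, mirroring the wizard fields (names illustrative)."""
    hvm_network: str          # VLAN-backed virtual network in Morpheus Manager
    vlan_id: Optional[int]    # set only if the network is trunked at the hypervisor
    static_ip: Optional[str]  # None means DHCP is available on that segment
    order: int                # lower order = preferred route when paths overlap

# Two-NIC worker: NIC 1 on management (plug-in control on 443/19000),
# NIC 2 on the backup data VLAN (Catalyst traffic straight to the data IP).
worker_nics = [
    WorkerNic("mgmt-net",        vlan_id=10, static_ip=None,            order=1),
    WorkerNic("backup-data-net", vlan_id=20, static_ip="10.20.0.51/24", order=2),
]

# Interface order decides which path wins when both could carry the traffic.
preferred = min(worker_nics, key=lambda n: n.order)
```

Note the static IP on the data NIC: dedicated backup VLANs frequently run without DHCP, and a predictable worker address also lets you tighten the firewall rules discussed later.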

Required Traffic Flows

This is the section that saves you from a 3 AM firewall debugging session. Every arrow here needs to be an explicit permit rule if you're running stateful ACLs or a next-gen firewall between VLANs.

VBR to HPE Morpheus Manager (Control Plane)

VBR Server -> HPE Morpheus Manager : TCP 443 (HTTPS)

VBR talks to Morpheus Manager over HTTPS to enumerate clusters, hosts, VMs, and datastores. The plug-in also uses this connection to orchestrate worker creation and teardown -- workers are not deployed via SSH. If Morpheus Manager is on your management VLAN alongside VBR, this traffic is local. If it's on a separate management or out-of-band VLAN, open 443 between them.

Plug-in to Workers (Management)

Plug-in (VBR Server) -> Worker (management NIC) : TCP 443
Plug-in (VBR Server) -> Worker (management NIC) : TCP 19000
Plug-in (VBR Server) -> VBR Server (internal) : TCP 6172

The plug-in maintains management connectivity with each worker over TCP 443 and 19000. TCP 6172 is the management channel between the plug-in module and the VBR server itself -- this is internal to the VBR host and doesn't cross VLANs, but your local firewall on the VBR server needs to allow it. Because workers are ephemeral, your inter-VLAN rules need to permit TCP 443 and 19000 from the management VLAN source to wherever workers materialize. If you assign static IPs to workers, you can tighten these rules to those specific hosts.

Workers to Repository (Data Path)

Worker (management NIC) -> Repository : TCP 2500-3300

Workers use TCP 2500-3300 as the data transmission channel range when pushing backup data to a repository. One port from this range is assigned per active job connection. For a StoreOnce Catalyst repository, data also flows over the Catalyst protocol ports -- see the StoreOnce section below.

Workers to HPE StoreOnce (Catalyst Data Path)

Worker (data NIC) -> StoreOnce Data IP : TCP 9387 (command)
Worker (data NIC) -> StoreOnce Data IP : TCP 9388 (data)

This is the Catalyst protocol pair. 9387 is the command channel; 9388 carries the actual backup data. Both must be open from the worker data NIC subnet to the StoreOnce data IP. If StoreOnce has a dedicated backup data interface -- and it should -- target that IP specifically, not the management IP. Deduplication happens source-side in Catalyst, so the data hitting 9388 is already deduplicated before it leaves the worker. That keeps your data VLAN traffic lean.
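Before the first job runs, it's worth verifying from a host on the worker data subnet that both Catalyst ports actually answer. A generic TCP preflight sketch follows; the StoreOnce address is a placeholder from the TEST-NET range, so substitute your real data IP.

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connect; True if the port accepts within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    storeonce_data_ip = "192.0.2.20"  # placeholder -- substitute your StoreOnce data IP
    for port, role in [(9387, "Catalyst command"), (9388, "Catalyst data")]:
        state = "open" if port_open(storeonce_data_ip, port) else "BLOCKED"
        print(f"{storeonce_data_ip}:{port} ({role}): {state}")
```

Run it from the worker data subnet, not from the VBR server -- a check that passes from the management VLAN proves nothing about the path the workers will actually use.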

VBR to StoreOnce (Repository Management)

VBR Server -> StoreOnce (mgmt IP) : TCP 9387, 9388
VBR Server -> StoreOnce (mgmt IP) : TCP 443

VBR needs to reach StoreOnce on 9387/9388 for repository operations and on 443 for the management REST API. If VBR can't reach StoreOnce at all, you won't get past the Add Repository wizard. Route this through your inter-VLAN firewall from the VBR management VLAN to the StoreOnce management VLAN, which may differ from the StoreOnce data IP subnet.

Port Reference Summary

Source                 Destination           Port(s)              Purpose
VBR Server             Morpheus Manager      TCP 443              Control plane API + worker orchestration
Plug-in (VBR Server)   Worker (mgmt NIC)     TCP 443              Worker management connectivity
Plug-in (VBR Server)   Worker (mgmt NIC)     TCP 19000            Worker management connectivity
Plug-in (internal)     VBR Server (local)    TCP 6172             Plug-in to VBR server communication (local)
Worker (mgmt NIC)      Repository            TCP 2500-3300        Data transmission channels
Worker (data NIC)      StoreOnce Data IP     TCP 9387             Catalyst command channel
Worker (data NIC)      StoreOnce Data IP     TCP 9388             Catalyst data channel
VBR Server             StoreOnce (mgmt IP)   TCP 9387, 9388, 443  Repository management + API
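If you manage the inter-VLAN policy as code, the table collapses to a small rule matrix you can render into your firewall's syntax. A sketch that emits generic permit lines follows (adapt the output format to your ACL platform); TCP 6172 is local to the VBR host, so it's omitted from the inter-VLAN set.

```python
# Each tuple: (source, destination, ports, purpose) -- taken from the port table.
FLOWS = [
    ("vbr-server",  "morpheus-manager",  [443],                    "control plane API + worker orchestration"),
    ("vbr-server",  "worker-mgmt-nic",   [443, 19000],             "worker management connectivity"),
    ("worker-mgmt", "repository",        list(range(2500, 3301)),  "data transmission channels"),
    ("worker-data", "storeonce-data-ip", [9387, 9388],             "Catalyst command + data"),
    ("vbr-server",  "storeonce-mgmt-ip", [443, 9387, 9388],        "repository management + API"),
]

def permit_lines(flows):
    """Render one generic 'permit tcp' line per flow, compressing port ranges."""
    lines = []
    for src, dst, ports, purpose in flows:
        if len(ports) > 2 and ports == list(range(ports[0], ports[-1] + 1)):
            spec = f"{ports[0]}-{ports[-1]}"      # contiguous range
        else:
            spec = ",".join(str(p) for p in ports)
        lines.append(f"permit tcp {src} -> {dst} ports {spec}  ! {purpose}")
    return lines
```

With static worker IPs, tighten `worker-mgmt` and `worker-data` from subnet-wide objects to host objects -- that's the payoff of skipping DHCP on the data VLAN.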

Configuring the StoreOnce Repository for HVM Backups

StoreOnce Catalyst is not a standard NFS or CIFS share. It's a deduplication protocol with its own client identity model. When you create a Catalyst store for Veeam, you set a client identifier -- Veeam uses Veeam as the client name by default. If you're using Client Access Permission Checking on StoreOnce, make sure an entry exists for that client name with the password you'll enter in the VBR Add Repository wizard. Mismatches here fail in ways that look like network problems, not auth problems. Check this first before you start opening firewall tickets.

Datastore Requirement

The Veeam plug-in for HPE Morpheus VM Essentials requires at least one datastore of type Directory Pool, NFS Pool, or GFS2 Pool configured in Morpheus Manager. If no qualifying datastore is present, the plug-in cannot enumerate storage and the integration will fail at the add-server stage. Verify this before you start the VBR configuration wizard.

For the multi-VLAN design, add StoreOnce to VBR using the management IP. Then, in the repository's advanced settings, point the data transfer path to the StoreOnce data IP on your backup VLAN. This splits control traffic from data traffic and keeps backup I/O off your management network.

Worker Sizing and Concurrent Job Design

Workers are ephemeral but they consume real HVM cluster resources while they're running. VBR deploys one worker per host in the cluster for the duration of each job. If you have a 4-node HVM cluster running a backup job, you will have up to 4 live worker VMs -- one per node. Each worker needs enough vCPU and RAM to saturate the backup data path without starving the production VMs sharing that host.

When you configure the worker in VBR, the default is 4 concurrent tasks per worker VM. When you change that value, VBR automatically adjusts the vCPU and RAM allocated to the worker -- you don't have to do the math manually. If you want to override that and set resources explicitly, use the advanced settings option in the worker wizard. If you see backup throughput plateau early, check whether the worker is CPU-bound on compression or network-bound on the data NIC. CBT (Changed Block Tracking) is supported natively and will substantially reduce how much data each worker processes after the initial full backup completes.
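The per-job footprint follows directly from the one-worker-per-host rule and the task count. A quick sketch of the arithmetic, using only the figures stated above (one worker per host, default 4 concurrent tasks per worker):

```python
def job_footprint(cluster_hosts: int, tasks_per_worker: int = 4):
    """Workers and concurrent task slots a single job can bring up.

    One ephemeral worker VM per cluster node; the default of 4
    concurrent tasks per worker matches the VBR worker wizard.
    """
    workers = cluster_hosts
    task_slots = workers * tasks_per_worker
    return workers, task_slots

# 4-node HVM cluster with the default task count:
workers, slots = job_footprint(4)
# -> 4 workers, 16 concurrent task slots across the cluster
```

Raising `tasks_per_worker` raises each worker's vCPU and RAM allocation automatically, so the number to watch is the total slot count against what your cluster can spare while production VMs keep running.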

Adding HPE Morpheus VM Essentials to VBR

Once your network is in place, the registration sequence in VBR is straightforward:

  1. In VBR, go to Backup Infrastructure > Managed Servers and click Add Server. Select Virtualization Platforms, then choose HPE Morpheus VM Essentials from the list.
  2. Enter the Morpheus Manager FQDN or IP and provide credentials for an account with the System Admin role. VBR connects over HTTPS to pull the cluster and host inventory. If this step fails, it's almost always a firewall rule missing on TCP 443 or a certificate trust issue.
  3. At the snapshot storage step, choose whether to keep VM snapshots in a specific datastore or the largest available file-level datastore. Then wait for the manager to be added to the backup infrastructure. Verify that HPE Morpheus VM Essentials appears in the Managed Servers inventory tree.
  4. Go to Backup Infrastructure > Backup Proxies, click Add Proxy, and select HPE Morpheus VM Essentials Worker. Choose the target cluster and configure the worker template -- name prefix, resource allocation, and network interfaces. Add both NICs here before you save.
  5. Add the StoreOnce Catalyst store as a backup repository under Backup Infrastructure > Backup Repositories. Use the management IP for the initial connection, then set the data IP in advanced settings if you're splitting traffic across VLANs.
  6. Create a backup job, select Virtual machines, and browse to your HVM cluster in the inventory. Select the VMs you want to protect and configure storage, guest processing, and scheduling options as you would for any other hypervisor job.

Application Consistency on HVM

The plug-in supports application-consistent image-level backups with VSS integration for Windows guests. That means full Veeam Explorer support for Microsoft Exchange, SQL Server, Active Directory, and Oracle is available for HVM-hosted Windows VMs -- the same depth of application recovery you get from a vSphere job. Linux guests get filesystem-consistent quiescing rather than VSS.

Application-aware processing communicates with the guest OS during the backup job over TCP 135, 445, and 6162 for Windows VMs. If your HVM cluster VLAN doesn't have routes open from VBR to the guest workload VLANs on those ports, application-aware processing will fall back to crash-consistent. Add those rules to your inter-VLAN policy if you need application-consistent recovery for workloads on the HVM segment.
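The guest-processing requirement folds into the same kind of preflight logic: if any of the three ports is missing from the VBR-to-workload policy, plan for crash-consistent fallback. A minimal sketch:

```python
# VBR -> Windows guest ports for application-aware processing, per the text above.
APP_AWARE_PORTS = {135, 445, 6162}

def app_aware_possible(open_ports: set) -> bool:
    """True only if every guest-processing port is permitted from VBR
    to the workload VLAN; otherwise VBR falls back to crash-consistent
    backups for those VMs."""
    return APP_AWARE_PORTS <= open_ports
```

This is worth checking per workload VLAN, not once globally -- a segment that only hosts Linux guests doesn't need these rules at all, since Linux quiescing doesn't use VSS.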

Recovery Options

Recovery with HVM follows the same patterns as other platforms. Full VM restore brings a backed-up HVM VM back into an HVM cluster -- and it works in both directions. You can also restore VMs originally backed up from VMware, Hyper-V, Nutanix AHV, Proxmox VE, oVirt KVM, and public cloud into HVM. If you're mid-migration from vSphere, that means the same Veeam job that protects your HVM workloads can also serve as your migration mechanism going the other way.

Instant Recovery is supported, but with one important constraint: the recovery destination for Instant Recovery is vSphere, Hyper-V, or Nutanix AHV -- not HVM itself. You get an immediate running VM on one of those platforms while you sort out the underlying problem. If you need to recover back into HVM specifically, use full VM restore, which does support HVM as the target. Disaster recovery to AWS or Azure is also supported, so HVM doesn't lock you into a single recovery path.

Roadmap Note

Storage snapshot integration for HPE Morpheus environments is on Veeam's roadmap but is not included in the initial GA release. The current integration covers image-level backup via the plug-in and worker architecture described in this article.

Key Takeaways
The Veeam plug-in for HPE Morpheus VM Essentials requires VBR 13.0.1 or later and is installed separately from the main VBR installer. It communicates with Morpheus Manager over HTTPS to drive the entire backup workflow, including worker creation and teardown.
Workers are ephemeral Linux VMs spawned one per host via the Morpheus Manager API. In a multi-VLAN environment, they need dual NICs -- one on the management VLAN for plug-in control traffic (TCP 443 and 19000), one on the backup data VLAN for StoreOnce Catalyst data transfer.
The data transmission channel range for this integration is TCP 2500-3300. The Catalyst protocol uses TCP 9387 (command) and 9388 (data). Both Catalyst ports must be open from the worker data NIC subnet to the StoreOnce data IP. VBR also needs 9387/9388 and 443 from the management VLAN to the StoreOnce management IP.
StoreOnce Catalyst provides source-side deduplication, so data is already deduplicated before it leaves the worker. This keeps backup data VLAN traffic substantially lower than a non-deduplicating repository would.
Application-consistent processing with VSS is supported for Windows guests and covers Exchange, SQL Server, Active Directory, and Oracle. Guest processing requires TCP 135, 445, and 6162 open from VBR to the VM workload VLANs.
Full VM restore supports HVM as both source and destination -- you can restore HVM backups back into HVM, and restore VMware, Hyper-V, AHV, and other workloads into HVM. Instant Recovery does not support HVM as a recovery destination at this release; it targets vSphere, Hyper-V, or AHV. For rapid recovery back into HVM specifically, use full VM restore.