Setting Up Veeam v13 with Scale Computing HyperCore: A Complete Integration Guide
- Background: HyperCore and the Veeam Integration
- Architecture: How the Plug-in Works
- System Requirements and Prerequisites
- Important Note for Linux VBR 13.0.0 Upgrades
- Step 1: Verify the Plug-in Is Installed
- Step 2: Configure a Backup Repository
- Step 3: Connect the HyperCore Cluster
- Step 4: Deploy Workers
- Step 5: Create a Backup Job
- Step 6: Configure Backup Copy Jobs
- Restore Operations
- Limitations Worth Knowing
- Closing Thoughts
Background: HyperCore and the Veeam Integration
Scale Computing HyperCore is a hyperconverged infrastructure platform built for simplicity and edge deployments. It runs KVM under the hood but presents a tightly integrated management layer where storage, compute, and virtualization are managed as a single system through the SC//HyperCore web interface and API. No separate vCenter-equivalent, no separate storage management plane. That design makes it attractive for remote offices, retail environments, manufacturing floors, and any scenario where full-time IT staff aren't on site to manage a complex infrastructure stack.
Veeam's plug-in for HyperCore brings the same worker-based backup architecture to these environments that's used for Nutanix AHV, Proxmox VE, and oVirt KVM. The integration point is the HyperCore cluster API, not individual nodes, and everything flows through Veeam workers deployed directly into the cluster. The result is agentless VM backup and recovery managed entirely from the VBR console, with the full Veeam restore catalog available for HyperCore workloads.
Architecture: How the Plug-in Works
The plug-in connects VBR to the HyperCore cluster via the cluster's REST API. Veeam deploys Linux-based worker VMs into the HyperCore cluster. Those workers handle snapshot orchestration and data transfer during backup and restore sessions. Workers communicate with backup repositories using the standard Veeam Data Mover protocol. Firewall rules between the cluster, workers, and backup server are created automatically by Veeam during worker deployment.
One architectural detail that's specific to HyperCore workers: each concurrent backup task requires a dedicated VIRTIO virtual drive on the worker VM, plus one additional VIRTIO drive used by the worker system itself. This means a worker configured for 4 concurrent tasks needs 5 VIRTIO drives attached (4 for tasks, 1 for the system). The maximum number of parallel tasks across a HyperCore deployment is 25. Workers must be configured to handle a minimum of 2 concurrent tasks. Setting a worker to 1 concurrent task causes it to sporadically power on and off when a job contains 2 or more VMs, which is a confirmed known issue in v13.
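The drive math is easy to get wrong when sizing workers, so here is a minimal sketch of the rule described above (the function name is illustrative, not part of any Veeam or HyperCore API):

```python
def virtio_drives_needed(concurrent_tasks: int) -> int:
    """VIRTIO drives a HyperCore worker needs: one per concurrent
    task, plus one for the worker's own system disk."""
    if concurrent_tasks < 2:
        # Known issue in v13: a worker set to 1 task power-cycles
        # sporadically on jobs with 2+ VMs.
        raise ValueError("workers must handle at least 2 concurrent tasks")
    if concurrent_tasks > 25:
        raise ValueError("25 is the parallel-task ceiling for the deployment")
    return concurrent_tasks + 1

print(virtio_drives_needed(4))  # a 4-task worker needs 5 VIRTIO drives -> 5
```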
System Requirements and Prerequisites
| Component | Requirement |
|---|---|
| VBR version | Veeam Backup & Replication v13.0.1 or later |
| HyperCore version | All HyperCore versions supported by the underlying KVM version in use. Consult the SC//HyperCore Software Support Guide for the specific HyperCore version you're running. |
| Network connectivity | The HyperCore cluster must have a direct IP connection to the VBR backup server. Connections through NAT gateways are not supported. TCP 443 from VBR to the HyperCore cluster API. TCP 10006 from workers to VBR backup server. |
| Credentials | Local HyperCore cluster account with administrator privileges. OpenID Connect (OIDC) authentication accounts are not supported. |
| Worker storage | Workers are deployed on the HyperCore cluster itself. The cluster must have available storage for the worker VM disks. Each concurrent task requires 1 VIRTIO drive on the worker plus 1 for the worker system disk. |
| Repository | Any VBR-supported repository type. Veeam Cloud Connect repositories cannot be used as the primary backup target for HyperCore jobs (they can be used for backup copies). If a repository currently storing HyperCore backups is added as an extent to a SOBR after the fact, jobs targeting it will fail. Target the SOBR directly instead. |
| License | VUL per protected VM instance. |
| Disk size | Maximum supported VM disk size for backup is 16 TB. |
Important Note for Linux VBR 13.0.0 Upgrades
On Linux-based VBR appliances upgraded from 13.0.0 to 13.0.1, the HyperCore plug-in is not installed automatically as part of the upgrade. Install it manually after the upgrade completes, before attempting to add a HyperCore cluster.
Step 1: Verify the Plug-in Is Installed
Open the VBR console and check Help > About to confirm the HyperCore plug-in is listed. If it is absent, install it manually from the VBR installation media or the Veeam downloads page. The plug-in ships pre-installed with fresh VBR v13.0.1 installations. Linux appliance upgrades from 13.0.0 are the exception noted above.
Once the plug-in is present, you will see Scale Computing HyperCore as an available server type under Backup Infrastructure > Managed Servers, and HyperCore will be available as a workload type in the backup job wizard.
Step 2: Configure a Backup Repository
Configure the backup repository where HyperCore VM backups will be stored. Any VBR-supported repository works. Two things to plan for:
Do not use a Veeam Cloud Connect repository as your primary target. Cloud Connect repositories are valid targets for backup copy jobs, but primary HyperCore backups must go to a local or network repository.
If you plan to use a SOBR, target it directly from the start. Adding an existing repository to a SOBR as an extent after HyperCore backup jobs are already writing to it causes those jobs to fail; the fix is to edit the jobs to target the SOBR directly. Avoid the problem entirely by targeting the SOBR from job creation if your storage architecture uses one.
Step 3: Connect the HyperCore Cluster
In the VBR console, go to Backup Infrastructure > Managed Servers, right-click, and select Add Server. Choose Scale Computing HyperCore from the list.
Specify the cluster address
Enter the DNS name or IP address of the HyperCore cluster. This is the cluster's management IP, not an individual node IP. VBR connects to the HyperCore REST API to enumerate the cluster inventory, manage snapshots, and orchestrate worker deployment.
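To sanity-check the cluster address and API before running the wizard, you can query the cluster inventory yourself. The sketch below reflects the published HyperCore REST API (`/rest/v1/VirDomain` is the VM resource), but treat the endpoint path and basic-auth scheme as assumptions to verify against the API documentation for your HyperCore version; newer releases may require a session login instead. The hostname is hypothetical.

```python
import json
import ssl
import urllib.request

def inventory_url(cluster: str) -> str:
    # VirDomain is the HyperCore REST resource for VMs; confirm the
    # path against the API docs for your HyperCore version.
    return f"https://{cluster}/rest/v1/VirDomain"

def fetch_inventory(cluster: str, user: str, password: str) -> list:
    """One authenticated GET against the cluster API, roughly what VBR
    does to enumerate VMs. HyperCore clusters typically ship self-signed
    certificates, hence the unverified TLS context; pin or trust the
    cluster certificate in production instead."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, f"https://{cluster}/", user, password)
    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(mgr),
        urllib.request.HTTPSHandler(context=ctx),
    )
    with opener.open(inventory_url(cluster), timeout=10) as resp:
        return json.load(resp)

print(inventory_url("hypercore.example.local"))
```

If the call returns a list of VM objects, the address you gave the wizard is the right one (the cluster management IP, not a node IP).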
Provide credentials
Enter the credentials of a local HyperCore account with administrator privileges. OpenID Connect (OIDC) authentication accounts are explicitly not supported, so do not use an OIDC-authenticated account here even if your HyperCore environment uses OIDC for day-to-day management logins.
Confirm direct network path
Before completing the wizard, confirm that the HyperCore cluster has a direct routable IP path to the VBR backup server and that no NAT gateway sits between them. Workers deployed in the cluster must reach the backup server directly on TCP 10006; NAT gateways in the path are not supported and will cause worker communication failures.
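A quick pre-flight check of the two ports from the requirements table (TCP 443 from VBR to the cluster, TCP 10006 from the worker network to the backup server) can save a failed wizard run. A minimal sketch; the hostnames are hypothetical placeholders for your own cluster and backup server:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Substitute your own addresses; run the first check from the VBR
# server and the second from a machine on the HyperCore network.
checks = [
    ("hypercore.example.local", 443),  # VBR -> cluster API
    ("vbr.example.local", 10006),      # workers -> VBR backup server
]
for host, port in checks:
    state = "open" if tcp_reachable(host, port) else "BLOCKED"
    print(f"{host}:{port} {state}")
```

A "BLOCKED" result on 10006 is exactly the symptom a NAT gateway in the path produces.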
After the wizard completes, the HyperCore cluster appears under Managed Servers with a green status. The VM inventory populates in the VBR console under Inventory > Virtual Infrastructure > Scale Computing HyperCore. Inventory changes in the HyperCore environment take up to 15 minutes to sync. Rescan the cluster to force an immediate update.
Step 4: Deploy Workers
Right-click the HyperCore cluster in Managed Servers and select Manage Workers. Click Add to define a worker.
Select the host node
Choose which HyperCore node the worker will be deployed on. For multi-node clusters, you can deploy workers on specific nodes to control which node handles backup traffic. Deploying a worker per node distributes load and ensures that if one node is in maintenance, other workers remain available.
Set concurrent tasks: never set to 1
Configure the number of concurrent backup tasks this worker will handle. The default is 4. There are two hard constraints here: the minimum is 2 (not 1), and the maximum across your entire HyperCore deployment is 25 parallel tasks total.
Do not configure a worker to handle 1 concurrent task. Workers set to 1 concurrent task will sporadically power on and off when a job contains 2 or more VMs. This is a confirmed known issue in v13 and is not resolved at the time of writing. The minimum effective setting is 2.
Each concurrent task requires 1 VIRTIO drive on the worker VM plus 1 for the worker's own system disk. A worker with 4 concurrent tasks needs 5 VIRTIO drives total. Ensure the node where the worker is deployed has sufficient available storage for the worker VM and its attached drives.
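When one worker isn't enough, you have to split the desired task count across several workers while honoring both constraints at once: at least 2 tasks per worker, at most 25 in total. A planning sketch (the function is illustrative, not a Veeam tool):

```python
def plan_workers(total_tasks: int, tasks_per_worker: int = 4) -> list[int]:
    """Split a desired total task count into per-worker task counts,
    honoring the per-worker minimum of 2 and the 25-task deployment cap.
    Each resulting worker needs (tasks + 1) VIRTIO drives."""
    if total_tasks > 25:
        raise ValueError("a HyperCore deployment supports at most 25 parallel tasks")
    if total_tasks < 2 or tasks_per_worker < 2:
        raise ValueError("never configure a worker below 2 concurrent tasks")
    plan, remaining = [], total_tasks
    while remaining > 0:
        n = min(tasks_per_worker, remaining)
        if n == 1:
            plan[-1] += 1  # fold a trailing single task into the last worker
        else:
            plan.append(n)
        remaining -= n
    return plan

print(plan_workers(10))  # -> [4, 4, 2]
```

The fold step matters: a naive split of 9 tasks at 4 per worker would leave a worker at 1 task and trip the known power-cycling issue.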
Disable online update
In the worker advanced network settings, uncheck Check for updates online. This prevents the worker from attempting to reach external Veeam update servers during deployment, which causes delays or failures in environments without internet access from the HyperCore network.
Step 5: Create a Backup Job
In the VBR console, click Backup Job on the ribbon, select Virtual Machine, and choose Scale Computing HyperCore as the workload type.
Add VMs to the job
Browse the HyperCore inventory and add VMs, tags, or the full cluster. Adding at the cluster level includes all VMs currently in the cluster and automatically picks up newly created VMs at subsequent job runs. Adding individual VMs gives explicit control at the cost of manual updates when new VMs are added.
Select the repository and retention
Select your configured repository and set restore point retention per your RPO requirements. Note that VeeamZIP backups of HyperCore VMs do not support retention policies; retention applies only to restore points created by standard backup jobs.
Note the guest processing limitation
Application-aware processing is not supported for HyperCore backup jobs. You cannot enable VSS-based application consistency or pre/post guest scripts through the Veeam job. Backups are crash-consistent. For applications requiring transaction-consistent backups (SQL Server, Exchange, Oracle), use Veeam Agents installed directly in the guest OS running on HyperCore VMs, or application-native backup methods, alongside the VM-level backup.
Check snapshot settings on VM disks
If snapshot creation is disabled on specific VM disks in HyperCore, those disks will be skipped during backup job sessions without causing the overall job to fail. The job log will note the skipped disks. Before the first backup run, verify that snapshot creation is enabled for all disks on VMs you intend to protect fully.
Set the schedule
Schedule the job for outside production hours. Configure retries for failed VMs (3 retries, 10-minute wait is a solid default). Enable email notifications so failed sessions are visible without requiring manual log review.
Step 6: Configure Backup Copy Jobs
HyperCore deployments are frequently used in edge locations where the backup server and primary repository are also on-site. For these environments, an offsite backup copy is particularly important: a site-level failure (power, flooding, theft) that takes down the HyperCore cluster will also take down any local-only backup data.
Create a backup copy job targeting a secondary location: a repository at a central data center, an object storage target, or a Veeam Cloud Connect cloud repository (which is a valid backup copy target even though it cannot serve as the primary target). Configure the copy job to run on a schedule after the primary backup job completes.
Restore Operations
Veeam Plug-in for Scale Computing HyperCore supports a broad set of restore operations in v13.
Entire VM restore recreates the VM in the HyperCore environment from a selected restore point. You can restore to the original cluster and node or to a different cluster. If you restore a VM from a VMware, Hyper-V, oVirt KVM, or Proxmox VE backup to HyperCore, install Scale Guest Tools on the restored VM to resolve potential network connection issues that arise from the cross-platform restore.
Instant Recovery of HyperCore VMs is supported to VMware vSphere, Microsoft Hyper-V, and Nutanix AHV. Note that Instant Recovery back to HyperCore itself is not listed as a supported target in the current v13 documentation. For rapid recovery back to HyperCore, use entire VM restore.
File-level restore mounts the VM's backup disks and allows individual file and folder recovery. Available for supported Windows and Linux guest filesystems.
Application item restore for Microsoft Active Directory, Exchange, SharePoint, Oracle Database, and SQL Server is supported from HyperCore VM backups. This requires the relevant Veeam Explorer to be installed on the backup server.
Disk export to VMDK, VHD, and VHDX formats is supported for cross-platform migration and lab recovery use cases.
Restoring from archive tier: if the restore point is stored in the archive tier of a SOBR, you must first retrieve the backup data before restoring. Entire VM restore from backups stored in an Amazon S3 Glacier Instant Retrieval archive extent is not supported. For those backups, use Instant Recovery instead.
Limitations Worth Knowing
| Limitation | Detail |
|---|---|
| Application-aware processing | Not supported. All HyperCore VM backups are crash-consistent. Use Veeam Agents in-guest for application consistency. |
| Worker concurrent tasks minimum | Cannot be set to 1. Workers with 1 concurrent task sporadically power on/off when jobs contain 2 or more VMs. Minimum effective setting is 2. |
| Maximum parallel tasks | 25 across the entire HyperCore deployment. Each task requires 1 VIRTIO drive on the worker plus 1 for the worker system disk. |
| NAT gateway | Not supported. Direct IP connection required from the HyperCore cluster to the VBR backup server. |
| OpenID Connect credentials | Not supported for cluster connection. Local admin account required. |
| Replicated VMs | Not visible in Veeam inventory. Cannot be backed up by the plug-in. Protect the source VM on its originating cluster instead. |
| Disk size | Maximum 16 TB per VM disk. Larger disks are not supported. |
| SureBackup | Virtual lab boot verification is not available for HyperCore VMs. Backup verification and content scan in scan-only mode (integrity check, antivirus scan, YARA) is supported. |
| Cloud Connect as primary target | Not supported as the primary backup job target. Valid as a backup copy target. |
| SOBR extent conversion | If a repository storing HyperCore backups is added as a SOBR extent after jobs already target it, those jobs will fail. Target the SOBR directly from job creation. |
| Snapshot persistence | Due to HyperCore API limitations, inactive backup snapshots may persist on the cluster in some circumstances. These can be safely removed manually from the HyperCore console. Expected to be corrected in a future release. |
| VeeamZIP retention | Retention policies are not supported for VeeamZIP backups of HyperCore VMs. |
| Backup move | Not supported for HyperCore backups. |
Closing Thoughts
HyperCore's appeal is its simplicity, and the Veeam integration extends that simplicity to the backup side: one plug-in, one cluster connection, workers deployed automatically, full restore capability managed from the same VBR console you use for everything else. For organizations running HyperCore at the edge alongside VMware or Hyper-V at a central data center, being able to manage protection for all of it from a single VBR instance is a genuine operational advantage.
The limitations table above is longer than most, but the majority of those entries are edge cases that don't affect typical deployments. The ones worth keeping front of mind are the application-aware processing gap (plan your application backup strategy separately), the NAT gateway requirement (no NAT, direct IP only), and the concurrent task minimum (never set workers to 1). Get those three right at the start and the rest of the deployment is straightforward.
The VIRTIO drive requirement per concurrent task is unique to HyperCore compared to the other non-VMware plug-ins. It's not hard to plan for, but it does mean your worker sizing math needs to account for available VIRTIO drive slots on the worker VM, not just CPU and RAM. Check that before deploying workers into clusters where storage attachment limits might be a constraint.
What You've Covered
- HyperCore worker architecture understood: VIRTIO drives per task, 25 task maximum, 2 task minimum per worker
- Linux VBR 13.0.0 to 13.0.1 upgrade noted: HyperCore plug-in must be installed manually post-upgrade
- Plug-in verified installed on backup server
- Backup repository configured with SOBR targeting confirmed from job creation
- Cloud Connect excluded as primary target, valid for backup copies
- HyperCore cluster connected via local admin credentials (not OIDC)
- Direct IP path from cluster to VBR backup server confirmed, no NAT in path
- Workers deployed with minimum 2 concurrent tasks, online update disabled
- Backup job created with VM scope confirmed, snapshot-disabled disks noted
- Application-aware processing gap documented, Veeam Agent strategy in place for app-consistent workloads
- Replicated VMs excluded from inventory understood, source VM protection confirmed
- Backup copy job configured to offsite or object storage target