Setting Up Veeam v13 with oVirt KVM: A Complete Integration Guide
- Background: What the oVirt Ecosystem Looks Like in 2026
- How the Integration Works: Plug-in v7 Architecture
- Supported Platforms and Version Requirements
- Prerequisites and Credentials
- Step 1: Verify the Plug-in Is Installed
- Step 2: Configure a Backup Repository
- Step 3: Connect the oVirt KVM Manager
- Step 4: Deploy Workers
- Step 5: Create a Backup Job
- Step 6: Configure Backup Copy Jobs
- Restore Operations
- Limitations Worth Knowing
- Upgrading from the Standalone Appliance Model
- Closing Thoughts
Background: What the oVirt Ecosystem Looks Like in 2026
The oVirt story in enterprise infrastructure has gotten more interesting since Red Hat formally ended active development of Red Hat Virtualization in 2026. What that means in practice: RHV 4.4 SP1 is the terminal release for customers on the Red Hat side, and Oracle picked up the oVirt development baton with Oracle Linux Virtualization Manager (OLVM). For shops running either platform, the data protection question is the same: how do you protect these workloads natively through Veeam without falling back to agent-based protection on every VM?
The answer is Veeam Plug-in for oVirt KVM, which shipped in v12.2 and reached plug-in version 7 with VBR v13. Plug-in v7 is a significant architectural change from earlier versions, dropping the standalone backup appliance entirely and moving to the same worker-based model used by AHV and Proxmox. If you're coming from an earlier version, that transition is covered at the end of this article. If you're doing a fresh v13 deployment, everything below applies directly.
How the Integration Works: Plug-in v7 Architecture
The oVirt KVM integration in v13 follows the same architectural pattern as the AHV and Proxmox plug-ins: a plug-in installed on the VBR backup server connects to the oVirt KVM Manager, Veeam deploys Linux-based worker VMs into the oVirt cluster to handle data movement, and backup data flows from the VMs through those workers to your backup repositories. The Manager is the single connection point for the entire cluster, not individual hosts.
The key components:
oVirt KVM Manager: The Linux-based management server that administers your oVirt resources including VMs, hosts, clusters, storage domains, and networks. Veeam connects to the Manager using its REST API. One Manager connection covers the full cluster it manages. You do not add individual oVirt hosts to Veeam separately.
Veeam Plug-in for oVirt KVM: Installed on the backup server, the plug-in enables VBR to communicate with the Manager and to deploy and manage workers. In v13 it ships pre-installed with the standard VBR installation package; you only need to install it separately when applying an out-of-band plug-in update.
Workers: Linux-based VMs that Veeam deploys into your oVirt cluster. Workers are spun up at the start of a backup session and shut down when the session completes. They sit between the VMs being backed up and the backup repository, handling data transfer. The default worker configuration supports 4 concurrent backup tasks. Each additional task beyond the default requires 1 additional vCPU and 1 GB RAM.
Gateway server (optional): If your backup repository cannot host the Veeam Data Mover service directly (which most modern repositories can), a gateway server bridges communication between workers and the repository. In most environments you won't need this.
Supported Platforms and Version Requirements
| Component | Requirement |
|---|---|
| VBR version | Veeam Backup & Replication v13.0.1 or later |
| oVirt KVM Plug-in version | Version 7 (KVMPlugin_13.7.0.473). Ships pre-installed with VBR v13.0.1. Separate download only needed for manual plug-in updates. |
| Red Hat Virtualization | RHV 4.4 SP1 (Red Hat Virtualization Manager 4.5.0 or later). Note: RHV 4.4 SP1 is the terminal release. No further RHV versions are planned. |
| Oracle Linux Virtualization Manager | OLVM is supported. Cluster compatibility version must be set correctly per OLVM documentation. Specific supported OLVM versions are listed in the Veeam plug-in system requirements page. |
| Backup server OS | Windows Server 2016 or later, or Veeam Software Appliance (Linux) |
| Network | TCP 443 from VBR backup server to oVirt KVM Manager (REST API). TCP 10006 from workers to VBR backup server. Workers require direct IP access to backup repositories. |
| Credentials | oVirt KVM Manager admin account or account with sufficient permissions to read VM inventory and create/delete snapshots. |
| License | VUL (Veeam Universal License) per protected VM instance. Socket-based licensing does not cover oVirt KVM workloads. |
Prerequisites and Credentials
Before starting configuration, confirm the following:
- VBR v13.0.1 is installed and licensed with VUL covering the number of VMs you plan to protect.
- The oVirt KVM Manager is reachable from the backup server on TCP 443.
- You have admin credentials for the Manager, or an account with permissions to enumerate VMs, access storage domains, and create and delete VM snapshots.
- At least one storage domain in your oVirt cluster is accessible and has sufficient space to host worker VMs. Workers are deployed into the cluster storage you specify during the worker deployment wizard.
- A backup repository is configured in VBR and reachable from the oVirt network segment where workers will be deployed.
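Before running the wizard, you can preflight the connectivity prerequisites above from the backup server with a short standard-library script. This is a generic sketch, not a Veeam utility; the hostnames are placeholders for your own Manager and backup server.

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hostnames below are placeholders -- substitute your own infrastructure.
checks = {
    ("ovirt-manager.example.com", 443): "VBR -> oVirt KVM Manager (REST API)",
    ("vbr.example.com", 10006): "workers -> VBR backup server",
}

for (host, port), purpose in checks.items():
    status = "OK" if tcp_reachable(host, port) else "UNREACHABLE"
    print(f"{purpose}: {host}:{port} {status}")
```

Run this once from the backup server and once from a VM on the network segment where workers will land; the second run also exercises the worker-to-repository path if you add your repository host to the checks.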
Step 1: Verify the Plug-in Is Installed
In the VBR console, open Help > About and confirm the installed plug-in versions. You should see Veeam Plug-in for oVirt KVM version 7 (KVMPlugin_13.7.0.473) listed. If it is missing, download the plug-in ZIP from the Veeam downloads page, extract it, and run the KVMPlugin_13.7.0.473.exe installer on the backup server with local administrator privileges.
Once installed, the Managed Servers section in VBR's Backup Infrastructure view will include an oVirt KVM category, and the New Job wizard will include oVirt KVM as a workload type.
Step 2: Configure a Backup Repository
If you already have backup repositories configured in VBR, you can use them directly for oVirt KVM backups. No oVirt-specific repository configuration is required. The workers that Veeam deploys into your cluster will connect to whatever repository you specify when creating the backup job.
For new deployments, add your repository under Backup Infrastructure > Backup Repositories before proceeding. Any VBR-supported repository type works. Scale-Out Backup Repositories work and are the right choice for environments where you want automated tiering to a capacity tier in object storage.
Step 3: Connect the oVirt KVM Manager
In the VBR console, navigate to Backup Infrastructure > Managed Servers, right-click, and select Add Server. Choose oVirt KVM from the server type list.
Specify the Manager address
Enter the DNS name or IP address of your oVirt KVM Manager. This is the Manager host, not an individual hypervisor host. VBR will connect to the Manager's REST API on port 443 to enumerate the cluster inventory and manage backup operations.
Provide credentials
Enter the credentials for the oVirt KVM Manager. Credentials must be in the format username@domain for oVirt accounts, for example admin@internal for the default admin account. The account needs permissions to read VM inventory, create and remove snapshots, and access storage domains for worker deployment.
Review and confirm the TLS certificate
VBR will connect to the Manager and present its TLS certificate for review. Verify the certificate thumbprint matches your Manager's certificate before accepting. Once accepted, VBR will enumerate the cluster and populate the inventory. This may take a minute in larger environments.
After the wizard completes, the oVirt Manager appears under Managed Servers with a green status. The full VM inventory is now visible in the VBR console under Inventory > Virtual Infrastructure > oVirt KVM.
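As a sanity check outside the VBR console, you can query the same REST API yourself. The sketch below uses only the Python standard library and HTTP Basic authentication, which the oVirt v4 API accepts for simple scripted access; the host and credentials are placeholders, and production scripts should prefer the official oVirt SDK with its token-based SSO flow.

```python
import base64
import json
import urllib.request

def basic_auth_header(username: str, password: str) -> str:
    """Build an HTTP Basic Authorization header value (e.g. for admin@internal)."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

def list_vms(manager: str, username: str, password: str) -> list:
    """Enumerate VMs via the oVirt REST API: GET https://<manager>/ovirt-engine/api/vms."""
    req = urllib.request.Request(
        f"https://{manager}/ovirt-engine/api/vms",
        headers={
            "Authorization": basic_auth_header(username, password),
            "Accept": "application/json",
        },
    )
    # urlopen verifies the Manager's TLS certificate by default,
    # mirroring the thumbprint check the VBR wizard performs.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vm", [])

# Placeholder usage -- substitute your Manager FQDN and credentials:
# for vm in list_vms("ovirt-manager.example.com", "admin@internal", "secret"):
#     print(vm["name"], vm["status"])
```

If this call returns your VM list, the account and network path are good; if it fails with a certificate error, fix the Manager's TLS certificate before expecting the VBR wizard to behave any better.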
Step 4: Deploy Workers
Workers are the Linux VMs that Veeam deploys into your oVirt cluster to handle backup data transfer. In v13, workers are deployed on demand at the start of a backup session and shut down when the session completes. They are not persistent running VMs between sessions.
Workers are configured per oVirt Manager connection. In the Managed Servers view, right-click your oVirt Manager and select Manage Workers. Click Add to define a worker configuration.
Select the host and storage domain
Choose the oVirt cluster host where the worker will be deployed and the storage domain where its virtual disk will reside. For multi-host clusters, Veeam can deploy workers to specific hosts or let the cluster scheduler place them. For larger deployments with high concurrent task requirements, pin workers to specific hosts where you've allocated the appropriate compute resources.
Configure concurrent tasks
The default worker handles 4 concurrent backup and restore tasks. Each additional concurrent task requires 1 additional vCPU and 1 GB RAM. Set this based on the number of VMs you expect to process simultaneously during your backup window. For a single backup job with 20 VMs, a worker configured for 4 concurrent tasks will process 4 VMs at a time, cycling through the job until complete.
For large deployments, deploy multiple workers rather than configuring one worker with a very high task count. Multiple workers distribute load across cluster nodes and provide redundancy if a worker deployment fails during a session.
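The sizing rule is simple enough to capture in a helper. A minimal sketch, using only the deltas the plug-in documents (+1 vCPU and +1 GB RAM per task beyond the default 4) rather than restating the base worker spec:

```python
import math

def extra_worker_resources(concurrent_tasks: int, default_tasks: int = 4) -> dict:
    """Resources needed on top of the default worker spec, per the documented
    rule: +1 vCPU and +1 GB RAM for each task beyond the default 4."""
    extra = max(0, concurrent_tasks - default_tasks)
    return {"extra_vcpu": extra, "extra_ram_gb": extra}

def processing_waves(vm_count: int, concurrent_tasks: int) -> int:
    """How many waves a single worker needs to cycle through a job's VMs."""
    return math.ceil(vm_count / concurrent_tasks)

print(extra_worker_resources(8))    # {'extra_vcpu': 4, 'extra_ram_gb': 4}
print(processing_waves(20, 4))      # 5 waves for the 20-VM example above
```

Note that two workers at 4 tasks each give the same concurrency as one worker at 8 tasks but spread the load across hosts, which is why multiple workers are the better choice at scale.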
Disable online update (recommended)
In the worker network settings, click Advanced and uncheck Check for updates online. This prevents the worker from attempting to reach external Veeam update servers during deployment, which can cause delays or failures in air-gapped environments and adds unnecessary external connectivity from your backup infrastructure.
Step 5: Create a Backup Job
With the Manager connected and workers configured, you're ready to create backup jobs. In the VBR console, click Backup Job on the ribbon and select Virtual Machine, then choose oVirt KVM as the workload type.
Add VMs to the job
Browse the oVirt inventory and add the VMs, clusters, or storage domains you want to protect. You can add individual VMs, entire clusters (all VMs currently in the cluster), or specific storage domains. Adding at the cluster level means new VMs added to the cluster are automatically included in the job at the next run. Adding individual VMs gives you explicit control but requires manual job updates when new VMs are provisioned.
Select the repository
Choose the backup repository where this job's data will be stored. Set the restore point retention: the number of restore points to keep per VM. The default is 7 restore points (daily recovery points for one week). Adjust based on your retention requirements and repository capacity.
Configure guest processing
Guest processing enables application-consistent backups for VMs running VSS-compatible Windows applications or Linux applications with quiescing support. Enable it and provide guest OS credentials for the VMs where you need application consistency. For Linux VMs without quiescing requirements, crash-consistent backups (guest processing disabled) are appropriate and simpler to manage.
Set the schedule
Schedule the job to run daily outside of production hours. Configure retry settings for failed VMs: 3 retries with a 10-minute wait is a reasonable default. VMs that fail on the first attempt often succeed on a retry once a transient snapshot or network issue resolves.
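The retry logic itself is Veeam's, configured in the job settings; as an illustration of the shape of that behavior, here is a generic retry loop with the same parameters (3 retries, fixed 10-minute wait). The injectable sleep function is purely for testability and is not a Veeam API.

```python
import time

def run_with_retries(task, retries: int = 3, wait_minutes: float = 10,
                     sleep=time.sleep):
    """Run task(); on failure, retry up to `retries` more times with a fixed
    wait between attempts -- the shape of VBR's '3 retries, 10-minute wait'."""
    attempts = 1 + retries
    for attempt in range(1, attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == attempts:
                raise  # exhausted: surface the failure to the job report
            # Transient snapshot or network issues often clear within the wait.
            sleep(wait_minutes * 60)
```

A VM that fails its first attempt on a transient snapshot error gets two more chances within about twenty minutes, usually without any operator involvement.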
Step 6: Configure Backup Copy Jobs
A single backup job to a single repository is not a complete data protection strategy. Configure at least one backup copy job to move a copy of your oVirt VM backups to a secondary location: a second repository on different storage, an object storage target, or an offsite location. Backup copy jobs run independently of the primary backup job and maintain their own restore point chain at the target.
Create backup copy jobs under Home > Jobs > Backup Copy. Select your oVirt KVM backup jobs as the source. The copy job will process the latest restore point from the source job each time it runs. Stagger the copy job schedule to start after the primary backup job is expected to complete.
Restore Operations
Veeam Plug-in for oVirt KVM supports a full range of restore operations from oVirt VM backups in v13.
Entire VM restore recreates the VM in your oVirt environment from a selected restore point. You can restore to the original VM location (overwriting the existing VM) or to a different cluster, host, or storage domain. The wizard lets you remap networks if the target environment has different network names than the source.
Instant Recovery boots the VM directly from the compressed backup file, using the backup storage as the live disk source with a redo log capturing writes. The VM is available within minutes rather than waiting for a full restore. Note that Instant Recovery for oVirt KVM VMs can target VMware vSphere, Microsoft Hyper-V, and Nutanix AHV as the recovery platform in addition to oVirt itself. Cross-platform Instant Recovery is useful when you need to get a workload running quickly on a different infrastructure while a full oVirt recovery proceeds in parallel.
File-level restore mounts the VM's backup disks on the mount server and lets you browse and restore individual files and folders without restoring the entire VM. Available for Windows and Linux guest filesystems supported by VBR's mount service.
Disk restore attaches a restored virtual disk from a backup to any running VM in your oVirt environment without restoring the full VM. Useful for recovering a single application data disk or replacing a corrupted disk on a running VM.
Cross-platform restore to oVirt: you can restore VMs from VMware vSphere, Microsoft Hyper-V, Nutanix AHV, Proxmox VE, and cloud platform backups directly to your oVirt KVM environment. Conversely, oVirt VM backups can be restored to Azure, AWS, Google Cloud, Nutanix AHV, and Proxmox VE.
Limitations Worth Knowing
| Limitation | Detail |
|---|---|
| Hosted-engine VMs | Cannot be backed up by the plug-in. Use oVirt native engine-backup for the hosted-engine VM. |
| RHV host maintenance mode | An RHV host cannot be switched to maintenance mode during active backup data transfer; due to a known RHV bug, this state can persist even when no backup operations appear to be in progress. Contact Red Hat support if this affects your maintenance workflows. |
| 2-NIC cluster routing | For RHV clusters with 2 network adapters, manual configuration of network routing may be required for workers to reach the backup repository correctly. |
| SureBackup | Virtual lab boot verification is not available for oVirt KVM VMs. Backup verification and content scan only mode (integrity check, AV scan, YARA) is supported. |
| Cloud Connect | Veeam Cloud Connect repositories cannot be used as the primary backup target for oVirt KVM backup jobs. They can be used as backup copy targets. |
| Inventory sync delay | After changes in the oVirt environment (VM migrations, new VMs, storage changes), the inventory in VBR may take up to 15 minutes to reflect the change. Rescan the Manager to force an immediate update. |
| Concurrent tasks per worker | Default 4 per worker. Each additional task requires 1 vCPU and 1 GB RAM. Network throughput in your cluster is the practical ceiling, not just worker compute. |
Upgrading from the Standalone Appliance Model
If you are running an earlier version of the oVirt KVM plug-in that used the standalone backup appliance (versions 2a through 6), upgrading to v7 in VBR v13 requires specific steps.
If you are on plug-in version 2a, 3.0, 3a, 3b, 4.0, 4.1, or 5: you must first upgrade to version 6 before you can upgrade to version 7. You cannot jump directly from these versions to v13. Follow the upgrade path documented in the Veeam Backup for OLVM and RHV 6 User Guide to reach v6 first, then proceed with the VBR v13 upgrade.
If you are on plug-in version 6: upgrade directly to VBR v13.0.1. During the upgrade, the configuration settings from the backup appliance are automatically migrated to the VBR configuration database. When the migration completes, the backup appliance VM is removed from the cluster and a worker is deployed in its place. The appliance-to-worker migration happens automatically; you do not need to manually remove the appliance or configure workers from scratch.
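The version gate above reduces to a small lookup, which can be handy if you're scripting an inventory of plug-in versions across sites. The version strings follow this article's naming; the function is illustrative, not a Veeam-supplied tool.

```python
# Plug-in versions that used the standalone backup appliance and
# must step through v6 before reaching v7 (VBR v13).
LEGACY_VERSIONS = {"2a", "3.0", "3a", "3b", "4.0", "4.1", "5"}

def upgrade_path(current: str) -> list:
    """Return the required plug-in upgrade sequence from `current` to v7."""
    if current in LEGACY_VERSIONS:
        return [current, "6", "7"]   # no direct jump to v13 from these
    if current == "6":
        return ["6", "7"]
    if current == "7":
        return ["7"]                 # already current
    raise ValueError(f"unknown oVirt KVM plug-in version: {current}")

print(upgrade_path("4.1"))  # ['4.1', '6', '7']
```

In other words, anything older than v6 is a two-hop upgrade, and planning it as a single maintenance window is a common mistake.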
Closing Thoughts
The oVirt KVM plug-in in v13 is a mature integration that covers the full protection and recovery surface for RHV and OLVM workloads without requiring per-VM agents. The worker architecture keeps it consistent with the AHV and Proxmox integrations, which matters if you're running a mixed environment and want uniform operational patterns across your hypervisors.
The hosted-engine limitation is the one that catches people off guard. If your oVirt environment is self-hosted (the Manager runs as a VM inside the cluster it manages), that VM cannot be backed up by this plug-in and needs separate protection via the native engine-backup tool. Document that gap explicitly in your protection policy so it doesn't get missed during a real recovery event.
For RHV shops in particular: RHV 4.4 SP1 is the end of the line. If you haven't evaluated OLVM or another platform as the migration target for when RHV reaches end of support, now is a good time. The Veeam integration for OLVM is the same plug-in and the same configuration steps, which removes at least one variable from that migration planning.
What You've Covered
- Plug-in v7 architecture understood: no backup appliance, worker-based model, Manager as single connection point
- RHV 4.4 SP1 and OLVM both supported with same plug-in and same configuration
- VUL licensing confirmed per VM, socket-based does not cover oVirt
- Plug-in installed and verified on backup server
- Backup repository configured and sized for per-machine chains
- oVirt KVM Manager connected via REST API on TCP 443
- Workers deployed with correct concurrent task count and online update disabled
- Backup job created with appropriate VM scope, retention, guest processing, and schedule
- Backup copy job configured to secondary location
- Hosted-engine VM limitation documented and engine-backup protection confirmed separate
- Upgrade path from appliance model understood: v6 required before v13 if coming from v2a-v5