oVirt and RHV End-to-End Setup -- Self-Hosted Engine, Storage Domains, Networking, and HA


Standalone Infrastructure | Component: oVirt 4.5 / RHV 4.4 | Audience: Enterprise Architects, Senior Linux Admins

oVirt is the upstream open source project that Red Hat Virtualization (RHV) is built on. They're close enough that most of this article applies to both, and where they diverge it's worth knowing exactly where. oVirt runs on CentOS Stream or RHEL derivatives. RHV runs on RHEL and comes with a support subscription. The architecture is identical: the oVirt Engine manages KVM hypervisor hosts running VDSM, with shared storage providing the VM disk and metadata layer across the cluster.

This article covers a complete oVirt/RHV deployment: hardware requirements, the self-hosted engine versus standalone engine decision, installation, host registration, storage domain configuration for NFS, iSCSI, FC, and GlusterFS, logical network design, VM and template management, hosted engine HA behavior, and migration affinity rules. At the time of writing, oVirt 4.5 is the current stable release series.


1. oVirt vs RHV: Where They Diverge

Both platforms share the same codebase. oVirt 4.5 is the upstream. RHV 4.4 is the Red Hat downstream, rebased on RHEL and supported under a Red Hat subscription. For most configuration and operational procedures, the commands and interfaces are identical. The differences are:

  • Base OS: oVirt 4.5 runs on CentOS Stream 8/9 or RHEL derivatives; RHV 4.4 requires RHEL 8.6+.
  • Repository: oVirt uses centos-release-ovirt45 from the CentOS repos; RHV uses RHEL subscription channels via subscription-manager.
  • Support: oVirt is community supported (forums, IRC, mailing lists); RHV comes with a Red Hat support subscription.
  • Host OS options: oVirt hosts run oVirt Node (a minimal CentOS-based image) or a full RHEL/CentOS install; RHV hosts must run RHEL.
  • GlusterFS integration: supported via centos-release-gluster on oVirt; supported via a Red Hat Gluster Storage subscription on RHV.
  • Engine setup command: engine-setup on both.
  • Hosted engine deploy: hosted-engine --deploy on both.

Everything in sections 2 through 8 applies to both unless explicitly noted.


2. Hardware Requirements and Pre-Install Prep

oVirt hosts run KVM. The hardware requirements reflect that.

  • CPU: minimum, dual core with Intel VT or AMD-V; production, dual socket with 8+ cores per socket. All hosts in a cluster must share a compatible CPU family for live migration. Mixed CPU generations require setting the cluster CPU type to the lowest common denominator.
  • RAM: minimum, 2 GB (host OS only); production, 64 GB+ per host. The Engine VM itself needs at minimum 4 GB RAM reserved. Budget host RAM for VMs plus Engine overhead on whichever host currently runs it.
  • OS disk: minimum, 25 GB; production, 100 GB SSD. Logs, coredumps, and swap consume more than you'd expect. Give each host a real local SSD, not a USB drive.
  • Network: minimum, 1 GbE; production, two 10 GbE NICs. Network teaming is NOT supported in oVirt. Use bonding only. The official docs are explicit: teaming causes errors and will break hosted engine deployment.
  • Shared storage: minimum, an NFS share, iSCSI LUN, or FC LUN; production, a dedicated storage network with 10 GbE iSCSI or FC. All hosts in a data center must be able to reach all storage domains. Storage connectivity failures cause host fencing.

Pre-Install Checklist

  • DNS: Forward and reverse DNS records for all hosts and the Engine VM. oVirt resolves hostnames extensively during deployment and ongoing operation. If your /etc/hosts has entries but DNS doesn't, you'll get inconsistent behavior. Both DNS and /etc/hosts must agree.
  • NTP: All hosts must be synchronized. Large clock skew breaks VDSM communication and storage fencing. Configure chrony on each host before adding it to the Engine.
  • Firewall: VDSM requires port 54321 (VDSM) and 16514 (libvirt TLS). The Engine communicates to hosts on these ports. If you're running firewalld, the ovirt-vdsm service firewall rules install automatically with the VDSM package. Check they're active after installation.
  • Network bonding (not teaming): If you need bonded NICs, configure them as bond devices before running hosted engine deployment. Network teaming is unsupported and explicitly breaks the hosted engine deployer on the management network.
  • Subscription/repo: For oVirt on CentOS Stream, enable the ovirt45 repo. For RHV, register with subscription-manager and attach the RHV subscription before running any install commands.
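The clock-skew item in the checklist can be scripted. The sketch below parses the "System time" line that `chronyc tracking` prints; ntp_skew_ok is a hypothetical helper that takes the line as an argument, so the parsing logic stands on its own without a live chronyd.

```shell
# Sketch: flag clock skew above a threshold from chronyc output.
# Takes the "System time" line as $1; threshold ($2, seconds) defaults to 1.0.
ntp_skew_ok() {
  line="$1"
  max="${2:-1.0}"
  # The first purely numeric field on the line is the offset in seconds
  off=$(printf '%s\n' "$line" |
    awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^[0-9.]+$/) { print $i; exit } }')
  [ -n "$off" ] || return 1
  awk -v o="$off" -v m="$max" 'BEGIN { exit !(o < m) }'
}

# On a real host:
# ntp_skew_ok "$(chronyc tracking | grep 'System time')" && echo "clock skew OK"
```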

3. Self-Hosted Engine vs Standalone Engine

This is the first real decision and it's worth making deliberately because migrating between them later, while possible, is an operational procedure with risk.

  • What it is: with the self-hosted engine, the Engine runs as a VM on the cluster it manages, on the same shared storage and hosts as your workload VMs. A standalone Engine runs on a dedicated physical or virtual server outside the cluster it manages.
  • Hardware requirement: self-hosted needs no dedicated server for the Engine; the first host serves double duty. Standalone requires a separate server or VM for the Engine, outside the managed cluster.
  • HA: self-hosted HA is built in; the hosted engine HA daemon monitors the Engine VM and migrates it to another host if the current host fails. Standalone HA is manual; if the Engine server fails, management is unavailable until you restore it. Running VMs continue but you can't manage them.
  • Complexity: self-hosted has higher initial complexity; the deployment sequence (first host, then Engine VM, then shared storage, then additional hosts) is specific and must be followed exactly. Standalone is simpler: deploy the Engine on a server and point hosts at it.
  • Best for: self-hosted suits most environments; the HA benefit and the saving of a dedicated server make it the default recommendation for any deployment with two or more hosts. Standalone suits environments where the Engine must be isolated from the cluster for compliance or security reasons, or large environments where the Engine's resource needs warrant dedicated hardware.
  • Minimum hosts for HA: two for self-hosted; the Engine VM can migrate between them if one fails. Not applicable to standalone, where Engine HA requires separate HA infrastructure.

The official recommendation from the oVirt documentation is the self-hosted engine for most deployments. It's what most operators choose. The standalone Engine is still a valid choice when you need strict separation between management and workload infrastructure, or when the Engine's PostgreSQL database needs its own dedicated server for performance at large scale.


4. Self-Hosted Engine Installation

Preparing the First Host

bash: Enable repositories and install packages on the first host (oVirt on CentOS Stream 8)
# CentOS Stream 8 - enable oVirt 4.5 repositories
dnf install -y centos-release-ovirt45
dnf module enable -y javapackages-tools
dnf module enable -y pki-deps
dnf module enable -y postgresql:12
dnf module enable -y mod_auth_openidc:2.3

# Install the hosted engine setup package and VDSM
dnf install -y ovirt-hosted-engine-setup

# Update all packages
dnf update -y

# For RHV on RHEL 8.6+, instead use:
# subscription-manager repos \
#   --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \
#   --enable=rhv-4.4-for-rhel-8-x86_64-rpms
# dnf install -y ovirt-hosted-engine-setup

Running the Hosted Engine Deployer

Earlier 4.4-era documentation recommended the Cockpit web interface (port 9090) wizard for hosted engine deployment, but the Cockpit-based installer is deprecated in oVirt 4.5. Use the CLI deployer: it works interactively and is also the right tool for scripted or automated deployments.

bash: Deploy self-hosted engine via CLI
# Run as root on the first host
hosted-engine --deploy

# The deployer will prompt for:
# - Engine FQDN (must resolve in DNS)
# - Engine admin password
# - Engine VM CPU and RAM (4 vCPUs / 16 GB RAM recommended; 4 GB RAM is the floor)
# - Storage type for the hosted engine storage domain (NFS recommended for first deployment)
# - NFS path or iSCSI target details
# - Network bridge to use (ovirtmgmt by default)

# After deployment completes, the Engine is accessible at:
# https://engine.yourdomain.local/ovirt-engine

# Check hosted engine status
hosted-engine --vm-status
The hosted engine deployer expects the first host to be able to reach the Engine VM's FQDN by DNS before the Engine VM even exists. Create the DNS A record and PTR record for the Engine FQDN before running the deployer, pointing to the IP address you'll assign to the Engine VM during setup. If DNS resolution fails during deployment, the deployer exits with a confusing error. The fix is always a DNS record, not a deployer flag.
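That forward-and-reverse check can be done with a small preflight helper before running the deployer. This is a sketch; check_dns is a hypothetical name, and it uses getent, which consults both DNS and /etc/hosts, so it also surfaces disagreements between the two.

```shell
# Preflight sketch: confirm a name resolves and the reverse record points
# back at the same name. getent honors both DNS and /etc/hosts, so a
# mismatch between them shows up here too.
check_dns() {
  name="$1"
  ip=$(getent hosts "$name" | awk '{print $1; exit}')
  if [ -z "$ip" ]; then
    echo "FAIL: no address record for $name"
    return 1
  fi
  back=$(getent hosts "$ip" | awk '{print $2; exit}')
  if [ "$back" != "$name" ]; then
    echo "WARN: reverse lookup of $ip returned '$back', expected '$name'"
    return 1
  fi
  echo "OK: $name <-> $ip"
}

# Example (the engine FQDN is a placeholder for your environment):
# check_dns engine.yourdomain.local
```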

Standalone Engine Installation

bash: Install standalone Engine on a dedicated server
# On the Engine server (CentOS Stream 8)
dnf install -y centos-release-ovirt45
dnf module enable -y javapackages-tools pki-deps postgresql:12 mod_auth_openidc:2.3
dnf install -y ovirt-engine
dnf update -y

# Run the Engine setup wizard
engine-setup

# The setup wizard configures:
# - Engine hostname and FQDN
# - Database (local PostgreSQL or remote)
# - PKI certificate configuration
# - Admin password
# - Application firewall rules
# - NFS exports for ISO and export storage domains (optional)

# After setup, Engine is accessible at:
# https://engine.yourdomain.local/ovirt-engine

5. Host Registration and Cluster Formation

After the Engine is running, add hosts through the Administration Portal. Each host must have VDSM installed and must be reachable from the Engine on port 54321.

  1. In the Administration Portal, navigate to Compute, then Hosts, then click New.
  2. Enter the host FQDN (not IP address), the root password, and select the cluster. The Engine connects to the host, installs VDSM if it isn't already installed, configures the host, and moves it to the Up state.
  3. For self-hosted engine: the first host is already in the cluster. Add the second host through the portal to enable Engine VM migration between hosts. Once two hosts are in the cluster, the HA daemon can migrate the Engine VM if one host fails.
bash: Verify VDSM status on a host after adding it to the Engine
# On each host, verify VDSM is running and communicating with the Engine
systemctl status vdsmd
systemctl status supervdsmd

# Check VDSM logs for connection issues
tail -f /var/log/vdsm/vdsm.log

# Query VDSM directly on the host to confirm it responds to API calls
vdsm-client Host getCapabilities | head

# Check the host's certificate was properly installed
ls /etc/pki/vdsm/certs/

# Confirm VDSM's required configuration modules are in place
vdsm-tool is-configured
oVirt uses cluster CPU type to determine which CPU features are exposed to VMs. The cluster CPU type defaults to the detected CPU family of the first host added. When you add a host with a different CPU generation, VMs won't live migrate to it unless their CPU type is compatible. If you have mixed CPU generations, set the cluster CPU type to the lowest common denominator before adding VMs, not after. Changing it after requires rebooting all VMs in the cluster to apply the new CPU configuration.
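To compare CPU generations before settling on a cluster CPU type, something like the sketch below can be run on each host and the outputs diffed. cpu_summary is a hypothetical helper; it reads /proc/cpuinfo directly, so it works even on minimal host images.

```shell
# Sketch: print the CPU vendor and model once (not per core). Run on every
# host and diff the results; any difference means you must pick the cluster
# CPU type by lowest common denominator before creating VMs.
cpu_summary() {
  awk -F: '
    /^(vendor_id|model name)/ {
      gsub(/^[ \t]+/, "", $2)
      if (!seen[$1]++) print $2   # each field once, not once per core
    }' "${1:-/proc/cpuinfo}"
}

cpu_summary
```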

6. Storage Domains

oVirt organizes storage into Data Centers and Storage Domains. A Data Center is the highest administrative boundary. Each Data Center has one Master Storage Domain (which holds metadata and the OVF store for all VMs), plus additional data, ISO, and export domains. Storage must be configured before VMs can be created.

Storage Domain Types

  • Data: stores VM disk images (QCOW2 or RAW); the primary storage domain type. Backends: NFS, iSCSI, FC, GlusterFS, local.
  • ISO: stores ISO images for VM installation media; mounted read-only by hosts when attaching an ISO to a VM. Backend: NFS only.
  • Export: used for VM import/export between Data Centers or environments; deprecated in newer oVirt versions in favor of direct VM export. Backend: NFS only.

NFS Storage Domain

NFS is the simplest storage backend and the most common choice for labs and smaller deployments. The NFS server must export the share with specific permissions: the vdsm user (uid 36) and kvm group (gid 36) must have read/write access.

bash: Configure NFS export with correct permissions for oVirt
# On the NFS server, create the export directory
mkdir -p /export/ovirt-data
chown 36:36 /export/ovirt-data
chmod 0755 /export/ovirt-data

# Add to /etc/exports
# all_squash,anonuid=36,anongid=36 maps all client users to vdsm:kvm (uid/gid 36)
# This is the only export option guaranteed to work per the official oVirt docs
echo "/export/ovirt-data  *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)" >> /etc/exports

# Apply the export
exportfs -ra

# Verify the export is visible
showmount -e localhost

# In the Administration Portal:
# Storage > Domains > New
# Domain Function: Data
# Storage Type: NFS
# Enter the NFS path: nfs-server.yourdomain.local:/export/ovirt-data

iSCSI Storage Domain

iSCSI storage domains use a LUN from an iSCSI target. oVirt discovers targets via the portal configuration on each host. The same LUN must be accessible from all hosts in the Data Center, but do not format it before adding it to oVirt. oVirt formats the LUN with its own metadata format during storage domain creation.

bash: Configure iSCSI initiator on each host before adding iSCSI storage domain
# Install and configure iSCSI initiator on each host
dnf install -y iscsi-initiator-utils

# Set a unique initiator name per host (change the date and hostname)
echo "InitiatorName=iqn.2024-01.local.yourdomain:$(hostname -s)" \
  > /etc/iscsi/initiatorname.iscsi

systemctl enable --now iscsid

# Discover targets on the storage server (10.0.50.10 is an example portal IP)
iscsiadm -m discovery -t sendtargets -p 10.0.50.10

# Log in to the target (replace with your target IQN)
iscsiadm -m node \
  -T iqn.2024-01.com.yourstorage:target01 \
  -p 10.0.50.10 \
  --login

# Verify the LUN is visible
lsblk | grep sd

GlusterFS Storage Domain

GlusterFS integration in oVirt enables hyperconverged deployments where storage runs on the same hosts as the hypervisor, without external NAS or SAN hardware. This is similar to Nutanix in concept but using open source components. The integration requires oVirt hosts to also run glusterd (the GlusterFS daemon), and the Gluster volume must be created and started before adding it as an oVirt storage domain.

GlusterFS volumes for oVirt should be created as Arbiter volumes (two full replicas plus one arbiter brick that stores only metadata). An Arbiter volume gives you the split-brain protection of a three-way replica while storing only two full copies of the data, roughly 33% less disk usage. For a three-node hyperconverged cluster this is the standard configuration.

bash: Create a GlusterFS Arbiter volume for oVirt storage (run on one host)
# Requires glusterfs-server installed and glusterd running on all three hosts
# Replace host and brick path variables to match your environment

HOST1="host1.yourdomain.local"
HOST2="host2.yourdomain.local"
HOST3="host3.yourdomain.local"
BRICKPATH="/data/gluster/ovirt-vol/brick1"

# Create brick directories on each host (run on each host)
# mkdir -p $BRICKPATH

# Create the arbiter volume (run on any one host)
gluster volume create ovirt-data replica 3 arbiter 1 \
  ${HOST1}:${BRICKPATH} \
  ${HOST2}:${BRICKPATH} \
  ${HOST3}:${BRICKPATH}

# Set required options for oVirt compatibility
gluster volume set ovirt-data cluster.granular-entry-heal enable
gluster volume set ovirt-data network.remote-dio enable
gluster volume set ovirt-data performance.strict-o-direct on
gluster volume set ovirt-data server.allow-insecure on

# Start the volume
gluster volume start ovirt-data

# Verify
gluster volume status ovirt-data

7. Networking: Logical Networks, Bonds, and VLANs

oVirt's networking model is built around logical networks. A logical network is an abstraction that can be mapped to a VLAN, a bond, or a plain NIC across all hosts in a cluster. You define the logical network once in the Engine and then assign it to NICs on each host. Changes to a logical network configuration in the Engine propagate to hosts via VDSM.

The ovirtmgmt Network

The ovirtmgmt network is the management network created automatically during Engine setup. All Engine-to-host communication (VDSM, certificate exchange, host monitoring) uses this network. Every host must have ovirtmgmt assigned and reachable. Don't assign heavy traffic to ovirtmgmt: VM data, storage, or live migration traffic on the management network degrades Engine communication reliability.

Creating Additional Logical Networks

  1. In the Administration Portal, navigate to Network, then Networks, then click New.
  2. Name the network (for example, vm-network-400), set the VLAN tag if needed, and enable VM network if VMs will connect to it.
  3. Assign the network to hosts: in the host configuration, edit the NICs and drag the logical network to the desired NIC or bond. The change is applied live to the host without a reboot.
bash: Configure host NIC bonding and VLAN using nmcli before adding host to Engine
# oVirt supports bond modes 1 (active-backup), 2, 3, and 4 (802.3ad LACP)
# for VM networks; modes 0, 5, and 6 are valid only for non-VM networks.
# Mode 4 (LACP) is the default. Use mode 1 if your switch lacks LACP support.
# Network teaming is NOT supported - use bonding only.

# Create a bond interface
nmcli con add type bond con-name bond0 ifname bond0 \
  bond.options "mode=802.3ad,miimon=100,lacp_rate=fast"

# Add physical NICs to the bond
nmcli con add type ethernet con-name bond0-slave1 ifname eth0 \
  master bond0
nmcli con add type ethernet con-name bond0-slave2 ifname eth1 \
  master bond0

# Create the management network VLAN on the bond
nmcli con add type vlan con-name ovirtmgmt ifname bond0.100 \
  dev bond0 id 100 \
  ipv4.addresses "10.0.100.11/24" \
  ipv4.gateway "10.0.100.1" \
  ipv4.dns "10.0.100.1" \
  ipv4.method manual

# Bring up the bond and VLAN
nmcli con up bond0
nmcli con up ovirtmgmt

# Verify connectivity before running hosted-engine --deploy
ping -c 3 10.0.100.1

8. VM Management, Templates, and Snapshots

Creating VMs

VMs in oVirt are created through the Administration Portal or the VM Portal (the simplified interface for VM operators without admin rights). At minimum a VM needs: virtual CPU count and model, RAM, one or more virtual disks in a data storage domain, and a virtual NIC attached to a logical network. The Administration Portal provides more options including CPU pinning, NUMA topology, high availability priority, and watchdog configuration.

Templates

A template in oVirt is a read-only base image used to deploy new VMs. Creating a template copies the source VM's disks into the template; the source VM itself remains and can still run afterwards. Deploying from a template supports two allocation modes: Clone creates a full, independent copy of the template's disks in the target storage domain, while Thin creates a QCOW2 overlay that stays dependent on the template's disks.

  1. Install the OS in a VM, configure it fully (install guest agents, update packages, run Sysprep for Windows or cloud-init for Linux to remove machine specific identifiers).
  2. Power off the VM. Right-click it and select Make Template. The Engine copies the VM's disks into a new template; the source VM remains in the VM list and can still be started.
  3. To create a new VM from the template: click New VM, select the template from the template list, adjust CPU and RAM as needed, and click OK.
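The sealing in step 1 can be scripted. The sketch below is a minimal version of what virt-sysprep automates; seal_guest is a hypothetical helper, the paths are common Linux defaults (verify for your distro), and the optional root argument lets it run against a mounted image instead of the live guest.

```shell
# Minimal guest-sealing sketch run inside the guest (or against a mounted
# image) before templating. virt-sysprep covers far more; this shows the
# three items that most often leak into clones.
seal_guest() {
  root="${1:-}"
  # Blank the machine ID so every clone generates a fresh one on first boot
  : > "$root/etc/machine-id"
  # Drop SSH host keys; sshd regenerates them on next start
  rm -f "$root"/etc/ssh/ssh_host_*_key "$root"/etc/ssh/ssh_host_*_key.pub
  # Reset cloud-init state so clones run first-boot provisioning again
  rm -rf "$root/var/lib/cloud/instances"
}

# Example against a mounted image: seal_guest /mnt/guestroot
```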
Install the guest agent before creating a template: on oVirt/RHV 4.4 and later this means qemu-guest-agent on Linux (the older ovirt-guest-agent is deprecated) and the oVirt Guest Tools installer, which bundles it, on Windows. The agent gives the Engine accurate memory usage and guest IP address reporting and enables graceful shutdown from the portal. VMs without a guest agent are flagged with a warning in the monitoring view even when running normally.

Snapshots

Snapshots in oVirt capture the VM's disk state and optionally its memory state at a point in time. Memory snapshots (live snapshots with memory) allow you to restore a running VM to its exact state including RAM contents. Disk only snapshots are faster and consume less storage but restore the VM to the disk state without restoring memory.

Snapshots are stored in the same data storage domain as the VM's disks and are managed by the Engine, not the storage layer. The recommended limit is 4 snapshots per VM. Beyond that, snapshot chains grow long enough to noticeably impact I/O performance during VM operation and make storage reclamation (deleting old snapshots) slow and resource intensive.
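One way to see chain length on disk is qemu-img's backing-chain output, which prints one "image:" line per layer. The helper below just counts those lines; chain_depth is a hypothetical name, and the example path is a placeholder for an active volume on the storage domain.

```shell
# Sketch: count snapshot chain depth from `qemu-img info --backing-chain`
# output, which emits one "image: ..." line per layer in the chain.
chain_depth() {
  grep -c '^image:'
}

# Example:
# qemu-img info --backing-chain /path/to/active-volume.qcow2 | chain_depth
```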


9. Hosted Engine HA Behavior

The hosted engine HA system runs as two daemons on each host: ovirt-ha-agent (monitors local host health and Engine VM status) and ovirt-ha-broker (coordinates with other hosts via the shared storage domain). Every host running the hosted engine addon communicates its health score to the shared storage, and the host with the highest score takes responsibility for running the Engine VM.

Health Scores and Failover

Each host calculates a score based on storage connectivity, network connectivity, and whether the Engine service is responding. If the Engine VM is running on a host that drops below a threshold, or if the Engine service inside the VM stops responding to health checks on port 443, the HA agents on other hosts detect the failure and elect a new host to restart the Engine VM. The entire process typically completes in two to three minutes.

bash: Check hosted engine status and manage maintenance mode
# Check current hosted engine status from any host
hosted-engine --vm-status

# Host scores appear in the --vm-status output; the host with the
# highest score is the preferred candidate for running the Engine VM

# Put the hosted engine into global maintenance before making Engine changes
# (stops HA monitoring, allows engine-setup to run safely)
hosted-engine --set-maintenance --mode=global

# Verify maintenance mode is active
hosted-engine --vm-status | grep -i maintenance

# Take the Engine out of maintenance after changes are complete
hosted-engine --set-maintenance --mode=none

# To move the Engine VM off its current host, put that host into local
# maintenance; the HA agents migrate the Engine VM to the best other host
hosted-engine --set-maintenance --mode=local
# (set --mode=none on that host afterwards to return it to the HA pool)
Any Engine configuration change that requires stopping the ovirt-engine service (upgrades, engine-setup runs, certificate renewals) must be done in global maintenance mode. If you run engine-setup without setting global maintenance first, the HA daemons detect the Engine service going down, decide the current host is unhealthy, and attempt to migrate the Engine VM to another host mid-upgrade. This corrupts the upgrade and can corrupt the Engine database. Set global maintenance, make your changes, then unset it.

10. Migration, Affinity Rules, and Scheduling

oVirt live migrates VMs between hosts using KVM's live migration capability over the migration network (or ovirtmgmt if no dedicated migration network is configured). The Scheduling Policy on each cluster determines when and how VMs are distributed across hosts.

Scheduling Policies

  • evenly_distributed: distributes VMs to keep CPU utilization even across all hosts and migrates VMs away from overloaded hosts. Use for standard production clusters; the default recommendation.
  • power_saving: consolidates VMs onto fewer hosts and powers off underused hosts, powering them back on when demand increases. Use in environments with variable load where power savings matter, such as dev/test clusters.
  • vm_evenly_distributed: distributes VMs by count across hosts, not by CPU utilization. Use when workloads are similar in size and VM count is the better balance metric.
  • none (pinned): VMs don't migrate automatically; manual migration only. Use for VMs that can't migrate: GPU passthrough, CPU-pinned VMs, NUMA-sensitive workloads.

Affinity Groups

Affinity groups control which VMs run together or apart. A positive affinity group keeps VMs on the same host (useful for latency sensitive multi-tier apps). A negative affinity group keeps VMs on different hosts (useful for HA pairs where you don't want primary and standby on the same node).

  1. In the Administration Portal, navigate to Compute, then Clusters, then select your cluster.
  2. Click the Affinity Groups tab and click New.
  3. Name the group, set Positive or Negative, set whether it's enforced (hard) or preferred (soft), and add the VMs.

Hard affinity constraints block migration if the constraint can't be satisfied. A hard negative affinity group with two VMs and only two hosts in the cluster means one host always has both VMs during maintenance on the other host, which violates the constraint and blocks the migration. Prefer soft affinity rules unless you have a specific reason for hard enforcement, because hard rules can make host maintenance operations fail.


Key Takeaways

  • Network teaming is not supported in oVirt. Use bonding only. The official docs explicitly state that teaming causes errors and breaks hosted engine deployment on the management network. Mode 4 (LACP) is the default supported bond mode. Use mode 1 (active backup) if your switch doesn't support LACP.
  • Self-hosted engine is the recommended deployment model for most environments. The Engine VM runs on the cluster it manages, with built-in HA migration between hosts. Two hosts minimum for HA. No dedicated Engine server required.
  • DNS must work before you run hosted-engine --deploy. Create A and PTR records for the Engine FQDN pointing to the IP you'll assign the Engine VM during setup. If DNS resolution fails mid-deployment, the deployer exits with a confusing error. The fix is always DNS.
  • Any Engine upgrade or engine-setup run requires global maintenance mode first. Without it, the HA daemons detect the Engine service going down and try to migrate the VM to another host mid-upgrade, corrupting both the upgrade and potentially the database.
  • NFS storage exports for oVirt must be owned by uid 36 (vdsm) and gid 36 (kvm) with all_squash,anonuid=36,anongid=36. Any other permission configuration causes the storage domain connection to fail when hosts attempt to mount it.
  • GlusterFS Arbiter volumes (replica 3 arbiter 1) give you two-way replication fault tolerance with 33% lower storage overhead than a full three-way mirror. This is the standard configuration for hyperconverged oVirt deployments on three nodes.
  • The snapshot chain limit per VM is four. Beyond that, I/O performance degrades noticeably during VM operation and snapshot deletion becomes slow. Use snapshots for short-term rollback points, not as a long-term backup strategy.
  • Hard affinity rules can block host maintenance if the constraint can't be satisfied. Prefer soft affinity rules for most cases. Hard rules are for workloads where the constraint is more important than the ability to evacuate a host.
