XCP-ng and XenServer End-to-End Setup: Pools, Xen Orchestra, Storage Repositories, and XenMotion
Standalone Infrastructure | Component: XCP-ng 8.3 / XenServer 8 | Audience: Infrastructure Engineers, VMware Alternatives Evaluators
XCP-ng is the community driven, open source Xen hypervisor maintained by Vates and incubated within the Xen Project under the Linux Foundation. XenServer 8 is the Citrix commercial downstream built on the same codebase. The relationship is similar to that between oVirt and RHV: same engine, different support model. XCP-ng is free. Xen Orchestra Appliance (XOA), the management platform, is a paid subscription product from Vates, but you can build Xen Orchestra from source yourself and run it for free with some limitations. Both are covered here, with notes where they diverge.
XCP-ng has seen significant adoption as a VMware alternative since Broadcom's acquisition drove licensing changes in 2024. If you're evaluating it from a VMware background, the concepts translate reasonably well: pools are like clusters, hosts are hosts, storage repositories are datastores, and XenMotion is live migration. The management model is different enough to deserve its own explanation.
1. Installation
XCP-ng installs from an ISO onto bare metal. It's not an OS you install Xen on top of; it's a purpose built hypervisor OS that boots directly into the Xen hypervisor, with a minimal Linux control domain (dom0) on top. The installer is minimal: hostname, management NIC, IP address, root password, and disk selection. Installation takes under 10 minutes on decent hardware. After reboot, the host is accessible via SSH to the root account and via the XAPI management API that Xen Orchestra connects to.
# Connect to the XCP-ng host via SSH
ssh root@xcpng-host-01.yourdomain.local
# Check the XCP-ng version
xe host-list params=name-label,software-version
# Check host UUIDs (needed for pool operations and SR creation)
xe host-list minimal=true
# List current storage repositories
xe sr-list params=name-label,type,shared
2. Pool Architecture and the Pool Master
Every XCP-ng host lives inside a pool, even a standalone single host deployment. A pool is the management boundary: all hosts in a pool share storage repositories, network configurations, and VM placement decisions. The pool master runs the XAPI database and coordinates all pool operations. Every other host in the pool is a pool member that accepts commands from the master.
A single pool supports up to 64 hosts. HA requires at least 3 hosts: 2 to host VMs and 1 as a witness. HA in XCP-ng monitors host health and restarts VMs on surviving hosts when a host fails. It requires shared storage so VMs can restart on a different host.
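As a rough CLI sketch of inspecting the pool and enabling HA (the SR name "NFS-Shared-SR" is a placeholder; HA needs a shared SR for its heartbeat and state files):

```shell
# Identify the pool and its current master (the master field is a host UUID)
xe pool-list params=name-label,master

# Resolve host UUIDs to hostnames
xe host-list params=uuid,name-label

# Enable HA using a shared SR for heartbeat/state storage
SR_UUID=$(xe sr-list name-label="NFS-Shared-SR" params=uuid minimal=true)
xe pool-ha-enable heartbeat-sr-uuids=$SR_UUID

# Confirm HA is active
xe pool-list params=name-label,ha-enabled
```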
Joining a Host to a Pool
1. Install XCP-ng on the new host and verify it's accessible via SSH.
2. Install any missing patches before joining. In Xen Orchestra, go to the host view and click the Patches tab, then Install All Patches. Reboot if prompted. All hosts in a pool should run the same software version for consistent behavior.
3. From Xen Orchestra, go to the pool you want to join and click Add Hosts.
4. Enter the new host's IP and credentials. Xen Orchestra connects to the new host and adds it to the pool. The new host's storage and networking are then visible in the pool view.
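The same join can be done from the CLI. A minimal sketch, run on the new host rather than the master (the master address and password below are placeholders):

```shell
# Run on the NEW host: join it to an existing pool
xe pool-join master-address=xcpng-host-01.yourdomain.local \
  master-username=root \
  master-password='your-root-password'

# Back on the master, verify the new member appears and is enabled
xe host-list params=name-label,enabled
```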
3. Xen Orchestra: Free vs Appliance
Xen Orchestra (XO) is the web based management interface for XCP-ng and XenServer pools. Two ways to run it:
- Xen Orchestra Appliance (XOA): A pre built VM image from Vates deployed on one of your XCP-ng hosts. XOA is the supported path. It includes backup functionality, advanced features like Continuous Replication, and professional support from Vates. XOA is subscription based.
- XO from source: You can clone the public GitHub repository and build Xen Orchestra yourself. This gives you the full UI and basic management without a subscription. Some advanced features are behind the XOA subscription. The from source build is widely used in the community for home labs and smaller deployments.
For production environments, XOA is the right call. The support subscription and the reliability of a maintained appliance are worth it. For evaluation and lab use, XO from source is completely viable and well documented on the XCP-ng official docs.
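A minimal sketch of the from-source build, assuming a Debian/Ubuntu VM with Node.js and Yarn already installed (see the XCP-ng docs for the exact prerequisite list, which changes between releases):

```shell
# Clone and build Xen Orchestra from the public repository
git clone -b master https://github.com/vatesfr/xen-orchestra
cd xen-orchestra
yarn
yarn build

# Start the server from the xo-server package
cd packages/xo-server
yarn start
```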
4. Storage Repositories
Storage Repositories (SRs) are the XCP-ng equivalent of datastores. Each SR stores VM disk images (VDIs) and is attached to hosts via Physical Block Devices (PBDs). A PBD is the link between a host and an SR: it stores how the host accesses the SR (NFS path, iSCSI target, local disk path). If you need to change how an SR is accessed (such as an NFS server IP change), you destroy and recreate the PBD with the updated connection details.
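The PBD replacement described above looks roughly like this from the CLI. This is a sketch: the SR name, `$HOST_UUID`, and the new NFS server IP are all placeholders for your environment:

```shell
# Find the PBD linking this host to the SR
SR_UUID=$(xe sr-list name-label="NFS-Shared-SR" params=uuid minimal=true)
PBD_UUID=$(xe pbd-list sr-uuid=$SR_UUID host-uuid=$HOST_UUID params=uuid minimal=true)

# Detach and destroy the old PBD
xe pbd-unplug uuid=$PBD_UUID
xe pbd-destroy uuid=$PBD_UUID

# Recreate it with the updated NFS server address, then plug it back in
NEW_PBD=$(xe pbd-create sr-uuid=$SR_UUID host-uuid=$HOST_UUID \
  device-config:server=10.0.0.50 \
  device-config:serverpath=/export/xcpng-storage)
xe pbd-plug uuid=$NEW_PBD
```

Repeat for each host attached to the SR, since every host has its own PBD.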
| SR Type | Provisioning | Shared | Best For |
|---|---|---|---|
| EXT (local) | Thin | No | Local storage with thin provisioning. Easier to manage than LVM. Recommended for local SRs. |
| LVM (local) | Thick | No | Local storage with thick provisioning. Higher overhead for snapshots. |
| NFS | Thin | Yes | Shared storage for live migration without iSCSI or FC. Simple to configure. |
| iSCSI | Thick | Yes | Shared storage on iSCSI SAN. Required for HA with iSCSI backed VMs. |
| XOSTOR | Thin | Yes | XCP-ng native HCI storage. Pools local disks from multiple hosts into shared storage. No external SAN required. |
# Get the pool master host UUID
MASTER_UUID=$(xe host-list params=uuid minimal=true | head -1)
# Create an NFS SR (shared across all hosts in the pool)
# The SR is created on the master but becomes shared automatically
xe sr-create \
host-uuid=$MASTER_UUID \
type=nfs \
content-type=user \
name-label="NFS-Shared-SR" \
device-config:server="nfs-server.yourdomain.local" \
device-config:serverpath="/export/xcpng-storage"
# Verify the SR was created and is shared
xe sr-list params=name-label,type,shared
# List VDIs (virtual disk images) in the new SR
SR_UUID=$(xe sr-list name-label="NFS-Shared-SR" params=uuid minimal=true)
xe vdi-list sr-uuid=$SR_UUID params=name-label,virtual-size,type
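For comparison, a local thin provisioned EXT SR is created the same way with a different type and device config. A sketch, assuming a spare disk at /dev/sdb (a placeholder; sr-create formats the device):

```shell
# Create a thin provisioned local EXT SR on a spare disk
xe sr-create \
  host-uuid=$MASTER_UUID \
  type=ext \
  content-type=user \
  name-label="Local-EXT-SR" \
  device-config:device=/dev/sdb
```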
5. Networking
XCP-ng networking is configured per host. Unlike vSphere distributed switches where you configure networking at the cluster level, in XCP-ng you configure network interfaces on each host individually. Xen Orchestra provides a consistent view across the pool but the underlying configuration is applied per host. Networks in XCP-ng are similar to vSwitches: they're virtual switch objects that VMs connect to.
For tagged VLAN networks, create a network on a host and set the VLAN tag. The network then appears in Xen Orchestra as a pool level resource and VMs can attach to it. Physical switch ports connected to XCP-ng hosts must be configured as 802.1q trunks carrying all the VLANs you want to use as VM networks.
# Get the physical interface UUID for the host
# eth0 is typically the management NIC; use eth1 or a bonded interface for VM networks
PIF_UUID=$(xe pif-list device=eth1 host-uuid=$MASTER_UUID params=uuid minimal=true)
# Create a VLAN network on top of the physical interface
xe network-create name-label="VM-Network-VLAN400"
NET_UUID=$(xe network-list name-label="VM-Network-VLAN400" params=uuid minimal=true)
# Create the VLAN PIF linking the network to the physical interface
xe vlan-create pif-uuid=$PIF_UUID vlan=400 network-uuid=$NET_UUID
# Verify the VLAN network exists
xe network-list name-label="VM-Network-VLAN400" params=name-label,bridge,MTU
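Once the VLAN network exists, VMs attach to it through VIFs (virtual interfaces). A sketch, with the VM name as a placeholder:

```shell
# Attach a VM to the new VLAN network
VM_UUID=$(xe vm-list name-label="app-server-01" params=uuid minimal=true)
NET_UUID=$(xe network-list name-label="VM-Network-VLAN400" params=uuid minimal=true)

# device=1 adds a second NIC; use device=0 for the first
VIF_UUID=$(xe vif-create vm-uuid=$VM_UUID network-uuid=$NET_UUID device=1)

# Hot plug the VIF if the VM is running and guest tools are installed
xe vif-plug uuid=$VIF_UUID
```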
6. VM Management and Templates
VMs in XCP-ng are created from templates: pre configured VM hardware profiles that set CPU, memory, disk, and OS type. XCP-ng ships with templates for all major OS families. Select a template, choose storage and network, attach an ISO if needed, and start the VM. Templates in XCP-ng are hardware profiles, not full OS images. They don't include a pre installed OS the way VMware templates do.
For full OS image templates (equivalent to VMware golden image templates), the workflow is: install the OS in a VM, configure it, install Xen guest tools, then clone the VM as a template in Xen Orchestra. The clone becomes a template and the base VM remains. New VMs created from the template get a full copy of the template's disk, not a thin clone.
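The clone-to-template workflow above can also be done from the CLI. A sketch, with VM and template names as placeholders (the source VM should be shut down first):

```shell
# Clone the prepared VM so the original survives as an editable base
VM_UUID=$(xe vm-list name-label="golden-debian12" params=uuid minimal=true)
TEMPLATE_UUID=$(xe vm-clone uuid=$VM_UUID new-name-label="tpl-debian12")

# Mark the clone as a template
xe vm-param-set uuid=$TEMPLATE_UUID is-a-template=true

# New VMs are created from it with vm-install, which copies the template's disk
xe vm-install template="tpl-debian12" new-name-label="app-server-02"
```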
7. Live Migration with XenMotion
XenMotion is XCP-ng's live migration feature. It moves running VMs between hosts in the same pool without downtime. VMs experience a brief pause (typically a few seconds during memory state transfer) but stay running throughout. XenMotion requires shared storage: the VM's disk must be on an SR accessible from both the source and destination host.
Storage Motion extends this to move a VM's disk between SRs while the VM is running. You can migrate a VM from local storage to shared storage, from one NFS SR to another, or from an iSCSI LUN to an NFS share, all without downtime. In Xen Orchestra, this is done by long clicking the disk in the VM disk view and selecting the destination SR from the dropdown.
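Both operations are also available from the CLI. A sketch, with VM, host, disk, and SR names as placeholders:

```shell
# Live migrate a running VM to another host in the same pool
# (the VM's disks must be on an SR both hosts can reach)
xe vm-migrate vm=app-server-01 host=xcpng-host-02 live=true

# Live migrate a single disk (VDI) to another SR while the VM runs
VDI_UUID=$(xe vdi-list name-label="app-server-01-disk0" params=uuid minimal=true)
DEST_SR=$(xe sr-list name-label="NFS-Shared-SR" params=uuid minimal=true)
xe vdi-pool-migrate uuid=$VDI_UUID sr-uuid=$DEST_SR
```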
8. Veeam Backup for XCP-ng
For most of XCP-ng's history, Veeam didn't offer native host level backup integration. If you were running XCP-ng and wanted Veeam, you used Veeam Agents deployed inside VMs. That's changed. Veeam released a public beta of a native XCP-ng plugin in late 2025, which is a significant development for XCP-ng shops that want to stay on Veeam.
The beta plugin backs up XCP-ng VMs at the host level without agents inside the VMs. Some limitations exist in the beta that don't exist on VMware or Hyper-V: application aware image processing (AAIP) is not supported, which means backups are crash consistent rather than application aware. You can still restore application items (AD, Exchange, SQL, PostgreSQL) from XCP-ng backups, but those restores are crash consistent rather than quiesced. Live migration is also not available for XCP-ng in the beta.
Check the Veeam helpcenter and the Veeam Community Resource Hub for current status on the XCP-ng plugin before you base architecture decisions on feature availability. Beta products change. What's a limitation today may ship as a feature by the time you read this.
Key Takeaways
- XCP-ng is the open source Xen hypervisor. XenServer 8 is the Citrix commercial downstream. Same codebase, different support model. Upgrading from XenServer or Citrix Hypervisor to XCP-ng preserves all VMs, storage repositories, and network configurations.
- Every XCP-ng host is in a pool, even standalone deployments. The pool master runs the XAPI database. Don't use Maintenance Mode in XCP-ng Center during upgrades. Use Xen Orchestra's Rolling Pool Update instead to avoid accidentally migrating the pool master role mid upgrade.
- Storage Repositories are the datastore equivalent. NFS SRs are thin provisioned and the simplest path to shared storage for live migration. EXT SRs are recommended over LVM for local storage due to better snapshot handling and lower overhead.
- Networks are configured per host in XCP-ng, not at the pool level. Xen Orchestra provides a unified view but the configuration applies individually to each host. Physical switch ports must be configured as 802.1q trunks for VLAN networks.
- XenMotion requires shared storage for live migration. Storage Motion allows live disk migration between SRs without VM downtime. Long click a disk in Xen Orchestra's VM disk view to trigger a live storage migration.
- XOA (Xen Orchestra Appliance) is the recommended management path for production environments. XO from source is viable for labs and evaluation. The subscription enables advanced features like Continuous Replication and Vates support.
- If upgrading from XenServer, disable clustering before upgrading. Clustering relies on proprietary components that are not available in XCP-ng; if left enabled, it leaves stale clustering data in the XAPI database that prevents XAPI from starting after the upgrade.
- Veeam released a native XCP-ng host level backup plugin in public beta in late 2025. It backs up VMs without agents inside guests. Application aware image processing is not supported in the beta, meaning backups are crash consistent. Check current status before finalizing your backup architecture for XCP-ng environments.