vSphere to Proxmox VE Migration Guide: Import Wizard, Disk Conversion, Networking Translation, and Post-Migration Validation
Migration Guide | Source: VMware vSphere 6.5-8.0 | Target: Proxmox VE 8.x | Audience: Infrastructure Engineers, VMware Migration Leads
This is the most common migration happening in infrastructure right now. Broadcom's acquisition of VMware in 2023 eliminated perpetual licenses and restructured pricing in ways that drove cost increases of 200 to 1,200 percent for many customers. Proxmox VE is the platform absorbing most of that displacement: it's mature, open source, runs KVM and QEMU, and has a migration path from vSphere that improved significantly when Proxmox added the ESXi Import Wizard in early 2024.
This article covers the full migration path: workload assessment, the networking and storage translation that trips people up, the three conversion approaches with honest trade offs for each, post migration validation, and the operational changes you need to plan for before your first VM lands on Proxmox.
1. Workload Assessment Before You Start
Don't migrate blind. A pre migration inventory saves you from discovering incompatibilities mid cutover when your options are limited.
- vSAN backed VMs can't use the Import Wizard. The Proxmox ESXi Import Wizard connects directly to an ESXi host's API. VMs on vSAN storage aren't accessible through a single host's API in the same way as VMs on local or shared SAN storage. These VMs need OVA export or qemu-img conversion instead.
- vTPM state doesn't migrate. If a VM uses a virtual TPM with keys stored in the vTPM rather than an external TPM, those keys don't move. For Windows VMs with BitLocker bound to the vTPM, decrypt the drives before migration. Create a new vTPM on the Proxmox side after migration and re-seal if needed.
- VMware Tools need handling. The simplest approach is uninstalling VMware Tools before migration. If you don't, cleanup is possible post migration but involves running a PowerShell cleanup script on Windows VMs. Linux VMs need open-vm-tools removed and virtio drivers verified.
- NIC naming may change on Linux. After migration, Linux VMs may rename the primary NIC because the PCI device address changes when going from vmxnet3 to VirtIO. Keep the source MAC address when creating the NIC in Proxmox to minimize renaming issues.
- RDMA and GPU passthrough need re-evaluation. If VMs use SR-IOV NICs or GPU passthrough in vSphere, the device assignments need to be reconfigured from scratch on Proxmox. There's no automated translation.
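Two of the items above can be handled from the Proxmox CLI after import. A minimal sketch, assuming a migrated VM with VMID 100; the MAC address is a placeholder for the one copied from the source VM's vSphere NIC:

```shell
# Recreate the NIC with the source VM's MAC to reduce Linux interface renaming
qm set 100 --net0 virtio=00:50:56:AB:CD:EF,bridge=vmbr0

# Add a fresh vTPM after migration (the old vTPM's keys do not carry over)
qm set 100 --tpmstate0 local-zfs:1,version=v2.0
```

These commands run on the Proxmox host, so they're shown as a sketch rather than something you can test outside a Proxmox node.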
2. Networking Translation
vSphere uses vSwitches and distributed vSwitches with port groups. Proxmox uses Linux bridges (vmbr0, vmbr1, etc.) and optionally the OVS or SDN layer for more complex topologies. The translation is straightforward for flat environments but needs planning for environments with VLANs and distributed switching.
| vSphere Construct | Proxmox Equivalent | Migration Note |
|---|---|---|
| vSwitch (Standard) | Linux bridge (vmbr0) | One bridge per physical NIC or bond. VMs attach to bridges. |
| Distributed vSwitch | Linux bridge with VLAN aware mode, or OVS | VLAN aware bridges in Proxmox allow one bridge to handle multiple VLANs without creating a separate bridge per VLAN. |
| Port group (untagged) | VM NIC on bridge, no VLAN tag set | Direct equivalent. |
| Port group with VLAN tag | VM NIC on bridge with VLAN tag set, or bridge in VLAN aware mode | Set the VLAN tag on the VM's NIC in Proxmox, not on the bridge itself, when using VLAN aware bridges. |
Document every port group with its VLAN tag, purpose, and which VMs use it before you start. Map each port group to its Proxmox bridge equivalent. Do it before the first VM moves, not after.
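As a sketch, a VLAN-aware bridge in /etc/network/interfaces looks like the following; the physical interface name (eno1) and addresses are placeholders for your environment:

```
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

The per-VM VLAN tag (the old port group tag) then goes on the VM's NIC, e.g. `qm set 100 --net0 virtio,bridge=vmbr0,tag=20`, not on the bridge itself.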
3. Storage Mapping
vSphere uses datastores. Proxmox uses storage: ZFS pools, LVM volumes, directories, NFS mounts, iSCSI, or Ceph. The right Proxmox storage type depends on what you need:
- ZFS: Best all around choice for local storage. Provides checksumming, snapshots, compression, and replication. ARC cache significantly improves performance for VMs with hot data. Use ZFS if your hardware has ECC RAM (which it should for production servers).
- Directory (qcow2 on ext4/xfs): Simplest path. VM disks are qcow2 files in a directory. Snapshots work. No ZFS benefits. Good for NFS backed storage where the underlying FS is managed by the NAS.
- LVM: Thick provisioned block devices. Fast but no native snapshots without LVM thin. Avoid for VM storage unless you have a specific reason.
- Ceph: Distributed block storage for clusters. Equivalent to vSAN. Requires a minimum of 3 nodes. Best for environments building multi node Proxmox clusters that need shared storage without external SAN.
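As an illustration of the ZFS path, creating a pool and registering it as Proxmox VM storage might look like this; the pool name (tank), device names, and storage ID are placeholders:

```shell
# Create a mirrored ZFS pool from two disks (destructive; verify device names first)
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc

# Register the pool as Proxmox storage for VM disks and container volumes
pvesm add zfspool tank-vm --pool tank --content images,rootdir
```

These commands run on the Proxmox host itself, so treat them as a sketch to adapt, not a copy-paste recipe.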
4. Conversion Approaches
Option 1: Proxmox ESXi Import Wizard (Recommended)
Added in Proxmox VE 8.1 and substantially improved since, the Import Wizard connects directly to an ESXi host (or vCenter) via its API and imports VMs into Proxmox with most configuration automatically mapped. It's the cleanest path for environments on ESXi 6.5 through 8.0 with VMs on local or SAN storage (not vSAN).
1. Power off the VM on the ESXi side. The wizard works on powered-off VMs for a clean disk state.
2. In the Proxmox web UI, go to Datacenter, then Storage, and click Add. Select ESXi as the storage type.
3. Enter the ESXi host (or vCenter) IP, username, and password. Proxmox connects to the ESXi API and makes VMs on that host available as importable items.
4. Navigate to a Proxmox node and select the ESXi storage. Browse to the VM you want to import and click Import. The wizard maps CPU, memory, disk, and network configuration automatically and lets you override any setting before importing.
5. After import, verify the VM's network and storage configuration in Proxmox, assign the correct bridge, boot the VM, and uninstall VMware Tools.
Option 2: OVA Export and Disk Conversion
For VMs on vSAN or in environments where direct ESXi API access isn't available, export the VM as an OVA from vCenter and convert the VMDK disk to qcow2 on the Proxmox side.
```shell
# Transfer the OVA to the Proxmox host (via scp, NFS mount, or USB),
# then extract it
tar xvf vm-export.ova

# Convert VMDK to qcow2
# -p shows progress, -f source format, -O output format
qemu-img convert -p -f vmdk vm-disk.vmdk -O qcow2 /var/lib/vz/images/100/vm-disk.qcow2

# Create a new VM in Proxmox first (without a disk) and note the VMID (e.g., 100),
# then import the converted disk into that VM:
# qm importdisk [vmid] [diskpath] [storage]
qm importdisk 100 /var/lib/vz/images/100/vm-disk.qcow2 local-zfs

# After importing, attach the disk to the VM via the Proxmox UI.
# Use VirtIO SCSI for best performance on Linux VMs; set SATA or IDE
# initially for Windows VMs if they fail to boot with VirtIO.
```
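When a VM exports with several VMDKs, a small loop can generate the conversion commands for review before you run them. This sketch only prints the commands; the disk names and VMID are placeholders:

```shell
# Print a qemu-img conversion command for each extracted VMDK
vmid=100
for disk in vm-disk1.vmdk vm-disk2.vmdk; do
  echo "qemu-img convert -p -f vmdk ${disk} -O qcow2 /var/lib/vz/images/${vmid}/${disk%.vmdk}.qcow2"
done
```

Pipe the output to a file, sanity-check the paths, then execute it with `sh`.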
Option 3: Virt-v2v
Virt-v2v is a Linux tool that converts VMs from one hypervisor format to another. It handles driver injection for Windows VMs (injecting VirtIO drivers into the VM's driver store so the disk is accessible after conversion) and removes VMware specific components automatically. It's the most thorough conversion tool for Windows VMs but adds complexity compared to the Import Wizard. Use virt-v2v when you need automated driver handling for a large batch of Windows VMs and can't use the Import Wizard.
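A hedged example of a virt-v2v invocation against an exported OVA; the paths are placeholders, and virt-v2v (with the VirtIO driver package available for Windows guests) must be installed on the conversion host:

```shell
# Convert an exported OVA to qcow2 in a local directory;
# virt-v2v injects VirtIO drivers into Windows guests during conversion
virt-v2v -i ova /tmp/vm-export.ova -o local -os /var/lib/vz/import -of qcow2
```

The resulting qcow2 disk can then be imported into a Proxmox VM with `qm importdisk`, the same as in Option 2.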
5. Post Migration Validation
- Boot and application test: Power on each migrated VM and verify the OS boots, applications start, and network connectivity works. Check for VMware specific errors in event logs or journals.
- Install QEMU Guest Agent: Install qemu-guest-agent on Linux VMs and the QEMU Guest Tools on Windows VMs. Without the guest agent, Proxmox can't report accurate IP addresses, graceful shutdown from the UI doesn't work, and some features like freeze consistent snapshots aren't available.
- Disk controller verification: Linux VMs with stripped down initramfs images (RHEL, CentOS, Ubuntu Server) sometimes fail to boot with VirtIO SCSI if the VirtIO block driver wasn't in the initramfs. Start these VMs with SATA or IDE controller first, install VirtIO drivers, rebuild initramfs if needed, then switch to VirtIO SCSI for production performance.
- Backup coverage: Configure backup jobs in Proxmox Backup Server for all migrated VMs before decommissioning any vSphere infrastructure. Don't assume existing Veeam backup chains cover VMs that have migrated to a new hypervisor without reconfiguring the Veeam jobs for Proxmox.
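The guest-agent and initramfs steps above can be sketched as follows, for a Debian/Ubuntu guest and a RHEL-family guest; VMID 100 is a placeholder:

```shell
# Inside the Linux guest: install and enable the QEMU guest agent
apt install qemu-guest-agent        # Debian/Ubuntu (use dnf on RHEL-family)
systemctl enable --now qemu-guest-agent

# On the Proxmox host: expose the agent channel to the VM
qm set 100 --agent enabled=1

# If the guest won't boot from VirtIO SCSI, rebuild the initramfs with the
# VirtIO drivers included, then switch the disk controller back to VirtIO SCSI
dracut -f --add-drivers "virtio_blk virtio_scsi"   # RHEL/CentOS
update-initramfs -u                                # Debian/Ubuntu
```

These commands touch a live guest and host, so treat them as a checklist sketch rather than a script to run verbatim.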
Key Takeaways
- The Proxmox ESXi Import Wizard is the recommended migration path for ESXi 6.5 through 8.0 VMs on local or SAN storage. It doesn't work for vSAN backed VMs. Those need OVA export or qemu-img conversion.
- vTPM state doesn't migrate between hypervisors. Decrypt BitLocker drives before migration if they're bound to the vTPM. Create a new vTPM in Proxmox and re-seal after migration.
- Uninstall VMware Tools before migration. If you don't, post migration cleanup requires a PowerShell script on Windows VMs and manual package removal on Linux VMs.
- Linux VMs may rename the primary NIC after migration because the PCI device address changes. Preserving the MAC address from vSphere when creating the Proxmox NIC reduces this risk.
- Windows VMs may fail to boot with VirtIO SCSI initially. Start them with SATA or IDE first, install VirtIO drivers from the VirtIO ISO, then switch to VirtIO SCSI for production performance.
- Install QEMU Guest Agent on every migrated VM. Without it, graceful shutdown doesn't work from the Proxmox UI, IP addresses aren't reported, and freeze consistent snapshots aren't available.
- Don't run the migrated VM on both ESXi and Proxmox simultaneously. Shut down the ESXi copy or remove it after confirming the Proxmox migration succeeded. Document decommission status in vCenter before anything gets deleted.