Migrating from the Standalone Veeam Backup for Nutanix AHV Appliance to VBR v13

Veeam v13 · Nutanix AHV · Migration Guide
📅 March 2026  ·  ⏱ ~13 min read  ·  By Eric Black

What You're Actually Migrating From

If you set up Nutanix AHV backups before Veeam v12.2, you went through the standalone Veeam Backup for Nutanix AHV appliance. This was a separate product from VBR. It ran as a Linux-based virtual appliance deployed directly on your AHV cluster, had its own web interface at its own URL, and was managed completely independently from whatever VBR environment you had running for VMware or Hyper-V. Versions 4 through 7 of the product used this appliance model.

Starting with plug-in version 8, Veeam dropped the appliance requirement. The AHV integration moved natively into VBR. Plug-in version 9, which ships with VBR v13, continues that model and adds polish on top. This guide covers the migration from the old appliance world to the new native integration.

There are two paths depending on your situation. If you're upgrading an existing VBR environment that already has the AHV plug-in, you use the in-place upgrade path. If you're moving to a brand new VBR server and want to bring your AHV backups with you, you use the full appliance migration path. Both are covered here.

What You Gain and What Changes

Before the steps, it's worth being clear about what actually changes after this migration, because it affects how you plan and what operational disruptions to expect.

What you gain:

  • AHV workloads managed from the same VBR console as everything else. No more separate web interface, no more logging into a different URL to check an AHV job.
  • The full VBR feature set for AHV: SOBR support, malware detection, SureBackup, backup copy jobs, and Enterprise Manager reporting all apply to AHV backups now.
  • Prism Central support, meaning one connection covers all clusters if PC is in your environment.
  • The old appliance VM can be decommissioned, freeing cluster resources.

What changes operationally:

  • Existing backup chains created by the old appliance are not automatically imported into VBR. The data is still on your repositories, but VBR doesn't know about it until you explicitly import it. That's a manual step, covered in detail later in this guide.
  • Job definitions from the old appliance do not migrate automatically. You'll recreate jobs in VBR after the migration.
  • The old appliance's internal database and configuration don't transfer. VBR starts fresh with the infrastructure you point it at.
ℹ️ Your Backup Data Is Not at Risk
The migration doesn't touch backup files on your repositories. Every restore point written by the old appliance is still there. The import step at the end of this guide makes VBR aware of them so you can restore from them. Nothing gets deleted unless you explicitly delete it.

Before You Start: Version Check and Prerequisites

The upgrade path to plug-in version 9 has specific version requirements that are not optional. Check all of these before doing anything else.

  • VBR version before upgrading to v13 — You must be on VBR 12.3.1 (build 12.3.1.1139) or later. If you are on 12.0 or 12.1, upgrade to 12.3.1 first. You cannot jump directly to v13 from an earlier version.
  • AHV plug-in version before upgrading to v9 — You must be on plug-in version 7.1 or 8. If you are on plug-in version 7.0 or earlier (versions 4.0, 4a, 5.0, 5.1, 6, 6.1, or 7), upgrade to 7.1 first through the old appliance interface before attempting the VBR v13 upgrade.
  • Appliance power state (plug-in 7.1 only) — If your current plug-in version is 7.1 specifically, the AHV backup appliance VM must be powered on during the VBR upgrade and the Components Update wizard. If it's powered off during an upgrade from 7.1, the process fails at the component update step.
  • Nutanix AOS version — AOS 6.5.x or later is required for plug-in v9. Confirm your cluster AOS version before starting.
  • VBR upgrade target version — Upgrade to VBR 13.0.1 specifically. This is the build that includes the full AHV plug-in v9 component pipeline.
  • Active jobs — No AHV backup jobs should be running during the upgrade. Schedule this in a maintenance window when jobs are idle.
🚫 Do Not Skip Version Steps
The upgrade path is sequential and the docs are explicit about it. VBR 12.3.1 first, then 13.0.1. Plug-in 7.1 or 8 before upgrading to v9. Attempting to skip versions produces errors during the Components Update step and can leave your AHV infrastructure in a partially updated state that needs manual cleanup to resolve.
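The version gates above can be expressed as a simple pre-flight check. This is an illustrative sketch using the build numbers from this guide; the function and its names are hypothetical, not a Veeam API:

```python
# Hypothetical pre-flight check mirroring the version gates listed above.
# Build numbers come from this guide; nothing here calls Veeam itself.

def check_upgrade_prerequisites(vbr_build, plugin_version, aos_version,
                                appliance_powered_on):
    """Return a list of blocking problems; an empty list means safe to proceed."""
    problems = []
    # VBR must already be on 12.3.1 (build 12.3.1.1139) or later.
    if vbr_build < (12, 3, 1, 1139):
        problems.append("Upgrade VBR to 12.3.1 (build 12.3.1.1139) first")
    # The AHV plug-in must be on 7.1 or 8 before moving to v9.
    if plugin_version not in ("7.1", "8"):
        problems.append("Upgrade the AHV plug-in to 7.1 first")
    # AOS 6.5.x or later is required for plug-in v9.
    if aos_version < (6, 5):
        problems.append("Upgrade Nutanix AOS to 6.5.x or later")
    # Coming from 7.1 specifically, the appliance VM must be powered on.
    if plugin_version == "7.1" and not appliance_powered_on:
        problems.append("Power on the AHV backup appliance VM")
    return problems
```

For example, an environment on VBR 12.1 with plug-in 7.0 fails three of the four gates at once, which is exactly the "skipped versions" situation the warning above describes.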

Path A: Upgrade In-Place via the Components Update Wizard

This is the path for most people. You already have VBR running, you have the old AHV plug-in in place, and you're upgrading VBR to v13. The process is largely automated through the Components Update wizard that runs after the main VBR upgrade.

Step A.1

Confirm all prerequisites are met

Verify VBR is on build 12.3.1.1139 or later. Verify the AHV plug-in is on version 7.1 or 8. If you're on 7.1 specifically, verify the appliance VM is powered on right now. Verify Nutanix AOS is 6.5.x or later on all clusters. Do not proceed until every one of these checks passes.

Step A.2

Run the VBR upgrade to v13.0.1

Download the VBR v13.0.1 installer from the Veeam downloads page and run the upgrade on your backup server. The installer detects the existing installation and upgrades it in place. It stops all VBR services, applies the upgrade, and restarts them. This typically takes 20 to 40 minutes depending on your environment size and database. Don't interrupt it.

Step A.3

Complete the Components Update wizard

After VBR v13 starts for the first time, the Components Update wizard launches automatically, or you can trigger it manually from Backup Infrastructure > Components Update. This wizard updates all managed infrastructure components to match the new VBR version, including the AHV plug-in on your appliance.

The wizard lists everything that needs updating. Your AHV appliance will appear here. Review the list and click Apply. VBR connects to the appliance over the network and updates the plug-in components in place. The appliance VM itself is not replaced. Its plug-in software is updated.

⚠️ The Appliance Must Be Reachable During This Step
VBR needs network access to the AHV backup appliance to push the component update. If the appliance is unreachable, the update step fails with a network error. Fix connectivity first, then re-run the Components Update wizard. It's safe to run again after resolving the issue.
Step A.4

Verify the upgrade completed successfully

After the Components Update wizard finishes, go to Backup Infrastructure > Managed Servers. Your Nutanix AHV server should appear with a green status and the plug-in version should show 9.x. Open an existing AHV backup job and confirm it looks correct in the v13 console. Run a test backup job to confirm the full pipeline is functional end-to-end.

💡 After a Successful In-Place Upgrade: Consider Retiring the Appliance VM
Once you're running plug-in v9 natively in VBR, the old appliance VM is no longer needed for ongoing operations. After you've confirmed a few successful job runs, you can power it down and eventually delete it from the cluster to reclaim those resources. Don't rush it. Give yourself a week or two of clean job runs first.

Path B: Full Appliance Migration to a New VBR Server

This path is for situations where you're standing up a new VBR v13 server and want to carry your existing AHV backup infrastructure and backup data across to it. It has more steps than Path A because you're disconnecting the appliance from the old VBR server and reconnecting it to the new one, then importing existing backup chains. None of the individual steps are complicated, but the order matters a lot.

Step B.1

Stand up the new VBR v13.0.1 server

Install VBR v13.0.1 on the new server and apply your license. Do not add any Nutanix infrastructure yet. The new server needs to be clean and ready before you move the appliance connection over.

Step B.2

Remove the appliance from the old VBR server

On the old VBR server, go to Backup Infrastructure > Managed Servers. Find the Nutanix AHV server entry, right-click it, and select Remove. This disconnects the old VBR server's management relationship with the appliance. It does not touch any backup data on your repositories.

🚫 This Step Is Not Optional and Cannot Be Skipped
If you skip removing the appliance from the old server and then connect it to the new one, both VBR servers will try to manage the same appliance simultaneously. That causes job conflicts, potential database corruption on the appliance, and unpredictable behavior that's painful to untangle. Remove it from the old server first. Every time.
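The reason the order is non-negotiable can be sketched as a toy model of the appliance's management relationship: exactly one VBR server may hold it at a time, and adding before removing trips the conflict. The class and method names are purely illustrative, not Veeam's implementation:

```python
# Toy model of why Step B.2 must precede Step B.3. Hypothetical names only.

class ApplianceModel:
    def __init__(self):
        self.managed_by = "old-vbr"   # the appliance starts under the old server

    def remove_from(self, server):
        if self.managed_by == server:
            self.managed_by = None    # management relationship released

    def add_to(self, server):
        # Guard against two VBR servers managing the same appliance at once.
        if self.managed_by is not None:
            raise RuntimeError(
                f"appliance is still managed by {self.managed_by}; "
                "remove it from the old server first")
        self.managed_by = server

appliance = ApplianceModel()
appliance.remove_from("old-vbr")   # Step B.2
appliance.add_to("new-vbr")        # Step B.3
```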
Step B.3

Add the Nutanix infrastructure to the new VBR server

On the new VBR v13 server, add your Prism Central or Prism Element cluster as you would in a fresh setup (see the Veeam v13 with Nutanix AHV setup guide for that full walkthrough). VBR discovers the cluster and the existing appliance VM running on it. Complete the wizard. At this point VBR is connected to the cluster, with the old appliance now under the new server's management.

Step B.4

Update the appliance components

Run the Components Update wizard on the new VBR server. This updates the plug-in components on the appliance to version 9 under the new server's management. The appliance is effectively re-provisioned and handed off cleanly.

Step B.5

Add the backup repositories to the new VBR server

Your old backups live on repositories that the new VBR server doesn't know about yet. Go to Backup Infrastructure > Backup Repositories and add each one using the same paths and credentials as before. The backup files are untouched. VBR just needs to be pointed at them.

Step B.6

Rescan the repositories and import existing backups

Right-click each repository in the Backup Repositories view and select Rescan. VBR scans the repository and discovers the backup files written by the old appliance. After the rescan, go to Backups > Disk (Imported). Your old AHV restore points will appear there. Right-click the imported backup sets and select Map Backup to bring them into VBR's active database. They'll then appear in Backups > Disk and be fully available for restores.
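Conceptually, the rescan-then-map flow moves restore points from an "imported" bucket into the active database. The sketch below models that flow with plain dictionaries; the function names and data shapes are assumptions for illustration, not Veeam's actual internals:

```python
# Conceptual sketch of Step B.6: rescan discovers chains on disk, then
# mapping moves them into the active database. Illustrative only.

def rescan(repository_files):
    """Discover backup chains on disk that the new server doesn't know yet."""
    # These appear under Backups > Disk (Imported), not yet active.
    return {f["vm_id"]: f for f in repository_files}

def map_backup(imported, active_db, vm_id):
    """Move one imported chain into the active database (Backups > Disk)."""
    active_db[vm_id] = imported.pop(vm_id)
    return active_db

repo_files = [{"vm_id": "vm-101", "restore_points": 14},
              {"vm_id": "vm-102", "restore_points": 9}]
imported = rescan(repo_files)          # after Rescan on the repository
active = {}
map_backup(imported, active, "vm-101")  # now restorable from the new server
```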

ℹ️ Imported Backups Are Immediately Available for Restore
Once mapped, the old restore points are fully usable for file-level restore, instant recovery, and full VM restore through the new VBR server. You don't need to wait for a new backup job run to restore from existing points.
Step B.7

Recreate backup jobs on the new VBR server

Job definitions don't migrate automatically. Create new AHV backup jobs on the new VBR v13 server targeting your Nutanix VMs and pointing at your repositories. When the first new job runs, VBR detects matching VM identifiers in the imported backup chains and continues the existing files where possible; where it can't, it starts a fresh chain. Either way, your VMs are protected going forward from the moment the new jobs run.

After Migration: Cleanup and Validation

Whether you took Path A or Path B, work through this list before calling the migration done:

  • Run at least two full backup job cycles and verify both complete without errors.
  • Perform a test restore of at least one VM end-to-end. A file-level restore from an existing restore point is a solid first check. A full VM restore is better.
  • Confirm restore points appear correctly in Backups > Disk for all protected VMs.
  • If you imported old backups via rescan, verify the imported chains show up correctly and are usable for restore.
  • Confirm email notifications are configured and firing on the new server.
  • Once you've had clean job runs for at least a week, power down the old appliance VM. After another clean stretch with no issues, delete it from the cluster.
⚠️ Don't Delete Old Backup Files Until You Have a Clean New Chain
Don't remove any backup files from repositories until the new VBR setup has successfully completed at least one full backup run per protected VM and you've confirmed those new restore points are usable. Once you have a solid new chain in place, the old imported restore points will expire naturally through VBR's retention. Let retention handle it rather than manually deleting anything.
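The rule in the warning above reduces to a simple predicate: old restore points stay until every protected VM has both a completed new full backup and a verified restore. A hedged sketch of that decision logic, with hypothetical field names (this is not a Veeam feature, just the checklist expressed as code):

```python
# Illustrative "safe to let retention run" check. Field names are assumptions.

def safe_to_let_retention_run(vms):
    """True once every VM has a completed new full and a tested restore."""
    return all(vm["new_full_complete"] and vm["restore_verified"] for vm in vms)

fleet = [
    {"name": "app01", "new_full_complete": True, "restore_verified": True},
    {"name": "db01",  "new_full_complete": True, "restore_verified": False},
]
# db01's restore hasn't been tested yet, so keep all old restore points.
```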

Optional: Switch to Prism Central Deployment

If you were previously connecting to standalone Prism Element clusters and you have Prism Central in your environment, the migration to v13 is a good time to make the switch. Adding Prism Central gives you a single connection that covers all managed clusters instead of maintaining individual cluster entries.

To make the switch after migration: add your Prism Central instance to VBR under Backup Infrastructure > Managed Servers > Add Server > Nutanix AHV and enter the Prism Central address. VBR automatically reconfigures standalone cluster entries that are already managed by that Prism Central instance and converts them to PC-managed connections. Existing jobs continue to work through the new connection. You don't need to remove and re-add standalone entries manually.

💡 Worth the Switch If You Have More Than One Cluster
If you're protecting VMs across more than one AHV cluster, Prism Central deployment simplifies everything. One connection, all clusters visible, workers deployable across any of them from a single pane. For single-cluster environments the operational benefit is smaller, but it still consolidates your infrastructure view.

Closing Thoughts

This migration is one of those things that sounds bigger than it actually is once you have the version requirements mapped out and the sequence clear in your head. The in-place upgrade path through the Components Update wizard is about as low-friction as a major platform migration gets, assuming you're already on 12.3.1 and plug-in 7.1 or 8. The wizard does the actual work.

The full migration to a new VBR server has more steps, but the steps themselves are not hard. The order is what matters: remove from the old server before connecting to the new one, rescan repositories before recreating jobs, validate restores before you decommission anything. Follow that sequence and it's clean.

The real payoff is what you get after. AHV workloads in VBR natively means they're part of the same protection strategy as everything else you're managing. Same malware detection, same SOBR policies, same SureBackup verification, same reporting in Enterprise Manager. That's what the whole migration is actually for, and it's worth the effort to get there.

What You've Covered

  • Version prerequisites confirmed: VBR 12.3.1 minimum, plug-in 7.1 or 8 minimum, AOS 6.5.x minimum, appliance powered on if coming from 7.1
  • Path A: In-place upgrade completed via VBR v13.0.1 upgrade and Components Update wizard
  • Path B: Appliance removed from old server first, connected to new VBR v13 server, components updated
  • Repositories added to new server, rescanned, and old backup chains imported and mapped
  • New backup jobs created and at least two successful runs completed
  • Test restore performed and confirmed end-to-end
  • Old appliance powered down after validation period, decommissioned after extended clean run
  • Optional Prism Central connection added for multi-cluster environments
