Veeam v13: Changed Block Tracking Deep Dive -- VMware, Hyper-V, and Nutanix AHV


Veeam v13 Series | Component: VBR v13 | Audience: Hands-on Sysadmins, Enterprise Architects

Changed Block Tracking is one of those features that most people take for granted until the day it breaks. Then they discover they don't actually understand how it works, and troubleshooting it becomes a lot harder than it needs to be. CBT is also not a single thing. VMware, Hyper-V, and Nutanix AHV each implement it differently, with different files, different fallback behaviors, and different reset procedures. Treating them as equivalent leads to incorrect assumptions when something goes wrong.

This article covers what CBT actually does at the hypervisor level, how Veeam uses it, what happens when it breaks across each platform, and how to fix it. There's also a PowerShell section at the end for automating CBT resets at scale, because doing it one VM at a time in the console is not a production procedure.


1. What CBT Does and Why It Matters

Without CBT, backing up a VM incrementally means reading the entire virtual disk and comparing it against what you already have in the repository. For a 500 GB VM with 2 GB of daily changes, that means reading all 500 GB to find 2 GB of differences. CBT flips that. The hypervisor tracks which blocks change as they change, and when backup software asks "what changed since the last backup?" it gets back a list of changed block offsets. It reads only those blocks. The disk reading work drops from 500 GB to 2 GB.

That's the entire value proposition. Shorter backup windows, less I/O on production storage during backup jobs, less network traffic to the repository, and faster completion times on environments with hundreds of VMs. In large environments without CBT, backup windows can exceed 24 hours. With CBT they're a fraction of that.

When CBT is unavailable or corrupted, Veeam doesn't silently fail. It falls back to its proprietary filtering mechanism, which calculates checksums for every data block and compares them against the stored checksums from the previous backup. This produces an incremental backup file of normal size, but it requires reading the entire VM disk to do it. Backup jobs slow down dramatically. The backup chain stays intact. You just lose the performance benefit until CBT is restored.


2. VMware vSphere CBT

How It Works

VMware CBT has shipped with vSphere since version 4.0 as part of the VADP (vStorage APIs for Data Protection) framework. The VMkernel tracks which blocks on a virtual disk have changed (block size is variable, starting at a minimum of 64 KB on smaller disks and growing as disk size increases) and records those changes in a CTK file stored alongside the VMDK on the datastore. The CTK file is approximately 0.5 MB per 10 GB of virtual disk size, and it doesn't grow beyond that unless the virtual disk itself is expanded.

The tracking is keyed to change IDs. Each time a snapshot is taken, VMware assigns a change ID to the disk at that point. When backup software requests the changed blocks, it provides the change ID from the previous backup as the starting point and VMware returns only the blocks that changed since then. Veeam stores that change ID after each backup session and provides it on the next run.
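
The change-ID mechanics can be seen directly through the vSphere API. Here's a minimal PowerCLI sketch, assuming a connected session and a CBT-enabled VM; the VM name "app01" and the device key are placeholders (2000 is the conventional key for the first SCSI disk):

PowerCLI: Query changed disk areas for one disk (illustration only)
# A snapshot gives us a stable point-in-time to query against
$vm   = Get-VM -Name "app01"
$snap = New-Snapshot -VM $vm -Name "cbt-query-demo" -Confirm:$false

# changeId "*" returns all allocated areas; passing the change ID stored
# from the previous backup session would return only blocks changed since then
$changes = $vm.ExtensionData.QueryChangedDiskAreas($snap.ExtensionData.MoRef, 2000, 0, "*")

# Sum the extent lengths to see how much the backup would actually read
$sum = ($changes.ChangedArea | Measure-Object -Property Length -Sum).Sum
Write-Host ("{0} extents, {1:N0} MB to read" -f $changes.ChangedArea.Count, ($sum / 1MB))

Remove-Snapshot -Snapshot $snap -Confirm:$false

The change ID a backup application stores for the next run comes from the snapshot disk's backing info, not from this call; the query itself only returns the extents.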

Veeam enables CBT on VMs automatically when a backup job runs, using a stun and unstun cycle (creating and immediately removing a snapshot) to insert the change tracking filter into the VMware storage stack. You don't need to enable CBT manually. But you do need to understand the requirements:

  • VM hardware version 7 or later. VMs on older hardware versions fall back to Veeam's proprietary filtering automatically.
  • Storage must go through the ESXi storage stack: VMFS, NFS, and RDM in virtual compatibility mode all work. RDM in physical compatibility mode doesn't.
  • The VM disk can't be an independent disk (persistent or non-persistent), as those are unaffected by snapshots and CBT relies on snapshot operations.
  • CBT should not be enabled on VMware Horizon View linked clone or instant clone VMs. VMware documents this explicitly.
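
These eligibility rules can be audited up front rather than discovered in job logs. A hedged PowerCLI sketch (property names assume a recent PowerCLI release, where HardwareVersion returns strings like "vmx-13"):

PowerCLI: Flag VMs that will fall back to proprietary filtering
Get-VM | ForEach-Object {
    $disks = Get-HardDisk -VM $_
    [PSCustomObject]@{
        Name        = $_.Name
        HWVersion   = $_.HardwareVersion
        PhysicalRDM = [bool]($disks | Where-Object { $_.DiskType -eq 'RawPhysical' })
        Independent = [bool]($disks | Where-Object { $_.Persistence -like 'Independent*' })
    }
} | Where-Object {
    # Hardware version below 7, physical-mode RDM, or independent disks
    [int]($_.HWVersion -replace '\D', '') -lt 7 -or $_.PhysicalRDM -or $_.Independent
} | Format-Table -AutoSize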

The CTK File

When CBT is active on a VM you'll see CTK files in the VM's folder on the datastore alongside the VMDKs: vmname-ctk.vmdk for the base disk, and vmname-000001-ctk.vmdk for any snapshot disks. After a successful backup and full snapshot consolidation, the snapshot CTK files should disappear. If they accumulate, that's a sign snapshots aren't consolidating properly, which is a separate problem that will eventually cause CBT data to become inconsistent.
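
Besides looking at the datastore, you can confirm which disks are actually being tracked via the VM's advanced configuration. A quick PowerCLI sketch (the VM name is a placeholder):

PowerCLI: Verify per-disk CBT flags on a VM
$vm = Get-VM -Name "app01"
# Expect ctkEnabled = TRUE for the VM, plus one scsiX:Y.ctkEnabled = TRUE
# per tracked virtual disk
$vm.ExtensionData.Config.ExtraConfig |
    Where-Object { $_.Key -like '*ctkEnabled' } |
    Select-Object Key, Value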

When CBT Breaks on VMware

VMware CBT can become invalid or corrupted in several ways. The most common is a host crash or hard power loss while VMs are running: if the VMkernel loses track of changes while a VM is powered on, CBT data becomes unreliable. You'll see this in Veeam's job log as: "CBT data is invalid, failing over to legacy incremental backup." The job still completes, because Veeam read the whole disk on that run, but you should reset CBT properly so subsequent runs are efficient again.

Other causes include: interrupted backups where the snapshot wasn't properly removed, storage connectivity failures during a backup, VM migrations between hosts that left the CTK files in an inconsistent state, and historical VMware bugs in specific ESXi builds (particularly in vSphere 6.x) that caused CBT data corruption. Veeam's KB1113 documents the reset procedure and VMware's Broadcom KB 339974 covers the PowerCLI approach for bulk resets.

Resetting CBT on VMware: Two Methods

There's a cold method and a hot method. Cold requires powering off the VM. Hot uses snapshots to cycle CBT without powering off, but it does create a brief stun of the guest (usually under a second on healthy storage).

PowerCLI: Reset CBT on a single VM without powering off (hot method)
# Requires VMware PowerCLI installed and connected to vCenter
# Connect-VIServer -Server vcenter.domain.local -User username -Password password

param([Parameter(Mandatory)][string]$VMName)

$vm = Get-VM -Name $VMName

if ($null -eq $vm) {
    Write-Error "VM '$VMName' not found"
    exit 1
}

Write-Host "Resetting CBT on: $VMName"

# Step 1: Disable CBT
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.ChangeTrackingEnabled = $false
$vm.ExtensionData.ReconfigVM($spec)
Write-Host "  CBT disabled"

# Step 2: Create a snapshot to commit the change (brief guest stun)
$snap = New-Snapshot -VM $vm -Name "CBT-Reset-Snap" -Confirm:$false
Write-Host "  Reset snapshot created"

# Step 3: Remove the snapshot
Remove-Snapshot -Snapshot $snap -Confirm:$false
Write-Host "  Reset snapshot removed"

# Step 4: Re-enable CBT
$spec2 = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec2.ChangeTrackingEnabled = $true
$vm.ExtensionData.ReconfigVM($spec2)
Write-Host "  CBT re-enabled"

# Step 5: Create and remove a second snapshot to confirm CBT is active
$snap2 = New-Snapshot -VM $vm -Name "CBT-Verify-Snap" -Confirm:$false
Remove-Snapshot -Snapshot $snap2 -Confirm:$false
Write-Host "  CBT verified active"

Write-Host "CBT reset complete on $VMName. Next backup will read the full disk, then return to incremental."

PowerCLI: Reset CBT on all VMs in a cluster with CBT corruption warnings
# Run against VMs that have flagged CBT issues in Veeam job logs
# Adjust cluster name and vCenter connection as needed

param(
    [Parameter(Mandatory)][string]$ClusterName,
    [switch]$WhatIf
)

$vms = Get-Cluster -Name $ClusterName | Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" }
Write-Host "Found $($vms.Count) powered-on VMs in cluster: $ClusterName"

$resetCount = 0
$skipCount  = 0

foreach ($vm in $vms) {
    # Check if CBT is currently enabled on this VM
    $cbtEnabled = $vm.ExtensionData.Config.ChangeTrackingEnabled

    if (-not $cbtEnabled) {
        Write-Host "  SKIP: $($vm.Name) (CBT not enabled)"
        $skipCount++
        continue
    }

    # Check for existing snapshots - can't reset CBT if snapshots exist
    $snaps = Get-Snapshot -VM $vm -ErrorAction SilentlyContinue
    if ($snaps) {
        Write-Host "  SKIP: $($vm.Name) (has $($snaps.Count) existing snapshot(s) - consolidate first)"
        $skipCount++
        continue
    }

    if ($WhatIf) {
        Write-Host "  WHATIF: Would reset CBT on $($vm.Name)"
        continue
    }

    Write-Host "  Resetting CBT: $($vm.Name)"
    try {
        $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
        $spec.ChangeTrackingEnabled = $false
        $vm.ExtensionData.ReconfigVM($spec)

        $snap = New-Snapshot -VM $vm -Name "CBT-Reset-$(Get-Date -Format 'yyyyMMdd')" -Confirm:$false
        Remove-Snapshot -Snapshot $snap -Confirm:$false

        $spec2 = New-Object VMware.Vim.VirtualMachineConfigSpec
        $spec2.ChangeTrackingEnabled = $true
        $vm.ExtensionData.ReconfigVM($spec2)

        $snap2 = New-Snapshot -VM $vm -Name "CBT-Verify-$(Get-Date -Format 'yyyyMMdd')" -Confirm:$false
        Remove-Snapshot -Snapshot $snap2 -Confirm:$false

        $resetCount++
        Write-Host "    Done"
    } catch {
        Write-Host "    ERROR: $($_.Exception.Message)"
    }
}

Write-Host ""
Write-Host "Reset complete. Reset: $resetCount | Skipped: $skipCount"
Write-Host "The next backup run will read full disks on the reset VMs to re-establish CBT baselines."

After a CBT reset, the next backup run reads the entire VM disk to re-establish the tracking baseline. The backup file it creates is still incremental in size because Veeam compares against existing data, but the disk read time matches a full backup. Plan CBT resets for off-peak periods or maintenance windows, especially for large VMs or environments where storage I/O is constrained during backup windows.
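
If you'd rather absorb that full-disk read during a maintenance window than during the next scheduled run, the affected job can be started manually from the VBR PowerShell module (the job name is a placeholder):

VBR PowerShell: Start the affected job ahead of schedule
Connect-VBRServer -Server "vbr-server.domain.local"

# The run after a CBT reset reads full disks but still produces an
# incremental-sized backup file
$job = Get-VBRJob -Name "Prod-Cluster-Backup"
Start-VBRJob -Job $job -RunAsync

Disconnect-VBRServer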

3. Microsoft Hyper-V: Two CBT Implementations

Hyper-V is unusual in that Veeam uses two completely different CBT implementations depending on the host OS version. The behavior, the files involved, and the reset procedures are all different. Getting them confused leads to incorrect troubleshooting.

Hyper-V 2016 and Later: Resilient Change Tracking (RCT)

Microsoft introduced native changed block tracking in Hyper-V 2016, called Resilient Change Tracking (RCT). On supported hosts, Veeam uses RCT exclusively. It does not install a filter driver on these hosts. The requirements are straightforward but strict:

  • All hosts in the cluster must be Hyper-V 2016 or later. If even one node is still on 2012 R2, Veeam falls back to its proprietary driver for the entire cluster.
  • The cluster functional level must be upgraded to 2016.
  • VM configuration version must be 8.x or later.

The "Resilient" in RCT refers to its ability to survive host crashes. It maintains three separate bitmaps to guarantee no changes are lost even during abnormal shutdowns:

  • In-memory bitmap: The most granular and current representation of changed blocks. Lives only in host RAM. Lost if the host crashes.
  • RCT file (.rct): Less granular than the in-memory bitmap. Written in write-back mode. Used when the in-memory bitmap is unavailable, such as when a VM live migrates to a different host.
  • MRT file (.mrt): The Modified Region Table. The coarsest bitmap, written in write-through mode. Used when both the in-memory bitmap and RCT file are unavailable, typically after a power loss or host crash. It covers more blocks than the RCT file would for the same change set, but it guarantees nothing is missed.

After a normal backup cycle you'll see .vhdx, .rct, and .mrt files in the VM's storage location. The RCT and MRT files are per-disk, not per-VM. If a VM has three virtual disks you'll see three of each.
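
You can confirm which tracking files exist per disk directly on the host. A small sketch using the built-in Hyper-V module (the VM name is a placeholder; run it on the host itself):

PowerShell: List RCT and MRT files for a VM's disks
Get-VMHardDiskDrive -VMName "app01" | ForEach-Object {
    # RCT/MRT files append an extra extension to the disk's full name,
    # e.g. app01.vhdx.rct and app01.vhdx.mrt
    Get-Item -Path "$($_.Path).rct", "$($_.Path).mrt" -ErrorAction SilentlyContinue |
        Select-Object Name, Length, LastWriteTime
}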

When RCT data is corrupted, use the Reset-HvVmChangeTracking PowerShell cmdlet from within VBR. This clears and resets the change tracking data for a specific VM or a specific VHD/VHDX file. After the reset, the next backup job reads the full disk but produces a normal incremental. CBT returns to normal on the subsequent run.

VBR PowerShell: Reset Hyper-V RCT for a specific VM
Connect-VBRServer -Server "vbr-server.domain.local"

# Reset CBT for an entire VM (all disks)
$vm = Find-VBRHvEntity -Name "vm-name-here" -Server (Get-VBRServer -Name "hyperv-host.domain.local")
Reset-HvVmChangeTracking -VM $vm

Write-Host "RCT reset complete. Next backup will read full disk, then return to incremental."

Disconnect-VBRServer

VBR PowerShell: Reset Hyper-V RCT for a specific VHD file
Connect-VBRServer -Server "vbr-server.domain.local"

# Reset CBT for a single VHD/VHDX disk on a VM
$vm  = Find-VBRHvEntity -Name "vm-name-here" -Server (Get-VBRServer -Name "hyperv-host.domain.local")
$vhd = Get-VBRHvVMDisk -VM $vm | Where-Object { $_.Path -like "*disk-1*" }

Reset-HvVmChangeTracking -VHD $vhd

Write-Host "RCT reset for specific VHD. Next backup will do full read of this disk only."

Disconnect-VBRServer

Hyper-V 2012 R2 and Earlier: Veeam's Proprietary CBT Driver

On Hyper-V 2012 R2 and earlier, Microsoft had no native CBT. Veeam shipped its own solution: a file system filter driver called the Veeam CBT driver. This driver is installed automatically on every Hyper-V host when the host is added to Veeam and a backup job addresses a VM on it. It runs as a Windows service and intercepts I/O to track changed blocks in the virtual disk files.

The driver stores its tracking data in CTP files in C:\ProgramData\Veeam\CtpStore\ on each host. There's a subfolder per VM, and each VHD/VHDX file has a corresponding CTP file. These files live on the Hyper-V host, not alongside the VM disks in the cluster storage.

The important limitation: since Veeam's CBT driver is not compatible with third party SMB implementations, it doesn't work correctly on some hyperconverged infrastructure setups where storage is served via custom SMB implementations. On those environments, upgrading to Hyper-V 2016 and using RCT is the right path.

To reset Veeam's proprietary CBT on Hyper-V 2012 R2, use the same Reset-HvVmChangeTracking cmdlet. It handles both the CTP driver reset and the RCT reset depending on which mechanism is in use on the target host.


4. Nutanix AHV CBT

AHV CBT works fundamentally differently from VMware and Hyper-V. It doesn't use a tracking file or a filter driver. Instead, it relies entirely on snapshot comparison through the Nutanix REST API. There's no CBT file to inspect, no driver to reset.

How AHV CBT Works

During the first full backup, Veeam creates a native Nutanix AHV snapshot of the VM and uses Nutanix REST API calls to read the snapshot content and identify unallocated blocks, which are skipped. This makes the first backup faster than a raw full disk read would be.

For subsequent incremental backups, Veeam creates a new snapshot, then uses the Nutanix REST API to compare the new snapshot against the snapshot retained from the previous backup session. The API returns the list of blocks that differ between the two snapshots. Veeam reads only those blocks. The previous snapshot is then deleted and the new one is kept for the next run.

This is why after each AHV backup job you'll see one snapshot per VM left on the Nutanix cluster. The name includes third_party_backup_snapshot followed by a UUID. That's deliberate. Veeam is keeping it as the reference point for the next incremental comparison. Don't delete it manually. If it gets deleted, the next backup can't compare snapshots and will fall back to a full read.

AHV CBT Limitations

Veeam doesn't use CBT for AHV backup jobs that include a protection domain with consistency groups containing two or more entities. In those cases Veeam reads the full disk content and compares it against what's already in the repository. Incremental backups in that mode take progressively longer as the chain grows. If you're seeing AHV incrementals grow unexpectedly large or slow, check whether your job includes a multi-entity consistency group.

Nutanix AOS version compatibility also matters. As noted in the Veeam community forums, Nutanix changed an API that Veeam relies on in AOS 6.8, which broke CBT for that release. Veeam's support team confirmed the issue. If you're on an AOS version that Veeam hasn't formally validated against, check the interoperability matrix before assuming CBT should work.


5. Cross Hypervisor Comparison

Implementation
  • VMware vSphere CBT: VMkernel native, part of the VADP framework
  • Hyper-V RCT (2016+): Microsoft native, built into Hyper-V 2016 and later
  • Hyper-V Veeam driver (2012 R2): Veeam proprietary filter driver
  • Nutanix AHV: Nutanix REST API snapshot comparison

Tracking files
  • VMware vSphere CBT: CTK files alongside the VMDKs on the datastore
  • Hyper-V RCT (2016+): RCT and MRT files alongside the VHDXs
  • Hyper-V Veeam driver (2012 R2): CTP files in C:\ProgramData\Veeam\CtpStore\ on the host
  • Nutanix AHV: no tracking files; one snapshot retained per VM on the cluster

Crash resilience
  • VMware vSphere CBT: reset required after a host crash; CTK data may be lost
  • Hyper-V RCT (2016+): three bitmap layers (memory, RCT, MRT); survives crashes via the MRT
  • Hyper-V Veeam driver (2012 R2): CTP files can become stale after a host restart
  • Nutanix AHV: if the retained snapshot is intact, CBT survives host failure

VM migration behavior
  • VMware vSphere CBT: CTK files travel with the VMDK on the datastore; CBT survives vMotion
  • Hyper-V RCT (2016+): the RCT file handles migrations; the in-memory bitmap is rebuilt on the new host
  • Hyper-V Veeam driver (2012 R2): CTP files stay on the old host; the new host rebuilds from scratch and the next run is a full read
  • Nutanix AHV: the snapshot stays on the cluster; CBT survives VM migrations

Reset mechanism
  • VMware vSphere CBT: PowerCLI (disable, snapshot cycle, re-enable)
  • Hyper-V RCT (2016+): Reset-HvVmChangeTracking cmdlet
  • Hyper-V Veeam driver (2012 R2): Reset-HvVmChangeTracking cmdlet
  • Nutanix AHV: no explicit reset; delete the retained snapshot to force a full re-read

Fallback when CBT unavailable
  • VMware vSphere CBT: Veeam proprietary checksum filtering
  • Hyper-V RCT (2016+): full disk read and comparison
  • Hyper-V Veeam driver (2012 R2): full disk read and comparison
  • Nutanix AHV: full disk read and repository comparison

6. Performance Impact and Sizing Considerations

CBT itself has a small but measurable overhead on VMs. The VMkernel (VMware) or Hyper-V kernel is doing additional work to track every write. In practice this is rarely observable on modern hardware, but it's worth knowing for specific workloads.

  • For VMware, the CTK file is pre-allocated at a fixed size relative to the disk. There's no growing file causing fragmentation. The overhead is in the VMkernel I/O path recording the changed block offsets.
  • For Hyper-V RCT, the write-through nature of the MRT file means every write to the virtual disk also writes to the MRT. This is the expected cost for the crash resilience guarantee. On high I/O VMs (large SQL servers, high-throughput databases) this can become visible as slight write latency increases.
  • For AHV, the overhead is in the snapshot comparison API calls Veeam makes between job runs. The snapshots themselves consume Nutanix cluster storage while they're retained.

The sizing implication is straightforward: the size of your CTK or RCT/MRT files scales with virtual disk size, and those files live on your production datastore. For environments with hundreds of large VMs, factor the tracking file overhead into your datastore capacity planning. At 0.5 MB per 10 GB, a 1 TB virtual disk carries a CTK file of roughly 50 MB. Not significant on its own, but across 200 VMs with multi-disk configurations it adds up.
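
A rough capacity estimate can be pulled with one PowerCLI pipeline, using that same 0.5 MB per 10 GB rule of thumb (expressed below as 0.05 MB per GB; assumes a connected session):

PowerCLI: Estimate total CTK overhead across the inventory
$totalGB = (Get-VM | Get-HardDisk | Measure-Object -Property CapacityGB -Sum).Sum
$ctkMB   = $totalGB * 0.05   # 0.5 MB per 10 GB of provisioned disk
Write-Host ("{0:N0} GB of provisioned disk -> roughly {1:N0} MB of CTK files" -f $totalGB, $ctkMB)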

If you're seeing unexpectedly slow incremental backups on a subset of VMs, check the Veeam job log for "CBT data is invalid" or "failing over to legacy incremental backup." That message means CBT broke on those VMs and Veeam is reading full disks. It's the single most common cause of incremental backups that are slower than expected and it's almost always fixable with a CBT reset.

7. Monitoring CBT Health Across Your Environment

Waiting for a slow backup to tell you CBT is broken isn't a great monitoring strategy. A better approach is proactively querying the last job sessions for CBT fallback warnings and addressing them before they affect your backup windows.

PowerShell: Find all VMs with CBT fallback warnings in recent job sessions
Connect-VBRServer -Server "vbr-server.domain.local"

$cutoff  = (Get-Date).AddHours(-24)
$sessions = Get-VBRBackupSession | Where-Object {
    $_.EndTime -gt $cutoff -and $_.Result -ne "None"
}

$cbtIssues = @()

foreach ($session in $sessions) {
    $taskSessions = Get-VBRTaskSession -Session $session
    foreach ($task in $taskSessions) {
        # $task.Info is a status object, not the message text, so read the
        # per-VM log records that carry the messages shown in the session UI
        $records = $task.Logger.GetLog().UpdatedRecords
        $hit = $records | Where-Object {
            $_.Title -like "*CBT data is invalid*" -or
            $_.Title -like "*failing over to legacy*"
        }
        if ($hit) {
            $cbtIssues += [PSCustomObject]@{
                VMName  = $task.Name
                JobName = $session.JobName
                EndTime = $session.EndTime
                Message = "CBT invalid - legacy incremental used"
            }
        }
    }
}

if ($cbtIssues.Count -eq 0) {
    Write-Host "No CBT issues detected in the last 24 hours."
} else {
    Write-Host "VMs with CBT issues in the last 24 hours: $($cbtIssues.Count)"
    $cbtIssues | Format-Table -AutoSize
    $cbtIssues | Export-Csv "C:\Reports\CBT-Issues-$(Get-Date -Format 'yyyyMMdd').csv" -NoTypeInformation  # C:\Reports must already exist
}

Disconnect-VBRServer

Run this script as a scheduled task nightly. If the output CSV has entries, investigate and reset CBT on affected VMs before the next backup window. Catching CBT corruption the same night it happens means one slow backup run instead of weeks of degraded incremental performance while the issue goes unnoticed.
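
One way to schedule it, sketched with the built-in ScheduledTasks module on the VBR server (the script path and run time are placeholders):

PowerShell: Register the CBT health check as a nightly task
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Check-CbtHealth.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 6am
Register-ScheduledTask -TaskName "Veeam CBT Health Check" `
    -Action $action -Trigger $trigger -User "SYSTEM" -RunLevel Highest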


Key Takeaways

  • CBT lets Veeam ask the hypervisor "what changed?" instead of reading the entire disk. Without it, incremental backups still work but require full disk reads. The backup file stays incremental in size. Only the read time is affected.
  • VMware CBT stores change data in CTK files alongside the VMDKs on the datastore. CTK files are approximately 0.5 MB per 10 GB of disk. They don't grow unless the disk is expanded.
  • Hyper-V 2016 and later use Microsoft's native RCT with three redundant bitmaps: in-memory, RCT file, and MRT file. The MRT is write-through, which adds minor I/O overhead but guarantees no changes are missed after a crash. All cluster nodes must be on Hyper-V 2016 or later for RCT to activate; one older node and the whole cluster falls back.
  • Hyper-V 2012 R2 and earlier use Veeam's proprietary filter driver and CTP files stored on the host in C:\ProgramData\Veeam\CtpStore\. These don't survive VM live migration to a different host cleanly.
  • AHV CBT uses Nutanix REST API snapshot comparison. No tracking file. One retained snapshot per VM after each backup. That snapshot is the reference for the next incremental. Don't delete it manually.
  • The VMware CBT reset procedure is: disable CBT, create and remove a snapshot to commit the change, re-enable CBT, create and remove another snapshot to confirm. The VM doesn't need to power off, but expect a brief stun during snapshot operations.
  • Reset-HvVmChangeTracking is the VBR PowerShell cmdlet for both Hyper-V CBT mechanisms. It handles both RCT and the proprietary driver based on what the target host is running.
  • "CBT data is invalid, failing over to legacy incremental backup" in a job log means CBT broke on that VM. One slow backup is expected. Reset CBT and it returns to normal on the next run. If you're not monitoring for this warning, you won't know it's happening until your backup windows start growing.
