Running Veeam VBR and Rubrik in the Same Environment: What Nobody Tells You

Veeam VBR · Rubrik · Coexistence · Architecture
📅 March 2026  ·  ⏱ ~12 min read  ·  By Eric Black

Why This Scenario Exists

Nobody designs a VMware environment with both Veeam VBR and Rubrik protecting the same VMs on purpose. It happens for real-world reasons: a merger or acquisition brings two different backup stacks under one roof. A migration from one platform to the other takes longer than planned, leaving both running in parallel for months. A business unit has a standalone Rubrik deployment while corporate IT runs Veeam. Or a proof-of-concept Rubrik deployment never got fully decommissioned after the organization committed to Veeam.

Whatever the reason, if you're in this situation you need to understand exactly what happens at the technical layer when two enterprise backup platforms are simultaneously protecting the same VMware virtual machines. The short answer is that it can work, but the collision points are real, they can be subtle, and the failure modes are the kind that produce multi-hour backup jobs and support cases with no clear root cause unless you know where to look.

How Both Products Take Backups: The Shared Mechanics

Both Veeam VBR and Rubrik rely on the same VMware APIs to protect vSphere VMs. That's not a coincidence; it's the design of the vSphere data protection ecosystem. Both products:

  • Call the vSphere API to create a VM snapshot, which freezes a point-in-time view of the VM's disks
  • Use Changed Block Tracking (CBT) to identify which disk blocks changed since the last backup, enabling efficient incremental backups
  • Read the snapshot data (either via NBD network transport or SAN direct-access transport) to copy it to their respective repositories
  • Call the vSphere API again to delete the snapshot once the backup copy is complete
  • For application-consistent backups on Windows VMs, coordinate with VSS (Volume Shadow Copy Service) inside the guest

When both products are protecting the same VMs, they are competing for access to the same underlying mechanisms. VMware's APIs don't have a coordination layer that prevents two backup products from using CBT, taking snapshots, or calling VSS on the same VM at the same time. That coordination responsibility falls entirely on you through scheduling.
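A toy model makes the shared mechanics concrete. Nothing below is a real vSphere SDK call; the class and method names are invented stand-ins that mirror the snapshot-then-CBT-query flow both products follow:

```python
from dataclasses import dataclass, field

@dataclass
class ToyVm:
    blocks: list                                      # current disk contents
    change_log: dict = field(default_factory=dict)    # change_id -> dirty block indexes
    next_id: int = 0

    def write(self, index, value):
        # every write after a checkpoint is recorded against that checkpoint
        self.blocks[index] = value
        for dirty in self.change_log.values():
            dirty.add(index)

    def create_snapshot(self):
        # a snapshot freezes the disk and yields a new CBT change id
        snap_id = self.next_id
        self.next_id += 1
        self.change_log[snap_id] = set()
        return snap_id, list(self.blocks)

    def query_changed_areas(self, since_id):
        # CBT query: which blocks changed since the given checkpoint?
        return sorted(self.change_log[since_id])

def incremental_backup(vm, last_change_id):
    snap_id, frozen = vm.create_snapshot()
    changed = vm.query_changed_areas(last_change_id)  # read only dirty blocks
    data = {i: frozen[i] for i in changed}
    return data, snap_id                              # new checkpoint for next run

vm = ToyVm(blocks=["a", "b", "c", "d"])
base_id, _ = vm.create_snapshot()      # initial full establishes the checkpoint
vm.write(2, "C")
data, new_id = incremental_backup(vm, base_id)
print(data)                            # only block 2 is read
```

The key detail the model captures: each product carries its own `last_change_id`, but the checkpoint state lives on the VM, where a second product can disturb it.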

CBT Conflicts: The Core Problem

Changed Block Tracking is a VMware VMDK-level feature that records which regions of a virtual disk have been modified since a designated checkpoint (the vSphere API reports changed areas as byte offsets and lengths; the internal tracking granularity varies with disk size). Each backup product queries CBT to determine which blocks it needs to read for an incremental backup session, then updates the CBT checkpoint after it finishes reading. This is how both Veeam and Rubrik achieve incremental-forever efficiency instead of rereading entire virtual disks on every backup run.

The problem is that CBT was designed to work reliably with one backup product querying it at a time. It does technically support multiple readers, but the interactions are not clean when jobs overlap. The most damaging scenario: a Rubrik job runs and completes inside the window of a Veeam incremental job. When this happens, Veeam's CBT checkpoint gets reset or corrupted from Veeam's perspective. The next time Veeam reads CBT, it either sees a full backup's worth of changed blocks instead of an incremental, or it reads incorrect change data entirely. The result is an incremental backup job that reads nearly the entire virtual disk, taking as long as an active full backup and transferring far more data than expected.

🚫 CBT Resets Are Silent and Hard to Attribute
When a CBT reset occurs because a competing backup product ran during a Veeam job window, Veeam will log that a CBT reset was detected and that it is falling back to reading more data, but it cannot identify what caused the reset. The log says CBT was reset before the job started, but provides no attribution. You will see a backup job that normally completes in 15 minutes take 12 hours, with no error, no failure, just a very slow read phase. This is the classic symptom of CBT interference from a concurrent backup product.

The reverse is equally true. A Veeam job running during a Rubrik window causes the same problem for Rubrik's incremental chain. Neither product is aware of the other, and neither product has any mechanism to defer its CBT operations when it detects another reader is active.

CBT conflicts do not always manifest as obvious failures. Incremental jobs that take longer than normal, jobs that suddenly transfer dramatically more data than the previous run without any corresponding change in workload, or VMs that frequently fall back to active full backup when no active full is scheduled -- all of these patterns should prompt investigation of whether concurrent backup jobs are interfering with CBT.

VMware Snapshot Collisions

Both Veeam and Rubrik create VM snapshots as part of their backup workflow. VMware allows multiple snapshots to coexist on a VM in a chain, but backup product snapshots are not designed to coexist. Each backup product assumes it is the only entity creating and managing snapshots on a VM during its backup window.

When two backup jobs run concurrently on the same VM, several bad outcomes are possible:

Snapshot chain corruption: If one product creates a snapshot and a second product creates another before the first is deleted, the second snapshot stacks on top of the first in the delta chain. When the first product then attempts to delete its snapshot, the operation may fail or produce unexpected results because a newer snapshot is sitting on top of it. VMware's snapshot delete logic requires consolidating changes in order, and out-of-order deletions can fail silently or corrupt the chain.

Extended snapshot lifetime: In normal operation, backup snapshots are deleted within seconds to minutes of the backup read completing. When concurrent jobs are running, snapshot cleanup can be delayed by contention. A snapshot that stays open for hours instead of minutes consumes growing amounts of datastore space as the snapshot delta file accumulates all writes made to the VM during that period. On busy VMs, this can grow rapidly and consume significant storage.
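The growth is simple arithmetic: every guest write lands in the snapshot delta file for as long as the snapshot stays open. A back-of-envelope sketch (the 5 MB/s write rate is an assumed example, and real workloads are bursty):

```python
# Approximate snapshot delta growth while a backup snapshot stays open.
def delta_growth_gb(write_rate_mb_s: float, open_hours: float) -> float:
    """Delta file size in GB after open_hours at a steady guest write rate."""
    return write_rate_mb_s * 3600 * open_hours / 1024

# A VM writing a modest 5 MB/s with a snapshot left open for 6 hours:
print(round(delta_growth_gb(5, 6), 1))   # 105.5 GB of delta on the datastore
```

A snapshot that should have lived for minutes instead consumes over 100 GB, which is how contention turns into datastore-full alerts.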

Orphaned snapshots: If a job fails while a snapshot is open, the cleanup process may not complete. This is true for any single backup product, but the risk is elevated in coexistence environments because job failures from contention are more common. Orphaned snapshots from either product left on a VM over time degrade performance and eventually cause backup failures as the snapshot chain grows too complex for efficient operation.

⚠️ Check Snapshot State Regularly in Coexistence Environments
In any environment running two backup products against the same VMs, schedule regular snapshot audits in vCenter to catch orphaned or stale snapshots before they grow. The vSphere client's snapshot manager for individual VMs, or a PowerCLI script scanning all VMs for snapshots older than a defined threshold, is a practical way to catch these. A VM with a backup snapshot more than 4 hours old is a red flag regardless of which product created it.
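The audit logic itself is trivial once you have the snapshot inventory; the sketch below runs it over inline data (in practice the list would come from vCenter via PowerCLI's Get-Snapshot or pyVmomi). "VEEAM BACKUP TEMPORARY SNAPSHOT" is the name Veeam typically uses for its working snapshot; the Rubrik name and all dates here are invented for illustration:

```python
from datetime import datetime, timedelta

def stale_snapshots(snapshots, now, max_age=timedelta(hours=4)):
    """Return (vm_name, snap_name, age) for snapshots older than max_age."""
    return [(vm, name, now - created)
            for vm, name, created in snapshots
            if now - created > max_age]

now = datetime(2026, 3, 10, 8, 0)
inventory = [
    ("sql01", "VEEAM BACKUP TEMPORARY SNAPSHOT", datetime(2026, 3, 10, 1, 0)),  # 7h old
    ("app02", "rubrik_backup", datetime(2026, 3, 10, 7, 30)),                   # 30m old
]
for vm, name, age in stale_snapshots(inventory, now):
    print(f"{vm}: '{name}' open for {age}")   # flags sql01 only
```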

VSS Contention on Windows VMs

For Windows VMs running VSS-aware applications, both Veeam and Rubrik trigger VSS inside the guest to achieve application-consistent backup. VSS is a Windows framework that coordinates between backup requestors (the backup software) and VSS writers (the applications, such as SQL Server or Exchange) to quiesce in-flight transactions and flush data to disk before the backup snapshot is taken.

VSS does not support concurrent requestors on the same machine. Only one VSS backup session can be active at a time on a given Windows system. If Veeam and Rubrik both attempt to initiate a VSS session on the same Windows VM simultaneously, one of them will succeed and the other will receive a VSS error and fail the backup of that VM. Depending on retry logic and timing, this can cascade: the failed backup retries, the retry overlaps with yet another job on a different VM, and job completion times drift further and further outside the intended backup window.
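The one-requestor-at-a-time rule behaves like a non-blocking lock: the second requestor does not queue, it errors out. This toy model (a plain Python lock standing in for the guest's VSS session; the real failure surfaces as a VSS error in the losing product's job log) captures the shape of the failure:

```python
import threading

vss_session = threading.Lock()   # stands in for the single VSS backup session

def start_vss_backup(product: str) -> str:
    # A real requestor would receive a VSS error; here it is a failed lock grab.
    if vss_session.acquire(blocking=False):
        return f"{product}: VSS session started"
    return f"{product}: VSS busy, backup of this VM fails"

print(start_vss_backup("Veeam"))    # first requestor wins
print(start_vss_backup("Rubrik"))   # concurrent requestor is rejected
vss_session.release()
```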

VSS failure symptoms in coexistence environments include guest processing errors logged in the backup product that initiated second, VMs where application-consistent backups intermittently succeed and intermittently fall back to crash-consistent, and VSS writer timeout events visible in the Windows Application event log on the VM itself.

Storage Array Snapshot Workflows

Both Veeam and Rubrik support backup from storage snapshots (BfSS) on compatible arrays including Pure Storage FlashArray, NetApp, and others. When either product is configured to use array-level snapshots as part of its backup workflow, the interaction between the two products at the storage layer adds another potential conflict point.

In a BfSS workflow, the backup product coordinates with vSphere to create a brief VM snapshot, then triggers the storage array to create an array-level snapshot of the underlying LUN, then immediately deletes the VMware snapshot. The array snapshot becomes the source for the actual backup data copy. This allows the VMware snapshot to exist for only a few seconds rather than for the duration of the data copy.

When both Veeam and Rubrik are configured for BfSS against the same VMs on the same storage array, both are independently managing storage array snapshot lifecycles for the same underlying LUNs. Neither product is aware of the other's array snapshots. In failure scenarios, particularly when a job fails to clean up after itself, orphaned array snapshots and orphaned LUN presentations can accumulate. These are harder to identify and clean up than VMware-level snapshots because they require looking at both the backup product's view and the storage array's view to understand which snapshots are legitimate and which are orphaned.
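Once you have both views, the reconciliation itself is a set difference: anything on the array that neither product claims is an orphan candidate. All snapshot IDs and naming prefixes below are invented examples; real arrays and products use their own naming schemes:

```python
# Array's view of snapshots on the LUN, versus what each product says it owns.
array_snapshots = {"veeam-lun7-0412", "rubrik-lun7-0301", "veeam-lun7-0228"}
veeam_known = {"veeam-lun7-0412"}       # from Veeam's own job/session records
rubrik_known = {"rubrik-lun7-0301"}     # from Rubrik's SLA/job records

orphans = array_snapshots - veeam_known - rubrik_known
print(sorted(orphans))   # ['veeam-lun7-0228'] -- left behind by a failed job
```

The hard part is not the set math but collecting trustworthy "known" lists from each product, which is exactly why these orphans linger.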

The Only Real Mitigation: Scheduling Discipline

There is no technical integration between Veeam VBR and Rubrik that prevents conflicts. Neither product has an API for advertising its active backup windows to the other. The only mitigation that reliably prevents CBT conflicts, snapshot collisions, and VSS contention is ensuring that their backup schedules for shared VMs never overlap.

What this requires in practice:

Separate backup windows with buffer time. If Veeam runs from 10 PM to 2 AM, Rubrik should not start until 3 AM or later. The buffer needs to account for job overruns. A job that normally finishes in two hours may take four hours when the source data volume spikes. If the Rubrik window starts the moment Veeam's nominal completion time is reached, a Veeam overrun puts both products in conflict.

No shared VMs in both products' daily schedules. If both products must run against shared VMs, the safest configuration is to assign each product its own days: Veeam runs Monday/Wednesday/Friday, Rubrik runs Tuesday/Thursday, for example. This eliminates the risk of same-day overlap entirely for routine incrementals.

Reconcile retry windows. Both products have configurable retry logic for failed backup tasks. A Veeam job that fails and retries at 3 AM needs to be accounted for when Rubrik's schedule is set. Retry windows are frequently overlooked in coexistence scheduling and are a common source of unexpected conflicts after weeks of clean operation.

Synthetic full and active full schedules. Both products periodically run full backup operations that take significantly longer than incrementals. These must be explicitly scheduled to not overlap between products. A Rubrik full running on the same night as a Veeam synthetic full on the same VM pool almost guarantees conflicts.
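The scheduling rules above can be sanity-checked mechanically. This sketch treats windows as hours on a 24-hour clock, pads them by a buffer for overruns and retries, and unrolls windows that cross midnight onto a longer timeline so the wrap case is handled:

```python
def unroll(start, end):
    # Represent a window crossing midnight (e.g. 22-2) as (22, 26).
    return (start, end if end > start else end + 24)

def windows_conflict(win_a, win_b, buffer_hours=1.0):
    """True if the two windows, padded by buffer_hours, can overlap."""
    a0, a1 = unroll(*win_a)
    b0, b1 = unroll(*win_b)
    # check the shifted copies too, so wrap-around cases are caught
    for shift in (0, 24, -24):
        if a0 < b1 + buffer_hours + shift and b0 + shift < a1 + buffer_hours:
            return True
    return False

veeam = (22, 2)    # 10 PM - 2 AM
rubrik = (3, 6)    # 3 AM - 6 AM
print(windows_conflict(veeam, rubrik))                  # False: 1h buffer holds
print(windows_conflict(veeam, rubrik, buffer_hours=2))  # True: a 2h overrun collides
```

With a two-hour overrun allowance, Veeam can still be running at 4 AM while Rubrik starts at 3 AM, which is exactly the buffer failure the text describes.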

💡 Use vCenter Alarms to Detect Concurrent Snapshot Activity
Configure a vCenter alarm to trigger when a VM has more than one snapshot present simultaneously. In a correctly scheduled dual-product environment, no VM should ever have two backup snapshots at the same time. An alarm firing on this condition is a reliable early indicator that a scheduling conflict has occurred, before it degrades into a support case.

Supportability Reality

When things go wrong in a coexistence environment, both Veeam and Rubrik support teams will identify the presence of a competing backup product and flag it as a potential cause. Neither product is officially supported in configurations where both are protecting the same VMs simultaneously. This is not a licensing restriction; it is a practical acknowledgment that the conflict scenarios described above are real, reproducible, and outside what either vendor's support team can reliably isolate and resolve.

What that means operationally: if you open a Veeam support case and the environment has Rubrik also protecting the same VMs, the first troubleshooting step will be to determine whether Rubrik activity is contributing to the symptom. If it is, Veeam support cannot fix the Rubrik side, and Rubrik support cannot fix the Veeam side. You are the integration layer. Resolution requires you to adjust scheduling and confirm the symptom disappears with non-overlapping windows before either vendor will dig deeper.

This is not theoretical obstruction from support teams. The symptoms are genuinely ambiguous. CBT resets, VSS errors, and snapshot failures all have multiple potential causes unrelated to backup product coexistence. Support needs to rule out the interaction before investigating product-specific bugs.

How to Design for Coexistence If You Must

If coexistence is unavoidable during a migration or transition period, the practical design principles are:

Segment VMs by product. The cleanest approach is to decide which product owns which VMs and ensure each VM is protected by only one product at a time. Veeam owns VMs A-M, Rubrik owns VMs N-Z, with a defined cutover date when ownership transfers. VMs in transit from one product to the other go through a brief period of double protection, managed with explicit scheduling controls, and that period is time-bounded.

If overlap is required, use different backup frequencies. Veeam runs daily incrementals, Rubrik runs a weekly full on a dedicated day with no Veeam activity scheduled. The overlap is minimal and the weekly Rubrik run is accounted for explicitly in the Veeam schedule.

Monitor CBT behavior actively. In any dual-product environment, set up alerting on backup duration and data transferred for incremental jobs. An incremental that suddenly reads 10x its normal data volume is a signal of CBT interference. Catching this early prevents it from becoming a recurring unknown source of backup window overruns.
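The alerting heuristic can be as simple as comparing the latest incremental's transfer volume against a recent baseline. The 10x factor and the history length below are illustrative choices, not vendor recommendations:

```python
from statistics import median

def cbt_anomaly(history_gb, latest_gb, factor=10.0):
    """True if the latest incremental moved > factor x the median of recent runs."""
    baseline = median(history_gb)
    return latest_gb > factor * baseline

recent = [4.2, 3.8, 5.1, 4.0, 4.6]     # normal nightly incrementals, in GB
print(cbt_anomaly(recent, 4.9))        # False: within normal range
print(cbt_anomaly(recent, 47.0))       # True: likely a CBT reset, full-size read
```

Using the median rather than the mean keeps a single earlier anomaly from inflating the baseline and masking the next one.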

Document everything explicitly. Which product owns which VMs, what the backup windows are for each product, what the retry windows are, and when synthetic or active fulls are scheduled. In shared environments without explicit documentation, the first person who edits a schedule without knowing the other product's constraints will create conflicts that take days to diagnose.

Planning Your Exit

The coexistence period should have a defined end date. Running two enterprise backup platforms against the same VMs indefinitely is not a sustainable operational model. Licensing, support, complexity, and the ongoing risk of conflicts all increase over time.

The practical exit path: identify which product is the strategic standard for your environment. If Veeam is the long-term platform, Rubrik protection of shared VMs should stop as soon as Veeam has established a clean backup chain for each VM. You don't need Rubrik's backup history to remain accessible forever once Veeam has adequate retention of its own. Set a clear decommission date for Rubrik on shared VMs, stop new Rubrik backup jobs for those VMs, confirm Veeam has the restore points you need, and remove the VMs from Rubrik's protection scope.

If Rubrik is the target platform and you are migrating away from Veeam, the same logic applies in reverse. Rubrik needs to establish its backup chain for each VM before Veeam can be removed from that VM's protection scope. Run both for the minimum time required to build adequate retention in the destination product, then cut over cleanly.

Key Takeaways

  • Both Veeam and Rubrik use the same vSphere APIs (CBT, snapshots, VSS) and have no native coordination with each other
  • CBT resets from concurrent jobs cause slow incremental backups with no obvious error -- the symptom is anomalous job duration and data transfer volume
  • VMware snapshot chains can be corrupted or left orphaned when two products manage snapshots on the same VM concurrently
  • VSS does not support concurrent backup requestors -- one product will fail on any Windows VM where both attempt guest processing simultaneously
  • Storage array BfSS workflows add a third conflict layer at the array snapshot and LUN presentation level
  • The only reliable mitigation is scheduling: non-overlapping windows with buffer for overruns and retries
  • Neither Veeam nor Rubrik supports dual-protection of the same VMs as an official configuration -- support cases become complicated
  • Coexistence should be time-bounded with a defined exit date and a clear designation of which product owns which VMs
