Veeam Backup Job Scheduling Strategy -- Synthetic Full, Active Full, and Backup Windows

Everyone starts with "run at midnight." It works fine for ten VMs. It starts causing problems at fifty. By a hundred it is a constant source of noise, missed windows, and storage surprises that are hard to diagnose because the real cause is just contention. Veeam's scheduling is flexible enough to handle real production environments well, but only if you understand what the options actually do and how they interact. This article covers the full picture: forever incremental vs. periodic fulls, how synthetic full actually works under the hood, active full and when it earns its cost, backup window configuration, and how to spread jobs across a large environment without creating a daily backup storm.

The Default Chain: Forever Forward Incremental

When you create a new backup job without configuring any periodic full backups, you get forever forward incremental. The first run creates a full backup (a VBK file). Every subsequent run creates a small incremental file (a VIB file) capturing only the blocks that changed since the previous run. One full, followed by an indefinitely growing sequence of incrementals.

This is storage-efficient and puts minimal load on production infrastructure after the first run. Each incremental reads only changed blocks, processes them quickly, and writes a small file to the repository. For environments where storage is the primary constraint and daily change rates are low, this is a perfectly reasonable way to run.

The tradeoff is restore time. To restore a VM to the latest restore point, Veeam has to apply every incremental in the chain on top of the original full. A chain that is 30 incrementals deep takes longer to restore from than a chain that is 6 deep. On modern repository hardware this difference is usually small, but it is real and it grows the longer you run without a full backup to reset the chain.
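The chain dependency can be sketched with a toy block-map model (the names and structures here are illustrative, not Veeam internals):

```python
def restore_latest(full: dict, incrementals: list) -> dict:
    """Rebuild the latest restore point: start from the full backup's
    block map and overlay each incremental's changed blocks in order.
    Every VIB in the chain must be read, so restore cost grows with depth."""
    state = dict(full)
    for vib in incrementals:
        state.update(vib)
    return state

vbk = {0: "A0", 1: "B0", 2: "C0"}              # initial full: block id -> data
chain = [{1: "B1"}, {2: "C1"}, {1: "B2"}]      # three nightly incrementals

print(restore_latest(vbk, chain))              # {0: 'A0', 1: 'B2', 2: 'C1'}
```

A 30-deep chain means 30 of those overlay passes; a weekly full caps the depth at 6.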


Synthetic Full Backup

A synthetic full backup creates a new VBK file without reading anything from production storage. Here is exactly what happens during a synthetic full session:

First, Veeam runs a normal incremental backup and adds it to the existing chain. This is the only step that touches production storage. Then, Veeam Data Mover on the repository consolidates the existing full backup plus all the incrementals in the chain, including the one just created, into a brand new full backup file. Once the synthesis completes, the incremental created at the start of the session is deleted because its data is now part of the new full. The old full backup file stays on disk until the retention policy cleans it up.

The result is a fresh chain reset that happened almost entirely on the repository server. Production storage saw only a normal incremental read.
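The same toy block-map model (again illustrative, not Veeam's actual file format) makes the three-step session easy to follow:

```python
def synthetic_full_session(old_full: dict, chain: list, session_inc: dict) -> dict:
    """Model of a synthetic full session. The session incremental is the
    only production read; the synthesis itself is repository-side work."""
    # Step 1: the session incremental joins the chain (production read).
    working_chain = chain + [session_inc]
    # Step 2: repository-side synthesis merges the full plus the chain.
    new_full = dict(old_full)
    for vib in working_chain:
        new_full.update(vib)
    # Step 3: the session incremental is deleted because its data is now
    # in the new full; the old full stays until retention removes it.
    return new_full

old = {0: "A0", 1: "B0"}
print(synthetic_full_session(old, [{1: "B1"}], {0: "A1"}))  # {0: 'A1', 1: 'B1'}
```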

Synthetic full runs even on days the regular job is not scheduled

If you schedule the parent backup job to run Sunday through Friday at midnight, and you configure synthetic full to run on Saturday, Veeam will still trigger a synthetic full session on Saturday at midnight even though no regular backup is scheduled that day. The synthetic full session always runs at the same time as the parent job. If a regular backup session and a synthetic full are both scheduled on the same day, only the synthetic full is produced. The incremental that would have been created by the regular session is skipped. If you need an additional incremental that day for some reason, run the job manually and Veeam will create a normal incremental.

Synthetic full is not supported on object storage repositories

If your backup job targets an object storage repository directly (S3, Azure Blob, or similar), synthetic full is not available. Object storage does not support the random read and write access pattern the synthesis process requires. For jobs writing directly to object storage, use active full backups on a schedule instead. For jobs writing to a local or network repository that offloads to object storage via Scale-Out Backup Repository, synthetic full works fine on the performance tier.

When to use synthetic full

Synthetic full is the right choice when you want to periodically reset the backup chain without hitting production storage or consuming WAN bandwidth on a full read. Large VMs, constrained backup windows, or any environment where you cannot afford the I/O cost of a weekly active full are all good candidates. The repository needs enough free space to hold both the existing chain and the new synthetic full simultaneously during the synthesis process, so account for that when planning capacity.


Active Full Backup

An active full backup reads every block of every VM in the job from production storage and writes a complete new VBK file to the repository. It is the most resource-intensive backup operation you can run in Veeam. It consumes production storage I/O, backup proxy resources, and network bandwidth between production and the repository. For large VMs it can run for hours. It also resets the chain the same way a synthetic full does.

The advantage is that the resulting backup file is completely self-contained and derived directly from production data. There is no dependency on prior backup files and no synthesis step. For environments with compliance requirements around periodic independent full backups, active full is the correct choice. Some auditors will not accept a synthetic full as a "real" full backup because it is derived from previous backups rather than read directly from production.

Active full wins when both are scheduled on the same day

If you configure both an active full and a synthetic full to run on the same day, Veeam creates the active full and skips the synthetic full entirely. Active full always takes priority. This matters when building a schedule with monthly active fulls and weekly synthetic fulls: make sure your monthly active full days do not land on the same day as a scheduled synthetic full, or the synthetic full will silently not run that week.
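The precedence rules described above (active over synthetic, synthetic over the regular incremental) can be condensed into one small resolver; the day labels are arbitrary:

```python
def full_type_for_day(day: str, active_full_days: set, synthetic_full_days: set) -> str:
    """Resolve which backup type a job session produces on a given day.
    Active full always wins over synthetic full; a synthetic full
    replaces the regular incremental on its day."""
    if day in active_full_days:
        return "active full"        # synthetic silently skipped on overlap
    if day in synthetic_full_days:
        return "synthetic full"
    return "incremental"

print(full_type_for_day("Sat", {"Sat"}, {"Sat"}))  # active full
```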

When to use active full

Active full makes sense when compliance requires a periodic independent full read from production data, when you are writing directly to object storage where synthetic full is unavailable, or when your backup window can comfortably absorb the extra I/O. Weekly active fulls on a Saturday or Sunday night work well for environments that have the capacity for it. Just make sure the window is long enough to finish before Monday morning.


Choosing a Schedule Pattern

| Pattern | Best for | Production impact | Repository work |
| --- | --- | --- | --- |
| Forever forward incremental only | Small environments, low change rates, tight storage | Low after first run | Low, chain grows over time |
| Daily incremental + weekly synthetic full | Most production environments | Low (no full read from production) | Medium on synthesis day |
| Daily incremental + weekly active full | Object storage targets, compliance requirements | High on full day | Low (no synthesis) |
| Daily incremental + monthly active full + weekly synthetic full | Compliance environments needing independent monthly verification | High once per month, low otherwise | Medium on synthesis days |

Backup Windows: Why "Run at Midnight" Breaks Down

The backup window setting defines the time period during which a job is allowed to run. It does not define when the job starts. The schedule handles that. The backup window is what Veeam checks to decide whether to keep running or terminate a session that has run past the allowed hours.
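The membership test itself is trivial but easy to get wrong, because backup windows usually span midnight. A minimal sketch, using whole hours for simplicity:

```python
def inside_backup_window(hour: int, open_hour: int, close_hour: int) -> bool:
    """Whether a given hour falls inside the allowed window. A window
    like 22:00-06:00 wraps past midnight, so that case is handled
    explicitly rather than with a single comparison."""
    if open_hour < close_hour:
        return open_hour <= hour < close_hour
    return hour >= open_hour or hour < close_hour

print(inside_backup_window(3, 22, 6))   # True: 3am is inside 22:00-06:00
print(inside_backup_window(7, 22, 6))   # False: past the 6am close
```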

For one job protecting a handful of VMs, a loose overnight window or no window at all works fine. In an environment with thirty jobs all scheduled at midnight, you get a backup storm: every job competes for proxies, storage I/O, and network bandwidth simultaneously. Jobs that completed in 45 minutes now take three hours because they are all fighting over the same resources. Some miss their window. Alerts fire. People investigate problems that are really just contention.

Spreading jobs across the night

Tier by priority and size. Run your most critical, smallest VMs first. Domain controllers and SQL Servers should have restore points before the large archive file server backup starts. If something goes wrong mid-night you want the important stuff protected first.

Stagger start times by 15 to 30 minutes. This gives each job time to acquire proxies and start working before the next one requests the same resources. Ten jobs starting at exactly 00:00:00 creates contention that staggering eliminates almost entirely.
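Generating the staggered start times is simple enough to script when you are planning a rollout; the job names below are made up:

```python
from datetime import datetime, timedelta

def staggered_starts(first_start: datetime, jobs: list, gap_minutes: int = 20) -> dict:
    """Assign each job a start time offset from the previous one, in
    priority order, so they do not all request proxies and repository
    I/O at exactly the same moment."""
    return {job: first_start + timedelta(minutes=i * gap_minutes)
            for i, job in enumerate(jobs)}

starts = staggered_starts(datetime(2024, 1, 8, 23, 0),
                          ["tier1-dc-sql", "tier1-exchange",
                           "tier2-apps", "tier3-fileserver"])
for job, t in starts.items():
    print(f"{t:%H:%M}  {job}")
```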

Set backup windows with a realistic close time. If your workday starts at 8am, close backup windows at 6am to give yourself a two-hour buffer for jobs that run long. A backup job still running at 7:55am is generating production storage I/O at exactly the wrong time.

[Figure: the allowed hours grid in a backup job schedule. Jobs running outside this window are terminated at the next task boundary.]


Retry Behavior and Window Breaches

Veeam retries failed tasks up to three times by default, with a configurable interval between attempts. In most environments this is fine. In an environment where jobs are already finishing close to the end of the backup window, a retry attempt that starts at 5:50am can run past 6am and breach the window you configured. Either extend the backup window to absorb retry attempts or reduce the retry count for jobs that consistently run close to the boundary. A job that fails and retries into business hours is worse than a job that fails cleanly and fires an alert you investigate in the morning.
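The "should this retry even launch" question is a one-line comparison. This is a planning sketch for reasoning about the boundary, not a knob Veeam exposes in this form:

```python
from datetime import datetime, timedelta

def should_launch_retry(now: datetime, typical_duration: timedelta,
                        window_close: datetime) -> bool:
    """A retry is only worth launching if a typical run can finish
    before the backup window closes; otherwise it breaches the window
    and generates I/O into business hours."""
    return now + typical_duration <= window_close

close = datetime(2024, 1, 9, 6, 0)
# A 45-minute job retried at 5:50am would finish at 6:35am -- too late.
print(should_launch_retry(datetime(2024, 1, 9, 5, 50), timedelta(minutes=45), close))  # False
```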


Health Check and Compact: Two More Schedule Decisions

Health check periodically reads backup data and verifies it against stored checksums. It catches silent corruption in backup files before you discover it during a restore, which is the worst possible time to find out. Schedule it weekly or monthly, on a day when the backup job is otherwise light. Do not schedule it on synthetic full day because both operations hit the repository simultaneously and add up to a lot of I/O in one window.

Defragment and compact full backup rewrites the VBK file to reclaim space and improve read performance after long-running incremental chains have fragmented it internally. Schedule this monthly, on a weekend night when the repository has headroom. It needs temporary additional free space equal to the size of the full backup being compacted, so check capacity before it runs.

A practical starting schedule for a production environment

  • Sunday 11pm: active full for tier-1 VMs (SQL, AD, Exchange).
  • Monday through Saturday 11pm: daily incremental for tier-1.
  • Friday 11pm: synthetic full for tier-2 and tier-3 VMs.
  • Saturday 1am: health check.
  • First Sunday of each month: defragment and compact.
  • Backup windows close at 6am for all jobs.

This gives tier-1 a fresh independent full every week, synthetic full for everything else to keep chain depth manageable, and maintenance spread across the weekend when repository load is lowest.

Key Takeaways

  • Forever forward incremental is storage-efficient and low-impact on production but builds a chain that grows indefinitely. Restore time increases as the chain gets longer.
  • Synthetic full runs an incremental first to capture the latest changes, then synthesizes a new full from the entire chain on the repository. The incremental created during that session is deleted once synthesis completes. No production storage read beyond the incremental.
  • Synthetic full runs automatically even on days the regular job is not scheduled. If both a regular session and synthetic full are scheduled the same day, only the synthetic full is produced.
  • Active full reads every block from production storage. Resource-intensive but fully independent from prior backups. Takes priority over synthetic full if both are scheduled the same day.
  • Synthetic full is not supported on object storage repository targets. Use active full for jobs writing directly to object storage.
  • Backup windows define the allowed running period, not the start time. Close them before business hours with buffer for retry attempts.
  • Stagger job start times by 15 to 30 minutes. Tier by priority so critical VMs complete before larger, less critical jobs start.
  • Schedule health check and defragment on separate days from synthetic full to avoid stacking repository I/O in the same window.
