Veeam v13: Backup Copy Jobs Deep Dive -- Modes, GFS Retention, Seeding, and SOBR Targets
Veeam v13 Series | Component: VBR v13 | Audience: Hands-on Sysadmins, Enterprise Architects
Backup copy jobs are the mechanism that delivers the second copy in your 3-2-1 strategy. They're also one of the most frequently misconfigured features in Veeam. The mode choice between Immediate and Periodic has consequences that aren't obvious until a GFS full doesn't get created when expected. The retention interaction between primary jobs and copy jobs surprises people who assume the copy just mirrors the primary's retention. Seeded copy jobs for WAN environments require a specific setup sequence that most people skip, and then wonder why the initial copy runs for three weeks over the WAN when it could have seeded from a local drive.
This article covers backup copy jobs completely: how the two modes work and when to use each, GFS retention on copy jobs with the specific caveats that catch people, seeded copies, source selection, SOBR as a copy target, PowerShell automation, and what failure patterns look like and how to diagnose them.
1. Immediate Copy vs Periodic Copy Mode
This is the most impactful decision in copy job configuration and the one that's most often made without fully understanding the difference.
Immediate Copy Mode
In Immediate Copy mode, the backup copy job wakes up whenever a new restore point is created by a source backup job and immediately copies it to the target repository. There's no schedule. Every new primary backup triggers a copy run. The copy target stays as current as the primary, typically lagging only by however long the copy transfer takes.
The RPO advantage is real: if the primary repository fails two hours after a backup job completes and before the next backup runs, Immediate Copy mode means the copy repository already has that restore point. Periodic mode might not have transferred it yet.
The trade-off: GFS fulls are not created if the copy job didn't run on the day the GFS full was scheduled. This is documented explicitly in the Veeam Help Center. If your Immediate Copy job is source-driven and the source backup job didn't run on Friday (the day you've configured for weekly GFS fulls), no weekly GFS full is created for that week. There's no catch-up mechanism for Immediate Copy mode. The GFS slot is simply missed.
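A quick way to find jobs exposed to this gap is to list every copy job running in Immediate mode and review its GFS settings by hand. This sketch assumes the Veeam PowerShell module is loaded and that copy job objects expose `JobType` and `CopyMode` as shown in the automation section later in this article; the server name is a placeholder.

```powershell
# Sketch: surface Immediate-mode copy jobs for a manual GFS review.
# Server name is a placeholder; property names assumed per the
# automation examples in this article.
Connect-VBRServer -Server "vbr-server.domain.local"
Get-VBRJob |
    Where-Object { $_.JobType -eq 'BackupCopy' -and $_.CopyMode -eq 'Immediate' } |
    ForEach-Object { Write-Host "Review GFS settings on: $($_.Name)" }
Disconnect-VBRServer
```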
Periodic Copy Mode
In Periodic Copy mode, the copy job runs on a defined schedule (every N hours, daily at a specific time) and copies the latest available restore point that exists at the time the job runs. It doesn't copy every new restore point, just the latest one at each scheduled run.
The consequence is that you can lose restore point granularity. If your primary job runs every hour and your copy job runs daily, the copy repository has one restore point per day while the primary has 24. You're not mirroring every primary restore point; you're sampling the latest one at each copy interval.
The GFS behavior is more reliable in Periodic mode: if the GFS full was scheduled for Friday and the copy job runs daily, VBR will create the GFS synthetic full on Friday from the latest available backup chain data, even if the primary backup job ran on Thursday night. GFS fulls in Periodic mode are created on schedule regardless of when the source data arrived. If a GFS synthetic full wasn't created on its scheduled day, VBR creates it after the next successful run.
Which Mode to Use
| Use Case | Recommended Mode | Reason |
|---|---|---|
| Offsite copy to a DR site over a reliable link | Immediate Copy | Best RPO. Every restore point is copied as soon as it exists. |
| Copy to tape or slow WAN link | Periodic Copy | Controls when bandwidth is consumed. Doesn't compete with primary backup windows. |
| Copy job with GFS retention as primary compliance mechanism | Periodic Copy | GFS fulls are created reliably on schedule regardless of source backup timing. |
| Cloud target (S3, Azure Blob) with egress cost per transfer | Periodic Copy | Controlling when transfers happen controls egress cost. Immediate Copy can trigger unexpected egress charges. |
| High-frequency primary backups (hourly or shorter) | Periodic Copy | Immediate Copy would trigger a copy run on every primary run, and a slow copy target or WAN link may not keep up with 24 copies per day. |
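Recent versions of the Veeam PowerShell module let you make the mode choice explicit at creation time. The `-Mode` parameter name here is an assumption; verify it against `Get-Help Add-VBRBackupCopyJob` on your VBR server before relying on it, and treat the job and repository names as placeholders.

```powershell
# Hypothetical sketch: pin the copy mode when creating the job.
# -Mode is an assumed parameter name; confirm with Get-Help on your build.
$sourceJob  = Get-VBRJob -Name "Backup - Production VMs"
$targetRepo = Get-VBRBackupRepository -Name "DR-Site-Repo"
Add-VBRBackupCopyJob `
    -Name "Copy - Production to DR" `
    -Mode Periodic `
    -SourceJob $sourceJob `
    -BackupRepository $targetRepo
```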
2. GFS Retention on Copy Jobs
GFS retention on a backup copy job works by flagging full backup files with weekly (W), monthly (M), or yearly (Y) flags. Once a GFS flag is assigned to a full backup file, that file can no longer be deleted or modified. Short-term retention can't touch it. The retention policy applies on top of short-term retention: VBR keeps the GFS-flagged fulls until their GFS period expires, and manages regular backup chain files with the short-term retention point count independently.
GFS Methods: Synthetic Full vs Active Full
When creating GFS archive fulls, VBR uses synthetic full creation by default. It reads data from the existing backup chain on the copy target and synthesizes a full backup without re-reading from the source. This is efficient but generates random I/O on the copy target, which is a problem for deduplication appliances (ExaGrid, Data Domain, StoreOnce) that are optimized for sequential writes.
For copy jobs targeting deduplication appliances, switch the GFS method to Active Full. Active Full reads directly from the source backup repository and transfers a full copy to the target. The writes to the dedup appliance are sequential, which the appliance handles efficiently. The trade-off is higher source I/O and more WAN bandwidth consumed during GFS full creation.
The Short-Term Retention Interaction
When GFS is enabled, short-term retention counts restore points only in the active backup chain, not across the entire combination of all backup chains on the copy target. GFS-flagged full backups start new backup chains. VBR stops merging incrementals into them because they can't be modified. The active chain is the one between the most recent GFS full and the present. Short-term retention manages that active chain. Everything behind a GFS flag is managed by the GFS retention period alone.
The practical result: if you set 14 restore points of short-term retention and enable weekly GFS, your copy target holds the last 14 restore points in the current chain plus at least one GFS-flagged full per week going back however far your weekly retention period extends. The total storage consumption is higher than 14 restore points alone. Size copy target repositories to account for GFS-flagged fulls on top of your short-term retention target, not instead of it.
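The sizing consequence can be made concrete with a back-of-envelope calculation. All figures below are illustrative assumptions (a 10 TB full, 0.5 TB daily incrementals, 14 short-term points, 4 weekly and 12 monthly GFS fulls), and the math deliberately ignores compression and deduplication, so treat the result as an upper-bound sketch rather than a sizing formula.

```powershell
# Back-of-envelope copy target sizing: short-term chain plus GFS fulls.
# All figures are illustrative assumptions, not measured values.
$fullTB       = 10     # size of one full backup
$incTB        = 0.5    # size of one daily incremental
$shortTermPts = 14     # short-term retention points (1 full + 13 incrementals)
$weeklyFulls  = 4      # weekly GFS fulls kept
$monthlyFulls = 12     # monthly GFS fulls kept

$shortTermTB = $fullTB + ($shortTermPts - 1) * $incTB    # active chain: 16.5 TB
$gfsTB       = ($weeklyFulls + $monthlyFulls) * $fullTB  # GFS fulls: 160 TB
"Estimated copy target capacity: $($shortTermTB + $gfsTB) TB"
```

Even with generous dedup, the GFS fulls dominate the footprint, which is why sizing for the short-term window alone fails.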
3. Copy Job Source: Backup Job vs Backup Repository
A backup copy job can draw from two different source types: a specific backup job, or a backup repository. The choice affects which restore points are available, how GFS flags are applied, and what happens when source jobs change.
- Source: Backup Job. The copy job monitors specific backup jobs you name. When any of those jobs create a new restore point, the copy job picks it up. Adding a new VM to the source backup job automatically includes it in the copy job's scope. Removing a VM from the source job stops new restore points for that VM from being copied, but doesn't remove existing restore points from the copy target until retention expires them.
- Source: Backup Repository. The copy job monitors all backups stored in a specified repository and copies everything there. This is useful for MSP scenarios where you want to copy everything in a repository regardless of which jobs produced it. The scope changes automatically as backups are added to or removed from the repository.
For most environments, sourcing from specific backup jobs gives you tighter control. Sourcing from a repository is better when you want to apply a uniform offsite copy policy across all jobs on a site without managing copy configurations per job.
4. Seeded Copy Jobs for WAN Environments
The first run of a backup copy job to a remote site has to transfer the full backup data. For a 10 TB environment over a 100 Mbps WAN link, that's roughly nine days of continuous transfer. A seeded copy job eliminates this initial blast by pre-loading the target repository with a copy of the backup data, then pointing the copy job at that seed as its starting point. The copy job only needs to transfer changes from there forward.
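The nine-day figure follows from simple link arithmetic. This sketch ignores compression, deduplication, and protocol overhead, so real transfers usually finish faster; it exists only to show why seeding is worth the logistics.

```powershell
# Rough initial-transfer estimate for a copy job over a WAN link.
# Ignores compression, dedup, and protocol overhead; illustrative only.
$dataTB   = 10
$linkMbps = 100
$bits     = $dataTB * 1e12 * 8            # decimal TB to bits
$seconds  = $bits / ($linkMbps * 1e6)     # ideal line-rate transfer time
"{0:N1} days of continuous transfer" -f ($seconds / 86400)   # ~9.3 days
```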
The Seeding Workflow
- Run the source backup jobs normally until you have a full backup chain you want to seed from.
- Copy the backup files to portable media (external drives, NAS, shipping drives) and physically transport them to the remote site. This is called "backup copy seeding" or sometimes "ship the seed."
- At the remote site, place the backup files in a directory on the target repository server.
- In VBR, rescan the target backup repository so VBR indexes the seeded backup files.
- Create the backup copy job. On the Target step of the wizard, click the Map backup link and select the seeded backup files as the starting point. VBR maps the copy job to the existing data and begins copying only incremental changes.
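The rescan in step 4 can be scripted. `Sync-VBRBackupRepository` is the repository rescan cmdlet in the Veeam PowerShell module; the server and repository names below are placeholders, and the mapping step itself is still done in the job wizard as described above.

```powershell
# Sketch of the rescan step after placing seed files on the target
# repository. Names are placeholders for your environment.
Connect-VBRServer -Server "vbr-server.domain.local"
$repo = Get-VBRBackupRepository -Name "DR-Site-Repo"
Sync-VBRBackupRepository -Repository $repo   # indexes the seeded backup files
Disconnect-VBRServer
```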
5. Copy Jobs Targeting SOBR
A backup copy job can target a Scale Out Backup Repository as its destination. The copy job data lands in the SOBR's performance tier and participates in the same capacity tier offload and archive tier processes as primary backup data. This is a powerful combination for environments that want offsite copy to land on fast local disk and then age to object storage automatically.
Three constraints when using a SOBR as a copy job target:
- GFS-flagged fulls and Move mode. GFS-flagged full backups are treated as sealed chains once the active short-term chain moves forward past them. When they age past the SOBR Move threshold, VBR offloads them to the capacity tier the same as any other inactive sealed chain. What stays on the performance tier is the active short-term chain, because it is never sealed and cannot be moved. Size the performance tier for the short-term chain plus GFS fulls that have not yet reached the Move age threshold, not just the short-term operational restore window.
- Chain integrity on the performance tier. If the SOBR's Data Locality policy moves part of a backup chain to a sealed or evacuated extent and that extent goes offline, the copy job chain breaks on the next run. Monitor extent health on SOBRs that are copy job targets the same way you monitor primary job target SOBRs.
- Forever forward incremental and Move mode. Forever forward incremental chains are always active because the single full backup is never sealed. Move mode requires an inactive sealed chain to operate, so Move mode has nothing to act on with a forever forward incremental chain. This applies to both primary jobs and copy jobs equally. Use Copy mode if your copy job produces a forever forward incremental chain and you need capacity tier offload.
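Extent health on a copy-target SOBR can be spot-checked from PowerShell. `Get-VBRScaleOutBackupRepository` and `Get-VBRRepositoryExtent` are real cmdlets, but the `Status` property name on the extent objects is an assumption; inspect what your build returns with `Get-Member` before wiring this into monitoring. The SOBR name is a placeholder.

```powershell
# Sketch: enumerate performance tier extents of a SOBR used as a copy
# target. The Status property is an assumption; verify with Get-Member.
Connect-VBRServer -Server "vbr-server.domain.local"
$sobr = Get-VBRScaleOutBackupRepository -Name "DR-SOBR"
Get-VBRRepositoryExtent -Repository $sobr | ForEach-Object {
    Write-Host "$($_.Name): $($_.Status)"
}
Disconnect-VBRServer
```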
6. PowerShell Automation
The first script creates a Periodic-mode copy job from two source jobs with a 7-year GFS retention scheme:

```powershell
Connect-VBRServer -Server "vbr-server.domain.local"

# Get source backup jobs to copy from
$sourceJobs = @(
    Get-VBRJob -Name "Backup - Production VMs"
    Get-VBRJob -Name "Backup - Database Servers"
)

# Get target repository
$targetRepo = Get-VBRBackupRepository -Name "DR-Site-Repo"

# Build GFS retention schedule:
# weekly fulls for 4 weeks, monthly fulls for 12 months, yearly fulls for 7 years
$gfsPolicy = New-VBRBackupGFSRetentionPolicy `
    -IsWeeklyEnabled $true -WeeklyRetentionPeriod Weeks4 `
    -IsMonthlyEnabled $true -MonthlyRetentionPeriod Months12 `
    -IsYearlyEnabled $true -YearlyRetentionPeriod Years7

# Create the copy job in Periodic mode (runs daily at 01:00)
$copyJob = Add-VBRBackupCopyJob `
    -Name "Copy - Production to DR" `
    -SourceJob $sourceJobs `
    -BackupRepository $targetRepo `
    -RestorePointsToKeep 14 `
    -GFSRetentionPolicy $gfsPolicy `
    -Description "Daily copy to DR site with 7-year GFS retention"

Write-Host "Copy job created: $($copyJob.Name)"
Write-Host "Mode: $($copyJob.CopyMode)"

# Schedule the copy job to run daily at 01:00
$schedule = New-VBRScheduleOptions -Type Periodically -PeriodicallyKind Hours -FullPeriod 24 -StartDateTime ([datetime]"2025-01-01 01:00")
Set-VBRJobSchedule -Job $copyJob -Options $schedule

Disconnect-VBRServer
```
The second script reports restore points and GFS flags for every copy job on the server:

```powershell
Connect-VBRServer -Server "vbr-server.domain.local"

$copyJobs = Get-VBRJob | Where-Object { $_.JobType -eq 'BackupCopy' }

foreach ($job in $copyJobs) {
    Write-Host "`n=== $($job.Name) ==="
    Write-Host "  Mode: $($job.CopyMode)"
    Write-Host "  Last Result: $($job.GetLastResult())"
    Write-Host "  Last Run: $($job.LatestRunLocal)"

    # Get restore points for this copy job and check GFS flags
    $backup = Get-VBRBackup | Where-Object { $_.JobId -eq $job.Id }
    if ($backup) {
        $points = Get-VBRRestorePoint -Backup $backup | Sort-Object CreationTime -Descending
        Write-Host "  Restore Points: $($points.Count)"

        # Find GFS-flagged restore points
        $gfsPoints = $points | Where-Object { $_.GetGFSFlags() -ne 'None' }
        if ($gfsPoints) {
            Write-Host "  GFS Points:"
            $gfsPoints | ForEach-Object {
                Write-Host "    $($_.CreationTime.ToString('yyyy-MM-dd')) - Flags: $($_.GetGFSFlags())"
            }
        } else {
            Write-Host "  GFS Points: None flagged"
        }
    }
}

Disconnect-VBRServer
```
The third script is a health check that flags copy jobs that are stale, failed, or in a warning state, and exports the findings to CSV:

```powershell
Connect-VBRServer -Server "vbr-server.domain.local"

$copyJobs  = Get-VBRJob | Where-Object { $_.JobType -eq 'BackupCopy' }
$threshold = (Get-Date).AddHours(-25)   # Flag copy jobs not run in over 25 hours
$issues    = @()

foreach ($job in $copyJobs) {
    $lastRun    = $job.LatestRunLocal
    $lastResult = $job.GetLastResult()
    $lag        = if ($lastRun) { [math]::Round(((Get-Date) - $lastRun).TotalHours, 1) } else { 999 }

    # Flag jobs not run recently or with failure/warning state
    $warning = $false
    $reason  = @()
    if (-not $lastRun -or $lastRun -lt $threshold) {
        $warning = $true
        $reason += "Last run: $(if ($lastRun) { "$lag hours ago" } else { 'never' })"
    }
    if ($lastResult -eq 'Failed') {
        $warning = $true
        $reason += "Last result: FAILED"
    }
    if ($lastResult -eq 'Warning') {
        $reason += "Last result: WARNING"
    }
    if ($warning -or $lastResult -eq 'Warning') {
        $issues += [PSCustomObject]@{
            JobName    = $job.Name
            LastRun    = if ($lastRun) { $lastRun.ToString("yyyy-MM-dd HH:mm") } else { "Never" }
            LagHours   = $lag
            LastResult = $lastResult
            Issues     = $reason -join "; "
        }
    }
}

if ($issues.Count -eq 0) {
    Write-Host "All copy jobs are running on schedule with no failures."
} else {
    Write-Host "Copy jobs requiring attention: $($issues.Count)"
    $issues | Format-Table -AutoSize
    $issues | Export-Csv "C:\Reports\CopyJob-Health-$(Get-Date -Format 'yyyyMMdd').csv" -NoTypeInformation
}

Disconnect-VBRServer
```
7. Common Failure Patterns and How to Diagnose Them
GFS Full Not Created on Schedule
The most common GFS complaint. Check the copy job mode first. If it's Immediate Copy mode and the source backup job didn't run on the day the weekly GFS full was scheduled, no GFS full was created. The fix is switching to Periodic Copy mode for any copy job where GFS reliability matters, or accepting that Immediate Copy mode can miss GFS creation on days when the source job doesn't run.
If the copy job is Periodic mode and a GFS full still didn't get created, check whether the copy job completed successfully on the scheduled GFS creation day. If the copy job ran with warnings or a partial result, VBR may not have had sufficient data to create the GFS full. Check the job log for that specific day's run.
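Pulling the job's sessions around the scheduled GFS day makes that check concrete. This sketch assumes copy job sessions are returned by `Get-VBRBackupSession` in your build (worth verifying, since session cmdlets have shifted across versions); the job name and cutoff date are placeholders.

```powershell
# Sketch: list the copy job's recent sessions and results around the
# scheduled GFS day. Job name and date are placeholders.
Connect-VBRServer -Server "vbr-server.domain.local"
$job = Get-VBRJob -Name "Copy - Production to DR"
Get-VBRBackupSession |
    Where-Object { $_.JobId -eq $job.Id -and $_.CreationTime -ge [datetime]"2025-01-03" } |
    Select-Object CreationTime, Result |
    Format-Table -AutoSize
Disconnect-VBRServer
```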
Copy Job Always Shows Warning with "Some VMs are not Protected"
This happens when a backup copy job can't process all VMs in the time window before the next run starts. VBR creates a backup file on the target for the VMs it did process, but the remaining VMs get no new restore point on the copy target for that interval. The cause is usually a copy interval that's too short relative to how much data needs to transfer, or WAN bandwidth insufficient to transfer all VMs between copy runs. Increasing the copy interval or reducing the scope per copy job resolves this.
Copy Job Running Behind Primary by More Than One Day
In Periodic Copy mode, if the copy job is set to copy daily but is consistently behind by more than one primary backup cycle, the WAN link between the source and target isn't fast enough to transfer the changed data volume within 24 hours. Options: reduce the amount of data per copy job by splitting VMs across multiple copy jobs, add WAN acceleration, or increase the copy interval to every 48 hours and accept a looser RPO on the copy. WAN acceleration produces the most impact when data has good deduplication ratios against existing data at the target.
Copy Job Failing with "Target Repository Has No Free Space"
On a SOBR target, this usually means the performance tier is full even though the SOBR summary shows available capacity in the capacity tier. VBR can't write to the capacity tier directly for new backup chains. Add a performance tier extent or reduce the offload age threshold so data moves to the capacity tier faster. On a standard repository target, size the repository to hold short-term retention plus GFS-flagged fulls across their full retention windows.
Key Takeaways
- Immediate Copy mode copies every new restore point as it's created. Best RPO. GFS fulls are NOT created if the source backup job didn't run on the scheduled GFS creation day. Use Periodic Copy mode when GFS reliability matters more than minimum RPO on the copy.
- Periodic Copy mode copies the latest available restore point at each scheduled interval. You lose granularity on high-frequency primary jobs but GFS fulls are created reliably on schedule. If a GFS full was missed, VBR creates it after the next successful copy run.
- GFS-flagged full backups can't be deleted or modified by short-term retention. Once flagged, the file is owned by its GFS retention period. Short-term retention manages only the active chain between the most recent GFS full and the present.
A GFS scheme with only a yearly cycle and no weekly or monthly fulls creates a dangerously long incremental chain. Configure weekly and monthly GFS cycles as well, or set a periodic full backup schedule on the copy job to break the chain at regular intervals.
- For deduplication appliance targets, switch GFS creation to Active Full mode. Synthetic full creation uses random I/O that hurts dedup appliance performance. Active Full writes sequentially, which dedup appliances handle efficiently.
- Seeded copy jobs eliminate the initial WAN blast for large environments. Copy backup files to portable media, ship to the remote site, rescan the target repository, then map the copy job to the seeded data. VBR transfers only incrementals forward from the seed point.
- On a SOBR with Move mode, GFS-flagged fulls are moved to the capacity tier once they are sealed and age past the Move threshold, same as any other inactive chain. The active short-term chain stays on the performance tier because it is never sealed. Size the performance tier for the short-term chain plus GFS fulls that have not yet aged out to the capacity tier.
- "Some VMs are not protected" warnings on copy jobs mean the job can't transfer all VMs within the copy interval. Increase the interval, reduce the job scope, or add WAN acceleration.