Break Glass #01: VBR Server Dead - Rebuild from Configuration Backup

Your VBR server is gone. Hardware failure, OS corruption, ransomware -- does not matter. The console is unreachable. Jobs are not running. The backup data is sitting on your repositories untouched. But you cannot get to it until you rebuild the server. You have a configuration backup. This is how you use it.

Why This Happens

VBR servers fail for the same reasons everything else fails. Hardware dies. RAID controllers corrupt volumes. A Windows patch kills the PostgreSQL service bindings and nobody notices until a job fails at 3 AM. A ransomware actor goes after the backup server specifically because they know what it is and what stopping it means.

In MSP environments the VBR server often lives on hardware that has been deprioritized for years. Everyone focuses on client workloads. The backup server just runs. Nobody plans to rebuild it until they have to.

VBR v13 uses PostgreSQL as the default configuration database on Windows. If the OS dies or PostgreSQL gets corrupted during an ungraceful shutdown, you are not recovering that database without significant effort. A configuration backup is the clean path out.

One thing to be clear on before we go further. The configuration backup stores everything VBR knows about your environment: jobs, schedules, proxies, repositories, credentials, tape infrastructure, scale-out backup repository definitions, and encryption key hashes. It does not store backup data. Your restore points are safe on the repositories. What disappeared is VBR's map of how to reach them. That is what you are rebuilding.

Triage

Do not spin up a new server yet. Confirm the situation and gather everything you need before you touch an installer.

  1. Confirm the VBR server is actually unrecoverable. Can you boot to Windows recovery? Can you mount the drive on another host? If the PostgreSQL data directory is readable, you may have options short of a full rebuild. Do not assume total loss until you have ruled out simpler paths.
  2. Find the configuration backup file (.bco extension). It should be on a repository that is not the VBR server itself -- a network share, object storage, or a separate repo host. If the config backup was stored on the VBR server's local disk, you have a harder situation. See the Gotchas section.
  3. Identify the VBR version of the failed server. VBR v13 can only restore configuration backups created with VBR v13. This is a change from v12, which supported cross-version config restores going back to v10a. If your last config backup was taken on a v12 server, you cannot restore it directly onto a v13 install. You would need to install v12 first, restore the config backup there, then upgrade to v13. Check your asset tracking, licensing portal, or any documentation you have to confirm the version that created the .bco file.
  4. Retrieve the config backup encryption password. If the backup was encrypted (it should be), you need this password to open the .bco file. Without it, the wizard cannot read the backup. This is separate from job encryption passwords and separate from the PostgreSQL password.
  5. Retrieve the PostgreSQL superuser password set during the original VBR installation. You need it when installing VBR on the replacement server. It is not stored anywhere in the VBR UI or in the config backup. If it is not in your password vault, see the Gotchas section.
  6. Check whether any registry keys were manually created or modified on the original VBR server. The configuration database does not preserve registry values. If you had custom registry entries under HKLM\SOFTWARE\Veeam\ for tuning or workarounds, document them now before they become a mystery later.
  7. Get the service account credentials for the Veeam services. If VBR runs under a domain account and that password has changed since the last config backup was taken, services may fail to start on the new server. Get the current credentials now.
  8. Confirm you have your Veeam license key or access to the licensing portal. A fresh install on new hardware may require re-activation.
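
If the repository share is still mountable from another machine, a short script can locate the newest .bco file before you touch the replacement server. A minimal sketch in Python -- the UNC path in the example is hypothetical; point it at wherever your config backup job actually writes:

```python
from pathlib import Path

def newest_bco(root):
    """Return (path, mtime) of the most recent .bco file under root, or None.

    Walks the tree recursively so nested per-server folders are covered.
    """
    candidates = [p for p in Path(root).rglob("*.bco") if p.is_file()]
    if not candidates:
        return None
    best = max(candidates, key=lambda p: p.stat().st_mtime)
    return best, best.stat().st_mtime

# Example -- hypothetical share path, adjust to your environment:
# result = newest_bco(r"\\repo01\configbackup")
```

Grab the file it finds and copy it somewhere the new server will be able to reach before you start the install.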

The Recovery Path

  1. Before running the installer, check that port 443 is not already bound to anything on the new server. VBR v13 requires port 443 for its REST API and web UI. Anything else sitting on 443 causes VeeamBackupRESTSvc to fail to start. Check now:
    netstat -ano | findstr :443
    If anything comes back, identify the process by PID and resolve it before running setup.
  2. Provision a replacement Windows Server 2019 or 2022. Use the same hostname as the original if you can. A hostname change does not break the config restore itself, but it requires re-establishing trust relationships with every managed server that referenced the old name.
  3. Mount the VBR v13 ISO and run setup.exe. Select Veeam Backup & Replication. At the database step, choose PostgreSQL and enter the superuser password. Use the same installation path as the original server where possible. Do not configure any backup infrastructure during setup -- leave the environment empty. Let the installer complete and verify all Veeam services start before continuing.
  4. Open the VBR console. From the main menu (the hamburger icon, top left), select Configuration Backup. Click Restore. The Configuration Database Restore wizard opens.
  5. At the Restore Mode step, select Restore -- not Migrate. Migrate is for moving from SQL Server to PostgreSQL on the same machine. You want a straight restore. Click Next.
  6. At the Configuration Backup step, specify the path to your .bco file. You can point at a repository the new server can reach, or copy the file locally and browse to it. Click Analyze. The wizard reads and validates the backup. If it reports a version error, the backup was created by an earlier version of VBR -- remember that v13 only restores v13 configuration backups. If it reports the file is corrupted, you may have a damaged or incomplete file.
  7. At the Specify Password step, enter the config backup encryption password. If the password is wrong, the wizard rejects it with no bypass and no hint. You either have it or you do not.
  8. Review the configuration backup parameters the wizard displays. Confirm the job count and the backup timestamp match what you expect. This is your last checkpoint. Make sure this is the most recent .bco file before proceeding.
  9. Complete the remaining steps for target database and restore options, then click Restore at the Summary step. The wizard prompts you to close the console -- accept it. Veeam services stop, the configuration restores, and services restart automatically. This typically takes 5 to 20 minutes depending on configuration size. Do not interrupt it.
  10. When the wizard finishes, open the VBR console. All jobs will be disabled. This is intentional. Veeam disables everything on restore to prevent two VBR instances from running the same jobs at the same time. Leave them disabled for now.
  11. Go to Backup Infrastructure. Check every managed server -- proxies, repository hosts, managed Windows and Linux servers. Anything showing as disconnected needs credentials re-entered. Right-click the server, go to Properties, and update the credentials.
  12. Go to Backup Repositories. Confirm each repository is accessible and that backup chains are visible under it. If a repository shows as empty or unreachable, that is a connectivity or credential issue, not data loss. Fix the connection before moving on.
  13. Decision Point: Scale-Out Backup Repositories
    After a configuration restore, Veeam automatically puts all capacity tier extents of any scale-out backup repositories into Sealed mode. You will see this in the console. Rescan each scale-out backup repository, then open its properties and remove the extents from Sealed mode manually. The data is intact. Sealed mode is a built-in protection mechanism that engages during config restore. It is expected, and it requires a manual step to clear.
    Decision Point: Hardened Linux Repository
    The single-use SSH credentials used to establish the original hardened repo connection were consumed during initial setup and do not exist anymore. You need to generate new single-use credentials on the Linux host and re-add the repository to VBR. The backup data and immutability flags on the repo are completely untouched. Only the VBR-to-repo connection needs to be rebuilt. Do not use a persistent root account as a workaround -- that defeats the security model of the hardened repository.
  14. Re-apply any custom registry keys that were on the original VBR server. The config backup does not preserve them. If you documented them during triage, apply them now before re-enabling jobs that may depend on them.
  15. Run a test restore before re-enabling any schedules. Pick a non-critical VM, right-click a restore point, and run Instant VM Recovery to a non-production datastore. If the VM powers on, the restore path is intact. If it fails, diagnose it now -- not after you have re-enabled 40 jobs.
  16. Re-enable jobs in priority order. Start with your most critical backup jobs and verify the first scheduled run completes successfully before enabling others. A job that ran fine on the old VBR server can fail on the new one if a proxy assignment or repository credential did not come back cleanly.
  17. If Veeam ONE is deployed, re-add the rebuilt VBR server. In Veeam ONE Monitor, go to Configuration, Data Collection, Data Source. Add the VBR server by DNS name or IP. Use a Service Account type user for the connection. Veeam ONE installs the Analytics Service on the VBR server and resumes data collection on its own.
  18. Run a manual configuration backup right now. Verify it lands on the target repository with the current timestamp and is encrypted. You just rebuilt this environment from scratch. Protect the current state immediately.
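
Once the restore completes and services restart, a quick TCP reachability check against the default VBR ports confirms the services actually came up. A minimal Python sketch, assuming default ports -- 9392 for the Backup Service and 443 for the REST API; adjust if your installation was customized:

```python
import socket

# Assumed defaults on a VBR v13 server: 9392 (Backup Service), 443 (REST API
# and web UI). Change these if your install uses custom ports.
DEFAULT_PORTS = (9392, 443)

def check_listening(host, ports=DEFAULT_PORTS, timeout=3.0):
    """Return {port: True/False} -- True if a TCP connection succeeds."""
    results = {}
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results[port] = True
        except OSError:
            results[port] = False
    return results

# Example (hypothetical hostname): check_listening("vbr01.example.local")
```

A False on either port means a service did not start cleanly -- check the Windows event log and Veeam service states before moving on to the infrastructure checks above.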

Gotchas

Config Backup Stored on the VBR Server Itself
The default repository path after a fresh VBR install is C:\Backup on the VBR server. If nobody moved it, the config backup went down with the machine. Your only option is physical drive recovery. If that fails, you are rebuilding from scratch: reinstall VBR, re-add every managed server manually, re-create every job, and use Rescan on each repository to rediscover the existing backup chains. The restore point data is still there on those repos. VBR just has no record of it. This is the worst case and it is entirely preventable.
PostgreSQL Superuser Password Not Documented
This password is set at install time and does not exist anywhere in the VBR console, the config backup, or any Veeam log. If it is not in your password vault, you will hit a wall during setup on the replacement server. There is no recovery path for this without a working PostgreSQL instance -- which the dead server no longer is. Document this password on every VBR server in your estate right now, before you need it.
Unencrypted Config Backup Is Not Supported for Restore in v13
VBR v13 does not support restoring from unencrypted configuration backup files. The restore wizard will reject them. Even in earlier versions, unencrypted config backups excluded saved infrastructure credentials and encryption key hashes. In v13, Veeam closed this gap entirely by requiring encryption on the .bco file for restore to work. Additionally, if the Password Manager contains at least one encryption password and config backup encryption is not enabled, Veeam disables the configuration backup job. Enable config backup encryption. This is not optional in v13.
Plugins Are Not Restored Automatically
Veeam plugins for Nutanix AHV, Proxmox, oVirt, and similar non-native hypervisors are separate installer packages. Config restore brings back the job definitions and infrastructure entries for those platforms, but the plugin binaries are not included in the config backup. After restore, those infrastructure items appear in the console but jobs targeting them will fail immediately. Reinstall each plugin on the rebuilt server before re-enabling the jobs that depend on it.
Registry Keys Do Not Survive Config Restore
Veeam's official restore prerequisites explicitly state this. Any registry values that were manually created or modified under HKLM\SOFTWARE\Veeam\ on the original server are not preserved in the configuration database. They are gone after a rebuild. If you applied tuning keys, workaround keys, or any custom registry changes -- check your change management records and re-apply them manually after restore.
Scale-Out Repo Extents Land in Sealed Mode
This is documented behavior that surprises people every time. After a configuration restore involving scale-out backup repositories, all capacity tier extents automatically go into Sealed mode. Jobs will not write to them in this state. Nothing is wrong with the data. Rescan the SOBR and then manually remove the extents from Sealed mode in the repository properties. Make this part of your post-restore checklist so it does not catch you off guard after you have re-enabled jobs.
Port 443 Conflict Breaks the REST Service
VBR v13 requires port 443 for the REST API service. If anything else is already bound to 443 -- IIS, a monitoring tool, another application -- VeeamBackupRESTSvc fails to start with error 1053. The console may appear to open but behave erratically. Check port 443 before you run the installer, not after you spend an hour debugging service startup failures.
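
The netstat check works fine by hand; if you build the pre-install validation into a script, a bind attempt gives the same answer programmatically. A minimal Python sketch -- note that binding a low port like 443 requires administrator rights, so run it elevated or a privilege error will look like a conflict:

```python
import socket

def port_is_free(port, host=""):
    """Try to bind the port; True means nothing else currently holds it.

    Caveat: binding low ports (like 443) needs elevated rights. A permission
    failure also returns False here, so run this check as administrator.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# Run before setup.exe: port_is_free(443) should come back True.
```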

Prevention Checklist

  • Enable configuration backup encryption. Store the password somewhere accessible without the VBR server -- team vault, offline DR document, both.
  • Set the config backup target to a repository on a different host. Not the local disk of the VBR server.
  • Add a second config backup job pointing at a second repository on a separate host. One job is one point of failure.
  • Document the PostgreSQL superuser password in the team vault on the day you install VBR. Label it clearly.
  • Keep the VBR installer in offline storage. The Veeam download portal requires authentication and may not be accessible during a ransomware incident.
  • Test the config restore in a lab at minimum once a year. If it fails in the lab, you want to know before production needs it.
  • Keep a physical or offline DR runbook with VBR server details: hostname, IP, version, service account, PostgreSQL password, config backup location. Not in the console you cannot reach.
  • Document any custom registry keys applied to the VBR server. The config backup does not preserve them.
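
Several items on this checklist can be spot-checked automatically. A hedged Python sketch that flags a stale config backup on the target share -- the 25-hour window assumes a daily config backup job plus slack, so tune it to your schedule:

```python
import sys
import time
from pathlib import Path

MAX_AGE_HOURS = 25  # assumption: daily config backup job plus one hour of slack

def config_backup_stale(root, max_age_hours=MAX_AGE_HOURS):
    """Return True if no .bco under root is newer than the threshold."""
    mtimes = [p.stat().st_mtime for p in Path(root).rglob("*.bco") if p.is_file()]
    if not mtimes:
        return True  # nothing found at all counts as stale
    age_hours = (time.time() - max(mtimes)) / 3600
    return age_hours > max_age_hours

if __name__ == "__main__" and len(sys.argv) > 1:
    # Nonzero exit code so a monitoring system can raise an alert on it.
    sys.exit(1 if config_backup_stale(sys.argv[1]) else 0)
```

Wire the exit code into whatever monitoring your RMM already runs so a silently failing config backup job gets noticed before you need the file.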

Break Glass Recap
  • Config backup must be off-server, encrypted, password accessible without VBR
  • VBR v13 only restores config backups created with v13 -- cross-version restore from v12 or earlier is not supported
  • PostgreSQL superuser password is required at install time -- not in the config backup
  • Wizard: Main Menu > Configuration Backup > Restore, then select Restore mode (not Migrate)
  • All jobs disabled after restore -- expected, leave them until you verify infrastructure
  • Scale-out repo capacity tier extents go into Sealed mode -- rescan and clear manually
  • Hardened repo connections require new single-use credentials after a rebuild
  • Reinstall non-native hypervisor plugins before re-enabling jobs that use them
  • Registry keys are not in the config backup -- re-apply manually
  • Unencrypted config backup is not supported for restore in v13 -- encryption is required
  • Port 443 must be free before VBR v13 install
  • Test restore a VM before re-enabling schedules
