Veeam v13: VBR PostgreSQL Operational Tuning
Veeam v13 Series | Component: VBR v13, PostgreSQL 15+ | Audience: Hands-on Sysadmins, MSP Engineers
VBR v13 defaults to PostgreSQL for all new installations. PostgreSQL is also the only supported database option if you're moving to the Linux Software Appliance. If you're running VBR on an older SQL Server instance, you have a migration ahead of you. If you've already migrated and things are slower than expected, that's almost always a tuning issue. The default PostgreSQL configuration is built for general workloads, not for VBR's access patterns, and if you migrated manually rather than through the Veeam installer, the tuning step may have been skipped entirely.
This article covers the migration from SQL Server to PostgreSQL step by step, what the Veeam tuning command actually does and why it matters, vacuum behavior and why it's different from SQL Server's autoshrink mental model, monitoring queries you should be running regularly, and what happens to performance at scale past 50 tenants or 500+ VMs.
1. Should You Migrate? Decision Guide
Not every VBR installation needs to migrate today. Here's where things stand.
| Your Situation | Migrate Now? | Reason |
|---|---|---|
| Running SQL Express, small environment, no plans for Linux Appliance | Optional | SQL Express works fine under the 10 GB database cap. Migrate when convenient, not urgently. |
| Running SQL Express, approaching 10 GB database limit | Yes | SQL Express will stop accepting writes at 10 GB. PostgreSQL has no size cap. |
| Running licensed SQL Server just for VBR | Yes | That SQL Server license exists only to serve VBR. Migrate to PostgreSQL and reclaim the cost and patching overhead. |
| Planning to move to the Linux Software Appliance | Required | The Linux Appliance is PostgreSQL only. No migration, no Linux Appliance. |
| MSP with 50+ tenants, file-to-tape workloads, large database | Plan carefully | File-to-tape jobs generate heavy database metadata. PostgreSQL handles it but needs tuning. Plan the migration with a maintenance window and test on a staging environment first. |
| Running SQL Server AlwaysOn or Failover Cluster for VBR HA | Not yet | PostgreSQL has no native equivalent of AlwaysOn or FCI. If database-level HA is a hard requirement, stay on SQL Server until Veeam addresses this. |
2. The Migration Process
It's a backup and restore cycle, not a live database conversion. VBR stays on the same server. It disconnects from the SQL Server database and reconnects to PostgreSQL. Nothing changes about your jobs, schedules, infrastructure, or credentials. The only thing that changes is which database engine VBR talks to.
- Install PostgreSQL. If you're on Windows, the Veeam v13 ISO includes PostgreSQL 15. Install it from there or install it separately. Supported versions are 14 and later. The Veeam installer sets it up with a dedicated Veeam user and a basic configuration. If you install PostgreSQL separately, you'll need to do the Veeam user setup manually.
- Create a configuration backup. In VBR, go to the main menu, then Configuration Backup, then Backup Now. Enable encryption on this backup. The configuration backup contains credentials and other sensitive data. Without encryption, that data is stored in clear text in the backup file.
- Restore with Migration mode. Go to the main menu, then Configuration Restore, then Migrate. Select the configuration backup file you just created, enter the encryption password, and select the PostgreSQL instance as the target. VBR restores the configuration to PostgreSQL and reconnects.
- Run the tuning command. This is the step most guides skip entirely. See Section 3 for what it does and why it matters.
- Verify and re-enable jobs. Jobs are disabled during migration. Re-enable them after verifying the console connects cleanly, credentials work, and backup infrastructure is visible. Run a test job before re-enabling everything.
3. The Tuning Command
The Veeam PowerShell cmdlet Set-VBRPSQLDatabaseServerLimits calculates PostgreSQL configuration values based on your server's actual hardware and outputs them to a file. It's not optional on any environment with meaningful scale. The default PostgreSQL configuration allocates minimal shared memory and conservative connection limits designed for general workloads. VBR's access pattern is read heavy during reporting and write heavy during job runs, with bursts of concurrent queries that the default configuration throttles unnecessarily.
```powershell
Connect-VBRServer -Server "localhost"

# Generate recommended configuration values based on this server's hardware.
# Outputs a file you review and apply to postgresql.auto.conf.
Set-VBRPSQLDatabaseServerLimits -DumpToFile "C:\temp\pg-recommended.txt"

Disconnect-VBRServer

# Review the output file
Get-Content "C:\temp\pg-recommended.txt"

# The key parameters it tunes:
#   shared_buffers       - memory PostgreSQL uses for its buffer cache
#   effective_cache_size - hint to the query planner about available OS cache
#   maintenance_work_mem - memory for maintenance operations (VACUUM, CREATE INDEX)
#   max_connections      - maximum concurrent client connections
#   work_mem             - memory per sort operation per connection
#   wal_buffers          - write-ahead log buffer size

# Apply the recommended values to postgresql.auto.conf
# (replace with actual values from your output file)
$pgDataDir  = "C:\Program Files\PostgreSQL\15\data"
$pgAutoConf = "$pgDataDir\postgresql.auto.conf"

# Example - use the actual values from the Set-VBRPSQLDatabaseServerLimits output
Add-Content $pgAutoConf "shared_buffers = 2GB"
Add-Content $pgAutoConf "effective_cache_size = 6GB"
Add-Content $pgAutoConf "maintenance_work_mem = 512MB"
Add-Content $pgAutoConf "max_connections = 200"

# A reload (pg_ctl reload) is enough for work_mem and effective_cache_size,
# but shared_buffers, wal_buffers, and max_connections only take effect after
# a full restart of the PostgreSQL service:
Restart-Service "postgresql-x64-15"

Write-Host "Applied tuning values from pg-recommended.txt to postgresql.auto.conf"
```
If VBR installed PostgreSQL during its own setup process, this tuning is applied automatically and you don't need to run it manually. The command is primarily needed when you installed PostgreSQL independently before migrating VBR to it, or when you're on an older VBR version that didn't do automatic tuning.
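To build intuition for what the cmdlet is doing, the hardware-based sizing can be sketched with common community heuristics (shared_buffers around 25% of RAM, effective_cache_size around 75%, and so on). This is a hypothetical illustration, not Veeam's actual formula; `recommend_pg_settings` is an invented helper, and you should always apply the values the cmdlet itself emits.

```python
# Rough sketch of pgtune-style sizing heuristics for a dedicated DB host.
# NOT Veeam's exact formula - verify against the real
# Set-VBRPSQLDatabaseServerLimits output on your own server.

def recommend_pg_settings(ram_gb: int, max_connections: int = 200) -> dict:
    """Return rough PostgreSQL memory settings for a dedicated database host."""
    ram_mb = ram_gb * 1024
    shared_buffers_mb = ram_mb // 4                     # ~25% of RAM
    effective_cache_size_mb = ram_mb * 3 // 4           # ~75% of RAM (planner hint only)
    maintenance_work_mem_mb = min(ram_mb // 16, 2048)   # capped; used by VACUUM
    # work_mem is allocated per sort, per connection, so divide conservatively
    work_mem_mb = max((ram_mb - shared_buffers_mb) // (max_connections * 4), 4)
    return {
        "shared_buffers": f"{shared_buffers_mb}MB",
        "effective_cache_size": f"{effective_cache_size_mb}MB",
        "maintenance_work_mem": f"{maintenance_work_mem_mb}MB",
        "work_mem": f"{work_mem_mb}MB",
        "max_connections": max_connections,
    }

# Example: a 16 GB VBR server
for key, value in recommend_pg_settings(16).items():
    print(f"{key} = {value}")
```

The asymmetry between shared_buffers and effective_cache_size is deliberate: the former allocates real memory, while the latter only tells the planner how much OS file cache it can assume exists.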
4. Vacuum: What It Is and Why It Matters
PostgreSQL uses MVCC (Multi Version Concurrency Control) for transaction isolation. When a row is updated or deleted, the old version isn't removed immediately. It's marked as dead and left in place until VACUUM cleans it up. A database with frequent updates (like VBR's job session tables) accumulates dead rows that consume disk space and slow down queries until VACUUM runs. This is fundamentally different from SQL Server's autoshrink approach.
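If you want to see the dead-row accounting directly, a throwaway table makes it visible. This sketch assumes a scratch database (not VeeamBackup) and an illustrative table name:

```sql
-- Illustration only: run on a scratch database, never on VeeamBackup.
CREATE TABLE mvcc_demo (id int, payload text);
INSERT INTO mvcc_demo SELECT g, 'x' FROM generate_series(1, 10000) g;
UPDATE mvcc_demo SET payload = 'y';  -- every old row version is now dead

-- Statistics are reported asynchronously; after a moment, n_dead_tup
-- should show roughly 10000 dead row versions alongside the live ones.
SELECT n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'mvcc_demo';

VACUUM mvcc_demo;  -- marks the dead row versions as reusable space
```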
PostgreSQL runs autovacuum automatically in the background. For most VBR installations, autovacuum handles the dead row cleanup without any manual intervention. You need to actively think about vacuum when:
- The VBR database is large (hundreds of GB) after migrating from a heavily used SQL Express instance, and autovacuum hasn't had time to clean up the dead rows from the migration.
- File-to-tape jobs are running. These jobs generate substantial metadata: each file produces roughly the same volume of database writes that a VM job produces per VM. A large NAS-to-tape job can write enough session metadata to cause table bloat faster than autovacuum's default schedule can handle.
- You're seeing slow queries in pg_stat_activity or the VBR console is slow to display job history.
```sql
-- Connect to the VBR database as the postgres user.
-- Default VBR database name is VeeamBackup.

-- Check tables with the most dead rows (candidates for manual VACUUM)
SELECT
    schemaname,
    relname AS table_name,
    n_live_tup,
    n_dead_tup,
    ROUND(n_dead_tup::numeric / NULLIF(n_live_tup + n_dead_tup, 0) * 100, 1) AS dead_pct,
    last_autovacuum,
    last_autoanalyze
FROM pg_stat_user_tables
WHERE n_dead_tup > 10000
ORDER BY n_dead_tup DESC
LIMIT 20;

-- Check database size
SELECT pg_size_pretty(pg_database_size('VeeamBackup')) AS db_size;

-- Run VACUUM ANALYZE manually on the most bloated table
-- (replace table_name with the actual table from the query above)
VACUUM ANALYZE table_name;

-- Reclaim disk space across the whole database. Plain VACUUM does not return
-- space to the OS; VACUUM FULL does, but it rewrites every table under an
-- ACCESS EXCLUSIVE lock - run it only during a maintenance window.
VACUUM FULL ANALYZE;
```
5. Monitoring Queries
```sql
-- Check active connections and what they're doing
SELECT
    pid,
    usename,
    application_name,
    state,
    wait_event_type,
    wait_event,
    query_start,
    EXTRACT(EPOCH FROM (now() - query_start)) AS query_seconds,
    LEFT(query, 120) AS query_preview
FROM pg_stat_activity
WHERE datname = 'VeeamBackup'
  AND state != 'idle'
ORDER BY query_start;

-- Check for long-running queries (over 60 seconds)
SELECT pid, usename, state, query_start,
    EXTRACT(EPOCH FROM (now() - query_start)) AS seconds_running,
    LEFT(query, 200) AS query
FROM pg_stat_activity
WHERE datname = 'VeeamBackup'
  AND state = 'active'
  AND query_start < now() - INTERVAL '60 seconds'
ORDER BY seconds_running DESC;

-- Check connection count vs max_connections
SELECT
    COUNT(*) AS current_connections,
    (SELECT setting::int FROM pg_settings WHERE name = 'max_connections') AS max_connections,
    ROUND(COUNT(*)::numeric / (SELECT setting::int FROM pg_settings WHERE name = 'max_connections') * 100, 1) AS pct_used
FROM pg_stat_activity
WHERE datname = 'VeeamBackup';
```
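If the long-running-query check surfaces a runaway session, PostgreSQL's built-in admin functions can stop it. The pid here is illustrative; use the one returned by the query above:

```sql
-- Graceful option: cancel the session's current query, keep the connection
SELECT pg_cancel_backend(12345);

-- Heavier option: terminate the backend and drop the connection entirely
SELECT pg_terminate_backend(12345);
```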
6. Performance at Scale: Past 50 Tenants
MSP environments with 50 or more tenants, large numbers of VMs, and active tape workflows put specific pressure on the PostgreSQL database that smaller environments don't surface. The two problems that show up most at this scale are connection saturation and table bloat from tape metadata.
Connection Saturation
VBR opens database connections for each backup job, each restore session, and each management operation running concurrently. In a busy MSP environment with hundreds of simultaneous tenant jobs, the default max_connections value (100 in a standard PostgreSQL install) fills up. When it does, new connection attempts fail with "too many clients" errors and VBR jobs start failing with database connectivity errors that look like infrastructure problems but are actually database limits.
The solution is two-part: set max_connections to a value appropriate for your scale (the Set-VBRPSQLDatabaseServerLimits output gives you a starting point), and consider deploying PgBouncer as a connection pooler in front of PostgreSQL. PgBouncer maintains a pool of persistent server connections and hands them to VBR clients on demand, letting VBR run far more concurrent operations than max_connections alone would permit, without a proportional increase in memory overhead.
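A minimal PgBouncer configuration might look like the sketch below. The pool sizes, paths, and port are illustrative assumptions, not Veeam-validated values; session pooling is the conservative choice for an application that relies on session state or prepared statements.

```ini
; Illustrative PgBouncer sketch - tune pool sizes to your own environment.
; Point VBR's database connection at port 6432 instead of 5432.
[databases]
VeeamBackup = host=127.0.0.1 port=5432 dbname=VeeamBackup

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
; session pooling is the safe default for apps that use
; prepared statements or session-level settings
pool_mode = session
max_client_conn = 1000     ; clients PgBouncer will accept
default_pool_size = 80     ; actual connections held open to PostgreSQL
```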
Tape Metadata Bloat
File-to-tape jobs write one metadata record per file into the VBR database, the same way VM backup jobs write one record per VM. A NAS job backing up a share with 10 million files, then writing it to tape, generates 10 million metadata rows. These rows accumulate faster than autovacuum's default schedule can clean them. The result is a database that grows to hundreds of gigabytes, slow job history queries in the console, and high CPU from autovacuum trying to catch up.
The practical fixes: increase autovacuum aggressiveness on the tape catalog tables specifically, run scheduled manual VACUUM ANALYZE during off-peak hours, and review the tape catalog retention settings in VBR that control how long catalog entries are kept. Removing old tape catalog entries through the VBR interface (never directly from the database) reclaims both database space and metadata processing overhead.
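Per-table autovacuum overrides can be sketched as follows. The table name tape_files is a placeholder, since actual VBR table names vary by version; identify the genuinely bloated tables first with the dead-row query from Section 4.

```sql
-- Sketch: tighten autovacuum on a high-churn table. "tape_files" is a
-- placeholder - substitute the real table names from pg_stat_user_tables.
-- The default autovacuum_vacuum_scale_factor is 0.2 (vacuum after 20% of
-- rows are dead); 0.02 triggers cleanup after 2% instead.
ALTER TABLE tape_files SET (
    autovacuum_vacuum_scale_factor = 0.02,
    autovacuum_analyze_scale_factor = 0.01
);

-- Verify which tables now carry per-table overrides
SELECT relname, reloptions
FROM pg_class
WHERE reloptions IS NOT NULL;
```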
Key Takeaways
- PostgreSQL is the default for all new VBR v13 installations and the only option for the Linux Software Appliance. Veeam's stated direction is to phase out SQL Server as a supported VBR database. That's not immediate but it's confirmed. Plan the migration even if you don't execute it now.
- Migration is a backup and restore cycle, not a live conversion. Your jobs, schedules, infrastructure settings, and credentials all carry over. Only the database engine changes.
- Run Set-VBRPSQLDatabaseServerLimits and apply the output to postgresql.auto.conf after migrating. The default PostgreSQL configuration is not tuned for VBR access patterns, and skipping this step is the most common reason post-migration performance is worse than SQL Server.
- Exclude the PostgreSQL data directory and binary directory from antivirus real-time scanning. This is documented in the Veeam PostgreSQL guide, and missing exclusions are behind a large share of post-migration performance complaints.
- PostgreSQL autovacuum handles dead row cleanup automatically for most environments. You need to think about manual VACUUM when you're running file-to-tape workloads with millions of files, when the database is very large after migration, or when job history queries in the console are slow.
- At MSP scale with 50+ tenants, watch max_connections. When it fills up, VBR jobs fail with database connectivity errors that look like infrastructure problems. Set max_connections based on Set-VBRPSQLDatabaseServerLimits output, and consider PgBouncer if concurrent connection counts consistently approach the limit.