Building a Hardened BaaS and DRaaS Stack with Veeam Cloud Connect v13

Building a Veeam Cloud Connect BaaS and DRaaS stack from scratch in 2026 means making architecture decisions that will be very expensive to undo later. Hardened repository placement, S3 object lock configuration, SOBR tiering, tenant isolation model, VSPC topology - get any of these wrong and you are either rebuilding infrastructure or carrying technical debt that limits how you can grow the service. This is the architecture playbook for getting it right the first time.

1. Stack Architecture Overview

A production Veeam Cloud Connect BaaS and DRaaS stack has six functional layers. Understanding how they relate to each other before touching any configuration is what separates a well-designed stack from one that scales into problems.

Full Stack Architecture Diagram

[Diagram: Veeam Cloud Connect v13 full SP stack. Service provider side: VSPC v9.1 (management, billing, tenant portal) on top of VBR v13.0.1 P2 (Cloud Connect server, Windows Server 2022); Cloud Gateways (TCP 6180 inbound, DMZ or public NIC); hardened repositories (Rocky Linux / DISA STIG, XFS + chattr immutability, single-use credentials) as the performance tier of a Scale-Out Backup Repository; S3 object storage with Object Lock (WORM - AWS / Wasabi / on-prem S3) as the capacity tier with age/size auto-offload; optional archive tier (S3 Glacier / Azure Archive) for long-term compliance. Tenant side: tenant VBR v12.3.2 or v13 (on-premises or hosted), protected workloads (vSphere / Hyper-V / AHV, physical agents), self-service backup portal via VSPC, and a cloud replication target (replica VMs in the SP's Org VDC in VCD, failover triggered via the tenant VBR) over WAN/TLS on TCP 6180. Builds as of March 2026: VBR 13.0.1 P2 (13.0.1.2067), VSPC 9.1 P1 (9.1.0.30713). The SP must run VCC 13 before any tenant upgrades to VBR 13; VSPC v9 is required for VBR v13.]

The six layers in order from infrastructure up: the hardened repository (physical block storage, XFS, immutability enforced at the filesystem level), the Scale-Out Backup Repository that abstracts the performance and capacity tiers, the S3 object lock capacity tier for long-term retention, the VBR server running the Cloud Connect service, the Cloud Gateway handling WAN termination, and VSPC on top managing tenants, billing, and the self-service portal. Each layer has specific decisions that cannot easily be changed later.

2. VBR Server - Sizing and Placement

The VBR server in a Cloud Connect deployment is not a backup proxy. It is the management and coordination layer, and in a production SP environment it should be dedicated to that role, not doubling as a proxy or repository server. The VBR server hosts the configuration database (SQL Server or PostgreSQL), handles all tenant job scheduling and session management, and communicates with VSPC.

| Component | Minimum | Recommended (Production) | Notes |
|---|---|---|---|
| CPU | 4 vCPU | 8+ vCPU | Scale with concurrent tenant sessions |
| RAM | 8 GB | 16-32 GB | 4 GB per 100 concurrent tenant jobs recommended |
| OS disk | 100 GB | 200 GB SSD | Configuration database grows with tenant count |
| OS | Windows Server 2019 | Windows Server 2022 | 2022 recommended for best VBR v13 support |
| SQL | SQL Express (free, 10 GB limit) | SQL Standard or PostgreSQL 15 | SQL Express hits its limits fast in production; size the DB server separately |
| Network | 1 GbE | 10 GbE or bonded NICs | Separate NIC for tenant traffic if possible |
Do Not Run VBR and Hardened Repository on the Same Host
The hardened repository security model requires that even if the VBR server is fully compromised, the attacker cannot reach the hardened repository - because the single-use deployment credentials are discarded after setup and not stored on VBR. Collocating VBR and the hardened repo on the same physical host breaks this threat model entirely. Keep them physically separate.

SQL vs. PostgreSQL for the VBR Database

VBR v13 supports both SQL Server and PostgreSQL as the configuration database. PostgreSQL is free and fully supported. For a greenfield deployment, PostgreSQL 15 on a dedicated VM is the right call - it eliminates the SQL licensing cost and the 10GB Express limit that catches MSPs off guard when tenant counts grow. The VBR installer handles PostgreSQL setup. Use a dedicated VM for the database, not the VBR server itself, if you expect more than 50 tenants.

3. Building the Hardened Repository

The hardened repository is the primary ransomware defense for your BaaS stack. It uses Linux filesystem-level immutability via chattr +i - once a backup file is written and the immutability flag is set, no process including root can modify or delete it until the immutability window expires. The immutability service (veeamimmureposvc) runs with root privileges but has no network access - it is local-only by design, and checks immutability attributes every 20 minutes.
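A quick way to see the filesystem-level immutability is lsattr, which prints the attribute flags that chattr sets. The sketch below parses a hardcoded sample lsattr line (the path and flag string are illustrative, not taken from a real repository) to show what to look for - on a real hardened repo you would run lsattr against the backup files themselves:

```shell
# Sample lsattr output line - the "i" in the flag field is the immutable bit.
# Real check: lsattr /mnt/veeamrepo/<tenant>/<job>/*.vbk
sample="----i---------e------- /mnt/veeamrepo/tenant1/job1/backup.vbk"

# First field is the attribute flags; "i" means chattr +i is set
flags=$(echo "$sample" | awk '{print $1}')
if echo "$flags" | grep -q "i"; then
  echo "immutable"
else
  echo "mutable"
fi
# -> immutable
```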

Option A: Veeam Infrastructure Appliance (Recommended for Greenfield)

Starting with v13, Veeam ships a hardened repository appliance ISO. Built on Rocky Linux, hardened to DISA STIG, auto-patched by Veeam, XFS pre-configured. For a greenfield deployment, this is the right starting point. You do not need Linux expertise to stand it up. The vhradmin account can only log in via console (not SSH), which is the right security posture.

Option B: Manual Linux Build

If you are building on existing Linux infrastructure or have specific OS requirements, the manual path gives you more control. Ubuntu 22.04 LTS and RHEL 9 are the most common choices.

Step 1 - OS Hardening Before Veeam Touches It
Initial OS hardening - run before Veeam installation
# Disable SSH password auth - key-based only
sed -i 's/#PasswordAuthentication yes/PasswordAuthentication no/' /etc/ssh/sshd_config

# Disable root SSH login
sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin no/' /etc/ssh/sshd_config

# Apply both sshd_config changes in a single restart
systemctl restart sshd

# Set up firewall - allow only required ports
# 6162: Veeam Data Mover
# 2500-3300: Veeam data transport
# 22: SSH management (restrict to management IP only in production)
ufw default deny incoming
ufw allow from {mgmt-subnet} to any port 22
ufw allow 6162/tcp
ufw allow 2500:3300/tcp
ufw enable

# Ensure NTP is synced - immutability timestamps depend on accurate time
timedatectl set-ntp true
timedatectl status
Step 2 - Prepare the XFS Volume
XFS with reflink and CRC enabled is required for Fast Clone during synthetic fulls. This command formats the volume correctly. Run this once - you cannot add reflink to an existing XFS filesystem without reformatting.
Format the backup volume as XFS with reflink and CRC
# Identify your block device
lsblk
# Example: /dev/sdb is the dedicated backup disk

# Partition it (if needed)
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary 0% 100%

# Format with XFS - reflink=1 and crc=1 are mandatory
mkfs.xfs -b size=4096 -m reflink=1,crc=1 -f /dev/sdb1

# Create mount point
mkdir -p /mnt/veeamrepo

# Add to fstab for persistent mount
echo "/dev/sdb1 /mnt/veeamrepo xfs defaults,nofail 0 2" >> /etc/fstab
mount -a

# Verify it mounted and reflink is active
xfs_info /mnt/veeamrepo | grep reflink
Step 3 - Create the Veeam Service Account
Create the locveeam service account with correct permissions
# Create dedicated non-root service account
useradd -m -s /bin/bash locveeam

# Set a strong password - this is the single-use credential
# Veeam will deploy the Data Mover using this credential and then discard it
passwd locveeam

# Grant ownership of the repository directory
chown locveeam:locveeam /mnt/veeamrepo
chmod 0700 /mnt/veeamrepo

# Verify directory permissions - must be exactly 0700
# and owner must match the account Veeam will use
ls -la /mnt/ | grep veeamrepo

# CRITICAL: Ensure locveeam is NOT in the sudo group
groups locveeam
# Should NOT show sudo or wheel
Step 4 - Add to VBR as Hardened Repository
In the VBR console: Backup Infrastructure, then Backup Repositories, then Add Repository, then Direct Attached Storage, then Linux (Hardened Repository). Enter the server IP, use the single-use credentials you set in Step 3, point to /mnt/veeamrepo, set your immutability window (30 days minimum recommended for ransomware protection). After adding, Veeam deploys its Data Mover using the credentials and discards them - they are not stored in the VBR database.
Step 5 - Post-Deployment SSH Lockdown
Once Veeam has deployed its Data Mover, you do not need SSH for day-to-day operations. Lock it down.
Post-deployment SSH restriction
# Disable SSH entirely if management is via console only (recommended for physical servers)
# (the service unit is "ssh" on Ubuntu, "sshd" on RHEL/Rocky)
systemctl stop ssh
systemctl disable ssh

# OR restrict to the management VLAN only in the firewall:
# ufw delete allow 22/tcp
# ufw allow from 10.x.x.0/24 to any port 22

# Disconnect iDRAC/iLO from the production network
# Management interfaces should be on an isolated OOB network only
# Physical access to console = physical access to root recovery options

# Verify veeamtransport is running (Data Mover - runs as non-root)
ps aux | grep veeamtransport

# Verify veeamimmureposvc is running (immutability service - root, no network)
systemctl status veeamimmureposvc

Immutability: What It Protects and What It Does Not

The immutability flag set by chattr +i prevents modification and deletion by any process, including root. What it does not prevent: a hypervisor administrator deleting the entire VM, a storage administrator deleting the LUN, or physical access to the disks. These attack vectors require separate controls - dedicated physical hardware for the hardened repo, storage admin credentials held separately, and iDRAC/iLO isolated from the production network.

The .VBM File Exception
The .VBM metadata file in each backup chain is NOT immutable - it must be updated on every job pass to reflect the current restore points. This is by design and does not weaken the protection of the actual backup data: the .VBK and .VIB files remain immutable. A corrupted .VBM disrupts the chain's metadata, but the immutable backup data files themselves stay intact.
Reverse Incremental and Forever-Forward Incremental Not Supported
Hardened repositories only support forward incremental with periodic synthetic or active full backups. Reverse incremental rewrites backup files on every run - that is incompatible with immutability. Forever-forward incremental also requires modifying existing files. If any existing backup jobs use these methods, change them before pointing at a hardened repo. Backup copy jobs require GFS retention policy enabled to use immutability.

4. Scale-Out Backup Repository and Tiered Storage

The Scale-Out Backup Repository (SOBR) is the abstraction layer that sits between the VBR server and the physical storage. Tenants are assigned to a SOBR, not to individual repositories. The SOBR manages tiering automatically - recent backups live on the performance tier (hardened repositories), older backups are offloaded to the capacity tier (S3 object storage) based on age or space thresholds. Tenants cannot see which tier their data is on, and restore operations work the same regardless of tier.

SOBR Design Decisions

The most important SOBR decision is the placement policy. Data Locality places all restore points for a given VM on the same extent - best for restore performance. Performance spreads data across all extents - best for write throughput when ingesting many concurrent tenant jobs. For a Cloud Connect SP environment, Data Locality is typically the right choice. Tenants have defined SLAs, and restoring quickly from a single extent is more important than maximizing ingestion throughput.

One SOBR Per Service Tier, Not One SOBR for Everything
Running all tenants through a single SOBR makes billing, SLA management, and capacity planning harder than it needs to be. Design SOBRs around service tiers: one for Tier 1 tenants with aggressive SLAs and dedicated extents, one for standard tenants with shared extents. This also lets you apply different immutability windows per tier - Tier 1 tenants may have contractual requirements for 90-day immutability that you do not want to apply to every backup on the platform.
Add hardened repositories as SOBR extents via PowerShell
# Connect to the VBR server
Connect-VBRServer -Server localhost

# Get the hardened repos to add as extents
$repo1 = Get-VBRBackupRepository -Name "HardenedRepo-01"
$repo2 = Get-VBRBackupRepository -Name "HardenedRepo-02"

# Create the SOBR with data locality placement
Add-VBRScaleOutBackupRepository `
    -Name "SP-Tier1-SOBR" `
    -PolicyType DataLocality `
    -Extent @($repo1, $repo2) `
    -MaxTaskCount 4

# Verify the SOBR was created successfully
Get-VBRScaleOutBackupRepository -Name "SP-Tier1-SOBR"

Offload Policy Configuration

The offload policy determines when data moves from the performance tier to the capacity tier. Two triggers: age-based (move backups older than N days) and size-based (move when performance tier exceeds N% capacity). Both can be active simultaneously. Age-based offload is the primary trigger for most SP environments - keep the last 30 days on fast local storage, everything older goes to S3.
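The decision logic behind the two triggers can be sketched in a few lines of shell. The thresholds and sample values below are illustrative, not Veeam defaults - Veeam evaluates these conditions internally:

```shell
# Illustrative offload-policy thresholds (not Veeam defaults)
AGE_THRESHOLD_DAYS=30       # age-based trigger: offload backups older than this
CAPACITY_THRESHOLD_PCT=80   # size-based trigger: offload when perf tier exceeds this

# Sample state of one restore point / performance tier
backup_age_days=45
perf_tier_used_pct=62

# Either trigger alone is enough to offload
offload="no"
[ "$backup_age_days" -gt "$AGE_THRESHOLD_DAYS" ] && offload="yes (age)"
[ "$perf_tier_used_pct" -gt "$CAPACITY_THRESHOLD_PCT" ] && offload="yes (capacity)"
echo "$offload"
# -> yes (age)
```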

Configure capacity tier offload policy
# Get the SOBR and the object storage repository
$sobr = Get-VBRScaleOutBackupRepository -Name "SP-Tier1-SOBR"
$s3repo = Get-VBRObjectStorageRepository -Name "SP-S3-ObjectLock"

# Set capacity tier with offload policy
Set-VBRScaleOutBackupRepository -ScaleOutBackupRepository $sobr `
    -EnableCapacityTier `
    -CapacityTierObjectStorageRepository $s3repo `
    -MoveBackupFilesOlderThan 30 `
    -CopyBackupsToCapacityTier $false

# CopyBackupsToCapacityTier offloads a copy but keeps performance tier data too
# Set to $false to move (not copy) - saves performance tier space
# Set to $true for extra resilience if S3 costs allow it

5. S3 Object Lock - The Second Immutability Layer

The hardened repository protects on-site backup data. S3 Object Lock extends immutability to the capacity tier in object storage. Together they give you immutability at both layers - ransomware cannot reach the local backups through Veeam, and cannot reach the S3 backups through the S3 API because Object Lock makes them WORM-protected at the storage level regardless of what credentials an attacker has obtained.

Bucket Configuration Requirements

Object Lock Must Be Enabled at Bucket Creation
S3 Object Lock cannot be enabled on an existing bucket. It must be set when the bucket is created. If you create a bucket, add it to Veeam, start offloading data, and then decide you want immutability - you cannot add it. Create a new bucket with Object Lock enabled, add it as a new capacity tier, and migrate. Plan this correctly upfront.
Create S3 bucket with Object Lock - AWS CLI example
# Create bucket with Object Lock enabled
aws s3api create-bucket \
    --bucket veeam-sp-capacity-tier \
    --region us-east-1 \
    --object-lock-enabled-for-bucket

# Verify Object Lock is active on the bucket
aws s3api get-object-lock-configuration \
    --bucket veeam-sp-capacity-tier

# Do NOT set a default retention policy on the bucket
# Veeam manages per-object retention locks at offload time
# A bucket-level default interferes with Veeam's object lifecycle management

# Note: Object Lock enables Versioning automatically - do not manage it independently
# CRITICAL: Do NOT enable S3 Lifecycle policies - Veeam manages object lifecycle
# Veeam must be the sole entity managing objects in this bucket

Adding the Object Lock Bucket to VBR

Add S3 Object Storage Repository in VBR
In the VBR console: Backup Infrastructure, then Object Storage Repositories, then Add Object Storage. Select your provider (AWS S3, S3-Compatible, Wasabi, etc.). In the bucket settings, enable Immutability and set the immutability period. VBR will lock each object for the specified number of days after it is written. The immutability period on the S3 bucket should be longer than the offload age - if you offload after 30 days and set S3 immutability to 35 days, you have a 5-day overlap window. Set it to at least the offload age plus 7 days.
Do Not Enable Default Retention at the Bucket Level When Using Veeam
If you set a default retention on the S3 bucket AND Veeam sets per-object retention, the bucket-level retention can prevent Veeam from managing object expiry correctly. Let Veeam manage per-object locks entirely. Disable the default retention on the bucket and let Veeam set retention on each object as it offloads data.

6. Cloud Gateway Design

The Cloud Gateway is the WAN termination point for tenant connections. Every tenant's VBR connects inbound to the Cloud Gateway on TCP 6180. The gateway handles TLS termination and forwards the backup data stream to the appropriate cloud repository. The VBR server manages the gateway configuration but does not sit in the data path - the gateway talks directly to the repository.

Single Gateway vs. Multiple Gateways

A single gateway works for small deployments but creates a single point of failure and a throughput bottleneck. The right architecture for production is a pool of Cloud Gateways behind a load balancer, or at minimum two gateways for redundancy. VBR supports multiple gateways and will balance connections across them automatically when configured as a pool. Each gateway should be dedicated to that role - do not run the VBR server or hardened repository on a gateway host.

| Scenario | Gateway Count | Notes |
|---|---|---|
| Lab / POC | 1 | Acceptable for testing only |
| Up to 50 tenants | 2 | Active/active pool, failover |
| 50-200 tenants | 2-4 | Load balance via NLB or round-robin DNS |
| 200+ tenants | 4+ | Scale horizontally, monitor concurrent session count per gateway |
Gateway Sizing Rule of Thumb
Each Cloud Gateway can handle approximately 200-300 concurrent tenant connections on a 4-vCPU / 8GB RAM VM. The bottleneck is almost always network bandwidth, not CPU. Size your gateway network interface and upstream bandwidth against your peak concurrent backup windows, not your tenant count. Most tenants do not back up simultaneously.
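That rule of thumb turns into a back-of-the-envelope gateway count. The tenant count, peak-concurrency ratio, and per-gateway capacity below are illustrative assumptions - substitute your own observed backup-window concurrency:

```shell
# Illustrative inputs - replace with measured values
TENANTS=400
PEAK_CONCURRENT_PCT=40      # % of tenants backing up at once in the worst window
PER_GATEWAY_CAPACITY=250    # concurrent connections per 4 vCPU / 8 GB gateway

peak_sessions=$((TENANTS * PEAK_CONCURRENT_PCT / 100))

# Ceiling division for capacity, plus one gateway for N+1 redundancy
gateways=$(((peak_sessions + PER_GATEWAY_CAPACITY - 1) / PER_GATEWAY_CAPACITY + 1))
echo "Peak sessions: $peak_sessions -> deploy $gateways gateways"
# -> Peak sessions: 160 -> deploy 2 gateways
```

Remember the caveat from the sizing note: validate the bandwidth per gateway separately, since the NIC usually saturates before the session count does.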

7. Tenant Isolation Model

Tenant isolation in Cloud Connect is enforced by the platform, not by operational discipline. Each tenant account has a defined storage quota on specific repositories or SOBR extents. The tenant's VBR can only write to, read from, and delete within their own namespace. A tenant cannot enumerate other tenants' backup jobs, cannot see other tenants' restore points, and cannot consume storage from another tenant's quota. This isolation holds even when multiple tenants share a physical SOBR extent.

Tenant Account Types

VCC supports three tenant types. A Cloud Connect Backup tenant uses the VCC infrastructure for backup storage only - their VBR manages the jobs locally and uploads to the cloud repository. A Cloud Connect Replication tenant runs VM replicas in the SP's infrastructure, typically in a VCD Org VDC. A combined tenant uses both services. Each type is configured separately in VSPC.

Create a Cloud Connect tenant account via VSPC API
POST https://vspc.example.com/api/v3/organizations/companies/{companyUid}/sites
Authorization: Bearer {vspc-token}
Content-Type: application/json

{
  "siteName": "tenant-acme-vcc",
  "vbrServerId": "{vbr-server-uid}",
  "tenantUsername": "acme.backups",
  "tenantPassword": "{generated-strong-password}",
  "description": "ACME Corp Cloud Connect Tenant",
  "cloudConnectResources": {
    "quotaGb": 500,
    "repositoryUid": "{sobr-uid}",
    "wanAcceleratorEnabled": false
  }
}

Storage Quota Planning

Quota is the contractual storage commitment per tenant. Set it at the sold capacity plus reasonable headroom - once a tenant's backups outgrow the quota, their jobs simply start failing, and that turns into support calls. A 20% overage buffer above the sold capacity is a reasonable starting point. Monitor quota utilization via VSPC and trigger alerts at 80% consumed so you can proactively upsell or expand before jobs fail.
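The quota arithmetic is worth scripting once so provisioning stays consistent. A sketch using the 20% buffer and 80% alert figures from the text, with a hypothetical 500 GB sold capacity:

```shell
# Hypothetical sold capacity for one tenant
SOLD_GB=500

# Provisioned quota = sold capacity + 20% overage buffer
quota_gb=$((SOLD_GB * 120 / 100))

# Alert threshold = 80% of provisioned quota
alert_gb=$((quota_gb * 80 / 100))

echo "Quota: ${quota_gb} GB, alert at ${alert_gb} GB"
# -> Quota: 600 GB, alert at 480 GB
```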

8. BaaS - Backup as a Service Configuration

BaaS in a Cloud Connect context means the SP provides cloud storage and the tenant provides the VBR server that runs the backup jobs. The tenant's VBR connects to the SP's Cloud Gateway, authenticates with the tenant credentials, and writes backups to the cloud repository quota allocated to that tenant. The SP does not manage the tenant's backup jobs - that is the tenant's responsibility, or optionally the SP's via VSPC hosted VBR.

Tenant Onboarding Sequence

Step 1 - Create Company in VSPC
In VSPC, create a company record for the tenant. This is the billing and management anchor. All VSPC reporting, alarms, and billing flows from the company record.
Step 2 - Create Cloud Connect Tenant Account
In VBR (or via VSPC API), create the VCC tenant account with the storage quota, repository assignment, and credentials. VSPC links this tenant account to the company record.
Step 3 - Provide Tenant Connection Details
The tenant needs: the Cloud Gateway DNS name or IP, TCP port 6180, their tenant username and password, and the SSL certificate fingerprint of the Cloud Gateway. VSPC can send a welcome email with these details automatically if configured.
Step 4 - Tenant Adds SP in Their VBR Console
In the tenant's VBR: Backup Infrastructure, then Service Providers, then Add Service Provider. They enter the gateway address, port, and credentials. VBR validates the connection and displays the available cloud repositories. The tenant then creates backup jobs targeting the cloud repository.
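To compare the fingerprint the SP sent (Step 3) against what the gateway actually presents, a tenant can use openssl. The sketch below demonstrates the fingerprint computation on a throwaway self-signed certificate; the s_client line in the comment, which assumes the gateway serves its TLS certificate on port 6180, is how you would fetch the real one:

```shell
# In production, fetch the gateway certificate with something like:
#   openssl s_client -connect gateway.example.com:6180 </dev/null 2>/dev/null | openssl x509 > cert.pem
# Here we generate a throwaway self-signed cert just to demonstrate the fingerprint step.
workdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=gateway.example.com" \
  -keyout "$workdir/key.pem" -out "$workdir/cert.pem" 2>/dev/null

# Compute the SHA-256 fingerprint and compare it to the value from the SP
fingerprint=$(openssl x509 -in "$workdir/cert.pem" -noout -fingerprint -sha256)
echo "$fingerprint"

rm -rf "$workdir"
```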

Backup Job Requirements at the Tenant

Jobs targeting a cloud repository on a hardened SOBR must use forward incremental with synthetic full or active full. The SP should communicate this requirement during onboarding. Tenants arriving from a previous VBR environment may have reverse incremental jobs configured - these will fail when pointed at the cloud repository. VSPC will surface these failures in the management console.

Tenant Self-Service Restore via Backup Portal
VSPC exposes a self-service restore portal that tenants can use to restore individual files and VMs without contacting the SP. The portal authenticates against the tenant's company record in VSPC. Configure this during onboarding - it reduces support load and is a genuine service differentiator. Tenants who can restore themselves at 2am without opening a ticket are happy tenants.

9. DRaaS - Replication as a Service Configuration

DRaaS via Veeam Cloud Connect Replication sends VM replicas to infrastructure in the SP's environment. In a VCD-backed SP, replicas land in the tenant's VCD Org VDC. The tenant's VBR manages the replication job - same forward direction, same Cloud Gateway, different destination. The replica is a standby VM in the SP's infrastructure that can be failed over in minutes.

VCD Integration for Cloud Connect Replication

When the SP runs VCD, Cloud Connect Replication can place replicas directly into tenant Org VDCs. This requires the SP to register VCD as a Cloud Host in VBR and allocate hardware plans to tenants. The hardware plan defines the vCPU, RAM, and storage available to the tenant for running replicas. Tenants who have not purchased a hardware plan cannot use replication - storage-only tenants remain backup-only.

Register VCD as Cloud Host in VBR
# In VBR console: Backup Infrastructure > Cloud Hosts > Add Cloud Host
# Select VMware Cloud Director
# Enter the VCD URL and system org credentials
# VBR will discover all Org VDCs available for tenant replica placement

# After adding VCD as a cloud host, allocate hardware plans to tenant accounts:
# In tenant account properties > Hardware Plans tab
# Assign vCPU, RAM, and storage allocations per tenant
# This maps to the tenant's Org VDC resource envelope in VCD

Universal CDP - New in v13

Veeam v13 extended Universal CDP (Continuous Data Protection) to VCD tenants. Any Windows machine can now be a source for CDP replication to the SP's VCD infrastructure - not just VMs managed by vCenter. This significantly expands the DRaaS surface. Tenants running mixed environments (VMs plus physical Windows servers) can protect everything under a single VCC DRaaS contract instead of requiring separate solutions for different source types.

Failover Planning

A DRaaS service requires a documented failover procedure that you have actually tested. The tenant's VBR initiates failover, which powers on the replica VMs in the SP's VCD Org VDC. Network connectivity post-failover depends on the network design - stretched VLANs, VPN, or re-IP plans. Define this with each tenant at contract time. An untested failover plan is not a failover plan.
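If a tenant re-IPs on failover, a deterministic mapping rule beats a hand-maintained spreadsheet. A minimal sketch, assuming purely illustrative production (10.1.0.0/16) and DR (172.16.0.0/16) subnets with host octets preserved:

```shell
# Illustrative re-IP rule: production 10.1.0.0/16 -> DR 172.16.0.0/16,
# keeping the last two octets so host identity is preserved after failover
reip() {
  echo "$1" | sed 's/^10\.1\./172.16./'
}

reip "10.1.5.20"    # hypothetical production app server
# -> 172.16.5.20
```

The same rule, documented at contract time, is what the tenant's failover runbook and any post-failover DNS updates should both reference.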

10. VSPC v9.1 - The Management Layer

VSPC v9.1 is required for VBR v13. Current build is 9.1.0.30713 (Patch 1). If you are running VSPC v8, upgrade to v9 before upgrading VBR to v13 - the v8 management agent is incompatible with v13.

What Changed in VSPC v9

The biggest operational change in v9 is that Cloud Connect is no longer mandatory for all VSPC operations. Prior to v9, any VSPC deployment required a full VCC backend. In v9, MSPs managing customer Veeam environments directly (without Cloud Connect) can do so without standing up VCC infrastructure. For a BaaS/DRaaS SP, Cloud Connect is still required for those specific services - but the architectural coupling is looser than it was.

Tenant consolidation is the other major operational improvement. Multiple VCC tenant accounts can now be grouped under a single company record. A customer with five separate locations and five separate VCC tenant accounts no longer requires five separate company records in VSPC. Billing, reporting, and alarm management all consolidate to the company level. This was a significant pain point for MSPs with multi-site customers and it is properly fixed in v9.

VSPC Deployment Topology

VSPC is a Windows application with a SQL Server or PostgreSQL backend. The VSPC server should be separate from the VBR server. Management agents are deployed on every managed VBR server and deliver telemetry back to VSPC. The self-service portal is a web application served from the VSPC server that tenants access directly.

| VSPC Component | Role | Placement |
|---|---|---|
| VSPC Server | Management plane, web UI, API gateway | Dedicated Windows VM, separate from VBR |
| VSPC Database | Configuration, reporting data, billing records | SQL Server or PostgreSQL, dedicated instance for production |
| Management Agent | Deployed to each managed VBR server, reports telemetry | Installed by VSPC on managed servers |
| Backup Portal | Tenant self-service restore UI | Served from VSPC server, accessible to tenants |

ConnectWise and PSA Integration

VSPC v9 integrates with ConnectWise Manage for ticket creation and billing synchronization. Backup failures and missed RPO events automatically create tickets in ConnectWise, and the billing sync exports per-tenant usage data for invoicing. The default synchronization interval is 5 minutes; it is not exposed in the UI, so contact Veeam support if you need to change it.

VSPC v9.1 Known Issue: VCC Tenants Unmapped After Upgrade
Under certain conditions after upgrading to VSPC v9.1, Cloud Connect tenants may be unmapped from their VSPC company records. Verify all tenant mappings immediately after upgrading VSPC. Re-map any unlinked tenants before the next billing cycle. This is documented in Veeam KB4788.

11. SP Upgrade Order and Version Sequencing

Upgrade sequencing is the most operationally dangerous part of running a Veeam SP platform. Get it wrong and tenants get locked out of the cloud repository mid-backup-window.

Critical Rule: SP Upgrades Before Tenants
The SP must upgrade the Cloud Connect server (VBR) before any tenant upgrades their VBR to the same version. If a tenant upgrades to VBR v13 while the SP is still running v12, the tenant's VBR will attempt to use v13 protocol features that the SP's gateway does not understand. The tenant gets locked out of the cloud repository. No tenant backups, no tenant restores, until you upgrade the SP-side VBR. Communicate upgrade windows to tenants and enforce the SP-first sequence.
| Step | Component | Version | Notes |
|---|---|---|---|
| 1 | VSPC | 9.1 (build 9.1.0.30713) | Must upgrade before VBR v13 |
| 2 | SP VBR (Cloud Connect server) | 13.0.1 Patch 2 (build 13.0.1.2067) | Upgrade the SP side before tenants |
| 3 | Cloud Gateways | Auto-upgraded with VBR | Gateways update automatically when VBR upgrades |
| 4 | Hardened repo agents | Auto-upgraded with VBR | Data Mover on repos updates automatically |
| 5 | Tenant notification | N/A | Notify tenants the SP is now on v13; they may upgrade when ready |
| 6 | Tenant VBR upgrades | 12.3.2+ or 13.0.1 | v12.3.2.3617+ is backward-compatible with VCC v13 |

VBR v12.3.2.3617 and later is backward compatible with Cloud Connect v13. Tenants who stay on v12 continue to work. Tenants who upgrade to v13 also work, as long as the SP is already on v13. The only broken state is tenant-on-v13 while SP-is-on-v12.
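That compatibility rule reduces to a one-line check that is handy in an upgrade-window runbook. A sketch comparing major versions only (the function name is illustrative):

```shell
# The only broken pairing: tenant major version ahead of the SP's
check_pairing() {
  sp_major=$1
  tenant_major=$2
  if [ "$tenant_major" -gt "$sp_major" ]; then
    echo "BROKEN: upgrade the SP side first"
  else
    echo "OK"
  fi
}

check_pairing 12 13   # tenant on v13, SP on v12 -> locked out
check_pairing 13 12   # tenant on v12, SP on v13 -> supported
check_pairing 13 13   # both on v13 -> supported
```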

12. Hardening and Post-Build Checklist

Run through this checklist after the initial build before onboarding any tenants.

| Area | Check | Expected State |
|---|---|---|
| Hardened Repo | locveeam not in sudo/wheel group | groups locveeam shows no sudo |
| Hardened Repo | SSH disabled or restricted to management subnet | No public SSH access |
| Hardened Repo | iDRAC/iLO on isolated OOB network | Not reachable from production VLAN |
| Hardened Repo | XFS formatted with reflink=1,crc=1 | xfs_info shows reflink=1 |
| Hardened Repo | veeamimmureposvc running | systemctl status shows active |
| Hardened Repo | Immutability window set (minimum 30 days) | Visible in VBR repository properties |
| S3 Object Lock | Bucket created with Object Lock at creation time | aws s3api get-object-lock-configuration returns Enabled |
| S3 Object Lock | No S3 Lifecycle policies on the bucket | No lifecycle rules configured |
| S3 Object Lock | Default retention disabled (Veeam manages per-object) | No default retention rule on bucket |
| SOBR | Capacity tier offload configured | Move backups older than 30 days to S3 |
| SOBR | Immutability enabled on all performance extents | All extents show immutability enabled in VBR |
| VBR | VBR not collocated with hardened repo | Separate physical or VM hosts |
| VBR | SQL Express not used for production | SQL Standard or PostgreSQL on dedicated instance |
| VBR | Cloud Gateway SSL certificate is CA-signed | Not self-signed - tenant VBR validates this |
| Cloud Gateway | Minimum two gateways in pool | Single gateway is a single point of failure |
| VSPC | VSPC v9.1 Patch 1 installed | Build 9.1.0.30713 |
| VSPC | Tenant company records created and VCC tenants mapped | Verify after any VSPC upgrade |
| VSPC | Self-service backup portal accessible to test tenant | Login and restore test successful |
| Upgrade sequencing | Tenant communication plan documented | SP upgrade first, tenants notified, window defined |
| Testing | End-to-end test backup and restore with test tenant | Backup completes, restore completes, immutability flag set on files |
