Veeam v13: Setup and Configuration with VMware Cloud Foundation
Veeam v13 Series | Component: VBR v13, VSA v13 | Audience: Hands-on Sysadmins, Enterprise Architects
VCF 9 changed enough that instructions written for VCF 5.x don't map cleanly onto what you'll actually find in the console. The management plane moved. SDDC Manager's UI is deprecated and its workflows have shifted to VCF Operations. Workload domains are now deployed and managed through vCenter directly. If you're setting up Veeam v13 against a fresh VCF 9 environment and wondering why your muscle memory from previous versions keeps leading you to the wrong place, this is the article for you.
1. What Actually Changed in VCF 9
Before touching anything in Veeam, it's worth understanding what VCF 9 changed so the configuration decisions make sense.
| Area | VCF 5.x | VCF 9 |
|---|---|---|
| Management plane | SDDC Manager central to all operations | VCF Operations is the primary console. SDDC Manager UI deprecated in VCF 9, scheduled for full removal in a future major release. |
| Workload domain deployment | SDDC Manager wizard | VCF Operations Inventory, then vCenter directly |
| vCenter access | Managed through SDDC Manager | vCenter per Fleet Instance, accessed directly |
| Infrastructure backup | SDDC Manager Administration, Backup | VCF Operations Fleet Management, Lifecycle, Backup Settings |
| Service accounts | SDDC Manager manages vsphere.local accounts | VCF SSO with vCenter Server Linking across workload domains |
For Veeam, the practical implication is straightforward: you register each vCenter server directly with VBR, exactly the same as you would with a standalone vSphere environment. There's no VCF management API layer between Veeam and vCenter. If you have three workload domains, you add three vCenter connections.
Before deploying, check KB2443 at veeam.com/kb2443 for the current vSphere compatibility matrix. Veeam confirmed vSphere 9.0 readiness during the pre-release period and the formal support statement is updated there as testing against GA builds completes.
2. Deploy the Veeam Software Appliance
The VSA is the right starting point for a fresh VCF 9 deployment. You get a hardened Linux appliance with DISA STIG compliance, no Windows OS to manage, automatic patching, and the Security Officer account architecture that fits naturally with the zero trust posture that VCF environments tend to require.
What You Need Before You Start
- VCF 9 management domain deployed and vCenter accessible
- A vSAN or NFS datastore with at least 250 GB available for the VSA VM. Check the current system requirements on the Veeam help center for exact disk configuration.
- A dedicated portgroup for Veeam management traffic on its own VLAN, separated from production VM traffic
- DNS resolution working for the hostname you're going to assign. The VSA needs a resolvable FQDN for certificate validation and for Veeam ONE connectivity later. An IP address won't cut it.
- The VSA OVA downloaded from the Veeam portal
- Your Veeam license file
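The DNS requirement is worth verifying before you deploy rather than after a certificate error. A minimal preflight sketch using only the Python stdlib; the hostname in the example is hypothetical, substitute your planned VSA FQDN:

```python
import socket

def check_fqdn(fqdn):
    """Preflight: confirm the FQDN resolves, and whether reverse DNS
    maps back to the same name (useful, though not strictly required)."""
    result = {"forward": None, "reverse": None, "match": False}
    try:
        ip = socket.gethostbyname(fqdn)
        result["forward"] = ip
        rev_name, _, _ = socket.gethostbyaddr(ip)
        result["reverse"] = rev_name
        result["match"] = rev_name.lower().rstrip(".") == fqdn.lower().rstrip(".")
    except OSError:
        # Either forward or reverse lookup failed; whatever resolved so far stays.
        pass
    return result

# Hypothetical hostname -- replace with your VSA FQDN:
# check_fqdn("vsa01.corp.example.com")
```

If the forward lookup comes back empty, fix DNS before deploying the OVA, not after.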
Deploy from OVA
- In vCenter, Deploy OVF Template. Point to the VSA OVA. Select your management domain cluster as the compute resource. Use Thin Provision on vSAN.
- On the network mapping step, map to your dedicated Veeam management portgroup. Don't deploy the VSA onto the same portgroup as production VMs.
- On the Customize Template step, set the hostname, IP, subnet, gateway, DNS servers, and NTP server. Get these right before you deploy. Changing the hostname after first boot requires extra steps you don't want to deal with.
- Power on. First boot initialization takes roughly 5 minutes. The VSA is ready when the Veeam web UI responds at https://your-vsa-hostname:9419.
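If you're scripting the rollout, you can poll port 9419 instead of refreshing a browser. A small stdlib-only sketch; the timeout values are assumptions, not Veeam guidance:

```python
import socket
import time

def wait_for_port(host, port, timeout_s=600, interval_s=10):
    """Poll a TCP port until it accepts connections or the deadline passes."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return True  # something is listening; the web UI is coming up
        except OSError:
            time.sleep(interval_s)
    return False

# wait_for_port("vsa01.corp.example.com", 9419)  # hypothetical hostname
```

A TCP connect only proves a listener exists, not that the UI is fully initialized, but it's enough to know when to start the browser.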
First Login: Accounts and Security Officer
Don't open the VBR web UI yet. Start with the Host Management console at port 10443. This is where you configure the two accounts that control the appliance itself, and it has to happen before anything else.
- Browse to https://your-vsa-hostname:10443. Accept the self-signed certificate warning. Log in with the default veeamadmin credentials.
- You're immediately prompted to change the password and set up MFA. Complete both. The MFA setup shows a QR code for your authenticator app. Do this before proceeding to anything else.
- Log out of veeamadmin. Log in as veeamso (the Security Officer account). It also requires a password change and MFA setup on first login. Complete both.
- Store both the veeamadmin and veeamso credentials and MFA recovery codes in your enterprise password manager. These are the two most critical credentials in the environment. Treat them accordingly.
Apply the License
- In the VBR web UI at port 9419, log in with veeamadmin.
- Main menu, License, upload your license file. Verify the instance count and expiry match your entitlement.
- If you're on Veeam Universal License, confirm the count covers your full VCF VM inventory plus growth headroom. VCF environments tend to grow faster than the initial sizing assumes.
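The growth headroom point is easy to quantify. A sketch of the arithmetic; the growth rate is an assumption you'd replace with your own number:

```python
import math

def vul_instances_needed(current_vms, annual_growth_rate, years=1):
    """Project the VM count forward and round up to whole instances."""
    projected = current_vms * (1 + annual_growth_rate) ** years
    return math.ceil(projected)

# 400 VMs at an assumed 25% annual growth, sized for one renewal cycle:
# vul_instances_needed(400, 0.25)  -> 500
```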
3. Connect Veeam to VCF 9 vCenter
In VCF 9 each Fleet Instance has its own vCenter. Register them with VBR individually. There's no single VCF API endpoint to point Veeam at.
Create the Service Account First
Don't connect Veeam to vCenter with the vCenter administrator account or the VCF SSO administrator. Those accounts have far more permissions than Veeam needs. A compromised VBR service account that can act as vCenter administrator is a much worse situation than a compromised account with backup only permissions.
- In vCenter, go to Administration, Single Sign On, Users and Groups. Select the vsphere.local domain and create a new user. Something like svc-veeam-backup.
- Assign the minimum permissions Veeam requires. The exact list is published in the Veeam help center under Permissions for VMware vSphere. It covers: Virtual Machine (snapshot operations, guest operations, configuration), Datastore (allocate space, browse, low level file operations), Network (assign network), Host (storage partition configuration), Global (cancel task, disable and enable methods), and Cryptographic Operations if you're backing up encrypted VMs.
- Apply permissions at the vCenter root so they propagate to all datacenters and clusters.
- Add the account to the VBR credentials manager: Settings, Credentials, Standard Account.
Add the Management Domain vCenter
- VBR web UI, Backup Infrastructure, Managed Servers, Add Server, VMware vSphere.
- Enter the FQDN of the management domain vCenter. Use the FQDN, not the IP. VCF 9 relies on certificate validation, and IP-based connections cause certificate errors.
- Select the service account credentials you just created.
- Accept the SSL certificate fingerprint. Verify it matches the certificate on your vCenter before clicking Accept. This is one of those steps people click through without thinking. Don't.
- VBR scans the vCenter inventory and populates the hierarchy. Confirm the expected VMs are visible before moving on.
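For the fingerprint step, it's safer to compute the fingerprint out-of-band and compare strings than to eyeball the dialog. A stdlib sketch; note it computes SHA-256, so if the dialog in front of you shows a different algorithm, match that one instead:

```python
import hashlib
import ssl

def sha256_fingerprint(der_bytes):
    """Colon-separated SHA-256 fingerprint of a DER-encoded certificate."""
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def fetch_vcenter_fingerprint(host, port=443):
    """Grab the certificate the server actually presents and fingerprint it."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return sha256_fingerprint(der)

# fetch_vcenter_fingerprint("vcenter-mgmt.corp.example.com")  # hypothetical FQDN
```

Run it from a machine other than the VBR server if you can. Two independent views of the same certificate make a man-in-the-middle much harder to miss.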
Add Workload Domain vCenters
Repeat the registration process for each workload domain vCenter. In VCF 9 they're linked via SSO federation, but from Veeam's perspective each one is a separate connection. Three workload domains means three vCenter connections in VBR.
The vsphere.local service account you created is visible across the VCF instance via SSO federation. You don't need to create separate accounts per vCenter, but you do need to assign the Veeam permissions in each vCenter independently. The account exists everywhere. The permissions don't automatically follow it.
4. Deploy Backup Proxies
The proxy reads data from ESXi hosts and sends it to the repository. In a VCF 9 vSAN environment, proxy placement decisions have a real impact on backup performance and on whether you're doing HotAdd or falling back to network transport.
Which Proxy Approach for VCF 9
| Proxy Type | How It Works | Best Fit |
|---|---|---|
| Virtual proxy (HotAdd) | Proxy VM reads VMDK via SCSI HotAdd on the same ESXi host, no network I/O for data | Best for vSAN backed workloads where you want to avoid backup traffic on the network |
| Network proxy (NBD) | Data read over the network via the ESXi NFC service | Works everywhere, lower throughput than HotAdd, no placement constraints |
| Veeam Infrastructure Appliance (VIA) | Pre-hardened JeOS appliance, same base as VSA | Recommended for new VCF 9 deployments. No Linux hardening work required, automated patching. |
For most VCF 9 environments, deploy the VIA as your proxy. It's the same hardened JeOS base as the VSA, installs from its own ISO, and handles proxy and mount server roles without any Linux hardening work on your part.
Deploy the VIA
- Download the VIA ISO from the Veeam portal. The VIA and VSA are separate ISOs despite sharing the JeOS base.
- Boot the VIA ISO on a VM in your target workload domain cluster. On the mode selection screen, choose Infrastructure Appliance for the standard proxy and mount server role. If you need a hardened repository instead, choose that option but note that a VIA configured as a hardened repository can't simultaneously serve as a mount server.
- Complete the hostname, network, and account setup the same way as the VSA initial configuration. The VIA has the same Host Administrator and Security Officer account structure.
- In the VBR web UI, go to Backup Infrastructure, Backup Proxies, Add Proxy. Select VMware Backup Proxy, enter the VIA hostname, select its Host Administrator credentials.
- Set maximum concurrent tasks. A conservative starting point is one concurrent task per 2 CPU cores assigned to the VIA. You can increase it later based on what you observe.
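The sizing rule in the last step, expressed as code so it's unambiguous:

```python
def max_concurrent_tasks(vcpu, cores_per_task=2):
    """Conservative starting point: one concurrent task per 2 vCPUs,
    never below 1. Increase later based on observed proxy load."""
    return max(1, vcpu // cores_per_task)

# max_concurrent_tasks(8) -> 4
```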
HotAdd and DRS: The Problem You'll Hit
HotAdd requires the proxy VM and the source VM to be on the same ESXi host. vSphere DRS can move VMs between hosts between backup runs. When the proxy and source VM end up on different hosts, the backup falls back to NBD. In an active DRS environment this happens regularly.
You have two options. Use DRS affinity rules to keep proxy VMs on the same hosts as the VMs they're primarily responsible for. Or accept that some backup runs fall back to NBD and size your network for it. Most environments choose the second option rather than manage affinity rules that fight against DRS's job. Just know it's happening and account for it in your backup window sizing.
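If you take the second option, the sizing question becomes: how much sustained throughput does an all-NBD run need to finish inside the window? A back-of-envelope sketch, assuming decimal units and no credit for compression or change-block tracking, so it's a worst-case full:

```python
def required_throughput_mbps(dataset_gb, window_hours):
    """Sustained network throughput (megabits/s) needed to move a full
    dataset through the backup window if every job falls back to NBD."""
    megabits = dataset_gb * 8 * 1000  # GB -> megabits, decimal units
    return round(megabits / (window_hours * 3600), 1)

# 10 TB full backup through an 8-hour window:
# required_throughput_mbps(10_000, 8) -> 2777.8
```

Incrementals with change-block tracking need far less, but the first full and any active-full runs are what the network has to survive.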
5. Configure the Repository
For a VCF 9 environment your realistic repository options are: a VIA configured as a hardened Linux repository, a dedicated external NAS or SAN over NFS or iSCSI, or object storage as a primary or capacity tier target. Most environments combine two of these.
One Thing You Shouldn't Do
Don't put your hardened repository on vSAN. If vSAN is compromised, the repository storage is compromised with it. A hardened repository on dedicated external storage survives an event that takes out your vSAN cluster. One on vSAN doesn't. This is the kind of thing that seems fine in a capacity planning spreadsheet and becomes a problem in an actual incident.
Add the Repository to VBR
- VBR web UI, Backup Infrastructure, Backup Repositories, Add Repository.
- Select Direct Attached Storage for a VIA repo. Select Network Attached Storage for NFS or SMB.
- For a VIA hardened repository, select Linux, enter the VIA hostname, use the VIA Host Administrator credentials. VBR connects and registers it.
- Enable immutability and set the retention period. 30 days is a reasonable baseline. The immutability window means backup files can't be deleted for 30 days after creation regardless of who requests it.
- Configure the capacity limit. Leave at least 20 percent headroom above your expected data footprint. Full synthetic backup jobs need breathing room and you don't want retention pruning to fight for space with active backup jobs.
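The 20 percent headroom rule as a quick check you can drop into a sizing script:

```python
def repo_capacity_ok(expected_gb, capacity_gb, headroom=0.20):
    """True if the expected data footprint leaves the recommended
    free headroom for synthetic fulls and retention pruning."""
    return expected_gb <= capacity_gb * (1 - headroom)

# repo_capacity_ok(75_000, 100_000) -> True  (75 TB on 100 TB keeps 20% free)
# repo_capacity_ok(85_000, 100_000) -> False
```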
Object Storage for the Capacity Tier
If you have S3-compatible object storage available, configure a Scale Out Backup Repository with a performance tier (the VIA hardened repo) and a capacity tier (object storage). Older restore points move to object storage automatically based on the operational window you set. You get two media types, an offsite copy, and automated data movement without managing it manually.
Enable S3 Object Lock on the capacity tier bucket. That's immutability at the object storage layer, independent of VBR. Even a fully compromised VBR server can't delete objects during the retention period when Object Lock is active.
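If your object storage is AWS S3 and you manage it with boto3, the Object Lock configuration body looks like the sketch below. The bucket name is hypothetical, and on AWS the bucket itself must have been created with Object Lock enabled; it can't be switched on retroactively. Some S3-compatible platforms differ, so check your vendor's documentation:

```python
def object_lock_config(days):
    """Build the ObjectLockConfiguration body for PutObjectLockConfiguration.
    COMPLIANCE mode means even the bucket owner can't shorten retention."""
    return {
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": days}},
    }

# With boto3 (hypothetical bucket name, bucket created with Object Lock on):
# import boto3
# boto3.client("s3").put_object_lock_configuration(
#     Bucket="veeam-capacity-tier",
#     ObjectLockConfiguration=object_lock_config(30),
# )
```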
6. Create Backup Jobs
With the VSA deployed, vCenters registered, proxies added, and repositories configured, it's time to actually protect something.
- VBR web UI, Jobs, Backup, Create. Select VMware vSphere.
- Name the job to include the workload domain. WD01-Tier1-Daily is more useful six months from now than Backup Job 1. Ambiguous job names become a real problem once the list grows.
- Add VMs. You can add individual VMs, folders, resource pools, or entire clusters. Adding the workload domain cluster as a container means new VMs added to the cluster are automatically protected without job edits. Start here and exclude what shouldn't be protected rather than manually adding what should be.
- On the Guest Processing tab, enable application aware processing for VMs running SQL, Exchange, or AD. This requires guest credentials. Use a domain account with local admin on the target VMs, added to the VBR credentials manager.
- Enable job level encryption on the Advanced tab. Select your encryption password. If you have Enterprise Manager connected, key escrow is automatic. If you don't have EM connected, go connect it before you encrypt production jobs.
- Set your schedule and retention. Daily backup with 14 to 30 day retention is a reasonable starting point for most workloads. Adjust for your actual RPO requirements.
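Job naming conventions only work if they're enforced. A small validator sketch for the WD01-Tier1-Daily pattern used above; the convention itself is an example, so adjust the regex to whatever scheme you actually adopt:

```python
import re

# Assumed convention: <workload domain>-<tier>-<frequency>
JOB_NAME = re.compile(r"^WD\d{2}-Tier[1-3]-(Daily|Weekly|Monthly)$")

def valid_job_name(name):
    """Enforce a naming convention so the job list stays readable at scale."""
    return JOB_NAME.fullmatch(name) is not None

# valid_job_name("WD01-Tier1-Daily")  -> True
# valid_job_name("Backup Job 1")      -> False
```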
A Few vSAN Specifics
vSAN deduplication and compression on the production datastore doesn't carry over to Veeam backups. Veeam applies its own dedup and compression at the repository level. Don't be surprised if the backup data size doesn't match what you expected based on vSAN logical capacity numbers. They're measuring different things.
vSAN snapshots consume capacity on the vSAN datastore during the backup window. Monitor vSAN capacity during your first few backup runs before you declare the deployment done. More than a few environments have discovered capacity problems during the first full backup that weren't visible in the initial sizing.
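A deliberately rough way to estimate that transient snapshot footprint before the first run: change rate multiplied by how long the snapshot stays open. Real consumption depends on vSAN object layout and storage policy overhead, so treat this as a floor, not a forecast:

```python
def snapshot_growth_gb(change_rate_gb_per_hr, backup_hours):
    """Rough transient vSAN capacity consumed by a snapshot held open
    for the duration of a backup: change rate x hold time."""
    return change_rate_gb_per_hr * backup_hours

# A VM churning 20 GB/hr with a snapshot held 3 hours -> 60 GB transient use
```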
vSAN encrypted VMs are backed up in their decrypted form by Veeam when you enable job level encryption. The backup file is re-encrypted by Veeam with your job password. The vSAN encryption key is not embedded in the backup. That's the correct behavior. Your restore path doesn't depend on vSAN encryption keys remaining accessible.
7. Verify Before You Declare Done
A backup job showing Success is not a verified backup environment. These steps are what separate environments that can actually restore from environments that think they can.
- Run the first jobs manually. Don't wait for the scheduled window. Start the first run now, watch it complete, and read the job session details. Look specifically for warnings about proxy selection, HotAdd falling back to NBD, and application aware processing failures. These are the issues you want to find now.
- Run Instant Recovery on a non-production VM. Pick something real but non-critical. Run Instant Recovery from the most recent restore point. Verify the VM powers on, network connectivity works, the application inside responds. Then stop the session. This is the only test that actually confirms the full restore path works end to end.
- Test a file level restore. Mount a backup, restore a test file. Confirms guest indexing and the file level restore path are working.
- Verify encryption is actually active. Try to restore an encrypted backup without the correct password. Confirm it fails with the right error. Encryption that's silently disabled looks identical to encryption that's working, until you need it.
- Run the Security and Compliance Analyzer. Home view, Security and Compliance, Analyze Now. Resolve every finding before the environment goes into production use. Not after.
8. VCF 9 vs Standalone vSphere: What's Actually Different
If you've set up Veeam against standalone vSphere before, a few things in VCF 9 will catch you out.
- The vsphere.local domain is shared across the VCF instance. A service account created in the management domain vCenter is visible in workload domain vCenters via SSO federation. You don't need separate accounts per vCenter. But you do need to assign Veeam permissions in each vCenter independently. The account exists everywhere automatically. The permissions don't.
- VCF 9 uses VCF Operations for platform management. VCF component backups (the vCenter appliance itself, NSX managers, VCF automation components) are configured in VCF Operations under Fleet Management, Lifecycle, Backup Settings. Not in Veeam. Veeam backs up the VMs running on VCF. VCF Operations backs up the VCF management plane configuration. These are separate things and both need to be configured.
- NSX in VCF 9 is managed as a workload domain service. You don't need to do anything special in Veeam for NSX overlay networks. Veeam backs up the VM regardless of what network overlay is underneath it. NSX configuration backup is handled by VCF Operations separately.
- vSphere Tags matter more in VCF 9 for workload placement and policy. Veeam can use vSphere Tags as job inclusion criteria. Tag based backup policies mean new VMs tagged for a protection tier are automatically covered without manual job edits when they're provisioned. Set this up from the start and save yourself the ongoing maintenance work.
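The tag-to-job mapping behind a tag-based policy is worth writing down explicitly, even if it only lives in documentation. A sketch with hypothetical tag and job names; the precedence rule (tier 1 wins when a VM carries multiple backup tags) is an assumption you'd set to match your own policy:

```python
# Hypothetical tag-to-policy mapping: which vSphere tag feeds which job.
TAG_TO_JOB = {
    "backup-tier1": "WD01-Tier1-Daily",
    "backup-tier2": "WD01-Tier2-Weekly",
}

def job_for_tags(vm_tags, default=None):
    """Return the backup job for a VM's tags. Checked in priority order,
    so tier 1 wins if a VM carries more than one backup tag."""
    for tag in ("backup-tier1", "backup-tier2"):
        if tag in vm_tags:
            return TAG_TO_JOB[tag]
    return default  # untagged VMs fall through; decide if that's acceptable
```

The fall-through case matters: decide up front whether an untagged VM means "not protected" or "flag it for review", and monitor for it either way.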
What You've Completed
- Deployed the VSA as an OVA into the VCF 9 management domain cluster, configured veeamadmin and veeamso with MFA, and applied the license.
- Created a dedicated vsphere.local service account with minimum required permissions and registered the management domain and workload domain vCenters with VBR individually.
- Deployed VIA proxy instances into workload domain clusters and understood the HotAdd and DRS tradeoff you'll need to manage.
- Configured a hardened Linux repository with immutability enabled on dedicated non-vSAN storage.
- Created backup jobs with encryption, application aware processing, and cluster container inclusion so new VMs are protected automatically.
- Verified the deployment with Instant Recovery, a file level restore, encryption validation, and the Security and Compliance Analyzer.