Setting Up Veeam Kasten on OpenShift -- Operator, Helm, NFS Location Profiles, and DR Export


Veeam Kasten Series | Component: Veeam Kasten 8.x on OpenShift 4.16 through 4.18+ | Audience: Platform Engineers, OpenShift Administrators, Kubernetes Backup Architects

This article targets OCP 4.18.25 specifically and covers everything you need to get Veeam Kasten running correctly in that environment. The version referenced throughout is Kasten 8.x, which is the current release series as of mid-2025 with 8.5.x being the latest stable builds at time of writing.

There are two ways to install Kasten on OpenShift and they are genuinely different in how you configure and maintain the deployment. The Operator path installs through OLM and uses a K10 Custom Resource to drive configuration after the operator is running. The Helm path installs directly and uses values files and flags. Both end up running the same pods in the same kasten-io namespace. The reason people run into trouble on OpenShift specifically is that both paths require steps that generic Kubernetes guides completely skip over: Security Context Constraints, OpenShift OAuth setup via a dedicated service account, and Route configuration. Miss any of those and you get a deployment that technically runs but is either inaccessible or stuck on an authentication error when you try to open the dashboard. That is exactly what Subhadip ran into, and this article covers it step by step.


1. Prerequisites and Pre-Flight Checks

What You Need Before You Start

Requirement | Minimum | Notes
OpenShift version | 4.16, 4.17, or 4.18+ | Kasten 8.x explicitly added support for OCP 4.18 in the release notes. OCP 4.14 support was formally removed in Kasten 8.0.12. Clusters on OCP 4.14 or earlier should not upgrade to Kasten 8.0.8 or later due to a missing SelfSubjectReview API that causes dashboard authentication failures.
Cluster admin access | Required | SCC creation, namespace creation, and ClusterRoleBinding all require cluster-admin. Log in with oc login as cluster-admin before starting anything.
CSI driver with VolumeSnapshot support | Required | Kasten's snapshot mechanism depends entirely on the Kubernetes VolumeSnapshot API. ODF, NetApp Trident, and HPE CSI all qualify. Check with oc get volumesnapshotclass before going further.
Annotated VolumeSnapshotClass | Required | At least one VolumeSnapshotClass needs the annotation k10.kasten.io/is-snapshot-class=true. Section 2 covers this.
Default StorageClass | Required | Kasten stores its catalog and internal state in persistent volumes. A default StorageClass must exist or you need to specify one at install time.
NFS reachability | Situational | If you are using NFS FileStore as a location profile, the export must be reachable from all cluster nodes and mountable with ReadWriteMany. Test connectivity from each node before configuring the profile, not after.
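The default StorageClass requirement is the easiest one to verify up front. A quick sketch, using the same jsonpath style as the rest of this article:

```shell
# Show which StorageClass, if any, carries the default annotation
oc get storageclass \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}{"\n"}{end}'

# If nothing prints "true", mark one as default (class name is a placeholder)
# oc patch storageclass <your-storage-class> \
#   -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```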

Running the Pre-Flight Check with k10tools

Before installing anything, run the Kasten pre-flight check. It validates CSI capabilities, VolumeSnapshot API availability, RBAC permissions, and storage class configuration. Fix every reported failure before proceeding. A failure you ignore at this stage will show up as something much harder to diagnose later in the installation or during your first backup attempt.

bash: Create the namespace and run the pre-flight check
oc new-project kasten-io \
  --description="Kubernetes data management platform" \
  --display-name="Veeam Kasten"

# Run the pre-flight check using k10tools
# This validates the cluster without installing anything
curl https://docs.kasten.io/downloads/latest/tools/k10_primer.sh | bash

# For targeted storage class validation:
curl https://docs.kasten.io/downloads/latest/tools/k10_primer.sh | \
  bash /dev/stdin --storageclass hpe-csi-driver

# For air-gapped environments:
# curl https://docs.kasten.io/downloads/latest/tools/k10_primer.sh | \
#   bash /dev/stdin -i your-registry.local/k10tools:latest

k10tools checks VolumeSnapshot API availability by listing VolumeSnapshotClasses. If your CSI driver is installed but no VolumeSnapshotClass exists yet, it will flag this. Create the VolumeSnapshotClass first as covered in Section 2, then re-run the pre-flight check. It does not modify any cluster state and is safe to run multiple times.

2. CSI Snapshot Prerequisites

Kasten uses the Kubernetes VolumeSnapshot API for every snapshot operation. Before you install Kasten, you need at least one VolumeSnapshotClass that is annotated for Kasten to use. Without this annotation, Kasten discovers and catalogs your applications but fails at the snapshot step on every backup attempt.

bash: Annotate VolumeSnapshotClasses for Kasten
# List what is available
oc get volumesnapshotclass

# Annotate the class for your CSI driver
# HPE CSI:
oc annotate volumesnapshotclass hpe-snapshot-class \
  k10.kasten.io/is-snapshot-class=true

# ODF / Ceph RBD:
oc annotate volumesnapshotclass ocs-storagecluster-rbdplugin-snapclass \
  k10.kasten.io/is-snapshot-class=true

# ODF / CephFS:
oc annotate volumesnapshotclass ocs-storagecluster-cephfsplugin-snapclass \
  k10.kasten.io/is-snapshot-class=true

# Verify
oc get volumesnapshotclass \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.k10\.kasten\.io/is-snapshot-class}{"\n"}{end}'

If you have multiple storage backends with separate VolumeSnapshotClasses, annotate each one you want Kasten to use. Kasten matches by CSI driver name, not by class name. If you annotate only one class but your applications use PVCs from a different driver, those applications will not be snapshotable.
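Because matching happens by driver rather than by name, it helps to print both sides next to each other. A quick sketch -- compare the provisioner of each StorageClass your PVCs use against the driver of each annotated VolumeSnapshotClass:

```shell
# StorageClass name and its CSI provisioner
oc get storageclass \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.provisioner}{"\n"}{end}'

# VolumeSnapshotClass name and its CSI driver -- every driver that backs
# PVCs you want protected should appear here with the Kasten annotation
oc get volumesnapshotclass \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.driver}{"\n"}{end}'
```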

3. Installation Path A: Operator via OperatorHub

The Operator path installs Kasten through OLM. In OperatorHub you will typically see two listings: Veeam Kasten (Free), the certified free edition that supports up to 5 nodes without a license key, and an enterprise operator edition. Following Red Hat Marketplace's closure in April 2025, which enterprise listings appear in OperatorHub may vary depending on your OCP version and catalog configuration. If you were previously on the kasten-k10-operator-paygo-rhmp-bundle or kasten-k10-operator-term-rhmp-bundle operators and see licensing or catalog availability issues after April 2025, refer to KB4774 for current guidance.

Path A: Operator Install

Step 1: Use k10tools to Prepare the OpenShift Cluster

This is the step that saves you the most pain. Kasten provides a dedicated k10tools openshift prepare-install command that does three things automatically: it extracts the CA certificate from the cluster, stores it as a ConfigMap named custom-ca-bundle-store in the kasten-io namespace, and generates the correct Helm install command for your environment. Running this before creating the K10 CR or running helm install means you do not have to manually extract router CA certificates or figure out the right CA bundle path.

bash: Download k10tools and run the OpenShift prepare-install command
# Download k10tools for your architecture
# Replace linux-amd64 with linux-arm64 or linux-ppc64le if needed
curl -Lo k10tools https://docs.kasten.io/downloads/latest/tools/k10tools-linux-amd64
chmod +x k10tools

# Prepare the cluster -- extracts CA certs, creates ConfigMap, outputs helm command
./k10tools openshift prepare-install -n kasten-io

# The command outputs something like:
# Created ConfigMap custom-ca-bundle-store in namespace kasten-io
# Suggested helm install command:
# helm install k10 kasten/k10 --namespace=kasten-io \
#   --set scc.create=true \
#   --set route.enabled=true ...

# Verify the CA bundle ConfigMap was created
oc get configmap custom-ca-bundle-store -n kasten-io

Step 2: Create the k10-dex-sa Service Account

This is the step that catches most people on operator-based installs. Kasten uses Dex as an identity broker between the dashboard and OpenShift OAuth. Dex needs a dedicated service account annotated with the OAuth redirect URI before the K10 CR is applied. If you create the K10 CR first and add the service account after, the Dex authentication chain does not initialize correctly and you end up with an inaccessible dashboard and a crashing auth-svc pod. Do this step before you touch the K10 CR.

One thing worth knowing: for a pure operator-based install you do not have to manually extract the service account token. The official documentation states that after the service account is created, Kasten automatically generates the corresponding client secret and you can omit clientSecret from the K10 CR entirely. The manual token extraction steps below are included because they are required for the Helm path and because many people prefer to be explicit about what is being set. Both approaches work correctly. If you are Helm-only, extracting the token manually is required. If you are operator-only and want the simpler path, create the service account with the annotation and leave clientSecret out of the K10 CR.
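For the operator-only simple path, the minimum is the annotated service account by itself. A sketch, assuming the route hostname pattern used throughout this article:

```shell
# Operator-only minimum: create and annotate the service account.
# The operator generates the client secret automatically, so clientSecret
# can be omitted from the K10 CR entirely.
APPS_BASE_DOMAIN=$(oc get ingress.config cluster -o jsonpath='{.spec.domain}')

oc create serviceaccount k10-dex-sa -n kasten-io
oc annotate serviceaccount k10-dex-sa -n kasten-io \
  "serviceaccounts.openshift.io/oauth-redirecturi.dex=https://k10-route-kasten-io.${APPS_BASE_DOMAIN}/k10/dex/callback"
```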

bash: Create the k10-dex-sa service account and token secret
APPS_BASE_DOMAIN=$(oc get ingress.config cluster \
  -o jsonpath='{.spec.domain}')
API_URL=$(oc get infrastructure cluster \
  -o jsonpath='{.status.apiServerURL}')

echo "Apps domain: ${APPS_BASE_DOMAIN}"
echo "API URL: ${API_URL}"

# Create the service account with OAuth redirect URI annotation
cat <<EOF | oc apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k10-dex-sa
  namespace: kasten-io
  annotations:
    serviceaccounts.openshift.io/oauth-redirecturi.dex: https://k10-route-kasten-io.${APPS_BASE_DOMAIN}/k10/dex/callback
EOF

# Create a long-lived token secret bound to the service account
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: k10-dex-sa-secret
  namespace: kasten-io
  annotations:
    kubernetes.io/service-account.name: k10-dex-sa
type: kubernetes.io/service-account-token
EOF

# Extract the token -- this is the clientSecret value for the Helm path
DEX_TOKEN=$(oc get secret k10-dex-sa-secret -n kasten-io \
  -o jsonpath='{.data.token}' | base64 -d)

Step 3: Install the Operator from OperatorHub

  1. In the OpenShift Console, navigate to Operators > OperatorHub and search for Veeam Kasten. Select the edition you need: Veeam Kasten (Free) for up to 5 nodes, or the enterprise Veeam Kasten operator if you have an enterprise license.
  2. Click Install. Set the Update Channel to stable. Set Installation Mode to A specific namespace on the cluster and select kasten-io.
  3. Set Update Approval to Manual for production. This prevents automatic updates from running during backup windows. Automatic is fine for dev and test clusters.
  4. Click Install and wait for the operator pod to reach Running state. When it does, click View Operator.
  5. Once the operator is running, you will see a Console Plugin section on the operator details page. Enable the Kasten Console Plugin here to get Kasten status visible directly from the OpenShift Console without switching to the Kasten dashboard separately.
bash: Verify the operator and K10 CRD are available
# Confirm operator pod is running
oc get pods -n kasten-io

# Confirm the K10 CRD is available
oc get crd k10s.apik10.kasten.io

Step 4: Create the K10 Custom Resource

The K10 CR is what actually deploys Kasten after the operator is installed. Think of it as the equivalent of a Helm values file. The operator watches for this CR and uses its spec to configure and launch all the Kasten pods. The example below includes all the required OpenShift-specific fields. Notice that cacertconfigmap.name references the ConfigMap created by k10tools in Step 1 and insecureCA should be set to false when that ConfigMap is present.

YAML: K10 Custom Resource for OpenShift 4.18 with operator-based install
apiVersion: apik10.kasten.io/v1alpha1
kind: K10
metadata:
  name: k10
  namespace: kasten-io
spec:
  # Required for OpenShift: creates the k10-scc SecurityContextConstraint
  scc:
    create: true

  # OpenShift Route with TLS -- required for dashboard access
  route:
    enabled: true
    tls:
      enabled: true

  # OpenShift OAuth authentication
  auth:
    openshift:
      enabled: true
      serviceAccount: k10-dex-sa
      # Paste the value of $DEX_TOKEN here
      clientSecret: "<DEX_TOKEN>"
      dashboardURL: "https://k10-route-kasten-io.<APPS_BASE_DOMAIN>/k10/"
      openshiftURL: "<API_URL>"
      # Set insecureCA to false when the custom-ca-bundle-store ConfigMap is present
      # Set to true only if you have not run k10tools prepare-install
      insecureCA: false
      cacertconfigmap:
        name: custom-ca-bundle-store

  # Uncomment to specify a non-default StorageClass for Kasten catalog storage
  # global:
  #   persistence:
  #     storageClass: "your-storage-class"
bash: Apply the K10 CR and monitor startup
# Replace placeholder values in the YAML first, then apply
oc apply -f k10-cr.yaml -n kasten-io

# Monitor pods -- full startup typically takes 3 to 5 minutes
oc get pods -n kasten-io -w

# Get the dashboard Route URL
oc get route k10-route -n kasten-io -o jsonpath='{.spec.host}'
# Dashboard: https://<route-host>/k10/

# Verify the SCC was created
oc get scc k10-scc

Modifying the K10 CR After Deployment

When you need to change the Kasten configuration after the initial deployment, patch the K10 CR rather than editing it directly in the console editor. Direct YAML edits in the console can introduce formatting issues that confuse the operator reconciliation loop.

bash: Patch the K10 CR to update a setting
cat <<EOF > /tmp/k10-patch.yaml
spec:
  scc:
    create: true
  route:
    enabled: true
    tls:
      enabled: true
  auth:
    openshift:
      enabled: true
      serviceAccount: k10-dex-sa
      clientSecret: "<DEX_TOKEN>"
      dashboardURL: "https://k10-route-kasten-io.<APPS_BASE_DOMAIN>/k10/"
      openshiftURL: "<API_URL>"
      insecureCA: false
      cacertconfigmap:
        name: custom-ca-bundle-store
EOF

kubectl patch k10s.apik10.kasten.io k10 \
  -n kasten-io \
  --type=merge \
  --patch-file /tmp/k10-patch.yaml

4. Installation Path B: Helm

Path B: Helm Install

The Helm path skips OLM entirely and installs Kasten directly. It gives you complete control over every configuration value at install time and is the preferred approach when the deployment is managed through ArgoCD or OpenShift GitOps. The same OpenShift prerequisites apply: k10tools prepare-install and the k10-dex-sa service account creation both need to happen before you run helm install.

Step 1: Run k10tools prepare-install and Create the Service Account

These two steps are identical to the Operator path. Run k10tools openshift prepare-install and create the k10-dex-sa service account and token secret exactly as described in Section 3 Steps 1 and 2. The k10tools command will output a suggested helm install command based on your cluster's configuration, which you can use directly or adapt into a values file.

Step 2: Add the Helm Repository

bash: Add the Kasten Helm repo and verify availability
helm repo add kasten https://charts.kasten.io/
helm repo update

# Check the latest available chart version
helm search repo kasten/k10 --versions | head -5

Step 3: Install with Full OpenShift Configuration

bash: helm install with all required OpenShift flags
APPS_BASE_DOMAIN=$(oc get ingress.config cluster \
  -o jsonpath='{.spec.domain}')
API_URL=$(oc get infrastructure cluster \
  -o jsonpath='{.status.apiServerURL}')
DEX_TOKEN=$(oc get secret k10-dex-sa-secret -n kasten-io \
  -o jsonpath='{.data.token}' | base64 -d)

helm install k10 kasten/k10 \
  --namespace=kasten-io \
  --set scc.create=true \
  --set route.enabled=true \
  --set route.tls.enabled=true \
  --set auth.openshift.enabled=true \
  --set auth.openshift.serviceAccount=k10-dex-sa \
  --set auth.openshift.clientSecret="${DEX_TOKEN}" \
  --set auth.openshift.dashboardURL="https://k10-route-kasten-io.${APPS_BASE_DOMAIN}/k10/" \
  --set auth.openshift.openshiftURL="${API_URL}" \
  --set auth.openshift.insecureCA=false \
  --set cacertconfigmap.name=custom-ca-bundle-store

oc get pods -n kasten-io -w

Values File for GitOps Deployments

For GitOps workflows, store the configuration in a values file. The clientSecret should come from a sealed secret or external secrets operator in production rather than being stored in plaintext in the repository.

YAML: kasten-values.yaml for GitOps-managed deployment
scc:
  create: true

route:
  enabled: true
  tls:
    enabled: true

auth:
  openshift:
    enabled: true
    serviceAccount: k10-dex-sa
    # Source this from a sealed secret or external secrets operator
    clientSecret: ""
    dashboardURL: "https://k10-route-kasten-io.<APPS_BASE_DOMAIN>/k10/"
    openshiftURL: "<API_URL>"
    insecureCA: false
    cacertconfigmap:
      name: custom-ca-bundle-store

cacertconfigmap:
  name: custom-ca-bundle-store
bash: Helm upgrade to apply updated values
helm upgrade k10 kasten/k10 \
  --namespace=kasten-io \
  --values kasten-values.yaml \
  --set auth.openshift.clientSecret="${DEX_TOKEN}"

5. Verifying the Installation

bash: Post-install verification
# All pods should reach Running or Completed state
oc get pods -n kasten-io

# Flag anything that is not healthy
oc get pods -n kasten-io | grep -v -E "Running|Completed|NAME"

# Get the dashboard URL
oc get route k10-route -n kasten-io -o jsonpath='{.spec.host}'

# Verify the SCC was created and assigned correctly
oc get scc k10-scc
oc auth can-i use securitycontextconstraints/k10-scc \
  --as=system:serviceaccount:kasten-io:executor-svc

# Use k10tools to debug OAuth if the dashboard is inaccessible after pods are up
./k10tools debug auth -d openshift

Log in to the dashboard using your OpenShift credentials. The Free edition activates without a license key for up to 5 nodes. Enterprise editions require the license key from Veeam. After logging in, Kasten automatically discovers all namespaces containing PVCs and presents them as applications ready for protection.

6. NFS FileStore Location Profile: HPE Alletra 4140 Architecture

A Location Profile tells Kasten where to send exported backup data. Without one, Kasten only creates local cluster snapshots, which are not backups. They exist only on the same cluster that produced them and are gone if the cluster goes down. Local snapshots are useful for short-term rollback but they are not a recovery mechanism. The Location Profile is what turns a snapshot into a durable backup.

For the HPE Alletra 4140 architecture with two firezones, the setup works like this. The Alletra 4140 units expose NFS exports on their data IPs. The RHEL 9.5 hardened Linux repositories deployed on the same hardware are completely separate from what Kasten sees. Kasten connects to the NFS export directly. It does not know about or interact with the Veeam Backup Repository running on that same hardware. They share the same underlying storage but they are independent protection paths: VBR protects VM and file workloads, Kasten protects Kubernetes application workloads. Two different tools, two different data paths, same hardware.

NFS Export Requirements for the Alletra 4140

  • The NFS export on the Alletra 4140 data IP must be reachable from all OpenShift worker nodes. Verify this with a manual mount test from each node type before configuring anything in Kasten.
  • The export must allow root access or have a supplemental group configured with read, write, and execute permissions on the export directory. Kasten uses root by default when accessing NFS. If root squash is enforced on the Alletra, configure the Supplemental Group field in the location profile to use a non-root GID that has access.
  • The export must support ReadWriteMany. Multiple Kasten worker pods on different nodes write to the NFS location concurrently during export operations.
  • Kasten stores data under a k10/{cluster-id}/ path on the NFS share. New exports stop when the share hits 95% utilization and resume automatically as retention cleanup frees space. Size the export accordingly.
  • One important limitation: shareable volume backup and restore workflows are not compatible with NFS FileStore location profiles. If your applications use shared volumes, use an object storage location profile for those workloads instead.
  • For the two-firezone architecture, create a separate PV, PVC, and location profile for each firezone. This gives you two independent location profiles you can assign to policies on different schedules.

Creating the PV and PVC for the NFS Mount

The NFS FileStore location profile references a PVC by name rather than an NFS server IP directly. This means you create a PersistentVolume pointing at the Alletra NFS export, bind a PVC to it in the kasten-io namespace, and then reference the PVC name in the profile. The PVC must be in the kasten-io namespace for Kasten to access it.

YAML: PV and PVC for Alletra 4140 NFS export, firezone 1
apiVersion: v1
kind: PersistentVolume
metadata:
  name: alletra-kasten-nfs-fz1-pv
spec:
  capacity:
    storage: 10Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    server: <alletra-fz1-data-ip>
    path: /path/to/nfs/export
  mountOptions:
    - hard
    - nfsvers=4.1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: alletra-kasten-nfs-fz1-pvc
  namespace: kasten-io
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 10Ti
  volumeName: alletra-kasten-nfs-fz1-pv
bash: Apply and verify the PVC binds successfully
oc apply -f alletra-nfs-pv-pvc-fz1.yaml

# Confirm Bound state -- should happen within a few seconds
oc get pvc -n kasten-io alletra-kasten-nfs-fz1-pvc

# Test the NFS mount from a worker node
oc debug node/<worker-node> -- chroot /host sh -c \
  'mkdir -p /mnt/test \
   && mount -t nfs4 <alletra-fz1-data-ip>:/path/to/nfs/export /mnt/test \
   && echo "Mount OK" \
   && umount /mnt/test'

Creating the Location Profile in the Dashboard

  1. In the Kasten dashboard navigate to Settings > Profiles > Location and click New Profile.
  2. Select NFS File Storage as the profile type.
  3. Give the profile a name such as alletra-nfs-fz1.
  4. In the Claim Name field enter alletra-kasten-nfs-fz1-pvc.
  5. If root squash is enforced on the Alletra, enter the GID in the Supplemental Group field and set the Path field to the subdirectory within the PVC that the group has write access to.
  6. Click Save Profile. Kasten validates by writing and reading a test object. A green checkmark confirms the profile is working.
  7. Repeat the entire process for firezone 2 using the second Alletra 4140 data IP, creating a second PV, PVC, and profile named alletra-nfs-fz2.
YAML: NFS FileStore location profile via CRD (alternative to UI)
apiVersion: config.kio.kasten.io/v1alpha1
kind: Profile
metadata:
  name: alletra-nfs-fz1
  namespace: kasten-io
spec:
  type: Location
  locationSpec:
    credential:
      secretType: None
    type: FileStore
    fileStore:
      claimName: alletra-kasten-nfs-fz1-pvc
      # Uncomment if using supplemental group with a specific subdirectory
      # path: /kasten-data
      # supplementalGroup: 1001

7. Creating Backup Policies with Snapshot Export

A policy defines what to protect, how often, and where to send the data. The key setting that separates a real backup policy from a snapshots-only policy is Enable Backups via Snapshot Exports. With that toggle off, data never leaves the cluster. With it on, Kasten copies the snapshot data to the location profile after each snapshot, creating the durable off-cluster backup.

Policy Creation via the Dashboard

  1. Navigate to Policies and click Create New Policy.
  2. Set the policy name and select the application namespace to protect.
  3. Set the Action to Snapshot and configure the backup frequency.
  4. Under Enable Backups via Snapshot Exports, toggle this on. This is the critical step.
  5. Set the Export Location Profile to alletra-nfs-fz1.
  6. For the dual-firezone architecture, consider exporting to both profiles on different schedules: primary export to fz1 daily, secondary export to fz2 weekly. That gives you two independent copies at different retention points across two physical locations.
  7. Set retention separately for local snapshots and exported backups. Local snapshot retention controls how many snapshots are kept on cluster storage. Export retention controls how many exported backups are kept on the NFS share.
  8. Click Create Policy then click Run Once to trigger an immediate backup and confirm the full snapshot-to-export workflow completes successfully.
YAML: Backup policy via Kasten CRD with NFS export
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: production-app-backup
  namespace: kasten-io
spec:
  comment: "Daily backup with NFS export to Alletra FZ1"
  frequency: "@daily"
  retention:
    daily: 7
    weekly: 4
    monthly: 12
  selector:
    matchExpressions:
      - key: k10.kasten.io/appNamespace
        operator: In
        values:
          - production-app
  actions:
    - action: snapshot
      options:
        snapshotRetention:
          days: 1
          weeks: 1
    - action: export
      exportParameters:
        frequency: "@daily"
        profile:
          name: alletra-nfs-fz1
          namespace: kasten-io
        exportData:
          enabled: true
      retention:
        daily: 7
        weekly: 4
        monthly: 12

8. Application Restore

Restoring to a new namespace alongside the original application is almost always the safer choice when you are diagnosing data corruption or validating a restore point. You can confirm the restored application looks correct before cutting over, without touching what is already running.

  1. In the Kasten dashboard navigate to Applications and open the application you need to restore.
  2. Select the restore point. Local snapshots show a camera icon. Exported backups show a file icon with the profile name.
  3. Click Restore and choose between restoring to the existing namespace (in-place) or to a new namespace (out-of-place).
  4. Review the restore preview, confirm, and monitor the restore action in the Activity tab.
bash: Trigger a restore action via CLI
# List available restore points
kubectl get restorepoints -n production-app \
  --sort-by='.metadata.creationTimestamp' | tail -10

# Initiate a restore -- the RestorePoint name comes from the listing above
cat <<EOF | kubectl create -f -
apiVersion: actions.kio.kasten.io/v1alpha1
kind: RestoreAction
metadata:
  generateName: restore-production-app-
  namespace: production-app
spec:
  subject:
    apiVersion: apps.kio.kasten.io/v1alpha1
    kind: RestorePoint
    name: <restore-point-name>
    namespace: production-app
EOF

kubectl get restoreactions -n production-app -w

9. DR Export for Cross-Cluster Recovery

The cross-cluster DR path uses the same NFS location profile as the backup export. The destination cluster needs to be able to mount the same NFS share that the source cluster wrote to. For the Alletra 4140 dual-firezone architecture this means validating firewall rules between the DR cluster nodes and the Alletra data IPs before you configure anything.
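A simple reachability sketch to run before touching Kasten configuration -- the node name and data IP are placeholders for your environment:

```shell
# From a DR cluster worker, confirm TCP 2049 (NFSv4) is open to the Alletra data IP
oc debug node/<dr-worker-node> -- chroot /host \
  timeout 5 bash -c 'cat < /dev/null > /dev/tcp/<alletra-fz1-data-ip>/2049' \
  && echo "NFS port reachable from this node"
```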

Kasten Disaster Recovery (KDR)

KDR protects Kasten itself so that it can be recovered on a replacement cluster along with its catalog and all the restore point metadata it manages. Configure KDR to use one of the NFS location profiles as its backup target.

The k10restore Helm chart and k10restore OpenShift Operand were removed in Veeam Kasten 8.x. If you are on any current Kasten 8.x release, do not look for the k10restore operand. It does not exist. The current KDR recovery path uses the Kasten Helm chart or Operator with a restore-from-backup flag. Refer to the Veeam Kasten Disaster Recovery documentation for the exact procedure for your installed version.
bash: Configure KDR to use the NFS location profile
kubectl patch k10s.apik10.kasten.io k10 \
  -n kasten-io \
  --type=merge \
  --patch '{
    "spec": {
      "dr": {
        "enabled": true,
        "profile": {
          "name": "alletra-nfs-fz1",
          "namespace": "kasten-io"
        }
      }
    }
  }'

Importing Applications to a DR Cluster

  1. Install Veeam Kasten on the DR OpenShift cluster using either path from this article. The DR cluster Kasten version must be equal to or newer than the source cluster version.
  2. Create a PV and PVC on the DR cluster pointing at the same Alletra 4140 NFS export, or the firezone 2 export if firezone 1 is unavailable. Create a location profile with the same name as on the source cluster.
  3. In the DR cluster dashboard navigate to Policies > Create New Policy and set the action to Import. Set the Import Location Profile to the NFS profile pointing at the source export data.
  4. Paste the migration token from the source cluster export policy into the Config Data field.
  5. Enable Restore After Import if you want the application to start running automatically after import completes.
  6. Run the import policy and monitor progress in the Activity tab.

10. Troubleshooting Common Issues on OpenShift

Dashboard Inaccessible After Operator Install

The first thing to check is whether the k10-dex-sa service account existed before the K10 CR was applied. If it did not, delete the K10 CR, create the service account, then re-apply the CR. The second thing to check is the oauth-redirecturi.dex annotation value. Verify the apps base domain with oc get ingress.config cluster -o jsonpath='{.spec.domain}' and confirm the annotation URL matches exactly. The third thing to check is whether route.enabled was set to true in the K10 CR spec. Use ./k10tools debug auth -d openshift to test the OAuth connection and token directly after pods are running.
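The annotation and Route comparison comes down to two commands -- the two outputs should agree on the hostname exactly:

```shell
# OAuth redirect URI currently annotated on the service account
oc get sa k10-dex-sa -n kasten-io \
  -o jsonpath='{.metadata.annotations.serviceaccounts\.openshift\.io/oauth-redirecturi\.dex}{"\n"}'

# Actual Route hostname -- must match the hostname inside the annotation above
oc get route k10-route -n kasten-io -o jsonpath='{.spec.host}{"\n"}'
```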

auth-svc Pod in CrashLoopBackOff

This is almost always an OAuth configuration problem. Check the auth-svc pod logs specifically. Common causes: the openshiftURL value is missing a subdomain or has an incorrect port, the clientSecret token has expired or was set incorrectly, or the oauth-redirecturi.dex annotation URL does not match the actual Route hostname. Run oc logs -n kasten-io -l app=auth-svc -c dex --tail=50 to get the specific error from the Dex container.

Snapshots Failing with No VolumeSnapshotClass Found

The annotation is missing or applied to the wrong class. Run oc get volumesnapshotclass --show-labels and confirm k10.kasten.io/is-snapshot-class=true is present on the class whose driver matches your PVCs' StorageClass provisioner.

NFS Location Profile Validation Failing

Kasten writes a test object to the NFS share during profile validation. If this fails: confirm the PVC is in Bound state, test the NFS mount manually from a debug pod on a worker node, and check whether root squash is enabled on the Alletra export. If root squash is active, Kasten's root-user writes will be rejected. Either disable root squash for the Kasten worker pod CIDR or configure the Supplemental Group field with a GID that has write access.
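One way to reproduce Kasten's validation write outside of Kasten is a short-lived pod that mounts the profile PVC with the supplemental GID. This is a sketch -- the pod name, image, and GID 1001 are placeholders for your environment:

```shell
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nfs-write-test
  namespace: kasten-io
spec:
  restartPolicy: Never
  securityContext:
    supplementalGroups: [1001]   # GID with write access on the export
  containers:
    - name: test
      image: registry.access.redhat.com/ubi9/ubi-minimal
      command: ["sh", "-c", "touch /data/kasten-write-test && echo WRITE_OK && rm /data/kasten-write-test"]
      volumeMounts:
        - name: nfs
          mountPath: /data
  volumes:
    - name: nfs
      persistentVolumeClaim:
        claimName: alletra-kasten-nfs-fz1-pvc
EOF

oc wait pod/nfs-write-test -n kasten-io --for=condition=Ready --timeout=60s || true
oc logs -n kasten-io nfs-write-test
oc delete pod nfs-write-test -n kasten-io
```

If the log shows a permission error instead of WRITE_OK, the Supplemental Group setting in the profile will fail the same way.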

Export Actions Stuck in Running State

Export actions that run far beyond their expected duration are usually an NFS throughput bottleneck for large PVCs, or the NFS share is at or near 95% capacity which pauses new exports automatically. Check NFS performance from the exporting pod and check utilization on the Alletra export.
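Two quick checks from inside the cluster -- listing long-running export actions by age, and checking utilization via a throwaway pod that mounts the fz1 PVC from Section 6 (the pod name and image are illustrative):

```shell
# Export actions sorted by age -- anything far older than its policy window is suspect
kubectl get exportactions.actions.kio.kasten.io -n kasten-io \
  --sort-by='.metadata.creationTimestamp'

# Utilization of the NFS export -- remember new exports pause at 95%
oc run nfs-df -n kasten-io --restart=Never \
  --image=registry.access.redhat.com/ubi9/ubi-minimal \
  --overrides='{"spec":{"containers":[{"name":"nfs-df","image":"registry.access.redhat.com/ubi9/ubi-minimal","command":["df","-h","/data"],"volumeMounts":[{"name":"nfs","mountPath":"/data"}]}],"volumes":[{"name":"nfs","persistentVolumeClaim":{"claimName":"alletra-kasten-nfs-fz1-pvc"}}]}}'
oc wait pod/nfs-df -n kasten-io --for=jsonpath='{.status.phase}'=Succeeded --timeout=60s || true
oc logs -n kasten-io nfs-df
oc delete pod nfs-df -n kasten-io
```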

SCC Errors in Pod Logs

If pods log SCC-related errors, verify that scc.create was set to true in the K10 CR or Helm values and that the operator had cluster-admin permissions when it created the SCCs. Re-apply the K10 CR patch with scc.create: true and delete the affected pods to force recreation with the correct SCC assignment.
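OpenShift records the admitting SCC on each pod in the openshift.io/scc annotation, which makes a mismatch easy to spot:

```shell
# Which SCC was each Kasten pod actually admitted under?
oc get pods -n kasten-io \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.openshift\.io/scc}{"\n"}{end}'
```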


Key Takeaways

  • The Operator path and the Helm path both install the same Kasten workloads. The Operator path uses a K10 CR for post-install configuration and integrates with OLM for lifecycle management. The Helm path uses values files and flags and is better suited for GitOps pipelines. Both require the same OpenShift-specific setup steps.
  • Run k10tools openshift prepare-install before creating the K10 CR or running helm install. It extracts the cluster CA certificate, stores it as the custom-ca-bundle-store ConfigMap in kasten-io, and outputs the correct installation command for your environment. This replaces the manual CA extraction process entirely.
  • Create the k10-dex-sa service account with the oauth-redirecturi.dex annotation before the K10 CR is applied or before helm install runs. Creating it afterward produces an inaccessible dashboard. This is the most common cause of operator-based installs on OpenShift failing to become accessible.
  • Three settings are required for any functional OpenShift deployment: scc.create: true, route.enabled: true, and auth.openshift.enabled: true. None of these are defaults. Every one of them must be explicitly set in the K10 CR or Helm values.
  • Annotate every VolumeSnapshotClass you want Kasten to use with k10.kasten.io/is-snapshot-class=true. Kasten matches by CSI driver name. If a class for a driver that backs some of your PVCs is not annotated, those PVCs cannot be snapshot-protected.
  • Local snapshots are not backups. Enable Snapshot Export in every policy that needs to survive a cluster failure. Without export enabled, your data never leaves the cluster.
  • NFS FileStore location profiles do not support shareable volume backup and restore workflows. If your applications use shared volumes, use an object storage location profile for those workloads.
  • For the HPE Alletra 4140 dual-firezone architecture: create one PV, PVC, and location profile per firezone, assign both profiles to your policies on different export schedules, and validate NFS connectivity from worker nodes before creating the profiles. Kasten connects directly to the NFS export and has no knowledge of or interaction with the VBR hardened repository running on the same hardware.
  • The k10restore OpenShift Operand and Helm chart were removed in Kasten 8.x. For DR recovery on any current 8.x release, use the documented KDR recovery path, not the old operand approach.
  • If the auth-svc pod is in CrashLoopBackOff after install, check the Dex container logs specifically. The most common causes are an incorrect openshiftURL value, a mismatched oauth-redirecturi.dex annotation, and a clientSecret token that was not set or has expired.

Read more