Azure Local (Azure Stack HCI) End-to-End Setup -- Arc Deployment, S2D, Networking, and Arc VM Management

Tags: Azure Local, Azure Stack HCI, S2D, Azure Arc, Network ATC, Hyper-V, 23H2

Standalone Infrastructure | Component: Azure Local (Azure Stack HCI) 23H2 | Audience: Enterprise Architects, Senior Infrastructure Engineers

Azure Local is Microsoft's current name for what was Azure Stack HCI. The rebrand happened at Microsoft Ignite 2024 and it wasn't just cosmetic. Azure Local now encompasses a broader set of deployment scenarios and hardware options, but the core is still the same: Hyper-V as the hypervisor, Storage Spaces Direct for software defined storage, and Azure Arc as the management and integration layer. If you're coming from standalone Hyper-V, most of what you already know still applies. What's new in 23H2 is that Arc integration is now mandatory, deployment goes through the Azure portal, and the Lifecycle Manager handles updates across the full stack: OS, drivers, agents, and firmware.

Azure Stack HCI version 22H2 has reached end of support. If you're still on 22H2, you're no longer receiving security updates. This article covers a current 23H2 deployment: hardware requirements, pre deployment networking decisions, Arc based deployment from the Azure portal, Storage Spaces Direct configuration, Network ATC, VM management, and an honest comparison with standalone Hyper-V so you know when Azure Local earns its overhead.


1. Hardware Requirements

Azure Local requires validated hardware from the Azure Local Catalog. This isn't optional in the way it is for standalone Hyper-V. The Arc deployment process installs firmware and driver packages that are validated against specific hardware configurations, and running on hardware outside the catalog means those validation tests don't apply. The catalog includes hardware from Dell, HPE, Lenovo, DataON, and Supermicro, so there's broad coverage across most enterprise price points.

| Component | Minimum | Recommended | Notes |
| --- | --- | --- | --- |
| CPU | Intel Xeon or AMD EPYC with hardware virtualization | Current generation Xeon or EPYC | All nodes in a cluster must have matching CPU models for live migration. Mixed generations require CPU compatibility mode, which limits exposed features. |
| RAM | 64 GB ECC per node | 256 GB+ per node | ECC RAM is required, not optional. Non ECC RAM isn't a supported configuration for Azure Local 23H2. |
| Storage | 2 SSDs (NVMe or SATA) + optional HDDs | NVMe SSDs for cache + HDDs or SSDs for capacity | S2D uses SSDs as the cache tier automatically. All NVMe is supported and offers the best performance. Mixed drive types require at least one SSD per node for cache. |
| Network | 2 NICs, 10 GbE each | 4 NICs, 25 GbE each | RDMA capable NICs (iWARP or RoCE v2) are required for S2D storage traffic when using a switched network for 4+ nodes. Switchless storage is supported for 2 to 3 node clusters. |
| Cluster size | 1 node (single node supported) | 2 to 16 nodes | Single node has no HA. Two nodes require a witness (Azure Cloud Witness recommended). Three nodes minimum for S2D with three-way mirror. |
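
Before committing to a node design, you can confirm RDMA capability and drive inventory from a running node with inbox cmdlets. A quick sketch (the exact output properties vary somewhat by NIC vendor and OS build):

PowerShell: Verify RDMA NICs and drive inventory on a candidate node
# List NICs and whether RDMA is enabled on each
Get-NetAdapterRdma | Select-Object Name, InterfaceDescription, Enabled

# Inventory drives by media and bus type (S2D needs SSDs for the cache tier)
Get-PhysicalDisk | Select-Object FriendlyName, MediaType, BusType, Size, CanPool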

2. Pre Deployment Networking Decisions

Networking is the most consequential decision in an Azure Local deployment and the hardest to change after the fact. Network ATC, the automated network configuration tool included in 23H2, applies a consistent intent based configuration across all nodes, but it can only work with the physical topology you give it. Get the physical design right first.

Switchless vs Switched Storage

  • Switchless (2 to 3 nodes): Storage traffic flows directly between nodes over dedicated cross connect cables with no switch in the path. Each node needs dedicated NICs for storage that connect directly to the other nodes. Simpler, cheaper, and eliminates the switch as a single point of failure for storage. Microsoft only validates switchless configurations for 2 to 3 node clusters. Beyond 3 nodes, you need a switch for storage.
  • Switched (any cluster size): Storage traffic goes through a physical ToR switch that must support RDMA. Required for clusters of 4 or more nodes. If you plan to scale past 3 nodes, design for switched storage from day one. Converting a switchless cluster to switched after the fact requires recabling and a partial redeployment of the storage network configuration.

Network Intents

Azure Local 23H2 uses Network ATC to configure intents: named groupings of traffic types that get mapped to specific NICs. Two intents are the standard production configuration:

  • Management + Compute intent: Handles management traffic (Arc agents, cluster communication, Hyper-V management) and VM network traffic on one NIC pair.
  • Storage intent: Dedicated NIC pair carrying S2D RDMA traffic. Must be on separate physical NICs from the management intent.
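
In 23H2 the portal deployment wizard normally defines these intents for you, but the same two-intent layout can be expressed and inspected with the NetworkATC PowerShell module. A sketch with placeholder adapter names (substitute the names reported by Get-NetAdapter on your nodes):

PowerShell: The standard two-intent layout expressed with Network ATC
# Management and compute traffic share one NIC pair
Add-NetIntent -Name "MgmtCompute" -Management -Compute -AdapterName "NIC1", "NIC2"

# S2D RDMA traffic gets its own physical NIC pair
Add-NetIntent -Name "Storage" -Storage -AdapterName "NIC3", "NIC4"

# Review configured intents and their provisioning status across nodes
Get-NetIntent
Get-NetIntentStatus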

You also need at minimum 6 consecutive free IP addresses on the management network for infrastructure services (Arc Resource Bridge, AKS, and related components). These can't overlap with the APIPA range (169.254.0.0/16) and several other reserved ranges. Plan them before you run the deployment wizard or the Arc based deployment will fail mid process.
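
A quick way to sanity check that a candidate range is actually free is to ping each address before deployment. A sketch with a placeholder range (a failed ping isn't proof an address is unused, so check your DHCP scopes and IPAM records too):

PowerShell: Ping sweep a candidate 6-address infrastructure range (10.0.0.10 to 10.0.0.15 is a placeholder)
0..5 | ForEach-Object {
    $ip = "10.0.0.$(10 + $_)"
    [PSCustomObject]@{ Address = $ip; Responds = Test-Connection -ComputerName $ip -Count 1 -Quiet }
}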


3. Deployment via Azure Portal

In 23H2, the primary deployment path for Azure Local goes through the Azure portal, not Windows Admin Center or PowerShell alone. The deployment is Arc driven: you register each node with Azure Arc first, then use the Azure portal to configure and deploy the cluster. That's a fundamental change from previous versions and it means your nodes need internet access during deployment, or a properly configured proxy that doesn't do TLS inspection.

  1. Install the Azure Local OS on each node from the Azure portal download or from hardware that comes preloaded. The OS is based on Windows Server Datacenter but stripped down to only what Azure Local needs.
  2. On each node, run the Arc registration script from the Azure portal. This installs the Azure Connected Machine agent and registers the node as an Arc enabled server in your Azure subscription. All nodes must be in the same Azure region and resource group.
  3. In the Azure portal, navigate to Azure Local and click Create. The deployment wizard discovers your Arc registered nodes and walks through cluster name, network intent assignment, storage configuration, and security baseline settings.
  4. The portal validates your configuration before deploying. Fix anything it flags. A failed validation is much easier to resolve than a failure partway through deployment.
  5. Deployment takes roughly 2 to 3 hours for a 3-node cluster. The Lifecycle Manager orchestrates the full stack: OS configuration, S2D enablement, network intent application, and Arc Resource Bridge deployment.
Note: Azure Local only supports non authenticated proxy configurations. If your environment uses an authenticated proxy, the Arc registration and deployment will fail. Configure proxy settings on each node before Arc registration. TLS inspection on the proxy also breaks Arc communication. Add exceptions for all Microsoft endpoints required by Azure Local before starting deployment. The full list of required endpoints is in the Microsoft Azure Local documentation on the Microsoft Learn site.
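
Proxy settings live in more than one place on Windows, and the various agents don't all read the same one. A hedged sketch of setting WinHTTP and machine level environment variable proxies before registration (the proxy URL and bypass list are placeholders; consult the Azure Local proxy documentation for the full set of locations your deployment needs):

PowerShell: Set proxy settings on a node before Arc registration (values are placeholders)
# WinHTTP proxy used by OS level services
netsh winhttp set proxy proxy-server="http://proxy.contoso.com:3128" bypass-list="localhost;127.0.0.1;*.contoso.com"

# Machine level environment variables read by agent processes
[Environment]::SetEnvironmentVariable("HTTPS_PROXY", "http://proxy.contoso.com:3128", "Machine")
[Environment]::SetEnvironmentVariable("NO_PROXY", "localhost,127.0.0.1", "Machine")

# Confirm what WinHTTP currently has
netsh winhttp show proxy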

4. Storage Spaces Direct

S2D is the software defined storage layer that pools all drives from all nodes into cluster shared volumes. You don't configure S2D directly in 23H2; the deployment wizard handles it. What you control are the choices you make during deployment: resiliency type, cache behavior, and volume layout.

Resiliency Options

  • Two-way mirror: Two copies of data across two or more nodes. Survives one node failure. Requires at least 2 nodes. Uses 50% of raw capacity.
  • Three-way mirror: Three copies across three or more nodes. Survives two simultaneous node failures or one node failure plus one drive failure. Requires at least 3 nodes. Uses 33% of raw capacity.
  • Mirror-accelerated parity: A hybrid that writes hot data as mirror and cold data as parity for better storage efficiency. Requires at least 4 nodes. Offers roughly 50 to 70% storage efficiency depending on the mirror-to-parity ratio. Best for workloads with significant cold data that doesn't need the full IOPS of a pure mirror volume.
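
Mirror-accelerated parity volumes are created by specifying both storage tiers rather than a single resiliency setting. A sketch assuming the tier names an S2D deployment typically defines (tier names vary between deployments, so check Get-StorageTier on your cluster first; the volume name and sizes are examples):

PowerShell: Create a mirror-accelerated parity volume (names and sizes are examples)
# See which tiers the deployment defined
Get-StorageTier | Select-Object FriendlyName, ResiliencySettingName

# 1 TB mirror (hot) tier plus 3 TB parity (cold) tier in one volume
New-Volume `
    -StoragePoolFriendlyName  "S2D on ClusterName" `
    -FriendlyName             "ArchiveVol01" `
    -FileSystem               CSVFS_ReFS `
    -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes         1TB, 3TB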

Cluster Shared Volumes

VM disk files live on Cluster Shared Volumes (CSVs) in S2D. A CSV is a volume accessible from all nodes simultaneously, which is what allows live migration without shared SAN or NAS hardware. The deployment creates a default CSV, but you'll typically create additional CSVs to separate workload types, manage capacity, or align with different resiliency requirements. Create CSVs through Windows Admin Center or PowerShell after deployment.

PowerShell: Create a new CSV on an Azure Local cluster
# Run on any cluster node or remotely with the cluster name
# Create a new 4 TB volume with three-way mirror resiliency
New-Volume `
    -StoragePoolFriendlyName "S2D on ClusterName" `
    -FriendlyName           "WorkloadVol01" `
    -FileSystem              CSVFS_ReFS `
    -ResiliencySettingName   Mirror `
    -PhysicalDiskRedundancy  2 `
    -Size                    4TB

# Check cluster shared volume status
Get-ClusterSharedVolume | Select-Object Name, State, OwnerNode

# Check S2D pool health
Get-StoragePool -IsPrimordial $false | Get-PhysicalDisk | 
    Select-Object FriendlyName, HealthStatus, OperationalStatus, Usage

5. Arc Integration and VM Management

The Arc Resource Bridge is a lightweight Kubernetes VM that deploys automatically during cluster setup. It connects the Azure Local cluster to Azure and enables VM management through the Azure portal alongside traditional management through Windows Admin Center and Hyper-V Manager. Arc VMs created through the portal use Azure native constructs: VM images stored in an Azure Local image gallery, virtual network configurations managed as Azure resources, and RBAC through Azure roles.

Two management paths coexist in 23H2 and both work:

  • Traditional Hyper-V management: Failover Cluster Manager, Hyper-V Manager, PowerShell, Windows Admin Center. VMs created this way are standard Hyper-V VMs. They live migrate, checkpoint, and replicate exactly as they do on standalone Hyper-V.
  • Arc VM management: Azure portal, Azure CLI, Azure Resource Manager templates. VMs created this way are Arc enabled and show up in Azure as managed resources. They get Azure VM extensions, Azure Monitor integration, and Azure Policy compliance reporting automatically.
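
Arc VM creation can also be scripted. A hedged sketch using the Azure CLI stack-hci-vm extension (all names and resource IDs are placeholders, and parameter shapes can differ across extension versions; check az stack-hci-vm create --help against your installed version):

Azure CLI: Create an Arc VM on an Azure Local cluster (all names and IDs are placeholders)
az stack-hci-vm create `
    --name "app-vm-01" `
    --resource-group "rg-azlocal" `
    --custom-location "<custom-location-resource-id>" `
    --image "<image-resource-id>" `
    --nics "<nic-resource-id>" `
    --hardware-profile memory-mb="8192" processors="4"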

You can mix both on the same cluster. VMs created through the traditional path don't automatically get Arc enabled. You can manually onboard them to Arc with the Connected Machine agent if you want Azure management capabilities on VMs that were deployed before Arc VM management was configured.
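
Manual onboarding of a traditional VM uses the standard Connected Machine agent flow inside the guest. A sketch with placeholder values (azcmagent is the CLI bundled with the agent; resource group, tenant, subscription, and region are your own):

PowerShell: Onboard an existing Hyper-V VM to Azure Arc from inside the guest (values are placeholders)
# Run inside the guest OS after installing the Azure Connected Machine agent
azcmagent connect `
    --resource-group "rg-arc-servers" `
    --tenant-id "<tenant-id>" `
    --subscription-id "<subscription-id>" `
    --location "eastus"

# Confirm the agent is connected
azcmagent show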


6. Azure Local vs Standalone Hyper-V

| Factor | Azure Local 23H2 | Standalone Hyper-V (Windows Server) |
| --- | --- | --- |
| Hardware requirement | Validated hardware from the Azure Local Catalog. ECC RAM required. | Any hardware that supports Windows Server. Much wider hardware compatibility. |
| Storage | S2D built in. No SAN or NAS needed for shared storage. | Requires external SAN, NAS, or SMB share for live migration and HA, or S2D as an optional add-on. |
| Azure connectivity | Required. Nodes must register with Azure Arc. Ongoing management telemetry flows to Azure. | Not required. Fully on-premises, no cloud dependency. |
| Licensing | Azure subscription based billing per physical core. Windows Server guest VMs included with OEM license option. | Windows Server license per host. No cloud subscription required. |
| Updates | Lifecycle Manager orchestrates OS, driver, firmware, and agent updates as a coordinated package. | Patching managed separately per layer: Windows Update for OS, vendor tools for firmware and drivers. |
| Stretched clusters | Not supported in 23H2. Removed from this version. | Supported with Storage Replica and appropriate networking. |
| Best for | Organizations already using Azure services, wanting unified management across on-premises and cloud, or needing HCI without separate SAN investment. | Organizations needing maximum hardware flexibility, no cloud dependency, or stretched cluster HA across sites. |

Key Takeaways

  • Azure Local is the current name for Azure Stack HCI. Version 22H2 has reached end of support. If you're still on 22H2, you're not receiving security updates.
  • ECC RAM is required for 23H2. Non ECC RAM is not a supported configuration. Validate your hardware against the Azure Local Catalog before starting a deployment.
  • Switchless storage is only validated for 2 to 3 node clusters. If you plan to scale past 3 nodes, design for switched storage from day one. Converting after the fact requires recabling and partial redeployment.
  • Deployment goes through the Azure portal in 23H2. Nodes must register with Azure Arc first. Azure Local only supports non authenticated proxy configurations. TLS inspection on the proxy breaks Arc communication.
  • You need at least 6 consecutive free IPs on the management network for infrastructure services before you run the deployment wizard. These can't overlap with APIPA or other reserved ranges.
  • Arc Resource Bridge deploys automatically during cluster setup. It enables VM management through the Azure portal alongside traditional Hyper-V management through Failover Cluster Manager and Windows Admin Center. Both paths coexist on the same cluster.
  • Stretched clusters are not supported in Azure Local 23H2. This was removed from this version. If you need multi site HA with storage replication, standalone Hyper-V with Storage Replica is the correct platform.
