Scale Computing HyperCore End-to-End Setup -- SCRIBE Storage, DR Replication, and Fleet Hub
Standalone Infrastructure | Component: SC//HyperCore 9.x | Audience: SMB Admins, Edge Infrastructure Engineers, IT Generalists
Scale Computing HyperCore occupies a specific and useful position in the hypervisor landscape. It isn't trying to compete with vSphere or Nutanix at enterprise scale. It's purpose-built for the SMB market, branch offices, edge deployments, and environments where the person managing the infrastructure also handles everything else in IT. The entire philosophy is minimal administration overhead: no separate storage system, no separate management server, no separate HA software. The cluster manages itself.
If you're evaluating HyperCore as a VMware alternative for small sites, or deploying it at edge locations without dedicated IT staff on site, this article covers everything: hardware, cluster formation, networking, storage, VM management, DR with snapshot replication, Fleet Hub for remote management, and an honest comparison against the alternatives at the same market tier.
1. Hardware Requirements and HyperCore Installation
Scale Computing sells HyperCore in two ways: as pre-configured HC3 appliances (their own branded hardware) and as software on certified third party hardware. The appliance path is simpler: you receive nodes with HyperCore already installed and just need to complete cluster initialization. The software path requires you to boot the HyperCore installer on certified hardware and run through the initial node setup.
| Component | Minimum | Recommended | Notes |
|---|---|---|---|
| CPU | Intel or AMD with hardware virtualization | Dual socket, recent generation | HyperCore uses KVM. Modern server CPUs all support it. Older hardware works for test environments but Scale recommends current generation hardware for production. |
| RAM | 32 GB per node | 128 GB+ per node | HyperCore reserves approximately 8 GB per node for the hypervisor and SCRIBE. Plan the remainder for VM workloads. |
| Storage | One SSD + one or more HDDs | Two or more SSDs + HDDs for hybrid, or all SSD | SCRIBE pools all storage across the cluster. Dissimilar storage sizes and types are supported. All-SSD nodes can coexist with hybrid nodes in the same cluster. |
| Network | Two 1 GbE NICs | Two 10 GbE NICs | HyperCore requires at least two NICs: one for management and VM traffic, one for IPMI/out-of-band. Internal cluster communication uses the same network as management by default. |
| Cluster size | 1 node (single-node deployments are supported) | 3 nodes for HA | A single node is valid for edge locations where HA isn't required. Two nodes with RF2 provide redundancy. Three nodes is the practical production minimum for most HA requirements. |
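A quick back-of-envelope check on usable capacity helps when sizing nodes. The sketch below assumes RF2 (two copies of every block), so usable space is roughly half of raw; the node count and per-node capacity are hypothetical numbers, not recommendations.

```shell
# Hypothetical sizing: 3 nodes, 8 TB raw each, RF2 mirroring.
raw_tb_per_node=8
nodes=3
summary=$(awk -v raw="$raw_tb_per_node" -v n="$nodes" \
  'BEGIN { printf "raw: %d TB, usable under RF2: %d TB", raw*n, raw*n/2 }')
echo "$summary"
```

Real usable capacity will land somewhat lower once hypervisor reserves and metadata overhead are accounted for.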
Installing HyperCore on Bare Metal
If you're using Scale Computing hardware, HyperCore comes pre-installed. For third party hardware, download the HyperCore ISO from the Scale Computing portal and boot it on each node. The installer is minimal: it asks for the node's IP address, subnet, gateway, DNS, and hostname, then installs HyperCore and reboots. The node is then accessible via web browser on port 443 at its assigned IP, and you can proceed with cluster initialization from there.
2. Cluster Formation and Node Management
Cluster initialization in HyperCore is done through the web interface on the first node. There's no separate management server, no cluster manager VM to deploy, and no installation wizard to run separately from the hypervisor. The HyperCore UI is the same interface you use for every management task, and it's fully functional on any node in the cluster.
- Open a browser to the first node's IP address. Log in with the default credentials (admin/admin on factory fresh nodes, or whatever was set during installation).
- Navigate to Cluster, then Cluster Settings, and set the cluster name and the cluster virtual IP (the VIP is the single IP you'll use to manage the cluster going forward, regardless of which physical node is responding).
- Navigate to Cluster, then Nodes, and click Join Existing Cluster on the second and third nodes. Each node connects to the cluster VIP and joins automatically. SCRIBE immediately begins pooling storage from all nodes.
- After all nodes have joined, verify cluster health: navigate to Dashboard and confirm all nodes show green, all drives are healthy, and SCRIBE shows the expected total storage capacity.
Adding a Node to an Existing Cluster
Adding a node is non-disruptive. Install HyperCore on the new node, then on the existing cluster go to Cluster, then Nodes, and click Add Node. Enter the new node's IP and credentials. HyperCore validates the node, adds it to the cluster, and SCRIBE immediately begins rebalancing storage across all nodes, including the new one. VMs continue running throughout. Scale Computing supports clusters of up to 8 nodes; for anything larger, contact Scale for best-practice guidance.
3. Networking: Virtual NICs, Bonds, and VLANs
HyperCore networking is configured through the HyperCore UI and applied consistently across all nodes automatically. You don't configure networking per node. Network changes made in the cluster UI propagate to every member node, which eliminates the configuration drift that plagues multi-node environments managed node by node.
Physical Interface Bonding
HyperCore supports NIC bonding for redundancy and throughput. Configure bonds in the HyperCore UI under Network, then NICs. Available bond modes mirror standard Linux bonding: active-backup for simple failover, and balance-slb, which load-balances traffic by source across the bonded links without requiring switch-side LACP configuration. Active-backup is the simpler choice and works with any switch configuration.
VLANs and VM Networks
Virtual networks in HyperCore map to VLANs on the physical uplink. You create a virtual network in the HyperCore UI by giving it a name and a VLAN tag. VMs connect to virtual networks, not directly to physical NICs. This is the same model as vSphere port groups and Proxmox bridges. The physical switch port connected to each node must be configured as an 802.1q trunk carrying all the VLANs you'll use for VM networks.
HyperCore exposes a full REST API for programmatic management; every UI action is available via the API. The calls below follow the login-then-token pattern (`cluster-ip`, credentials, and TOKEN are placeholders; `-k` skips certificate validation for clusters using self-signed certificates):

```shell
# Authenticate and get a session token
curl -sk -X POST https://cluster-ip/rest/v1/login \
  -H "Content-Type: application/json" \
  -d '{"username":"admin","password":"yourpassword"}' \
  | python3 -m json.tool

# Create a VLAN-tagged virtual network. Replace TOKEN with the token returned
# above. NOTE: the resource path shown is illustrative -- confirm the network
# endpoint name for your HyperCore version before relying on it.
curl -sk -X POST https://cluster-ip/rest/v1/VirDomain \
  -H "Authorization: Bearer TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "vm-network-400", "vlanTag": 400, "type": "VLAN"}'

# List resources at the same endpoint (again, verify for your version)
curl -sk -X GET https://cluster-ip/rest/v1/VirDomain \
  -H "Authorization: Bearer TOKEN" \
  | python3 -m json.tool
```
4. Storage: SCRIBE, HEAT, and SSD Priority
SCRIBE (Scale Computing Reliable Independent Block Engine) combines the storage drives from each node into a single, logical, cluster wide storage pool. The pooling occurs automatically with no user configuration required. Blocks are stored redundantly across the cluster to allow for the loss of individual drives or an entire node. You don't configure RAID, storage pools, or storage containers. There's nothing to set up. You add nodes, and their storage is immediately part of the cluster pool.
HEAT: Automated Tiering
Every virtual disk gets an SSD priority on a scale from 0 to 11. The default is 4 across the board. Leave everything at 4 and HEAT distributes hot data to SSD and cold data to HDD automatically with no intervention. The practical range you'll actually use is narrower: priority 0 forces a disk to HDD only (useful for a static archive volume that doesn't need flash at all), and priority 11 puts maximum pressure on HEAT to keep that disk's data on SSD. Priority 11 is the right setting for SQL transaction logs or any disk where I/O latency directly affects application performance.
The scale is exponential, not linear. Bumping a disk from 4 to 5 doubles its relative priority for SSD placement against other disks still at 4. You don't need big numbers to make a meaningful difference. Most tuning is subtle: bumping database data disks to 6 or 7, leaving everything else at 4, and letting HEAT do the rest. All VMs share the same pool. You're not creating separate datastores or storage tiers per workload. You're just telling HEAT which virtual disks matter more when there's competition for SSD space.
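The doubling behavior can be sketched numerically. This models the relative weight as 2^(priority - 4), consistent with the 4-to-5 doubling described above; the exact internal weighting HEAT uses is an implementation detail, so treat these numbers as illustrative.

```shell
# Relative SSD-placement weight per priority, modeled as 2^(priority - 4).
for p in 0 4 5 7 11; do
  awk -v p="$p" 'BEGIN { printf "priority %2d -> relative weight %gx\n", p, 2^(p-4) }'
done
# A disk bumped to 7 competes for flash 8x harder than one left at the default:
w7=$(awk 'BEGIN { printf "%g", 2^(7-4) }')
echo "priority 7 weight: ${w7}x"
```

This is why modest bumps (6 or 7 for database data disks) are usually enough; jumping straight to 11 for everything just recreates the competition at a higher number.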
Storage Profiles
Storage profiles in HyperCore group sets of virtual disk settings (replication, cache tier, SSD priority) into named templates you apply to VMs. Instead of configuring each VM's storage individually, you define a profile once (Database-Performance, for example, with SSD priority 9 and replication to all nodes) and assign it to VMs that need those characteristics. Profiles simplify standardized storage configurations across a large VM estate.
5. VM Management, Templates, and Migration
VM management in HyperCore uses the same web UI as everything else. There's no separate vCenter equivalent. Create, configure, start, stop, and migrate VMs all from the same interface that manages the cluster itself.
Creating VMs and Templates
You can create VMs from an ISO (attached from HyperCore's built-in image repository), by cloning an existing VM, or through the HyperCore REST API with cloud-init for automated deployments. The API-plus-cloud-init path is how you deploy at scale: build a common template once, then let each clone pull its unique configuration (hostname, users, packages, network settings) from cloud-init user data at first boot. Instead of manually creating and individually customizing VMs, a script can provision hundreds or thousands of machines, each with its own settings.
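A minimal cloud-init user-data file for such a template-based deployment might look like this (hostname, username, and SSH key are placeholders):

```
#cloud-config
hostname: branch-app01
users:
  - name: opsadmin
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA_PLACEHOLDER_KEY ops@hq
package_update: true
packages:
  - qemu-guest-agent
```

Pass a different user-data payload to each clone at creation time and every VM boots with its own identity while still sharing the template's data blocks.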
HyperCore uses thin cloning for VM templates: cloned VMs share data blocks with their parent VM for storage efficiency, but carry no dependency on it. If the parent is deleted, the clone is unaffected and continues operating without disruption. Cloning from a template is therefore fast and consumes minimal additional storage until the clone's data diverges from the parent's.
Live Migration
VMs on HC3 clusters can be migrated between nodes non-disruptively, with no downtime. This allows rebalancing resource allocation across the cluster, and it is also how rolling HyperCore OS updates work: HyperCore migrates VMs off a node before updating it, updates the node, then allows VMs to migrate back. You don't manage this manually.
6. DR: Snapshot Replication and Site Failover
HyperCore's built-in DR is based on snapshot replication between clusters. You configure replication at the VM level: select a VM, define a remote cluster as the replication target, set the replication schedule, and HyperCore handles the rest. No separate DR software license required. Replication is included in the base HyperCore license.
How Snapshot Replication Works
HyperCore takes a snapshot of the VM at the configured interval, then transfers only the changed blocks since the last replication snapshot to the remote cluster. The remote cluster maintains a copy of the VM in a powered off state. On the first replication run, the full VM disk is transferred. Subsequent runs transfer only the delta since the last successful replication snapshot.
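The changed-block idea can be illustrated with two tiny files standing in for consecutive snapshots (a toy sketch of delta detection, not how SCRIBE actually stores or compares data):

```shell
# Two "snapshots" of a 12-byte disk; only the middle 4 bytes change.
printf 'AAAABBBBCCCC' > snap1.img
printf 'AAAAXXXXCCCC' > snap2.img
# cmp -l prints one line per differing byte -- the delta that would be shipped.
changed=$(cmp -l snap1.img snap2.img | wc -l | tr -d ' ')
echo "changed bytes: $changed"
rm -f snap1.img snap2.img
```

Only those 4 bytes would need to cross the wire; the unchanged prefix and suffix already exist at the remote site from the previous replication.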
At the remote site, the replicated VM appears as a "VM Snapshot" in the HyperCore UI. It's not runnable directly. To fail over, you convert the snapshot to a running VM. This is a deliberate design: you won't accidentally start a replicated copy while the original is still running.
Configuring Replication
- In the HyperCore UI at the primary site, navigate to VMs, select the VM to replicate, and click Replicate.
- Enter the remote cluster's IP address and credentials. HyperCore establishes a trust relationship between the two clusters for replication traffic.
- Set the replication schedule: how frequently to replicate (hourly, every few hours, daily). More frequent replication means a tighter RPO but more bandwidth consumed.
- Start replication. The first run transfers the full VM disk to the remote cluster. Monitor progress in the HyperCore UI under VM, Replication Status.
- At the remote site, verify the replicated VM appears in the HyperCore UI. Run a test failover (convert to running VM in an isolated network) to verify the VM starts and the data is consistent.
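When choosing the schedule in step 3, a quick bandwidth check keeps the RPO honest. The figures below are hypothetical (50 GB of changed blocks per day over a 100 Mbit/s WAN link):

```shell
# Time to ship a day's delta: GB -> Mbit (x8 x1024), divided by link speed.
hours=$(awk 'BEGIN {
  changed_gb = 50      # hypothetical daily changed blocks
  link_mbps  = 100     # hypothetical usable WAN bandwidth
  printf "%.1f", changed_gb * 8 * 1024 / link_mbps / 3600
}')
echo "daily delta transfer time: ${hours} hours"
```

If the transfer time approaches or exceeds the replication interval, the schedule can't keep up and your effective RPO degrades, so either widen the interval or widen the link.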
7. Scale Computing Fleet Hub: Remote Management
Fleet Hub (previously called Field Hub and HyperCore Fleet) is Scale Computing's centralized management platform for managing multiple HyperCore clusters from a single interface. It's the answer to the edge deployment question: how do you manage 50 remote sites with HyperCore clusters when you don't have IT staff at each location?
Fleet Hub is a cloud-hosted service. Each HyperCore cluster registers with Fleet Hub using an outbound connection over HTTPS (no inbound firewall rules required at the remote site). From Fleet Hub you can view all cluster health dashboards, deploy and configure VMs, push HyperCore updates, and manage replication schedules across all registered clusters from a single browser session.
Fleet Hub Capabilities
- Unified health dashboard: all clusters visible with alert aggregation. One view shows you which sites have issues without logging into each cluster separately.
- Mass VM operations: deploy VMs from templates to multiple clusters simultaneously. Update VM configurations (CPU, RAM, snapshot schedules) across a fleet in bulk.
- Firmware and software updates: push HyperCore OS updates to clusters on a schedule, with rolling update logic that keeps VMs running during the update process.
- Replication monitoring: view replication status across all sites, with alerts when replication falls behind or fails, so problems surface before a failover is ever needed.
8. Comparison: HyperCore vs Other SMB Hypervisors
| Factor | Scale Computing HyperCore | Proxmox VE | VMware vSphere Essentials | Nutanix Community Edition |
|---|---|---|---|---|
| Target market | SMB, edge, branch office, IT generalists | SMB, technical admins, open source preference | SMB with VMware familiarity | Evaluation and lab; not for production |
| Cost model | Per-node subscription or perpetual license | Free with optional subscription | Per-socket license, relatively low cost for small deployments | Free for up to 3 nodes in CE |
| Management complexity | Very low. Single UI, no external tools required. | Low to medium. Web UI is capable but requires Linux familiarity for advanced config. | Medium. Requires vCenter for cluster management, separate from ESXi. | Medium to high. Prism Central add-on needed for many features. |
| Storage | SCRIBE: built-in, automatic, no configuration. | ZFS local or Ceph (requires additional configuration and minimum 3 nodes for Ceph). | vSAN optional, external SAN/NFS otherwise. | Nutanix DSF built-in, similar to HyperCore. |
| Built-in DR | Yes. Snapshot replication built into base license. | Partial. Built-in ZFS storage replication between nodes; Proxmox Backup Server for backup; no integrated cross-site DR orchestration. | No. Requires separate SRM license or third-party tools. | Yes. Built-in replication, but CE has support limitations. |
| Remote/fleet management | Yes. Fleet Hub included. | Limited. No native multi-site fleet view. | Limited at this scale tier. | Multi-cluster Prism Central: yes, but adds cost. |
| Maximum cluster size | 8 nodes standard | No hard limit (Ceph and Corosync dependent) | 3 hosts per Essentials Plus kit | 4 nodes for CE |
| API/automation | Full REST API, Ansible module, Terraform provider | Full REST API, Ansible, Terraform | PowerCLI, REST API, Terraform | REST API, PowerShell, Terraform |
HyperCore's strongest differentiator for its target market is the combination of built-in DR and Fleet Hub. A two-person IT team managing 30 branch offices can't realistically configure and maintain separate DR software at each site. HyperCore makes snapshot replication a first class feature that works out of the box with no additional licenses, and Fleet Hub makes managing those 30 sites from headquarters operationally feasible. That's the specific problem it solves better than the alternatives at this price tier.
Key Takeaways
- HyperCore uses KVM with SCRIBE as the storage layer. SCRIBE pools all drives from all nodes into a single cluster wide storage pool automatically. No storage configuration required. No separate storage controller, VSA, or NAS.
- HEAT (HyperCore Enhanced Automated Tiering) moves data between SSD and HDD tiers based on access patterns. Individual virtual disk SSD priority is adjustable from 0 (HDD only) to 11 (SSD only) without stopping the VM. Default is 4 for all disks.
- Cluster management uses the same web UI as VM management. No separate management server or appliance. Any node's IP gets you to the full cluster view. The cluster VIP follows the surviving nodes if one fails.
- Live migration and rolling updates are automatic. HyperCore migrates VMs off a node before updating it, updates the node, then allows VMs back. No manual evacuation required.
- Built-in snapshot replication to a remote HyperCore cluster is included in the base license. No separate DR software needed. Configure replication per VM, set the schedule, and HyperCore handles delta transfers automatically.
- Fleet Hub provides centralized management for multiple remote clusters. Outbound HTTPS only, so no inbound firewall rules at remote sites. Mass VM deployment, bulk configuration updates, firmware rollouts, and unified health monitoring across all registered clusters.
- HyperCore's 8-node per-cluster limit fits SMB and edge workloads. For larger environments or enterprise features, Nutanix or vSphere with proper licensing is the right platform. HyperCore doesn't try to compete there and that's the right trade off for its target market.