VMware Cloud Foundation and the Aria Suite


How VMware Cloud Foundation bundles ESXi, vCenter, vSAN, and NSX into a single integrated SDDC platform deployed and managed by SDDC Manager — and how the Aria suite of management tools extends VCF with capacity planning, log analytics, network visibility, and cloud automation.

Tags: vmware, vcf, aria, nsx, tanzu, vcp-dcv

Overview

A conventional vSphere deployment involves independently installing and configuring each component — ESXi on each host, vCenter for management, separate storage configuration, and a physical or software-defined network. Each component has its own lifecycle: patching ESXi, upgrading vCenter, updating NSX, and managing storage firmware are separate operational tracks that must be coordinated carefully to avoid incompatibilities. In large or growing environments, this operational complexity accumulates rapidly.

VMware Cloud Foundation (VCF) is VMware’s answer to this complexity. It is a full-stack software-defined data centre (SDDC) platform that integrates ESXi, vCenter, vSAN, and NSX into a single validated, pre-tested architecture and deploys the entire stack through a single management plane called SDDC Manager. The goal is to eliminate the inconsistencies and manual effort that arise from assembling these components independently, and to bring the operational model of a public cloud — where the infrastructure layer is abstracted and managed as a service — to an on-premises deployment.

SDDC Manager

SDDC Manager is the central management and automation engine for VCF. It handles three phases of the infrastructure lifecycle:

Bring-up: SDDC Manager automates the initial deployment of the management domain — the foundational cluster that hosts all VCF management components. During bring-up, it validates the physical hardware against the VMware Hardware Compatibility List, configures the physical network switches, deploys ESXi on all management hosts, deploys vCenter, configures vSAN as the storage layer, deploys NSX, and connects all components to each other. What would take a skilled infrastructure team days to configure manually can be completed in hours through the SDDC Manager bring-up wizard, with hardware validation providing confidence before any software is deployed.
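The bring-up process is driven by a deployment specification that describes hosts, networks, and management components, which is validated before any software is deployed. The sketch below shows that idea in miniature: all field names are simplified for illustration and do not follow the real VCF JSON schema, and the pre-flight check mimics only two of the many validations bring-up performs.

```python
# Illustrative sketch of a bring-up specification and a pre-flight check.
# Field names are simplified; the real schema is defined by the VCF release.
bringup_spec = {
    "managementDomain": {
        "hosts": [
            {"fqdn": f"esxi-0{i}.corp.example", "vmkIp": f"10.0.0.1{i}"}
            for i in range(1, 5)  # management domain needs at least 4 hosts
        ],
        "vcenter": {"fqdn": "vc-mgmt.corp.example"},
        "nsx": {"managerVip": "nsx-mgmt.corp.example"},
        "vsan": {"datastoreName": "vsan-mgmt-01"},
    }
}

def validate_spec(spec: dict) -> list[str]:
    """Return validation errors, mimicking pre-bring-up checks."""
    errors = []
    hosts = spec["managementDomain"]["hosts"]
    if len(hosts) < 4:
        errors.append("management domain needs at least 4 hosts for vSAN")
    if len({h["fqdn"] for h in hosts}) != len(hosts):
        errors.append("duplicate host FQDNs")
    return errors
```

Validating the whole specification up front is what allows bring-up to fail fast before any host is touched.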

Expansion: After the management domain is operational, additional capacity is added as workload domains — separate clusters with their own vCenter instances, vSAN storage, and NSX segments. SDDC Manager automates the deployment of each new workload domain, applying the same validated architecture and enforcing the same standards as the initial bring-up.

Lifecycle management: SDDC Manager tracks the version of every VCF component across all domains and orchestrates upgrades in the correct sequence. When VMware releases a new VCF bundle, SDDC Manager downloads the bundle, validates compatibility, and applies the upgrade to each component in the defined order — avoiding the inter-component compatibility issues that arise when administrators upgrade components independently.
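The "correct sequence" is essentially a dependency ordering problem. The sketch below models it with a topological sort; the chain shown (SDDC Manager, then NSX, then vCenter, then ESXi) mirrors the usual VCF upgrade order but is illustrative rather than a statement of the exact order for any given release.

```python
from graphlib import TopologicalSorter

# Each component maps to the components that must be upgraded before it.
deps = {
    "nsx": {"sddc-manager"},
    "vcenter": {"nsx"},
    "esxi": {"vcenter"},
}

# static_order() yields an upgrade sequence that respects every dependency.
order = list(TopologicalSorter(deps).static_order())
```

Because the dependencies form a chain here, the order is unique; in general a topological sort guarantees only that no component is upgraded before its prerequisites.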

Workload Domains

VCF organises compute resources into workload domains, each of which is an isolated administrative boundary with its own vCenter Server, vSAN datastore, and NSX networking segment.

The management domain is created during initial bring-up. It hosts the VCF management components — SDDC Manager itself, the vCenter managing the management cluster, and the NSX management plane. No workload VMs run in the management domain under normal operation; it exists to provide a stable, isolated foundation for the management plane.

Virtual infrastructure (VI) workload domains are created after the management domain is operational. Each workload domain receives its own vCenter, its own vSAN cluster, and its own NSX segment, providing isolation between different tenants, business units, or workload categories. A large VCF deployment might have one workload domain for production workloads, another for development and testing, and a third for a specific business application — each with independently managed capacity and networking, but all lifecycle-managed from the same SDDC Manager.

NSX in VCF

NSX (formerly NSX-T) is a mandatory component of VCF — there is no VCF deployment without NSX. This is a significant architectural commitment: it means all network constructs in a VCF environment are software-defined overlays rather than physical VLANs. Physical switches still exist as underlay infrastructure, but VM-to-VM connectivity, firewall rules, routing, and load balancing are all implemented in software by NSX.

NSX provides several capabilities that become foundational in a VCF environment:

Overlay networking: NSX creates logical networks using Geneve encapsulation over the physical underlay. VMs on the same logical segment communicate as if they are on the same broadcast domain, regardless of which physical host they are running on. Adding a new network segment requires no physical switch configuration — it is created in NSX Manager and becomes available to VMs immediately.
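Creating a segment is a single declarative API call against NSX Manager's Policy API. The payload below is a minimal sketch: the transport zone path and segment name are assumptions for illustration, and the request shape may vary between NSX versions.

```python
import json

# Minimal sketch of an NSX Policy API payload defining an overlay segment.
# The transport zone path below is a hypothetical example.
payload = {
    "display_name": "web-segment",
    "transport_zone_path": (
        "/infra/sites/default/enforcement-points/default"
        "/transport-zones/overlay-tz"
    ),
    "subnets": [{"gateway_address": "10.20.0.1/24"}],
}

# The request would be sent along the lines of:
#   PATCH https://<nsx-manager>/policy/api/v1/infra/segments/web-segment
body = json.dumps(payload)
```

The point is that nothing in this payload references a physical switch: the segment exists entirely in the overlay, which is why it can be created and consumed immediately.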

Distributed firewall: NSX implements firewall rules at the vNIC level of every protected VM, distributed across all ESXi hosts in the environment. This is micro-segmentation — firewall enforcement happens at the source of traffic, not at a centralised gateway appliance, so lateral movement between VMs on the same segment is blocked even if those VMs are on the same host. A traditional perimeter firewall provides no protection against this east-west threat vector.
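A distributed firewall rule is expressed against groups rather than IP addresses, and NSX pushes the compiled rule to every relevant vNIC. The sketch below shows the general shape of a security policy in the Policy API model; the group and service paths are hypothetical placeholders.

```python
# Sketch of a distributed-firewall rule in the NSX Policy API model:
# one security policy in the "default" domain containing one rule.
# Group and service paths are hypothetical.
dfw_policy = {
    "display_name": "web-to-db",
    "category": "Application",
    "rules": [
        {
            "display_name": "allow-web-to-db-3306",
            "action": "ALLOW",
            "source_groups": ["/infra/domains/default/groups/web-vms"],
            "destination_groups": ["/infra/domains/default/groups/db-vms"],
            "services": ["/infra/services/MySQL"],  # assumed service path
            "scope": ["ANY"],  # enforce at every vNIC in scope
        }
    ],
}
```

Because the rule is defined in terms of groups, VMs inherit the policy as they join or leave a group, with no per-VM firewall configuration.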

Logical routing: NSX provides Tier-0 (T0) and Tier-1 (T1) gateways. T0 gateways connect the NSX overlay to the physical network (BGP or static routing). T1 gateways are tenant-level routers that connect logical segments to the T0. This two-tier model allows tenant-level routing to be provisioned and managed independently of the physical network edge.

Additional services: NSX also provides distributed load balancing, NAT, VPN (both IPsec and L2VPN), and DNS services — all defined in software and managed from NSX Manager.

NSX Manager is deployed as a cluster of three virtual appliances for high availability. All three nodes actively participate in the management and control plane; a cluster virtual IP (or an external load balancer) can present the trio behind a single address for UI and API access.

The Aria Suite

The Aria suite (formerly the vRealize suite) is VMware’s management and automation layer, designed to extend VCF with observability, analytics, log management, and self-service provisioning. Each product addresses a specific operational need:

Aria Operations (formerly vRealize Operations / vROps): A capacity management and performance analytics platform that connects to vCenter, NSX, vSAN, and other VMware products via API and continuously collects metrics. It applies machine-learning models to identify performance anomalies, forecast when resources will be exhausted, and recommend right-sizing actions for over-provisioned or under-utilised VMs. Its cost transparency module can assign a dollar value to each VM based on its resource consumption, enabling chargeback or showback reporting for internal tenants. Aria Operations is designed to operate autonomously — it generates workload balance recommendations that, in some modes, it can act on automatically via DRS integration.
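The right-sizing idea can be made concrete with a small calculation: size a VM to a high percentile of its observed demand plus headroom. The function below is an illustrative sketch of that logic; the percentile and headroom thresholds are assumptions, not Aria Operations' actual model.

```python
import math

def recommend_vcpus(samples_pct: list[float], current_vcpus: int,
                    percentile: float = 0.95, headroom: float = 0.2) -> int:
    """Recommend a vCPU count from observed CPU-utilisation samples (%).

    Takes the chosen percentile of utilisation, converts it to vCPUs of
    actual demand, adds headroom, and rounds up. Thresholds are illustrative.
    """
    ranked = sorted(samples_pct)
    idx = min(int(len(ranked) * percentile), len(ranked) - 1)
    demand_vcpus = ranked[idx] / 100 * current_vcpus
    return max(1, math.ceil(demand_vcpus * (1 + headroom)))
```

For example, an 8-vCPU VM whose 95th-percentile utilisation is 20% only demands about 1.6 vCPUs, so with 20% headroom the recommendation drops to 2 vCPUs, the classic over-provisioned candidate this analysis exists to find.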

Aria Operations for Networks (formerly vRealize Network Insight / vRNI): A network visibility tool that collects flow data from NSX, vSphere, and physical switches to build a complete map of traffic patterns in the environment. Its primary use cases are micro-segmentation planning (understanding which VMs communicate with each other before writing firewall rules) and network troubleshooting (tracing a specific flow to understand where it is being blocked or dropped). It is also valuable for validating that NSX distributed firewall rules are having the intended effect.
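Micro-segmentation planning boils down to answering "who talks to whom, on which port?" before a default-deny policy is switched on. The sketch below shows that aggregation step on made-up flow records, grouping observed sources per (destination, port) so each group suggests one allow rule.

```python
from collections import defaultdict

# Made-up flow records of the kind collected from NSX and vSphere:
# (source VM, destination VM, destination port).
flows = [
    ("web-01", "db-01", 3306),
    ("web-02", "db-01", 3306),
    ("web-01", "db-01", 3306),   # duplicate record, collapsed by the set
    ("app-01", "db-01", 5432),
]

# Group sources by (destination, port): each entry suggests one allow rule
# that must exist before default-deny is enabled.
required_rules: dict[tuple[str, int], set[str]] = defaultdict(set)
for src, dst, port in flows:
    required_rules[(dst, port)].add(src)
```

Running this over weeks of real flow data is what turns "we think only the web tier reaches the database" into a verifiable rule set.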

Aria Log Insight (formerly vRealize Log Insight / vRLI): A log aggregation and analysis platform that receives syslog from ESXi hosts, vCenter events, NSX logs, and guest VM logs, indexes them for full-text search, and applies content packs that translate raw log lines into structured, searchable events. It integrates with Aria Operations to correlate log events with performance anomalies — when Aria Operations detects a CPU spike, a single click can pull the corresponding log stream from the same time window in Aria Log Insight.

Aria Automation (formerly vRealize Automation / vRA): An IT automation and self-service cloud portal. Infrastructure teams define service blueprints — templates that describe a complete application stack including compute, storage, networking, and application configuration. Consumers (developers, project teams) request deployments from the self-service catalog, and Aria Automation provisions the infrastructure automatically. It supports multi-cloud deployments across vSphere, AWS, Azure, and Google Cloud, and integrates with configuration management tools like Ansible and Terraform for full infrastructure-as-code workflows.
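A self-service request is ultimately an API call carrying the consumer's chosen inputs against a published catalog item. The sketch below shows that shape; the endpoint, project ID, and input names are assumptions for illustration, with the inputs mapping to whatever parameters the blueprint declares.

```python
import json

# Hypothetical catalog request payload for a blueprint deployment.
# Project id and input names are placeholders, not a real tenant's values.
request = {
    "deploymentName": "dev-web-stack-01",
    "projectId": "project-dev",
    "inputs": {"cpuCount": 2, "memoryGB": 8, "environment": "dev"},
}

# The request would be POSTed to the catalog item's request endpoint,
# along the lines of:
#   POST https://<aria-automation>/catalog/api/items/<item-id>/request
body = json.dumps(request)
```

The separation matters operationally: the infrastructure team owns the blueprint and its guardrails, while consumers only ever supply the declared inputs.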

vSphere with Tanzu

VCF includes vSphere with Tanzu, VMware’s integration of Kubernetes into the vSphere platform. The supervisor cluster transforms a vSphere cluster into a Kubernetes control plane: ESXi hosts act simultaneously as VMware hypervisors for traditional VMs and as Kubernetes nodes for containerised workloads.

Within the supervisor cluster, administrators create vSphere Namespaces — administrative units that map directly to Kubernetes namespaces and carry resource quotas, storage policies, and permission assignments. Developers request Tanzu Kubernetes Clusters (TKCs) within a namespace to get a dedicated Kubernetes cluster, managed by vSphere and consuming vSphere resources (compute from the cluster, storage from vSAN, networking from NSX). Alternatively, developers can deploy individual vSphere Pods — Kubernetes pods that run directly in the ESXi hypervisor rather than inside a guest VM, providing stronger isolation than a containerised workload in a shared VM.
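A TKC request is itself a Kubernetes resource that a developer applies inside their vSphere Namespace. The sketch below shows the Python equivalent of such a manifest; the API version, VM class names, and storage class are placeholders, and the schema is simplified relative to any specific release.

```python
# Sketch of a TanzuKubernetesCluster manifest, expressed as the Python
# equivalent of the YAML a developer would `kubectl apply`. Names and the
# API version are placeholders; the schema varies by release.
tkc = {
    "apiVersion": "run.tanzu.vmware.com/v1alpha1",
    "kind": "TanzuKubernetesCluster",
    "metadata": {"name": "team-a-cluster", "namespace": "team-a"},
    "spec": {
        "distribution": {"version": "v1.23"},  # placeholder K8s version
        "topology": {
            "controlPlane": {"count": 3, "class": "best-effort-small",
                             "storageClass": "vsan-default-policy"},
            "workers": {"count": 3, "class": "best-effort-medium",
                        "storageClass": "vsan-default-policy"},
        },
    },
}
```

Everything the manifest references, VM classes for sizing, a storage class backed by vSAN, networking from NSX, is supplied by the namespace's quotas and policies, which is how the platform team keeps self-service bounded.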

vSphere with Tanzu requires vSAN or compatible shared storage, and either NSX or VDS-based networking. In VCF, both dependencies are always satisfied.

Summary

VMware Cloud Foundation represents a significant operational model shift from assembling vSphere components independently to deploying and managing them as an integrated, validated stack. SDDC Manager automates bring-up and lifecycle management, reducing the coordination overhead that grows with the number of independently managed components. Workload domains provide administrative isolation without requiring separate physical infrastructure. NSX provides the mandatory software-defined networking layer that enables micro-segmentation and self-service networking. The Aria suite layers observability, automation, and self-service onto the foundation, addressing the operational management challenges of a large-scale SDDC. Together, these components represent VMware’s vision of what a consistently managed, fully software-defined on-premises data centre looks like at enterprise scale.