Proxmox VE — Overview and Architecture

What Proxmox VE is — Type 1 hypervisor combining KVM and LXC on Debian Linux, its feature set, subscription tiers, and where it fits in the virtualisation landscape.

What Is Proxmox VE?

Proxmox Virtual Environment (VE) is an open-source, bare-metal Type 1 hypervisor built on top of Debian Linux. Rather than shipping a custom operating system kernel like some proprietary platforms, Proxmox sits directly on Debian and inherits its stability, package ecosystem, and long-term support model. This design means you are never starting from scratch with an unfamiliar environment — it is Debian underneath, with Proxmox layered on top.

What makes Proxmox distinctive among Type 1 hypervisors is that it unifies two entirely different virtualisation technologies into a single management platform:

- KVM (Kernel-based Virtual Machine): full hardware virtualisation, able to run any guest operating system (Linux, Windows, BSD) in its own virtual machine with its own kernel.
- LXC (Linux Containers): OS-level virtualisation for Linux workloads that share the host kernel, trading some isolation for much higher density.

Both are managed through the same web interface, on the same host, with unified storage, networking, backup, and firewall policies.

The Web GUI

All Proxmox management is done through a built-in HTML5 web GUI accessible at:

https://<node_ip>:8006

No third-party management tool is needed. From day one, every operation — creating VMs, configuring storage, managing cluster membership, scheduling backups, defining firewall rules — is reachable through the browser. The GUI is organised hierarchically:

- The Datacenter level manages cluster-wide settings.
- Each node level shows per-host resources and configuration.
- Each VM/container level shows hardware, snapshots, backup history, and firewall rules for that workload.

Feature Set

| Feature | Description |
| --- | --- |
| KVM | Full hardware virtualisation; requires Intel VT-x or AMD-V CPU extensions |
| LXC | OS-level container virtualisation; shares host kernel; higher density |
| Web GUI | HTML5 interface at port 8006; no separate management software required |
| Built-in Firewall | iptables-based; three-tier hierarchy (Datacenter, host, VM/container) |
| Open vSwitch | Enterprise virtual switch; installed optionally via apt-get install openvswitch-switch |
| Storage Plugins | Directory, iSCSI, LVM, LVM-Thin, NFS, ZFS, Ceph RBD, GlusterFS |
| HA Manager | Automated VM failover on node failure; uses Corosync quorum and fencing |
| Backup (vzdump) | Snapshot, suspend, and stop backup modes; LZO/gzip/pigz compression |
| VM Replication | ZFS-backed incremental replication to remote nodes (pvesr) |
| Clustering | Multi-node cluster with shared config via pmxcfs; live migration |
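
The vzdump backup modes listed above can be driven from the shell as well as the GUI. A minimal sketch, assuming a hypothetical VM with ID 100 and a storage named local (both placeholders for your environment):

```shell
# Snapshot-mode backup of VM 100 with LZO compression (lowest guest impact).
vzdump 100 --mode snapshot --compress lzo --storage local

# The other two modes trade downtime for stronger consistency:
#   --mode suspend   pauses the guest for the duration of the backup
#   --mode stop      shuts the guest down, backs up, then restarts it
```

Snapshot mode is the usual default for running workloads; stop mode gives the most consistent image at the cost of an outage.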

KVM vs LXC — When to Use Each

The choice between KVM and LXC is driven by isolation requirements and guest OS compatibility.

Use KVM when:

- the guest runs a non-Linux operating system (Windows, BSD) or needs its own kernel;
- the workload requires strong isolation from the host and from other guests;
- you need full-virtualisation features such as arbitrary guest kernels or emulated hardware.

Use LXC when:

- the workload is Linux and can run on the shared host kernel;
- density and low overhead matter: containers start faster and consume less RAM than full VMs;
- the weaker isolation of a shared kernel is acceptable for the workload.

LXC replaced the older OpenVZ container system starting with Proxmox VE 4.0. Unlike OpenVZ, which required a custom kernel, LXC runs on the standard Linux kernel, making it compatible with any Proxmox node without kernel modifications.
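
On the command line, the two workload types are managed by separate tools: qm for KVM virtual machines and pct for LXC containers. A minimal sketch, in which the IDs, names, ISO and template filenames, and the storage name local are all hypothetical placeholders:

```shell
# KVM: a full VM with its own kernel, here booting a Windows installer ISO.
qm create 100 --name win-guest --memory 4096 --cores 2 \
  --cdrom local:iso/windows.iso --net0 virtio,bridge=vmbr0

# LXC: a container created from a Debian template, sharing the host kernel.
pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname web01 --memory 1024 --net0 name=eth0,bridge=vmbr0,ip=dhcp
```

Both commands register the workload in the same cluster configuration, so the result appears in the web GUI exactly as if it had been created there.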

Subscription Tiers and Repositories

Proxmox VE is entirely free to download and use. However, the package repositories are split across three tiers that reflect different levels of testing and support:

| Repository | Package Source | Use Case |
| --- | --- | --- |
| Enterprise | pve-enterprise | Paid subscription required; packages go through the most comprehensive QA before release; the correct choice for production |
| No-Subscription | pve-no-subscription | Free; suitable for home labs and small business; slightly less tested |
| Test | pvetest | Bleeding-edge pre-release packages; never use in production |

The subscription page in the GUI (Datacenter → Node → Subscription) shows the current key status, server ID, socket count, last check date, and renewal date. Without a subscription key, the Enterprise repository will refuse connections with a 401 Unauthorized error, and you must switch to the No-Subscription repository.

Post-installation repository configuration is done by editing /etc/apt/sources.list.d/pve-enterprise.list: comment out the enterprise line if you have no subscription key, add (or verify) the repository line for the tier you intend to use, then run apt update && apt dist-upgrade.
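
A hedged sketch of switching a host without a subscription key to the No-Subscription repository (assumes Debian 12 "bookworm" as the base release; substitute your codename):

```shell
# Disable the enterprise repository (it answers 401 without a key).
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Enable the no-subscription repository instead.
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-no-subscription.list

apt update && apt dist-upgrade
```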

Hardware Requirements

For KVM virtual machines, the host CPU must support hardware virtualisation extensions:

- Intel VT-x (the vmx CPU flag) on Intel processors
- AMD-V (the svm CPU flag) on AMD processors

These extensions must be enabled in the system BIOS/UEFI. Without them, KVM acceleration is unavailable and VMs either will not start or will run at extremely reduced performance.
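
You can check for the extensions from a running Linux system by looking for the vmx (Intel VT-x) or svm (AMD-V) flag in /proc/cpuinfo; this works on any Linux host, not just Proxmox:

```shell
# Look for the VT-x (vmx) or AMD-V (svm) CPU flag; no match means the
# extensions are absent or disabled in the BIOS/UEFI.
if grep -E -q 'vmx|svm' /proc/cpuinfo; then
  echo "KVM hardware acceleration available"
else
  echo "no VT-x/AMD-V: check BIOS/UEFI settings"
fi
```

Note that the flag disappears when virtualisation is disabled in firmware, so this check also catches BIOS/UEFI misconfiguration.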

LXC containers have no such requirement since they share the host kernel and do not emulate hardware.

For production environments, the book Mastering Proxmox provides concrete hardware sizing recommendations beyond these minimums.

Comparison with ESXi and Hyper-V

| Aspect | Proxmox VE | VMware ESXi | Microsoft Hyper-V |
| --- | --- | --- | --- |
| Cost | Free (subscription optional) | Paid (free tier removed) | Included with Windows Server |
| Base OS | Debian Linux | ESXi (custom) | Windows Server |
| Container support | LXC (native) | None (separate vSphere) | None natively |
| ZFS support | Native | None | None |
| Ceph support | Native | None (vSAN is a separate product) | None |
| Cluster management | Built-in GUI | vCenter (separate) | SCVMM (separate) |
| Live migration | Yes (shared storage) | vMotion (requires vCenter) | Live Migration |

The key differentiator is that Proxmox ships with ZFS and Ceph integration out of the box, at no extra cost, with no separate management server required. A three-node Proxmox cluster with Ceph storage and HA failover is a complete, enterprise-capable stack built entirely from open-source components.

Key Use Cases

Proxmox VE’s position in the market is as the practical open-source alternative to vSphere: production-capable, actively maintained by Proxmox Server Solutions GmbH, and free of the licensing overhead that VMware’s pricing after the Broadcom acquisition has introduced. Typical deployments range from single-node home labs on the No-Subscription repository to multi-node production clusters combining Ceph storage and HA failover on the Enterprise repository.