
VMware vSphere – CPU Hot‑Plug and Memory Hot‑Add Support

  • October 25, 2024
  • 20 min read
Vitalii has been a Post-Sales Support Engineer at StarWind for about 2 years. He has broad knowledge of storage, virtualization, backup, and infrastructure implementation, and plays ping pong as a hobby.

Overview and Requirements

CPU Hot-Plug and Memory Hot-Add are vSphere features that allow adding virtual CPUs and RAM to a running VM without downtime. These features have been available since ESXi 4.0, but they are disabled by default due to certain overhead and limitations. Key requirements and general limitations include:

  • Virtual Hardware Version 7 or higher: VMs must use hardware version 7+ to enable hot-add/hot-plug​.
  • vSphere Licensing: A vSphere edition that supports hot-add (Enterprise or higher on older versions) is required. (vSphere 8 Standard supports it; earlier “Advanced” edition and below did not).
  • Add-Only (No Hot-Remove): You can add vCPUs or RAM on the fly, but cannot remove them without a reboot​. Hot-plugged CPUs and hot-added memory become a permanent part of the VM until power cycle.
  • vNUMA Incompatibility: Enabling CPU Hot-Plug disables virtual NUMA topology for that VM​. (NUMA still works, but vNUMA presentation is turned off when CPU hot-plug is on.)
  • vCPU Core Count Fixed: You cannot change the number of cores per vCPU socket while the VM is running. The core-per-socket configuration set at boot remains in effect.
  • Potential Kernel Overhead: Memory Hot-Add causes the guest OS to reserve some resources in anticipation of added RAM. This can slightly reduce available kernel memory (paged pool) even if you never add RAM. The impact is minimal (a few percent).
  • Legacy Limits (<3 GB RAM): On some older OS (e.g. 32-bit Windows, early 64-bit Linux), hot-add may not work if the VM started with <3 GB of RAM, or may only allow adding up to 3 GB​. (This is generally not an issue for modern 64-bit OS with more memory.)

To enable these features, shut down the VM and check “Memory Hot Add” and/or “CPU Hot Plug” in the VM’s settings. Once enabled, the guest OS must also support hot-plugging for changes to take effect live​. Below, we examine support in various guest operating systems as of vSphere 8.0 Update 3 (2024).
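
If you prefer to script these changes, the same settings can be toggled from the command line. Below is a minimal sketch using the govc CLI from the govmomi project; the VM name app01 is just an example, and flag spellings can differ between govc releases, so verify them with govc vm.change -h before relying on this:

# Enable hot-add while the VM is powered off (flag names assumed from recent govc releases)
govc vm.power -off app01
govc vm.change -vm app01 -cpu-hot-add-enabled=true -memory-hot-add-enabled=true
govc vm.power -on app01

# Later, with the VM running, grow it without downtime: 4 vCPUs and 8 GB (8192 MB) of RAM
govc vm.change -vm app01 -c 4 -m 8192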

Guest OS Support and Behavior

Guest OS support is critical – if the OS doesn’t handle dynamic CPU/RAM changes, you will not see the benefit (the new vCPU or RAM might remain unusable until reboot). The VMware Compatibility Guide confirms which OS are officially supported for hot-add/hot-plug, and we supplement that with observed behavior in each OS. All tests assume VMware Tools (or open-vm-tools) are installed in the guest for best compatibility.

Windows Server 2019 and 2022

Windows Server 2019 and Windows Server 2022 fully support CPU hot-plug and memory hot-add (no reboot required), provided the edition supports the feature. In modern Windows Server releases, both Standard and Datacenter editions allow adding memory or vCPUs online​. Official VMware data shows Hot-Add Memory and Hot-Add vCPU are supported for Windows Server 2022 (Standard, Datacenter, Essentials) on vSphere 8.0 U3. In practice, when you hot-add resources to a supported Windows VM:

  • The new vCPUs are detected immediately by the OS. You may need to refresh Task Manager to see the updated CPU count​. No reboot is needed to utilize the additional processors.
  • The added memory is immediately available to the OS. The system recognizes the higher RAM in tools like Task Manager or systeminfo without a reboot.

Previous tests on Windows Server 2012 R2/2016 showed smooth operation – all added vCPUs came online and the RAM was usable with no reboot. Windows treats these additions as if new hardware were inserted and handles them gracefully. (Be aware that older releases like Windows Server 2008 R2 required Enterprise/Datacenter editions for hot-add, but for 2019/2022 this distinction no longer applies according to VMware’s guide.)

Note: After enabling CPU Hot-Plug, the NUMA topology option will be grayed out in vCenter for that VM (since vNUMA is disabled). Also, ensure that any configured CPU or memory limits on the VM are adjusted or removed – a common mistake is hot-adding vCPU/RAM while a vSphere resource limit keeps capping what the guest can actually use.

Linux Distributions

Linux support for CPU and memory hotplug varies by distribution and configuration. All modern 64-bit Linux kernels (e.g. 5.x series) include kernel support for CPU and memory hot-add, and VMware lists these OS as supporting the features​. However, whether the resources become usable immediately can depend on user-space settings (e.g. udev rules) and distro defaults. We evaluate each major distro:

RHEL 9 / CentOS Stream 9 / Rocky Linux 9

Red Hat Enterprise Linux 9 and its clones (CentOS Stream 9, Rocky Linux 9, AlmaLinux 9) have full support for hot-add. VMware’s compatibility guide shows RHEL 9 and Rocky 9 both support Hot-Add Memory and vCPU on vSphere 8.0 U3​. In RHEL-based guests, when you hot-plug CPUs or add memory:

  • The Linux kernel will detect the new CPU(s) and, by default, auto-online them in most enterprise distributions. This means the additional vCPUs show up in /proc/cpuinfo and are schedulable without manual intervention. (Historically, RHEL/CentOS auto-onlined hot-added CPUs and memory via udev rules or systemd services, whereas some other distros did not​.)
  • Newly added memory is also recognized by the kernel. In RHEL 8/9, the kernel memory hotplug mechanism typically onlines the new RAM sections automatically (or can be configured to do so). The total available memory seen in /proc/meminfo increases immediately when you hot-add RAM, and it can be used by applications right away​.

Community tests from earlier versions showed CentOS (RHEL) kernels apply Hot-Add/Hot-Plug changes immediately – all added vCPUs come online and RAM is usable without a reboot. Our 2024 expectation is the same for RHEL 9 family: no manual steps needed. These enterprise Linux guests are designed for dynamic scaling, so both features “just work” out-of-the-box (open-vm-tools will update VM info, etc.).
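
As a quick sanity check after a hot-add, the commands below (run inside the RHEL 9-family guest as root) confirm that the new resources were onlined automatically; this is only a minimal sketch:

# Schedulable CPU count should already reflect the hot-added vCPUs
nproc
# Total memory should already include the hot-added RAM (value in kB)
grep MemTotal /proc/meminfo
# List anything that is still offline – expect no output on RHEL-family guests
grep -l 0 /sys/devices/system/cpu/cpu*/online
grep -l offline /sys/devices/system/memory/memory*/state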

Ubuntu Server 22.04 LTS and Debian 12 (Bookworm)

Ubuntu 22.04 LTS and Debian 12 (which share similar kernel lineage) are officially supported for hot-add vCPU and RAM in vSphere 8. In practice, however, these distros do not automatically online the new CPUs or memory by default. This leads to some quirks:

  • When you hot-plug new vCPUs on Ubuntu/Debian, the OS kernel detects the CPU, but leaves it in an “offline” state. The extra CPU cores will not be scheduled until you explicitly bring them online (e.g. via echo 1 > /sys/devices/system/cpu/cpu<N>/online for each new CPU) or reboot the VM​. In tests, after adding vCPUs, Ubuntu allowed the hot-plug (no errors), but the new CPUs remained inactive until a reboot or manual online action​.
  • When you hot-add memory on Ubuntu or Debian, the new RAM is not added to the usable pool automatically. The OS will continue to show the old memory amount until the VM is rebooted. Essentially, Ubuntu/Debian do not auto-online hot-added memory blocks by default, so you won’t see the additional RAM in free/top output unless you take manual steps. (It is possible to online the memory manually through sysfs, or to automate it with a udev rule or the memhp_default_state=online kernel parameter, but out of the box the memory stays offline – see the sketches after this list.)
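
If a reboot is not an option, the offline resources can be brought online by hand from inside the guest. A minimal sketch (run as root; these are the standard sysfs paths, though behavior may differ slightly between kernel versions):

# Bring any hot-plugged vCPUs online
for cpu in /sys/devices/system/cpu/cpu[0-9]*/online; do
    echo 1 > "$cpu"
done
# Bring hot-added memory blocks online (skip blocks that are already online)
for mem in /sys/devices/system/memory/memory*/state; do
    grep -q offline "$mem" && echo online > "$mem"
done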

In summary, Ubuntu 22.04 and Debian 12 support hot-add in theory but not in an automated way. VMware still marks the feature as supported (the VM won’t crash and the hypervisor shows the new resources), but administrators must intervene to make the OS use the added resources or simply reboot the VM to have it pick up the changes​. This is a known phenomenon on these distros​.
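
To make the onlining automatic on Ubuntu/Debian, you can ship a udev rule similar to the one RHEL provides. The sketch below writes an example rules file (the file name is arbitrary) and reloads udev:

cat <<'EOF' > /etc/udev/rules.d/80-hotplug-online.rules
SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"
EOF
udevadm control --reload-rules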

SUSE Linux Enterprise Server 15 SP5

SUSE Linux Enterprise 15 SP5 fully supports both CPU and memory hot-plug, and it handles them gracefully and immediately. VMware lists SLES 15 (all SP levels) as supporting Hot-Add vCPU and Memory​. In SLES (and OpenSUSE), the default configuration will online new CPUs and memory automatically:

  • Hot-added vCPUs appear instantly and are available to the scheduler. SUSE’s tools will show the updated CPU count right away (no reboot needed). In tests with SLES 12, all vCPUs came online immediately and were “up and running” with no manual steps – the same holds for SLES 15.
  • Hot-added memory is immediately usable by the system. SUSE automatically onlines the new memory and updates /proc/meminfo. One benefit noted in tests was that SUSE’s resource monitors (e.g. top) reflected the new RAM without even needing to restart the monitoring tool​. The OS dynamically refreshed its memory statistics as soon as the hypervisor added RAM.

In short, SLES 15 SP5 behaves as expected: both features work “live” with no special intervention. This makes SUSE a good choice for scenarios requiring frequent hot scaling. (Ensure open-vm-tools is up to date for best results, as SUSE coordinates with VMware on these features​.)
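
For the curious, the policy SLES relies on can be inspected from inside the guest with a few read-only commands (a quick sketch, safe to run at any time):

# "online" (or "online_movable") means hot-added RAM is onlined automatically
cat /sys/devices/system/memory/auto_online_blocks
# Kernel default for newly added memory blocks, if set on the boot command line
grep -o 'memhp_default_state=[a-z_]*' /proc/cmdline
# Current CPU count and memory total as seen by the guest
nproc; grep MemTotal /proc/meminfo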

FreeBSD 13 and 14

FreeBSD 13 and FreeBSD 14 do not support CPU hot-plug or memory hot-add in the guest OS. FreeBSD’s ACPI and kernel subsystems currently only detect CPU and memory at boot time. VMware’s compatibility guide does not list Hot-Add support for FreeBSD – the support details omit any mention of hot-add features​.

What happens if you try it? As earlier experiments showed, vSphere will allow you to toggle the CPU/memory while the VM is on, but FreeBSD will simply ignore the added vCPU or RAM until next reboot. There are no error messages or crashes; the hypervisor “accepts” the configuration change, but inside the FreeBSD guest nothing changes:

  • Added vCPUs will not show up in sysctl hw.ncpu or top until after a reboot (FreeBSD 13/14 lack a mechanism to online them live).
  • Added memory will not be added to the FreeBSD kernel’s memory allocator until reboot. FreeBSD will continue reporting the old RAM size until a restart (see the quick check after this list).
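
A quick way to confirm this from inside a FreeBSD guest is to capture the values below before and after the hot-add – they will not change until the VM is rebooted:

# Number of CPUs the FreeBSD kernel detected at boot
sysctl hw.ncpu
# Physical memory known to the kernel, in bytes
sysctl hw.physmem
# No new CPU/memory attach events will appear here after the hot-add
dmesg | tail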

In essence, FreeBSD does not support hot-add/hot-plug at all as of versions 13 and 14. If you need to give a FreeBSD VM more RAM or CPU, you’ll still have to schedule downtime to power cycle the VM.

Conclusion

Live VM reconfiguration is a practical feature, especially when working with test environments or scaling workloads dynamically. Being able to adjust CPU and RAM allocations without shutting down a VM is a clear productivity boost.

In this article, we looked at how Hot-Add and Hot-Plug perform across modern Windows and Linux environments in vSphere 8.0 U3. The decision to enable these features depends on your use case and OS support. Some systems apply changes instantly, others require extra steps, and a few need a reboot.

The table below summarizes which guest operating systems allow you to add vCPUs and memory without a reboot, and where limitations still exist.

 

| Guest OS | CPU Hot-Plug | Memory Hot-Add | Behavior / Notes |
| --- | --- | --- | --- |
| Windows Server 2022 | Yes | Yes | Fully supported in Standard/Datacenter editions. New CPUs/RAM available immediately (refresh Task Manager to see the updates). |
| Windows Server 2019 | Yes | Yes | Fully supported (Standard/Datacenter). No reboot needed – the OS recognizes added vCPUs and RAM on the fly. |
| Ubuntu 22.04 LTS | Partial | No* | Officially supported, but added vCPUs stay offline until manually onlined or rebooted. Added RAM is not usable until a reboot (or manual onlining). |
| Debian 12 (Bookworm) | Partial | No* | Officially supported, with the same behavior as Ubuntu: CPUs hot-plug but start offline; memory hot-add requires a reboot (or manual onlining) to take effect. |
| RHEL 9 / Rocky 9 | Yes | Yes | Fully supported (RHEL 9.x, Rocky 9.x). New vCPUs and RAM are onlined immediately by the OS (no manual intervention). |
| CentOS Stream 9 | Yes | Yes | Same as RHEL 9 (kernel 5.14+). Hot-added resources are auto-detected and onlined; earlier CentOS tests showed instant application of hot-add changes. |
| SLES 15 SP5 | Yes | Yes | Fully supported. SUSE auto-onlines new CPUs and memory; changes apply immediately with no reboot (all CPUs up, RAM added live). |
| FreeBSD 13 / 14 | No | No | Not supported by the FreeBSD OS. vSphere allows the hot-add, but added CPU cores remain unusable and extra memory isn’t recognized until a reboot. |

*No automatic memory onlining; a manual online step or a reboot is required.

Fin.

Hey! Found Vitalii’s article helpful? Looking to deploy a new, easy-to-manage, and cost-effective hyperconverged infrastructure?
Alex Bykovskyi, StarWind Virtual HCI Appliance Product Manager
Well, we can help you with this one! Building a new hyperconverged environment is a breeze with StarWind Virtual HCI Appliance (VHCA). It’s a complete hyperconverged infrastructure solution that combines hypervisor (vSphere, Hyper-V, Proxmox, or our custom version of KVM), software-defined storage (StarWind VSAN), and streamlined management tools. Interested in diving deeper into VHCA’s capabilities and features? Book your StarWind Virtual HCI Appliance demo today!