With the release of vSphere 8.0 Update 3, VMware has introduced a new feature in tech preview – Memory Tiering over NVMe. This innovative capability allows users to leverage NVMe devices as an additional layer of memory, significantly enhancing the memory capacity and performance of their ESXi hosts. In this blog post, we’ll delve into the advantages of this setup and provide a step-by-step guide on how to configure the feature.
As you know, workloads run fastest when their working set is held in memory. By extending memory capacity beyond DRAM, this feature lets you keep more of that working set memory-resident, which can noticeably speed up your workloads.
Here is a quote from the VMware KB article about this feature:
“Memory tiering over NVMe optimizes performance by intelligently directing VM memory allocations to either NVMe devices or faster dynamic random access memory (DRAM) in the host and performing hot and cold memory page placements. This allows customers to increase their memory footprint, while increasing workload capacity and reducing the overall total cost of ownership (TCO).”
Also, “Memory Tiering is recommended for use by customers who are running specific workload types in test / lab environments and not for use in production environments.”
Advantages of Memory Tiering over NVMe
Increased Memory Capacity – Memory Tiering over NVMe enables the use of PCIe-based Flash NVMe devices as a secondary tier of memory. This results in a substantial increase in the available memory within the ESXi host. For instance, a system with 64GB of DRAM can see its memory capacity expand to a whopping 480GB by incorporating NVMe devices.
Optimized Performance – The feature intelligently directs VM memory allocations to either NVMe devices or faster dynamic random-access memory (DRAM) in the host. This optimization ensures that performance-critical workloads benefit from the speed of DRAM, while less critical workloads utilize the NVMe tier.
Cost Efficiency – By leveraging less expensive NVMe devices as memory, organizations can reduce their total cost of ownership (TCO). This setup allows for an increase in memory footprint and workload capacity without the need for costly DRAM upgrades.
Improved Workload Consolidation – Memory Tiering addresses core-to-memory imbalances, enabling better workload and VM consolidation. This means that more VMs can be run on a single host, maximizing resource utilization and efficiency.
Enhanced Flexibility for Homelabs – For homelab enthusiasts, this feature is a game-changer. It allows for the supercharging of consumer-grade systems, enabling the deployment of complex solutions like VMware Cloud Foundation (VCF) Holodeck on hardware that would otherwise be limited by DRAM capacity.
Configuring Memory Tiering over NVMe
To configure Memory Tiering over NVMe in vSphere 8.0 Update 3, follow these steps:
The configuration is done on a per-host basis via the CLI or PowerCLI, or via the vSphere Client (a quick CLI sketch follows the step list below).
Enabling Memory Tiering for a cluster and hosts requires the following steps:
- Identifying an NVMe device for each host to use as tiered memory.
- Configuring an NVMe device on each host to be used as tiered memory.
- Configuring each host to use Memory Tiering.
- Rebooting each host for Memory Tiering to take effect.
- Checking Memory Tiering has been correctly configured on each host.
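For the CLI path, here is a minimal sketch of enabling the feature on a single host over SSH. The MemoryTiering kernel setting name is taken from the VMware KB for this tech preview, so treat it as an assumption and double-check it against the KB PDF before running anything:

# Enable the Memory Tiering kernel setting (tech preview; verify the setting name in the KB)
esxcli system settings kernel set -s MemoryTiering -v TRUE
# Confirm the setting now reports TRUE
esxcli system settings kernel list -o MemoryTiering
# Reboot for the change to take effect (the host must be in maintenance mode,
# or simply reboot it from the vSphere Client instead)
esxcli system shutdown reboot -r "Enable Memory Tiering"

The NVMe tier device itself still needs to be created (see the SSH steps later in this post) before the extra capacity appears.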
In the vSphere Client, navigate to a cluster that you manage with a single image or to a cluster that you manage with baselines.
On the Configure tab, click Desired State > Configuration.
On the Draft tab, choose a method of creating a draft configuration for the cluster. Scroll down until you find the memory_tiering option, set it to True, and then click Save.
From the Configuration pane, verify that Memory Tiering is enabled in the configuration.
Run the pre-check on the cluster/hosts and apply the changes.
After the reboot, verify that the hardware overview widget shows Memory Tiering as Software.
Connect to each host via SSH.
You can then list the available tier devices with the command below (that's what I tested); after finding that there were none, I created a tier device.
Here are the commands:
esxcli system tierdevice list
esxcli system tierdevice create -d <the_nvme_disk>
Note: you can find the NVMe disk's identifier by going to Configure > Storage Devices on the host, or from the CLI as sketched below.
Here is the command screenshot:
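If you prefer to stay in the shell, a quick way to find candidate NVMe device identifiers is to filter the core storage device list; the grep filter is just a convenience and assumes the device description contains "NVMe":

# List storage devices and filter for NVMe entries
esxcli storage core device list | grep -i nvme
# Or list NVMe devices directly
esxcli nvme device list

Use the resulting device identifier (for example an eui.* or t10.* name) as the -d argument of the tierdevice create command above.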
Note: make sure to follow the detailed PDF that you can download from the VMware KB article. It contains step-by-step instructions for configuring the feature via CLI and PowerCLI, explains how to disable it when you no longer wish to test it, and covers potential error messages.
After rebooting, you should verify the hardware page:
Select host > Configure > Hardware > Overview
Tier 0 is the DRAM memory, while Tier 1 is the NVMe memory.
By default, hosts are configured to use a DRAM-to-NVMe ratio of 4:1, but this is configurable.
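From the shell, you can cross-check the same thing with the commands used earlier; the kernel setting query below assumes the same MemoryTiering setting name as in the CLI sketch above:

# The NVMe device should now be listed as a tier device
esxcli system tierdevice list
# The Memory Tiering kernel setting should report TRUE
esxcli system settings kernel list -o MemoryTiering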
The host advanced setting Mem.TierNvmePct sets the amount of NVMe to be used as tiered memory, expressed as a percentage of the total amount of DRAM. It accepts a value between 1 and 400; the default is 25.
Here is the command:
esxcfg-advcfg -s 400 /Mem/TierNvmePct
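To sanity-check the setting, you can read it back with the -g (get) flag, and the sizing math follows directly from the percentage definition above:

# Read the current value (percentage of DRAM used to size the NVMe tier)
esxcfg-advcfg -g /Mem/TierNvmePct
# Example: with 64 GB of DRAM, the default of 25 sizes the NVMe tier at 16 GB (~80 GB total),
# 100 would size it at 64 GB, and 400 at 256 GB, assuming the tier device is large enough.

As in the walkthrough above, reboot the host after changing the percentage for it to take effect.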
The soft recommendation is not to configure a larger NVMe tier than you have DRAM. After a reboot, we still have our 16 GB of RAM, but we now also have a 30 GB NVMe tier that is used as memory, giving us 46 GB of capacity on an ESXi host that normally runs with only 16 GB of RAM.
Here is the summary screenshot; note that this is just a nested lab used for testing.
Monitor Performance – Continuously monitor the performance of your VMs and workloads to ensure that the Memory Tiering feature is providing the expected benefits. Adjust the configuration as needed based on your specific use case and workload requirements.
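For a quick look from the host itself, esxtop is enough to watch overall memory pressure while you test; this is generic memory monitoring rather than a tiering-specific counter set:

# Interactive: run esxtop and press 'm' to switch to the memory view
esxtop
# Batch mode: sample every 10 seconds, 60 samples, for offline analysis
esxtop -b -d 10 -n 60 > /tmp/memtier-baseline.csv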
Final Words
The Memory Tiering over NVMe feature in vSphere 8.0 Update 3 is a powerful tool that offers significant advantages in terms of memory capacity, performance optimization, cost efficiency, and workload consolidation. By following the steps outlined above, you can configure and leverage this feature to enhance the capabilities of your ESXi hosts.
Whether you’re evaluating it for future production use or experimenting with a homelab, Memory Tiering over NVMe provides a flexible and cost-effective way to extend your memory capacity. As the feature is currently in tech preview, you should not run production environments on it yet, but it’s an exciting opportunity to explore its potential and provide feedback to VMware for future enhancements.