In today’s topic, I’d like to talk about the Meltdown and Spectre vulnerabilities. Not about the harm they cause, though, since that has been covered widely in numerous articles, but about how the Microsoft patches intended to protect you from these vulnerabilities affect hardware performance (if they do at all). Before we take a deep dive into the tests and numbers, let me say a few words about Meltdown and Spectre and outline the testing scope, so that we speak the same language.
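Before trusting any benchmark numbers, it is worth checking that the patches are actually in effect on the machine under test. Microsoft publishes the SpeculationControl module on the PowerShell Gallery for exactly this purpose; a quick check (shown here as a sketch, not part of the original test procedure) looks like this:

```powershell
# SpeculationControl is Microsoft's module for reporting which
# Meltdown/Spectre mitigations the OS has enabled on this hardware.
Install-Module SpeculationControl -Scope CurrentUser
Get-SpeculationControlSettings
# The output reports the state of the branch target injection
# (CVE-2017-5715, Spectre v2) and rogue data cache load
# (CVE-2017-5754, Meltdown) mitigations.
```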
The Idea Behind Node Fairness in Hyper-V: How It Works and Why You Need It
For quite a long time, System Center Virtual Machine Manager (SCVMM) has offered a feature called Dynamic Optimization. Its main goal is to automatically rebalance VMs between the participating cluster nodes when their placement becomes unequal. Now, this feature has partially become available in Windows Server 2016 in the form of Node Fairness. It balances the workloads among the hosts in a Hyper-V Failover Cluster and automatically live migrates guests from an overloaded node to a less busy one with zero downtime.
Node Fairness comes embedded in Windows Server 2016 and is intended for deployments without SCVMM. SCVMM Dynamic Optimization delivers more versatile functionality than Node Fairness, which is why Dynamic Optimization is the recommended way to balance workloads among cluster hosts. However, to use that feature, you need an additional license on top of the main operating system.
Now that we know what Node Fairness is, let’s take a look at how this service works.
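For reference, Node Fairness is driven by two failover cluster common properties, AutoBalancerMode and AutoBalancerLevel. Here is a minimal PowerShell sketch of enabling and tuning it, assuming the cluster is already formed (the mode and level values follow Microsoft’s documentation for Windows Server 2016):

```powershell
# AutoBalancerMode: 0 = disabled, 1 = balance only when a node joins,
#                   2 = balance on node join and every 30 minutes (default)
(Get-Cluster).AutoBalancerMode = 2

# AutoBalancerLevel: 1 = Low  (move VMs when a host exceeds ~80% load, default),
#                    2 = Medium (~70%), 3 = High (~60%)
(Get-Cluster).AutoBalancerLevel = 1

# Verify the current settings
Get-Cluster | Format-List AutoBalancerMode, AutoBalancerLevel
```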
Who’s got bigger balls? Testing NFS vs iSCSI performance. Part 4: testing NFS on Linux
In the previous article, I measured the performance of NFS vs iSCSI to find out which network protocol is faster as storage for virtual machines on VMware ESXi. Well, iSCSI beat NFS under all testing patterns. Additionally, I evaluated and compared the performance of the NFS client when connected to a Linux NFS server (the Ubuntu Server 17.10 distribution) and to a Windows Server 2016 one. According to the results, NFS server performance on Linux was higher than on Windows.
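The excerpt above doesn’t reproduce the setup commands, but for context, an NFS export is attached to an ESXi host as a datastore in a single step. A PowerCLI sketch, with the host name, NFS server address, and export path as placeholder assumptions:

```powershell
# Requires VMware PowerCLI (Install-Module VMware.PowerCLI).
# Host name, NFS server address, and export path are placeholders.
Connect-VIServer -Server "esxi-host.lab.local"

New-Datastore -VMHost (Get-VMHost -Name "esxi-host.lab.local") `
    -Nfs -Name "nfs-test-ds" `
    -NfsHost "192.168.0.20" -Path "/export/vms"
```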
Who’s got bigger balls? Testing NFS vs iSCSI performance. Part 3: test results
In the previous parts, I showed you the process of configuring the NFS and iSCSI protocols between our servers. So now, we’ve got everything ready to run our performance tests and finally find out which network protocol is faster as storage for virtual machines on VMware ESXi: NFS or iSCSI.
To benchmark iSCSI performance, I created a StarWind device on the server and connected it to the ESXi host over the iSCSI protocol. As for the OS for running further tests, I used Windows Server 2016.
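Connecting an ESXi host to an iSCSI target such as a StarWind device can also be scripted with VMware PowerCLI. A minimal sketch, with the host name and target address as placeholder assumptions:

```powershell
# Requires VMware PowerCLI (Install-Module VMware.PowerCLI).
# Host name and target address are placeholders.
Connect-VIServer -Server "esxi-host.lab.local"
$vmhost = Get-VMHost -Name "esxi-host.lab.local"

# Enable the software iSCSI adapter on the host
Get-VMHostStorage -VMHost $vmhost | Set-VMHostStorage -SoftwareIScsiEnabled $true

# Point the software iSCSI HBA at the StarWind target (dynamic discovery)
$hba = Get-VMHostHba -VMHost $vmhost -Type iScsi
New-IScsiHbaTarget -IScsiHba $hba -Address "192.168.0.10" -Type Send

# Rescan so the new device shows up as a datastore candidate
Get-VMHostStorage -VMHost $vmhost -RescanAllHba
```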