Performance: Data Locality

Lowers latency and boosts performance by keeping I/O on the same node as the data, reducing cross-node traffic.
Intro
More and more businesses are considering switching to all-NVMe configurations as the most future-proof and performant storage option. However, IOPS, throughput, and latency are bound to each other. Fast storage alone will not guarantee the best experience from the VM or application perspective if latency is high.
Problem
For TCP/IP-based protocols such as iSCSI and NVMe over TCP, latency on the interconnect fabric largely determines overall system performance, especially when these protocols are used in geographically distributed (stretched) clusters. This makes latency the major “bottleneck” of a system: no matter how fast the storage is, the data still must pass through RAM, CPU, and networking, and each layer adds its own latency. This negates much of the benefit of fast storage like NVMe.
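To make the point concrete, the sketch below models the I/O path as a chain of layers whose delays simply add up. All figures are hypothetical, order-of-magnitude placeholders (not StarWind benchmarks); they only illustrate how a round trip over a stretched-cluster fabric can dwarf the latency of the NVMe device itself.

```python
# Hypothetical, order-of-magnitude latencies in microseconds for each layer
# an I/O request passes through. Illustrative placeholders only.
NVME_DEVICE_US = 20        # assumed NVMe media access time
CPU_AND_RAM_US = 5         # assumed host-side processing (CPU, RAM copies)
LOCAL_PATH_US = 0          # no fabric hop when the data is on the same node
STRETCHED_FABRIC_US = 500  # assumed round trip over a stretched-cluster link

def total_latency_us(fabric_us: float) -> float:
    """Latency of one I/O: each layer adds its own delay on top of the media."""
    return NVME_DEVICE_US + CPU_AND_RAM_US + fabric_us

local = total_latency_us(LOCAL_PATH_US)
remote = total_latency_us(STRETCHED_FABRIC_US)
print(f"local path : {local:6.0f} us")
print(f"remote path: {remote:6.0f} us ({remote / local:.0f}x slower)")
```

With these assumed numbers, the fabric hop, not the NVMe device, accounts for almost all of the per-I/O latency.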
Solution
StarWind VSAN ensures that VMs and applications use the local path as the most optimal one for storage operations. This lowers the probability of taking a non-optimal path (over the interconnect fabric) for reads and writes, thereby reducing latency, which is vital for high-performance cluster systems. As a result, such a system can operate close to its full hardware potential.
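The behavior can be illustrated with a minimal path-selection sketch. This is not StarWind's actual MPIO logic; the node names, the `is_local` flag, and the latency values are assumptions used only to show the "prefer the local path, fall back to the fabric" idea described above.

```python
from dataclasses import dataclass

@dataclass
class StoragePath:
    node: str          # node serving this path (hypothetical name)
    is_local: bool     # True if the data replica sits on the same node as the VM
    latency_us: float  # estimated path latency (illustrative value)

def pick_path(paths: list[StoragePath]) -> StoragePath:
    """Prefer a local path; otherwise fall back to the lowest-latency
    path over the interconnect fabric."""
    local = [p for p in paths if p.is_local]
    candidates = local if local else paths
    return min(candidates, key=lambda p: p.latency_us)

# Usage: a VM running on node-a routes its I/O to the local replica.
paths = [
    StoragePath(node="node-a", is_local=True, latency_us=25),
    StoragePath(node="node-b", is_local=False, latency_us=530),
]
print(pick_path(paths).node)  # -> node-a
```

The fabric path remains available as a fallback, so losing the local replica degrades latency rather than availability.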
Conclusion
“Data locality” keeps most of the I/O for each VM or application on the local node, ensuring low latency for performance-demanding workloads. Furthermore, “data locality” in StarWind VSAN makes sure that the system’s hardware resources are not wasted, and you get what you paid for.