StarWind Virtual SAN: Configuration Guide for VMware vSphere [ESXi], VSAN Deployed as a Controller Virtual Machine (CVM) using PowerShell CLI
- July 31, 2024
Annotation
Relevant Products
This guide is applicable to StarWind Virtual SAN and StarWind Virtual SAN Free (Version V8 (Build 15260, CVM Version 20231016) and later).
For older versions of StarWind Virtual SAN (Version V8 (Build 15260, OVF Version 20230901) and earlier), please refer to this configuration guide: StarWind Virtual SAN: Configuration Guide for VMware vSphere [ESXi], VSAN Deployed as a Controller VM using PowerShell CLI
Purpose
This guide provides a comprehensive walkthrough of how to deploy and configure StarWind Virtual SAN within the VMware vSphere environment and create StarWind devices using the PowerShell CLI. It includes links to the system requirements, RAID settings, best practices, and steps to ensure a seamless setup and integration.
Audience
The guide is created for IT specialists, system administrators, and VMware professionals who plan to deploy and configure StarWind Virtual SAN on VMware vSphere.
Expected Result
Users will possess a robust understanding of the steps and best practices for deploying and configuring StarWind Virtual SAN in a VMware vSphere environment.
StarWind Virtual SAN for vSphere VM requirements
Prior to installing StarWind Virtual SAN Virtual Machines, please make sure that the system meets the requirements, which are available via the following link: https://www.starwindsoftware.com/system-requirements
Storage provisioning guidelines: https://knowledgebase.starwindsoftware.com/guidance/how-to-provision-physical-storage-to-starwind-virtual-san-controller-virtual-machine/
Recommended RAID settings for HDD and SSD disks:
https://knowledgebase.starwindsoftware.com/guidance/recommended-raid-settings-for-hdd-and-ssd-disks/
Please read StarWind Virtual SAN Best Practices document for additional information: https://www.starwindsoftware.com/resource-library/starwind-virtual-san-best-practices
Pre-Configuring the Servers
The diagram below illustrates the network and storage configuration of the solution:
1. ESXi hypervisor should be installed on each host.
2. StarWind Virtual SAN for vSphere VM should be deployed on each ESXi host from an OVF template, downloaded on this page: https://www.starwindsoftware.com/release-notes-build
3. The network interfaces on each node for Synchronization and iSCSI/StarWind heartbeat interfaces should be in different subnets and connected directly according to the network diagram above. Here, the 172.16.10.x subnet is used for the iSCSI/StarWind heartbeat traffic, while the 172.16.20.x subnet is used for the Synchronization traffic.
NOTE: Do not use iSCSI/Heartbeat and Synchronization channels over the same physical link. Synchronization and iSCSI/Heartbeat links can be connected either via redundant switches or directly between the nodes.
vCenter Server can be deployed separately on another host or as VCSA on StarWind VSAN highly-available storage, created in this guide.
Preparing Environment for StarWind VSAN Deployment
Configuring Networks
Configure network interfaces on each node to make sure that Synchronization and iSCSI/StarWind heartbeat interfaces are in different subnets and connected physically according to the network diagram above. All actions below should be applied to each ESXi server.
NOTE: A Virtual Machine port group should be created on both the iSCSI/StarWind Heartbeat and the Synchronization vSwitches. A VMKernel port should be created only for iSCSI traffic. Static IP addresses should be assigned to VMKernel ports.
NOTE: It is recommended to set MTU to 9000 on vSwitches and VMKernel ports for iSCSI and Synchronization traffic. Additionally, vMotion can be enabled on VMKernel ports.
1. Using the VMware ESXi web console, create two standard vSwitches: one for the iSCSI/StarWind Heartbeat channel (vSwitch1) and the other one for the Synchronization channel (vSwitch2).
2. Create a VMKernel port for the iSCSI/StarWind Heartbeat channel.
3. Add Virtual Machine port groups on the vSwitch for iSCSI traffic (vSwitch1) and on the vSwitch for Synchronization traffic (vSwitch2).
4. Repeat steps 1-3 for any other links intended for Synchronization and iSCSI/Heartbeat traffic on the ESXi hosts. The same configuration can also be scripted; see the PowerCLI sketch below.
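For administrators who prefer scripting the host networking, the following is a minimal PowerCLI sketch of steps 1-3, assuming VMware PowerCLI is installed and connected to the host; the host name, uplink vmnics, port group names, and IP address are placeholders to adjust to your environment:

# Minimal PowerCLI sketch (host name, vmnics, port group names, and IPs are placeholders)
$vmhost = Get-VMHost -Name "esxi-01.local"

# vSwitch1: iSCSI/StarWind Heartbeat channel with jumbo frames
$vsw1 = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch1" -Nic "vmnic1" -Mtu 9000
New-VirtualPortGroup -VirtualSwitch $vsw1 -Name "iSCSI for VMs"
# VMkernel port with a static IP (its port group is created by the cmdlet)
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vsw1 -PortGroup "iSCSI VMKernel" -IP "172.16.10.251" -SubnetMask "255.255.255.0" -Mtu 9000

# vSwitch2: Synchronization channel, VM port group only (no VMkernel port needed)
$vsw2 = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch2" -Nic "vmnic2" -Mtu 9000
New-VirtualPortGroup -VirtualSwitch $vsw2 -Name "Sync for VMs"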
Deploying StarWind Virtual SAN for vSphere
1. Download zip archive that contains StarWind Virtual SAN for vSphere: https://www.starwindsoftware.com/starwind-virtual-san#download
2. Extract the virtual machine files.
3. Deploy the controller virtual machine to VMware vSphere. Right-click the Datacenter, cluster, or node and select the “Deploy OVF Template…” option from the drop-down menu.
4. In the first step of the wizard, point to the location of the OVF template. Select the VM files and click Next.
5. Specify the VM name and target location.
6. Select a compute resource intended to run the StarWind vSAN CVM.
7. Review the template details. Click Next.
8. In the second step of the wizard, specify the virtual machine provisioning type, VM Storage Policy, and select the direct-attached storage for the appliance system drive. Click Next.
9. Select the destination network for each network adapter assigned to the VM.
The default naming for virtual switches:
- the Management virtual switch is “Management vSwitch”,
- the iSCSI virtual switch is “Data/iSCSI vSwitch”,
- the Synchronization virtual switch is “Replication/Sync vSwitch”.
Specify corresponding network connections according to your virtual network naming. Click Next.
10. Specify the hostname, static IPv4 address, gateway, DNS, and additional network settings for Management and iSCSI/Data network interfaces:
NOTE: To manage the StarWind appliances via the StarWind vCenter plugin, the static IPv4 address must be assigned.
NOTE: If a DHCP server is available on the given network, you can skip setting the additional parameters for that interface.
11. Review the deployment summary information and click Finish to start the VM creation.
12. Repeat the VM deployment on every other ESXi host. A scripted alternative is sketched after the notes below.
NOTE: In some cases, it’s recommended to reserve memory for StarWind VSAN VM.
NOTE: When using StarWind with the synchronous replication feature inside of a Virtual Machine, it is recommended not to make backups and/or snapshots of the Virtual Machine with the StarWind VSAN service installed, as this could pause the StarWind Virtual Machine. Pausing the Virtual Machines while the StarWind VSAN service is under load may lead to split-brain issues in synchronous replication devices, and thus to data corruption.
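As an alternative to the wizard, the deployment can be scripted with PowerCLI. A hedged sketch, assuming the OVF files were extracted locally; the paths, datastore, VM name, and the OVF network label used below are placeholders:

# Hedged PowerCLI sketch: deploy the extracted OVF template to a host
$vmhost = Get-VMHost -Name "esxi-01.local"
$ovfPath = "C:\StarWind\StarWindVSAN.ovf"

# Map the OVF networks to your port groups (the exact OVF network labels may differ;
# inspect $ovfConfig.NetworkMapping to see them)
$ovfConfig = Get-OvfConfiguration -Ovf $ovfPath
$ovfConfig.NetworkMapping.Management_vSwitch.Value = "VM Network"

Import-VApp -Source $ovfPath -VMHost $vmhost -Datastore (Get-Datastore -Name "local-ssd") -Name "SW1" -OvfConfiguration $ovfConfig -DiskStorageFormat Thick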
Initial Configuration Wizard
1. Start StarWind Virtual SAN CVM.
2. Launch VM console to see the VM boot process and get the IPv4 address of the Management network interface.
NOTE: In case the VM has not obtained an IPv4 address from a DHCP server, use the Text-based User Interface (TUI) to set up the Management network.
Default credentials for TUI: user/rds123RDS
3. Using the web browser, open a new tab and enter the VM IPv4 address to open StarWind VSAN Web Interface. Click “Advanced” and then “Continue to…”
4. StarWind VSAN web UI welcomes you, and the “Initial Configuration” wizard will guide you through the deployment process.
5. In the following step, upload the license file.
6. Read and accept the End User License Agreement to proceed.
7. Review or edit the Network settings and click Next.
NOTE: Static network settings are recommended for the configuration.
8. Specify the hostname for the virtual machine and click Next.
9. Create an administrator account. Click Next.
10. Review your settings selection before setting up StarWind VSAN.
11. Please stand by until the Initial Configuration Wizard configures StarWind VSAN for you.
12. The appliance is set and ready. Click on the Done button to install the StarWind vCenter Plugin right now or uncheck the checkbox to skip this step and proceed to the Login page.
13. Repeat the initial configuration on other StarWind CVMs that will be used to create 2-node or 3-node HA shared storage.
Add Appliance
To create 2-way or 3-way synchronously replicated highly available storage, add partner appliances that use the same license key.
1. Add StarWind appliance(s) in the web console, on the Appliances page.
NOTE: The newly added appliance will be linked to already connected partners.
2. Provide credentials of partner appliance.
3. Wait for connection and validation of settings.
4. Review the summary and click “Add appliance”.
Configure HA networking
1. Launch the “Configure HA Networking” wizard.
2. Select appliances for network configuration.
NOTE: The number of appliances to select is limited by your license, so it can be either two or three appliances at a time.
3. Configure the “Data” network. Select interfaces to carry storage traffic, configure them with static IP addresses in unique networks, and specify subnet masks:
- assign and configure at least one interface on each node
- for redundant configuration, select two interfaces on each node
- ensure interfaces are connected to client hosts directly or through redundant switches
4. Assign an MTU value to all selected network adapters, e.g. 1500 or 9000. Ensure the switches have the same MTU value set.
5. Click Next to validate Data network settings.
6. Configure the “Replication” network. Select interfaces to carry synchronization traffic, configure them with static IP addresses in unique networks, and specify subnet masks:
- assign and configure at least one interface on each node
- for redundant configuration, select two interfaces on each node
- ensure interfaces are connected between the appliances directly or through redundant switches
7. Assign an MTU value to all selected network adapters, e.g. 1500 or 9000. Ensure the switches have the same MTU value set.
8. Click Next to validate the Replication network settings.
9. Review the summary and click Configure.
Add physical disks
Attach physical storage to StarWind Virtual SAN Controller VM:
- Ensure that all physical drives are connected through an HBA or RAID controller.
- Deploy StarWind VSAN CVM on each server that will be used to configure fault-tolerant standalone or highly available storage.
- Store StarWind VSAN CVM on a separate storage device accessible to the hypervisor host (e.g., SSD, HDD).
- Add HBAs, RAID controllers, or NVMe SSD drives to the StarWind CVM as passthrough devices.
Learn more about storage provisioning guidelines in the KB article.
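For reference, attaching a PCI device to the CVM can also be scripted. A minimal PowerCLI sketch, assuming the device is already enabled for passthrough on the host and the CVM is powered off; the VM name and device filter are placeholders:

# Hedged sketch: attach a passthrough-capable PCI device (e.g., an HBA) to the CVM
$vmhost = Get-VMHost -Name "esxi-01.local"
$vm = Get-VM -Name "SW1"   # the StarWind CVM; must be powered off

# List passthrough devices on the host and pick the storage controller
$device = Get-PassthroughDevice -VMHost $vmhost -Type Pci | Where-Object { $_.Name -like "*LSI*" }
Add-PassthroughDevice -VM $vm -PassthroughDevice $device

# PCI passthrough requires full memory reservation on the VM
Get-VMResourceConfiguration -VM $vm | Set-VMResourceConfiguration -MemReservationGB $vm.MemoryGB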
Create Storage Pool
1. Click the “Add” button to create a storage pool.
2. Select two storage nodes to create a storage pool on them simultaneously.
3. Select physical disks to include in the storage pool and click the “Next” button.
NOTE: Select identical type and number of disks on each storage node to create identical storage pools.
4. Select one of the preconfigured storage profiles or create a redundancy layout for the new storage pool manually according to your redundancy, capacity, and performance requirements.
Hardware RAID, Linux Software RAID, and ZFS storage pools are supported and integrated into the StarWind CVM web interface. To make storage pool configuration easier, preconfigured storage profiles are provided that set up the recommended pool type and layout according to the direct-attached storage:
- hardware RAID – configures a hardware RAID controller’s virtual disk as a storage pool. It is available only if a hardware RAID controller is passed through to the CVM
- high performance – creates a Linux Software RAID-10 to maximize storage performance while maintaining redundancy
- high capacity – creates a Linux Software RAID-5 to maximize storage capacity while maintaining redundancy
- better redundancy – creates a ZFS striped RAID-Z2 (similar to RAID 60) to maximize redundancy while maintaining high storage capacity
- manual – allows users to configure any storage pool type and layout with the attached storage
5. Review “Summary” and click the “Create” button to create the pools on storage servers simultaneously.
Create Volume
1. To create volumes, click the “Add” button.
2. Select two identical storage pools to create a volume simultaneously.
3. Specify volume name and capacity.
4. Select the Standard volume type.
5. Review “Summary” and click the “Create” button to create the volume.
Install PowerShell samples
1. Open Settings and go to the Downloads tab.
2. Click Download samples to download the installer to any Windows machine.
3. Run the downloaded installation file.
4. Choose “Install StarWindX” to install the StarWindX PowerShell IDE and sample scripts.
Creating StarWind HA LUNs using PowerShell
1. Open PowerShell ISE as Administrator.
2. Open StarWindX sample CreateHA_2.ps1 using PowerShell ISE. It can be found here:
C:\Program Files\StarWind Software\StarWind\StarWindX\Samples\
3. Configure the script parameters according to the following example:
param($addr="192.168.12.10", $port=3261, $user="root", $password="starwind",
    $addr2="192.168.12.11", $port2=$port, $user2=$user, $password2=$password,
#common
    $initMethod="Clear",
    $size=2048,
    $sectorSize=512,
    $failover=0,
    $bmpType=1,
    $bmpStrategy=0,
#primary node
    $imagePath="VSA Storage\mnt\crypted1",
    $imageName="testha02",
    $createImage=$true,
    $storageName="",
    $targetAlias="target02",
    $autoSynch=$true,
    $poolName="pool1",
    $syncSessionCount=1,
    $aluaOptimized=$true,
    $cacheMode="none",
    $cacheSize=0,
    $syncInterface="#p2={0}:3260" -f "172.16.20.20",
    $hbInterface="#p2={0}:3260" -f "172.16.10.20",
    $createTarget=$true,
    $bmpFolderPath="",
#secondary node
    $imagePath2="VSA Storage\mnt\crypted1",
    $imageName2="testha02",
    $createImage2=$true,
    $storageName2="",
    $targetAlias2="target02",
    $autoSynch2=$true,
    $poolName2="pool1",
    $syncSessionCount2=1,
    $aluaOptimized2=$false,
    $cacheMode2=$cacheMode,
    $cacheSize2=$cacheSize,
    $syncInterface2="#p1={0}:3260" -f "172.16.20.10",
    $hbInterface2="#p1={0}:3260" -f "172.16.10.10",
    $createTarget2=$true,
    $bmpFolderPath2=""
)

Import-Module StarWindX

try
{
    Enable-SWXLog -level SW_LOG_LEVEL_DEBUG

    $server = New-SWServer -host $addr -port $port -user $user -password $password

    $server.Connect()

    $firstNode = new-Object Node

    $firstNode.HostName = $addr
    $firstNode.HostPort = $port
    $firstNode.Login = $user
    $firstNode.Password = $password
    $firstNode.ImagePath = $imagePath
    $firstNode.ImageName = $imageName
    $firstNode.Size = $size
    $firstNode.CreateImage = $createImage
    $firstNode.StorageName = $storageName
    $firstNode.TargetAlias = $targetAlias
    $firstNode.AutoSynch = $autoSynch
    $firstNode.SyncInterface = $syncInterface
    $firstNode.HBInterface = $hbInterface
    $firstNode.PoolName = $poolName
    $firstNode.SyncSessionCount = $syncSessionCount
    $firstNode.ALUAOptimized = $aluaOptimized
    $firstNode.CacheMode = $cacheMode
    $firstNode.CacheSize = $cacheSize
    $firstNode.FailoverStrategy = $failover
    $firstNode.CreateTarget = $createTarget
    $firstNode.BitmapStoreType = $bmpType
    $firstNode.BitmapStrategy = $bmpStrategy
    $firstNode.BitmapFolderPath = $bmpFolderPath

    #
    # device sector size. Possible values: 512 or 4096(May be incompatible with some clients!) bytes.
    #
    $firstNode.SectorSize = $sectorSize

    $secondNode = new-Object Node

    $secondNode.HostName = $addr2
    $secondNode.HostPort = $port2
    $secondNode.Login = $user2
    $secondNode.Password = $password2
    $secondNode.ImagePath = $imagePath2
    $secondNode.ImageName = $imageName2
    $secondNode.CreateImage = $createImage2
    $secondNode.StorageName = $storageName2
    $secondNode.TargetAlias = $targetAlias2
    $secondNode.AutoSynch = $autoSynch2
    $secondNode.SyncInterface = $syncInterface2
    $secondNode.HBInterface = $hbInterface2
    $secondNode.SyncSessionCount = $syncSessionCount2
    $secondNode.ALUAOptimized = $aluaOptimized2
    $secondNode.CacheMode = $cacheMode2
    $secondNode.CacheSize = $cacheSize2
    $secondNode.FailoverStrategy = $failover
    $secondNode.CreateTarget = $createTarget2
    $secondNode.BitmapFolderPath = $bmpFolderPath2

    $device = Add-HADevice -server $server -firstNode $firstNode -secondNode $secondNode -initMethod $initMethod

    while ($device.SyncStatus -ne [SwHaSyncStatus]::SW_HA_SYNC_STATUS_SYNC)
    {
        $syncPercent = $device.GetPropertyValue("ha_synch_percent")
        Write-Host "Synchronizing: $($syncPercent)%" -foreground yellow

        Start-Sleep -m 2000

        $device.Refresh()
    }
}
catch
{
    Write-Host $_ -foreground red
}
finally
{
    $server.Disconnect()
}
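The sample can be run with the defaults edited directly in the param() block, or selected parameters can be overridden at invocation time; for example (the addresses and names here must match your environment):

# Run with the defaults from the param() block
.\CreateHA_2.ps1

# Or override selected parameters at invocation time
.\CreateHA_2.ps1 -addr "192.168.12.10" -addr2 "192.168.12.11" -imageName "testha02" -targetAlias "target02" -size 2048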
Detailed explanation of script parameters:
-addr, -addr2 – local and partner node IP addresses.
Format: string. Default value: 192.168.0.1, 192.168.0.1
allowed values: localhost, IP-address
-port, -port2 — local and partner node port.
Format: string. Default value: 3261
-user, -user2 — local and partner node user name.
Format: string. Default value: root
-password, -password2 — local and partner node user password.
Format: string. Default value: starwind
#common
-initMethod – set the device initialization method
Format: string. Default value: Clear
-size – set size for the HA device (MB)
Format: integer. Default value: 12
-sectorSize – set sector size for HA-device
Format: integer. Default value: 512
allowed values: 512, 4096
-failover – set the failover strategy type
Format: integer. Default value: 0 (Heartbeat)
allowed values: 0, 1 (Node Majority)
-bmpType – set the bitmap type; applied to both partners at once
Format: integer. Default value: 1 (RAM)
allowed values: 1, 2 (DISK)
-bmpStrategy – set the journal strategy; applied to both partners at once
Format: integer. Default value: 0
allowed values: 0, 1 – Best Performance (Failure), 2 – Fast Recovery (Continuous)
#primary node
-imagePath – set the path to store the device file
Format: string. Default value: “My computer\C\starwind”. For Linux the following format should be used: “VSA Storage\mnt\mount_point”
-imageName – set the device name
Format: string. Default value: masterImg21
-createImage – create the image file
Format: boolean. Default value: true
-targetAlias – set alias for target
Format: string. Default value: targetha21
-poolName – set storage pool
Format: string. Default value: pool1
-aluaOptimized – set Alua Optimized
Format: boolean. Default value: true
-cacheMode – set the L1 cache type (optional parameter)
Format: string. Default value: wb
allowed values: none, wb, wt
-cacheSize – set size for L1 cache in MB (optional parameter)
Format: integer. Default value: 128
allowed values: 1 and more
-syncInterface – set sync channel IP-address from partner node
Format: string. Default value: “#p2={0}:3260”
-hbInterface – set heartbeat channel IP-address from partner node
Format: string. Default value: “”
-createTarget – create the target
Format: boolean. Default value: true
Even if you do not specify the parameter -createTarget, the target will be created automatically.
If the parameter is set to -createTarget $false, an attempt will be made to create the device with existing targets whose names are specified in -targetAlias (the targets must already be created)
-bmpFolderPath – set path to save bitmap file
Format: string.
#secondary node
-imagePath2 – set path to store the device file
Format: string. Default value: “My computer\C\starwind”. For Linux the following format should be used: “VSA Storage\mnt\mount_point”
-imageName2 – set the device name
Format: string. Default value: masterImg21
-createImage2 – create the image file
Format: boolean. Default value: true
-targetAlias2 – set alias for target
Format: string. Default value: targetha22
-poolName2 – set storage pool
Format: string. Default value: pool1
-aluaOptimized2 – set Alua Optimized
Format: boolean. Default value: true
-cacheMode2 – set the L1 cache type (optional parameter)
Format: string. Default value: wb
allowed values: none, wb, wt
-cacheSize2 – set size for L1 cache in MB (optional parameter)
Format: integer. Default value: 128
allowed values: 1 and more
-syncInterface2 – set sync channel IP-address from partner node
Format: string. Default value: “#p1={0}:3260”
-hbInterface2 – set heartbeat channel IP-address from partner node
Format: string. Default value: “”
-createTarget2 – create the target
Format: boolean. Default value: true
Even if you do not specify the parameter -createTarget2, the target will be created automatically. If the parameter is set to -createTarget2 $false, an attempt will be made to create the device with existing targets whose names are specified in -targetAlias2 (the targets must already be created)
-bmpFolderPath2 – set path to save bitmap file
Format: string.
Selecting the Failover Strategy
StarWind provides two options for configuring a failover strategy:
Heartbeat
The Heartbeat failover strategy allows avoiding the “split-brain” scenario when the HA cluster nodes are unable to synchronize but continue to accept write commands from the initiators independently. It can occur when all synchronization and heartbeat channels disconnect simultaneously, and the partner nodes do not respond to the node’s requests. As a result, the StarWind service assumes that the partner nodes are offline and continues operating in single-node mode using the data written to it.
If at least one heartbeat link is online, StarWind services can communicate with each other via this link. The device with the lowest priority will be marked as not synchronized and subsequently blocked from further read and write operations until the synchronization channel is restored. At the same time, the partner device on the synchronized node flushes data from the cache to the disk to preserve data integrity in case the node goes down unexpectedly. It is recommended to assign more independent heartbeat channels during replica creation to improve system stability and avoid the “split-brain” issue.
With the heartbeat failover strategy, the storage cluster will continue working with only one StarWind node available.
Node Majority
The Node Majority failover strategy ensures the synchronization connection without any additional heartbeat links. The failure-handling process occurs when the node has detected the absence of the connection with the partner.
The main requirement for keeping the node operational is an active connection with more than half of the HA device’s nodes. Calculation of the available partners is based on their “votes”.
In the case of two-node HA storage, both nodes will be disconnected if there is a problem on the node itself or in the communication between them. Therefore, the Node Majority failover strategy requires adding a third Witness node or file share (SMB), which participates in the node count for the majority but neither contains data nor is involved in processing clients’ requests. If an HA device is replicated between 3 nodes, no Witness node is required.
With the Node Majority failover strategy, the failure of only one node can be tolerated. If two nodes fail, the third node will also become unavailable to clients’ requests.
Please select the required option:
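When creating the HA device with the PowerShell sample above, this choice maps to the $failover parameter:

# Failover strategy selection in CreateHA_2.ps1 (applies to both nodes)
$failover=0    # 0 – Heartbeat (default)
# $failover=1  # 1 – Node Majority; a 2-node setup additionally needs a Witness node or file share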
Preparing Datastores
Adding Discover Portals
1. To connect the previously created devices to the ESXi host, click Storage -> Adapters -> Software iSCSI, and in the window that appears, select the Enabled option to enable the software iSCSI storage adapter. Click the Save configuration button.
2. In the Configure iSCSI window, under Dynamic Targets, click on the Add dynamic target button to specify iSCSI interfaces.
3. Enter the iSCSI IP addresses of all StarWind nodes for the iSCSI traffic.
Confirm the actions by pressing Save configuration.
4. The result should look like the image below.
5. Click on the Rescan button to rescan storage.
6. Now, the previously created StarWind devices are visible to the system.
7. Repeat all the steps from this section on the other ESXi host, specifying the corresponding IP addresses for the iSCSI subnet. A scripted alternative is sketched below.
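The same discovery-portal setup can be scripted per host with PowerCLI; a minimal sketch, assuming the host name is a placeholder and using the iSCSI addresses from this guide’s example subnets:

# Hedged PowerCLI sketch: enable software iSCSI and add dynamic targets
$vmhost = Get-VMHost -Name "esxi-01.local"
Get-VMHostStorage -VMHost $vmhost | Set-VMHostStorage -SoftwareIScsiEnabled $true

$hba = Get-VMHostHba -VMHost $vmhost -Type iScsi | Where-Object { $_.Model -eq "iSCSI Software Adapter" }
New-IScsiHbaTarget -IScsiHba $hba -Address "172.16.10.10" -Type Send
New-IScsiHbaTarget -IScsiHba $hba -Address "172.16.10.20" -Type Send

# Rescan so the StarWind devices become visible
Get-VMHostStorage -VMHost $vmhost -RescanAllHba | Out-Null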
Creating Datastores
1. Open the Storage tab on one of your hosts and click on New Datastore.
2. Specify the Datastore name, select the previously discovered StarWind device, and click Next.
3. Enter datastore size and click Next.
4. Verify the settings and click Finish.
5. Add another Datastore (DS2) in the same way but select the second device for the second datastore.
6. Verify that your datastores (DS1, DS2) are connected to both hosts. Otherwise, rescan the storage adapter.
NOTE: Changing the Path Selection Policy for datastores from Most Recently Used (VMware) to Round Robin (VMware) is included in the rescan script, and this action is performed automatically. To check or change this parameter manually, the hosts should be connected to vCenter.
Multipathing configuration can be checked only from vCenter. To check it, click the Configure button, choose the Storage Devices tab, select the device, and click the Edit Multipathing button. A PowerCLI alternative is sketched below.
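For a scripted check, the policy can also be set with PowerCLI; a sketch, assuming the hosts are connected to vCenter and that the StarWind LUNs report “STARWIND” as the vendor (verify this against your environment):

# Hedged sketch: switch StarWind devices to the Round Robin path selection policy
$vmhost = Get-VMHost -Name "esxi-01.local"
Get-ScsiLun -VmHost $vmhost -LunType disk | Where-Object { $_.Vendor -eq "STARWIND" } | Set-ScsiLun -MultipathPolicy "RoundRobin"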
Configuring an Automatic Storage Rescan
1. Connect to the appliance via Shell Terminal in a Text-based User Interface (TUI) or using a remote SSH terminal.
2. Edit file /opt/starwind/starwind-virtual-san/drive_c/starwind/hba_rescan_config.py with the following command: sudo nano /opt/starwind/starwind-virtual-san/drive_c/starwind/hba_rescan_config.py
3. In the appropriate lines, specify the IP address and login credentials of the single or multiple ESXi hosts (see NOTE below) on which the current StarWind VM is stored and will trigger the storage rescan task:
$esxi_host_list = ['IP address']
$username = 'Login'
$password = 'Password'
NOTE: In some cases, it makes sense to create a separate ESXi user for storage rescans. To create the user, please follow the steps below:
4. Log in to ESXi with the VMware Host Client. Click Manage, and under the Security & users tab, in the Users section, click the Add user button. In the window that appears, enter a user name and a password.
5. Create a new role: under the Roles section, click the New Role button. Type a name for the new role, select privileges for the role, and click OK.
The following privileges might be assigned: Host – Inventory, Config, Local Cim, and Global – Settings.
6. Assign the permission to the storage rescan user for an ESXi host: right-click Host in the VMware Host Client inventory and click Permissions. In the window that appears, click Add user.
7. Click the arrow next to the Select a user text box and select the user that you want to assign a role to. Click the arrow next to the Select a role text box and select a role from the list.
(Optional) Select Propagate to all children or Add as group. Click Add user and click Close.
Make sure that the rescan script is working by executing it from the VM: sudo python3 /opt/starwind/starwind-virtual-san/drive_c/starwind/hba_rescan.py
8. Repeat all steps from this section on the other ESXi hosts.
Performance Tweaks
1. Click on the Configuration tab on all of the ESXi hosts and choose Advanced Settings.
2. Select Disk and change the Disk.DiskMaxIOSize parameter to 512.
3. To optimize performance, change the I/O scheduler options according to the article below:
https://knowledgebase.starwindsoftware.com/guidance/starwind-vsan-for-vsphere-changing-linux-i-o-scheduler-to-optimize-storage-performance/
NOTE: Changing Disk.DiskMaxIOSize to 512 might cause startup issues for Windows-based VMs located on the datastore on hosts running specific ESXi builds. If VM startup issues appear, leave this parameter at its default value or update the ESXi host to the next available build.
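Applied from PowerCLI, steps 1-2 might look like the following sketch (assumes an active vCenter or host connection):

# Hedged sketch: set Disk.DiskMaxIOSize to 512 on every connected host
Get-VMHost | ForEach-Object {
    Get-AdvancedSetting -Entity $_ -Name "Disk.DiskMaxIOSize" | Set-AdvancedSetting -Value 512 -Confirm:$false
}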
NOTE: To provide high availability for clustered VMs, deploy vCenter and add ESXi hosts to the cluster.
Click Cluster -> Configure -> Edit and check the Turn on vSphere HA option if it is licensed.
Conclusion
By following this guide, the end user can deploy StarWind Virtual SAN on VMware vSphere, with VSAN set up as a Controller Virtual Machine (CVM). The guide offers key insights and steps to ensure a seamless deployment.