StarWind Virtual SAN: Configuration Guide for Microsoft Windows Server [Hyper-V], StarWind Deployed as a Windows Application
- January 19, 2023
Prerequisites
StarWind Virtual SAN system requirements
Prior to installing StarWind Virtual SAN, please make sure that the system meets the requirements, which are available via the following link:
https://www.starwindsoftware.com/system-requirements
Recommended RAID settings for HDD and SSD disks:
https://knowledgebase.starwindsoftware.com/guidance/recommended-raid-settings-for-hdd-and-ssd-disks/
Please read the StarWind Virtual SAN Best Practices document for additional information:
https://www.starwindsoftware.com/resource-library/starwind-virtual-san-best-practices
Solution diagram
The diagrams below illustrate the network and storage configuration of the solution:
2-node cluster
3-node cluster
Preconfiguring cluster nodes
1. Make sure that a domain controller is configured and the servers are added to the domain.
NOTE: Please follow the recommendations in the KB article on how to place a domain controller (DC) when StarWind Virtual SAN is used.
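If a server still needs to be joined to the domain, this can also be done from PowerShell; a minimal sketch (the domain name below is a placeholder, and domain credentials are requested interactively):
# Join this server to the domain and reboot (replace the domain name with your own)
Add-Computer -DomainName "corp.local" -Credential (Get-Credential) -Restart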
2. Deploy Windows Server on each server and install the Failover Clustering and Multipath I/O features, as well as the Hyper-V role, on all servers. This can be done through Server Manager (the Add Roles and Features menu item).
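As an alternative to Server Manager, the required features and the Hyper-V role can be installed from an elevated PowerShell session on each server; a minimal sketch:
# Install Hyper-V, Failover Clustering, and Multipath I/O with management tools, then reboot
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools -Restart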
3. Define at least 2x network interfaces (2-node scenario) or 4x network interfaces (3-node scenario) on each node that will be used for the Synchronization and iSCSI/StarWind heartbeat traffic. Do not run iSCSI/Heartbeat and Synchronization channels over the same physical link. The Synchronization and iSCSI/Heartbeat links can be connected either via redundant switches or directly between the nodes (see the diagrams above).
For the 2-node scenario, the 172.16.10.x subnet is used for the iSCSI/StarWind heartbeat traffic, while the 172.16.20.x subnet is used for the Synchronization traffic.
For the 3-node scenario, the 172.16.10.x, 172.16.11.x, and 172.16.12.x subnets are used for the iSCSI/StarWind heartbeat traffic, while the 172.16.20.x, 172.16.21.x, and 172.16.22.x subnets are used for the Synchronization traffic.
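For illustration, the addresses can be assigned from PowerShell; the adapter names and addresses below are examples for the first node of a 2-node setup and should be adjusted to the environment:
# Assign static addresses to the iSCSI/heartbeat and Synchronization adapters (example values)
New-NetIPAddress -InterfaceAlias "iSCSI" -IPAddress 172.16.10.10 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Sync" -IPAddress 172.16.20.10 -PrefixLength 24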
4. Set the MTU size to 9014 or 9000, depending on the network card vendor's recommendations, on the iSCSI and Sync interfaces using the following PowerShell script.
The script applies MTU 9014 (or 9000) to all iSCSI and Sync interfaces that have iSCSI or Sync as part of their name.
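A minimal sketch of such a script, assuming the relevant adapters have iSCSI or Sync as part of their names:
# Set jumbo frames on every adapter whose name contains iSCSI or Sync (use 9014 or 9000 per the NIC vendor)
$nics = Get-NetAdapter | Where-Object { $_.Name -match "iSCSI|Sync" }
foreach ($nic in $nics) {
    Set-NetAdapterAdvancedProperty -Name $nic.Name -RegistryKeyword "*JumboPacket" -RegistryValue 9014
    Get-NetAdapterAdvancedProperty -Name $nic.Name -RegistryKeyword "*JumboPacket"
}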
NOTE: The MTU setting should be applied on the adapters only if there is no live production traffic running through the NICs.
5. Open the MPIO Properties manager: Start -> Windows Administrative Tools -> MPIO. Alternatively, run the following PowerShell command:
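For example, the MPIO Properties dialog can be opened directly from a PowerShell prompt:
# Launch the MPIO Properties control panel applet
mpiocpl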
6. In the Discover Multi-Paths tab, select the Add support for iSCSI devices checkbox and click Add.
7. When prompted to restart the server, click Yes to proceed.
8. Repeat the same procedure on the other server.
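Steps 6–8 can also be performed with the built-in MPIO cmdlets on each server; a minimal sketch (a reboot is still required afterwards):
# Add MPIO support for iSCSI-attached devices, then restart for the change to take effect
Enable-MSDSMAutomaticClaim -BusType iSCSI
Restart-Computer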
Installing File Server Roles
Please follow the steps below if file share configuration is required.
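If the File Server role service has not been installed yet, it can be added on each cluster node from PowerShell; a minimal sketch:
# Install the File Server role service on this node
Install-WindowsFeature -Name FS-FileServer -IncludeManagementTools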
Select the Required Replication Mode
The replication can be configured using Synchronous “Two-Way” Replication mode:
Synchronous or active-active replication ensures real-time synchronization and load balancing of data between two or three cluster nodes. Such a configuration tolerates the failure of two out of three storage nodes and enables the creation of an effective business continuity plan. With synchronous mirroring, each write operation requires confirmation from both storage nodes. This guarantees the reliability of data transfers, but it is demanding on bandwidth, since mirroring will not work over high-latency networks.
Selecting the Failover Strategy
StarWind provides two options for configuring a failover strategy:
Heartbeat
The Heartbeat failover strategy helps avoid the “split-brain” scenario, in which the HA cluster nodes are unable to synchronize but continue to accept write commands from the initiators independently. This can occur when all synchronization and heartbeat channels disconnect simultaneously and the partner nodes do not respond to the node’s requests. As a result, the StarWind service assumes the partner nodes to be offline and continues operating in single-node mode using the data written to it.
If at least one heartbeat link is online, the StarWind services can communicate with each other via this link. The device with the lowest priority will be marked as not synchronized and will subsequently be blocked for further read and write operations until the synchronization channel is restored. At the same time, the partner device on the synchronized node flushes data from the cache to the disk to preserve data integrity in case the node goes down unexpectedly. It is recommended to assign more independent heartbeat channels during replica creation to improve system stability and avoid the “split-brain” issue.
With the heartbeat failover strategy, the storage cluster will continue working with only one StarWind node available.
Node Majority
The Node Majority failover strategy does not require any additional heartbeat links; it relies on the synchronization connection only. The failure-handling process starts when a node detects the loss of the connection with its partner.
The main requirement for keeping a node operational is an active connection with more than half of the HA device’s nodes. The available partners are counted based on their “votes”.
In the case of a two-node HA storage, both nodes will be disconnected if there is a problem on a node itself or in the communication between them. Therefore, the Node Majority failover strategy requires adding a third Witness node or a file share (SMB) witness, which participates in the node count for the majority but neither holds data nor processes clients’ requests. If an HA device is replicated between 3 nodes, no Witness node is required.
With the Node Majority failover strategy, the failure of only one node can be tolerated. If two nodes fail, the third node also becomes unavailable to clients’ requests.
Please select the required option:
Configuring File Shares
Please follow the steps below if file shares should be configured on cluster nodes.
Configuring the File Server for General Use Role
NOTE: To configure the File Server for General Use role, the cluster must have available storage.
1. To configure the File Server for General Use role, open Failover Cluster Manager.
2. Right-click on the cluster name, then click Configure Role and click Next to continue.
3. Select the File Server item from the list in High Availability Wizard and click Next to continue.
4. Select File Server for general use and click Next.
5. On the Client Access Point page, in the Name text field, type the NetBIOS name that will be used to access the File Server and specify the IP address for it.
Click Next to continue.
6. Select the Cluster disk and click Next.
7. Check whether the specified information is correct. Click Next to proceed or Previous to change the settings.
8. Once the installation has finished successfully, the Wizard should look like the screenshot below.
Click Finish to close the Wizard.
9. The newly created role should now look like the screenshot below.
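The same role can also be created with the FailoverClusters PowerShell module; a minimal sketch (the role name, IP address, and disk name below are placeholders):
# Create a clustered File Server for general use (replace the name, address, and cluster disk with your own)
Add-ClusterFileServerRole -Name "FileServer1" -StaticAddress 192.168.12.40 -Storage "Cluster Disk 1"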
NOTE: If the role status is Failed and it cannot be started, please follow the steps below:
- open Active Directory Users and Computers
- enable the Advanced Features view if it is not enabled
- edit the properties of the OU containing the cluster computer object (in this case – Production)
- open the Security tab and click Advanced
- in the window that appears, click Add (the Permission Entry dialog box opens), then click Select a principal
- in the window that appears, click Object Types, select Computers, and click OK
- enter the name of the cluster computer object (in this case – Production)
- go back to the Permission Entry dialog, scroll down, and select Create Computer Objects
- click OK in all open windows to confirm the changes
- open Failover Cluster Manager, right-click the File Share role, and click Start Role