
StarWind Virtual HCI Appliance: Configuration Guide for Proxmox Virtual Environment

  • November 20, 2024
  • 21 min read

Annotation

Relevant products

StarWind Virtual HCI Appliance (VHCA)

Purpose

This document outlines how to configure a StarWind Virtual HCI Appliance (VHCA) based on Proxmox Virtual Environment, with VSAN running as a Controller Virtual Machine (CVM). The guide includes steps to prepare Proxmox VE hosts for clustering, configure physical and virtual networking, and set up the Virtual SAN Controller Virtual Machine.

Audience

This technical guide is intended for storage and virtualization architects, system administrators, and partners designing virtualized environments using StarWind Virtual HCI Appliance (VHCA).

Expected result

The end result of following this guide will be a fully configured high-availability StarWind Virtual HCI Appliance (VHCA) powered by Proxmox Virtual Environment that includes virtual machine shared storage provided by StarWind VSAN.

 

Prerequisites

Prior to configuring StarWind Virtual HCI Appliance (VHCA), please make sure that the system meets the requirements, which are available via the following link:

https://www.starwindsoftware.com/system-requirements

Recommended RAID settings for HDD and SSD disks:

https://knowledgebase.starwindsoftware.com/guidance/recommended-raid-settings-for-hdd-and-ssd-disks/

Please read StarWind Virtual SAN Best Practices document for additional information:

https://www.starwindsoftware.com/resource-library/starwind-virtual-san-best-practices

Solution Diagram:

 

Prerequisites:

1. Two servers with local storage, which have direct network connections for Synchronization and iSCSI/StarWind heartbeat traffic.

2. Servers should have local storage available for Proxmox VE and the StarWind VSAN Controller Virtual Machine. The CVM utilizes local storage to create replicated shared storage connected to the Proxmox VE nodes via iSCSI.

3. StarWind HA devices require at least 2 separate network links between the nodes. The first one is used for iSCSI traffic, the second one is used for Synchronization traffic.

NOTE. The network interfaces on each node for Synchronization and iSCSI/StarWind heartbeat interfaces should be in different subnets and connected directly according to the network diagram above. Here, the 172.16.10.x subnet is used for the iSCSI/StarWind heartbeat traffic, while the 172.16.20.x subnet is used for the Synchronization traffic.

4. A 2-node cluster requires a quorum. iSCSI/SMB/NFS cannot be used for this purpose. The QDevice-Net package must be installed on a third Linux server, which will act as a witness.

https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support

Hardware Configuration

Access the BIOS on each server:

1. Change “Boot mode select” to [UEFI]


2. Enable AC Power Recovery to On;


3. Set System Profile Settings to Performance;


4. Disable Patrol Read in case of SSD disks;


5. Enable SR-IOV for network cards;


6. Configure the storage for the OS and for data, or a single RAID for both OS and data, according to the supported RAID configurations here.

Settings for OS RAID1:

Virtual disk name: OS

Disk cache policy: Default (enabled by default)

Write policy: Write Through

Read policy: No read ahead

Stripe Size: 64K


Storage for data:

Supported RAID configurations for the main data storage can be found here.


Deploying Proxmox VE

1. Download Proxmox VE:

https://www.proxmox.com/en/downloads

2. Boot from the downloaded ISO.

3. Click I agree to accept the EULA.


 

4. Choose the hard disk for the Proxmox VE installation. Click Next.


 

5. Choose Time Zone. Click Next


 

6. Set Administration Password and Email Address. Click Next


 

7. Configure Management Network. Click Next


 

8. Verify settings. Click Install


Preconfiguring Proxmox VE hosts

1. The Proxmox cluster should be created before deploying any virtual machines.

2. A 2-node cluster requires a quorum. iSCSI/SMB/NFS cannot be used for this purpose. The QDevice-Net package must be installed on a third Linux server, which will act as a witness.

https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support

3. Install qdevice on witness server:
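A minimal sketch, assuming the witness is a Debian-based server and following the Proxmox documentation linked above:

apt install corosync-qnetd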

4. Install qdevice on both cluster nodes:
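Per the same Proxmox documentation, run on each cluster node:

apt install corosync-qdevice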

5. Configure the quorum by running the following command on one of the Proxmox nodes (replace the IP address with the witness server's address):
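A sketch of the command, per the Proxmox documentation linked above (the placeholder stands for your witness server's IP):

pvecm qdevice setup <witness-server-IP>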

6. Configure network interfaces on each node to make sure that the Synchronization and iSCSI/StarWind heartbeat interfaces are in different subnets and connected according to the network diagram above. In this document, the 172.16.10.x subnet is used for iSCSI/StarWind heartbeat traffic, while the 172.16.20.x subnet is used for the Synchronization traffic. Choose the node and open the System -> Network page.

 

7. Click Create. Choose Linux Bridge.

 

8. Create Linux Bridge and set IP address. Set MTU to 9000. Click Create.

 

9. Repeat step 8 for all network adapters that will be used for Synchronization and iSCSI/StarWind heartbeat traffic.

10. Verify the network configuration in the /etc/network/interfaces file. Log in to the node via SSH and check the contents of the file.
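For reference, a bridge created in step 8 typically appears in /etc/network/interfaces similar to the following sketch (the bridge name, physical port, and IP address below are examples only and must match your environment):

# iSCSI/StarWind heartbeat bridge (example)
auto vmbr1
iface vmbr1 inet static
        address 172.16.10.1/24
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 9000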

 

11. Enable IOMMU support in the kernel if PCIe passthrough will be used to pass a RAID controller, HBA, or NVMe drives to the VM. Update the GRUB configuration file.

For Intel CPU:

Add “intel_iommu=on iommu=pt” to GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub file.

For AMD CPU:

Add “iommu=pt” to GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub file.
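For example, on an Intel-based host that boots via GRUB, the resulting line in /etc/default/grub may look like the following (keep any options already present on the line):

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

Then apply the change before rebooting:

update-grub

Hosts that boot via systemd-boot keep the kernel command line elsewhere and apply changes with proxmox-boot-tool refresh instead.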

12. Reboot the host.

13. Repeat steps 6-12 on all nodes.

Deploying StarWind VSAN Controller VM

1. Download StarWind VSAN CVM KVM: VSAN by StarWind: Overview

2. Extract the CVM.qcow2 file from the downloaded archive.

3. Upload CVM.qcow2 file to the Proxmox Host via any SFTP client (e.g. WinSCP) to /root/ directory.

02_SFTP

 

4. Create a VM without OS. Login to Proxmox host via Web GUI. Click Create VM.

02_Create_VM

 

5. Choose node to create VM. Enable Start at boot checkbox and set Start/Shutdown order to 1. Click Next.

03_VM_general

 

6. Choose Do not use any media and choose Guest OS Linux. Click Next.

04_Create_VM

 

7. Specify system options. Choose Machine type q35 and check the Qemu Agent box. Click Next.

04_VM_system

 

8. Remove all disks from the VM. Click Next.

06_Create_VM

 

9. Assign 8 cores to the VM and choose Host CPU type. Click Next.

05_VM_CPU

 

10. Assign at least 8GB of RAM to the VM. Click Next.

08_Create_VM

 

11. Configure Management network for the VM. Click Next.

09_Create_VM_Networking

 

12. Confirm settings. Click Finish.

06_VM_Confirm

 

13. Connect to the Proxmox host via SSH and attach the uploaded CVM.qcow2 file to the VM.
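A minimal sketch of the import, assuming the VM created above has ID 100, the file was uploaded to /root/ as in step 3, and the target storage is named local-lvm (all example values):

qm importdisk 100 /root/CVM.qcow2 local-lvm

The disk is imported as an unused disk, which is then attached in the next step. On recent Proxmox VE releases the equivalent command is qm disk import with the same arguments.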

14. Open VM and go to Hardware page. Add unused SCSI disk to the VM.

15. Attach Network interfaces for Synchronization and iSCSI/Heartbeat traffic.
11_Add_device_VM

 

16. Open Options page of the VM. Select Boot Order and click Edit.

07_1_Boot_option

 

17. Move the scsi0 device to the first position in the boot order.

07_2_Boot_option

 

18. Repeat all the steps from this section on other Proxmox hosts.

Attaching storage to StarWind Virtual SAN CVM

Please follow the steps below to attach the desired storage type to the CVM.

Attaching Virtual disk

1. Open the VM Hardware page in Proxmox and add a drive to the VM that will be used by the StarWind service. Specify the size of the virtual disk and click OK.

07_Add_virtual_disk

NOTE. It is recommended to use the VirtIO SCSI single controller for better performance. If multiple virtual disks will be used in a software RAID inside the CVM, the VirtIO SCSI controller should be used.

2. Repeat step 1 to attach additional Virtual Disks.

3. Start VM.

4. Repeat steps 1-2 on all nodes

Attaching PCIe device

1. Shutdown StarWind VSAN CVM.

2. Open VM Hardware page in Proxmox and click Add -> PCI Device.

08_Add_PCIe_device

 

3. Choose PCIe Device from drop-down list.

08_1_Add_PCIe_device

 

4. Click Add.

08_2_Add_PCIe_device

 

5. Edit Memory. Uncheck Ballooning Device. Click OK.

08_3_Add_PCIe_device

 

6. Start VM.

7. Repeat steps 1-6 on all nodes.

Initial Configuration Wizard

1. Start the StarWind Virtual SAN Controller Virtual Machine.

2. Launch the VM console to view the VM boot process and obtain the IPv4 address of the Management network interface.

NOTE: If the VM does not acquire an IPv4 address from a DHCP server, use the Text-based User Interface (TUI) to set up the Management network manually.

Default credentials for TUI: user/rds123RDS

3. Using a web browser, open a new tab and enter the VM’s IPv4 address to access the StarWind VSAN Web Interface. On the Your connection is not private screen, click Advanced and then select Continue to…

4. On the Welcome to StarWind Appliance screen, click Start to launch the Initial Configuration Wizard.

5. On the License step, upload the StarWind Virtual SAN license file.

6. On the EULA step, read and accept the End User License Agreement to continue.

7. On the Management network step, review or edit the network settings and click Next.

IMPORTANT: The use of Static IP mode is highly recommended.

8. On the Static hostname step, specify the hostname for the virtual machine and click Next.

9. On the Administrator account step, specify the credentials for the new StarWind Virtual SAN administrator account and click Next.

10. Wait until the Initial Configuration Wizard configures StarWind Virtual SAN for you.

11. After the configuration process is completed, click Finish to install the StarWind vCenter Plugin immediately, or uncheck the checkbox to skip this step and proceed to the Login page.

12. Repeat steps 1 through 11 on the StarWind VSAN CVM deployed on each partner Proxmox host.

Add Appliance

To create replicated, highly available storage, add partner appliances that use the same StarWind Virtual SAN license key.

1. Navigate to the Appliances page and click Add to open the Add appliance wizard.

2. On the Credentials step, enter the IP address and credentials for the partner StarWind Virtual SAN appliance, then click Next.

3. Wait for the connection to be established and the settings to be validated.

4. On the Summary step, review the properties of the partner appliance, then click Add Appliance.

Configure HA networking

1. Launch the “Configure HA Networking” wizard.

2. Select appliances for network configuration.
NOTE: The number of appliances you can select is limited by your license, so it can be either two or three appliances at a time.

3. Configure the “Data” network. Select interfaces to carry storage traffic, configure them with static IP addresses in unique networks, and specify subnet masks:

  • assign and configure at least one interface on each node
  • for redundant configuration, select two interfaces on each node
  • ensure interfaces are connected to client hosts directly or through redundant switches

4. Assign MTU value to all selected network adapters, e.g. 1500 or 9000. Ensure the switches have the same MTU value set.

5. Click Next to validate Data network settings.

6. Configure the “Replication” network. Select interfaces to carry replication (synchronization) traffic, configure them with static IP addresses in unique networks, and specify subnet masks:

  • assign and configure at least one interface on each node
  • for redundant configuration, select two interfaces on each node
  • ensure interfaces are connected between the appliances directly or through redundant switches

7. Assign MTU value to all selected network adapters, e.g. 1500 or 9000. Ensure the switches have the same MTU value set.

8. Click Next to validate the Replication network settings.

 

9. Review the summary and click Configure.

Add physical disks

Attach physical storage to StarWind Virtual SAN Controller VM:

  • Ensure that all physical drives are connected through an HBA or RAID controller.
  • To get the optimal storage performance, add HBA, RAID controllers, or NVMe SSD drives to StarWind CVM via a passthrough device.

For detailed instructions on passing devices through to the CVM, refer to the Attaching PCIe device section above and the Proxmox documentation on PCI(e) passthrough. Also, find the storage provisioning guidelines in the KB article.

Create Storage Pool

1. Click the “Add” button to create a storage pool.

2. Select two storage nodes to create a storage pool on them simultaneously.

 

3. Select physical disks to include in the storage pool and click the “Next” button.
NOTE: Select identical type and number of disks on each storage node to create identical storage pools.

4. Select one of the preconfigured storage profiles or create a redundancy layout for the new storage pool manually according to your redundancy, capacity, and performance requirements.

Hardware RAID, Linux Software RAID, and ZFS storage pools are supported and integrated into the StarWind CVM web interface. To simplify storage pool configuration, preconfigured storage profiles are provided that set the recommended pool type and layout for the direct-attached storage:

  • hardware RAID – configures Hardware RAID’s virtual disk as a storage pool. It is available only if a hardware RAID controller is passed through to the CVM
  • high performance – creates Linux Software RAID-10 to maximize storage performance while maintaining redundancy
  • high capacity – creates Linux Software RAID-5 to maximize storage capacity while maintaining redundancy
  • better redundancy – creates a ZFS striped RAID-Z2 pool (similar to RAID 60) to maximize redundancy while maintaining high storage capacity
  • manual – allows users to configure any storage pool type and layout with attached storage

5. Review “Summary” and click the “Create” button to create the pools on storage servers simultaneously.

Create Volume

1. To create volumes, click the “Add” button.

2. Select two identical storage pools to create a volume simultaneously.

 

3. Specify volume name and capacity.

4. Select the Standard volume type.

5. Review “Summary” and click the “Create” button to create the volumes.

Create HA LUN

The availability type for a StarWind LUN can be Standalone or High availability (2-way or 3-way replication), depending on your license.

1. To create a virtual disk, click the Add button.

2. Select the protocol.

3. Choose the “High availability” LUN availability type.

4. Select the appliances that will host the LUN. Partner appliances must have identical hardware configurations, including CPU, RAM, storage, and networking.

5. Select a volume to store the LUN data. Selected volumes must have identical storage configurations.

6. Select the “Heartbeat” failover strategy.
NOTE:  To use the Node witness or the File share witness failover strategies, the appliances should have these features licensed.

7. Specify the HA LUN settings, e.g. name, size, and block size. Click Next.

8. Review “Summary” and click the “Create” button to create the LUN.

Connecting StarWind HA Storage to Proxmox Hosts

1. Connect to Proxmox host via SSH and install multipathing tools.
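For example (open-iscsi, which provides iscsiadm, is normally already present on Proxmox VE; install it as well only if it is missing):

apt install multipath-tools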

 

2. Edit the /etc/iscsi/initiatorname.iscsi file (e.g. with nano) and set the initiator name.

09_initiator_name
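The file contains a single InitiatorName line; a hypothetical example for the first node (the suffix is arbitrary but must be unique per host):

InitiatorName=iqn.1993-08.org.debian:01:pve-node1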

 

3. Edit /etc/iscsi/iscsid.conf setting the following parameters:
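The exact parameter values should follow StarWind's current recommendations; as an illustration only, the options typically adjusted are the startup mode and the replacement timeout, for example:

node.startup = automatic
node.session.timeo.replacement_timeout = 15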

 

5. Edit /etc/multipath.conf adding the following content:
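A minimal sketch of a device section for StarWind LUNs (verify the settings against StarWind's current recommendations before use):

devices {
    device {
        vendor "STARWIND"
        product "STARWIND*"
        path_grouping_policy multibus
        path_checker tur
        failback immediate
    }
}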

 

6. Run iSCSI discovery on both nodes:
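For example, assuming 172.16.10.100 is the CVM's iSCSI/Data interface address (adjust to your environment and repeat for each Data IP):

iscsiadm -m discovery -t st -p 172.16.10.100:3260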

 

7. Connect iSCSI LUNs:
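A sketch of the login command, using the target IQN and portal reported by the discovery above (placeholders shown):

iscsiadm -m node -T <target-iqn> -p 172.16.10.100:3260 --login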

 

8. Get WWID of StarWind HA device:
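For example, where /dev/sdX is the SCSI device that corresponds to the StarWind LUN (check with lsblk):

/lib/udev/scsi_id -g -u -d /dev/sdX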

 

9. The wwid must be added to the file ‘/etc/multipath/wwids’. To do this, run the following command with the appropriate wwid:
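For example (replace the placeholder with the WWID obtained in the previous step):

multipath -a <wwid>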

 

10. Restart multipath service.
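For example, on Debian-based Proxmox hosts the daemon is managed by systemd:

systemctl restart multipathd.service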

 

11. Check if multipathing is running correctly:
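For example, the StarWind HA device should be listed with its paths in an active state:

multipath -ll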

 

12. Repeat steps 1-11 on every Proxmox host.

13. Create an LVM PV on the multipath device:
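For example, using the mpatha alias referenced below:

pvcreate /dev/mapper/mpatha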

where mpatha is the multipath alias for the StarWind LUN.

 

14. Create VG on LVM PV:
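For example, vg_starwind is an arbitrary volume group name used here for illustration; this is the VG selected when adding the LVM storage in the next step:

vgcreate vg_starwind /dev/mapper/mpatha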

 

15. Log in to the Proxmox web interface and go to Datacenter -> Storage. Add a new LVM storage based on the VG created on top of the StarWind HA device. Enable the Shared checkbox. Click Add.

15_Add_LVM_iSCSI

 

16. Login via SSH to all hosts and run the following command:

Configure storage rescan

1. Download archive with rescan scripts.

https://tmplink.starwind.com/proxmox_rescan.zip

3. Login to StarWind CVM via SSH.

4. Install sshpass package.

 

5. Extract proxmox_rescan.zip archive.

6. Upload logwatcher.py and rescan_px.sh to /opt/starwind/starwind-virtual-san/drive_c/starwind/ directory.
Set host IP address and password in rescan_px.sh.

Proxmox Storage Rescan

Conclusion

Following this guide, a StarWind Virtual HCI Appliance (VHCA) powered by Proxmox Virtual Environment was deployed and configured with StarWind Virtual SAN (VSAN) running in a CVM on each host. As a result, a virtual shared storage “pool” accessible by all cluster nodes was created for storing highly available virtual machines.

Hey! Don’t want to tinker with configuring all the settings? Looking for a fast-track to VSAN deployment?
Dmytro Malynka, StarWind Virtual SAN Product Manager
We've got you covered! First off, all trial and commercial StarWind customers are eligible for installation and configuration assistance services. StarWind engineers will help you spin up the PoC setup to properly evaluate the solution and will assist with the production deployment after the purchase. Secondly, once deployed, StarWind VSAN is exceptionally easy to use and maintain. Hard to believe? Wait no more and book a StarWind VSAN demo now to see it in action!