Hello,
I have been testing the StarWind 8 Beta for a vSphere lab and noticed yesterday that things were a bit slower than usual (I have been testing for about a week). Everything had been performing really well, but yesterday I noticed in vSphere that latency on the LUN went up to about 5000 ms; the average according to the graph is about 2-5 ms. I only had about 5 active VMs running on this LUN, and they don't do much disk access. I shut down the VMs on this particular datastore and rebooted the StarWind server. After the reboot I opened the StarWind console and connected to the StarWind service, which indicated that it was mounting the LSFS device. After about 15-20 minutes I received the error "Device Mount Failed!". I tried rebooting again, still no luck. It looks to me like the VMs that were running on this datastore are history... Is there any way to force-mount the LSFS disk?
On another note, when I initially set up this test environment I created an LSFS disk and an imagefile disk to test performance. The imagefile disk never had an issue and mounts fine; unfortunately, my VMs were running on the LSFS disk at the time. I also checked my RAID controller, and it shows all disks healthy and the RAID status as Operational. My Windows server running StarWind never showed more than 2 GB of memory utilized.
Info about my StarWind setup:
Windows Server 2012 R2
StarWind 8 Beta 3
RAID controller: PERC 6/i
RAID: (4) disk RAID 10, Write Back enabled
Server has 4 GB RAM
CPU: Xeon L5639
Network: (3) 1 Gbps NICs (1 NIC for management, 2 NICs for iSCSI)
iSCSI Setup: A side and B side on different VLANs / separate subnets; I also modified `<iScsiDiscoveryListInterfaces value="1"/>` per the MPIO document.
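For clarity, here is the line as it sits in my StarWind.cfg after the change (surrounding file contents omitted; my understanding of what it does comes from the MPIO document):

```xml
<!-- StarWind.cfg: list all interfaces in iSCSI discovery responses,
     so the initiator can build sessions to both the A-side and
     B-side portals -->
<iScsiDiscoveryListInterfaces value="1"/>
```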
VMware showed both targets in the LUN path, and I configured Round Robin as the Path Selection Policy.
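I did this through the vSphere client, but for reference the equivalent from the ESXi shell is roughly the following (the naa.* device identifier below is a placeholder, not my actual LUN):

```shell
# List storage devices to find the StarWind LUN's naa identifier
esxcli storage nmp device list

# Set the Path Selection Policy to Round Robin for that device
# (naa.xxxxxxxxxxxxxxxx is a placeholder for the real device ID)
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
```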
LSFS Device Info:
Size: 300 GB
Cache enabled: 256 MB
Dedupe: No
Thanks in advance; I look forward to your response.