Poor Read Performance


Francesco
Posts: 25
Joined: Sun Jul 13, 2014 12:31 pm

Sun Jul 13, 2014 5:31 pm

I have a new SAN server and am seeing poor read performance on all VMs and on a test Windows server. The SAN network is a 10GbE LAN with just the two servers connected to a Netgear XS708E 8-port 10GbE switch. I originally had the servers connected directly with crossover cables, and performance did not change.
[Attachment: Disk.jpg - performance inside a VM]
[Attachment: SAN iSCSI Local.png - performance through the locally installed StarWind iSCSI initiator on the SAN server]
[Attachment: SAN Local.jpg - performance of the RAID 10 array]
Tests performed:

IP address for the iSCSI network configured with a /30 subnet
No internet access
Standalone Server
Removed Link-Layer Topology Discovery items
Unchecked Microsoft Networks items
Tested with Adaptive Load Balancing and as standalone NICs
Virtual disks have a 512MB write-back cache and are set to allow multiple iSCSI connections
Connected the virtual disk locally with the StarWind iSCSI connector (see the baseline test sketch below)
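
For reference, a command-line way to reproduce the sequential numbers in the attached screenshots. This is just a sketch, assuming Microsoft's diskspd tool is available and the locally connected StarWind disk is mounted as E: (a placeholder drive letter):

diskspd.exe -c10G -b512K -o8 -t4 -d30 -w0 -Sh E:\test.dat (sequential read, 512K blocks, caching disabled)
diskspd.exe -c10G -b512K -o8 -t4 -d30 -w100 -Sh E:\test.dat (sequential write, 512K blocks, caching disabled)

Running the same two commands against the RAID 10 volume directly and against the iSCSI-mounted disk shows where the read numbers start to drop.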

SAN server with these specifications:

StarWind 6.0.6399
Windows Server 2008 R2 Standard x64 6.1.7601 Service Pack 1 Build 7601
Supermicro X9DRi-LN4+/X9DR3-LN4+
Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz, 2400 Mhz, 4 Core(s), 4 Logical Processor(s)
Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz, 2400 Mhz, 4 Core(s), 4 Logical Processor(s)
Installed Physical Memory (RAM) 32.0 GB
Total Physical Memory 32.0 GB
Available Physical Memory 21.1 GB
Total Virtual Memory 63.9 GB
Available Virtual Memory 53.0 GB
Page File Space 32.0 GB
C:\ Drive is 120GB RAID 1 SSD drives

Intel(R) C600/X79 series chipset
Matrox G200eW (Nuvoton)
LSI MegaRAID SAS Adapter
- RAID 10 8x2TB SAS 6Gb/s drives with Write-Back Cache
Intel(R) C600 series chipset SATA AHCI Controller
Intel(R) C600 Series Chipset SAS RAID (SATA mode)
Intel(R) I350 Gigabit Network Connection
Intel(R) I350 Gigabit Network Connection #2
Intel(R) I350 Gigabit Network Connection #3
Intel(R) I350 Gigabit Network Connection #4

Team #0 - Intel(R) Ethernet Converged Network Adapter X540-T1
Team #0 - Intel(R) Ethernet Converged Network Adapter X540-T1 #2
- Dynamic Link Aggregation
- Aggregation not set at the switch because it will not work
- Driver settings:
Jumbo Packet: 9014
Max Number of RSS Processors: 16
Performance options (seemed to improve speed, but only slightly):
• Receive Buffers: 2048
• Transmit Buffers: 8192
RSS Queues: 16
Starting RSS CPU: 0
Disabled Data Center Bridging option
- The only thing that helped slightly was the buffer settings

Ran the following commands:

netsh int tcp set heuristics disabled
netsh int tcp set global autotuninglevel=normal
netsh int tcp set global congestionprovider=ctcp
netsh int tcp set global ecncapability=enabled
netsh int tcp set global rss=enabled
netsh int tcp set global chimney=enabled
netsh int tcp set global dca=enabled
netsh int ipv4 set subint "<Name of NIC>" mtu=9000 store=persistent
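
To confirm the global TCP settings actually took effect (just a sanity check, nothing StarWind-specific), the current state can be listed with:

netsh int tcp show global

The output should report the autotuning level, congestion provider, RSS, chimney offload, and DCA states as set above.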

VMWare ESXi 5.5 Server

Dell PowerEdge 2900 III
Intel(R) Xeon(R) E5420 @ 2.50GHz, 2493 Mhz, 4 Core(s), 4 Logical Processor(s)
Intel(R) Xeon(R) E5420 @ 2.50GHz, 2493 Mhz, 4 Core(s), 4 Logical Processor(s)
BIOS Version/Date Dell Inc. 2.7.0, 10/30/2010
Installed Physical Memory (RAM) 48.0 GB
Broadcom BCM5708C NetXtreme II GigE
Intel(R) Ethernet Converged Network Adapter X540-T1
- MTU: 9000


I have a second server with the same hardware specifications as the ESXi server and currently has Windows Server 2008 R2 for testing:

Dell PowerEdge 2900 III
Intel(R) Xeon(R) E5420 @ 2.50GHz, 2493 Mhz, 4 Core(s), 4 Logical Processor(s)
Intel(R) Xeon(R) E5420 @ 2.50GHz, 2493 Mhz, 4 Core(s), 4 Logical Processor(s)
BIOS Version/Date Dell Inc. 2.7.0, 10/30/2010
Installed Physical Memory (RAM) 48.0 GB
Broadcom BCM5708C NetXtreme II GigE
Intel(R) Ethernet Converged Network Adapter X540-T1
- Same configuration as SAN server
Last edited by Francesco on Sun Jul 13, 2014 5:37 pm, edited 1 time in total.
Francesco
Posts: 25
Joined: Sun Jul 13, 2014 12:31 pm

Sun Jul 13, 2014 5:36 pm

In addition, I created a RAM drive on the SAN; a test against it shows I am getting about 9.4 Gbps write but only about 3.2 Gbps read.

What could be the problem?
[Attachment: RAM Drive.jpg - RAM drive benchmark]
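
A raw TCP test between the two boxes would help separate the network from the disk stack here. A quick sketch, assuming iperf is copied to both servers and using 10.10.10.1 / 10.10.10.2 as placeholder addresses on the /30 SAN subnet:

On the test server: iperf -s -w 256K
On the SAN server: iperf -c 10.10.10.1 -P 4 -t 30 -w 256K

Swapping the roles tests the opposite direction; if both directions push close to line rate, the read/write asymmetry sits above the network layer.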
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Mon Jul 14, 2014 2:09 am

1) V6 is old. Please replace it with V8. Tons of things have changed, and V6 has been EOL-ed since V8 was released.

2) The StarWind iSCSI initiator is EOL-ed. Please use the Microsoft one. I know it's not production, but the worst thing is that when testing over loopback our accelerator driver is not going to pick up the StarWind one, so the numbers will be low.

3) NIC teaming is not supported for iSCSI. You can (in theory) use it in some scenarios with Windows Server 2012 R2, but there is no way to do it with VMware ESXi / vSphere. Please unteam the NICs and configure MPIO properly.
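
A rough outline of the MPIO side on Windows Server 2008 R2 (a sketch only; exact steps depend on your setup):

dism /online /enable-feature /featurename:MultipathIo (adds the MPIO feature if it is not installed yet)
mpclaim -n -i -d "MSFT2005iSCSIBusType_0x9" (claims iSCSI devices for MPIO, no immediate reboot)
mpclaim -s -d (after a reboot, lists the MPIO-claimed disks)

Then log on to the target once per NIC with the Microsoft iSCSI initiator, tick "Enable multi-path" for each session, and pick Round Robin as the load balance policy.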

That's just for starters...
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

Francesco
Posts: 25
Joined: Sun Jul 13, 2014 12:31 pm

Mon Jul 14, 2014 2:20 am

How old is V8 and will V8 install seamlessly over V6?

NIC teaming not supported with iSCSI: is that the case even when using Intel's software?
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Mon Jul 14, 2014 2:39 am

Couple of months old.

V8-over-V6 upgrade is transparent. No downtime.

The Intel LACP stack is not supported even with an all-Microsoft configuration. Don't do it. Ever.
Francesco wrote:How old is V8 and will V8 install seamlessly over V6?

NIC teaming not supported with iSCSI: is that the case even when using Intel's software?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

Francesco
Posts: 25
Joined: Sun Jul 13, 2014 12:31 pm

Mon Jul 14, 2014 3:26 am

Would I be able to downgrade if V8 didn't work?
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Mon Jul 14, 2014 3:35 am

If you want to have an unsupported V6-based config I don't see why not.
Francesco wrote:Would I be able to downgrade if V8 didn't work?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

Francesco
Posts: 25
Joined: Sun Jul 13, 2014 12:31 pm

Mon Jul 14, 2014 3:45 am

OK, I just need to look at all the possibilities just in case.

Also, is it better to use the Intel driver's Jumbo Packet 9014 option or the Windows command netsh int ipv4 set subint "SAN" mtu=1500 store=persistent? I tried the Windows command and it seems to cause problems.
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Mon Jul 14, 2014 3:52 am

With the new version it's not required to use Jumbo frames; you should be able to saturate your Ethernet uplinks with 1500-byte frames. Enabling 9K Jumbo frames would not hurt, however. Just make sure all the gear (especially the switch; cheap Netgears are known to have issues) can handle them.
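
A quick way to prove the whole path handles 9K frames end to end, assuming a 9000-byte MTU on both NICs and substituting the other host's SAN address:

ping -f -l 8972 <other host's SAN IP>

8972 bytes of payload plus 28 bytes of IP/ICMP headers is exactly 9000; if the switch or either NIC cannot pass jumbo frames, the ping fails with "Packet needs to be fragmented but DF set".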
Francesco wrote:OK, I just need to look at all the possibilities just in case.

Also, is it better to use the Intel driver Jumbo Packet of 9014 option or to use the Windows command: netsh int ipv4 set subint "SAN" mtu=1500 store=persistent? I tried the Windows command and seems to cause problems.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

Francesco
Posts: 25
Joined: Sun Jul 13, 2014 12:31 pm

Mon Jul 14, 2014 4:00 am

OK, I meant netsh int ipv4 set subint "SAN" mtu=9000 store=persistent, not 1500.
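
(To double-check that the 9000 value actually stuck on the "SAN" interface, netsh int ipv4 show subinterfaces should list it in the MTU column; just a quick local verification, no switch involved.)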

The Netgear 10GbE switch I got is supposed to support Jumbo frames.

I'll try it and get back with results as soon as I can.
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Mon Jul 14, 2014 4:09 am

Sure. If you still have similar issues we'd be happy to take a look at your StarWind logs. Also, a bit more detailed info about your config (on the StarWind side: capacity, underlying disk subsystem, cache, etc.) is required. Thanks!
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

Francesco
Posts: 25
Joined: Sun Jul 13, 2014 12:31 pm

Wed Jul 16, 2014 2:34 pm

When you say the upgrade from V6 to V8 can be done with no downtime, do you mean that the upgrade can be done without shutting down the VMs and nobody will notice anything?
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Wed Jul 16, 2014 3:20 pm

Yes. Assuming you run a fault-tolerant configuration with StarWind, of course.
Francesco wrote:When you say the upgrade from V6 to V8 can be done with no downtime, do you mean that the upgrade can be done without shutting down the VMs and nobody will notice anything?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

Francesco
Posts: 25
Joined: Sun Jul 13, 2014 12:31 pm

Wed Jul 16, 2014 3:26 pm

I'm not using fault tolerance. It's a single server, and it uses .ibv files.
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Wed Jul 16, 2014 4:38 pm

You cannot upgrade single controller setups without any downtime. Nobody I know can do anything like that. Sorry about that.
Francesco wrote:I'm not using fault tolerance. It's a single server and using .ibv files.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
