Sat Nov 27, 2004 11:22 am
I thought I'd share some of my performance test results.
Though the changes seem minor they do yield some nice results.
Test systems
Client (StarPort)
- Pentium 4 2.6
- Intel D865BF motherboard
- Intel Gigabit Nic
- Windows XP Pro
- Maxtor Raptor Sata Drive as Root
- HDS 200GB ATA-100
Server (StarWind)
- AMD 64 3200
- MSI K8N Neo Platinum
- NVidia nForce3 Nic
- Windows XP Pro
- Raid 0, 2 Maxtor 80GB Sata Drives as Root
- Raid 0, 2 Seagate Barracuda 160GB Sata Drives (Image0)
Network was a simple crossover cable eliminating the overhead of a switch.
IPerf and HD Tach were used for tuning.
I started with fresh installs for both machines, and left the default network settings.
Using IPerf to begin testing the network, I found I was only transmitting at 250Mb per second. At a quarter of the expected throughput, I didn't bother with HD Tach.
On both machines I enabled Jumbo Packets. Intel's setting was 9014 bytes, and it also offered a setting near 16k... I used the 9014 setting since NVidia for some reason only offered a 9000-byte configuration.
This resulted in a throughput of 320Mb/second.
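Part of the jumbo-frame gain is simple header efficiency, and the rest is fewer packets for the CPU to process. A quick sketch of the wire-efficiency side (the per-frame overhead figures below are standard Ethernet/IPv4/TCP numbers, not measurements from my setup):

```python
# Rough wire-efficiency estimate for standard vs. jumbo frames.
# Assumed per-frame overhead (standard Ethernet, not measured here):
#   preamble + inter-frame gap: 20 bytes, header + FCS: 18 bytes,
#   IPv4 header: 20 bytes, TCP header: 20 bytes.
ETH_WIRE_OVERHEAD = 20 + 18   # bytes on the wire outside the MTU
IP_TCP_HEADERS = 20 + 20      # header bytes inside the MTU

def wire_efficiency(mtu: int) -> float:
    """Fraction of bytes on the wire that carry TCP payload."""
    payload = mtu - IP_TCP_HEADERS
    wire_bytes = mtu + ETH_WIRE_OVERHEAD
    return payload / wire_bytes

print(f"MTU 1500: {wire_efficiency(1500):.1%}")  # ~94.9%
print(f"MTU 9000: {wire_efficiency(9000):.1%}")  # ~99.1%
```

The raw efficiency gain is only a few percent, so most of the 250 to 320Mb jump likely comes from the roughly 6x fewer frames the CPU and NIC have to handle.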
Next I changed a setting on the NVidia card to maximize for throughput instead of the default (CPU).
The result was a 33% increase in CPU usage, but a throughput of 508Mb.
Not ready to quit, I played with the TCP window size (the -w command-line option for iperf).
- at 8k, throughput 508Mb, CPU 30% (default)
- at 16k, throughput 626Mb, CPU 35%
- at 32k, throughput 870Mb, CPU 48%
- at 64k, throughput 939Mb, CPU 60%
- at 128k, throughput 940Mb, CPU 68%
- at 256k, throughput 942Mb, CPU 72%
- at 512k, throughput 942Mb, CPU 74%
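The shape of that table makes sense: TCP can only keep one window of unacknowledged data in flight, so throughput is capped at window size divided by round-trip time. A back-of-the-envelope sketch (the 100-microsecond RTT is an assumption for a crossover cable, not something I measured):

```python
# TCP keeps at most one window of unacknowledged data in flight,
# so throughput is capped at window_size / round_trip_time.
def max_tcp_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    return window_bytes * 8 / rtt_seconds / 1e6

rtt = 0.0001  # assumed 100 microsecond LAN round trip, not measured
for window_kb in (8, 16, 32, 64):
    cap = max_tcp_throughput_mbps(window_kb * 1024, rtt)
    print(f"{window_kb:>3}k window -> cap ~{cap:.0f} Mb/s")
```

With an RTT in that range, the 8k default window by itself caps TCP well below gigabit speed; from 16k upward the cap exceeds the wire, which is why the gains flatten out and CPU becomes the limit.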
Satisfied with these results, I set HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\TcpWindowSize as a DWORD with a value of 65536
(the most efficient CPU/throughput combination as shown above).
At 3 times my original throughput numbers, it was now time to test iSCSI throughput.
I tested my RAID 0 (2x Seagate) array locally to determine my theoretical maximum throughput... which was 95.3MB/sec average and 246MB/sec burst.
I then ran the same tests remotely... resulting in 52MB/sec Average and 78.2MB/sec burst.
A remote RAM disk has an average throughput of 77MB/sec with a burst of 91.5MB/sec
Since local access to the RAID array is faster than even a remote RAM disk's burst, and the remote RAID average (52MB/sec) trails the RAM disk's burst by 39.5MB/sec, I conclude that this gap represents the overhead of physical disk IO, and that network bandwidth is still available for exploitation.
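The arithmetic behind that, using the measurements above (all MB/sec):

```python
# HD Tach figures from the runs above (MB/sec).
local_raid_avg = 95.3
remote_raid_avg = 52.0
remote_ramdisk_avg = 77.0
remote_ramdisk_burst = 91.5

# Gap attributed to physical disk IO: the RAM disk removes disk IO
# from the remote path, and the physical RAID trails it by this much.
disk_io_gap = remote_ramdisk_burst - remote_raid_avg
print(f"disk IO gap: {disk_io_gap:.1f}")  # 39.5

# The RAM disk still falls short of local RAID speed, so some
# network/iSCSI overhead remains in the path as well.
network_gap = local_raid_avg - remote_ramdisk_avg
print(f"network/iSCSI gap: {network_gap:.1f}")  # 18.3
```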
At this point I broke the Raid 0 configuration and created two JBODs of 160GB each... for some reason my NVidia motherboard wouldn't show the drives separately without the JBOD config (I need to investigate this further)... however, for the purposes of this test we can ignore this behavior.
What I was interested in was the performance of EACH drive separately... after reconfiguration:
- Drive A
- Locally accessed throughput was 48.1MB/sec with a burst of 133.4MB/sec
- Remotely accessed throughput was 42.9MB/sec with a burst of 67.9MB/sec
- Drive B
- Locally accessed throughput was 48.1MB/sec with a burst of 133MB/sec
- Remotely accessed throughput was 42.6MB/sec with a burst of 68.2MB/sec
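Working out the remote-to-local ratios from the figures above makes the inconsistency concrete:

```python
# Remote-vs-local average throughput ratios (MB/sec figures from above).
raid0_ratio = 52.0 / 95.3    # RAID 0: remote avg / local avg
jbod_a_ratio = 42.9 / 48.1   # JBOD drive A
jbod_b_ratio = 42.6 / 48.1   # JBOD drive B

print(f"RAID 0 keeps {raid0_ratio:.0%} of local speed over iSCSI")
print(f"JBOD A keeps {jbod_a_ratio:.0%}")
print(f"JBOD B keeps {jbod_b_ratio:.0%}")
```

The fast array loses nearly half its local speed over iSCSI while each slower single drive loses only about a tenth, which is what makes a fixed "disk IO overhead" explanation hard to sustain.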
Strangely, the percentage overhead doesn't remain constant between the RAID 0 and JBOD configurations. At this point I can't explain the discrepancy, but I must question the previous conclusion with respect to disk IO overhead... something else appears to be happening.
I figured that since Windows can do software striping, I could mount Drive A and Drive B via iSCSI and stripe them as software RAID 0. It was my hope that the extra network bandwidth could be utilized to gain better disk throughput. Unfortunately this didn't work... I'm not sure why... perhaps it's because the disks were image files, but Windows couldn't convert the disks to dynamic.
I'll save that test for another day.
Bottom line... 50MB/sec average throughput... Though nowhere near the performance of my EMC SAN, it is respectable, and I have a feeling I'm just scratching the surface.
I look forward to hearing other ideas and results!
Chuck