
Intel SPDK NVMe-oF Target Performance Tuning. Part 2: Preparing testing environment

  • December 27, 2017
  • 20 min read
Director of Sales Engineering with more than 15 years of professional IT experience. Almost 2 years of Technical Support and Engineering at StarWind. Storage and virtualization expert. IT systems engineer. Web designer as a hobby.

Introduction

In the previous article, I described three scenarios for testing NVMe-oF performance and skimmed through their hardware and software configuration. Again, what I want to do is measure how virtualization influences NVMe-oF performance (maybe it doesn't at all). For this, I'm going to examine how NVMe-oF performs on a bare-metal configuration and on infrastructures with Hyper-V and ESXi deployed. In each case, I'll also evaluate the performance of the iSER transport using LIO, and of SPDK iSCSI. Now that you have an overall understanding of the project, it's time to move on to configuring our testing environment.

So here are all the steps.

Step 1. Preparing the server

First things first – installing all the required packages:
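The package list itself hasn't survived the page extraction; as a rough sketch, assuming an Ubuntu/Debian host like the one described in Part 1, it would look something like this:

# apt-get update
# apt-get install -y build-essential git libibverbs-dev librdmacm-dev ibverbs-utils rdmacm-utils perftest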

Now, installing nvme-cli. To do this, we need to execute the following commands:
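The commands are missing from the page; building nvme-cli from the upstream repository, which was the usual route at the time, would look roughly like this:

# git clone https://github.com/linux-nvme/nvme-cli.git
# cd nvme-cli
# make && make install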

Afterward, we should configure our NIC Mellanox ConnectX-4, but first, check the NIC availability:
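A simple way to do this (the original listing is gone) is to filter the PCI device list for Mellanox:

# lspci | grep Mellanox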


Next, adding the support of the required modules:
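The exact module list from the screenshot is not recoverable; for a ConnectX-4 with NVMe-oF and iSER over RDMA, it would presumably include something like:

# modprobe mlx5_core
# modprobe mlx5_ib
# modprobe ib_uverbs
# modprobe rdma_ucm
# modprobe nvme-rdma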


We can also add these modules to the /etc/modules file so that they could be loaded automatically when booting the system:
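Assuming the same module set as above, /etc/modules would get these lines appended:

mlx5_core
mlx5_ib
ib_uverbs
rdma_ucm
nvme_rdma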


Assigning the IP address to the interface by editing /etc/network/interfaces (vi /etc/network/interfaces):
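The actual stanza was shown as a screenshot; a typical static configuration for the Mellanox interface (the interface name and the 172.16.1.0/24 addresses are placeholders used throughout these sketches, with .1 for the server and .2 for the client) would be:

auto enp4s0
iface enp4s0 inet static
    address 172.16.1.1
    netmask 255.255.255.0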


Re-starting the networking service:
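On the Ubuntu generation in question this is typically:

# systemctl restart networking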


Now, let’s check if our interface is up and running using the ibv_devinfo utility.
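The utility ships with ibverbs-utils, and the port state should be reported as PORT_ACTIVE:

# ibv_devinfo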


Everything seems fine, and we can proceed to the next step.

Step 2. Preparing the Client

The process here is the same as in the previous step (preparing the server), so we just perform all the same actions.

Step 3. Checking RDMA

Now, we must check if the RDMA transport works between our hosts. For this, run the following command on the server:
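The command itself was lost with the screenshot; a common choice for this check is rping from rdmacm-utils, assuming that is what was used here:

# rping -s -v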

and on the client:
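Under the same assumption, with 172.16.1.1 standing in for the server's RDMA address:

# rping -c -a 172.16.1.1 -v -C 10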

If everything's OK, we should see the exchange information printed on both sides, confirming that the RDMA transport works.

Step 4. Installing SPDK

Now that we know that the RDMA transport is working correctly, we can proceed to install SPDK and LIO. This won’t take much time.

First, getting the source code:
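Per SPDK's standard workflow of that period (the submodule pulls in DPDK for the build):

# git clone https://github.com/spdk/spdk
# cd spdk
# git submodule update --init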

Next, installing prerequisites. The scripts/pkgdep.sh script will automatically install the full set of dependencies required to build and develop SPDK:
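From the repository root:

# ./scripts/pkgdep.sh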

Note that not all features are enabled by default. For example, RDMA support (and hence NVMe over Fabrics) is not enabled by default. You can enable it by doing the following:
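That is, configure the build with RDMA support and compile:

# ./configure --with-rdma
# make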

And finally, to make sure that everything works correctly, run unit tests:
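The unit-test runner is unittest.sh; depending on the SPDK revision it sits under test/unit/:

# ./test/unit/unittest.sh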

You will see several error messages when running the unit tests, but they are part of the test suite. The final message at the end of the script indicates success or failure.

Step 5. Configuring and preparing NVMe-oF / SPDK iSCSI Target / LIO for testing

Here, I'll show you how I've configured and prepared our targets for testing. Please note that I describe the configuration and preparation for NVMe-oF, SPDK iSCSI Target, and LIO one after another; however, this shouldn't be read as a single step-by-step procedure, since each target must be configured and prepared separately.

The first one is NVMe-oF.

Let's go to the directory with the default configurations and copy the template to /home/sw/spdk/app/nvmf_tgt:
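Assuming the nvmf.conf.in template shipped with SPDK and the file name used later in this article:

# cd /home/sw/spdk/etc/spdk
# cp nvmf.conf.in /home/sw/spdk/app/nvmf_tgt/nvme.conf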

Now, editing the configuration file nvme.conf

In our case, the configuration file looks as follows:
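The actual file was shown as a screenshot and can't be recovered, so the sketch below is only an approximation based on the legacy INI format of SPDK releases from that period; section and key names should be checked against nvmf.conf.in, host access control lines are omitted, and the PCIe address, IP address, and NQNs are placeholders (a Malloc bdev stands in for the RAM drive):

[Nvme]
  TransportID "trtype:PCIe traddr:0000:02:00.0" Nvme0

[Malloc]
  NumberOfLuns 1
  LunSizeInMB 8192

[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Listen RDMA 172.16.1.1:4420
  SN SPDK00000000000001
  Namespace Nvme0n1

[Subsystem2]
  NQN nqn.2016-06.io.spdk:cnode2
  Listen RDMA 172.16.1.1:4420
  SN SPDK00000000000002
  Namespace Malloc0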

To get PCIe traddr, we should run the following command:
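Presumably something along these lines:

# lspci | grep -i "non-volatile memory"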


Now, preparing NVMe-oF for performance testing.

Before initiating the NVMe-oF Target on the server, it is necessary to run the following command from the SPDK directory:
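Judging by the iSCSI section later in the article, this is the hugepage setup script:

# HUGEMEM=32765 scripts/setup.sh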


After that, SPDK takes control of the NVMe drive, and it disappears from the list of block devices in the system.

Then, execute the following command from the spdk/app/nvmf_tgt directory:
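Assuming the target binary and the config name from above:

# ./nvmf_tgt -c nvme.conf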


If everything is in order, the console output confirms that the NVMe-oF Target has been launched successfully.

In case you want to finish working with NVMe and return the drive to the system, stop the NVMe-oF Target and execute the following command from the SPDK directory:
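The reset is done by the same setup script:

# scripts/setup.sh reset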


Next, we have to connect the device on the client by running the following commands:
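The commands from the screenshot are gone; presumably the nvme-rdma module is loaded first and the target is then discovered (172.16.1.1 is the placeholder server address used throughout these sketches):

# modprobe nvme-rdma
# nvme discover -t rdma -a 172.16.1.1 -s 4420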


To connect the required device, execute the following command:

For RAM Drive:
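Assuming the Malloc-backed subsystem (cnode2) from the config sketch above:

# nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 172.16.1.1 -s 4420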

For NVMe:
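And the NVMe-backed subsystem (cnode1):

# nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 172.16.1.1 -s 4420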

After executing the command, let’s check that the device is available in the system:
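For example:

# nvme list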


To disconnect the device, run:
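Again assuming nvme-cli, by subsystem NQN:

# nvme disconnect -n nqn.2016-06.io.spdk:cnode1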


Configuring SPDK iSCSI Target

We've finished with NVMe-oF. Now, on to the SPDK iSCSI Target. The process is already quite familiar: go to the directory with the default configurations and copy the template to /home/sw/spdk/app/iscsi_tgt:
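Assuming the iscsi.conf.in template shipped with SPDK:

# cd /home/sw/spdk/etc/spdk
# cp iscsi.conf.in /home/sw/spdk/app/iscsi_tgt/iscsi.conf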

Edit the iscsi.conf configuration file
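As with the NVMe-oF config, the actual file was a screenshot; the sketch below only approximates the legacy INI layout of that SPDK generation (check iscsi.conf.in for the exact keys; names, addresses, and netmask are placeholders, and a second TargetNode for the Malloc device would follow the same pattern):

[iSCSI]
  NodeBase "iqn.2016-06.io.spdk"
  MaxSessions 128

[PortalGroup1]
  Portal DA1 172.16.1.1:3260

[InitiatorGroup1]
  InitiatorName ANY
  Netmask 172.16.1.0/24

[Nvme]
  TransportID "trtype:PCIe traddr:0000:02:00.0" Nvme0

[Malloc]
  NumberOfLuns 1
  LunSizeInMB 8192

[TargetNode1]
  TargetName disk1
  Mapping PortalGroup1 InitiatorGroup1
  AuthMethod Auto
  UseDigest Auto
  LUN0 Nvme0n1
  QueueDepth 128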

In order to get PCIe traddr, execute the following command:
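Same as before:

# lspci | grep -i "non-volatile memory"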


Preparing SPDK iSCSI Target for performance testing

On the server:

Before launching the SPDK iSCSI Target on the server, we need to execute the # HUGEMEM=32765 scripts/setup.sh command from the SPDK directory.


After that, SPDK takes control of the NVMe drive, and it disappears from the list of block devices in the system.

Further, we should run the following command from the spdk/app/iscsi_tgt directory:
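Assuming the config name from above:

# ./iscsi_tgt -c iscsi.conf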

A successful launch of the iSCSI Target is confirmed in the console output.

If there is a need to return NVMe to the system, stop the SPDK iSCSI Target and execute the following command from the SPDK directory:
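As before:

# scripts/setup.sh reset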


On the client:

After that, we should execute the command for changes to take effect:
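The command itself is missing; assuming the stock open-iscsi initiator whose configuration (/etc/iscsi/iscsid.conf) has just been edited, this would be a restart of the initiator services:

# systemctl restart iscsid open-iscsi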

To connect the devices, run the following commands:
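Presumably the standard iscsiadm discovery and login sequence against the placeholder portal:

# iscsiadm -m discovery -t sendtargets -p 172.16.1.1:3260
# iscsiadm -m node --login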


Next, finding the /dev/sdX nodes for the iSCSI LUNs:
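One way to map the sessions to block devices, assuming iscsiadm:

# iscsiadm -m session -P 3 | grep "Attached scsi disk"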


After the targets are connected, they can be tuned.

Tuning
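The actual tuning commands didn't survive the extraction. Typical knobs for this kind of benchmark are the block-queue settings of the connected devices, so a sketch (sdX and the values are placeholders) might look like this:

# echo noop > /sys/block/sdX/queue/scheduler
# echo 0 > /sys/block/sdX/queue/add_random
# echo 2 > /sys/block/sdX/queue/rq_affinity
# echo 1024 > /sys/block/sdX/queue/nr_requests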

If you need to disconnect from the target, run:
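Assuming iscsiadm:

# iscsiadm -m node -p 172.16.1.1:3260 --logout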

Configuring LIO

We're done with NVMe-oF and SPDK iSCSI and can now move on to iSER. Just to remind you, SPDK doesn't support iSER, so I've decided to use LIO.

On the server:

Prior to configuring LIO, it is necessary to perform all the steps described in the previous part. After that, connect devices locally and check whether they are available in the system. The procedure is the same as with NVMe-oF.


Now, we can proceed to LIO configuration.

First, run the targetcli command to enter the LIO CLI console:
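targetcli comes with the targetcli (or targetcli-fb) package:

# targetcli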


Next, connecting our drives:
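The screenshots with the exact commands are gone; from here on, the sketches are targetcli console commands with placeholder names (nvme0, ram0, the IQN, and the portal address). Attaching the NVMe drive as a block backstore and creating a RAM-backed backstore could look like this:

/backstores/block create name=nvme0 dev=/dev/nvme0n1
/backstores/ramdisk create name=ram0 size=8G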


Creating iscsi target:
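With a placeholder IQN:

/iscsi create iqn.2017-12.local.lio:target1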


Allowing access from any server:
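One way to open the target up for the benchmark is demo mode without authentication, set via the TPG attributes:

/iscsi/iqn.2017-12.local.lio:target1/tpg1 set attribute authentication=0 generate_node_acls=1 cache_dynamic_acls=1 demo_mode_write_protect=0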


Creating LUN:
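Exporting the backstore created earlier:

/iscsi/iqn.2017-12.local.lio:target1/tpg1/luns create /backstores/block/nvme0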


Creating portal:
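On the placeholder RDMA interface address (if a default 0.0.0.0:3260 portal was created automatically, it may need to be deleted first):

/iscsi/iqn.2017-12.local.lio:target1/tpg1/portals create 172.16.1.1 3260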


Enabling iSER on this portal:
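From within the portal object:

cd /iscsi/iqn.2017-12.local.lio:target1/tpg1/portals/172.16.1.1:3260
enable_iser boolean=true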


Saving our configuration and exiting:
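Finally:

saveconfig
exit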


Preparing iSER for performance testing

Steps here are practically the same as when we were preparing SPDK iSCSI.

On the client:

To confirm the changes, run the following command:
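As in the SPDK iSCSI case, presumably a restart of the initiator services after editing its configuration:

# systemctl restart iscsid open-iscsi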

To connect the devices, run:
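Discovery first, assuming iscsiadm and the placeholder portal:

# iscsiadm -m discovery -t sendtargets -p 172.16.1.1:3260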


NOTE: we have to change transport to iSER:
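With iscsiadm, the discovered node record is switched to the iSER transport before logging in (the IQN and portal are the placeholders used above):

# iscsiadm -m node -T iqn.2017-12.local.lio:target1 -p 172.16.1.1:3260 -o update -n iface.transport_name -v iser
# iscsiadm -m node -T iqn.2017-12.local.lio:target1 -p 172.16.1.1:3260 --login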


Finding /dev/sdX nodes for iSCSI LUNs:
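Same as before:

# iscsiadm -m session -P 3 | grep "Attached scsi disk"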


Tuning
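Presumably, the same block-queue tuning shown in the SPDK iSCSI section is applied to the newly attached /dev/sdX devices.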

If there is a need to disconnect from a target, run:
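As before:

# iscsiadm -m node -p 172.16.1.1:3260 --logout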

Conclusion

Everything is ready for testing. We've prepared the server and the client and configured the NVMe-oF, SPDK iSCSI, and iSER transports. The only things left to do are to create and prepare VMs in Hyper-V and ESXi, but I'll cover those steps in the corresponding posts where I benchmark NVMe-oF performance. In my next post, I'll run the first test and measure how well NVMe-oF performs on a bare-metal configuration.

Intel SPDK NVMe over Fabrics [NVMe-oF] Target Performance Tuning. Part 1: Jump into the fire©

Benchmarking Samsung NVMe SSD 960 EVO M.2
