A while ago I was involved in a project where the customer wanted to deploy a two-node Scale-Out File Server with Storage Spaces on Windows Server 2012 R2. Since this was my first actual Storage Spaces and Scale-Out File Server deployment (besides a training and testing it in a lab with VMs), I ran into some issues and problems.
First of all, there is a lot of information on the internet that is not true for all scenarios. Settings or test results that others use and report are not necessarily valid in your environment. For example, DCB/PFC/QoS settings with Mellanox RoCE adapters are different than with Emulex RoCE adapters. And even between HP-branded and Fujitsu-branded Emulex cards the configuration is not the same. This could be because the vendors don't know it exactly either and are still struggling with their drivers, firmware and settings, I don't know. But it makes it pretty tough to get it all right. And as I said before, this is the first implementation I did, so I am not an expert on this either, so feel very free to give feedback if needed.
In my setup I was using Emulex OCe14102-U 10Gb CNA adapters, which are SFP+ adapters, for the SMB traffic. Since it was all Fujitsu hardware, these cards were Fujitsu branded too.
- Drivers and Firmware
Before you start, verify drivers and firmware. I was using firmware version 10.6.193.15 and driver version 10.6.126.0, which were released in October 2015. My dear colleague Hans Vredevoort experienced some issues earlier this year with HP-branded Emulex cards on driver and firmware version 10.5.x.x. They had to roll back to 10.2.x.x drivers and firmware to make SMB Direct work. So look very closely at drivers and firmware.
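A quick way to check the installed driver version is from PowerShell; a sketch, assuming the adapters show up with "Emulex" in their interface description (the firmware version itself is usually only visible in the vendor's management tool, e.g. Emulex OneCommand Manager):

```powershell
# List the driver version and date for the Emulex CNA ports
# (adjust the InterfaceDescription filter to match your adapters)
Get-NetAdapter |
    Where-Object { $_.InterfaceDescription -like "*Emulex*" } |
    Select-Object Name, InterfaceDescription, DriverVersion, DriverDate
```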
- Configure the switch
To configure the Virtual LANs (VLANs) on the switch, please refer to your switch vendor's configuration guide for creating VLANs and for tagging or trunking ports on those VLANs. Below is a sample configuration for a Cisco Nexus switch.

First, enable Priority Flow Control (PFC) on the ports:

```
# Enter configuration mode
configure terminal
# Enter configuration mode for a connected port using "interface <type> <slot/port>"
interface ethernet 1/1
# Select the PFC mode using "priority-flow-control <auto/on>"; choose "auto",
# then return to base configuration mode with "exit"
priority-flow-control auto
exit
# Repeat these steps for the other ports that you are going to use for SMB Direct
```

Next, configure PFC and QoS for the RoCE traffic. Please refer to your switch vendor's guide for creating a priority group and enabling PFC; the steps below are for a Cisco Nexus switch. Create a priority group for RoCE traffic with a priority of 5:

```
class-map type qos roce
match cos 5
exit
class-map type queuing roce
match qos-group 5
exit
class-map type network-qos roce
match qos-group 5
exit
```

Assign the Quality of Service (QoS) group for the different types of traffic. Enter QoS policy map configuration mode using the "policy-map type <mode> <group>" command:

```
policy-map type qos roce
class roce
set qos-group 5
exit
class class-default
exit
# If you are going to use FCoE too, add the FCoE class as well; if not, skip it
class class-fcoe
set qos-group 1
exit
exit
```

Allocate the appropriate bandwidth for the types of traffic in queuing policy map configuration mode:

```
policy-map type queuing roce
class type queuing roce
bandwidth percent 80
exit
class type queuing class-default
bandwidth percent 10
exit
# Same here for FCoE: create it only when you need it.
# If you do not need it, configure 20% for class-default instead.
class type queuing class-fcoe
bandwidth percent 10
exit
exit
```

Set the Maximum Transmission Unit (MTU) for the separate types of traffic in network-qos policy map configuration mode:

```
policy-map type network-qos roce
class type network-qos roce
pause no-drop
mtu 5000
class type network-qos class-default
mtu 9216
# Again, the class-fcoe section is only needed when you are going to use FCoE
class type network-qos class-fcoe
pause no-drop
mtu 2158
exit
exit
```

Finally, apply the service policies system-wide and save the running configuration:

```
system qos
service-policy type qos input roce
service-policy type queuing input roce
service-policy type queuing output roce
service-policy type network-qos roce
exit
copy running-config startup-config
```
The Emulex best practice is a priority of 5 for the priority group.
- Configure the Network cards
Beware! RoCE-1 and RoCE-2 are Emulex performance profiles, not routable versus non-routable RoCE.
On the ports of the Emulex card, go to the DCB tab and make sure DCB-X is enabled (which is the default). With DCB-X enabled, the NIC looks at the switch for the PFC and QoS configuration and adopts it. Below it is disabled.
After DCB-X is enabled and the server is rebooted, you will see that it has received the PFC configuration that is configured on the switch (like we did earlier).
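You can also verify from within Windows what the NIC has received via DCB-X; a sketch, assuming the SMB NICs are named "SMB1" and "SMB2" (adjust the names to your environment):

```powershell
# Show the operational DCB configuration the adapter is actually using;
# with DCB-X enabled this should reflect the PFC/ETS settings pushed by the switch
Get-NetAdapterQos -Name "SMB1", "SMB2" |
    Format-List Name, Enabled, OperationalFlowControl, OperationalTrafficClasses
```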
Put your SMB NICs in tagged VLANs. This is because the PFC/QoS priority information is carried in the VLAN part of the packet. So make sure the switch ports are tagged on the SMB VLANs (one VLAN per SMB NIC because of SMB Multichannel), or in Cisco terms: the ports are trunk ports.
And basically that's it as far as the configuration goes for SMB Direct with Emulex adapters. There is no need to disable DCB-X with PowerShell commands, because you need it (Emulex uses its own hardware DCB-X and does not look at the OS DCB-X settings). In the Emulex 14000 manuals there is not a single word on NetQosPolicies, because Emulex does not use them. All PFC and QoS is done by the hardware of the Emulex cards, and that information is passed on from the switch.
Now that all settings are in place, we need to run some commands to verify that everything is configured correctly.
Get-NetAdapterRdma to verify that your NICs can be used for RDMA. (Of course you already checked this at an earlier stage, because there is no point in configuring all of this if your network cards don't support RDMA 😉)
Get-SmbServerNetworkInterface to verify that the interfaces report "True" in the "RDMA Capable" column. If it reports "False", there is probably something wrong with the settings, driver or firmware.
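The two checks above combined, as a sketch:

```powershell
# RDMA-capable NICs as seen by the OS
Get-NetAdapterRdma | Where-Object Enabled |
    Select-Object Name, InterfaceDescription, Enabled

# Interfaces the SMB server offers to clients; RdmaCapable should report True
Get-SmbServerNetworkInterface |
    Select-Object InterfaceIndex, IpAddress, RdmaCapable
```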
If this is all good, you can start testing!
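While a large file copy to the file server is running, you can confirm that SMB Direct is actually being used; a sketch using the built-in SMB and RDMA performance counters:

```powershell
# Active SMB connections: both ends should report RDMA capability
Get-SmbMultichannelConnection |
    Select-Object ServerName, ClientRdmaCapable, ServerRdmaCapable

# RDMA throughput counters; non-zero values mean RDMA is in use
Get-Counter "\RDMA Activity(*)\RDMA Inbound Bytes/sec",
            "\RDMA Activity(*)\RDMA Outbound Bytes/sec"
```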
The first thing that really got me off track is the various blogs and videos throughout the internet that show fast file copies with RDMA and zero network throughput. Well, it turns out that is not the case with Emulex.
As you can see, lots of RDMA Outbound bytes and a 190MB per second file copy, but no network traffic.
The same copy on a system with Emulex OCe14102-U 10Gb CNA adapters: lots of RDMA Outbound Bytes, 310MB per second and 2.8Gbit network throughput. And all this time I was thinking that my RDMA/SMB Direct configuration was wrong because I kept seeing lots of network traffic…
It turns out that Emulex is bypassing the TCP/IP stack but is still using Ethernet, so there is still network traffic. But it is definitely using RDMA!
So in the end, when configuring SMB Direct with RDMA, make sure you do some decent research and check with the vendor for support. And don't stop reading after the first blog you find (including this one 🙂).