Configuring ESXi 4 iSCSI initiator for Multipathing and Jumbo Frames

I recently needed to configure an ESXi 4.0 server to use the new multi-pathing capability along with jumbo frames. I have done this in the past with “classic” ESX using this fantastic post by Chad Sakac and friends which uses the ESXCFG commands within the service console. As ESXi doesn’t have a service console I had to do a little research to figure it out. The information is out there but it took a bit of finding so I decided to post the process here.

Yes, I know I can hack ESXi to enable SSH and use the regular esxcfg commands. However, since VMware keeps telling us that the service console is going away, I figure I may as well do it “right” and use the vSphere CLI.

On a side note, I originally tried to accomplish this using PowerCLI (based on PowerShell) but ran into issues with setting up vmknics and their MTU settings. I also couldn’t bind the vkernel ports to the iSCSI HBA. There is most likely a way of doing it with PowerCLI, but in the end I found it easier to use the regular vSphere CLI.

This guide assumes that you have already –

  • Got a reasonable working knowledge of ESX
  • Read the post linked above and understood the concepts
  • Enabled jumbo frames on the relevant physical switch ports
  • Ensured that your iSCSI target and server support jumbo frames
  • Got a base install of ESXi 4.0 up and running
  • Found the name of your iSCSI HBA (it should be something like vmhba33)
  • Understood that you substitute anything between the <angle brackets> with your own relevant information

Preparation

To get the job done we will be using a combination of the vSphere CLI, which you can get from here, and the vSphere Windows client, which you can get by connecting to the IP of your ESXi host with your web browser.

Step 1 – Create the vSwitch and set the MTU

In this section we will create the vSwitch and assign the physical NICs that will be used for iSCSI traffic using the GUI, then switch to the CLI to set the MTU to 9000 (jumbo frames), as that can’t be done from the GUI.

  1. Log into the ESX host with the vSphere Client
  2. Create a vSwitch and take a note of its name (ie “vSwitch1”)
  3. Attach the NICs you intend to use for iSCSI traffic. Be sure these are plugged into switch ports with jumbo frames enabled. In this example I am using two NICs.

4. If you chose all the defaults you will end up with a port group on the vSwitch. You can safely delete it as you don’t need it.

5. If you haven’t already, install the vSphere CLI, accepting all the defaults.

6. Fire up the vSphere CLI command prompt from the Start menu.

7. The command prompt defaults to C:\Program Files\VMware\VMware vSphere CLI\. Change to the “bin” directory; you should now be at C:\Program Files\VMware\VMware vSphere CLI\bin.

8. To configure the switch we just created with jumbo frame support, type –

vicfg-vswitch.pl -server <server name> -m 9000 <vSwitch name>

eg. vicfg-vswitch.pl -server ESX01 -m 9000 vSwitch1

9. To confirm it worked correctly, run the following –

vicfg-vswitch.pl -server <server name> -l

eg. vicfg-vswitch.pl -server ESX01 -l

Your switch should now appear with an MTU of 9000.

10. Keep the prompt open as we will be using it a few more times yet.
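If you would rather script Step 1 end to end, the vSwitch itself can also be created and uplinked from the vSphere CLI instead of the GUI. This is a minimal sketch, not a tested procedure: the host, switch and vmnic names are placeholders, and it assumes vicfg-vswitch’s -a (add switch) and -L (link uplink) options work as documented for vSphere CLI 4.0. It only prints the commands so you can review them before running them for real.

```shell
#!/bin/sh
# Hypothetical names -- substitute your own host, switch and NICs.
SERVER=ESX01
VSWITCH=vSwitch1

# Emit the vicfg-vswitch commands for creating the switch, linking
# each physical NIC passed as an argument, and raising the MTU.
step1_cmds() {
    printf 'vicfg-vswitch.pl -server %s -a %s\n' "$SERVER" "$VSWITCH"
    for NIC in "$@"; do
        printf 'vicfg-vswitch.pl -server %s -L %s %s\n' "$SERVER" "$NIC" "$VSWITCH"
    done
    printf 'vicfg-vswitch.pl -server %s -m 9000 %s\n' "$SERVER" "$VSWITCH"
}

step1_cmds vmnic1 vmnic2
```

Once you are happy with the output, paste the lines into the vSphere CLI prompt (or drop the printf wrappers and run them directly).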

Step 2 – Set up vkernel ports with jumbo frame support

We have to do this part entirely from the CLI as we can’t create vmknics in the GUI and set the MTU afterwards like we did with the vSwitch. The MTU can only be set when a vkernel port is created.

1. Before you can create the vmknics and assign them an IP address and MTU setting, you first need to create a port group with the name that you intend to use for each vkernel port. For each vkernel port type –

vicfg-vswitch.pl -server <server name> -add-pg <port group name> <vSwitch name>

eg. vicfg-vswitch.pl -server ESX01 -add-pg iSCSI_1 vSwitch1

2. To confirm it worked, type –

vicfg-vswitch.pl -server <server name> -l

eg. vicfg-vswitch.pl -server ESX01 -l

You should see the new port groups listed under the vSwitch.

3. Now create the vkernel ports and attach them to the relevant port group by typing –

vicfg-vmknic.pl -server <server name> -add -ip <IP address> -netmask <netmask> -p "<port group name>" --mtu 9000

eg. vicfg-vmknic.pl -server ESX01 -add -ip 192.168.254.12 -netmask 255.255.255.0 -p "iSCSI_1" --mtu 9000

4. To confirm it worked type –

vicfg-vmknic.pl -server <server name> -l

eg. vicfg-vmknic.pl -server ESX01 -l
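If you have several vkernel ports to create, the Step 2 commands lend themselves to a small loop. This sketch only generates the command lines for review; the host, vSwitch, netmask, IPs and the iSCSI_N port-group naming scheme are assumptions you should replace with your own values.

```shell
#!/bin/sh
# Hypothetical addressing -- substitute your own values.
SERVER=ESX01
VSWITCH=vSwitch1
NETMASK=255.255.255.0

# Emit one -add-pg command and one vmknic-creation command per
# iSCSI vkernel port, numbering the port groups iSCSI_1, iSCSI_2...
step2_cmds() {
    i=1
    for IP in "$@"; do
        printf 'vicfg-vswitch.pl -server %s -add-pg iSCSI_%d %s\n' \
            "$SERVER" "$i" "$VSWITCH"
        printf 'vicfg-vmknic.pl -server %s -add -ip %s -netmask %s -p iSCSI_%d --mtu 9000\n' \
            "$SERVER" "$IP" "$NETMASK" "$i"
        i=$((i + 1))
    done
}

step2_cmds 192.168.254.12 192.168.254.13
```

Review the output, then paste the lines into the vSphere CLI prompt.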

Step 3 – Binding the vkernel ports to the physical NICs

At this point we need to switch back to the GUI and configure each vkernel port so that it only uses one active adapter. This allows the NMP driver within ESX to handle all the load balancing and failover. Once that is done we go back to the command line one more time and the job is done.

1. Connect to your ESXi host with the vSphere Client

2. Go to the properties of the vSwitch that you have created.

3. Highlight the first vkernel port and click edit, then go to the “NIC Teaming” tab.

4. Check the “Override vSwitch failover order” box.

5. Move all but one of the physical adapters from the “active” list to the “unused” list. Do this for each vkernel port so that each one uses a different physical adapter.

6. Go back to the CLI prompt and “bind” each vKernel port to the iSCSI initiator by running the following command –

esxcli --server <server name> swiscsi nic add -n <vmknic name> -d <vmhba name>

eg. esxcli --server ESX01 swiscsi nic add -n vmk1 -d vmhba34

7. To confirm it worked run the following-

esxcli --server <server name> swiscsi nic list -d <vmhba name>

eg. esxcli --server ESX01 swiscsi nic list -d vmhba34

You should see a whole bunch of details (IP, MTU, etc.) for each vkernel port that is bound to the iSCSI HBA.
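The binding step is one command per vkernel port, so it can also be looped. As before this is just a sketch that prints the commands for review; the host name, HBA and vmknic names are placeholders from the examples above.

```shell
#!/bin/sh
# Hypothetical names -- substitute your own host, HBA and vmknics.
SERVER=ESX01
HBA=vmhba34

# Emit one bind command per vkernel port passed as an argument,
# followed by the list command used to confirm the bindings.
bind_cmds() {
    for VMK in "$@"; do
        printf 'esxcli --server %s swiscsi nic add -n %s -d %s\n' \
            "$SERVER" "$VMK" "$HBA"
    done
    printf 'esxcli --server %s swiscsi nic list -d %s\n' "$SERVER" "$HBA"
}

bind_cmds vmk1 vmk2
```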

Wrapping it up

So that’s it. If everything worked you should now be able to point your jumbo-frame-enabled ESXi iSCSI initiator at your target and run a discovery. Each target device should now have at least two paths to the storage. Keep in mind that you can only have a maximum of 8 paths to a device when using iSCSI on ESX.

Once you can see your LUNs you should be able to configure the NMP driver to use Round Robin for each of the accessible devices.
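The Round Robin switch can also be scripted. The sketch below assumes the ESXi 4.x esxcli nmp namespace and the VMW_PSP_RR plugin name; the device identifier is a made-up placeholder, and in practice you would take the real naa.* IDs from "esxcli nmp device list" on your own host.

```shell
#!/bin/sh
# Hypothetical host name -- substitute your own.
SERVER=ESX01

# Emit the command that sets the Round Robin path selection policy
# for one device ID passed as an argument.
rr_cmd() {
    printf 'esxcli --server %s nmp device setpolicy --device %s --psp VMW_PSP_RR\n' \
        "$SERVER" "$1"
}

# Placeholder device ID for illustration only.
rr_cmd naa.xxxxxxxxxxxxxxxx
```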

  1. #1 by Wade Kilgore on July 12, 2010 - 1:54 pm

    Great article, thanks for the information. One question that came to mind, if you are using multiple NICs on the vSwitch can those be re-enabled after you’ve bound the VMKernel port? For example, we’re wanting to bond 4 NICs to each ESXi host to the iSCSI SAN….if the NICs are set to unused on the switch will they actually be used or can they be put back to enabled after completing the above process?

    • #2 by Ben Karciauskas on October 19, 2010 - 8:36 pm

      Hi Wade. This reply is probably way too late, but for the record you shouldn’t, and don’t need to, set additional NICs on the iSCSI kernel ports as the native multipathing (NMP) takes care of balancing the load.

      Sorry for the late reply but I decided to finally take a look at this site after a few months of neglect and found your comment.

      Ben

  2. #3 by Raleigh Moody on August 6, 2010 - 7:11 pm

    You have an error in the CLI commands above:
    —————————————–
    vicfg-vmknic.pl -server <server name> -add -ip <IP address> -netmask <netmask> "<port group name>" --mtu 9000

    eg. vicfg-vmknic.pl -server ESX01 -add -ip 192.168.254.12 -netmask 255.255.255.0 "iSCSI_1" --mtu 9000
    —————————————–

    The above commands are missing the -p (or -portgroup) option flag in front of the portgroup name. Thus, these should be:
    —————————————–
    vicfg-vmknic.pl -server <server name> -add -ip <IP address> -netmask <netmask> -portgroup "<port group name>" --mtu 9000

    eg. vicfg-vmknic.pl -server ESX01 -add -ip 192.168.254.12 -netmask 255.255.255.0 -portgroup "iSCSI_1" --mtu 9000
    —————————————–

    –Raleigh

    • #4 by Ben Karciauskas on October 19, 2010 - 8:32 pm

      Thanks for the feedback Raleigh. I have updated the post.

  3. #5 by Jack on September 16, 2010 - 1:06 pm

    I can create VMKernel and then modify/update it to MTU=9000 just like what I did with vSwitch.

    See my post:
    http://www.modelcar.hk/?p=2736

    and also got feedback from EqualLogic:

    The document on the web site is the supported method of setting Jumbo Frames on the switch. This is the method that we have tested and confirmed to work.

    Of course, as with many things, there is typically a method of doing this through the GUI as well. The method you are following appears to work in my tests as well, but we cannot confirm if it is a viable operation as it has not been tested through our QA process.

    My suggestion would be to utilize the tested method. You may also want to check with VMware directly as it is possible that the GUI method you are utilizing simply calls the CLI commands we provide, but we cannot confirm that for certain (we do not have access to their code).

    Thanks!

    Name Removed
    Enterprise Technical Support Consultant
    Dell EqualLogic, Inc.

    • #6 by Ben Karciauskas on October 19, 2010 - 8:29 pm

      Thanks for the feedback, Jack. Whatever works, I guess 🙂

      Sorry for the late reply but I decided to finally take a look at this site after a few months of neglect and found your comment.

      Ben

  4. #7 by Fredrik Karlsson on October 19, 2010 - 7:05 am

    Great article! VERY helpful!
    Rgs,
    Fredrik

    • #8 by Ben Karciauskas on October 19, 2010 - 8:24 pm

      Glad to help 🙂

      Sorry for the late reply but I decided to finally take a look at this site after a few months of neglect and found your comment.

      Ben

  5. #9 by Lee on May 6, 2011 - 1:41 pm

    One slight correction, under point#3, you don’t want quotes around the port group name.

    You show:
    eg. vicfg-vmknic.pl -server ESX01 -add -ip 192.168.254.12 -netmask 255.255.255.0 -p “iSCSI_1” –mtu 9000

    But that will throw an error.

    Instead:
    eg. vicfg-vmknic.pl -server ESX01 -add -ip 192.168.254.12 -netmask 255.255.255.0 -p iSCSI_1 –mtu 9000
