Dell EqualLogic – vSphere iSCSI Setup

As a follow-up to my little EqualLogic deep dive, I’ll now take a look at the basic array setup as well as the vSphere iSCSI configuration to bring both worlds together.

Let’s start with a basic EqualLogic setup. When you configure the array for the very first time (via CLI or the Remote Setup Wizard), you need to set an array name, an IP address for eth0 on controller 0, a group name, and a group IP address. If the group already exists and is reachable you can join it; otherwise you can create a new one.

Important: Before running the initial setup, make sure eth0 is connected and has a link; otherwise the setup won’t complete.

Excerpt from the CLI based setup:

Enter the network configuration for the array.

Member name []: Array1
Network interface [eth0]:
IP address for network interface []:
Netmask []:
Default gateway []:
Initializing interface eth0.  This may take a minute…

Enter the IP address and name of the group that the array will join.

Group name []: vTricks
Group IP address []:
Searching to see if the group exists.  This may take a few minutes.

The group does not exist or currently cannot be reached. Make sure
you have entered the correct group IP address and group name.

Do you want to create a new group (yes | no) [yes]:

Group Configuration
Group Name:                     vTricks
Group IP address:

Do you want to use the group settings shown above (yes | no) [yes]:
Password for managing group membership:

Once this is done, you can access the web GUI via the just-configured group IP. There, for example, you could also change the group IP later if needed.

Then you can configure eth1, the second iSCSI interface, as well as eth2, the dedicated management port.

Important: The EqualLogic design requires all iSCSI interfaces to be connected to the same subnet, which implies that all participating switches must be stacked or interconnected.

In case of a PS6510X with 2 x 10GbE interfaces per controller, you only need two IP addresses per array. Due to the active/passive controller architecture you only need to configure the interfaces on controller 0; the passive controller will take over the MAC and IP addresses in case of a failover.

By the way, it doesn’t seem to be possible to configure the dedicated management port before the initial setup has been completed. This means that during the setup you should enter the iSCSI addresses you want to use later on.

As depicted above, the MTU size is always 9000, but the interfaces also support smaller MTU sizes, depending on the negotiation between the endpoints. If the EqualLogic detects that flow control is not enabled on the switch port, you will receive a warning. So I would recommend configuring all participating switch ports (including the vSwitch) accordingly. And don’t forget the R-STP configuration on the switches.
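A quick sketch of the host-side MTU commands (the vSwitch name, VMkernel interface names, and group IP are placeholders for this environment):

```shell
# Set jumbo frames on the iSCSI vSwitch and its VMkernel ports
# (vSwitch1, vmk1 and vmk2 are example names -- adjust as needed).
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
esxcli network ip interface set --interface-name=vmk2 --mtu=9000

# Verify the end-to-end jumbo MTU with a non-fragmenting ping:
# 8972 bytes payload + 28 bytes IP/ICMP headers = 9000 bytes.
vmkping -d -s 8972 <group-IP>
```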




Now let’s take a look at the vSphere part.

According to VMware KB 2038869, we have a setup where all VMkernel ports connect to a single target IP, which in our case is the SAN group IP we configured during the setup.

So we have to create a single vSwitch with at least two VMkernel ports. These need to be within the same subnet as the EqualLogic group and interface IP addresses.

To be able to use the VMkernel ports for port binding, you need to allocate a single physical NIC to each VMkernel interface. This can be achieved by moving all but one vmnic down to the unused adapters on the “NIC Teaming” tab.


In this case I need to do this twice, once for the VMkernel port “iSCSI_1” and a second time for “iSCSI_2”.

And don’t forget to set the MTU to 9000 on the vSwitch used for the iSCSI connections.
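The same vSwitch setup can be scripted from the ESXi shell. This is a sketch only; the vSwitch name, port group names, vmnics, and IP addresses are examples for this environment:

```shell
# Create the iSCSI vSwitch and attach two uplinks.
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

# One port group per VMkernel interface.
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI_1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI_2

# Create the VMkernel ports in the iSCSI subnet.
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI_1
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.0.21 --netmask=255.255.255.0 --type=static
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI_2
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.0.0.22 --netmask=255.255.255.0 --type=static

# Enforce the 1:1 mapping: each port group keeps exactly one active uplink.
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI_1 --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI_2 --active-uplinks=vmnic3
```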

You could also use a distributed switch instead of a standard vSwitch; this speeds up the setup if you have many hosts. But this requires an Enterprise Plus license, which I don’t have for this environment.

Then you should verify that the vmnics used for the VMkernel ports have flow control enabled. By default the physical vmnics are set to 802.3x flow control negotiation, so if the switch has flow control enabled, the vmnics should negotiate it as well.
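On the ESXi 5.x shell you can check the negotiated pause settings of an uplink with ethtool (vmnic2 is a placeholder):

```shell
# Show autonegotiated 802.3x flow-control (pause) parameters of an uplink.
ethtool -a vmnic2
```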


Once this is all done you can head on and configure the storage adapters.

In this case I’m using the dependent iSCSI initiator provided by the Broadcom BCM57810 adapters to leverage their TCP offload engines (TOE). This reduces the CPU overhead on the host caused by TCP/IP operations like checksum calculation.

You can identify the right vmhba (storage adapter) by its MAC address. Then you can move on to the properties to configure the VMkernel port binding.

On the “Network Configuration” tab you can add the previously configured VMkernel port. As you can imagine, you will only be able to add a single VMkernel port. This is because there is a 1:1 relationship between the VMkernel port/vmnic and the vmhba, which is in the end the same device.

On the “Dynamic Discovery” tab you have to enter the group IP.

This step needs to be done for every VMkernel port / vmhba you want to use. In this case I have to do this twice.
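The port binding and discovery steps can also be done via esxcli. The adapter names, VMkernel interfaces, and group IP below are placeholders for this environment:

```shell
# Bind each VMkernel port to its dependent iSCSI adapter
# (vmhba38/vmhba39 and vmk1/vmk2 are example names).
esxcli iscsi networkportal add --adapter=vmhba38 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba39 --nic=vmk2

# Add the EqualLogic group IP as a dynamic discovery (send target) address.
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba38 --address=10.0.0.10:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba39 --address=10.0.0.10:3260
```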

Alternatively you can also use the software iSCSI adapter. The process is pretty much the same, except that you have to add both VMkernel ports on the “Network Configuration” tab of the software iSCSI adapter.

Now you need something to present to your ESXi hosts. So create one or more volumes and configure the access control list accordingly.

You can provide access to the whole iSCSI network or to individual IQNs.
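For reference, a rough sketch of the same steps in the EqualLogic group CLI (reached via SSH to the group IP); the volume name, size, and initiator IQN are made-up examples:

```shell
# Create a volume and grant access to a single initiator IQN.
# (Names, size and IQN are placeholders for this environment.)
volume create vmfs-ds01 500GB
volume select vmfs-ds01 access create initiator iqn.1998-01.com.vmware:esx01-example
```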

Once the access configuration is done, you can rescan the storage adapters on your ESXi host(s) and they should discover the EqualLogic volumes.

Each vmhba should see one path to the EqualLogic SAN.

So if you have two controller interfaces and two VMkernel ports, you should end up with two paths for a single datastore.
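You can verify this from the ESXi shell as well:

```shell
# List the claimed devices (including their SATP/PSP) and all paths;
# with two bound VMkernel ports you should see two paths per device.
esxcli storage nmp device list
esxcli storage core path list
```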

On the EqualLogic array you can see that the group distributes the iSCSI sessions across the array interfaces.

If your hosts aren’t licensed with Enterprise Plus, you need to rely on the default vSphere path selection policies, which are set to Fixed (VMware) for the SATP “VMW_SATP_EQL” by default. I would recommend setting the default to Round Robin:

esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=VMW_SATP_EQL
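Note that the SATP default only applies to devices claimed after the change; volumes that are already present can be switched per device (the device ID is a placeholder):

```shell
# Switch an already-claimed EqualLogic device to Round Robin.
esxcli storage nmp device set --device=<naa-device-id> --psp=VMW_PSP_RR
```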




3 thoughts on “Dell EqualLogic – vSphere iSCSI Setup”

  • Chris

    You should install the Dell EqualLogic MEM VIB; this will enable the VAAI options in ESXi. You should also disable DelayedAck and consider increasing your iSCSI LoginTimeout from the default 5 to 30 or 60 to avoid a login storm during heavy iSCSI load (mainly during degraded operations such as the replacement of disks). Finally, if you have DCB-capable switches you should strongly consider this option.
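    On ESXi 5.1 and later, the login timeout mentioned above can be raised per adapter (vmhba38 is a placeholder):

    ```shell
    # Raise the iSCSI login timeout on an adapter from the default of 5 seconds.
    esxcli iscsi adapter param set --adapter=vmhba38 --key=LoginTimeout --value=60
    ```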

    • Patrick Post author

      Hey Chris

      Somehow I missed your comment, sorry for the late approval & response!
      You are pointing out some good tips here, thanks for that.


  • Donald Williams

    Hello there. Nice write-up. There is a PDF from Dell, TR1091, that covers how to configure ESXi for best performance, e.g. tuning the Round Robin IOs per path to three for better port usage.
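    The Round Robin tuning mentioned above can be applied per device like this (the device ID is a placeholder):

    ```shell
    # Send 3 IOs down each path before switching, instead of the default 1000.
    esxcli storage nmp psp roundrobin deviceconfig set --device=<naa-device-id> --type=iops --iops=3
    ```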

    TR1049 shows how to connect ESXi to EQL as well.

    For VMware, especially clusters, I find that using CHAP is the best way to set the volume ACL. This way you only have to set it once on the array. Adding a new node just means adding the CHAP credentials, versus having to go to each volume and add another ACL entry. The old ACL entries have a limit of 16 per volume; after that you must use the Access Policy feature. Another reason I like using CHAP: the CHAP exchange is hashed, so the secret is never sent in the clear, which helps prevent spoofing.

    When using the Broadcom (now QLogic) iSCSI HBAs, it is important to make sure the firmware is up to date and matches the driver version. Older firmware didn’t support jumbo frames, for example, or would cause instability and dropped connections.

    Also, “more” smaller volumes work better than one or two mega-sized volumes, especially since in ESXi clusters there are times when a node will exclusively reserve a volume. If you have other unlocked volumes, IO will continue as normal on those.

    In the comments, Chris mentioned installing MEM to enable VAAI. MEM (Multipathing Extension Module) is an add-on for MPIO only; it does not enable or disable VAAI. VAAI is an API for enhanced storage integration. In order to use both with ESXi 4.1–5.5 you must have an Enterprise or Enterprise Plus license; that’s a VMware requirement. With ESXi 6.x that moved down to the Standard license level.
    VAAI is negotiated at connection time, and the various features of VAAI are negotiated one by one. Not all storage devices support all the features of VAAI. Dell EQL and CML do.