As a follow-up to my little EqualLogic deep dive, I’ll now take a look at the basic array setup as well as the vSphere iSCSI configuration to bring both worlds together.
Let’s start with a basic EqualLogic setup. When you configure the array for the very first time (via CLI or the Remote Setup Wizard), you need to set an array name, an IP address for eth0 on controller 0, a group name, and a group IP address. If the group already exists and is reachable, you can join it; otherwise you can create a new one.
Important: Before running the initial setup, make sure eth0 is connected and has a link, otherwise the setup won’t complete.
Excerpt from the CLI based setup:
Enter the network configuration for the array.
Member name : Array1
Network interface [eth0]:
IP address for network interface : 192.168.100.11
Default gateway [192.168.100.1]:
Initializing interface eth0. This may take a minute…
Enter the IP address and name of the group that the array will join.
Group name : vTricks
Group IP address : 192.168.100.10
Searching to see if the group exists. This may take a few minutes.
The group does not exist or currently cannot be reached. Make sure
you have entered the correct group IP address and group name.
Do you want to create a new group (yes | no) [yes]:
Group Name: vTricks
Group IP address: 192.168.100.10
Do you want to use the group settings shown above (yes | no) [yes]:
Password for managing group membership:
Once this is done, you can access the web GUI via the group IP you just configured. There you could, for example, also change the group IP later if needed.
Important: The EqualLogic design requires all interfaces to be connected to the same subnet, which also means that all participating switches must be stacked or interconnected.
In the case of a PS6510X with 2 x 10GbE interfaces per controller, you only need two IP addresses per array. Due to the active/passive controller architecture, you only need to configure the interfaces on controller 0; the passive controller will take over the MAC and IP addresses in case of a failover.
By the way, it doesn’t seem to be possible to configure the dedicated management port before the initial setup has been completed. This means that during the setup you have to enter the iSCSI addresses you want to use later on.
As depicted above, the MTU size is always 9000, but the interfaces also support smaller MTU sizes, depending on the negotiation between the endpoints. If the EqualLogic detects that flow control is not enabled on the switch port, you will receive a warning, so I would recommend configuring all participating switch ports (including the vSwitch) accordingly. And don’t forget the R-STP configuration on the switches.
Now let’s take a look at the vSphere part.
According to VMware KB 2038869, we have a setup where all VMkernel ports connect to a single target IP, which in our case is the SAN group IP we configured during the setup.
So we have to create a single vSwitch with at least two VMkernel ports. These need to be in the same subnet as the EqualLogic group and interface IP addresses.
To be able to use the VMkernel ports for iSCSI port binding, you need to allocate a single physical NIC to each VMkernel interface. This can be achieved by moving all but one vmnic down to the unused adapters on the “NIC Teaming” tab.
In this case I need to do this twice, once for VMkernel port “iSCSI_1” and a second time for “iSCSI_2”.
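The same teaming override can be scripted from the ESXi Shell. This is just a sketch; the port group names match my setup, while the vmnic names (vmnic2/vmnic3) are example assumptions you’ll need to adapt:

```shell
# Make a single vmnic the only active uplink per iSCSI port group
# (example vmnic names - adjust to your host).
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name=iSCSI_1 --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name=iSCSI_2 --active-uplinks=vmnic3
```

Afterwards it’s worth double-checking in the vSphere Client that the other vmnic really ends up under “Unused adapters” (not standby), since port binding requires exactly one active uplink and no standby uplinks.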
And don’t forget to set the MTU to 9000 on the vSwitch used for the iSCSI connections.
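The MTU can also be set from the CLI; note that it has to match on the vSwitch and on each VMkernel interface. The vSwitch and vmk names below are examples from my lab:

```shell
# Jumbo frames on the iSCSI vSwitch (example name vSwitch1)
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# And on every iSCSI VMkernel interface (example names vmk1/vmk2)
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
esxcli network ip interface set --interface-name=vmk2 --mtu=9000
```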
You could also use a distributed switch instead of a standard vSwitch, which will speed up the setup if you have many hosts. But this requires an Enterprise Plus license, which I don’t have for this environment.
Then you should verify that the vmnics used for the VMkernel ports have flow control enabled. By default, the physical vmnics are set to 802.3x flow control negotiation, so if the switch has flow control enabled, the vmnics should have it enabled as well.
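A quick way to check the negotiated pause settings from the ESXi Shell (vmnic name is an example; on newer releases `esxcli network nic pauseParams list` provides similar information):

```shell
# Show the current flow control (pause) parameters of an uplink
ethtool -a vmnic2
```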
Once this is all done you can head on and configure the storage adapters.
In this case I’m using the dependent iSCSI initiator provided by the Broadcom BCM57810 adapters to leverage their TCP offload engines (TOE). This reduces the CPU overhead on the host caused by TCP/IP operations like checksum calculation.
On the “Network Configuration” tab you can add the previously configured VMkernel port. As you can imagine, you will only be able to add a single VMkernel port. This is because there is a 1:1 relationship between the VMkernel port/vmnic and the vmhba, which is in the end the same device.
On the “Dynamic Discovery” tab you have to enter the group IP.
This step needs to be done for every VMkernel port / vmhba you want to use. In this case I have to do this twice.
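Both steps (port binding and dynamic discovery) can also be done via esxcli. The vmhba and vmk names are examples for the two dependent adapters in my host; the group IP is the one from the setup above:

```shell
# Bind each iSCSI VMkernel port to its dependent iSCSI vmhba
# (example adapter/vmk names - check "esxcli iscsi adapter list").
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba34 --nic=vmk2

# Add the EqualLogic group IP as a send target on each adapter
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.100.10
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba34 --address=192.168.100.10
```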
Alternatively, you can also use the software iSCSI adapter. The process is pretty much the same, except that you have to add both VMkernel ports on the “Network Configuration” tab of the software iSCSI adapter.
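For the software iSCSI variant, the CLI equivalent looks like this; the vmhba name is an example, since the adapter name is assigned by the host when the initiator is enabled:

```shell
# Enable the software iSCSI initiator and find its adapter name
esxcli iscsi software set --enabled=true
esxcli iscsi adapter list

# Bind BOTH iSCSI VMkernel ports to the single software adapter
# (example name vmhba37)
esxcli iscsi networkportal add --adapter=vmhba37 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba37 --nic=vmk2
```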
You can provide access to the whole iSCSI network or to individual IQNs.
Once the access configuration is done, you can rescan the storage adapters on your ESXi host(s) and they should be able to discover the EqualLogic volumes.
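The rescan can be triggered from the ESXi Shell as well, followed by a quick look at the discovered devices:

```shell
# Rescan all storage adapters on the host
esxcli storage core adapter rescan --all

# List the discovered devices; EqualLogic volumes show up with vendor EQLOGIC
esxcli storage core device list
```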
In case your hosts aren’t licensed with an Enterprise Plus license, you need to leverage the default vSphere path selection policies; for the SATP “VMW_SATP_EQL” the default is Fixed (VMware). I would recommend setting the default to Round Robin:
esxcli storage nmp satp set --default-psp "VMW_PSP_RR" --satp "VMW_SATP_EQL"
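Keep in mind that changing the default PSP only affects devices claimed afterwards; already-claimed volumes keep their current policy until they are reclaimed or set individually. You can verify the result like this (the naa device ID is a placeholder):

```shell
# Check the default PSP per SATP
esxcli storage nmp satp list

# Check which PSP each device is actually using
esxcli storage nmp device list

# Switch an already-claimed device manually (example device ID)
esxcli storage nmp device set --device naa.6090a0123456789012345678901234ab --psp VMW_PSP_RR
```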