In my last post about Dell’s EqualLogic series I wrote about the vSphere iSCSI setup based on VMware’s default path selection policies. This time I want to add some details about the behavior of these default PSPs, as well as a short how-to on Dell’s Multipathing Extension Module.
But first let’s take a step back. Last time I used two dependent iSCSI adapters to leverage their TCP Offload Engine (TOE). What I’m going to describe is independent of whether you use the software iSCSI initiator or the dependent adapters.
I almost forgot to mention that in this case we have two PS6510X arrays within a single pool, each with two active iSCSI interfaces. So all volumes are distributed 50/50 across both arrays.
However, in case you already configured your environment with one of VMware’s default path selection policies, you should keep an eye on the behavior of your ESXi hosts, because at this point VMware can only use the paths (iSCSI sessions) which the group reports back.
So far there were three scenarios I was able to observe:
A) Two paths leading to two different arrays and interfaces (e.g. eth0 on Array1 and eth1 on Array2)
So depending on the scenarios I just mentioned:
1) You are connected to both arrays, but then there is no path redundancy. If a link goes down, the volume stays online over a single path, which leads us directly to case 2)
2) You have no simultaneous connections to both arrays. When you write to a distributed volume, the array that receives the I/O needs to forward it to the second array.
Yes, I admit this sounds a bit weird, but it is how it is 🙂
While I was performing some IOmeter benchmarks I observed the SAN with Dell’s SAN HQ monitoring software. Even though the ESXi host was connected to only a single array, both arrays showed up as processing 50% of the I/Os. I guess that’s just the way SAN HQ displays the I/O stats, whereas in theory one array has to process 100% of the I/Os and forward 50% of them to the second array. I have no insights into what’s going on under the hood of an EqualLogic SAN group, so it’s just an assumption at this point.
At least we can define a short rule of thumb:
Number of VMkernel ports = number of paths / iSCSI sessions
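You can verify this rule of thumb directly on a host by comparing the VMkernel ports bound to the iSCSI adapter with the paths reported for a volume. A minimal sketch — the vmhba number and the naa device ID below are just placeholders for your environment:

```shell
# List the VMkernel ports bound to the iSCSI adapter (adapter name varies per host)
esxcli iscsi networkportal list --adapter=vmhba38

# List the paths for one of the EqualLogic volumes (device ID is a placeholder)
esxcli storage core path list --device=naa.6090a0XXXXXXXXXXXXXXXXXXXXXXXXXX
```

With the default PSPs, the number of paths per volume should match the number of bound VMkernel ports.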
But what can we do to add some more intelligence to our virtual environment?
The answer can be given as fast as it is implemented:
Dell’s MEM (Multipathing Extension Module), which is designed for VMware’s Pluggable Storage Architecture. This plug-in is able to talk to the SAN group and add additional iSCSI sessions as needed, so I/Os can be routed more intelligently; for example, they will be processed based on the least queue depth.
To be able to use the MEM you need two things:
- At least an Enterprise license for your ESXi hosts, to be able to leverage the storage APIs, and
- An account for eqlsupport.dell.com, to be able to download the plug-in
The deployment is pretty easy:
Download the latest MEM plug-in.
Extract the files from the downloaded zip “EqualLogic-ESX-Multipathing-Module.zip”
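The extracted archive contains an offline bundle that can be installed directly on the host. A minimal sketch — the bundle file name depends on the MEM and ESXi version you downloaded, and the path below is an assumption:

```shell
# Put the host into maintenance mode before installing the module
esxcli system maintenanceMode set --enable=true

# Install the MEM offline bundle (file name/path are placeholders for your download)
esxcli software vib install --depot=/tmp/dell-eql-mem-bundle.zip

# Leave maintenance mode once the installation is done
esxcli system maintenanceMode set --enable=false
```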
The installation will take a little while, but once it is done the host will claim all “VMW_SATP_EQL” volumes with the new PSP “DELL_PSP_EQL_ROUTED”.
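To check that the claiming worked, you can look at the default PSP registered for the EqualLogic SATP and at the devices themselves. A quick sketch (the grep patterns just narrow the output):

```shell
# Show the default PSP now assigned to the EqualLogic SATP
esxcli storage nmp satp list | grep VMW_SATP_EQL

# Each EqualLogic volume should now report DELL_PSP_EQL_ROUTED as its PSP
esxcli storage nmp device list | grep -B1 "Path Selection Policy:"
```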
This will double the number of available paths without any further configuration steps.
We can see that the host is now logged in on all available interfaces. This is what we usually want to see.
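The additional logins can also be confirmed from the command line; with the MEM in place, the session list should show connections to the interfaces of both arrays. The adapter name below is again a placeholder:

```shell
# List the active iSCSI sessions for the adapter (adapter name varies per host)
esxcli iscsi session list --adapter=vmhba38
```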
By default the MEM establishes two sessions per volume and array which leads to the exact same scenario depicted above. You can also say it establishes two sessions per slice, which is the part of a volume that is stored on a single array.
As I said, I was performing some benchmarks, so I ran the same tests before and after installing the MEM: the maximum IOPS increased from ~13k to ~14.5k.
So at this point I can only recommend using the MEM plug-in if possible; it will add more intelligence to the I/O handling, and both performance and reliability will benefit from it. Or at least use some more VMkernel ports to get more sessions/paths to use with VMware’s default PSPs.