Dell EqualLogic – A closer look *updated*


At the moment I have the opportunity to set up two Dell EqualLogic PS6510X arrays, and I'm sure I'm not the only one who compares the capabilities and features of a new system with those of one he is very familiar with. So naturally I compared it with DataCore's SANsymphony-V. Admittedly, SANsymphony-V is a piece of software, or rather a storage hypervisor, whereas Dell offers a solution that bundles storage hardware and virtualization in one box, but that's a typical competitive situation out on the market.

So in the following article I want to provide you with some technical details about Dell's EqualLogic storage systems, as well as a comparison with DataCore's storage hypervisor.

Dell's EqualLogic storage systems come with built-in storage virtualization based on the Fluid Data Architecture. EqualLogic is not the only line that makes use of this architecture; Dell's complete storage portfolio (Compellent, PowerVault, and the DR and DX systems) contains the Fluid Data DNA. This virtualization layer enables core technologies like pooling, automated tiering and dynamic provisioning. More details about the Fluid Data Architecture can be found here.

The EqualLogic arrays are pure iSCSI solutions with support for 1 and 10 GbE Ethernet. One thing I noticed when I was looking through the specs was the huge number of different models and configurations. The reason for this is that you can't buy an array and pack it with disks at will. All EqualLogic models come with a predefined type and number of disks.

PS-series array models

Let me provide you with an excerpt from the configuration guide to give you an overview of the available arrays:
Pre_PS4100_PS6100 PS4100_PS6100
Dell EqualLogic Configuration Guide v14.1, pages 1-2 and 1-3

As you can see, there are a lot of different models and predefined configurations, and there is not much to say about that: you need to find an array/configuration that meets your needs, or at least one that comes close. You won't be able to add something like an expansion enclosure to a PS-series array; you can only add additional EqualLogic arrays to the existing "SAN group". This is how the EqualLogic line scales out.

 

Storage controller

Since we are already at the hardware layer, let's take a look at the storage controller.

All EqualLogic arrays with a dual-controller configuration run in active/passive mode. The second controller is just there for failover purposes.

ActivePassive

"The single controller configuration will provide the same level of I/O performance as a dual controller configuration." – Quote from the Dell EqualLogic Configuration Guide v14.1, page 1-11

While all links/ports of the standby controller are electrically inactive and only get activated during a failover, the cache is constantly mirrored between the controllers. To avoid data loss due to a power outage, the cache can be written to non-volatile memory (cache to flash).

The cabling depicted in the screenshot above already hints at the failover behavior. The EqualLogic arrays use vertical port failover.

VerticalPortGroups

Dell EqualLogic Configuration Guide v14.1 Page 1-13

Two ports, for example ETH0 on CM0 and ETH0 on CM1, always form a single logical port. So you should follow the cabling recommendations and connect the vertical port members to different switches. A link down on a single interface will trigger a vertical port failover for the affected logical interface.

In this case both controllers receive I/O, but only one is able to process it, so I/O is internally re-routed to the active controller.

###UPDATE###

I just realized that this only applies to the controllers of PS4100/PS6100 arrays, so for example NOT to a PS6510X. For arrays prior to the PS4100/PS6100 series, a link down will force the host to log in to the group again, which will then redirect the session to another available interface. If all links go down (but not the controller itself), the array will be offline!

###/UPDATE###

You even need only two IP addresses for both controllers: you just have to configure the interfaces on controller 0, and in case of a failover the standby port takes over the MAC address as well as the IP address.

The cache size varies between the different array models: 1, 2 or 4 GB per controller. In the case of the PS6510X, each controller offers 2 GB of cache.

 

Storage virtualization

Now that we know some of the basics, we can take a look at a more interesting part: storage groups, pools and synchronous replication.

The top-level instance within an EqualLogic setup is the "SAN group", which in essence is your SAN, comprised of at least one EqualLogic PS series array. Each array within a group is called a member, and each member is part of a pool. Each pool can have up to 8 members and there can be up to 4 pools per group. The maximum number of arrays per group is 16. Well, enough numbers for the moment. 🙂

A volume within a pool can be provided by a single member or by multiple members, as long as the members use the same RAID policy. So volumes are either bound to a single array or spanned across multiple arrays. But even though a pool can have 8 members, a single volume can only be distributed across a maximum of 3 arrays.

Each array in a group is always configured with a single (!) RAID policy. This means your complete array runs a specific RAID level; there is no way to combine different RAID types within a single array. Having just one RAID set wouldn't make much sense, so the array creates multiple RAID sets of the same type based on a predefined schema that depends on the number of disks. Here is an example of RAID 50 on a PS6510X with 48 disks:

2 hot spare disks + (6+1) + (6+1) + (6+1) + (6+1) + (6+1) + (6+1) + (3+1) = 48 disks in total

Unfortunately, this leads to a setup where one of the RAID sets contains fewer disks than the others and therefore doesn't provide the same level of performance.
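Put into numbers, based on the 48-disk RAID 50 layout above:

data disks: 6 x 6 + 3 = 39
parity disks: 6 + 1 = 7
hot spares: 2
total: 39 + 7 + 2 = 48

So roughly 39 of 48 spindles (about 81%) carry data, and the (3+1) set has only half the data spindles of the other sets, which explains the performance imbalance.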

With APLB (more about that in a second) you can have member arrays with different RAID configurations to provide different storage tiers to a single volume.

Vol_Pool_Rel

To be able to provide multiple storage tiers within your pool, you can mix different PS-series arrays. As you can imagine, if one of the arrays in a pool goes down, all volumes that leverage storage resources of that array will be offline.

 

Load balancers

But how do the application hosts access a volume that is spanned across multiple arrays?

First of all, there is a single IP address for the complete SAN group which is accessible by the hosts. You don't enter the IPs of individual storage arrays into the host's iSCSI initiators; the iSCSI initiator(s) always connect to the group IP, and the integrated load balancers take care of the rest.
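On an ESXi host, for example, this simply means adding the group IP as a dynamic discovery (send targets) address on the software iSCSI adapter. A minimal sketch, where vmhbaXX and 10.10.10.10 are placeholders and not values from this setup:

# Add the EqualLogic group IP as a send-target address
esxcli iscsi adapter discovery sendtarget add --adapter=vmhbaXX --address=10.10.10.10
# Rescan the adapter so the sessions redirected to the individual members show up
esxcli storage core adapter rescan --adapter=vmhbaXX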

And it's not only the network connections that need to be managed, but also the disk resources, including I/O. Therefore the PS series runs up to three load balancers to automate the management of its resources:

NLB (Network Load Balancer): Manages the network interfaces and the assignment of individual iSCSI connections to pool members. NLB also keeps an eye on the interface utilization of the pool members and, if needed, redirects iSCSI connections to spread the load evenly across all interfaces.

CLB (Capacity Load Balancer): Manages the disk capacity inside the pool so that all storage resources are used evenly. This helps avoid overloading a single storage resource (array). If a pool contains different RAID configurations, it's possible to set a preferred RAID type for a volume, and CLB attempts to honor that preference.

APLB (Automatic Performance Load Balancer): Manages the distribution of pages (blocks) within a pool. This load balancer works primarily based on latency metrics: it steps in when it detects that one pool array has to handle a disproportionate share of the workload, or when it sees high latencies on a pool member (> 20 ms). So-called pages (15 MB blocks) with high I/O demands are migrated to the disks with the highest I/O capabilities, whereas inactive data (like that belonging to snapshots) can be moved to low-cost disks.

There are some scenarios in which APLB doesn't move data around, for example if all members of the storage pool are showing low latencies, or if a user has chosen to override the automatic placement of volumes and pin them to a specific tier.

The load balancers also enable features like seamlessly moving volumes between pools. More details about the load balancers can be found here.

At this point I don't want to dive too deep into things like thin provisioning, snapshots and clones, in order to limit the length of this post.

 

Replication

No doubt every modern array is capable of providing some form of replication, even if it's just asynchronous replication. The technical hurdles to implement such replication are rather low.

It gets more complicated when you want a synchronously replicated storage system, and that is what I want to focus on for now.

Dell calls this feature "SyncRep"; it writes data to two different pools within the same group. So you end up with two independent copies of a volume, which doubles the storage requirement compared to a normal volume. To avoid data loss or corruption, each array has to acknowledge the incoming I/O before the application host gets the final OK for the write.

Of course there are some general and EqualLogic-specific requirements, which can be found in the configuration guide; here is an excerpt:

  • PS Series Array firmware v6.0.0 or higher
  • Two pools, each containing at least one array member
  • Since Synchronous Replication works between two pools in the same PS Series group, all members must be in the same VLAN/subnet; a "flat network" is required
  • Volumes cannot be configured for both traditional replication and Synchronous Replication
  • Supports up to 32 SyncRep-enabled volumes in a PS Series group (PS4xx0 supports up to 4 SyncRep volumes)
  • SyncRep is not available for volumes bound to members

The SyncRep feature works on a per-volume basis. The volumes that are mirrored synchronously are stored in two different pools. The primary pool is called "SyncActive"; this is the pool the application hosts are connected to. The second one is the "SyncAlternate" pool, which is in standby mode. The EqualLogic SyncRep feature doesn't offer active/active volumes that can be accessed/altered simultaneously in both pools. So to leverage both pools actively, the SyncRep-enabled volumes need to be spread 50/50 across both pools. This matters especially in stretched cluster scenarios, where you want to serve reads locally and not across the datacenter interconnect; writes, of course, have to travel across the interconnect at least once. By the way, neither Dell's EqualLogic nor SANsymphony-V is a certified Metro Cluster Storage solution.

So this is what the I/O path looks like:
SyncRep_Vol_Pool_Rel_2

  1. Application host sends I/O to the SAN group (single IP address)
  2. Data is simultaneously written to both pools
  3. The SyncActive and SyncAlternate acknowledge the write to the group
  4. Host receives acknowledgement from SAN group

This is how writes are processed as long as the pools are in sync. This leads us to the states SyncRep can be in:

In-Sync: Both the SyncActive and the SyncAlternate pool contain the same data.

Paused: Hosts can still access the SyncActive pool, but the pools are out of sync until replication is resumed and the SyncAlternate pool catches up with the SyncActive pool.

Out-of-Sync: Since only the SyncActive pool is being accessed, it holds the most recent data, whereas the SyncAlternate pool can't keep up for some reason and lags behind.

So what happens if a pool isn't available?

A)    As long as both pools are in sync, the pool relationship can be swapped. So the SyncActive becomes the SyncAlternate pool and vice versa. This process is transparent to the application hosts.

B)    If the SyncAlternate pool is offline, hosts can process I/Os as usual to the SyncActive pool. The group will track changes to re-write them to the SyncAlternate pool when it comes back online.

C)    If the SyncActive pool is offline, a MANUAL failover is required. A failover can only be performed if a fault has occurred and SyncRep is in the out-of-sync state. This manual failover is not transparent to the application hosts.

More details about the replication can be found in the paper "TR1085-Synchronous-Replication-v1.0".

 

vSphere 

Multipathing Extension Module (MEM)

The EqualLogic arrays come with a third-party multipathing plug-in for VMware's Pluggable Storage Architecture. This plug-in improves load balancing as well as connection management. To be able to install the plug-in, a vSphere Enterprise or Enterprise Plus license is required! If the environment isn't licensed that way, you need to fall back on the standard vSphere path selection policies.
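Without MEM, Round Robin is a common choice instead of the default path policy. A hedged example; the device identifier below is just a placeholder for an EqualLogic volume on the host:

# Switch a single volume to Round Robin and verify the setting
esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx --psp=VMW_PSP_RR
esxcli storage nmp device list --device=naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx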

MEM is not a requirement for SyncRep to work.

Once installed, the EqualLogic MEM opens iSCSI sessions to each member that provides resources to a volume and intelligently routes I/O to the member best suited to handle the request.
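Whether MEM has actually claimed the EqualLogic volumes can be checked on the ESXi host itself; a quick sketch (the exact name of the Dell PSP depends on the MEM version, so treat it as an assumption):

esxcli storage nmp device list
# For each EqualLogic volume, the "Path Selection Policy" line should show the PSP installed by MEM
# instead of one of the native VMware policies (VMW_PSP_FIXED, VMW_PSP_MRU, VMW_PSP_RR).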

VMware usually allows five seconds for an iSCSI session to log in to an iSCSI target; Dell recommends increasing the iSCSI login timeout value to 60 seconds. This can be done via:

esxcli iscsi adapter param set --adapter=vmhbaXX --key=LoginTimeout --value=60
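To verify the new value afterwards (vmhbaXX again being a placeholder for the software iSCSI adapter):

esxcli iscsi adapter param get --adapter=vmhbaXX | grep LoginTimeout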

More details as well as the corresponding ESXi patch can be found in KB2009330

Here is a screenshot of the latest VMware HCL entry (starting with firmware 6.x):

VMwareHCL

In addition to the supported MEM plugin, you can see that the EqualLogic arrays support all VAAI primitives.
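Whether the primitives are actually active for a given volume can also be checked on an ESXi host; the device identifier below is again just a placeholder:

esxcli storage core device vaai status get --device=naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Lists the ATS, Clone, Zero and Delete status for the device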

And last but not least, Dell provides the Virtual Storage Manager, which allows the EqualLogic administrator to manage the SAN from within vSphere. Some of the advantages of good vSphere integration are the ability to take VM/application-consistent snapshots of storage volumes and an orchestrated process for deploying new datastores.

More details can be found in "EqualLogic Virtual Storage Manager: Installation Considerations and Datastores Manager".

Unified Storage (CIFS/NAS)?

To round off this post, here is just a brief look at the unified storage part. Dell offers a complementary solution called "EqualLogic FSxxxx", which comes in the form of an appliance based on standard Dell server hardware. These appliances leverage the Dell Fluid File System (FluidFS) v2 to provide highly available network-attached storage. More details about the FS series can be found here.

I hope you enjoyed this EqualLogic deep dive. Please forgive the number of links to Dell documents and my own posts, but putting everything into a single post would have been way too much. 🙂
