Fibre Channel


HP StorageWorks 8/8 – Firmware Upgrade to v7.1.0a

Today I had to upgrade some HP StorageWorks 8/8 Fibre Channel switches. These are just an OEM version of Brocade’s 300 series. The switches were running FOS v6.4.3 and we planned to upgrade to v7.1.0a. Fortunately, you can get the latest firmware quite easily from the HP website.

As far as I know, all Brocade switches have two firmware partitions, a primary and a secondary one. The primary partition is the one the system boots from, while a new firmware can be downloaded to the secondary partition at any time. Usually the system keeps both partitions in sync to be able to perform so-called high availability reboots, where the system swaps the partitions and reinitializes FOS with the new firmware.
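If I remember correctly, you can check at any time which firmware both partitions hold with the “firmwareshow” command; the output looks roughly like this (abbreviated and only meant as an illustration):

    switch:admin> firmwareshow
    Appl     Primary/Secondary Versions
    ------------------------------------------
    FOS      v6.4.3
             v6.4.3

As long as both partitions show the same release, they are in sync.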

Usually you have two options to upgrade your switch:

  • Web interface
  • CLI

For both methods you can choose whether to load the firmware from a USB stick (Brocade-branded only) or via FTP/SCP.

In this case we upgraded from v6.4.3 to v7.1.0a, skipping the v7.0.x releases, which means the upgrade cannot be done non-disruptively:

“Any 8 Gb/s or 16 Gb/s platform running any Fabric OS 7.0.0x release can be non-disruptively upgraded to Fabric OS 7.1.0a.”

“Any 8 Gb/s or 16 Gb/s platform running Fabric OS 7.0.1x or 7.0.2x release can be non-disruptively upgraded to Fabric OS 7.1.0a.”

“Any 8 Gb/s platforms (other than HP StorageWorks DC SAN Backbone Director Switch/HP StorageWorks DC04 SAN Director Switch) operating at Fabric OS 6.4.1a or later must be upgraded to Fabric OS 7.0.0x or later before non-disruptively upgrading to Fabric OS 7.1.0a.”

You need to run the “firmwaredownload” command with the “-s” parameter, which explicitly downloads the firmware only to the secondary partition.

(Screenshot: firmwaredownload -s session)
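Just to illustrate the procedure, the download to the secondary partition looks something like the following (the FTP/SCP details are of course specific to your environment, and the exact prompts differ between FOS releases):

    switch:admin> firmwaredownload -s
    ... answer the prompts with your FTP/SCP server, user, password and the path to the extracted v7.1.0a directory ...
    switch:admin> firmwaredownloadstatus
    ... wait until the download to the secondary partition has completed ...
    switch:admin> firmwareshow
    ... the secondary partition should now show v7.1.0a ...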

Once the firmware has been downloaded to the secondary partition you have to perform a reboot, and as depicted in the screenshot you will need to commit the firmware via the “firmwarecommit” command, or by enabling “Auto-Commit”. Whereas the reboot disrupts the FC connections, the commit only updates the secondary partition with the new firmware and is non-disruptive.

Instead of committing the firmware immediately, it is possible to wait a few days and keep the old firmware on the secondary partition. In case of problems you can then execute a “firmwarerestore” to quickly roll back to the previous version. Once the firmware has been committed, you instead need to re-run firmwaredownload with the release you want to downgrade to.
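Put together, the part after the download essentially boils down to this (just a sketch based on the commands mentioned above):

    switch:admin> reboot             (activates the new firmware, disrupts the FC connections)
    switch:admin> firmwareshow       (verify which release each partition now holds)
    switch:admin> firmwarecommit     (happy with v7.1.0a: copy it to the other partition)
    switch:admin> firmwarerestore    (not happy: roll back to the previous release instead of committing)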

Before starting with upgrades or downgrades, you might want to check the relevant documents and requirements, in particular the FOS release notes and the supported upgrade paths.

That’s it for today.

 

 

 


Nexsan – The E-Series & latest Firmware enhancements

In the last couple of months I had the chance to work more with Nexsan’s E-Series arrays, especially in combination with DataCore. So I would like to give you a brief introduction to Nexsan’s latest storage portfolio and what they can do for you.

Let’s start with a closer look at the E-Series; in future posts we will also look at the other series in more detail, especially the NST series.

Nexsan’s portfolio basically contains three lines of storage systems which cover several use cases.

 

  • NST5000 – Unified: CIFS, NFS and iSCSI block
  • Assureon – Archive: secure archive for non-changing files, CAS-based
  • E-Series – Block: Fibre Channel, iSCSI and SAS

So let’s take a look at the specs and the latest feature set of the E-series.

                      NEXSAN E18™      NEXSAN E48™      NEXSAN E60™
Controllers           Dual active/active controllers with dual RAID engines per controller (all models)
Density               2U – 18 disks    4U – 48 disks    4U – 60 disks
Available drives      1 / 2 / 3 / 4 TB SATA, 450 / 600 GB 15K SAS, 100 / 200 / 400 GB SSD (all models)
Capacity in chassis   72 TB            192 TB           240 TB
Expansion chassis     E18X             E48X             E60X
With 1 expansion      144 TB           384 TB           480 TB
With 2 expansions     216 TB           576 TB           720 TB
Host I/O ports        E18: 4 x 1 GbE iSCSI + 0-4 x 1 GbE iSCSI, 8 Gb FC, 10 GbE iSCSI or 6 Gb SAS
                      E48/E60: 4 x 1 GbE iSCSI + 0-8 x 1 GbE iSCSI, 8 Gb FC, 10 GbE iSCSI or 6 Gb SAS

As you can see in the spec overview, the Nexsan E-Series is equipped with two fully redundant active/active controllers. In case of a replacement you don’t need to make any configuration or zoning changes, simply because the controller configuration is replicated between the controllers and the RAID arrays. The new controller will therefore instantly continue to work with the exact same FC/WWN and IP configuration.

With the latest enhancements Nexsan was able to further improve performance by unlocking an additional CPU core, which provides more computing resources as well as more PCIe lanes, allowing up to 8 host ports per controller. As you can imagine, the additional CPU power improves throughput by up to 50%, and it also adds up to 10% more performance in terms of IOPS.

Each array can be extended with two additional chassis, each holding the same number of drives as the main array. With the E60 you can grow up to 720 TB in just 12U, which is very interesting if high capacity as well as density is required. Or, if you are rather looking for I/O density, you can go with a bunch of SSDs and 15k SAS drives instead.

Now let’s check out where and how you can benefit from Nexsan’s storage arrays in particular. With the increased number of host ports you are more flexible when it comes to direct-attached storage for your application hosts, like your virtual infrastructure or DataCore layer.

One major enhancement is that all arrays have now achieved the VMware Ready certification, offering full VAAI support:

  • Block Zero
  • Full Copy
  • ATS – Hardware-Assisted Locking
  • Thin Provisioning – UNMAP

But in most cases I’ve seen Nexsan E-Series arrays in combination with DataCore, because they complement each other very well. Especially in environments where you need to work with different storage tiers, you can make use of the good performance and the capability of mixing multiple disk types within one array. As you probably know, I’m a fan of the DataCore solution, so let me give you a short explanation of how this works together.

Basically it starts with the creation of an array, also known as a RAID set or RAID group. The array defines the physical layout and the RAID/protection level you would like to use, for example a RAID10 with 15k SAS drives. On these arrays you then create volumes/LUNs, which is Nexsan’s approach of flexible volumes that separate the physical RAID layer from the logical volume layer, for example when it comes to operations like extending volumes. In theory a single array can hold multiple volumes, but I prefer a 1:1 mapping between arrays and volumes; this prevents one volume from affecting the performance of other volumes on the same array. DataCore in the end will see the volumes you’ve created, and you can take them and add them to any disk pool you like.

Now imagine you need to provide multiple storage tiers to your users or clients: you can easily achieve this by combining different disk drives in your Nexsan storage system with DataCore’s Auto-Tiering feature. Simply create multiple arrays based on different disk drives (SSD, SAS or SATA) and put them into your Auto-Tiering disk pool(s).

Maybe you want to start with some flash-based storage and combine it with some fast RAID10 15k SAS drives. Then you could add some RAID5 SAS arrays or some SATA storage, and just let the storage hypervisor do the work for you and balance all the storage allocation units based on their I/O pattern. Independent of how your initial design looks, you can easily scale out by replacing or adding disk drives, or even adding new expansion shelves/arrays. Sounds good, doesn’t it?
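Just to illustrate the principle (a purely conceptual sketch in Python, nothing to do with DataCore’s actual implementation or terminology), you can think of it as allocation units being placed on a tier according to how hot they are:

    # Purely conceptual: tiers ordered from fastest to slowest, each backed by
    # a Nexsan array/volume of a given drive type.
    TIERS = ["SSD RAID10", "15k SAS RAID10", "SAS RAID5", "SATA"]

    def target_tier(ios_per_hour):
        """Pick a tier for a storage allocation unit based on its recent I/O.
        The thresholds are completely made up and only illustrate the idea."""
        if ios_per_hour > 1000:
            return TIERS[0]
        if ios_per_hour > 100:
            return TIERS[1]
        if ios_per_hour > 10:
            return TIERS[2]
        return TIERS[3]

    print(target_tier(5000))   # hot data  -> 'SSD RAID10'
    print(target_tier(2))      # cold data -> 'SATA'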

In addition to that, you can benefit from the high capacity and density and also use the arrays as a backup-to-disk target. And once your data has landed on the array, you can make use of the replication and AutoMAID features to complete your DR or energy saving plan.

 

AutoMAID

This feature comes into play if you have areas of data which are static or rarely accessed. AutoMAID (Automatic Massive Array of Idle Disks) is Nexsan’s solution to achieve low power consumption on your backup or archive arrays. In multiple steps it is possible to spin the drives down further and further:

1. Unload heads: 15% power savings @ sub-second recovery time
2. Slow down to 4000 RPM: 30% power savings @ 15 seconds recovery time
3. Stop rotating: 50% power savings @ 30-45 seconds recovery time
4. Turn drive electronics off: 75% power savings @ 30-60 seconds recovery time
5. Turn chassis off: 95% power savings @ 30-60 seconds recovery time

Here is a little example to provide you with some numbers:

(Screenshot: AutoMAID power saving example)

Let’s take a quick step back to DataCore’s Auto-Tiering feature. You can combine both techniques: while your high (fast) storage tiers are heavily accessed, AutoMAID can slow down the low (slow) disk tiers that are rarely accessed. This way you can make use of power saving features even on your production storage.

 

Snapshots & Replication

If you need to take snapshots of your volumes or replicate them to another array, you can do so by purchasing the required license.

The E-Series then provides you with the capability of taking up to 4096 snapshots of your volumes, which can later be deleted in any order. Even though snapshots are nothing new, allow me to provide some details about copy-on-write based snapshots for those who are not familiar with them:

To use copy-on-write based snapshots you need a so-called “Reserved Snapshot Space” when you create a virtual volume, i.e. storage capacity reserved exclusively for snapshots.

In the first step of the snapshot creation only the metadata, containing the information about the original blocks, will be copied.

(Diagram: copy-on-write snapshot – only the metadata is copied)

At this point no physical blocks are moved or copied, so the creation of the snapshot is almost instantaneous. From now on the array tracks incoming changes: before a block can be altered, the original block has to be copied to the designated snapshot reserve space, and only then is it overwritten.

(Diagram: copy-on-write snapshot – original block copied to the reserve space before being overwritten)

As soon as the original block has been copied and the new block has been written to disk, the snapshot starts consuming disk space. The snapshot now points to those blocks which represent the consistent data at the point in time at which the snapshot was taken.

(Diagram: copy-on-write snapshot – the snapshot pointing to the preserved blocks)
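If you prefer code over diagrams, here is a tiny toy model of the copy-on-write principle (my own simplification in Python, not Nexsan’s implementation):

    # Toy model of copy-on-write snapshots: a snapshot only stores the original
    # version of blocks that have been overwritten since it was taken.
    class Volume:
        def __init__(self, blocks):
            self.blocks = list(blocks)      # the "live" data of the volume
            self.snapshots = []

        def take_snapshot(self):
            snap = {}                       # block index -> preserved original block
            self.snapshots.append(snap)     # instantaneous: no data is copied yet
            return snap

        def write(self, index, data):
            for snap in self.snapshots:
                if index not in snap:       # copy-on-write: preserve the original once
                    snap[index] = self.blocks[index]
            self.blocks[index] = data       # only then overwrite the live block

        def read_snapshot(self, snap, index):
            # unchanged blocks are still read from the live volume
            return snap.get(index, self.blocks[index])

    vol = Volume(["A", "B", "C"])
    snap = vol.take_snapshot()
    vol.write(1, "B2")
    print(vol.blocks)                   # ['A', 'B2', 'C'] -> current data
    print(vol.read_snapshot(snap, 1))   # 'B'              -> data at snapshot time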

And as mentioned, you can also replicate your data by setting up an asynchronous replication of individual iSCSI, Fibre Channel or SAS connected volumes. There is no maximum distance between source and target devices, because the arrays use a TCP/IP based iSCSI connection for the replication traffic. You can choose a one-to-one mapping or a many-to-one mapping, which works perfectly together with the fact that only changed blocks are transferred. This is ideal if you have multiple arrays like the E18 in branch offices and you want to keep the data backed up in your datacenter.

The replication is based on snapshots which are transferred to the remote system. This requires at least a minimum of snapshot capacity on the source and, depending on the number of snapshots you want to keep, some more on the destination volume. With that information I already revealed that you can keep multiple snapshots of your volume on the target array.
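Staying with the toy model from above (reusing the Volume class and the vol/snap objects), snapshot-based replication then boils down to shipping only the blocks that changed since the snapshot was taken; again, this is just a conceptual sketch, not the actual replication protocol:

    # The blocks preserved in the snapshot are exactly the blocks that changed
    # afterwards, so only those need to be sent (in their current state).
    def changed_blocks(volume, snap):
        return {index: volume.blocks[index] for index in snap}

    def replicate(volume, snap, target_blocks):
        for index, data in changed_blocks(volume, snap).items():
            target_blocks[index] = data

    target = ["A", "B", "C"]        # remote copy, in sync when 'snap' was taken
    replicate(vol, snap, target)    # only block 1 ("B2") is transferred
    print(target)                   # ['A', 'B2', 'C']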

There are some more really useful improvements in the new firmware, like enhanced monitoring or improved alerting, but I haven’t had the chance to play around with them yet, so I’m going to put this in another post where I can provide you with some more details. The new firmware is available now for all E-Series arrays and is designated Q011.1100.