

A view on the storage market – 2014

The dust of VMworld 2014 hasn’t even settled yet, and while following the event via Twitter and blogs, I’ve realized how glad I am to work in the field of virtualization, especially in the area of storage. Why, you may ask? Because I think the storage market is one of the most interesting areas in IT to work in. There are so many different solutions and approaches out there, all trying to solve the problems and challenges that come with the need to store your most critical business asset: your data. More and more startups are coming out of stealth to challenge the big players like EMC, while those incumbents aren’t standing still either.

It’s quite a challenge to keep pace with the evolving market and to stay up to date with all those vendors and solutions, but it’s a fun one! That’s why I thought it might be a good idea to provide those of you who don’t follow the market as closely with a quick walkthrough of today’s major approaches and solutions.

Let’s start with the most basic solution: traditional “block storage”, which still makes up a big portion of the market. These arrays come in many different varieties, ranging from rather dumb block storage like Dell’s MD3, HP’s MSA or Nexsan’s E-Series up to very well-known solutions like NetApp’s FAS series or EMC’s VNX. The latter include way more intelligence and their own abstraction layer to offer features like thin provisioning, storage tiering, de-duplication, etc. Over time they have built up a rich ecosystem, especially around data protection solutions, which still makes them very attractive for many customers. These arrays usually scale up by adding more disks or by replacing the existing dual-controller configuration with a faster one.

More about storage architectures can be found on Chad’s blog.

However, the range in this category is pretty big, from entry-level systems up to really advanced enterprise-grade solutions. Just for this overview I’ve put them all into a single category.

No doubt customers will still go for them as long as the other solutions I’m about to cover are still in the process of scaling down to SMB customers.

Combined with a “storage hypervisor” like DataCore’s SANsymphony-V or FalconStor, which abstracts the provided capacity and performance, they can be even more attractive to SMB customers. These software solutions act as a hypervisor for your storage, enabling customers to turn all sorts of storage units into a smart solution with features like thin provisioning, storage tiering, etc. The software layer abstracts the provided capacity and pools it together, which allows you to spread your data across multiple physical storage arrays and thereby accelerate performance. They even compete with solutions like EMC VPLEX, because this software layer lets you create a stretched cluster across sites. And because these solutions are not tied to any particular hardware, they can also run as virtual machines to create your own virtual SAN/hyper-converged solution. But this usually goes hand in hand with additional complexity due to the multiple layers required to build such a solution.
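To illustrate the basic idea of such an abstraction layer, here is a minimal Python sketch of a pool that stripes blocks round-robin across multiple backing arrays. All class names and the round-robin placement are purely illustrative assumptions on my part, not how any of the mentioned products actually work:

```python
# Minimal sketch of how a storage hypervisor might pool capacity:
# logical blocks are striped round-robin across the backing arrays,
# so I/O is spread over all spindles/controllers. Hypothetical names.

class BackingArray:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.blocks = {}  # logical block address -> data

class StoragePool:
    def __init__(self, arrays):
        self.arrays = arrays

    @property
    def capacity_gb(self):
        # The pool exposes the aggregated capacity of all member arrays.
        return sum(a.capacity_gb for a in self.arrays)

    def write_block(self, lba, data):
        # Round-robin placement: the block address decides the target array.
        self.arrays[lba % len(self.arrays)].blocks[lba] = data

    def read_block(self, lba):
        return self.arrays[lba % len(self.arrays)].blocks[lba]

pool = StoragePool([BackingArray("array-1", 7200), BackingArray("array-2", 9600)])
pool.write_block(42, b"hello")
print(pool.capacity_gb, pool.read_block(42))
```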

This problem is targeted by (natively) “hyper-converged” solutions, which combine computing resources (CPU & RAM) with local storage. They eliminate the central SAN as a component of your datacenter by scaling storage out across all computing (hypervisor) nodes. In my opinion, especially VMware’s own VSAN technology will play an important role in the SMB market. For SME up to enterprise customers, VMware is now in a race with Nutanix and SimpliVity, which have been on the market for quite a while and have already made a name for themselves. That’s why VMware just announced an OEM program called EVO:RAIL, which combines VMware vSphere (VSAN + vCenter Server) with a new interface to simplify deployment. The solution will be provided by partners like Dell, Supermicro and even EMC, which all want a piece of the booming hyper-converged cake. Why is this market going crazy? Simply because those solutions are pretty easy to scale out and setup efforts have been reduced to a minimum. However, the implementations vary, especially in terms of scalability and features like data reduction, so keep that in mind.

All those solutions have one thing in common: flash! No matter the approach, all of them integrate flash in some form to accelerate storage performance, whether as a read cache in entry-level SANs, as a storage tier, or in a flash-first design in hyper-converged solutions.

A hyper-converged solution wouldn’t even be possible without flash, due to the limited number of physical disks available to each node. To achieve a reasonable level of performance, these solutions use flash devices as read & write cache or as a storage tier. Even though I haven’t used a solution other than VSAN myself, I see a potential performance bottleneck in the limited number of SSDs/disks per node.

This leads me directly to the so-called “hybrid arrays”, which combine flash and spinning disks in a central storage array. A good example that comes to my mind is Nimble Storage. Due to the intelligence they’ve put into their arrays, they can even use slower capacity disks like 7.2k SATA drives, by optimizing the incoming I/Os so they are written down sequentially, and sequential access is where spinning disks really shine. Flash in this case is used only as a read cache. Other approaches, like EMC’s FAST Cache in the VNX series, can use SSDs as a write cache as well. In my opinion this can be more efficient than a classic storage-tiering approach, simply because hot data lands on flash way faster, basically when it’s needed and not after a scheduled data movement. And as you can see, some arrays can be found in multiple categories since they have evolved over time. Another example would be X-IO’s ISE arrays.
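To give you a feel for that write-optimization trick, here is a rough Python sketch of coalescing random writes into large sequential stripes. This is a generic illustration under my own assumptions, not Nimble’s actual implementation:

```python
# Hypothetical sketch of coalescing random writes into sequential stripes.
# Incoming writes land in an in-memory (NVRAM-backed) buffer; once enough
# data has accumulated, the whole buffer is flushed as one sequential
# write, the access pattern where 7.2k SATA drives perform best.

STRIPE_SIZE = 8  # blocks per full-stripe flush (illustrative value)

class WriteCoalescer:
    def __init__(self, disk):
        self.disk = disk      # append-only list simulating sequential layout
        self.buffer = []      # pending (lba, data) pairs
        self.index = {}       # lba -> position on disk, updated on flush

    def write(self, lba, data):
        self.buffer.append((lba, data))
        if len(self.buffer) >= STRIPE_SIZE:
            self.flush()

    def flush(self):
        # One large sequential write instead of many small random ones.
        for lba, data in self.buffer:
            self.index[lba] = len(self.disk)
            self.disk.append(data)
        self.buffer.clear()

disk = []
c = WriteCoalescer(disk)
for lba in (42, 7, 99, 3, 15, 1, 64, 28):  # random LBAs, sequential layout
    c.write(lba, f"block-{lba}")
print(len(disk), c.index[99])
```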

This brings me to one of my favorite categories out there, the “all-flash arrays” (AFA), which are packed exclusively with flash storage (usually off-the-shelf SSDs) to provide even more IOPS. And excuse me if I’m a bit blunt here, but a legacy block storage array packed with SSDs is, in my opinion, not an AFA! Simply because the price per SSD is often disproportionately high, and a bunch of SSDs can easily drive the storage controllers to their limit. A real AFA should offer a non-blocking architecture, data reduction technologies like de-duplication and compression, and an efficient implementation of RAID optimized for flash as well. So I’m talking about arrays built specifically for flash, like Pure Storage’s FA-400 or EMC’s XtremIO series. Gartner recently published a new Magic Quadrant for AFAs, currently led by those two vendors, in my opinion closely followed by SolidFire. These arrays (usually also dual controllers + SSD shelves) can be used to run dedicated workloads which require high IOPS and low latency, or as a tier within a storage-tiering concept. One thing that’s not quite obvious: whereas Pure follows a scale-up approach, EMC and SolidFire are scale-out architectures. I can only recommend the “Tech Field Day” videos on YouTube to get a better understanding of their technologies. Because of the current pricing and the rather low capacity, I don’t see them becoming the only storage within your data center in the near future.
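Since data reduction is such a defining AFA feature, here is a generic sketch of inline block de-duplication via content fingerprints. It’s a simplified illustration of the technique itself, not any vendor’s implementation; real arrays also deal with hash collisions, compression and garbage collection:

```python
# Generic sketch of inline block de-duplication via content fingerprints.
# Each incoming block is hashed; if the fingerprint is already known, the
# block is not stored again, only a reference is added.
import hashlib

class DedupStore:
    def __init__(self):
        self.chunks = {}   # fingerprint -> block data (stored once)
        self.volume = {}   # logical block address -> fingerprint

    def write(self, lba, block: bytes):
        fp = hashlib.sha256(block).hexdigest()
        if fp not in self.chunks:
            self.chunks[fp] = block  # first copy: consumes physical space
        self.volume[lba] = fp        # every duplicate is just a reference

    def read(self, lba) -> bytes:
        return self.chunks[self.volume[lba]]

store = DedupStore()
for lba in range(100):
    store.write(lba, b"\x00" * 4096)  # 100 identical 4 KB blocks...
print(len(store.chunks))              # ...stored exactly once
```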

Last but not least, a completely different approach: “server-side caching”. The idea behind this is to use local flash devices or even RAM inside the host/hypervisor and to transparently intercept I/Os on their path down to the storage array, caching them on those local flash devices. Even though these solutions from PernixData or others like SanDisk and Infinio don’t provide any persistent storage capacity, they have some really big benefits. The most obvious is performance, because all accelerated VMs see real SSD latency no matter what the actual SAN looks like. You are free to choose which SSDs or flash devices you buy to accelerate your application performance. Application performance gets decoupled from the actual SAN performance, which makes the decision for a new SAN somewhat easier since you don’t need to be as concerned about performance. And these solutions scale out easily as you add new hosts.
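Conceptually, such an acceleration layer is just a cache sitting in the I/O path. Here is a minimal write-through sketch in Python, again under my own simplified assumptions rather than any product’s actual design:

```python
# Hypothetical sketch of a host-side read cache in the I/O path.
# Reads are served from local flash when possible; writes go through to
# the SAN and update the cache (write-through, so nothing persistent is
# lost if the host dies). Real products also offer write-back modes with
# replication across hosts.
from collections import OrderedDict

class HostFlashCache:
    def __init__(self, san, capacity_blocks=1024):
        self.san = san                  # dict simulating the backing array
        self.cache = OrderedDict()      # LRU: lba -> data on local flash
        self.capacity = capacity_blocks

    def read(self, lba):
        if lba in self.cache:           # cache hit: local SSD latency
            self.cache.move_to_end(lba)
            return self.cache[lba]
        data = self.san[lba]            # cache miss: fetch from the SAN
        self._insert(lba, data)
        return data

    def write(self, lba, data):
        self.san[lba] = data            # write-through to the array
        self._insert(lba, data)

    def _insert(self, lba, data):
        self.cache[lba] = data
        self.cache.move_to_end(lba)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used

san = {lba: f"block-{lba}" for lba in range(10)}
cache = HostFlashCache(san, capacity_blocks=4)
print(cache.read(3), cache.read(3))  # miss, then hit from local flash
```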

There would be so much more to say about the individual solutions, but this post is only meant to give you an overview of the current storage market. One thing that hopefully becomes clear after reading it: there is no final answer to the question of which approach is the holy grail of storage technology. Customer sizes, requirements, budgets, etc. are simply too different to give an absolute answer. And in my opinion that’s a positive thing, having so many solutions to choose from!

One last thing: I’ve covered the major approaches and mentioned just a handful of vendors in this post. No doubt there are many more interesting solutions, like Dell with their Compellent arrays, HP’s 3PAR or even more startups like Coho Data. Maybe I’ll find a way to provide you with an overview of those as well.


Nexsan – New E-Series V announced

Earlier this year in February, I wrote a post about Nexsan’s E-Series and their current firmware. This time I want to give you a short update on their latest E-Series hardware and products. I have to admit I still need to get used to the new name, which is actually “Nexsan powered by Imation”.

[Image: Nexsan powered by Imation logo]

This week Nexsan announced the second generation of their E-Series controllers/arrays, so I’d like to give you a quick update on what’s new.

Basically, the E-Series received a 2nd generation of storage controllers which, of course, offer a couple of enhancements. Both generations will remain available on the market, so the new series can be recognized by its new name, “E-Series V”. The arrays got new names as well; the new E60, for example, is now called E60VT.

But let’s start with the enhancements of the 2nd-generation E-Series V controllers.

The CPU speed has been increased by 50%, and the controllers of the E48VT and E60VT arrays now support up to 16 GB of cache per controller. One of the major new features is the “turbocharger”, a PCIe-based acceleration card which can be attached to the controller. This card adds an additional 8 GB of RAM and an additional XOR RAID engine to speed up parity calculations, which further improves throughput. It will be sold as a separate product and won’t be included with the E-Series by default.

[Image: E60VT]

More details and the latest spec & data sheets can be found here.

 


New Design

It’s not hard to see that I’ve more than just slightly changed the look of my blog 🙂

I’m still a noob when it comes to web design and such things, but I realized that a lighter design/color scheme would be more “appropriate”. So don’t be surprised if I change a few more design-related things over the next couple of days.

//Update: Besides the new design, I just checked several posts in multiple browsers, which I hadn’t done before. Usually I only check my posts in Chrome, and I just realized how badly IE performs when it comes to image scaling. For future posts I’ll focus more on the quality of the images and not only on the content itself. Don’t hesitate to tell me if something doesn’t work properly or looks horrible!


Nexsan – The E-Series & latest Firmware enhancements

In the last couple of months I’ve had the chance to work more with Nexsan’s E-Series arrays, especially in combination with DataCore. So I’d like to give you a brief introduction to Nexsan’s latest storage portfolio and what it can do for you.

Let’s start with a closer look at the E-Series; in future posts we will also look at the other series in more detail, especially the NST series.

Nexsan’s portfolio basically contains three lines of storage systems which cover several use cases.

 

  • NST5000 – Unified: CIFS, NFS and iSCSI block access
  • Assureon – Archive: secure archiving of non-changing files, CAS-based
  • E-Series – Block: Fibre Channel, iSCSI and SAS connectivity

So let’s take a look at the specs and the latest feature set of the E-Series.

                       NEXSAN E18™          NEXSAN E48™          NEXSAN E60™

Controllers            Dual active/active controllers with dual RAID engines per controller (all models)
Density                2U – 18 disks        4U – 48 disks        4U – 60 disks
Available drives       1 / 2 / 3 / 4 TB SATA, 450 / 600 GB 15K SAS, 100 / 200 / 400 GB SSD (all models)
Capacity in chassis    72 TB                192 TB               240 TB
Expansion chassis      E18X                 E48X                 E60X
With 1 expansion       144 TB               384 TB               480 TB
With 2 expansions      216 TB               576 TB               720 TB
Host I/O ports         E18:     4 x 1 GbE iSCSI + 0-4 x 1 GbE iSCSI OR 8 Gb FC OR 10 GbE iSCSI OR 6 Gb SAS
                       E48/E60: 4 x 1 GbE iSCSI + 0-8 x 1 GbE iSCSI OR 8 Gb FC OR 10 GbE iSCSI OR 6 Gb SAS

As you can see in the spec overview, the Nexsan E-Series is equipped with two fully redundant active/active controllers. In case of a replacement there is no need for any configuration or zoning changes, simply because the controller configuration is replicated between the controllers and RAID arrays. The new controller will instantly continue to work with the exact same FC/WWN and IP configuration.

With the latest enhancements Nexsan was able to further improve performance by unlocking an additional CPU core, which provides more computing resources as well as more PCIe lanes, allowing up to 8 host ports per controller. And as you can imagine, with the additional CPU power they could improve throughput by up to 50%, which also adds up to 10% more IOPS.

Each array can be extended with two additional chassis holding the exact same number of drives as the main array. With the E60 you can grow up to 720 TB in just 12U, which is very interesting if high capacity as well as density is required. Or, if you are rather looking for I/O density, you can go with a bunch of SSDs and 15k SAS drives.
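The math behind that 720 TB figure is simple; here it is as a quick sketch (raw capacity, before any RAID overhead):

```python
# Fully expanded E60 with 4 TB SATA drives (raw capacity, before RAID):
drives_per_chassis = 60
drive_tb = 4
chassis = 3                # main E60 array + two E60X expansion chassis
rack_units = chassis * 4   # each chassis is 4U

raw_tb = drives_per_chassis * drive_tb * chassis
print(raw_tb, "TB raw in", rack_units, "U")   # -> 720 TB raw in 12 U
```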

Now let’s check out where and how you can benefit from Nexsan’s storage arrays in particular. With the increased number of host ports you are more flexible when it comes to direct-attaching storage to your application hosts, like your virtual infrastructure or DataCore layer.

One major enhancement is that all arrays have achieved the VMware Ready certification, now offering full VAAI support:

  • Block Zero
  • Full Copy
  • ATS – Hardware-Assisted Locking
  • Thin Provisioning – UNMAP

But in most cases I’ve seen Nexsan E-Series arrays in combination with DataCore, because they complement each other very well. Especially in environments where the need arises to work with different storage tiers, you can make use of the good performance and the capability of mixing multiple disk types within the array. As you probably know, I’m a fan of the DataCore solution, so let me give you a short explanation of how this works together.

Basically it starts with the creation of an array, also known as a RAID set or RAID group. This array defines the physical layout / RAID & protection level you would like to use, like a RAID 10 with 15k SAS drives. On these arrays you then create volumes/LUNs, which is Nexsan’s approach to flexible volumes: it separates the physical RAID layer from the logical volume layer, for example for operations like extending volumes. In theory a single array can hold multiple volumes, but I prefer a 1:1 mapping between arrays and volumes; this prevents one volume from affecting the performance of other volumes on the same array. DataCore in the end will see the volumes you’ve created, and you can take them into any disk pool you like.
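To make that layering a bit more tangible, here is a tiny, purely illustrative Python model of the chain described above. All class names are hypothetical, and the 1:1 rule is just my own preference encoded as an assertion:

```python
# Purely illustrative model of the layering (no vendor API):
# RAID array (physical) -> volume/LUN (logical) -> DataCore disk pool.

class RaidArray:
    def __init__(self, name, raid_level, disk_type):
        self.name, self.raid_level, self.disk_type = name, raid_level, disk_type
        self.volumes = []

    def create_volume(self, name):
        # 1:1 mapping preferred: one volume per array avoids noisy neighbors.
        assert not self.volumes, "1:1 array-to-volume mapping enforced"
        vol = {"name": name, "array": self.name, "tier": self.disk_type}
        self.volumes.append(vol)
        return vol

class DiskPool:
    def __init__(self, name):
        self.name, self.members = name, []

    def add(self, volume):
        self.members.append(volume)   # the pool sees only logical volumes

pool = DiskPool("tier-pool")
for arr in (RaidArray("A1", "RAID10", "15k SAS"),
            RaidArray("A2", "RAID5", "SATA")):
    pool.add(arr.create_volume(arr.name + "-vol"))
print([m["tier"] for m in pool.members])
```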

Now imagine you need to provide multiple storage tiers to your users or clients. You can easily achieve this by combining different disk drives in your Nexsan storage system with DataCore’s Auto-Tiering feature: simply create multiple “arrays” based on different disk drives (SSD, SAS or SATA) and put them into your “Auto-Tiering” disk pool(s).

Maybe you want to start with some flash-based storage and combine it with some fast RAID 10 15k SAS drives. Later you could add some RAID 5 SAS arrays or some SATA storage. Then just let the storage hypervisor do the work for you and balance all the storage allocation units based on their I/O patterns. Independent of how your initial design looks, you can easily scale out by replacing or adding disk drives, or even by adding new expansion shelves/arrays. Sounds good, doesn’t it?

In addition to that, you can benefit from the high capacity and density and use the arrays as a backup-to-disk target as well. And once your data has landed on the array, you can make use of the replication and AutoMAID features to complete your DR or energy-saving plan.

 

AutoMAID

This feature comes into play if you have areas of data which are static or rarely accessed. AutoMAID is Nexsan’s solution for achieving low power consumption on your backup or archive arrays. In multiple steps, it is possible to…

1. Unload heads: 15% power savings @ sub-second recovery time
2. Slow down to 4,000 RPM: 30% power savings @ 15 seconds recovery time
3. Stop rotating: 50% power savings @ 30-45 seconds recovery time
4. Turn drive electronics off: 75% power savings @ 30-60 seconds recovery time
5. Turn chassis off: 95% power savings @ 30-60 seconds recovery time

Here is a little example to provide you with some numbers:

[Image: AutoMAID power-saving example]

Let’s take a quick step back to DataCore’s Auto-Tiering feature. You can combine both techniques: while your high (fast) storage tiers are heavily accessed, AutoMAID can slow down your low (slow) disk tiers which see only rare access. This way you can make use of power-saving features even on your production storage.

 

Snapshots & Replication

If you need to take snapshots of your volumes or replicate them to another array, you can do so by purchasing the required license.

The E-Series then provides you with the capability of taking up to 4,096 snapshots of your volumes, which can later be deleted in any order. Even if snapshots are nothing new, allow me to provide some details about “copy-on-write” based snapshots for those who are not familiar with them:

To use copy-on-write based snapshots you need a so-called “Reserved Snapshot Space”: when you create a virtual volume, some storage capacity is reserved exclusively for snapshots.

In the first step of the snapshot creation, only the metadata containing the information about the original blocks is copied.

[Image: Copy-on-write – snapshot creation copies only metadata]

At this point no physical blocks are moved or copied, so the creation of the snapshot is almost instantaneous. From now on the array will track incoming changes: before a block can be altered, the original block has to be copied to the designated snapshot reserve space; only then is it overwritten.

[Image: Copy-on-write – original block preserved in the snapshot reserve space]

As soon as the original block has been copied and the new block has been written to disk, the snapshot starts consuming disk space. The snapshot now points to those preserved blocks, which represent the consistent data at the moment in time when the snapshot was taken.

[Image: Copy-on-write – snapshot pointing to the preserved blocks]
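Put into a few lines of purely illustrative Python, the copy-on-write mechanics described above look roughly like this (a sketch of the general technique, not Nexsan’s implementation):

```python
# Minimal illustration of copy-on-write snapshot mechanics.
# Creating a snapshot copies only metadata; the snapshot reserve space is
# consumed lazily, the first time an original block gets overwritten.

class CowVolume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)    # lba -> data (live volume)
        self.snapshots = []           # list of {lba: preserved original data}

    def create_snapshot(self):
        self.snapshots.append({})     # instantaneous: no data copied yet
        return len(self.snapshots) - 1

    def write(self, lba, data):
        for snap in self.snapshots:
            # Preserve the original block once per snapshot before overwrite.
            if lba not in snap and lba in self.blocks:
                snap[lba] = self.blocks[lba]   # consumes snapshot reserve
        self.blocks[lba] = data

    def read_snapshot(self, snap_id, lba):
        # Snapshot view: preserved original if overwritten, else live block.
        return self.snapshots[snap_id].get(lba, self.blocks.get(lba))

vol = CowVolume({0: "A", 1: "B"})
s = vol.create_snapshot()
vol.write(0, "A2")                              # triggers the copy-on-write
print(vol.read_snapshot(s, 0), vol.blocks[0])   # -> A A2
```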

And as mentioned, you can also replicate your data by setting up an asynchronous replication of individual iSCSI, Fibre Channel or SAS connected volumes. There is no maximum distance between source and target devices, because the arrays use a TCP/IP-based iSCSI connection for the replication traffic. You can choose a one-to-one mapping or a many-to-one mapping, which works perfectly together with the fact that only changed blocks are transferred. This is ideal if you have multiple arrays like the E18 in branch offices and want to keep the data backed up in your datacenter.

The replication is based on snapshots which are transferred to the remote system. This requires at least a minimum of snapshot capacity on the source and, depending on the number of snapshots to keep, some more on the destination volume. With that information I’ve already revealed that you can keep multiple snapshots of your volume on the target array.
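Conceptually, snapshot-based delta replication works like the following sketch; again this is a generic illustration with made-up function names, not Nexsan’s actual wire protocol:

```python
# Conceptual sketch of snapshot-based async replication: compare the latest
# snapshot with the previously replicated one and ship only changed blocks.

def snapshot(volume):
    return dict(volume)  # point-in-time copy of lba -> data (simplified)

def delta(prev_snap, curr_snap):
    # Only blocks that are new or changed since the last replicated snapshot.
    return {lba: data for lba, data in curr_snap.items()
            if prev_snap.get(lba) != data}

def replicate(source, target, prev_snap):
    curr = snapshot(source)
    changes = delta(prev_snap, curr)
    target.update(changes)           # ship only the delta over iSCSI/TCP
    return curr                      # becomes the baseline for the next run

source = {0: "A", 1: "B", 2: "C"}
target = {}
base = replicate(source, target, {})     # first run: full copy
source[1] = "B2"                         # change a single block
base = replicate(source, target, base)   # second run: transfers 1 block
print(target)                            # {0: 'A', 1: 'B2', 2: 'C'}
```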

There are some more really useful improvements in the new firmware, like enhanced monitoring and improved alerting, but I haven’t had the chance to play around with them yet, so I’ll cover them in another post where I can provide you with some more details. The new firmware is available now for all E-Series arrays and is designated Q011.1100.