My thoughts about Server Side Caching

To follow up on my last post about different storage technologies and approaches, I want to dive deeper into one of them. This time it’s all about server-side caching, which is relatively new in the way it can now be implemented in virtual environments, and how it can accelerate a broad range of applications.

Server-side caching technologies leverage local resources inside the hypervisor, such as SSDs, PCIe flash cards or even RAM, to drive down application (VM) latency, which in the end is the key performance metric that really matters. The idea is to transparently plug into the virtual machine’s I/O path, intercept its I/Os, write them to flash or RAM, and acknowledge the I/O right back to the VM. So virtual machines see real flash/RAM performance and latency.
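To make the idea more concrete, here is a minimal sketch (in Python, with made-up names; real products are far more involved) of how such a write-back caching layer behaves: writes are acknowledged as soon as they land on the local cache device, reads are served from cache when possible, and dirty blocks are committed to the array later.

```python
class WriteBackCache:
    """Illustrative sketch of a server-side write-back cache layer."""

    def __init__(self, backing_store):
        self.backing_store = backing_store  # slow primary storage (SAN/NAS)
        self.cache = {}                     # local flash/RAM: block -> data
        self.dirty = set()                  # blocks not yet de-staged

    def write(self, block, data):
        # Intercept the VM's write: land it on local flash/RAM ...
        self.cache[block] = data
        self.dirty.add(block)
        # ... and acknowledge immediately; the VM sees flash latency,
        # not the round trip to the array.
        return "ack"

    def read(self, block):
        # Serve hits from the local device; fall back to the array on a miss.
        if block in self.cache:
            return self.cache[block]
        data = self.backing_store[block]
        self.cache[block] = data  # populate the read cache for next time
        return data

    def destage(self):
        # Background task: commit dirty blocks to persistent storage.
        for block in list(self.dirty):
            self.backing_store[block] = self.cache[block]
        self.dirty.clear()
```

Note that the write is acknowledged before it ever reaches the array; that is exactly why fault tolerance (replication to another host) matters, as discussed below.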

Since this technology hasn’t gained as much awareness as it actually deserves, I think it’s worth pointing out some of its key benefits:

Decouples virtual machine performance (IOPS/Latency) from primary storage

I have no clue how much time I’ve spent just playing with spindle counts, RAID variants and vendor discussions to hit the required IOPS for new storage systems. Even with a storage tiering approach and flash as Tier 1, it can still be challenging, because all of a sudden the IOPS/GB ratio between the different storage tiers comes into play. How much Tier 1 do I need? Should I go with just two tiers, or will my pool be big enough to justify a third tier with slow SATA disks, and so forth.
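As a reminder of what that sizing game looks like, here is the classic back-of-the-envelope spindle calculation (the numbers below are made up for illustration; the RAID write penalty is commonly taken as 2 for RAID 10 and 4 for RAID 5):

```python
def required_spindles(read_iops, write_iops, raid_write_penalty, disk_iops):
    """Back-of-the-envelope spindle count for a target frontend workload."""
    # Each frontend write costs several backend I/Os, depending on RAID level.
    backend_iops = read_iops + write_iops * raid_write_penalty
    # Round up: you can't buy a fraction of a disk.
    return -(-backend_iops // disk_iops)

# Example: 4,000 frontend IOPS at a 70/30 read/write ratio on RAID 5,
# using 10k SAS disks rated at roughly 140 IOPS each.
spindles = required_spindles(read_iops=2800, write_iops=1200,
                             raid_write_penalty=4, disk_iops=140)
```

A caching layer attacks exactly this equation: if flash absorbs most of the hot I/O, the spindle count no longer has to be sized for peak frontend IOPS.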

This is especially important for SMB customers, who often have to use basic block storage systems with rather low spindle counts. I’ve seen those systems become a bottleneck very fast, because people tend to underestimate the I/O demand of applications like SQL Server, Exchange, etc., even in smaller environments.

Allows you to freely choose which flash devices to use

I’ve seen storage vendors using off-the-shelf SSDs but charging five times the actual street price. No wonder people hesitate to pay it! This approach, on the other hand, lets you choose what to plug into your hosts. You can pick the SSD vendor and the capacity with the best $/GB or IOPS/GB ratio, which can significantly drive down the cost of implementing flash. If you go with devices from the OEM of your existing server equipment, the price will certainly be higher, but you don’t need to worry about compatibility, RMA processes, etc.; basically you get the services/SLAs you are used to.

Efficient & Fault Tolerant

Since the local devices don’t need to be RAIDed, you don’t lose as much capacity as in external storage systems. Only if the write-back cache is enabled does the caching software have to replicate every I/O to a second (or even third) host within the cluster, to ensure fault tolerance against host and device failures. If you use the layer as a read cache only, there is basically no loss and the full capacity can be used to cache I/Os.

It can scale out

If you add additional hosts to a virtual environment, you CAN add them with or without local flash devices. If your current hosts are sufficient to run your most critical enterprise applications and those are already accelerated, you don’t necessarily have to add flash to the new hosts, but you can of course! Just keep failover scenarios in mind. And in case you want every VM to be accelerated, you can easily scale out by adding new hosts with local flash devices (or additional RAM).

Management & Ease of Use

All solutions I’ve seen so far did a very good job in terms of usability and integration into VMware’s vSphere & Web Client. So setup and management don’t cause much overhead and don’t require special training to install and use the product.

However, you may ask yourself: “What about my primary storage system and the cache software, will they get along? And what about all the dirty data inside the cache?”

Write-back caching of course means that the local caching devices are full of “dirty data”, and the caching software is responsible for taking care of it, both in terms of fault tolerance (I/O replication between nodes) and by committing (de-staging) those I/Os to the persistent storage system over time.

This can have a hugely positive impact on your primary storage, simply because it can significantly reduce the total number of IOPS your SAN or NAS has to process. Even if the caching layer acts only as a read cache, thanks to the size of flash devices the data (individual blocks or content) can reside there far longer than in the few GB of NVRAM inside a storage array.

If the write-back cache functionality is used as well, the caching software can eliminate redundant writes of the same block and commit only the latest version of a block down to the array. Both scenarios can significantly reduce the total number of IOPS, which allows some conclusions:

  • Bring primary storage arrays back to a healthy state in case they’re already at their limits
  • Get more out of existing assets
  • And even if some folks don’t like this one: consider smaller arrays due to the reduced number of IOPS they have to process
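The write-coalescing effect is easy to sketch (again a toy example, not any vendor’s actual implementation): because de-staging works on the latest state per block, N overwrites of the same hot block reach the array as a single write.

```python
def coalesce_writes(write_log):
    """Collapse a stream of (block, data) writes to the latest version per block.

    This mimics what a write-back cache commits to the array during de-staging:
    only the final state of each block, not every intermediate overwrite.
    """
    latest = {}
    for block, data in write_log:
        latest[block] = data  # later writes simply overwrite earlier ones
    return latest

# Example: a VM rewrites block 5 three times and block 9 once.
log = [(5, "v1"), (9, "a"), (5, "v2"), (5, "v3")]
committed = coalesce_writes(log)
# Four frontend writes shrink to two backend writes.
```

The hotter the working set, the bigger this reduction gets, which is exactly why the backend array sees so much less traffic.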

But nothing comes for free. The caching software needs to be purchased, the flash devices obviously as well, and if your hosts are just equipped with SD cards (diskless configuration), a proper RAID controller will probably also be required. RAM can really help here, so don’t rule it out too quickly.

In my opinion, server-side caching is a smart way to speed things up and to create room when it comes to choosing new primary storage, since IOPS & latency are no longer your primary concern. I guess the storage folks don’t like to hear this, but if SSC gets enough awareness and a growing install base, it can really steal a big piece of the storage market cake.

That’s it for now. I’m planning to follow this one up with two additional posts: a hands-on deep dive into one of the current solutions available on the market, as well as a post on the question of “server-side caching vs. all-flash arrays”, so stay tuned.


A view on the storage market – 2014

The dust of VMworld 2014 hasn’t even settled yet, and while I was following the event via Twitter and blogs, I realized how glad I am to work in the field of virtualization, and especially in the area of storage. Why, you may ask? Because I think the storage market is one of the most interesting areas of IT to work in. There are so many different solutions and approaches out there, all trying to solve the problems and challenges that come with the need to store your most critical business asset: your data. More and more startups are coming out of stealth to challenge the big players like EMC, while those are not standing still.

It’s quite a challenge to keep pace with the evolving market and to stay up to date with all those vendors and solutions, but it’s a fun one! That’s why I thought it might be a good idea to provide those of you who are not following the market as closely with a quick walkthrough of today’s major approaches and solutions.

Let’s start with the most basic solution, traditional “Block Storage”, which still makes up a big portion of the market. These systems come in many different varieties, ranging from rather stupid block storage like Dell’s MD3, HP’s MSA or Nexsan’s E-Series up to very well known solutions like NetApp’s FAS series or EMC’s VNX. The latter include way more intelligence and their own layer of abstraction to offer features like thin provisioning, storage tiering, de-duplication, etc. Over time they have built up a rich ecosystem, especially around data protection solutions, which still makes them very attractive to many customers. These arrays usually scale up by adding more disks or by replacing the existing dual-controller configuration with a faster one.

More about storage architectures can be found on Chad’s blog.

However, the range in this category is pretty big, from entry-level systems up to really advanced enterprise-grade solutions. Just for the overview, I’ve put them into a single category.

No doubt customers will still go for them as long as the other solutions I’m about to cover are still in the process of scaling down to SMB customers.

Combined with a “Storage Hypervisor” like DataCore’s SANsymphony-V or FalconStor to abstract the provided capacity and performance, they can be even more attractive to SMB customers. These software solutions act as a hypervisor for your storage, enabling customers to turn all sorts of storage units into a smart solution with features like thin provisioning, storage tiering, etc. The software layer abstracts the provided capacity and pools it together. This allows spreading your data across multiple physical storage arrays and thus accelerating performance. They even compete with solutions like EMC VPLEX, because this software layer allows you to create a stretched cluster across sites. And because these solutions are not tied to any hardware, they can also run as virtual machines to create your own virtual SAN/hyper-converged solution. But this usually goes hand in hand with additional complexity, due to the multiple layers required to build such a solution.

This problem has been targeted by (native) “Hyper-Converged” solutions, which combine computing resources (CPU & RAM) with local storage. They eliminate the central SAN as a component of your datacenter by scaling storage out across all computing (hypervisor) nodes. In my opinion, especially VMware’s own VSAN technology will play an important role in the SMB market. For SME up to enterprise customers, they are now in a race with the leaders Nutanix and SimpliVity, which have been on the market for quite a while now and have already made a name for themselves. That’s why VMware just announced an OEM program called EVO:RAIL, which combines VMware vSphere (VSAN + vCenter Server) with a new interface to simplify the deployment. The solution will be provided by partners like Dell, Supermicro and even EMC, all of whom want a piece of the booming hyper-converged cake. You may ask why this market is going crazy? Simply because these solutions let you scale out pretty easily, and setup effort has been reduced to a minimum. However, the implementations vary, especially in terms of scalability and features like data reduction, so keep that in mind.

All these solutions have one thing in common: FLASH! No matter the approach, all of them integrate flash in some form to accelerate storage performance, whether as a read cache in entry-level SANs, as a storage tier, or in a flash-first design in hyper-converged solutions.

A hyper-converged solution wouldn’t even be possible without flash, due to the limited number of physical disks available to each node. To achieve a reasonable level of performance, they use flash devices as read & write cache or also as a storage tier. Even though I haven’t used a solution other than VSAN myself, I see a potential performance bottleneck in the limited number of SSDs/disks per node.

This leads me directly to the so-called “Hybrid Arrays”, which combine flash and spinning disks in a central storage array. A good example that comes to mind is Nimble Storage. Thanks to the intelligence they’ve put into their arrays, they can even use slower capacity disks like 7.2k SATA drives, by optimizing incoming I/Os to be written down sequentially, and this is where spinning disks really shine. Flash in this case is used only as a read cache. Other approaches like EMC’s FAST Cache in the VNX series can use SSDs as a write cache as well. In my opinion this can be more efficient than a classic storage tiering approach, simply because hot data gets onto flash much faster: basically when it’s needed, and not after a scheduled data movement. And as you can see, some arrays can be found in multiple categories, since they have evolved over time. Another example would be X-IO’s ISE arrays.

This brings me to one of my favorite categories out there, the “All-Flash Arrays” (AFAs), which are packed with nothing but flash storage (usually off-the-shelf SSDs) to provide even more IOPS. And excuse me if I’m a bit rude here, but a legacy block storage array packed with SSDs is, in my opinion, not an AFA! Simply because the price per SSD is often disproportionately high, and a bunch of SSDs can easily drive the storage controllers to their maximum. A real AFA should offer a non-blocking architecture, data reduction technologies like de-duplication and compression, and an efficient implementation of RAID optimized for flash as well. So I’m talking about arrays specifically built for flash, like Pure Storage’s FA-400 or EMC’s XtremIO series. Gartner recently published a new Magic Quadrant for AFAs, currently led by those two vendors, in my opinion closely followed by SolidFire. These arrays (usually also dual controllers + SSD shelves) can be used to run dedicated workloads that require high IOPS and low latency, or as a tier within a storage tiering concept. One thing that’s not quite obvious: whereas Pure follows a scale-up approach, EMC and SolidFire are scale-out architectures. I can only recommend the “Tech Field Day” videos on YouTube to get a better understanding of their technologies. Because of the current pricing and the rather low capacities, I don’t see them becoming the only storage within your data center in the near future.

Last but not least, a completely different approach: “Server-Side Caching”. The idea is to use local flash devices or even RAM inside the host/hypervisor and to transparently intercept I/Os on their path down to the storage array, caching them on those local flash devices. Even though these solutions from PernixData or others like SanDisk and Infinio don’t provide any persistent storage capacity, they have some really big benefits. The most obvious is performance, because all accelerated VMs see real SSD latency no matter what the actual SAN looks like. You are free to choose which SSDs or flash devices you buy to accelerate your application performance. Application performance is decoupled from the actual SAN performance, which makes the decision for a new SAN somewhat easier, since you don’t need to be as concerned about performance. And these solutions allow you to easily scale out as you add new hosts.

There would be so much more to say about the individual solutions, but this post is only meant to give you an overview of the current storage market. One thing that hopefully becomes clear after reading it is that there is no final answer to the question of which approach is the holy grail of storage technology. Customer sizes, requirements, budgets, etc. are simply too different for an absolute answer. And in my opinion it’s a positive thing to have so many solutions to choose from!

One last thing: I’ve covered the major approaches and mentioned just a handful of vendors in this post. No doubt there are many more interesting solutions, like Dell with Compellent, HP’s 3PAR, or even more startups like Coho Data. Maybe I’ll find a way to provide you with an overview of all of them.

VMworld 2013 Recap and 2014 Outlook *updated*


Last year I was one of the lucky guys who were selected to attend VMworld 2013 in San Francisco, and as you can imagine I was REALLY excited. It was my first VMworld, my first visit to San Francisco and my first stay in the U.S. I guess I don’t need to mention that SF is a really awesome city, one I can only recommend everyone visit if they get the chance.

The scale of the city and the event itself was breathtaking: over 22,000 people spread across the Moscone Center and even a hotel nearby. In the morning it looked like right before a football game, when people stream to the stadium, but in this case it was for the general sessions.

Unfortunately I didn’t take many pictures of the event, so here are just a few to give you some impressions:

There were so many interesting sessions to choose from that I really struggled to build my own schedule. In the end I picked about three or at most four sessions a day. However, in case you are in doubt, there will be a chance to change your mind on site. But be prepared: if you didn’t schedule a session, expect a long queue of people waiting for free seats. I scheduled only technical deep dives and enjoyed almost all of them. So it’s not just marketing; you can really benefit from those sessions in your daily job.

I can only recommend limiting the number of sessions per day, simply because it can be too much information at once and your brain will probably give up.

Make sure to spend enough time at the Solutions Exchange. It was probably my favorite part: talking to many different vendors and exploring & learning about their solutions was really interesting.

As many other attendees & bloggers have already said: make yourself comfortable, don’t stress yourself too much, and you will definitely enjoy the event.

It was definitely the coolest event I’ve been to so far; the combination of an awesome event in an amazing city is just fantastic. I really enjoyed every single day, and no doubt I will take every chance to attend again.


What about 2014?

This year I’m going to attend VMworld 2014 in Barcelona for the very first time, not just as a customer and partner but also as a blogger. I’m really excited and also thankful to get the chance to attend another VMworld. No doubt it will challenge me to turn all that massive information into blog posts. I will blog, tweet and share as much information as I can about the event.

Please don’t hesitate to let me know if you will be in Barcelona; I would be happy to meet you there!

And for those of you who are wondering what I’m looking forward to this year …


My schedule will mainly focus on storage & data protection. I’m curious to hear some stories from early adopters of VMware VSAN and how the product has evolved since its initial release. I’m optimistic that I will have set up my first VSAN at a customer site by October, so I’ll probably be able to share some experiences too.

Especially the storage market currently has so many vendors and solutions on offer that it’s hard to stay up to date, and VMworld is a great opportunity to catch up with many of them. I guess hyper-converged and all-flash solutions will dominate the event.

Topics like backup in all its varieties and archiving always seem to be problematic when talking to customers, so I’ll keep my eyes open for new and enhanced solutions to better support our customers.

It’s also a great opportunity to check out completely new vendors and solutions, even outside my main field of interest.

It’s not too late, so take the chance and register!