PernixData FVP & linked-clones – The hidden gem

In this post I want to introduce you to a “hidden gem” in FVP that can help you put your VDI project in the fast lane. One key to a successful VDI project is user acceptance and usability, which is often tightly coupled to the responsiveness of a virtual desktop and its applications. This responsiveness in turn is defined by the time (measured in milliseconds) the I/O operations of those virtual desktops take to complete.

I can remember some early discussions about read-intensive VDI workloads, but it turned out that VDI workloads are far more write-intensive than expected. So to ensure an optimal user experience, it’s important to accelerate not only reads but also writes in an efficient way.

Often the answer to this challenge is to add more spinning disks or expensive Flash to the existing storage infrastructure, or to set up a new silo in the form of an All-Flash Array or a hyper-converged block, just to run the VDI environment.

Using FVP, an administrator has the choice between SSDs, PCIe Flash cards, or even memory as the server-side media to speed up those latency-sensitive I/O operations while leveraging the existing storage infrastructure. This moves the performance directly into the hypervisor and decouples it from the storage capacity.

Sometimes the existing server hardware introduces design constraints that limit the possible options for acceleration media. For example, blade servers usually can’t take advantage of PCIe Flash cards, and VDI hosts often have rather high memory utilization.

But especially for virtual desktops, memory is actually an obvious choice for several reasons, such as its ultra-low latency and consistent performance regardless of the block size the VM is writing.

So let’s see if memory can be an option despite the fact that you may not have tons of memory left.

Linked clones are virtual machines whose virtual disks are linked to a so-called “golden image”, also known as the “replica”, which holds the actual operating system, including applications, corporate settings, etc. The golden image of course is read-only, but Windows doesn’t work if it can’t write to disk. To fix that, the linked clones write their changes to individual virtual disks. Optimally, both the reads from the replica and the writes to the individual disks should be accelerated to ensure an optimal user experience.

That’s exactly what FVP does out of the box; there is no need to configure anything to achieve this. This of course includes support for VMware vMotion, VMware HA, etc.

But I would like to point out how efficiently FVP deals with those linked clones. Out of the box, FVP recognizes linked clones and, more importantly, the base disk of the replica when accelerating the datastores that store those objects.

Instead of building an individual memory (or Flash) footprint for the reads of every single virtual machine, FVP promotes each block just once. For example, if VM A reads block Z from the replica disk (on the persistent storage), that particular block is promoted to the FVP layer. If VM B then also reads block Z, it is already there and can be served from the local acceleration media. FVP doesn’t promote the same block (from the replica) twice. In essence, all linked clones on a host share the read cache content, so you can see this linked-clone optimization as a form of de-duplication.
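To make the shared-promotion idea concrete, here is a minimal conceptual sketch. This is not FVP’s actual implementation (which is closed source); the class and function names are hypothetical and just illustrate a per-host read cache shared by all linked clones:

```python
# Conceptual sketch only: a per-host read cache shared by all linked
# clones. A block read from the shared replica disk is promoted once
# and then served from the cache for every subsequent reader.

class HostReadCache:
    def __init__(self):
        self.cache = {}       # (disk_id, block_addr) -> block data
        self.promotions = 0   # blocks fetched from persistent storage

    def read(self, disk_id, block_addr, fetch_from_storage):
        key = (disk_id, block_addr)
        if key not in self.cache:
            # First reader promotes the block to the acceleration media
            self.cache[key] = fetch_from_storage(disk_id, block_addr)
            self.promotions += 1
        # Every later reader is served locally, without touching storage
        return self.cache[key]

# Two linked clones reading the same replica block Z:
cache = HostReadCache()
fetch = lambda disk, addr: f"data@{disk}:{addr}"  # stand-in for a storage read

vm_a = cache.read("replica", 42, fetch)  # promoted from persistent storage
vm_b = cache.read("replica", 42, fetch)  # served from the shared cache

assert vm_a == vm_b
assert cache.promotions == 1  # block Z was promoted only once
```

The point of the sketch: the cache is keyed by disk and block address, not by VM, which is why a hundred clones reading the same golden image share one footprint.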


Writes of every single VM are accelerated individually, as depicted above. Those individually written blocks on the local acceleration media can then directly serve subsequent reads.

As you can see in the screenshot below, the footprint of the virtual desktops is rather low compared to the “Linked Clone Base Disk”. The allocated megabytes of the linked clones are their individual writes.

This reduces the required amount of memory or Flash capacity to accelerate a whole lot of virtual desktops.

So let’s assume the thin-provisioned size of the replica disk is 20GB and you have 100 virtual desktops per host: you only need a 20GB memory or Flash footprint to accelerate all reads of those desktops. These 20GB would be sufficient to keep virtually the whole golden image on the local acceleration media to speed up all VMs on a particular host.
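The back-of-the-envelope math behind this example can be written out as follows (the numbers are the hypothetical ones from the paragraph above, just to show how the footprint scales):

```python
# Hypothetical sizing example: shared vs. per-VM read cache footprint
replica_size_gb = 20      # thin-provisioned size of the golden image
desktops_per_host = 100   # linked clones running on one host

# Without shared promotion, every desktop could promote its own copy
# of the replica blocks in the worst case:
naive_read_footprint_gb = replica_size_gb * desktops_per_host

# With FVP's linked-clone optimization, the replica blocks are cached
# once per host, regardless of how many clones read them:
shared_read_footprint_gb = replica_size_gb

print(naive_read_footprint_gb)   # 2000
print(shared_read_footprint_gb)  # 20
```

Note that this covers only the reads from the replica; the individual writes of each desktop still consume additional footprint on top of that.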

Basically this applies to all non-persistent VDI deployments based on linked-clone technology, no matter whether VMware Horizon View or Citrix XenDesktop, for which FVP recently has been verified as Citrix XenDesktop ready.

With this hidden gem I would like to close not only this post but also the year 2014. I hope you’ve enjoyed it as much as I did. Have a great start, and hopefully see you in 2015.

