Final thoughts about my VSAN experiences

To wrap up my VSAN series I want to share my final thoughts with the community. Please feel free to comment and share yours!

All my experiences with VSAN are based solely on my lab with the minimum deployment of just three hosts and without any real-world workload, but I would still say I’m able to rate the overall impression I got.

In my case the setup itself was quite simple because I already had a vCenter Server running. In a greenfield deployment the provided bootstrap process may be a bit cumbersome, but it’s no big deal.

The policy-based management in general is really pleasant and offers the flexibility to assign different policies to different workloads or even to different VMDKs on a single VM.
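To illustrate the per-VMDK part, here is a minimal pyVmomi sketch of attaching an existing VM Storage Policy to just one disk of a VM. Everything in it is an assumption on my side: the vCenter name, credentials, the VM name and especially the profileId, which you would have to look up via the SPBM API first.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical lab details -- adjust names, credentials and IDs to your environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.lab.local', user='administrator@vsphere.local',
                  pwd='secret', sslContext=ctx)
content = si.RetrieveContent()

# Find the VM by its guest DNS name (any other inventory lookup works as well).
vm = content.searchIndex.FindByDnsName(None, 'testvm.lab.local', True)

# Pick the second virtual disk of the VM.
disks = [d for d in vm.config.hardware.device
         if isinstance(d, vim.vm.device.VirtualDisk)]
second_disk = disks[1]

# Attach the policy to just this disk; the profileId below is a placeholder
# and has to be retrieved from the SPBM endpoint beforehand.
spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
        device=second_disk,
        profile=[vim.vm.DefinedProfileSpec(profileId='REPLACE-WITH-PROFILE-ID')])
])
vm.ReconfigVM_Task(spec)
```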

The way VSAN handles problems and outages is good, but it also has the potential to cause some trouble if you don’t follow the recommendation to set a proper host isolation response. Please see my “Working with VSAN – Part II” post for details.
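If you prefer to script the isolation response instead of clicking through the Web Client, a hedged pyVmomi sketch could look like the following. The cluster name and connection details are assumptions from my lab, and ‘powerOff’ is only used as an example value here; see Part II for the reasoning behind the recommended setting.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical lab details -- adjust to your environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.lab.local', user='administrator@vsphere.local',
                  pwd='secret', sslContext=ctx)
content = si.RetrieveContent()

# Walk the inventory to find the cluster (the name is an assumption).
cluster = None
for dc in content.rootFolder.childEntity:
    for child in dc.hostFolder.childEntity:
        if isinstance(child, vim.ClusterComputeResource) and child.name == 'VSAN-Cluster':
            cluster = child

# Set the vSphere HA default host isolation response for all VMs in the cluster.
spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(
        defaultVmSettings=vim.cluster.DasVmSettings(isolationResponse='powerOff')))
cluster.ReconfigureComputeResource_Task(spec, modify=True)
```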

The lack of support for stretched cluster deployments and large 62TB VMDKs is a bit disappointing, but I hope it won’t take too long until these features make it into the product.

From a performance perspective I won’t rate it without any real experience from a productive environment, but I can rate the way it scales, which is quite nice. I would always recommend selecting a chassis which allows future SSD/HDD expansion. Personally I favor the Dell PowerEdge R720xd, which offers support for up to 24 HDDs, redundant SD cards to install the hypervisor, sufficient compute resources and enough slots to add HBAs, RAID controllers or flash cards. Also really important, in my opinion, is the ability to add hosts which do not contribute storage to the VSAN cluster. In my lab I was not able to feel a difference between a VM running on a host with or without a local copy of the VM data.

But please be realistic: if your “working set” doesn’t fit into the SSD cache and I/Os need to be served from disk(s), this can impact application performance. Many people I’ve talked to were wondering why VMware doesn’t give customers the choice to use RAID sets instead of single disk drives to speed up disk operations. I don’t know if there is a technical reason behind this requirement or just the vision of using storage in a more efficient way.

When it comes to networking, 1 GbE will probably be sufficient for smaller deployments, but you also have the ability to mitigate potential bottlenecks by using multiple network adapters to handle the VSAN workload.

I’ve also talked to some VMware folks who don’t see VSAN as a 1:1 replacement for classic SANs yet. In the end we agreed that it heavily depends on the planned use case and the expected workload. A huge I/O-hungry monster database with hundreds of GB or even TB of data is probably not the best use case for VSAN; just keep that in mind.

However, I can indeed see it “replacing” entry-level SAN solutions for customers running smaller environments with reasonable workloads. The huge benefit is the simplified management, which lets admins keep working in environments they already know well, like Ethernet networking and vSphere.

But all that glitters is not gold. What really annoyed me was a problem with the VM Storage Policies, or actually with the VSAN Storage Providers; there is a known issue with vSphere 5.5 Update 1. In my opinion this is not supposed to happen when releasing an update and making such a hyped solution GA. To cut some corners and speed up fixing the issue I moved all my hosts to a new VMware vCenter Server Appliance, which was no problem for VSAN itself.

So overall I really enjoyed working with VSAN, and I now feel comfortable recommending it to customers as long as it fits into the environment and matches the expected workloads. This is important for me personally, because I think you should always stand behind a solution you sell to a customer.

Working with VSAN – Part IV

To continue my “Working with VSAN” series, this time I want to challenge the scalability (at least as far as that was possible within my lab). But see for yourself.

Performance scaling by adding disk groups

To see how VSAN scales when adding disks I did the following tests:

IOMeter @ 32 QD, 75/25 RND/SEQ, 70/30 R/W, 8KB block size, in combination with different disk group and VM storage policy settings. It’s actually not about the absolute values or settings, it’s about showing the scalability. Not to mention that the SSDs used are pretty old (1st-gen OCZ Vertex) and differ in performance!

RUN1

Failures to Tolerate (FTT): 0

VMDK Stripe: 1

[Screenshots: VM Storage Policy with FTT=0 / stripe width 1 and the corresponding IOMeter results]

RUN2

Failures to Tolerate (FTT): 0 – so still on one host…

VMDK Stripe: 2 – … but across two disk groups!

[Screenshots: VM Storage Policy with FTT=0 / stripe width 2 and the corresponding IOMeter results]

To be able to combine multiple stripes as shown above with FTT > 0, you will need multiple disk groups in each host to get the same performance. In my case I had just a single host with two disk groups, so I was not able to perform the same test with an FTT = 1 policy.

Changing VM Storage Policies

To wrap up this post I want to mention that during my tests I always used the exact same VMDK, so I had to change the policy multiple times. Of course it took some time until VSAN had moved the data around so that it was compliant with the new policy, but it worked like a charm and I thought it was also worth mentioning!

But what about the network?

Multiple VMKernel Interfaces 

In case you are planning to run VSAN over a 1GbE network (which is absolutely supported), multiple VMKernel interfaces could be a good way to mitigate a potential network bottleneck. This would enable VSAN to leverage multiple 1GbE channels to transport data between the nodes.

To be able to use multiple VMKernel ports, keep in mind that each VMK needs to live in a different subnet and that each port group should have a single vmnic set to active while all others are set to standby.

[Screenshots: active/standby vmnic teaming and the two VSAN VMKernel interfaces]
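For reference, the interface tagging itself can also be scripted; below is a minimal pyVmomi sketch based on my lab, with the host name and the vmk3/vmk4 device names being assumptions from my setup. The active/standby NIC teaming still has to be configured on the port groups as shown above.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical lab details -- adjust to your environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.lab.local', user='administrator@vsphere.local',
                  pwd='secret', sslContext=ctx)
content = si.RetrieveContent()

# Locate the host by its DNS name (ESX3 in my lab).
host = content.searchIndex.FindByDnsName(None, 'esx3.lab.local', False)

# Tag both VMKernel interfaces for VSAN traffic. The port list passed in
# represents the complete VSAN network configuration of the host, so include
# every interface that should carry VSAN traffic.
config = vim.vsan.host.ConfigInfo(
    networkInfo=vim.vsan.host.ConfigInfo.NetworkInfo(
        port=[
            vim.vsan.host.ConfigInfo.NetworkInfo.PortConfig(device='vmk3'),
            vim.vsan.host.ConfigInfo.NetworkInfo.PortConfig(device='vmk4'),
        ]))
host.configManager.vsanSystem.UpdateVsan_Task(config)
```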

To see how this would scale I moved a virtual machine to a host which had no local copy of that particular VM’s data, so that all the reads (and writes) had to traverse the network.

[Screenshot: two VMs with a local witness] I had already set up the VSAN networking a couple of days before, so I started with the desired multi-VMK setup and was quite happy with the results.

[Screenshots: esxtop and IOMeter results with FTT=1 / stripe width 1 using two VMKernel interfaces]

Then I disabled VSAN on the second VMKernel interface and also moved the vmnics down to standby only. The result was as expected: VSAN was just using a single vmnic.

[Screenshots: esxtop and IOMeter results with FTT=1 / stripe width 1 using one VMKernel interface]

To verify these results I wanted to switch back to the multi-VMKernel setup, but for some reason I wasn’t able to get it working again. I moved the vmnic up to be active again (as depicted above) and re-enabled VSAN traffic on the second VMKernel interface (VMK4). But since then I have been unable to see VSAN traffic across both NICs. When I disable VSAN traffic on the first VMKernel interface (VMK3) it switches to the second one (VMK4), which tells me that the interfaces are generally working. At this point I’m a bit clueless and am asking you guys: have you already tried this setup? What are your results? Am I missing something or did I misunderstand something? Are there any specific scenarios where the multi-VMK setup kicks in? I would love to get some feedback!

Working with VSAN – Part III

In Parts I and II I already tested some scenarios which may impact your VSAN cluster, like simulating the outage of two hosts by simply isolating them. This time I’m going to torture my VSAN lab even more; read on to see how this turned out.

What if I put one host into maintenance mode and another node fails?

[Diagram: one host in maintenance mode while another host fails]

Will the remaining node be able to run the VMs and even restart those of the failed host?

Maintenance Mode with “Full data migration”

[Screenshot: maintenance mode error]

VSAN didn’t allow me to put a host into maintenance mode using full data migration. I had a couple of VMs running with a VM Storage Policy of FTT (Failures to Tolerate) = 1, so with just three hosts a full evacuation would violate that rule.

 

Maintenance Mode with “Ensure accessibility”

This mode worked, since it only ensures that at least one copy of the VM data plus the witness (or the second copy of the VM data) remains available on the other nodes. It didn’t move any data around, since there were no eligible nodes available.*
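For completeness, the decommission mode can also be requested through the API when entering maintenance mode. Here is a minimal pyVmomi sketch; host name and connection details are again assumptions from my lab.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical lab details -- adjust to your environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.lab.local', user='administrator@vsphere.local',
                  pwd='secret', sslContext=ctx)
content = si.RetrieveContent()

host = content.searchIndex.FindByDnsName(None, 'esx3.lab.local', False)

# Enter maintenance mode with the VSAN decommission mode "Ensure accessibility".
# The other objectAction values are 'noAction' and 'evacuateAllData'
# (the latter corresponds to "Full data migration").
maint_spec = vim.host.MaintenanceSpec(
    vsanMode=vim.vsan.host.DecommissionMode(objectAction='ensureObjectAccessibility'))
host.EnterMaintenanceMode_Task(timeout=0,
                               evacuatePoweredOffVms=False,
                               maintenanceSpec=maint_spec)
```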

Then I simulated several outage scenarios to see what would happen:

  • Host Reboot of ESX3

The remaining host ESX2 was fully functional and restarted the VM running on ESX3.

  • Disk Failure on ESX3

[Screenshot: unhealthy disk group]

The remaining host ESX2 was fully functional AND the VMs running on ESX3 stayed functional, since they were able to access their disks over the network on ESX2. So it was also no problem to vMotion the VMs from ESX3 over to ESX2 and reboot the host to fix the disk failure.

By the way, I simply re-plugged the SSD I had pulled out to simulate the failure. After re-importing the foreign configuration in the PERC controller, the volume was back online, VSAN recognized it, and no data was lost.

[Screenshot: PERC foreign configuration import]

  • Network Partition – ESX3

The last test was to isolate ESX3 and as expected the remaining host ESX2 was fully functional and restarted the VM running on ESX3.

Honestly? This is way better than I expected, since we are still talking about a THREE node cluster. OK, I admit there could also be scenarios where things go wrong. Take the scenario above: if there had been no copy of the VM data on ESX2, those VMs on ESX3 would have crashed.

But again, a three-node cluster is just the minimum deployment, so if you want to make sure you can withstand multiple host failures, you have to add more nodes. It’s as simple as that.

* In contrast to this scenario, when you put two hosts into maintenance mode, VSAN will start to move data around!

RAID0 Impact

OK, now the disk is gone and I want to replace it. Usually, when using a RAID level other than RAID0, this would be no problem since the volume would still be online. In my case I was forced to create a single-disk RAID0 for every device because the PERC 6/i doesn’t support pass-through mode. Even after I replaced the drive, the RAID0 remained offline, which meant I had to reboot the host to manually force it online again. With a pass-through capable controller this would be no problem, since it would simply pass the new disk through. The RAID0 approach also rules out the option of using a hot spare disk, since from a logical standpoint it wouldn’t make any sense to replace a disk within a RAID0 with an empty disk.

 

Stay tuned for more VSAN experiences!