As you all know, there has been huge hype around the first release of VMware's VSAN, but I decided to wait until I got my hands on it to be able to rate it appropriately. I've never liked jumping onto the marketing train; I prefer to rate a product or solution based on hands-on experience, and that's what I'm going to do. This post covers the setup of VSAN to power my homelab.
I just got two Dell PowerEdge R710 servers from eBay which should host my homelab in the future. Both servers are equipped with a PERC 6/i controller which of course is NOT on the VSAN HCL.
But I knew this would work, because Jad is already using a similar setup to power his lab.
The third host is a self-made white-box which only participates in the cluster to provide the minimum number of three hosts. More about that in a second.
All three server chassis can hold up to 6 x 3.5” drives, which should be sufficient. I used 2.5” to 3.5” carriers to be able to put my OCZ Vertex SSDs into the chassis, along with a single 1TB drive per host.
To be able to use the PERC 6/i or any other unsupported RAID controller, you will need to configure a RAID 0 volume on every drive. And don’t forget to initialize the RAID volumes to clear the previous content of the drives, because VSAN requires the drives to be empty.
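If you want to double-check from the ESXi side that a drive really is empty before handing it to VSAN, you can inspect its partition table with partedUtil. This is just a sketch; the device ID below is a placeholder, so substitute the naa.* ID of your own drive (you can list them with `esxcli storage core device list`):

```shell
# Show the partition table of a local device.
# An empty drive shows the disk geometry but no partition entries.
partedUtil getptbl /vmfs/devices/disks/naa.<your-device-id>
```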
Then I booted the hosts and added them to my existing vCenter server. If you are looking for a process to deploy the vCenter server when you have no other storage available than the VSAN disks, you should check out William’s post. Before I enabled VSAN, I created a new cluster and added the two R710 hosts to it. I also migrated all ESXi standard vSwitches and VMkernel ports to a distributed switch. Why? Because VSAN also comes with a vDS license, enabling users to make use of features like Network I/O Control or other advanced features like Load-Based NIC Teaming (which in my opinion is pretty cool).
Then it was time to enable VSAN!
I selected the manual claiming mode to be able to show the claiming process in more detail.
Because I used a controller which doesn’t support pass-through mode, I had to manually tell the ESXi host that one of the RAID 0 volumes is actually an SSD.
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d naa.60024e8072c402001abb72a209c8e756 -o enable_ssd
esxcli storage core claiming reclaim -d naa.60024e8072c402001abb72a209c8e756
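To confirm that the SATP claim rule actually took effect, you can list the device details again; the device should now be reported as an SSD. The device ID is the same one used in the commands above:

```shell
# Verify the SSD flag -- the output should include "Is SSD: true"
esxcli storage core device list -d naa.60024e8072c402001abb72a209c8e756
```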
Then both hosts were ready to claim those disks for VSAN.
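If you prefer the command line over the web client, the disk groups can also be created per host with esxcli. This is a sketch under the assumption of one SSD and one HDD per host; the HDD device ID below is a placeholder for your own capacity drive:

```shell
# Create a VSAN disk group: one SSD for cache (-s) plus
# one or more magnetic disks for capacity (-d).
esxcli vsan storage add -s naa.60024e8072c402001abb72a209c8e756 \
    -d naa.<your-hdd-device-id>
```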
But as you can see, the cluster was not so happy with just two hosts, so I was not allowed to deploy any virtual machines on my VSAN datastore.
At this point I didn’t plan to add a third host, so I had to manually edit the default VSAN policies to force provisioning.
esxcli vsan policy getdefault
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c cluster -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
After a while the cluster was looking fine.
I assigned the policy to a virtual machine and storage vMotioned it over to the vsanDatastore.
Up to this point I had hoped to be able to run an unsupported configuration with just two hosts, but this doesn’t seem to be possible if a certain level of redundancy is required or desired.
So I decided to add the third host I mentioned earlier, just to be able to apply the policy.
As soon as the third host was a part of the cluster the VM became compliant.
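With three hosts in the cluster, the forced-provisioning workaround from above is no longer needed, so the defaults can presumably be set back to just tolerating one host failure (mirroring the syntax of the earlier commands):

```shell
# Restore the default policies without the forceProvisioning flag
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1))"
esxcli vsan policy setdefault -c cluster -p "((\"hostFailuresToTolerate\" i1))"
```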
At this point I considered the VSAN setup to be complete. As a final step I enabled VMware HA & DRS. To function properly, VMware HA in particular should be enabled after VSAN so that it is aware of the VSAN datastore.
So far I can only rate the initial setup, which I have to admit worked like a charm without any problems. But before I can give a final verdict, I want to cover some additional points like failover behavior and day-to-day tasks in a dedicated post, so stay tuned.