FreeNAS 9.3 NFS share – ESXi datastore – Unable to connect to NFS server – Fixed!

Update 2: I was able to “fix” the problem by downgrading to release 9.2.1.9, which works like a charm!

Update: Today I ran into the same problem again and I’m still working on it. This time adding or modifying a comment didn’t help. Once I find out what is causing the problem, I’ll update the post.

Since yesterday evening I had been trying to mount a new FreeNAS (9.3) NFS share as an ESXi 5.5 datastore, and no matter what I tried, the attempt always failed:

[Screenshot: Unable to connect to NFS server]

~ # esxcfg-nas -a FreeNAS:Volume1 -o 192.168.180.150 -s /mnt/Volume1

Connecting to NAS volume: FreeNAS:Volume1

Unable to connect to NAS volume FreeNAS:Volume1: Sysinfo error on operation returned status : Unable to connect to NFS server. Please see the VMkernel log for detailed error information
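Before digging into the logs, it can’t hurt to rule out basic reachability from the ESXi host first. A minimal sketch from the ESXi shell (the IP simply mirrors the NFS server used above; NFSv3 needs the portmapper and the NFS port to be reachable):

~ # vmkping 192.168.180.150        # ICMP reachability via a VMkernel interface
~ # nc -z 192.168.180.150 111      # portmapper/rpcbind
~ # nc -z 192.168.180.150 2049     # NFS

In my case all of that was fine, so the logs were the next stop.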

 

ESXi: /var/log/vmkernel.log

2014-12-15T19:33:30.901Z cpu2:55811)NFS: 157: Command: (mount) Server: (192.168.180.150) IP: (192.168.180.150) Path: (/mnt/Volume1) Label: (FreeNAS:Volume1) Options: (None)

2014-12-15T19:33:30.901Z cpu2:55811)StorageApdHandler: 698: APD Handle a99dc9de-d6c49dd6 Created with lock[StorageApd0x41118f]

2014-12-15T19:33:40.023Z cpu1:34783)World: 14296: VC opID hostd-00ac maps to vmkernel opID f3171169

2014-12-15T19:34:00.023Z cpu1:34783)World: 14296: VC opID hostd-d3b4 maps to vmkernel opID 707e611f

2014-12-15T19:34:01.293Z cpu3:55811)StorageApdHandler: 745: Freeing APD Handle [a99dc9de-d6c49dd6]

2014-12-15T19:34:01.293Z cpu3:55811)StorageApdHandler: 808: APD Handle freed!

2014-12-15T19:34:01.293Z cpu3:55811)NFS: 168: NFS mount 192.168.180.150:/mnt/Volume1 failed: Unable to connect to NFS server.

 

FreeNAS: /var/log/messages

Dec 15 20:33:42 FreeNAS mountd[3769]: mount request succeeded from 192.168.180.80 for /mnt/Volume1

Dec 15 20:33:57 FreeNAS mountd[3769]: mount request succeeded from 192.168.180.80 for /mnt/Volume1

Dec 15 20:34:02 FreeNAS mountd[3769]: mount request succeeded from 192.168.180.80 for /mnt/Volume1
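So mountd on the FreeNAS side claims the mount request succeeded while ESXi still refuses to connect. If you want to double-check the export from the FreeNAS side as well, here is a quick sketch from a FreeNAS shell (the path matches the volume used in this post):

~ # showmount -e localhost     # does mountd actually export /mnt/Volume1?
~ # cat /etc/exports           # the exports file the FreeNAS GUI generates for mountd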

This screenshot looks like many others floating around in multiple community threads, and the config seemed to work for a couple of users:

[Screenshot: FreeNAS NFS share properties]

But it took me a while to realize that I should try to add a “Comment”:

[Screenshot: FreeNAS NFS share comment field]

~ # esxcfg-nas -a FreeNAS:Volume1 -o 192.168.180.150 -s /mnt/Volume1

Connecting to NAS volume: FreeNAS:Volume1

FreeNAS:Volume1 created and connected.

[Screenshot: Connected FreeNAS share]
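If you prefer to verify the result on the command line as well, the mounted NAS datastores can be listed with the same tools used above; both commands should show the new volume:

~ # esxcfg-nas -l              # lists all configured NAS datastores
~ # esxcli storage nfs list    # same information via esxcli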

I hope this helps to save you some time!

NVMe – Flash unleashed

In this post I want to give you a quick introduction to NVMe, the next generation of Flash devices.

So what is NVMe, also known as Non-Volatile Memory Express or the Non-Volatile Memory Host Controller Interface Specification (NVMHCI)?

The idea behind NVMe is to improve the storage stack by optimizing the way an application accesses a Flash device. NVMe not only offers a lightweight specification optimized for Flash, but also cuts some corners by removing components from the I/O path, such as the RAID controller. NVMe leverages PCIe as the transport medium, which offers high bandwidth and a direct path to the host’s CPU and memory. This in turn removes another potential bottleneck: the limited bandwidth of a SAS or SATA connection.

So the overall goal was to empower modern Flash devices to deliver their full potential instead of being slowed down by a storage stack that was primarily designed for slow spinning disks.

I found this graphic from Intel, which illustrates the idea quite well:

[Graphic: Intel NVMe]

Source: https://communities.intel.com/community/itpeernetwork/blog/2014/06/16/intel-ssd-p3700-series-nvme-efficiency

Below you can find a table with a basic overview of the bandwidth PCIe can provide per lane.

| PCI Express version | Bandwidth per lane | Throughput per lane |
|---------------------|--------------------|---------------------|
| 1.0                 | 2 Gbit/s           | 250 MB/s            |
| 2.0                 | 4 Gbit/s           | 500 MB/s            |
| 3.0                 | 7.877 Gbit/s       | 984.6 MB/s          |
| 4.0                 | 15.754 Gbit/s      | 1969.2 MB/s         |

Source: http://en.wikipedia.org/wiki/PCI_Express
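As a quick sanity check of the PCIe 3.0 numbers above, here is a back-of-the-envelope calculation as an awk one-liner; it assumes 8 GT/s per lane and the 128b/130b encoding PCIe 3.0 uses:

awk 'BEGIN { gbit = 8 * 128/130; printf "%.3f Gbit/s per lane, %.1f MB/s per lane\n", gbit, gbit * 1000 / 8 }'

This prints roughly 7.877 Gbit/s and 984.6 MB/s, matching the table.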

As you can see, even a single PCIe 3.0 lane provides more bandwidth than a 6 Gb/s SAS connection. The cool thing about NVMe is that it is not limited to the form factor of PCIe Flash cards; it has also made its way into a new generation of 2.5″ SSDs. “How?” you might ask, because it’s not obvious.

Instead of putting a RAID controller into your server to attach SSD drives via SAS or SATA, you can use a PCIe extender card. This adapter extends PCIe connectivity via SFF-8639 cables to the server backplane as usual. This of course only works with corresponding backplanes and NVMe SSDs that have a PCIe-based interface, like Intel’s DC P3700 series.

From what I’ve seen, a single extender plugged into a PCIe x16 slot can provide four PCIe x4 interfaces to connect four 2.5″ drives. This means each SSD can make use of 31.508 Gb/s of available bandwidth! This shifts the bottleneck back from the transport medium to the device itself.
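The 31.508 Gb/s figure is simply four PCIe 3.0 lanes at the per-lane rate from the table above; a quick check:

awk 'BEGIN { printf "%.3f Gbit/s for an x4 link\n", 4 * 7.877 }'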

The new NVMHCI specification mentioned above defines a new host interface as well as a command set for use with non-volatile memory, which can be compared to SCSI or AHCI & SATA to some extent. For example, SCSI (Small Computer System Interface) also defines a protocol, a command set, and interfaces like SAS. All of them define a way to exchange data between the attached storage device and the host’s system memory.

I’ve tried to outline the relationships below:

  • Host system <- AHCI -> HBA/Controller <- SATA -> DISK
  • Host system <- SCSI -> HBA/Controller <- SAS -> DISK
  • Host system <- NVMHCI -> FLASH DEVICE

To make use of the new NVMe-based devices inside your host(s), you need a proper driver installed that can also take care of the SCSI-to-NVMe translation. This enables NVMe devices to function within the existing operating system I/O stack.
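On an ESXi host this is easy to check once such a device and driver are present; a rough sketch (the grep pattern is just an example, adjust it to the actual driver name):

~ # esxcli software vib list | grep -i nvme    # is an NVMe driver VIB installed?
~ # esxcli storage core adapter list           # the NVMe controller should show up as a storage adapter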

[Diagram: NVMe driver]

Source: http://www.nvmexpress.org/wp-content/uploads/NVM-Express-SCSI-Translation-Reference-1_1-Gold.pdf

More technical details, like a list of the NVM Express key attributes, can be found here, and news about the recently released NVMe 1.2 specification here.

Whereas Windows Server 2012 R2 ships with native NVMe support, ESXi was lagging a little behind. But on the 12th of November 2014, VMware released the very first “VMware Native NVM Express Driver” for ESXi 5.5, which can be downloaded here.
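Installing the driver follows the usual VIB/offline-bundle workflow; a minimal sketch, assuming the downloaded bundle has been copied to a datastore (the path and file name below are placeholders, not the actual bundle name):

~ # esxcli software vib install -d /vmfs/volumes/datastore1/nvme-driver-offline-bundle.zip   # placeholder path/file name
~ # reboot    # a reboot is typically required before the new driver is loaded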

Before wrapping up this post, I also want to mention that there are other specifications in the making, like SCSI over PCIe (SOP) or SATA Express, which aim to achieve the same result by leveraging existing protocols. So at some point in time it’s possible that multiple approaches will co-exist, but at the moment NVMe leads the trail of innovation.

NVMe, like other next-generation media such as DDR4 memory or Flash storage on the memory channel, shows the real power of server-side acceleration. These new technologies make their way into the hypervisor relatively quickly compared to a central storage system, which allows FVP customers to adopt them early on. FVP enables its users to leverage low-latency, high-IOPS media as close to the virtual machine as possible. With the ability to easily scale performance by adding new devices or new hosts to a cluster, we can scale performance as the applications require. This illustrates how future-proof FVP as a storage acceleration platform really is.

vSphere 5.5 – vCenter Server Appliance – Could not connect to one or more vCenter Server systems

This is probably the shortest post I’ve written so far, but it might be worth sharing.

Usually I boot my lab only when I actually do something with it, and sometimes some of the VMs, services, etc. don’t like that up and down.

Today I wanted to log into my vCenter Server Appliance, which had been running fine for the last couple of months, when both clients prompted me with an error. The Web Client was a little bit more precise:

[Screenshot: Could not connect to one or more vCenter Server systems]

Could not connect to one or more vCenter Server systems: https://192.168.178.110:443/sdk

Error message of the vSphere Client:

Call “ServiceInstance.RetrieveContent” for object “ServiceInstance” on Server “192.168.178.110” failed.

To cut some corners here: in the end it turned out to be a simple DNS problem, because my DNS server hadn’t booted properly. So before you start troubleshooting, check your DNS resolution first.
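The check itself is trivial; a minimal sketch from the appliance’s shell (the FQDN below is a placeholder, the IP is my vCSA from above):

~ # cat /etc/resolv.conf            # which DNS server is the appliance actually using?
~ # nslookup <vcenter-fqdn>         # does forward resolution work?
~ # nslookup 192.168.178.110        # and reverse resolution?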