ESXi shows false Memory Warnings / Alerts

It can happen that your ESXi host(s) show a "random" memory error even though everything seems to be fine. These warnings or alerts can be caused by old log entries and need to be cleared manually to fix the problem. Of course, if you are not certain whether this might be a real hardware issue, you should definitely raise an incident with the corresponding vendor and perform a hardware diagnosis.

However, resetting the sensors or just updating them won't help here; you will need to perform the following steps:

  1. Switch the view from “Sensors” to “System event log”
  2. Reset the event log
  3. Switch the view from “System event log” to “Sensors”
  4. Perform an update to check if the warning disappears
  5. If it didn’t work, try to reset the sensors and update again
  6. If this didn't work either, log in to the host and restart the management agents (see the commands below)
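As a minimal sketch of step 6: from an SSH or ESXi Shell session the management agents can be restarted with a single script, and on my 5.x hosts the individual init scripts are also available if you prefer to restart hostd and vpxa separately.

# restart all management agents on the host
services.sh restart

# alternatively, restart hostd and vpxa individually
/etc/init.d/hostd restart
/etc/init.d/vpxa restart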

This fixed the problem for me and I hope it helps to save you some time.

VMware ESXi – async drivers

Recently we had some storage problems with a couple of ESXi hosts, and one of the recommendations from VMware support was to update the async drivers. I think there isn't as much awareness of these drivers as there should be, which is why I wrote this short post.

Async drivers are developed by partners like QLogic and certified by VMware to optimally support their hardware. A good example is the pretty common Fibre Channel HBA QLE2562.

A certain version of these drivers is shipped with the VMware ESXi image or the corresponding OEM images. As you can probably imagine, those drivers are updated more often than the ESXi images. An overview of which QLogic drivers are included in the VMware ESXi images can be found here: Default QLogic HBA driver/firmware versions for ESXi

OEMs like Dell will probably include more recent drivers for their hardware when they release an updated version of the image like the following:

Dell – VMware ESXi 5.1 Update 2 Recovery Image

VMware ESXi 5.5 Image Dell customized ISO

In the case of the Dell images it's recommended to take a look at the "Important Information" section on the Dell download site to check which drivers are really included:

Qlogic HBA Drivers & its versions (Available at vmware.com)
==================================================
qla2xxx – 934.5.29.0-1vmw
qla4xxx – 634.5.26.0
qlcnic – 5.1.157
qlge – v2.00.00.57

In addition, the drivers are also available in the VMware patch depot for manual download. These can be used to create a custom ESXi image using the "Image Builder CLI" or for a manual installation via the CLI:

esxcli software vib install -d offline-bundle.zip
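Just as a hedged sketch of how I usually handle this: the offline bundle has to be referenced with an absolute path on the host (the datastore name below is only an example), and putting the host into maintenance mode plus doing a dry run first is optional but has saved me a few surprises.

# optional: put the host into maintenance mode first
esxcli system maintenanceMode set -e true

# optional dry run to see what would be installed
esxcli software vib install -d /vmfs/volumes/datastore1/offline-bundle.zip --dry-run

# actual installation, followed by a reboot of the host
esxcli software vib install -d /vmfs/volumes/datastore1/offline-bundle.zip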

The offline bundles can be found in the my.vmware.com download portal within the "Drivers & Tools" section, listed under "Driver CDs". Those downloads also include the corresponding release notes.

The "mean" thing is that these drivers are not shipped via the Update Manager. Of course you could use the Update Manager to install them, but that requires downloading them manually and importing them into the patch repository. Depending on the number of HBAs/NICs this can be a bit cumbersome, but it is the only way to keep them up to date other than updating ESXi using a new image.

The KB article 2030818 (supported driver firmware versions for I/O devices) offers a link collection to the corresponding vendors like QLogic and their own VMware compatibility matrix. This matrix lists the latest driver version as well as the recommended firmware, which should be applied accordingly.

To check which version is currently installed on your host, you could also use the Update Manager or the following CLI command:

esxcli software vib list
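If you are only interested in the HBA or NIC drivers, piping the output through grep keeps the list short; the pattern below is just an example matching the QLogic VIB names on my hosts.

# show only the QLogic related VIBs (pattern is an example)
esxcli software vib list | grep qla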

Which driver is actually being used by the HBA can be verified via

esxcfg-scsidevs -a
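Once you know the module name from the output above, the version of the loaded driver can be queried as well; qla2xxx below is just the example module from my hosts.

# show details (including the version) of the loaded driver module
vmkload_mod -s qla2xxx | grep -i version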

But why is it actually important?

As with all pieces of software, drivers can contain bugs that need to be fixed, or they gain new features, improved performance or support for new hardware devices. So if an outdated driver contains a bug which affects, for example, the failover behavior of an ESXi host, this can have a significant impact on your environment!

By the way, ESXi 5.5 introduced a new driver architecture called Native Device Driver. Currently an ESXi host can run in hybrid mode to support both the new and the "legacy" drivers, which is why the HCL currently shows two different types of drivers.
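A quick way to see which flavor is actually loaded is to list the kernel modules; on my hosts the legacy QLogic FC driver shows up as qla2xxx, while the native one is called qlnativefc (module names will differ for other hardware, so take the filter pattern only as an example).

# list loaded modules and filter for the QLogic drivers (pattern is an example)
esxcli system module list | grep -i ql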


More useful links:

Identifying the firmware of a Qlogic or Emulex FC HBA (1002413)

Determining which storage or network driver is actively being used on ESXi host (1034674)

Installing async drivers on VMware ESXi 5.0, 5.1, and 5.5 (2005205)

Final thoughts about my VSAN experiences

To wrap up my VSAN series I want to share my final thoughts with the community. Please feel free to comment and share yours!

All the experience I've gained with VSAN is based only on my lab with the minimum deployment of just three hosts and without any real-world workload, but I would say I'm able to rate the overall impression I got.

In my case the setup itself was quite simple because I already had a vCenter Server running; in the case of a greenfield deployment the provided bootstrap process is maybe a bit cumbersome, but no big deal.

The policy-based management in general is really pleasant and offers the flexibility to assign different policies to different workloads or even to different VMDKs of a single VM.

The way VSAN handles problems/outages is good but it also has the potential to cause some trouble if you don’t follow the recommendation to set a proper host isolation response. Please see my “Working with VSAN – Part II” post for details.

The lack of support for stretched cluster deployments and large 62 TB VMDKs is a bit disappointing, but I hope it won't take too long until these features make it into the product.

From a performance perspective I won't rate it without having any real experience from a productive environment, but I can rate the way it scales, which is quite nice. I would always recommend selecting a chassis which allows future SSD/HDD expansion. Personally I favor the Dell PowerEdge R720 XD, which offers support for up to 24 HDDs, redundant SD cards to install the hypervisor, sufficient computing resources and enough slots to add HBAs, RAID controllers or flash cards. I think the ability to add hosts which do not contribute storage to the VSAN cluster is really important. In my lab I was not able to notice a difference between a VM running on a host with or without a local copy of the VM data.

But please be realistic: if your "working set" doesn't fit into the SSD cache and I/Os need to be served from disk(s), this can impact application performance. Many people I've talked to were wondering why VMware doesn't give customers the choice to use RAID sets instead of single disk drives to speed up disk operations. I don't know if there is a technical reason behind this requirement or just the vision of using storage in a more efficient way.

When it comes to networking, 1 GbE will probably be sufficient for smaller deployments, and you also have the ability to mitigate potential bottlenecks by using multiple network adapters to handle the VSAN workload.

I've also talked to some VMware folks who don't see VSAN as a 1:1 replacement for classic SANs yet. In the end we agreed that it heavily depends on the planned use case and the expected workload. A huge I/O monster database with hundreds of GB or even TB of data is probably not the best use case for VSAN; just keep that in mind.

However, I can indeed see it for customers running smaller environments with reasonable workloads to "replace" entry-level SAN solutions. The huge benefit is the simplified management, which enables admins to work in their well-known environments like Ethernet networking and vSphere.

But all that glitters is not gold. What really annoyed me was a problem with the VM Storage Policies, or actually with the VSAN Storage Providers; there is a known issue with vSphere 5.5 Update 1. In my opinion this is not supposed to happen when releasing an update and making such a hyped solution GA. To cut some corners and speed up fixing the issue I moved all my hosts to a new VMware vCenter Server Appliance, which was no problem for VSAN itself.

So overall I really enjoyed working with VSAN and now I feel comfortable to recommend it to customers if it fits into the environment and it matches the expected workloads. This is important for me personally because I think you should always stand up for a solution you sell to a customer.

Working with VSAN – Part IV

To continue my "Working with VSAN" series, this time I want to challenge the scalability (at least as far as was possible within my lab). But see for yourself.

Performance scaling by adding disk groups

To see how VSAN scales when adding disks I did the following tests:

IOMeter @ 32 QD, 75/25 RND/SEQ, 70/30 R/W, 8 KB blocks, in combination with different disk group and VM storage policy settings. But it's actually not about the absolute values or settings, it's about showing the scalability. Not to mention that the SSDs used are pretty old (1st gen OCZ Vertex) and differ in performance!


Failures to tolerate (FTT): 0

VMDK Stripe: 1




Failures to tolerate (FTT): 0 – So still on one host…

VMDK Stripe: 2 – … but on two disk groups!


To be able to combine multiple stripes as shown above with FTT > 0, you will need multiple disk groups in each host to get the performance. In my case I only had a single host with two disk groups, so I was not able to perform the same test with an FTT = 1 policy.

Changing VM Storage Policies

To wrap up this post I want to mention that during my tests I always used the exact same VMDK, so I had to change the policy multiple times. Of course it took some time until VSAN had moved the data around so that it was compliant with the new policy. But it worked like a charm and I thought it was also worth mentioning!

But what about the network?

Multiple VMKernel Interfaces 

In case you are planning to run VSAN over a 1GbE network (which is absolutely supported), multiple VMKernel interfaces could be a good way to mitigate a potential network bottleneck. This would enable VSAN to leverage multiple 1GbE channels to transport data between the nodes.

To be able to use multiple VMKernel ports, keep in mind to use a different subnet for each VMK and to always set a single vmnic as active and all others to standby.
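Just as a hedged sketch of the commands involved (the interface names vmk3/vmk4 are only examples from my lab), VSAN traffic can be enabled on additional VMKernel interfaces via esxcli and the result verified afterwards:

# enable VSAN traffic on two VMkernel interfaces (names are examples)
esxcli vsan network ipv4 add -i vmk3
esxcli vsan network ipv4 add -i vmk4

# verify which interfaces are carrying VSAN traffic
esxcli vsan network list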


To see how this would scale, I moved a virtual machine to a host that had no data of that particular VM stored locally, so that all the reads (and writes) had to traverse the network.

I had also set up the VSAN networking a couple of days before, so I started with the desired multi-VMK setup and was quite happy with the results.

(Screenshots: esxtop and IOMeter results with two VMKernel interfaces, FTT=1, stripe=1)

Then I disabled VSAN on the second VMKernel interface and also moved its vmnic down to standby only. The result was as expected: VSAN was using just a single vmnic.

(Screenshots: esxtop and IOMeter results with a single VMKernel interface, FTT=1, stripe=1)
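For reference, and again only as an assumed sketch with my example interface name, disabling VSAN traffic on the second interface boils down to:

# remove the second interface from the VSAN network configuration (vmk4 is an example)
esxcli vsan network ipv4 remove -i vmk4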

To verify these results I wanted to switch back to the multi-VMKernel setup, but for some reason I wasn't able to get it to work again. I moved the vmnic up to be active again (as depicted above) and re-enabled VSAN traffic on the second VMKernel interface (VMK4). But since then I have been unable to see VSAN traffic across both NICs. When I disable VSAN traffic on the first VMKernel interface (VMK3), it switches to the second interface (VMK4), which tells me that the interfaces are generally working. At this point I'm a bit clueless and am asking you guys: have you already tried this setup? What are your results? Am I missing something, or did I misunderstand something? Are there any specific scenarios where the multi-VMK setup kicks in? I would love to get some feedback!