Monday, May 17, 2021

HPE H240 - the new value king of HBAs

With the release of ESXi 7.0 came a farewell to the vmklinux driver stack. Device vendors focused on writing native drivers for current-generation hardware, meaning that a large subset of network cards and storage controllers were left behind (a near-complete list of deprecated devices can be found on vDan's blog: https://vdan.cz/deprecated-devices-supported-by-vmklinux-drivers-in-esxi-7-0/). One casualty in particular was the LSI 92xx series of HBAs, which relied on the mpt2sas driver. These cards are widely used for 6.x vSAN as well as other storage server operating systems. While the 92xx series can still be used in BSD- and Linux-based systems, this leaves a gap for those who want to run vSAN 7.0. The LSI 93xx is readily available and uses the lsi_msgpt3 native driver, but typically runs in the $80-100+ range.


The new value king is... an unlikely candidate. It isn't vendor neutral, although in my testing it should work in most mainstream systems. It advertises 12Gb/s connectivity, but still uses the older-style mini-SAS SFF-8087 connectors. The new value king for vSAN 7.0 usage is the HPE Smart HBA H240. At $22-30 (as of this writing) depending on the bracket needed, the H240 proves to be a pretty capable card. It supports RAID 0, 1 and 5, though I wouldn't recommend it for that use case since it has no cache. What is critical about this card is that it has a native driver, which is supported in ESXi 7.0 and included in the standard ESXi image.


The major concern I had was whether this card would work in a non-HPE system. My homelab is composed of unorthodox, whiteboxed machines. The Cisco UCS C220 M4 is the only complete server I have - the 2-node vSAN cluster I ran on 6.x consisted of a Dell Precision 7810 and a SuperMicro X10 motherboard in an E-ATX case. Introducing the card to both systems went without issue - all drives were detected, and the card defaulted to drive passthrough mode (non-RAID). One caveat is that I am using directly cabled drives - the only backplane I have to test with is the Cisco's, and it doesn't appear to support hot swap. The other issue I've found is that you cannot set the controller as a boot device, although I didn't purchase it for that purpose. If you're looking for these capabilities, I would suggest sticking with HPE ProLiant servers or finding a cheaper LSI 93xx series controller.


For my use case, the HPE H240 was a drop-in replacement that brought my old 2-node vSAN cluster onto 7.0 without much drama. The H8SCM micro-ATX server remains on 6.7, but is more than capable of running the 7.0 witness appliance. Here are a few shots of the environment post-HBA swap:




Wednesday, May 12, 2021

Cisco UCS M4 - Out of band management update post-Flash era (the easy way)

Historically, the easiest way to update Cisco UCS CIMC firmware has been to load the update ISO in the KVM virtual media, reboot the server and check off the required updates. Now that Flash has been discontinued, folks can no longer log in to the CIMC and access the KVM. The only official workaround is to write the update to a USB drive via dd, using a custom bash script on a supported Linux distribution.

Fortunately, there is a much easier workaround that involves using your CIMC login credentials to download the JNLP file required for KVM. Enter the following into your browser:

https://<CIMC_IP>/kvm.jnlp?cimcAddr=<CIMC_IP>&tkn1=<CIMC_username>&tkn2=<CIMC_Password>


This downloads the JNLP file and allows you to access the KVM without ever logging into the Flash-based CIMC interface. I'd suggest updating as soon as possible; the HTML5 client is excellent. Unfortunately, the UCS M3 CIMC will always be Flash-based, but the same workaround can be used to access its KVM.
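If you manage several servers, the URL pattern above is easy to script. Here's a minimal Python sketch that builds the kvm.jnlp URL and fetches it; the endpoint and the cimcAddr/tkn1/tkn2 parameter names come straight from the URL above, while the function names and the example host/credentials are just illustrative. Certificate verification is skipped because CIMCs typically present self-signed certificates.

```python
# Sketch: build and download the CIMC KVM JNLP file.
# Endpoint and query parameters are from the URL pattern in this post;
# function names and example values are hypothetical.
import ssl
import urllib.parse
import urllib.request


def kvm_jnlp_url(cimc_ip: str, username: str, password: str) -> str:
    """Build the kvm.jnlp download URL for a given CIMC."""
    params = urllib.parse.urlencode(
        {"cimcAddr": cimc_ip, "tkn1": username, "tkn2": password}
    )
    return f"https://{cimc_ip}/kvm.jnlp?{params}"


def download_jnlp(url: str, dest: str = "kvm.jnlp") -> None:
    """Fetch the JNLP, skipping TLS verification (self-signed CIMC certs)."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(url, context=ctx) as resp, open(dest, "wb") as f:
        f.write(resp.read())


# Example with placeholder values (192.0.2.10 is a documentation address):
print(kvm_jnlp_url("192.0.2.10", "admin", "password"))
```

Once downloaded, launch the file with a Java Web Start client (javaws or IcedTea-Web) to get the KVM console.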

Evacuate ESXi host without DRS

One of the biggest draws to vSphere Enterprise Plus licensing is the Distributed Resource Scheduler feature. DRS allows for recommendations ...