Thursday, September 23, 2021

Is the diskless server dead? Long live boot from SAN!

Recently, VMware announced that future versions of ESXi will no longer support SD cards or USB devices as a standalone boot option. This update comes on the heels of ESXi 7.0 U2C, which rectified issues where the new partitioning scheme, combined with heavy I/O, would cause SD cards/USB devices to fail quickly. Rather than backtracking on the partitioning changes, VMware has decided to end support for a medium that is in use in many diskless server use cases, recommending redundant persistent flash devices, SD cards with a separate persistent device for the OSDATA partition, or boot from SAN/NAS. More detailed information can be found in the official KB: https://kb.vmware.com/s/article/85685

It makes sense that VMware is taking this route. vSphere has come a long way from the 4.x days of old. The hypervisor has changed drastically in terms of the drivers, software packages, and services that must meet the demands of the modern datacenter. Unfortunately, homelabbers may have difficulty making the move to the recommended devices.

With that being said, the question remains: is the diskless server dead? Not quite. Today, we're going to cover how to set up option 3 from the KB mentioned above - boot from SAN. Boot from SAN simplifies the boot device conundrum because it doesn't require redundant local storage in every server, along with the extra controller and cost that come with it. Instead, multiple LUNs can be carved out of the same mirrored or striped storage for multiple hosts to boot from.


Let's start with the requirements:

Storage that supports SAN/NAS (in this example, I'm going to use a TrueNAS iSCSI virtual machine, but bare metal would work just the same)

A server with a network adapter that supports iSCSI boot (not necessarily iSCSI hardware offload, just boot - in this example, a Broadcom 57810S as it is what I have on hand)

Recommended: a separate network switch or VLAN for boot from iSCSI (in this example, a separate physical switch is used)
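
For the examples that follow, I'll assume a simple addressing plan on the storage network. These values are placeholders (the TrueNAS address in particular is just the one used in this walkthrough), so substitute whatever fits your environment:

    TrueNAS iSCSI interface:  10.0.1.1/24
    ESXi host initiator:      10.0.1.2/24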


Step 1: Configure the storage network

From the TrueNAS web interface, go to Network > Interfaces and select the configured network interface.







Edit the interface, disable DHCP, and enter an IP address of your choosing. DHCP can be used, but for the sake of this exercise we will be using a static configuration.
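
As a quick sanity check - and assuming the 10.0.1.1 address from the plan above - you can verify the interface is reachable from another machine on the storage network before moving on:

    ping -c 4 10.0.1.1    # should return replies from the TrueNAS storage interface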






Step 2: Configure storage

After configuring the network, we will need to add a storage pool. Go to Storage > Pools and click the "Add" button in the top right corner.





Follow the prompts and use the available disks to create a pool. If you're using bare-metal hardware, the recommendation would be to use a mirror; two disks should be more than enough.
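
The web UI is the supported way to manage pools on TrueNAS, but if you'd like to double-check the layout from the Shell, a quick look at the pool status is enough (the pool name will be whatever you chose in the wizard):

    zpool status    # the pool should show a single mirror vdev with both disks ONLINE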





Step 3: Configure iSCSI

Once complete, we can move on to creating the iSCSI block shares. Select Sharing > Block Shares (iSCSI), and click the "Wizard" button in the upper right corner.





For the name, we will use "bfs". Multiple boot-from-SAN devices can be configured here, so if you plan on booting multiple servers, feel free to enumerate them as needed. Under Device, select "Create New", drill down through the folders and select "Boot from San", and set the size to 32 GiB.
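
Under the hood, the wizard is creating a ZFS volume (zvol) to back the new device. Purely as an illustration - the wizard does all of this for you - the rough shell equivalent would look like the following, assuming a hypothetical pool named "tank":

    # create a 32 GiB sparse zvol to back the boot LUN (illustrative only)
    zfs create -s -V 32G tank/bfs
    # confirm the zvol exists and check its size
    zfs list -t volume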





Click next, and create a new portal. We will use the IP address configured previously.





Click next, and it will bring us to the initiator section. We can leave these fields blank, or, if you wish, you can specify the IQNs of your network adapters so that other NICs do not try to boot from this target.





The last page allows you to confirm the configuration; if all looks good, hit "Submit".

After doing this, be sure to enable the iSCSI service: go to "Services", set iSCSI to running, and check the box to start it automatically.

Also make note of the "Associated Targets" entry. This should read as LUN 0 - this will be important when configuring the network adapter later on.
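
Before touching the server, it's worth confirming the target is actually reachable. Two quick checks, assuming the 10.0.1.1 address used earlier: TrueNAS CORE is FreeBSD-based, so from its Shell you can confirm something is listening on the default iSCSI port (3260), and from any Linux box with open-iscsi installed you can run a discovery against the portal:

    # on the TrueNAS shell: confirm the target is listening on TCP 3260
    sockstat -4 -l | grep 3260

    # from a Linux host with open-iscsi: discover targets on the portal
    iscsiadm -m discovery -t sendtargets -p 10.0.1.1
    # the output should include iqn.2005-10.org.freenas.ctl:bfs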



 


Step 4: Configure physical network adapter for boot from SAN

The shortcut to get into the preboot/option ROM environment for the network adapter, where the IP addresses are configured, will vary based on the vendor. For Broadcom and QLogic it will be either CTRL-B or CTRL-S; some Intel cards use CTRL-S or CTRL-D. Consult the user manual of the card you're using to find out which shortcut applies.

From here, we can configure the adapter:

Boot protocol: iSCSI (options are typically None, PXE, iSCSI)

Initiator: the IP address you wish to assign to the server; we'll use 10.0.1.2

Target: use the IP address and IQN of the TrueNAS server, and be sure to set LUN 0 if it isn't already, or match accordingly if it had to be changed. Under "name" or target, use the IQN base found under "Target Global Configuration" (defaults to iqn.2005-10.org.freenas.ctl) followed by :bfs. In my case, it reads as iqn.2005-10.org.freenas.ctl:bfs
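
Put together, the adapter configuration for this example ends up looking roughly like this (the target IP and subnet mask are assumptions based on the addressing plan above - adjust them to your network):

    Boot protocol:  iSCSI
    Initiator IP:   10.0.1.2
    Subnet mask:    255.255.255.0
    Target IP:      10.0.1.1
    Target port:    3260 (iSCSI default)
    Target name:    iqn.2005-10.org.freenas.ctl:bfs
    Boot LUN:       0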


Step 5: Install ESXi

Set the boot order as needed, and use your preferred software to write the ESXi installer ISO to a USB drive. Boot into the install environment, and when you go to select a device to install to, you should see the following:





Install to the iSCSI LUN and follow the prompts. Once complete, reboot and it should load into ESXi. Congratulations! You have successfully configured boot from SAN!
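
If you'd like to confirm from within ESXi that the host really is running from the iSCSI LUN, a couple of standard esxcli commands from an SSH session can help:

    esxcli iscsi adapter list          # the adapter handling the iSCSI boot session should be listed
    esxcli storage core device list    # the TrueNAS-backed LUN should appear as a roughly 32 GiB device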





