A series of unfortunate events occurred shortly after posting the previous blog post:
- DIMM H1 decided to fail
- Replacement was ordered
- Post office lost the replacement
Hello, internet! Long time no see, how you been? It's been a pretty interesting year so far, and the homelab has not been spared from the chaos; hardware failures and upgrades have caused my projects to come to a standstill. Fortunately, I've made headway by consolidating some of the hardware into a project I've been trying to get online for some time now.
I hit a stroke of luck by winning an AMD-based Supermicro motherboard off my favorite auction site, bundled with a processor and some memory for about $300. The generous number of PCI-e lanes opens up plenty of expansion options in a standard mid-tower case. In this post, I'm going to discuss consolidating all ten of the Intel Optane disks I received last year into one compute node, and detail the process of getting the latest versions of ESXi and vCenter Server installed.
My BOM:
- H11SSL-i - each PCI-e slot configured for x4x4 or x4x4x4x4 bifurcation
- AMD EPYC 7551
- 128GB (8x16GB) 2133 DDR4 RAM
- 10x Intel Optane 280GB NVMe SSDs (vSAN pool)
- 5x 10Gtek PCI-e x8 to 2x U.2 NVMe adapters
- Solidigm P41 Plus 2TB M.2 NVMe SSD (boot disk)
- Corsair RM1000x PSU
- Silverstone CS380 8-bay mid-tower case
- Noctua NH-U9 TR4-SP3 heatsink
This system will consume the same Optane drives that I used in my Supermicro BigTwin SuperServer, which consisted of two X11DPT-B boards, each containing 2x Xeon Platinum 8160s and 768GB of RAM. The previous vSAN ESA build gave five of the Optane drives to each node, ran a vSAN Witness VM on a third node, and used a 100GbE direct connect between the nodes to share bandwidth.
Consolidating down to the tower will be considerably quieter and draw less power, while allowing us to benchmark all ten drives without networking overhead. The downsides are less processing power, far less memory, and a little more work under the hood to get everything running. Unlike the two-node cluster, this build will have no redundancy.
I'm going to detail how to accomplish all of this without a vSphere license of any kind; this will utilize the 60-day trial license and a copy of ESXi that was acquired through supported means. Some of the old tricks for standing up a vSAN node still work with ESA, and the cluster can be deployed without vCenter.
As with most of my blog posts, this is strictly for lab use - I would not suggest running a single-node vSAN cluster in production, nor would I suggest running a vSAN cluster without a proper vCenter Server. We will install vCenter in a later blog post.
The first step is to download the ESXi-Customizer-PS script. This can be found here: https://github.com/VFrontDe-Org/ESXi-Customizer-PS/tree/master
PowerCLI is required to use the script. Full documentation on the script can be found here: https://www.v-front.de/p/esxi-customizer-ps.html
Simply running the script without any options will reach out to the VMware online depot and create an ISO based on the latest patch version. As of this writing, I can confirm that the script successfully downloads ESXi 8.0 Update 3, build 24022510.
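If you don't already have PowerCLI installed, it's available from the PowerShell Gallery. Here's a minimal sketch of the sequence; it assumes a current-user install is acceptable, and the script is run with no options so it builds from the latest patch level in the online depot.

```
# One-time setup: install PowerCLI for the current user from the PowerShell Gallery.
Install-Module -Name VMware.PowerCLI -Scope CurrentUser

# With no options, the script reaches out to the VMware online depot and
# writes an installable ISO for the latest patch level to the current directory.
.\ESXi-Customizer-PS.ps1
```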
Install the OS to the boot disk, then reboot.
Once booted, clear any partitions that may be on the Optane disks.
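One way to do this from the ESXi shell is with partedUtil; the sketch below assumes that approach, and the device identifier is a placeholder you'd swap for each Optane drive's actual t10 name.

```
# Inspect the current partition table on a given device (placeholder identifier).
partedUtil getptbl /vmfs/devices/disks/t10.NVMe____<optane-device-id>

# Write a fresh, empty GPT label, wiping any existing partitions on the disk.
partedUtil mklabel /vmfs/devices/disks/t10.NVMe____<optane-device-id> gpt
```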
Prior to creating the vSAN cluster, we'll want a list of the disks we intend to use in it. In my case, running the command "esxcli storage core device list | grep t10" listed all of the NVMe drives; I then removed my 2TB boot disk from that output.
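For reference, the pipeline looks something like this; the boot disk filter is a placeholder, since its t10 identifier will be specific to your system.

```
# List NVMe device identifiers, excluding the boot disk from the output.
esxcli storage core device list | grep t10 | grep -v <boot-disk-t10-id>
```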
Since we're using a single-node vSAN cluster, we can create a vSwitch with no uplinks for the purpose of vSAN networking:
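The commands below are one way to wire that up from the ESXi shell; the vSwitch and port group names, vmk1, and the IP address are arbitrary values chosen for illustration, so adjust them to taste. The address only needs to be locally valid since the switch has no uplinks.

```
# Create a standard vSwitch with no uplinks and a port group for vSAN traffic.
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name=vSAN --vswitch-name=vSwitch1

# Add a vmkernel interface on that port group and assign it a static IP.
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vSAN
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.123.1 --netmask=255.255.255.0 --type=static

# Tag the new vmkernel interface for vSAN traffic.
esxcli vsan network ip add --interface-name=vmk1
```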