Tuesday, February 21, 2023

VMware Cloud Professional certification - thoughts and tips to pass

I recently passed the VCP-VMC 2023 exam, made possible by the free VMware course that satisfied the certification's prerequisite. For those looking to take on the exam, I'll share what I can remember in terms of general concepts.

For starters, and as per usual with any VMware exam, start with the exam guide.

Like anything to do with cloud, it is network heavy. I'm not a networking engineer, but I have a long-since-lapsed CCENT certification. This exam is going to grill you on CIDR, subnets, and network overlaps, and it assumes you have a general knowledge of the OSI model. Focus as well on the different connection and VPN types for each cloud provider. While the exam primarily focuses on AWS, GCP and Azure questions were in there as well, so be sure to know each provider's minimums and maximums for management network configuration.
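Not something the exam lets you script, of course, but if overlap questions trip you up, Python's standard ipaddress module is a quick way to sanity-check your reasoning while studying. The CIDR values below are arbitrary practice examples:

```python
import ipaddress

def cidrs_overlap(a: str, b: str) -> bool:
    """Return True if two CIDR blocks share any addresses."""
    net_a = ipaddress.ip_network(a, strict=False)
    net_b = ipaddress.ip_network(b, strict=False)
    return net_a.overlaps(net_b)

# A management CIDR must not overlap any network you plan to connect.
print(cidrs_overlap("10.2.0.0/16", "10.2.32.0/20"))    # True: the /20 sits inside the /16
print(cidrs_overlap("10.2.0.0/20", "192.168.1.0/24"))  # False: disjoint ranges
```

Work a few of these by hand first, then check yourself with the script — the exam expects you to spot the overlap without help.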

Speaking of minimums and maximums, you'll want to read up on cluster sizes and hardware configurations. What are the specs of an i3.metal instance vs. an i3en.metal? What kind of nodes can you get with Azure and GCP? And how many can you throw into a cluster? All of these may appear on the exam.
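As a study aid, encoding the per-host specs in a small script makes the cluster math easy to drill. The figures below are what I recall for the bare-metal instance types and the common 16-host cluster maximum — treat them as assumptions and verify against the current AWS and VMware configuration maximums before exam day:

```python
# Per-host specs (physical cores, RAM) -- recalled from public AWS docs;
# double-check against current documentation, as these can change.
HOST_SPECS = {
    "i3.metal":   {"cores": 36, "ram_gib": 512},
    "i3en.metal": {"cores": 48, "ram_gib": 768},
}

def cluster_capacity(host_type: str, hosts: int) -> dict:
    """Total compute/memory for a cluster of identical hosts."""
    spec = HOST_SPECS[host_type]
    return {"cores": spec["cores"] * hosts, "ram_gib": spec["ram_gib"] * hosts}

# 16 hosts is the commonly cited per-cluster maximum (verify for each provider).
print(cluster_capacity("i3.metal", 16))
```

Swapping in each provider's node types and limits is a decent way to memorize the differences rather than cramming a table.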

Managed services, such as VMware Cloud on Dell EMC and AWS Outposts, should be studied as well. What physical requirements do these carry? Who is responsible for what under each service's shared responsibility model?

HCX... woo boy. Several of these questions showed up, and did nothing but generate anxiety. Get to know HCX. Get to know the deployment models, and read up on how to troubleshoot different scenarios.

Containers, Kubernetes, and Tanzu all showed up on my exam. Know what TKG does, how to deploy it, what value Kubernetes brings to containers in general, and which Tanzu services serve which function.

That's all I can remember at the moment. I'm still kind of pumped from getting through it. The only feedback I have is that I don't know whether some of my answers were right, as the facts may have changed after the exam was written. For instance, Google updated their networking requirements in November 2022, so I'm not sure if I got that question wrong by answering based on current requirements, or if I should've answered based on the previous specs. Perhaps higher-level questions that don't depend on details that change with relative frequency would be better.

I hope you found this helpful. Feel free to comment below or ping me on Twitter if you have any questions!

Friday, February 17, 2023

vSphere 8.0 2023 homelab buyer's guide

The hardware market has started to recover, and with vSphere 8.0 introducing native support for the excellent Intel i226-V network card, some new price-to-performance contenders have arrived that should generally be able to run ESXi. This post covers several categories: mini PCs, second hand workstations and servers, and whitebox builds that should meet the requirements of the updated HCL. Let's get started!


Mini PCs

NUC-like systems are a classic piece of the homelab. Historically, the tradeoff has been limited network connectivity and/or a lack of compute/memory density. Now, however, there are some exceptions to the rule.

Topton, a six-year-old shop on AliExpress, has an AMD Ryzen 5000 series based "router" which offers 6 to 8 cores, up to 64GB of RAM, 3x M.2 NVMe slots, and four i226-V based 2.5GbE ports. With an entry point of $346 USD at the time of this writing, along with a claimed capability to ship VAT/tax free (not verified, YMMV), this looks like it could be the value king of 2023 mini PCs. Be wary of copycat shops that may offer similar specs at a steeper discount; make sure the store has been around for some time, as not every seller can be trusted. Intel based systems can also be found with similar specs (minus the awesome core count, of course) for ~$200 USD, making for super cheap vSAN clusters.

Sadly, mini PCs still suffer from a lack of PCIe lanes, but I found a creative way around this... that's reserved for a follow-up blog.


Second hand workstations

With buying cycles slowing down in an uncertain economy, second hand hardware is getting harder and harder to come by. Workstations, however, are sometimes offered on eBay at a deep discount. Most of the hardware in these is supported by ESXi, with the possible exception of the onboard network card. Fortunately, they have several PCIe slots to make up for this, so adding a supported NIC isn't much of a hassle.

The Dell Precision T7820 and T7920 can sometimes be found with Xeon Bronze or Silver processors in the sub-$500 USD range. Recently, I saw a 2x Silver with 64GB of RAM listed for $350 USD. The equivalent HP Z6 G4 and Z8 G4 can be found at similar price points. Both of these *should* come with an Intel-based onboard NIC, according to the drivers on their respective support pages. Be wary of barebones kits: these systems have no onboard graphics, so a barebones unit without a GPU will need a graphics card added to function properly.

Looking ahead to Threadripper-based workstations: the Dell Precision 7865 can pack up to 64 cores into a standard ATX tower form factor. Definitely not cheap, but exciting to see nonetheless!


Second hand servers

As of this writing, the same issue facing workstations is impacting servers tenfold. Most hardware vendors are officially ending support for socket 2011-3 based servers, such as PowerEdge 13G, HPE ProLiant Gen9, and Cisco UCS M4 systems. These are also falling out of support with ESXi 8.0, as they were originally introduced in 2014. The next generation of each (14G, Gen10, M5) is difficult to find on eBay for a decent price. I'll post an update in another blog post later this year, as I expect more second hand hardware to drop in price closer to the EOL of the older generations.


White box builds

Building with hardware intended for gaming and enthusiast builds has some caveats. Most gaming boards use a Realtek NIC for onboard gigabit or 2.5GbE networking. These cards are not supported by ESXi, as there is no compatible driver. Our options are:

  • Find a board with an i226-V onboard (the i225 on most gaming boards had many issues, regardless of revision)
  • Add a supported USB NIC with the Fling driver
  • Add a supported PCI-e NIC (and/or HBA if using a CPU with onboard graphics enabled)
AM4 builds using the Ryzen 7 5700G, or Intel builds with onboard graphics enabled, leave room for multiple supported network cards. Enthusiast boards, such as B550-based models, have many of the same BIOS options you'd expect to see on a server board. If you want KVM-like capabilities, you can also invest in a Raspberry Pi based PiKVM solution for remote out-of-band management, but these cost quite a bit for a quality-of-life feature.
Supermicro also has some relatively cost-effective hardware available second hand on eBay, such as the H11SSL motherboard at $400 USD, which supports 8 to 32 cores (or up to 64 if it's revision 2.0), although you'll still need to factor in the cost of a heatsink, CPU, and ECC memory.

ESXi on ARM Fling 2.0 - Challenges with the vSphere 8.0 update

TL;DR - if you're using an Orange Pi 5 Plus or Raspberry Pi 4/5, you might want to stick with 1.15/7.0. Apologies for the lack of update...