A new level for JBLan

In a previous post, I described how I started building and running things on whatever hardware I could get back then. That journey led me to my previous server, an R820 deployed in a professional data center, in colocation. Since that post, the environment has taken another major step.
Very good, but still limited
The R820 was very good but had a few limitations...
Only 8 drives
With only 8 drives, the total volume available to store data and VMs was limited. Even worse, storage performance was so poor that the Kubernetes controllers kept failing and switching over from one to another, making the environment very unstable.
Only a single physical server
With a single physical server, there was no redundancy. Any maintenance at the firmware or OS level meant shutting down the complete environment.
Running VMware ESXi 6.7U3
Not only was ESXi 6.7U3 old, but VMware has not been the same since its acquisition by Broadcom. The free version of ESXi also had limitations, such as a maximum of 8 vCPUs per VM and no advanced features like vMotion.
A new system to reach a new level
I had been searching for a system that would let me do more, and better, than my R820. Because the server is hosted in colocation billed per U, one objective was to stay at 2U, just like the previous server. That is how I learned about the FX2S, a modular system by Dell that can be configured in many different ways to answer different needs. Unfortunately, when I first looked at it, a configuration fitting my needs would have cost well over $20K.
Luckily, I found a real opportunity with MET servers, a company specializing in refurbished servers and enterprise gear. Since my transaction, they have announced their acquisition by The Server Store. In any case, my transaction with them was a pleasure in every aspect, and I am very happy with it! Shipping from Texas to Montreal, QC was no problem for them.
This new system is an FX2S divided into 4 half-width blades: 2 are server blades (FC630, 2x 14-core CPUs, 192 GB of RAM) and the other 2 are storage blades (FD332) with 16 drives each. That gives me 2 identical systems from which I can build redundancy.
Although both are in the same chassis, each node operates independently of the other. They can be powered on and off separately, and I connected network cables between them so that one node gives me full access and control over the other. Each node is also able to give me access to the Chassis Management Controller (CMC), letting me manage the chassis itself. I had to do a little black magic for that, but it works...
Each node now runs Proxmox VE, and both have been joined into a cluster. Because a Proxmox cluster needs an odd number of votes to keep quorum, a third vote is now hosted at home on a QDevice that monitors the cluster from outside. That cluster offers HA, live migration, and more, opening up a whole new world of possibilities.
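For reference, here is a minimal sketch of how a two-node cluster plus an external QDevice can be set up with Proxmox's pvecm tool, following the standard procedure from the Proxmox VE documentation. The cluster name and IP addresses below are hypothetical, not the ones I actually use:

```
# On the first node: create the cluster (name is hypothetical)
pvecm create jblan

# On the second node: join it, pointing at the first node's IP
pvecm add 192.0.2.10

# On the small machine at home that will hold the third vote
apt install corosync-qnetd

# On both cluster nodes: install the QDevice client
apt install corosync-qdevice

# From one cluster node: register the external QDevice by its IP
pvecm qdevice setup 203.0.113.5

# Check the result: expect 3 votes total (2 nodes + 1 QDevice)
pvecm status
```

With quorum held by three votes, either node can go down for maintenance while the remaining node and the QDevice keep the cluster quorate.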
Each node is powerful enough to run 100% of my main services on its own. Extra workloads for dev, lab, or QA can run on the spare capacity and only need to be turned off during some maintenance windows.
With room for 36 drives, storage is no longer a bottleneck, in capacity or in speed. I can deploy everything I need, and with that many drives working together, IOPS are where they need to be. I also took the opportunity to put in some SSDs for even higher performance.
The previous R820 was designed to run with a storage server next to it; when I got it, I had no plan to move it to colocation. Now equipped with this new system, I should be good for many years. It has plenty of CPU and RAM, I can keep adding more, and I can swap the drives for more capacity or performance if needed. I know enough about self-hosting to acknowledge that one day I will take yet another step forward, but for now, I trust that it should not happen for at least 5 years (hopefully even 8 or 10...).