I'm a self-confessed data hoarder, with a veritable army of hard drives inside my NAS units holding my digital life together. They contain documents, media files, programs, and everything else I've collected over the years, and they haven't only been storing data. Those hard drives used to hold my virtual machines as well, running Home Assistant to keep my smart home in order, media servers to keep me entertained, and all manner of home lab experiments.
I'm sure many of you are in a similar situation and, like me, were held back by the high cost of SSD storage. But NVMe SSD prices had been coming down, and I built a mini PC with all-flash storage earlier this year while they were still acceptable. Pricing has since climbed again due to NAND shortages, but I've moved all my virtual machines to that all-flash mini PC running Proxmox as the hypervisor, and it's been a transformative experience. I won't go back to running virtual machines on hard drives, and by keeping bulk storage on another NAS, I can happily run VMs on the handful of NVMe drives I already have.
I'd been going about VMs the wrong way
It all made more sense when I started using a Type 1 hypervisor
When I first started using virtual machines, I'm not even sure SSDs were available on the consumer market. I remember the slowness of everything when using VirtualBox on an Intel Core 2 Duo-powered Dell XPS laptop that had a 2.5" hard drive inside, but it was (just about) usable, and I learned a lot in those years when I was forced to slow down and wait for tasks to happen.
Over the years, those VMs have migrated, first to SATA SSDs on a desktop PC, and then to a succession of NAS devices with HDDs in RAID with SSD cache to improve performance. That worked fine, although it was still slow whenever I needed to do anything inside the VMs, and I hand-waved it away as the technical limitations of the NAS.
I still had slowdowns

NAS enclosures aren't designed for speed, and the CPUs and other hardware inside them are often lackluster. The operating system and data-serving functions already eat up a large share of that hardware, and adding VMs to the mix was a step too far. Still, I'd spun up a bunch of VMs to play with, added more RAM to the device for extra headroom, and thought I was keeping the load on the NAS conservative. I wasn't, not by a long shot, and I'd get either degraded transfer speeds or sluggish VM performance, and I still put it down to the limitations of the NAS hardware.
I felt the need for speed
But really, IOPS rule the roost for VM performance
In reality, I wasn't hitting the CPU or RAM limits. I was hitting the RAID array's throughput and IOPS limits, and the only way to fix that was to upgrade to faster drives. Even though this mini PC only offers Gen 3 x1 speeds in each of the M.2 slots holding the NVMe drives, that's more than enough to support multiple VMs running at once alongside large data transfers over the network. And with two vdevs in the pool, I had plenty of performance to play with.
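The gap is easy to see with some back-of-the-envelope arithmetic. The sketch below uses typical published figures for per-drive random 4K IOPS and the classic RAID 5 write penalty; the exact numbers and the four-disk layout are assumptions for illustration, not measurements from my own pool:

```python
# Rough IOPS comparison: HDD RAID array vs a single NVMe SSD.
# Per-device figures are typical ballpark numbers, not measurements.

HDD_IOPS = 150           # random 4K IOPS for one 7200 RPM hard drive (assumed)
NVME_IOPS = 400_000      # random 4K IOPS for a mid-range Gen 3 NVMe SSD (assumed)
RAID5_WRITE_PENALTY = 4  # each logical write costs two reads plus two writes

def raid5_write_iops(drives: int, per_drive_iops: int) -> int:
    """Effective random-write IOPS for a RAID 5 array of identical drives."""
    return drives * per_drive_iops // RAID5_WRITE_PENALTY

hdd_array = raid5_write_iops(4, HDD_IOPS)  # a hypothetical 4-disk HDD RAID 5
print(f"4x HDD RAID 5 write IOPS: ~{hdd_array}")
print(f"Single NVMe write IOPS:  ~{NVME_IOPS}")
print(f"Rough speedup: ~{NVME_IOPS // hdd_array}x")
```

The point of the sketch is that VMs hammer small random reads and writes, where spinning disks are weakest, so even a modest NVMe drive leaves a whole HDD array behind by orders of magnitude.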
Proxmox is the other essential part of the equation, because Type 1 hypervisors are far more performant than the Type 2 ones I'd been using, and running it from NVMe made it even more so. I can back up and restore VMs in minutes, instead of the long wait I'd been used to on hard drives. The databases those VMs store their precious digital cargo in are accessed far more quickly, and the IOPS advantage of NVMe can't be overstated.
But it's not only my home lab that agrees. According to Microsoft's Azure Virtual Desktop deployments, the benefits of switching to NVMe are easily apparent:
- Ultra-low latency
- Up to 10x faster OS disk performance
- Fast reimaging
- Perfect for stateless scaling
- Up to 400K remote disk IOPS
But it's not only the improved boot times, drastically lower latency, and huge increases in IOPS that make running my VMs on NVMe SSDs so appealing. I also get vastly improved power efficiency, so my home lab can stay on without my power bill skyrocketing, and that's good news for me and the planet.
If it's good enough for the datacenter, it's good enough for my home lab

If the datacenter has moved to ultra-fast SSDs for hosting VMs, who am I to argue? These companies work at scale, and they wouldn't roll out NVMe if it didn't make business sense, both for their own use and for the thousands of companies that use their hosting services. My home lab is a small pebble thrown into the ocean in comparison, but even then I've noticed the difference in responsiveness and deployment speeds, and I'll not go back for my VMs. Let the spinning rust drives handle long-term storage; I'll be using NVMe for my VMs and containers from now on.