ESXi whitebox server

I usually have access to an ESX box at work where I can run multiple VMs and virtual routers for labbing and testing. I’ve also wanted one at home. It’s nice to be able to quickly spin up VMs when needed without always running them through my laptop.

While virtual routers don’t need lots of resources, I did want a beefy machine, as there are a few servers I’d like to get running that need lots of CPU power. These are the things I do need:


  • 32GB RAM capable with ECC
  • Fast CPU with at least 4 physical cores
  • Quiet
  • Small
  • Out-of-band management (iLO/IPMI/etc.)
  • Low power

Specifically these are the things I don’t need:

  • Optical drive
  • Hard drive
  • GPU

The point of the box is to sit in the corner with only power and network connected. If anything goes wrong, I don’t want to have to connect a monitor to it. I’m also not running any tasks requiring video output on the server itself; all VMs will be accessed via SSH.

I already have a Synology DS411 which will provide an iSCSI connection to the ESX server. Hence no need for internal hard drives.
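Since the box relies entirely on the NAS for datastore storage, a quick sanity check that the iSCSI target portal is actually listening can save some head-scratching before digging into ESXi itself. A minimal sketch, assuming the target answers on the standard iSCSI port 3260 (the address used here is a placeholder for your NAS):

```python
import socket

def portal_reachable(host, port=3260, timeout=2):
    """Return True if a TCP connection to the iSCSI portal succeeds.

    3260 is the well-known iSCSI target port; a refused or timed-out
    connection means the portal isn't reachable from this host.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder NAS address -- substitute your Synology's IP.
print(portal_reachable("192.168.1.50"))
```

This only proves TCP reachability, not that the LUN is exported or mapped, but it separates network problems from ESXi configuration problems.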

My initial build was going to be based around an Intel i7-4790. However, the i7 doesn’t support ECC RAM, and it also has a built-in HD 4600 GPU which I don’t need.

Parts list

I ended up going for the Intel Xeon E3-1230v3. 4 cores, 8 threads, all the virtualisation support I need. It has no built-in GPU, and it supports up to 32GB of ECC RAM. Intel have released a newer 1231v3, but I couldn’t find a good price for it in the UK, and all it gives is an extra 100MHz, which I’m not fussed about.

RAM is quite pricey at the moment. While I wanted 32GB, I’ll start with 16GB for now and add another 16GB when prices drop.

For the motherboard, I went with the SuperMicro X10SLL-F. It supports both the CPU and RAM, and it has built-in IPMI. The board has two onboard Intel NICs, an I217LM and an I210AT. It also has an onboard VGA chip; I’m not going to use that, but it will be handy if I can’t log into either the server or the IPMI. There’s also a Type-A USB port on the board itself, which is quite handy, as you’ll see later.

For the PSU, I wanted something both silent and efficient. I don’t need a huge amount of wattage either, as I have no GPU. I ended up with the Seasonic G-360: 80 Plus Gold certified and very quiet.

Part of the problem with an ESXi whitebox is ensuring that VMware recognises all your components. I did extensive research to make sure this was the case. I hit a couple of snags, but they were easily fixed.

Final part list:

  • Intel Xeon E3-1230 V3
  • SuperMicro X10SLL-F
  • 2 X 8GB Crucial DDR3 PC3-12800 ECC Unbuffered
  • Seasonic G-360 PSU
  • Aerocool Dead Silence Gaming Cube Case
  • Old 1GB USB flash drive

Building and installing

The SuperMicro board has a dedicated IPMI port, so I can do the entire install remotely. I’ll mount the ISO over the network and do all the config that way. This is the screen you see when first logging into the IPMI web interface:

I decided to install VMware itself on a USB stick. What’s nice about this motherboard is that it has a USB port on the board itself, meaning no external USB key is required. This keeps things a bit neater.

The SuperMicro has two external NICs, an Intel I217 and an Intel I210. I’ve installed VMware 5.5 Update 2, and the I210 card is supported out of the box. No need to hack any drivers into the ISO. I’m more than happy with one NIC for now, so I’ve no need to try to get the I217 working.

Once VMware was installed, I created a 300GB iSCSI LUN on my Synology and attached VMware to it. The install and setup really were painless.

VMware shows my system as:

Virtual devices

I wanted to start a basic lab, so I have the following VMs running in my lab:

With all my VMs running, I see hardly any CPU usage and quite a bit of RAM usage, as I expected:

For now the RAM amount is fine. As I ramp up the lab and prices drop, I’ll add another 16GB to the system.

Power usage

As I wanted this to be low power, I’ve taken full wattage readings at the wall.

  • Server off, IPMI on – 3.7 Watts
  • Server on, no VMs running – 23 Watts
  • Server on, all lab VMs running – 34 Watts

Not bad at all. In another post I’ll show the Synology power draw, as well as the draw with all VMs at full CPU. I’ll also go over how I automate starting and shutting down my VMs.
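Out of curiosity, those wall readings translate into fairly trivial running costs. A quick back-of-the-envelope sketch; the £0.14/kWh unit price is an assumption, so plug in your own tariff:

```python
PRICE_PER_KWH = 0.14  # GBP per kWh -- assumed; substitute your own rate

def annual_cost(watts, price=PRICE_PER_KWH):
    """Return (kWh per year, cost per year) for a constant draw in watts."""
    kwh = watts * 24 * 365 / 1000
    return kwh, kwh * price

# The three measured states from the wall readings above.
for label, watts in [("IPMI only", 3.7), ("idle", 23), ("lab running", 34)]:
    kwh, cost = annual_cost(watts)
    print(f"{label}: {kwh:.0f} kWh/year, roughly £{cost:.2f}/year")
```

Even with the whole lab running around the clock, the server costs less per year than a single spinning-disk NAS drive would.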

Published by


Network Architect. Dual CCIE and JNCIE-SP.

21 thoughts on “ESXi whitebox server”

  1. Hi Mike.

    When using large amounts of memory, any unchecked errors in RAM can make their way to disk once data is written. I plan to run a few large databases and other VMs on here, and I really didn’t want error creep.

    While ECC is slightly more expensive, the whole thing worked out roughly the same as the i7 non-ECC build.

  2. Hi Dan.

    I initially didn’t want to put the cost on as it depends on so much. Where in the world you live, how long after this post you buy, what extra components are needed, etc.

    But I spent roughly £400. Not bad, but I didn’t have to spend any money on hard drives. My NAS plus drives cost quite a bit of money, but that’s a separate expense :)

  3. Xeon E3s are surprisingly cheap. Not bad for a 4 core, 8 thread CPU. E5s and above are quite pricey, but the number of cores you can get is insane.

  4. Alternatively, one can get HP DL360s with a decent amount of memory on eBay for less than £200. I got a couple of them with 16GB of memory and two quad-core processors for £160 including shipping. No HDD though.

  5. I could’ve, but I like the silence my box gives me and a cube fits better than a rackmount server.

    If this was in the basement and I had a rack, that’s certainly an option I would’ve looked at

  6. Hi Darren,

    I like your ESXi whitebox server setup and I would like to build one too. I ordered most of the parts today, but I’m wondering if I can use ESXi with internal storage, like 2 x Crucial M550 256GB in RAID 0?

    I already have a Synology NAS 211+ but I am using it for personal storage and it’s old now.

  7. Sure, you could use internal storage. If I didn’t have the NAS, I’d probably have a RAID 1 SSD setup :)

    The only issue is that I don’t think the onboard RAID controller is recognised by ESXi. You could give it a try, but you may have to get a supported RAID controller.

  8. Did you purchase licensing for ESX 5.5, or are you running the free version? I am running 5.1 because I ran into too many roadblocks with the free version of 5.5. Thanks for the article; I will probably go this route when it’s time to upgrade from my Dell Dimension E520. Can you describe how you got the data on your wattage usage? Those are amazing numbers; that’s less than a 60 watt light bulb!

  9. Hi Jason.

    Yes this is a licensed version. However before I had the license entered, everything I needed was already working.

    To check power usage I used one of these: – At first I thought the readings were too low, but I tested it with an 11 watt energy saving bulb and got exactly 11 watts on display.

    The PSU itself is 80 Plus certified, and the Xeon pulls hardly anything. Also I have no spinning drives or GPU. Overall it’s the perfect server for me and I love the low power use :)

  10. Darren,
    Very informative! Thanks for sharing. You mentioned that the server is diskless and you installed the ESXi server on a NAS iSCSI LUN. Can you let me know how you did this? A quick Google search revealed that we need a NIC capable of iSCSI boot for this to work.

  11. Hi DM.

    Nope, I installed ESXi on a USB stick which sits inside the box itself. I then mounted some iSCSI LUNs from the NAS for VM storage. ESXi itself is still a local USB install :)

  12. Hi Darren

    Thanks for your write up – that is impressively low power consumption!

    I’m just gathering parts for a new, high memory whitebox server to run ESXi: I have an i7-5820K, 64GB of Ballistix DDR4 and have just ordered a Seasonic S12G-450 PSU. I stumbled upon your site when trying to decide between a Supermicro MBD-C7X99-OCE-F and an ASRock Extreme4 mobo – I think you’ve swayed me in favour of the Supermicro with IPMI, so I don’t need to worry about an extra graphics card.


  13. Hello Daren.

    So have you checked your Synology power draw yet? I’ve found the following spec on the Synology site: 127.7W working, 13.2W standby.

    Thanks in advance.

  14. Hi Alexander.

    I’ve just measured. I tend to keep the discs spinning to keep the wear and tear down. It also uses a surge of power when spinning up.

    When I have all 4 of my discs running I measure 25W at the wall. So not bad at all.

  15. Hi,

    If you are after a low power ESXi white box, check out the Intel NUC.
    OK, it has no IPMI or ECC, but my i5-4250 with 16GB RAM and an SSD runs great. At idle it’s just 7W at the wall; the max I got with 6 VMs running (a mix of Windows and CentOS) was 22W.

    Florian Grehl over at has an article on how to set one up.

  16. Thanks for another awesome post. Question: has ESXi replaced your dedicated dynamips server? Do you get the same functionality with ESXi or do you continue to turn to dynamips for some features, like packet captures, quickly reloading lab configs/topos, etc.? Do you have to use a breakout switch to reach your physical switches?

  17. Yes I no longer have my old dynamips lab. If I need to test actual hardware I do that at work. However for my CCIE my home lab was absolutely essential :)

  18. Hi Darren. Thanks for your article; I’m thinking about setting up a homelab with one, maybe two ESXi servers. It’s always a question what hardware to go with, since you don’t know in advance whether your setup will eventually work with ESXi 5.5 / 6.0. At first I thought: “Maybe I’ll get an HP ProLiant MicroServer Gen8” – it’s not expensive and it’s supposed to work fine with VMware, but you can a) only use up to 16 GB RAM and b) only go with v2 (Socket 1155) CPUs. In terms of efficiency and raw CPU power it’s not enough for me, and the chipset is already 3 years old. Your proposal seems to work smoothly, so I’ll give it a try; thanks a ton and greetings from Germany. Tobias
