Server Build

You need to check against the hardware compatibility matrix.

For example, at work we're upgrading firmware on servers that are only two years old, because that server model with its default firmware has dropped off the list.

What I mean is, me looking at the list for you isn't going to be enough... you need to check it yourself, and check it regularly, if you ever want support...

(But that obviously opens the door to a caveat: the compatibility list isn't an "it'll only work on this list" thing. As I said, we're upgrading firmware to keep getting support, not because anything is broken.)

I think it more likely said something about a storage backplane rather than a storage back plate.
 
Sorry, I do realize this post is a bit old, but I wanted to give some personal input on this topic, since I built an ESXi host at home to consolidate several machines into one hefty, efficient machine.

You NEED to look at the HCL (Hardware Compatibility List) for the version of ESXi you want to use. I am using ESXi 5.5, which dropped a ton of official support for hardware like Realtek network controllers. You can build a custom ESXi 5.5 image that supports those controllers, but don't cry if you run into bugs, because it's not meant to work that way.
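Once the host is up (and assuming you've enabled SSH on it), a quick sanity check against the HCL is to list the NICs ESXi actually detected and which driver each one loaded:

esxcli network nic list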

Never depend on a single disk in a system, and don't use the motherboard RAID solution; many times the motherboard is doing software RAID, which will not work in ESXi. You pretty much HAVE to get a dedicated SAS controller that supports hardware-based RAID. A good controller that can be found cheap is the 3ware 9650SE, which comes in 4-, 8-, 12-, and 16-port versions.
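Once the card is in, you can verify over SSH that ESXi sees it with a proper driver by listing the storage adapters:

esxcli storage core adapter list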

Next thing: when doing RAID with ESXi, you almost always have to turn write caching on in your RAID controller, otherwise you will see horrible write performance to the array. The downside is that this also means you want a BBU (battery backup unit) on the controller to help prevent data corruption if the power is lost.
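On the 3ware/LSI cards this is done with their tw_cli tool rather than anything built into ESXi. Roughly along these lines, assuming tw_cli is installed and your card shows up as controller c0 with unit u0 (check your own numbering with the show command first):

tw_cli /c0 show
tw_cli /c0/u0 set cache=on
tw_cli /c0/bbu show all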

RAM: 32GB might be JUST enough for 8 machines... I have personally found that 16GB is too little for just four, and 32GB is just enough to be comfortable with 6 machines. Remember that ESXi itself uses some of the RAM too.
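As a rough budget (my numbers, adjust for your own workloads): 6 VMs at 4GB each is 24GB, plus a couple of GB for ESXi itself and per-VM overhead, so you're already around 26GB on a 32GB box before you've left yourself any real headroom.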


Now, to my actual experiences... I am cheap, horribly cheap. When I started my ESXi project, it was so I could retire two old rack servers in my closet that ran pfSense, and move my media server over to the same box. I made the mistake of not having a proper disk controller, and found some driver packages I could put into my ESXi install so that I could use my motherboard controller with the OS. The downside: all the disks attached to the motherboard are now married to ESXi. This shouldn't happen, but it's a bug from using improper drivers to get a non-compatible SATA controller to work. If any of those drives vanishes, ESXi will not boot, which is a bad thing. That's 4 disks, all in AHCI, that can fail at any time, with vital data on them, and one of them failing will take the whole server with it, easily half a day's worth of work just to get it back online.

Don't source all your parts from online retailers; look around eBay, where you can get some pretty decent older hardware that will serve you just fine. You can find a 9650SE-12ML for $75 with the fan-out cables. You still have to learn how to SSH into the host and install the controller's driver, but it works great. Stay with higher-end Intel NICs for your network. You're going to need two ports IMO: one for management, and at least one or more for your VMs.
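For reference, installing a driver bundle over SSH generally looks something like this (the datastore path and bundle name here are just placeholders for whatever you download, and community packages may need the acceptance level lowered first):

esxcli software acceptance set --level=CommunitySupported
esxcli software vib install -d /vmfs/volumes/datastore1/driver-offline-bundle.zip

Reboot the host afterwards and the controller should show up.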


In my personal opinion, you need a beefy CPU. While you may not use the power that much, it certainly makes life a bit easier should you ever need that extra headroom.

Consider 64GB of RAM, and don't allocate it all; always keep some of it left over.

Consider an actual RAID controller separate from your motherboard; that way, if a failure occurs on the RAID card, you can just swap the card and not the whole motherboard. It's easier to find a replacement controller than a whole motherboard.

Consider your overall network throughput and how much you're going to be using this thing. The goal of virtualization is overall power reduction by sharing resources, but sometimes it's not wise to share a single NIC between several VMs, especially if you ever get to where multiple networked computers are accessing different VMs at different times. Congestion is real, even at 1Gbit these days.
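One simple way to keep management and VM traffic apart is to give the VM traffic its own vSwitch with its own uplink. Over SSH it looks roughly like this, assuming vmnic1 is your second port and the vSwitch/port group names are whatever you choose:

esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name="VM Network 2"

Then point your VMs' network adapters at the new port group.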

If I am not mistaken, for passthrough to work the platform as a whole has to support it: the CPU and the motherboard/BIOS need Intel VT-d (or AMD-Vi), so you can't just throw in a card and pass that piece through to a VM. The whole chain actually has to be fully supported.
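You can at least list the host's PCI devices over SSH to find the one you want to hand off:

esxcli hardware pci list

The actual passthrough toggle lives in the vSphere Client (the host's DirectPath I/O configuration page), and it will only let you enable devices if VT-d/AMD-Vi is actually working.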

Plan, plan, PLAN. Ask questions, do your research, and don't buy the most expensive part just because it looks like it might work; actually research that component to see what you might have to do to get it working with ESXi. If you think you are done planning out your hardware, ask others to evaluate the build. Take the input, and plan more. Once you are done, start planning how you will lay out the datastores, how you will back them up, and how you're going to split system resources between the different machines. If you don't set up resource reservations and limits properly, a single VM that acts up can bring the whole system to a crawl.


Yeah, I know my post is a bit scattered and a bit late, but I am tired and just wanted to share a little bit of my experiences with ESXi and running VMs on a home network.
 