Building Your Home Lab
Ok, I admit it: I am a geek. I have enough hardware in my home lab to run a fairly good-sized business if I so desired. I thought others might appreciate some information and ideas on how to build a lab on the cheap (or relatively so).
First, we’ll start with a quick inventory of what populates my lab:
- 2 Dell DCS6005 systems
- 3 nodes per chassis
- Each node with dual hex-core Opterons and 48GB of RAM
- Supermicro RSC-R1U-E16R riser
- Mellanox MHQH29B-XTR IB card
- 4 HP SE316M1 servers
- 1 Xeon L5640
- 12 GB RAM
- Replaced P400 RAID with P410
- Mellanox MHQH29B-XTR IB card
- 2 Minnowboard Max units with Flotsam (mSATA) expansion Lure
- 1 RaspberryPi 2
- 1 RaspberryPi 1
- 1 Dell T110
- 1 Extreme Networks Summit 400-48ti
- 1 Raritan Dominion KX2-32
- 1 Mellanox IS5030 IB switch
- 1 Used 24u cabinet
- 2 8-outlet rack-mount PDU strips
- Various cable management gear
- Lots of Cat 5e cables purchased online
- 1 Kill-a-watt
The first thing everyone always says is, “Wow, that’s a lot of gear. What do you do with it all?” That’s the easy part to answer. I use this hardware to test and validate various configurations, and as the playground I need for testing ideas around cloud, software-defined storage, systems management, etc. Having all the hardware also keeps me sharp on the skills I have learned over the years in hardware maintenance, design, and cable management (which I need to spend a little more time on right now). Most recently, this hardware has been used for a lot of testing of our SUSE Enterprise Storage product. I have enough hardware to run either two clusters with a couple of clients, or a larger cluster with some extra defined node roles.
The next question I always get is about cost. I source almost all of the equipment via eBay. There are a number of liquidation companies that buy up off-lease or depreciated hardware and sell it cheap.
Let me provide a basic idea of what each component cost me.
- Dell DCS6005 servers – ~$500 each; these were the most expensive, but I bought them about 3 years ago
- HP SE316M1 servers – ~$125 each
- Extreme Networks switch – ~$100
- Raritan KX2-32 with CIMs (Computer Interface Modules) – ~$250
- Mellanox IS5030 and HBAs – ~$300
- PDUs – ~$50 each
You’ll notice I don’t mention the drives in the units. This is because I am in the slow process of replacing them all with consumer SSDs, which lowers power use and heat production quite dramatically. For the DCS6005 units, I have to mount the 2.5″ drives in a 3.5″ hot-swap carrier. For this, I really like the ICY Dock MB882SP-1S-2B; they run about $10 – $15 each if you catch a good deal on them. For the drives, I have been picking up Kingston 120GB SSDNow V300 units. You can find them for less than $50, and occasionally less than that if you watch the deals carefully.
Let’s talk networking for a moment. I don’t list my home router in the inventory, but it is an ASUS RT-AC66R unit. I’ve used both the stock firmware and DD-WRT on this router and been quite happy with the ability to handle the throughput and advanced network configurations I need.
I chose Extreme Networks hardware for the enterprise class switch (if buying today, I would be buying an x450) for a few reasons:
- I’m very familiar with it from a previous job
- The same top-down CLI works for all the Extreme switches
- L3 routing capability
- Can be stacked
- 10Gb uplink ports
My recommendation is to buy what is right for your budget and what you are familiar with. I don’t necessarily recommend buying the cheapest option; buying something you are likely to encounter in the real world, even an older version, builds your skill set. I do usually suggest at least 24 ports, with a preference towards 48. If you look at my configuration, about 30 network ports are utilized when everything is cabled up. Crazy, right?
Why the InfiniBand hardware? That’s because nobody I know really wants to spend huge amounts to enable 10Gb Ethernet for their home lab. The least expensive switch I can find today is an 8-port unit that costs just shy of $900. Then add a few hundred for each network card and you get well into the thousands pretty quickly. Contrast this with $50 or less for each dual-port 40Gb/s IB card and a few hundred (if you catch the right deal) for an IB switch. The IB hardware will run IP over IB and lets you build a very high speed, low latency, private network that way. Some of the IB switches will even bridge to a regular Ethernet network.
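To make the price gap concrete, here is a quick back-of-the-envelope comparison using the ballpark figures above. The $200-per-NIC figure for 10GbE is my assumption from “a few hundred each,” and the $250 for a used IB switch is likewise an estimate; swap in whatever prices you actually find.

```python
# Rough cost comparison: new 10GbE vs. used InfiniBand for a small lab.
# Prices are the ballpark figures from the text; the $200/NIC and
# $250/IB-switch numbers are assumptions, not quotes.

nodes = 8  # hypothetical lab size (matches the 8-port 10GbE switch)

# 10GbE: ~$900 for the cheapest 8-port switch + one NIC per node
cost_10gbe = 900 + nodes * 200

# InfiniBand: used switch + one dual-port 40Gb/s HCA per node
cost_ib = 250 + nodes * 50

print(f"10GbE:      ${cost_10gbe}")   # $2500
print(f"InfiniBand: ${cost_ib}")      # $650
```

Even with generous assumptions for the IB side, the used InfiniBand route comes in at a fraction of the 10GbE price for the same node count.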
I’ll probably blog another time about some specifics of the network configuration. I have multiple subnets being routed at various places and a little guide may be helpful for those with less routing experience.
The KVM might not seem all that important, but trust me, it is. When you have 11+ servers, perhaps in a different part of your house or office, an IP KVM is very useful. Do I have to do some tricks to keep this older hardware happy? Yes. It doesn’t work as well with a modern JVM as it used to, so you end up turning off certificate enforcement and relaxing some security settings, but it still works nicely. It also saves you from plugging and unplugging a monitor, keyboard, and mouse when you absolutely MUST be on the console. Also, the use of Cat 5 cabling makes it easier to keep a small footprint for the cable mess you are generating by plugging in all this gear.
Let’s talk about power usage. If I were to leave all this gear on 24×7, I would see a significant power cost, on the order of $150 per month. I know this thanks to the Kill-a-watt meter I have the PDUs running through. The meter indicates that my configuration draws around 2kW, which works out to roughly 1,400 kWh per month if left running continuously. This is important to monitor and know, as it may factor into a home office reimbursement or be something you can claim on your taxes (consult your tax professional). This doesn’t count the extra cost of cooling the heated air I would be releasing, which varies throughout the seasons here in Oklahoma.
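To put numbers to that, here is a sketch of the monthly math. The $0.10/kWh rate is my assumption (rates vary a lot by region), so plug in the rate from your own bill:

```python
# Monthly power cost estimate for a lab left on 24x7, assuming:
#   - a steady 2 kW draw (the Kill-a-watt reading from the text)
#   - an electricity rate of $0.10/kWh (assumed; check your bill)

draw_kw = 2.0            # steady power draw in kilowatts
rate_per_kwh = 0.10      # dollars per kWh (assumed rate)
hours_per_month = 24 * 30

kwh_per_month = draw_kw * hours_per_month
monthly_cost = kwh_per_month * rate_per_kwh

print(f"{kwh_per_month:.0f} kWh/month, about ${monthly_cost:.2f}/month")
# 1440 kWh/month, about $144.00/month
```

At that assumed rate the estimate lands in the ballpark of my ~$150/month figure, which is exactly why powering the gear down when idle matters.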
All that being said, I only run the gear when I actually need it. This keeps the power usage in check and also keeps the noise level down.
Ok, so not everyone has the space and tolerance for a 24U cabinet in their house. How else can you build a lab that provides what you need on the cheap? There are a few primary building blocks that I really like.
- Used business-class laptops. These can usually be found for a few hundred each on eBay and upgraded with SSDs, RAM, and an extra USB-based network connection. I have had a strong preference for Dell Latitude D630c systems in the past. There are a lot of them, you get dual-core CPUs, and you can put 8GB of RAM in them, although it probably makes sense to identify a newer model at this point.
- Developer boards/systems like the Minnowboard Max. With the Flotsam Lure, an mSATA drive, an extra USB3 1GbE adapter, and an SSD for the SATA port, the total investment for each of these systems is around $250 – $275. Not bad for a unit that draws about 5W.
- Dell T110 II, Lenovo TS140, or similar servers. These tower servers were designed for small office environments and are thus quiet, yet relatively well suited for lab environments. The T110 I have was actually used as a home theater system for a time because it is so quiet. Again, eBay is a great source for these systems.
At the end of the day, what matters is that a home lab doesn’t have to be the latest and greatest. Think outside the box, don’t be afraid to use older components, and maximize your learning and skill improvement from whatever you invest in. The key is that the lab serves the need, not that it is the latest technological wonder.