SLES 12 HPC Cluster on Minnowboard Max
If you know what a Minnowboard Max is, you’re already asking yourself, “Why would anyone want to build an HPC cluster from these things?” Quite frankly, the answer is “Because I can.”
In preparing for the Supercomputing 2014 show in New Orleans, I received six of these lovely units as giveaways. Since I always like to do creative things for the show, I thought: why not build what is arguably the first SLES 12 HPC cluster out of these systems? And that’s exactly what I did.
Here’s the recipe.
One head node and five compute-only nodes. Each node has a 16GB thumb drive for storage and is connected to an unmanaged Gigabit Ethernet switch. The “chassis” is made up of four 12″ 6-32 screws and nylon tubing cut to equal lengths so the boards stand off from each other sufficiently. Power comes from the 5V 2.5A power supplies purchased for each MinnowBoard unit. Given more time, I would likely have found a single power supply to drive the entire cluster and built a nicely integrated chassis, but time was a luxury I didn’t have for this project.
All nodes are loaded with a slimmed-down SUSE Linux Enterprise Server 12 installation. To create the image, I deselected packages I didn’t want from the details section of the software configuration, removing the GUI, CUPS, and a number of other items. You also need to add openmpi and any other libraries your environment requires (in this case, I needed libblas3 and nis-client). At the end of the install, during the reboot, I powered off the unit, removed the key, and placed it in another SLES system. On that system, I used dd to make an exact copy of the stick:
dd if=/dev/sdb of=/root/hpc.raw bs=4k
I then removed the stick and proceeded to clone the remaining 5 sticks using dd.
dd if=/root/hpc.raw of=/dev/sdb bs=4k
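Because dd produces byte-identical clones, each compute node still needs its own identity after imaging. A minimal sketch of one naming scheme for the five compute sticks; the node names and the 192.168.1.0/24 subnet are my invention, not from the actual cluster:

```shell
# Hypothetical per-node identities for the five cloned compute sticks.
# Assign these on each node's first boot (hostname and static or
# DHCP-reserved address) so the clones don't collide on the network.
for i in 1 2 3 4 5; do
  host="node$i"
  ip="192.168.1.$((10 + i))"
  echo "$host $ip"
done
```

In practice you would apply each pair on the node itself (or hand it to the head node’s DHCP server as a fixed lease) rather than just printing it.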
After returning the head node’s USB drive to the USB 3 port on the Minnowboard, I booted and installed a few extra items, the first being DHCP and DNS, which make it easier to configure the additional nodes. With those services up and running, we had the bones of a functioning cluster, and I powered it up.
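To give a flavor of the DHCP side, here is a sketch of a dhcpd.conf fragment for handing the compute nodes fixed addresses. The subnet, MAC address, and node name are illustrative assumptions, not values from the actual cluster:

```
# Illustrative dhcpd.conf fragment for the cluster's private network.
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  option domain-name-servers 192.168.1.1;

  # One fixed lease per compute node (MAC shown is a placeholder).
  host node1 {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 192.168.1.11;
  }
}
```

Fixed leases keep the node-to-address mapping stable across reboots, which makes the MPI host file trivial to maintain.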
Now I needed to do something with the cluster. Like any geek, I decided I should benchmark the cluster. For this, I turned to LINPACK. I downloaded HPL and proceeded to get it compiled with some help from my friend and coworker, Vojtech Pavlik (Director of SUSE Labs).
Getting HPL compiled required adding:
- gfortran 77 (create a symlink gfortran -> g77 in …)
It also required removing -DHPL_CALL_CBLAS from the Makefile, since we are using the classic Fortran BLAS rather than CBLAS.
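HPL’s build is driven by an architecture file (Make.&lt;arch&gt;). A hedged sketch of the linear-algebra section when linking the classic Fortran BLAS directly; the library path is illustrative and may differ on SLES 12:

```
# In Make.<arch>: point at the reference BLAS and leave -DHPL_CALL_CBLAS
# out of HPL_OPTS so HPL calls the Fortran 77 BLAS interface directly.
LAdir    = /usr/lib64
LAinc    =
LAlib    = -lblas
HPL_OPTS =
```

With that in place, the usual `make arch=<arch>` from the HPL top directory produces the xhpl binary.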
After doing all of this, I now have a cluster that turns in a LINPACK result just shy of 3 Gflops without further tuning. I hope to do some more tuning Monday and Tuesday to bring the cluster closer to the performance it should be capable of. I expect that tuning of the HPL.dat file by an expert will yield much better results, and I also suspect there are bits of the OS that could be tuned.
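For a rough sense of the headroom, you can estimate a theoretical peak with simple arithmetic. Every number below is a placeholder assumption (core count, clock, and flops per cycle are not confirmed Minnowboard Max specs), so treat the result as an illustration of the method, not a claim about the hardware:

```shell
# Back-of-the-envelope peak estimate; all figures are assumptions.
NODES=6         # boards in the cluster
CORES=2         # cores per board (assumed)
MHZ=1330        # clock speed in MHz (assumed)
FLOPS_CYCLE=2   # double-precision flops per cycle per core (assumed)

PEAK_MFLOPS=$((NODES * CORES * MHZ * FLOPS_CYCLE))
echo "Theoretical peak: ${PEAK_MFLOPS} MFLOPS"
```

Comparing a measured HPL number against an estimate like this is the usual way to judge how much tuning headroom remains; achieved-to-peak efficiency on a small Gigabit Ethernet cluster is typically well below what a tuned system with fast interconnect reaches.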
If you are an HPC geek and want to come try your hand at finding more knobs to tune on the system, make it a point to come by the SUSE booth #3943 and we’ll see what we can do.