Over the past few days, I’ve been attending the International Supercomputing Conference 2012 in Hamburg. This is one of two events (the other being Supercomputing in the US) where the newest list of the top 500 supercomputers in the world is released. Amie just published a nice blog post about SuperMUC, the fastest supercomputer in Europe, which runs SUSE Linux Enterprise Server 11 SP2.
At the show, I got so many good questions about SUSE’s “role” in HPC that there wasn’t enough time to go into detail on all of them. So I decided to make up for it with a short series of blog posts.
The first question I’d like to address is “What is your perspective on how Supercomputing, High Performance Computing and Linux have evolved over the past decades?”
The past few years have seen significant changes in the High Performance Computing landscape – recently often referred to as High Productivity Computing. This happened at least in part due to the emergence of open source and new clustering technologies.
A few years back, UNIX variants such as AIX, HP-UX, Tru64 UNIX, Solaris, etc., ruled. Clustering independent, commodity-class machines and building supercomputers out of them was a controversial idea as recently as 15 years ago. For the last 20 years, HPC technologies have been used mainly (and still are) in areas such as academic research, fluid dynamics, oil and gas exploration, computer-aided design and testing, and pharmaceutical and military research. The historically high cost of HPC or “supercomputers” had limited their use to market segments that could afford these systems.
But the evolution of both lower-cost hardware and Linux has dramatically reduced the cost of these systems. Compute power has increased a thousandfold in just a few years, allowing many companies to harness the power of supercomputers in the form of an HPC Linux cluster on commodity hardware.
Relatively suddenly (by market standards), Intel and AMD processors replaced RISC processors, and thanks to its maturation and low cost, Linux unseated UNIX as the dominant operating system. Today, Linux is a given in HPC environments and has displaced the majority of UNIX systems. While the low price was a key argument for Linux in the past, today customers buy Linux-based systems for their excellent performance, reliability, scalability, and security. And because it is open source, the TCO of Linux infrastructures naturally remains unbeaten.
Linux has steadily incorporated HPC features over the years and has become the primary OS for clustering and HPC deployments. Excellent operating system performance is required to achieve the best possible performance and scalability of an HPC system. To gain performance, HPC systems running on Linux have also been spearheading the industry in the deployment of the latest architectures, such as 64-bit processors (Intel Itanium 2 or AMD Opteron) and interconnect technologies like InfiniBand.
Virtually every industry is adopting Linux clusters to attain the performance improvements needed to deliver on organizational goals. Seismic analysis for oil exploration; aerodynamic simulation for motor and aircraft design; molecular modeling for biomedical research; and data mining and financial modeling for business analysis all leverage HPC. Organizations are also adopting clusters based on Linux to ensure constant uptime, while still leveraging the flexibility, reliability and low cost of open source.
Linux clusters have also become easy to set up and simple to manage. More importantly, there are many resources available for HPC on Linux – many of them free. Today, even large businesses and research agencies are using Linux for their HPC requirements, because Linux on a cluster of x86 servers is simply more economical.
COMING SOON – Part II: SUSE’s evolution and position