Paradigm Shift: Why HPC Stands for High Productivity Computing | SUSE Communities


The past few years have seen innovation in the High Performance Computing (HPC) landscape shift from research to commercial use. Thanks to open source software and new clustering technologies, supercomputing has spread across geographies and industries beyond research and academia, into areas such as energy, health care, entertainment and retail, among many others. Over the last few decades, large organizations, including governments and universities, began to use Linux, making it the de facto standard for HPC. There is a strong affinity for open source across the board – and with publicly sponsored PhD research driving much of the development, HPC can arguably be done better with open source than any other way. The HPC environment built on Linux and open source is highly mature – as the prevalence of Linux in the TOP500 supercomputer list shows – and provides ease of use for commercial customers as well.

(Picture of MareNostrum, widely called the most beautiful supercomputer, operated by BSC)
But how did we get here?

A Faster Pace of Innovation

For the last 25 years, HPC technologies have been used mainly (and still are) in areas such as academic research, fluid dynamics, oil and gas exploration, computer-aided design and testing, and pharmaceutical and military research. The historically high cost of HPC limited its use to market segments that could afford such systems. But the evolution of lower-cost hardware and of Linux has dramatically reduced the cost of these systems, while compute power has increased a thousandfold in just a few years, allowing many companies to harness the power of supercomputers in the form of an HPC Linux cluster on commodity hardware.

The term ‘High Performance Computing’ (HPC) originally describes the use of parallel processing to run advanced application programs efficiently, reliably and quickly. It applies especially to systems that function above a teraflop, or 10^12 floating-point operations per second, and is often used as a synonym for supercomputing, although technically a supercomputer is a system that performs at or near the highest operational rate currently achieved by computers. To increase system performance, the industry has moved over time from uni-processor systems to SMP, to distributed-memory clusters, and finally to multi-core chips. However, for a growing number of users and vendors, HPC today no longer refers to cores, cycles, or flops, but to discovery, efficiency, or time to market.
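To make the flops figures above concrete, here is a minimal sketch (not from the original article) of how a system's theoretical peak is usually estimated: nodes × cores per node × clock rate × floating-point operations per cycle. The cluster parameters below are purely hypothetical.

```python
# Illustrative sketch: theoretical peak performance of a hypothetical cluster.
# Peak FLOPS = nodes * cores_per_node * clock_rate * flops_per_cycle

def peak_flops(nodes, cores_per_node, clock_ghz, flops_per_cycle):
    """Theoretical peak in floating-point operations per second."""
    return nodes * cores_per_node * clock_ghz * 1e9 * flops_per_cycle

# A hypothetical 100-node cluster: 16 cores/node, 2.5 GHz,
# 8 double-precision FLOPs per core per cycle (e.g. via SIMD units).
peak = peak_flops(100, 16, 2.5, 8)
print(f"{peak / 1e12:.0f} teraflops")  # prints: 32 teraflops
```

Real applications typically sustain only a fraction of this peak, which is one reason the industry's focus has shifted from raw flops to productivity.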

Reinterpreting HPC as ‘High Productivity Computing’ highlights the idea that HPC delivers more effective and scalable productivity to customers – and this term fits most commercial customers very well.

But with regards to High Productivity Computing, there are still some challenges to solve:

Time is money!

More businesses than ever are now using computers not only to manage their operations but also as part of their deliverables (animation, stock trading, financial analysis, weather forecasting) or to support the creation of products (oil exploration, car crash simulation, drug research). Getting this work done faster shortens time to market, and making it more accurate provides a larger safety margin. Running it as fast as possible is therefore a competitive advantage – and that is exactly what an HPC system offers. As an example of upgrading a cluster, a team recently built a new HPC system for a healthcare client: the new cluster was ten times faster than the old one, at half the price and a quarter of the size.
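How much faster a cluster can make a given workload is commonly reasoned about with Amdahl's law: the overall speedup is limited by the fraction of the work that cannot be parallelized. The sketch below (an illustration added here, not part of the original article) uses hypothetical numbers.

```python
# A minimal sketch of Amdahl's law: overall speedup is capped by
# the serial (non-parallelizable) fraction of the workload.

def amdahl_speedup(parallel_fraction, n_workers):
    """Speedup when parallel_fraction of the work runs on n_workers."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# A job that is 95% parallelizable, run on 64 nodes:
print(round(amdahl_speedup(0.95, 64), 1))  # prints: 15.4
```

Even with 64 workers, the remaining 5% of serial work keeps the speedup well below 64x – which is why both hardware and software improvements matter when upgrading a cluster.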

Big Data or HPC?

Another important category is “ultra-scale business computing”. Commercial companies with data-intensive tasks are adopting HPC. Just take a look at companies like Google, Amazon, Facebook or eBay: even if web-based transactions or searches are not traditional HPC workloads – and even if you might call this “big data” – at the end of the day these companies use HPC technologies to handle all that data processing and to run it at extreme scale.

Building Bridges!

A growing number of mid-market companies are also adopting HPC, due to changing business needs AND the availability of economical solutions. Quite a few organizations in the supercomputing arena are building bridges to SMEs by offering these companies access not only to their supercomputers but also to their own expertise in high performance computing, and collaborations between industry and government/academia continue to grow. Two examples are the Pittsburgh Supercomputing Center and the Irish Centre for High-End Computing (by the way, both sites run SUSE Linux Enterprise Server on their supercomputers).

Windows isn’t the only option …

Businesses that are used to Windows need to “have the heart” to check out alternatives – and dual-boot HPC systems might be a first step. As Linux and Windows look set to become the two dominant enterprise platforms of the future, there will be an increasing need for these operating systems, and the tools that manage them, to work well together. Systems that lack well-developed interoperability capabilities can cause inefficiencies throughout the enterprise. The same holds for HPC: it seems logical that the two major platforms in the High Productivity Computing market will be Linux (primarily) and Windows – and they need to interoperate well in this area too.

 … and Linux is your friend!

Thanks to its speedy adoption of technical innovations and improvements – or rather, because Linux very often ‘spearheads’ technical innovation – Linux will continue to play a significant role in the new HPC market dynamics, where HPC increasingly turns into High Productivity Computing. And as technology continues to evolve, supercomputing will become an essential technology across ever more industries.

In fact, the U.S. Department of Energy is currently working on deploying Summit, a new supercomputer capable of 200 petaflops, and is even pursuing the ultimate goal of exascale computing (1,000 petaflops of sustained performance), which is relevant also for industries that may adopt HPC in the coming decade. As open source and Linux continue to evolve, High Productivity Computing will be found in most data-driven enterprises and organizations as a key to their success.


Disclaimer: The text at hand has not been reviewed by a native speaker. If you find typos or language mistakes, please send them to me – or if you like them, just keep them and feed them. 😆



Meike Chabowski works as Documentation Strategist at SUSE. Before joining the SUSE Documentation team, she was Product Marketing Manager for Enterprise Linux Servers at SUSE, with a focus on Linux for mainframes, Linux in retail, and High Performance Computing. Prior to joining SUSE more than 20 years ago, Meike held marketing positions with several IT companies such as defacto and Siemens, and worked as an assistant professor for mass media. Meike holds a Master of Arts in Science of Mass Media and Theatre as well as a Master of Arts in Education from the University of Erlangen-Nuremberg, Germany, and a Master of Arts in Italian Literature and Language from the University of Parma, Italy.