Sector: .edu & .gov
Location: United States

Pittsburgh Supercomputing Center Wins National Science Foundation Grant with SUSE Linux Enterprise Server

Highlights

  • Implemented the largest cache-coherent shared-memory system in the world
  • Delivered outstanding performance for more than 1,300 researchers working on 373 projects
  • Operated without a security incident while providing open access to systems
  • Reduced genome processing time from almost two weeks to less than eight hours


The Pittsburgh Supercomputing Center (PSC) won a grant from the U.S. National Science Foundation to provide its Blacklight research platform, based on the SGI UV 1000 supercomputer running SUSE Linux Enterprise Server (SLES). Blacklight is unique in featuring extremely large cache-coherent shared memory, which can reduce data-processing time from weeks to hours while increasing accuracy.


The Pittsburgh Supercomputing Center (PSC) competes to support scientific research ranging from fluid dynamics to climate modeling and genomics. It won a National Science Foundation grant with an SGI UV 1000 cache-coherent shared-memory system based on SUSE Linux Enterprise Server (SLES). The system now hosts 1,316 users and 373 research projects at universities across the United States, offering unparalleled ease of use for rapidly testing new ideas.

The Challenge

The U.S. National Science Foundation (NSF) periodically issues solicitations for solutions in its Extreme Science and Engineering Discovery Environment (XSEDE), a US$121-million project that integrates digital resources and services for universities and research centers across the United States. The NSF maintains a rigorous selection process for resource providers, screening for solutions that offer tremendous capabilities, maximum productivity, the ability to share knowledge and the power to make XSEDE the most advanced and capable cyberinfrastructure in the world.

PSC, with its long history of success, highly regarded reputation and people who are respected throughout the industry, proposed a unique shared-memory supercomputing system that would be much faster and more efficient than previous distributed-memory systems. PSC also had a long relationship with SGI and knew the supercomputer maker was the only provider that could deliver the unique shared-memory capabilities it was looking for.

“SUSE Linux Enterprise Server is the only distribution that supports the full capabilities of the SGI machine. It was a no-brainer for this application. We use it. We recommend it. SUSE has a newer kernel than other options, making it the best choice.”

Jim Kasdorf, Director of Special Projects, Pittsburgh Supercomputing Center

The SUSE Solution

PSC selected SGI, the maker of the SGI UV 1000 system, as its partner in building the shared-memory foundation for its XSEDE proposal. Shared memory far surpasses distributed memory because all the processors can access all the memory, whereas distributed-memory systems require additional code to access memory attached to other processors. Programming for a shared-memory machine is thus much easier, and processing is far faster and more efficient.
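The contrast between the two programming models can be sketched in miniature. The following is a hypothetical Python illustration, not PSC's actual code: in the shared-memory style, every worker reads one array directly; in the message-passing style, each worker must receive its own copy of the data and send its result back.

```python
# Toy contrast between shared-memory and message-passing parallelism.
# Illustrative only; real HPC codes would use OpenMP/threads vs. MPI.
from multiprocessing import Process, Queue, Array


def shared_worker(arr, out, idx, lo, hi):
    # Shared-memory style: the worker reads the common array directly
    # and writes its partial sum into its own slot of a shared output.
    out[idx] = sum(arr[lo:hi])


def distributed_worker(chunk, q):
    # Message-passing style: the data arrived as a private copy ("message");
    # the result is sent back through a queue.
    q.put(sum(chunk))


def main():
    data = list(range(8))  # sum is 28

    # Shared-memory style: one array visible to all workers.
    arr = Array("i", data)
    out = Array("i", [0, 0])
    ps = [Process(target=shared_worker, args=(arr, out, i, i * 4, i * 4 + 4))
          for i in range(2)]
    for p in ps:
        p.start()
    for p in ps:
        p.join()
    shared_total = out[0] + out[1]

    # Message-passing style: each worker gets its own chunk copy.
    q = Queue()
    ps = [Process(target=distributed_worker, args=(data[i * 4:i * 4 + 4], q))
          for i in range(2)]
    for p in ps:
        p.start()
    results = [q.get() for _ in range(2)]
    for p in ps:
        p.join()
    distributed_total = sum(results)

    return shared_total, distributed_total


if __name__ == "__main__":
    shared, distributed = main()
    print(shared, distributed)  # 28 28
```

Both styles compute the same answer; the difference is that the message-passing version must explicitly copy data to and results from each worker, which is the extra code and overhead the shared-memory design avoids.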

“SGI is a unique supplier of large, hardware cache-coherent shared-memory machines,” said Jim Kasdorf, director of special projects at PSC. “Software shared-memory approaches exist, but they are far less efficient. We knew when we decided on a shared-memory machine that SGI was the only choice.”

Selecting the operating system for the SGI supercomputer was an even easier choice. PSC has worked with SUSE since 2004, when SUSE provided the operating system for components of a Cray XT3 computer. When SGI designed its shared-memory supercomputer, SUSE Linux Enterprise Server was the only operating system that could provide the necessary support. “SUSE Linux Enterprise Server is the only distribution that supports the full capabilities of the SGI machine,” said Kasdorf. “It was a no-brainer for this application. We use it. We recommend it. SUSE has a newer kernel than other options, making it the best choice.”

The SUSE support team meets weekly with SGI to ensure its needs are met, and the SGI support team meets weekly with PSC. “They are very responsive. They do a very good job. They work very hard. The users are happy. And when the users are happy, we’re happy,” said Kasdorf.

The Results

SLES supports this SGI shared-memory system that holds 256 blades, 4,096 processing cores and 32 terabytes of memory in two 16-terabyte partitions. This is the largest cache-coherent shared-memory system in the world. And the benefits to researchers are unparalleled. More than 1,300 users are taking advantage of the system for research in 373 projects, covering extreme-scale performance engineering, chemistry, fluid dynamics, the early universe, condensed matter, seismic analysis, nanomaterials, astrophysics, climate modeling and genomics. One example involves researchers who hope to build a diagnostic chip that may identify heart disease in humans. They have been screening more than 100,000 mutant mice to find heart defects, sequencing the genomes and comparing the results to the genome of a healthy mouse. With the PSC-SGI machine running SLES, processing that had been taking almost two weeks was cut to less than eight hours.
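As a toy illustration of that comparison step, the following hypothetical Python sketch (with made-up sequences, not PSC's actual pipeline) finds the positions where a mutant sequence differs from a healthy reference. A real genomics workflow would first align full genomes, which is the memory-hungry part a large shared-memory machine accelerates.

```python
# Hypothetical sketch of comparing a mutant genome against a reference
# to locate candidate mutations. Sequences below are illustrative only.

def find_mutations(reference: str, mutant: str):
    """Return (position, ref_base, mutant_base) for every mismatch.

    Assumes the two sequences are already aligned and of equal length;
    a real pipeline would perform a full sequence alignment first.
    """
    return [(i, r, m)
            for i, (r, m) in enumerate(zip(reference, mutant))
            if r != m]


reference = "ATGCGTACGTTAGC"
mutant    = "ATGCGAACGTTCGC"

for pos, ref_base, mut_base in find_mutations(reference, mutant):
    print(f"position {pos}: {ref_base} -> {mut_base}")
```

Running this on the two example strings reports the two mismatched bases (positions 5 and 11), the kind of per-position comparison that, scaled to whole genomes from 100,000 mice, turns into a two-week batch job without sufficient memory.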

A system such as this must be readily accessible to researchers all over the United States, yet the research must be kept secure. PSC has been very successful in maintaining security, and this SGI system has never had a security incident. “We’ve been very successful in providing the security while providing the open access that’s necessary,” said Kasdorf. “SUSE is very good, and they stay very up to date on security.”

The extraordinary memory size, ease of programming, scalability and stability of the PSC system, built on the SGI UV 1000 running SLES, have given scientists and engineers the ability to solve problems in ways that were never possible before.