Stanford researchers break the million-core supercomputer barrier
29 January 2013
Researchers claim to have set a new record in supercomputing, harnessing a million computing cores to model supersonic jet noise.

A floor view of the newly installed Sequoia supercomputer at the Lawrence Livermore National Laboratory (photo courtesy of Lawrence Livermore National Laboratory)
The work was performed at the Lawrence Livermore National Laboratory (LLNL) on a newly installed Sequoia IBM Blue Gene/Q system. Sequoia once topped the list of the world's most powerful supercomputers, boasting 1,572,864 processor cores and 1.6 petabytes of memory connected by a high-speed five-dimensional torus interconnect.
Because of Sequoia's impressive core count, Joseph Nichols, a research associate at Stanford's Centre for Turbulence Research, was able to show for the first time that million-core fluid dynamics simulations are not only possible, but can also contribute to research aimed at designing quieter aircraft engines. New nozzle shapes, for instance, can reduce jet noise at its source, resulting in quieter aircraft.
Predictive simulations based on advanced computer models aid in such designs. These complex simulations allow scientists to peer inside and measure processes occurring within the harsh exhaust environment that is otherwise inaccessible to experimental equipment. The data gleaned from these simulations are driving computation-based scientific discovery as researchers uncover the physics of noise.
Computational fluid dynamics (CFD) simulations test all aspects of a supercomputer. The waves propagating throughout the simulation require a carefully orchestrated balance between computation, memory and communication. Supercomputers like Sequoia divide the maths into smaller parts so they can be computed simultaneously. The more cores, the faster and more complex the calculations can be.
However, coordinating the calculation only becomes more challenging as cores are added. At the one-million-core level, previously innocuous parts of the computer code can suddenly become bottlenecks.
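As a rough illustration of the divide-and-communicate pattern described above, the sketch below splits a one-dimensional grid across MPI ranks and exchanges "ghost" cells between neighbours. It is a generic teaching example under invented names and sizes, not code from the Stanford or LLNL solvers.

/* Illustrative sketch only: 1-D domain decomposition with halo
 * exchange, the generic pattern used to split a flow calculation
 * across many cores. Not code from the Stanford/LLNL study. */
#include <mpi.h>
#include <stdio.h>

#define LOCAL_N 1024            /* grid cells owned by each rank (arbitrary) */

int main(int argc, char **argv) {
    int rank, nranks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Local slab plus one ghost cell on each side. */
    double u[LOCAL_N + 2];
    for (int i = 1; i <= LOCAL_N; i++)
        u[i] = (double)(rank * LOCAL_N + i);   /* dummy initial data */

    int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < nranks - 1) ? rank + 1 : MPI_PROC_NULL;

    /* Exchange boundary cells with neighbours; as the rank count
     * grows, this communication step is exactly where previously
     * innocuous code can become a bottleneck. */
    MPI_Sendrecv(&u[1],           1, MPI_DOUBLE, left,  0,
                 &u[LOCAL_N + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[LOCAL_N],     1, MPI_DOUBLE, right, 1,
                 &u[0],           1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* A stencil update can now read neighbour data from the ghosts. */
    if (rank == 0)
        printf("halo exchange complete across %d ranks\n", nranks);

    MPI_Finalize();
    return 0;
}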
Recently, Stanford researchers and LLNL computing staff have been working closely to solve these problems. This week they undertook the first full-system scaling runs, to see whether the code would achieve stable run-time performance across the entire machine. Performance continued to scale up to, and beyond, the all-important one-million-core threshold, and the time-to-solution declined dramatically.
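For context, a full-system scaling study of this kind is typically judged by speedup and parallel efficiency at a fixed problem size. The snippet below computes both from a table of wall-clock timings; the core counts echo Sequoia's scale, but the timings are hypothetical placeholders, not results from these runs.

/* Minimal sketch of how a strong-scaling run is evaluated: fixed
 * problem size, increasing core counts, and the resulting speedup
 * and parallel efficiency. The timings below are invented
 * placeholders, not data from the Sequoia runs. */
#include <stdio.h>

int main(void) {
    long   cores[]   = {131072, 262144, 524288, 1048576};
    double seconds[] = {800.0, 410.0, 215.0, 118.0};   /* hypothetical */
    int    n = (int)(sizeof(cores) / sizeof(cores[0]));

    double t_base = seconds[0];
    long   c_base = cores[0];
    for (int i = 0; i < n; i++) {
        double speedup    = t_base / seconds[i];
        double ideal      = (double)cores[i] / (double)c_base;
        double efficiency = speedup / ideal;
        printf("%8ld cores: %6.1f s, speedup %5.2fx, efficiency %4.0f%%\n",
               cores[i], seconds[i], speedup, efficiency * 100.0);
    }
    return 0;
}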
“These runs represent at least an order-of-magnitude increase in computational power over the largest simulations performed at the Centre for Turbulence Research previously,” said Nichols. “The implications for predictive science are mind-boggling.”
The computer code used in this study is named CharLES and was developed by former Stanford senior research associate Frank Ham. The code utilises unstructured meshes to simulate turbulent flow in the presence of complicated geometry.
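To make "unstructured meshes" concrete, the toy layout below shows the face-based connectivity such solvers rely on: cells are linked by an explicit face list rather than a regular lattice, which is what lets the mesh wrap around arbitrary nozzle or wing shapes. It is a minimal sketch of the general idea only; the actual CharLES data structures are not public here and will differ.

/* Toy face-based unstructured-mesh layout. Illustrative only:
 * not the CharLES internals. */
#include <stdio.h>

typedef struct {
    int owner;      /* cell on one side of the face */
    int neighbour;  /* cell on the other side, or -1 on a boundary */
} Face;

int main(void) {
    /* Four cells joined by an explicit face list. Connectivity is
     * stored per face, not implied by an i,j,k grid, so cells can
     * be arranged to fit complicated geometry. */
    Face faces[] = { {0, 1}, {1, 2}, {2, 3}, {3, -1} };
    double cell_value[4] = {1.0, 2.0, 4.0, 8.0};

    /* A flux loop visits faces rather than grid directions. */
    for (int f = 0; f < 4; f++) {
        if (faces[f].neighbour < 0) continue;  /* skip boundary faces */
        double jump = cell_value[faces[f].neighbour]
                    - cell_value[faces[f].owner];
        printf("face %d: jump across = %.1f\n", f, jump);
    }
    return 0;
}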
Stanford researchers are also using the CharLES code to investigate advanced-concept scramjet propulsion systems used in hypersonic flight and to simulate the turbulent flow over an entire aircraft wing.