CASPER THE FRIENDLY GHOST

[Image: 090731-supernova-sim-01.jpg] This astrophysics simulation seeks to discover the mechanism behind core-collapse supernovae, the violent deaths of short-lived, massive stars. The image shows entropy values in the core of the supernova, with different colors and transparencies assigned to different entropy values. By selectively adjusting the colors and transparencies, a scientist can peel away the outer layers and see values in the interior of the 3-D volume. Credit: Hongfeng Yu


A new view of supernovas — the spectacular explosions of dying stars — has come not from a telescope, but from a powerful supercomputer simulation.

The simulated supernova, revealed in cut-away layers in the rendering, grew out of an effort to develop faster ways to build high-fidelity computer models of complex phenomena in the real world.

Performing a single run of a current model of an exploding star on a home computer would be next to impossible: it would take more than three years just to download the data. So scientists instead use supercomputers, which can process quadrillions of data points at a time.
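
How big a barrier is that? A quick back-of-envelope calculation gives a feel for it; the dataset size and connection speed below are illustrative assumptions, not figures from the researchers.

    # Rough download time for a supercomputer-scale dataset over a home
    # connection. Both numbers are assumptions for illustration.
    dataset_bytes = 120e12             # assume ~120 terabytes of output
    link_bits_per_second = 10e6        # assume a 10 Mbit/s connection

    seconds = dataset_bytes * 8 / link_bits_per_second
    years = seconds / (365.25 * 24 * 3600)
    print(f"{years:.1f} years to download")    # about 3 years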

"On the scale that we're working, creating a movie would take a very long time on your laptop — just rotating the image one degree could take days," said Mark Hereld, who leads the visualization and analysis efforts at the U.S. Department of Energy's Argonne National Laboratory.

The models run on these computers can generate visualizations of everything from supernovas to protein structures. But even with the speed supercomputers provide, the complex models are quickly overwhelming current computing capabilities.

Scientists at Argonne are exploring other ways to speed up the process, using a technique called software-based parallel volume rendering.

Volume rendering is a technique for making sense of the billions of tiny data points, or voxels, collected from an X-ray, an MRI, or a researcher's simulation. The effort could particularly help researchers manipulate the resulting images and create movies from them.
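
In essence, the renderer casts a ray through the data for every pixel of the final image; a transfer function maps each data value to a color and an opacity, and the samples along the ray are blended front to back. The sketch below is a generic, simplified version of that idea, not the Argonne code, and its transfer function is a toy one.

    import numpy as np

    # Minimal ray-casting volume renderer (a generic sketch, not the
    # Argonne code). One ray per pixel marches through the voxel grid;
    # a transfer function maps each value to a color and an opacity,
    # and the samples are composited front to back.

    def transfer_function(value):
        """Toy mapping from a scalar (e.g., entropy) to color + opacity."""
        color = np.array([value, 0.2, 1.0 - value])   # blue-to-red ramp
        opacity = 0.05 * value       # low opacity reveals inner layers
        return color, opacity

    def render(volume, image_size=64):
        """Cast one ray per pixel straight along z (the simplest case)."""
        nz, ny, nx = volume.shape
        image = np.zeros((image_size, image_size, 3))
        for py in range(image_size):
            for px in range(image_size):
                vy, vx = py * ny // image_size, px * nx // image_size
                color, alpha = np.zeros(3), 0.0
                for vz in range(nz):          # front-to-back compositing
                    c, a = transfer_function(volume[vz, vy, vx])
                    color += (1.0 - alpha) * a * c
                    alpha += (1.0 - alpha) * a
                    if alpha > 0.99:          # early ray termination
                        break
                image[py, px] = color
        return image

    # Toy data: a bright spherical "core" fading toward the edges.
    z, y, x = np.mgrid[-1:1:32j, -1:1:32j, -1:1:32j]
    core = np.clip(1.0 - np.sqrt(x**2 + y**2 + z**2), 0.0, 1.0)
    print(render(core).shape)                 # (64, 64, 3)

Lowering the opacity that the transfer function assigns to certain value ranges is exactly what lets a scientist peel away outer layers and look inside the volume, as in the image above.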

The techniques the researchers are working on use parallel computing: processing data across many computing cores at once (160,000 cores in Argonne's supercomputer). The data is then usually sent to graphics processing units (GPUs) to make images, but because most GPUs are developed for the video gaming industry, they are not always well suited to scientific tasks. Moving all that data to the GPUs also takes a lot of computing power.

Argonne researchers wanted to know if they could improve performance by skipping the transfer to the GPUs and instead performing the visualizations right there on the supercomputer. They tested the technique on a set of astrophysics data and found that they could indeed increase the efficiency of the operation.
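
One common way to organize such in-place rendering, sketched below, is the "sort-last" approach: each core renders only its own slab of the volume into a partial image, and the partial images are then blended in depth order. This is an assumed, simplified structure for illustration, not the team's actual implementation; on a real machine each slab would live on a different node, with the blending done over the interconnect.

    import numpy as np

    # Sketch of software-based parallel volume rendering in the
    # "sort-last" style (an assumed structure for illustration). The
    # volume is split into slabs along the viewing axis; each worker
    # renders its slab to a partial RGBA image; the partial images are
    # then composited in depth order with the "over" operator.

    def render_slab(slab, opacity_scale=0.05):
        """Composite one slab's voxels front to back (grayscale toy)."""
        ny, nx = slab.shape[1:]
        rgb, alpha = np.zeros((ny, nx, 3)), np.zeros((ny, nx))
        for layer in slab:                      # march along z
            a = opacity_scale * layer
            c = np.repeat(layer[..., None], 3, axis=-1)
            rgb += (1.0 - alpha)[..., None] * a[..., None] * c
            alpha += (1.0 - alpha) * a
        return rgb, alpha

    def over(front, back):
        """Blend a nearer partial image over a farther one."""
        rgb_f, a_f = front
        rgb_b, a_b = back
        return (rgb_f + (1.0 - a_f)[..., None] * rgb_b,
                a_f + (1.0 - a_f) * a_b)

    volume = np.random.rand(64, 32, 32)         # toy scalar field
    slabs = np.array_split(volume, 4, axis=0)   # 4 workers, one slab each
    partials = [render_slab(s) for s in slabs]  # independent: parallelizable
    image = partials[0]
    for partial in partials[1:]:                # front-to-back depth order
        image = over(image, partial)
    print(image[0].shape)                       # (32, 32, 3)

Because each slab is rendered independently, the expensive part of the work scales across all the cores; only the lightweight partial images have to be exchanged at the end.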

"We were able to scale up to large problem sizes of over 80 billion voxels per time step and generated images up to 16 megapixels," said Tom Peterka, a postdoctoral researcher at Argonne.

This new visualization method could enhance research in a wide variety of disciplines, Hereld said.

"In astrophysics, studying how stars burn and explode pulls together all kinds of physics: hydrodynamics, gravitational physics, nuclear chemistry and energy transport," he said. "Other models study the migration of dangerous pollutants through complex structures in the soil, to see where they're likely to end up; or combustion in cars and manufacturing plants—where fuel is consumed and whether it's efficient."

"Those kinds of problems often lead to questions that are very complicated to pose mathematically," Hereld said. "But when you can simply watch a star explode through visualization of the simulation, you can gain insight that's not available any other way."
 