Video Games Revolutionize Scientific Computing

Video games themselves have not revolutionized computing, but the cards that render their images have. In the quest to simulate environments as realistic as possible, video game hardware developers created graphics processing units (GPUs). Now scientists are using these graphics cards to simulate the interactions between the fundamental constituents of matter.

GPU cards render realistic environments because video game programmers have built physics equations into the code that runs on these cards, so that the laws of physics apply during the game. “In other words, when a vase breaks against the wall in the game,” begins Chip Watson, manager of the High-Performance Computing group in the Information Technology Division at the Thomas Jefferson National Accelerator Facility (TJNAF), “the shards of broken glass fall and reflect light properly and look realistic.”

If GPUs were powerful enough to render realistic visuals for video games, what about simulating interactions between molecules, atoms, and subatomic particles? The question captured the attention of the nuclear science community. It did not hurt that GPUs were also becoming more powerful than the central processing units (CPUs) in standard computing systems. “About five years ago, a group of scientists led a heroic charge to utilize these inexpensive and energy-efficient GPUs in their research,” said Watson.

And thus began a new age of computing. Well, it may not be quite so dramatic, but the graphics card manufacturers noticed how the scientific community was using this technology and began marketing software tools to make it easier to load complex physics equations onto the cards.
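
One such toolkit is NVIDIA’s CUDA, which lets a programmer write the physics in a C-like language and fan it out across thousands of GPU threads. The sketch below is a minimal illustration of the idea, not the laboratory’s actual code: each thread integrates the equation of motion for a single object, say one shard of Watson’s broken vase.

```cuda
// Minimal CUDA sketch (illustrative only): one GPU thread per shard,
// all integrating the same equation of motion in parallel.
#include <cuda_runtime.h>
#include <cstdio>

struct Shard { float y, vy; };                    // height and vertical speed

__global__ void step(Shard* s, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    s[i].vy -= 9.81f * dt;                        // gravity, the same rule for all
    s[i].y  += s[i].vy * dt;                      // update position
}

int main() {
    const int n = 1 << 16;                        // 65,536 shards
    Shard* s;
    cudaMallocManaged(&s, n * sizeof(Shard));     // unified memory keeps the demo short
    for (int i = 0; i < n; ++i) s[i] = {10.0f, 0.0f};
    step<<<(n + 255) / 256, 256>>>(s, n, 0.016f); // advance one 60 Hz frame
    cudaDeviceSynchronize();
    printf("shard 0: y = %.3f m\n", s[0].y);
    cudaFree(s);
}
```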

Photo credit: Thomas Jefferson National Accelerator Facility

A portion of a nearly $5 million grant received as part of the DOE American Recovery and Reinvestment Act was used to purchase 200 GPUs and associated hardware for a new computing cluster dubbed “9G.”

Watson saw the writing on the wall and proposed dedicating a portion of Department of Energy Office of Science Recovery Act funds to the purchase of 200 GPUs and accompanying hardware for a new computer cluster. The only drawback is that GPUs differ from CPUs in how they are programmed. “They’re a bear to program, but they are definitely worth the effort,” said Watson.
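
Part of what makes them a bear is that the GPU has its own memory: a calculation that is a three-line loop on a CPU becomes an exercise in bookkeeping on the card. A hedged sketch of that pattern, again with illustrative names rather than Jefferson Lab’s code, using the same CUDA toolkit as above:

```cuda
#include <cuda_runtime.h>

// The CPU version: one ordinary loop.
void scale_cpu(float* v, int n, float a) {
    for (int i = 0; i < n; ++i) v[i] *= a;
}

// The GPU version of the same operation: a kernel plus explicit
// allocation on the card and copies across the PCIe bus.
__global__ void scale_kernel(float* v, int n, float a) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= a;                               // one thread per element
}

void scale_gpu(float* v, int n, float a) {
    float* dev = nullptr;
    size_t bytes = (size_t)n * sizeof(float);
    cudaMalloc(&dev, bytes);                            // separate device memory
    cudaMemcpy(dev, v, bytes, cudaMemcpyHostToDevice);  // copy data in
    scale_kernel<<<(n + 255) / 256, 256>>>(dev, n, a);  // launched, not called
    cudaMemcpy(v, dev, bytes, cudaMemcpyDeviceToHost);  // copy results back
    cudaFree(dev);
}
```

The payoff for the bookkeeping is the parallelism: the GPU runs the kernel on thousands of elements at once, where the CPU loop handles them one at a time.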

The new computer cluster became operational in January 2010 with the express purpose of tackling the difficult mathematics behind quark-gluon interactions, a basic component in describing matter in the Standard Model. “[Quark-gluon] interactions may seem esoteric, but these interactions are vital to understanding the fundamental properties of the universe,” said Watson.

Simulating these interactions is numerically demanding and initially requires supercomputers to crunch massive equations. The supercomputers produce a series of “snapshots” of how these constituents interact over time. “[Developing these snapshots] can take upwards of one year to complete using computing time on several supercomputers around the United States at a cost of millions of dollars,” said Watson.

With the snapshots in hand, the nuclear scientists turn to the new computer cluster to analyze what the pictures mean. “The GPUs are allowing us to take these large snapshots from these computers and push to much more realistic parameters,” said Robert Edwards, a senior staff scientist at TJNAF. According to Edwards, the GPUs also accelerate the process, analyzing as many as 10,000 quark-gluon snapshots to determine how a quark or another particle propagates. “And then we can tie the quarks and gluons together to make [a variety of exotic] particles,” said Edwards.
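
That workload suits the hardware because each measurement is independent: the same quantity is computed at every lattice site of every snapshot and then averaged over the ensemble. As a toy illustration of the pattern (production lattice QCD code is far more involved, and the names here are invented), the averaging step might look like this, launched the same way as the kernels above:

```cuda
#include <cuda_runtime.h>

// Toy sketch of the ensemble-analysis pattern: average a measured value
// over many independent snapshots (gauge configurations).
__global__ void ensemble_average(const float* snapshots,  // nconf x sites values
                                 float* avg, int sites, int nconf) {
    int s = blockIdx.x * blockDim.x + threadIdx.x;        // one thread per lattice site
    if (s >= sites) return;
    float acc = 0.0f;
    for (int c = 0; c < nconf; ++c)
        acc += snapshots[(size_t)c * sites + s];          // same site, every snapshot
    avg[s] = acc / nconf;                                 // statistical average
}
```

With some 10,000 configurations in an ensemble, the loop body runs billions of times, but every site is independent, which is exactly the kind of workload graphics cards were built for.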

“The [most amazing aspect of this project] is that we are using far more computer power in this cluster of GPUs (over 100 Teraflops) than we get from all the conventional resources we have at our three collaborating labs (17 Teraflops).”

Photo credit: Thomas Jefferson National Accelerator Facility

The collaboration has ordered an additional 336 graphics processing units, or GPUs, with delivery expected midsummer.

The cost of the aggregated system is also far lower. The GPU cluster may cost a million dollars to set up, but that pales in comparison to the cost of the initial phase of the project, which required the power of supercomputers. “It is not only less expensive, it is also a lot greener,” said Watson. The GPUs allow smaller computers to perform calculations using far less energy than the supercomputers.

The GPU system is not a substitute for supercomputers, but it enhances scientists’ ability to perform smaller tasks faster and with less energy.

The first steps of this project (“snapshot” generation) were conducted on several supercomputers: Jaguar at Oak Ridge National Laboratory, supported by the Department of Energy’s Office of Science, and Big Ben at the Pittsburgh Supercomputing Center and Ranger at the Texas Advanced Computing Center, both funded by the National Science Foundation.

This article was written by Stacy W. Kish