If you are a video game enthusiast, you might be familiar with the importance of the graphics processing unit, or GPU. The GPU determines whether you can play a game with all the fancy visual effects turned on, how high you can set the screen resolution, and how many frames (the images you see on the screen) it can process in a second. Did you know that GPUs are used for scientific research as well? Our own premier supercomputer, Big Red II, has hundreds of NVIDIA GPUs inside it, and, as a gamer myself, I think it’s pretty neat that I get to use them for my research.
In my previous post, I introduced you to star clusters. In fact, I work on simulations of the evolution of globular clusters, the largest and oldest star clusters in the Milky Way. In this post, I will tell you how these simulations are conducted.
The most accurate simulations, called N-body simulations, follow every star as it is influenced by the gravitational pull of every other star in the cluster. The equations of motion for two objects orbiting each other under the influence of each other’s gravity are straightforward, but adding a third body makes the equations unsolvable without the help of a computer program. Imagine adding hundreds of thousands more bodies to this gravitational system to match the numbers seen in globular clusters!
If you’ve ever written a computer program, you know that it executes instructions in sequential order. This matters for explaining why these simulations are so computationally demanding. The simulation calculates the gravitational force that one star feels from a second star, then from a third star, and so on through all the stars in the cluster, to find the cumulative gravitational force on that first star. Then it repeats the process for another star, and another, until it has found the cumulative gravitational force on each of the 100,000 stars. All these forces are used to figure out where the stars will be when we advance the simulation in time, say by one thousand years, and to move them there. We repeat this whole process each time we advance the simulation by our thousand-year time step, for typically a few billion years of simulation time, which translates to repeating these calculations millions of times.
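To make this concrete, here is a minimal sketch in Python of the sequential pairwise loop described above. This is purely illustrative (the real N-body codes are far more sophisticated); the function names and the choice of units with a gravitational constant of 1 are my own assumptions.

```python
import numpy as np

G = 1.0  # gravitational constant in simulation units (illustrative assumption)

def accelerations(pos, mass):
    """Sequential O(N^2) loop: for each star, sum the pull from every other star.

    pos  -- array of shape (N, 3), star positions
    mass -- array of shape (N,), star masses
    """
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = pos[j] - pos[i]                   # vector from star i to star j
            dist = np.linalg.norm(r)
            acc[i] += G * mass[j] * r / dist**3   # Newton's law of gravitation
    return acc

def step(pos, vel, mass, dt):
    """Advance every star by one time step (a simple Euler update)."""
    acc = accelerations(pos, mass)
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel
```

Notice that each pass through the inner loop waits for the previous one to finish, even though no pair's result depends on any other pair's, which is exactly the inefficiency the next section addresses.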
A way around doing all of this in sequential order (as a traditional program would) is to split the work among multiple processors working in parallel so that the program finishes more quickly. Calculating the gravitational force between two stars is a simple computation, and it can be done without waiting for the result for any other pair of stars. This is where GPUs become useful for these kinds of simulations: they have hundreds of smaller, more efficient cores designed to do computations in parallel. The same technology that helps our video games run smoothly can also significantly speed up a scientific computing application. Simulation time can drop from months to weeks, or from weeks to days, which is great for researchers who run many simulations (and want to graduate on time).
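As a rough illustration of the parallel idea, the same force calculation can be written so that all N×N pairwise separations are computed in one batched operation. Here NumPy broadcasting plays the role of the GPU: on actual hardware, each pair would be handled by a separate core. Again, this is a sketch under my own assumptions (units with G = 1), not the production codes.

```python
import numpy as np

G = 1.0  # gravitational constant in simulation units (illustrative assumption)

def accelerations_parallel(pos, mass):
    """Compute all pairwise pulls at once instead of looping star by star.

    pos  -- array of shape (N, 3), star positions
    mass -- array of shape (N,), star masses
    """
    # r[i, j] is the vector from star i to star j, shape (N, N, 3)
    r = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]
    # Pairwise distances; adding the identity matrix puts a harmless 1 on the
    # diagonal so a star's zero distance to itself never causes division by zero
    dist = np.linalg.norm(r, axis=2) + np.eye(len(mass))
    # Sum each star's pull from every other star (the diagonal terms are zero
    # because r[i, i] is the zero vector)
    return (G * mass[np.newaxis, :, np.newaxis] * r
            / dist[:, :, np.newaxis] ** 3).sum(axis=1)
```

The payoff is that every element of these big arrays can be computed independently, which is precisely the pattern that hundreds of GPU cores exploit.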
Within IU’s Department of Astronomy, researchers in stellar dynamics conduct simulations of the evolution of globular clusters over their billion-year lifetimes with computer programs that use GPUs, and run the simulations on IU’s Big Red II. For the work that will be included in my PhD thesis, I have run at least a thousand simulations, made easier thanks to the supercomputing resources here at IU and the freely available N-body codes that use GPU acceleration. While I may not have the luxury of using the latest and greatest graphics cards for my gaming, I have the opportunity to use them for my research.