Digital Cosmology Timeline
Because dark matter dominates the universe, the first simulations, carried out in the early 1980s, modeled the formation of large scale structures resulting from the gravitational effects of dark matter alone. The numerical cosmologists traced the gravitational interactions of "mass clouds" of dark matter particles within grid boxes. The approach--a so-called N-body problem--assumes that each particle in the system exerts a force on every other particle. By assuming that any luminous matter in the universe would evolve in the same way as dark matter under gravity's influence, the scientists could show, first in two dimensions and later in three, how the interactions of these mass clouds might change the density distribution of matter.
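The N-body idea can be sketched in a few lines. The toy code below (Python, purely for illustration--the original simulations used far larger particle counts on vector and parallel machines) sums the gravitational pull of every particle on every other; the softening length and time step are assumed values for the sketch, not parameters from any actual simulation.

```python
import numpy as np

def accelerations(pos, mass, soft=0.05):
    """Direct-summation gravity: every particle pulls on every other
    (G = 1 in code units; 'soft' is an assumed softening length that
    tames divergent close encounters)."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                      # vectors from particle i to all others
        r2 = (d * d).sum(axis=1) + soft**2    # softened squared distances
        r2[i] = 1.0                           # placeholder to avoid 0/0 on self term
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                       # a particle exerts no force on itself
        acc[i] = (mass[:, None] * d * inv_r3[:, None]).sum(axis=0)
    return acc

def step(pos, vel, mass, dt=0.01):
    """Advance one leapfrog (kick-drift-kick) time step."""
    vel = vel + 0.5 * dt * accelerations(pos, mass)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accelerations(pos, mass)
    return pos, vel
```

Direct summation costs work proportional to the square of the particle count, which is why the production codes of the era mapped particles onto grid boxes and solved for the gravitational field there instead.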
Of course, the real universe contains luminous, or baryonic, matter as well; any attempt to model the real universe must therefore incorporate baryonic matter in the form of gas clouds (because that was the state of baryonic matter at the time of galaxy formation). These more sophisticated simulations, begun in the early 1990s, were made possible by more advanced computing engines: multi-processor vector computers such as the Convex C-3380, and massively parallel machines such as the Connection Machine 5. The simulations required hydrodynamics codes that could simulate the behavior of gas under a variety of conditions and, like the pure dark matter simulations, ran first in two dimensions and later in three.
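A hydrodynamics code tracks how gas flows from cell to cell of a grid. The sketch below shows the flavor of such finite-difference schemes in the simplest possible setting--first-order upwind transport of a density field in one dimension--and is vastly simpler than any production code of the era; the grid size and speeds are assumed values for illustration.

```python
import numpy as np

def advect_upwind(rho, v, dx, dt, steps):
    """Transport a 1-D density field rho at constant speed v > 0 using
    first-order upwind differencing with periodic boundaries: each cell
    is updated from the flux arriving from its upstream neighbor."""
    c = v * dt / dx            # Courant number; the scheme is stable only for c <= 1
    assert 0.0 < c <= 1.0
    for _ in range(steps):
        rho = rho - c * (rho - np.roll(rho, 1))
    return rho
```

Run on a Gaussian density pulse, the pulse drifts downstream at speed v (and smears a little, the price of the scheme's simplicity), while the total mass on the grid stays fixed--the conservation property real hydrodynamics codes are built around.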
A 1994 simulation modeled the way galaxy clusters and superclusters might evolve in a universe consisting of cold and hot dark matter, plus baryonic gas. This simulation had a twist: it portrayed the universe as it would be seen in the X-rays emitted by the superhot gas that surrounds the galaxies as they form.
Practically speaking, this requires simulations with at least one billion dark matter particles and a similar number of fluid cells. That, in turn, requires computers with larger memories and faster processors than are currently available. How much bigger and faster, and when might these computers become available? The largest simulation carried out to date--the mixed dark matter and gas simulation of Greg Bryan and Michael Norman--utilized 50 million dark matter particles and 134 million gas cells on the Connection Machine-5 massively parallel supercomputer.
Roughly speaking, a computer twenty times faster and with twenty times as much memory is required. This amounts to a machine with a peak speed of about 1 Teraflop and a memory of about 300 Gigabytes. Historically, computer speed has doubled every 1.5 years (see timeline), and memory has kept pace. Teraflop number crunchers should therefore be available around the year 2000. Indeed, several computer vendors have Teraflop computer development projects with delivery dates around then.
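The arithmetic behind that forecast is easy to check. Using the counts given in the text (50 million particles today, a billion needed) and the historical doubling time of 1.5 years:

```python
import math

# Counts from the text: Bryan & Norman's run vs. the stated requirement.
current_particles = 50e6
target_particles = 1e9

growth = target_particles / current_particles   # the ~20x factor quoted above
doublings = math.log2(growth)                   # ~4.3 successive doublings
years = doublings * 1.5                         # speed doubles every 1.5 years
# ~6.5 years: counted from the 1994 simulation, that lands near the year 2000
print(f"{growth:.0f}x -> {doublings:.1f} doublings -> ~{years:.1f} years")
```

So the "around the year 2000" estimate follows directly from the twenty-fold growth factor and the historical doubling rate.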
A chief stumbling block that future simulations will have to overcome is the treatment of star formation--a process that is currently not well understood. Yet it is the transformation of cosmic gas into stars that lies at the heart of the galaxy formation process, and, after all, it is the galaxies that we see! Current strategies for modeling cosmic evolution, such as the approach used by Renyue Cen and Jeremiah Ostriker ("The Works!"), rely on simplified prescriptions of star formation for lack of a better theory.
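To make "simplified prescription" concrete, here is a toy recipe in the same spirit: gas that is dense enough and being compressed turns into stars at a rate tied to how fast it can collapse. The threshold, efficiency, and eligibility criteria below are illustrative assumptions, not the actual recipe used by Cen and Ostriker.

```python
import numpy as np

def star_formation_step(gas_density, div_v, dt, threshold=1.0, efficiency=0.1):
    """Toy star-formation prescription (illustrative assumptions only):
    in cells that exceed a density threshold and where the flow is
    converging (div_v < 0), convert gas to stars at a rate proportional
    to the local collapse rate, which scales as sqrt(density)."""
    eligible = (gas_density > threshold) & (div_v < 0.0)
    rate = efficiency * gas_density * np.sqrt(gas_density)   # d(rho_stars)/dt
    d_stars = np.where(eligible, rate * dt, 0.0)
    d_stars = np.minimum(d_stars, gas_density)               # cannot exceed the gas present
    return gas_density - d_stars, d_stars
```

The appeal of such rules is that they conserve mass and cost almost nothing per cell; the weakness, as the text notes, is that the threshold and efficiency stand in for physics that is not yet understood.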