Performing multiscale cosmology calculations at high resolution requires enormous computing speed and memory, on the order of teraflops or more. Numerical cosmology, along with a number of other scientific disciplines, has become an important driver in the "March to the Teraflop."
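To see why teraflop speeds matter, consider a rough back-of-the-envelope estimate. The grid size, per-cell cost, and step count below are illustrative assumptions, not figures from any specific NCSA simulation:

```python
# Back-of-the-envelope estimate of why cosmology pushes toward teraflops.
# All workload numbers are illustrative assumptions.

cells = 1024 ** 3          # a 1024^3 uniform grid (~1.1 billion cells)
flops_per_cell_step = 500  # assumed cost of one update per cell
timesteps = 10_000         # assumed number of steps to evolve the model

total_flops = cells * flops_per_cell_step * timesteps

def runtime_seconds(flops_per_second):
    """Wall-clock time to finish the run at a sustained rate."""
    return total_flops / flops_per_second

gigaflop_days = runtime_seconds(1e9) / 86_400   # 1 Gflop/s machine
teraflop_hours = runtime_seconds(1e12) / 3_600  # 1 Tflop/s machine

print(f"total operations: {total_flops:.2e}")
print(f"at 1 Gflop/s: {gigaflop_days:.0f} days")
print(f"at 1 Tflop/s: {teraflop_hours:.1f} hours")
```

Under these assumptions a run that would occupy a sustained-gigaflop machine for two months finishes in under two hours at a sustained teraflop, which is what turns such simulations into a practical research tool.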
But scientists won't realize the necessary increases in performance through hardware alone. Entirely new computational approaches are needed to develop codes that fully exploit the parallelism of individual processors and of networked machines, making full use of each machine's unique capabilities, and doing all this transparently to the scientist. Such transparency means that researchers can spend more time on discovery and less on worrying about what's happening inside the number crunchers that are cracking their problems.

There's yet another, related challenge. Bigger, faster computations generate ever-growing mountains of data. Advanced visualization technologies, particularly "virtual reality," must be deployed to navigate, analyze, and even interact with the data pouring out of teraflop computers.
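A central idea behind exploiting that parallelism is domain decomposition: the simulation grid is split into slabs, one per processor, with a thin layer of "ghost" cells duplicated at each boundary so neighbors can exchange data. The sketch below is a minimal illustration of that bookkeeping; the function name and ghost-zone width are hypothetical, not taken from any NCSA code:

```python
def slab_decompose(n_cells, n_ranks, n_ghost=1):
    """Split a 1-D grid of n_cells into contiguous slabs, one per rank.

    Returns a list of (owned, padded) index ranges, where `padded`
    extends `owned` by n_ghost cells on each side (clipped at the
    grid boundary) so neighboring ranks can exchange boundary data.
    """
    base, extra = divmod(n_cells, n_ranks)
    slabs = []
    start = 0
    for rank in range(n_ranks):
        # Spread any remainder cells over the first `extra` ranks.
        width = base + (1 if rank < extra else 0)
        owned = (start, start + width)
        padded = (max(0, owned[0] - n_ghost),
                  min(n_cells, owned[1] + n_ghost))
        slabs.append((owned, padded))
        start += width
    return slabs

for rank, (owned, padded) in enumerate(slab_decompose(1024, 4)):
    print(f"rank {rank}: owns {owned}, computes on {padded}")
```

Real production codes do the same thing in three dimensions and layer message passing on top, but the principle is the one shown: every processor advances only its own slab, touching its neighbors only through the ghost cells.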
Some key steps toward realizing these lofty goals were demonstrated in December 1995 at the Supercomputing '95 conference.
In the Grand Challenge Cosmology Consortium, cosmologists, computer scientists, and visualization experts are combining their technical and intellectual resources to harness the teraflop computers that will answer cosmology's most fundamental questions.
Not many years from now cosmologists will be able to follow their flights of fancy by "experimenting" with alternative digital universes and comparing them to the latest astronomical observations, all within a day's work. The pace of discovery will surely rise dramatically.
Return to Crunching the Cosmos