Using many different kinds of computers for different tasks is nothing new; for years, scientists have been storing experimental data on one machine, manually transferring it to another for analysis, and moving the results to a graphics workstation for visual inspection. Storing or retrieving the data meant transferring it to and from tape.
All of which is time-consuming and tedious. More often than not, the machines run distinct operating systems (like the difference between, say, Windows 95 and Macintosh System 7.5 in personal computers), making it necessary for the user to write a new version of the code for each type of machine.
Plus, if the various computers have different processor architectures, the user is obliged to optimize the code for each configuration. As for "navigating" one's data in real time--much as a light microscopist moves a slide across the stage or adjusts the focus--that has been simply out of the question, especially when dealing with lots of data. Carrying out all of these tasks from a remote site has been difficult or impossible.
One motive, then, for developing a metacomputer is simply ease of use and access to computational resources. Scientists want to spend their time doing science; they're generally not interested in fooling with the nuts and bolts of computers and the accompanying technologies.
Another reason for developing the metacomputer is the much-talked-about "March to the Teraflop," the federal High Performance Computing and Communications (HPCC) Program's goal of attaining sustained teraflop (that's a trillion floating point operations per second) performance in supercomputing. Teraflop computing is becoming essential to solving some of the most complex and pressing problems of science and society, known otherwise as Grand and National Challenges. A recent world speed record for supercomputing was set by two Intel Paragon computers, with a peak performance of 281 gigaflops, or 281 billion calculations each second. Though pretty impressive, that's still far short of the teraflop. And beyond the teraflop lies the petaflop -- a further thousand-fold ramp-up in sustained speed! Petaflop performance may not be attainable with a single super-fast supercomputer; evidence is mounting that it's neither economically nor technically feasible using silicon chip-based microprocessors. Petaflop, and perhaps even teraflop, performance will likely require harnessing and coordinating the power of many computers. Enter the metacomputer.
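To put those prefixes in perspective, here is a back-of-the-envelope sketch of the scale (in Python, purely illustrative -- the only figure taken from the text is the 281-gigaflop Paragon record):

```python
# Rough scale of the performance milestones mentioned above.
# All figures are in floating point operations per second (flops).
gigaflop = 1e9   # one billion operations per second
teraflop = 1e12  # one trillion -- the HPCC Program's goal
petaflop = 1e15  # a further thousand-fold increase

paragon_record = 281 * gigaflop  # the Intel Paragon record cited above

# How far short of a teraflop was the record?
print(teraflop / paragon_record)  # roughly 3.6x short of a teraflop
print(petaflop / teraflop)        # 1000.0 -- the next thousand-fold leap
```

In other words, the record-setting machines would need to run about three and a half times faster to reach the teraflop, and a full thousand times faster than that to reach the petaflop.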
The issues of speed, ease of use and accessibility are actually closely related. Once you increase speed, you increase throughput (the rate at which a system handles data input and output) as well. What use is teraflop (never mind petaflop) performance if you still have to manually carry out transfers from one machine to another? Navigating the mountains of data requires more efficient methods of collaboratively analyzing and exploring the data as it's being computed, with the option of feedback control of the supercomputers or instruments that are churning it out. That in itself presents a major computational hurdle, one that metacomputing will help overcome.
The underlying technologies of metacomputing therefore both drive and are driven by the needs of scientists and engineers to cope with their ever-growing mountains of data.
But it's not only scientists who want--and need--the metacomputer. Metacomputing technologies are critical to the implementation of the National Information Infrastructure (NII). The NII, brainchild of business, government and academic leaders, promises to provide the foundation for building digital libraries and improving education and lifelong learning, energy resource management, environmental monitoring and protection, health care, manufacturing, national security and public access to government information.