Metacomputing depends on a whole cast of different computers, often called processors. The supercomputers (the fastest, most powerful computers available at a given time in the history of supercomputing) are the stars of the show. Fed with an initial set of instructions in the form of a program, together with the appropriate data, they can perform any number of impressive tasks, like solving the mathematical equations governing the behavior of thunderstorms, the action of a drug on a cell, or the interactions of two colliding black holes.
By definition, metacomputing is not a one-computer show. The scientist typically uses a desktop workstation to develop and debug the program before it gets shipped off to the supercomputer. Some supercomputers rely on a separate front-end computer to run the compiler, which translates the program into machine language. The data may be stored on yet another computer. The researcher makes sense of the supercomputer's output data by processing it on a specialized graphics workstation, or by experiencing it in a virtual environment.
From the standpoint of their architectures, supercomputers come in two basic varieties: vector and parallel processors. More recently, new terminology has come into vogue, namely uni-processors and multi-processors, which, roughly speaking, correspond to vector and parallel processors. For the time being, though, we'll stick with the older terms.
Vector processors perform numerical calculations much like workers on an assembly line build a car. A central processor hands the first subunit a piece of data, and that subunit carries out the first mathematical step in the task. The data then gets handed off to the second subunit, and so on, until the calculation is completed. Vector processing is especially well suited to problems that inherently possess well-organized datasets, like calculating how fluids flow. For several years, supercomputing almost always meant vector processing, and Cray Research, Inc. was synonymous with vector machines.
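The assembly-line analogy can be made concrete with a toy timing model. The sketch below (plain Python; the stage names and cycle counts are illustrative, not a real Cray pipeline) shows why pipelining pays off: each element still passes through every stage, but elements overlap in flight, so once the pipeline is full, one result emerges per cycle.

```python
# Toy model of a vector pipeline computing a*x[i] + y[i] for each i.
# The arithmetic is split into stages; several elements are "in flight"
# at once. Stage names are illustrative only.
STAGES = ["fetch", "multiply", "add", "store"]

def pipelined_axpy(a, x, y):
    """Return the results plus the cycle count under ideal pipelining."""
    results = [a * xi + yi for xi, yi in zip(x, y)]
    # Classic pipeline timing: fill the pipeline once, then one result
    # per cycle for each remaining element.
    cycles = len(STAGES) + (len(x) - 1)
    return results, cycles

res, cycles = pipelined_axpy(2.0, [1, 2, 3, 4], [10, 20, 30, 40])
# 4 stages + 3 further cycles = 7 cycles, versus 4 * 4 = 16 cycles if
# each element had to finish all stages before the next could begin.
```

The speedup grows with the length of the vector, which is why vector machines thrive on long, regular streams of data.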
The early 1990s saw the rise of a new kind of processing: parallel processing. Parallel processors rely on dozens to thousands of ordinary microprocessors -- integrated circuits identical to those found in millions of personal computers -- that simultaneously carry out identical calculations on different pieces of data. Massively parallel machines can be dramatically faster and tend to possess much greater memory than vector machines, but they tax the programmer, who must figure out how to distribute the workload evenly among the many processors. Massively parallel machines are especially good at simulating the interactions of large numbers of physical elements, such as those contained within proteins and other biological macromolecules -- the types of molecules that computational biologists are interested in modeling.
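The workload-distribution problem mentioned above can be sketched in miniature. The following Python fragment splits a dataset into one chunk per "processor" and runs the identical calculation on every chunk concurrently (a `ThreadPoolExecutor` stands in for real processors here; the function names are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def chunked(data, n_chunks):
    """Split data into roughly equal chunks, one per processor."""
    size = (len(data) + n_chunks - 1) // n_chunks
    return [data[i:i + size] for i in range(0, len(data), size)]

def partial_sum_of_squares(chunk):
    # The same program runs on every chunk -- the essence of data parallelism.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_workers=4):
    # Scatter the chunks, compute in parallel, then combine the partial results.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(partial_sum_of_squares, chunked(data, n_workers)))
```

Even this toy shows where the programmer's burden lies: if the chunks are uneven, some processors sit idle while others finish, and the combining step is a serial bottleneck that real machines work hard to minimize.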
Massively parallel machines such as Thinking Machines' Connection Machine 5 (CM-5) contain thousands of low-cost microprocessors (just like those found in a standalone Sun Microsystems workstation), connected by a special-purpose, high-speed internal network. But it's also possible to achieve high performance by connecting a group of individual workstations -- a cluster of more economical and flexible Hewlett-Packard workstations, for example.
Improvements in microprocessor technology have begun to blur the line between vector and parallel processing. For instance, just one of the Convex Exemplar's 64 microprocessors is equal in computational power to two-thirds of a Cray Y-MP, a vector machine with 8 processors that for several years represented the apex of vector processing technology.
New, hybrid or distributed shared memory systems are emerging that combine the strengths of each of the previous systems. Groups of processors (called nodes) share a local memory; the nodes are networked so that any processor can access any portion of memory. These systems should have the advantage of being highly scalable and quite easy to program. Though these systems are still somewhat experimental, more and more software for them is becoming available.
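The key idea -- every processor can reach every address, but each node owns a local slice of memory -- can be modeled in a few lines. In this sketch (class and attribute names are invented for illustration), a remote read simply looks up another node's store, standing in for a trip across the network:

```python
# Toy model of distributed shared memory: each node owns a slice of a
# global address space; any node can read any address, but remote reads
# go through the "network" (here, a lookup in another node's store).
class Node:
    def __init__(self, node_id, base, size):
        self.node_id = node_id
        self.base = base              # first global address this node owns
        self.local = [0] * size       # the node's local memory
        self.network = None           # list of all nodes, set after creation

    def owns(self, addr):
        return self.base <= addr < self.base + len(self.local)

    def read(self, addr):
        if self.owns(addr):
            return self.local[addr - self.base]        # fast local access
        owner = self.network[addr // len(self.local)]  # remote access
        return owner.local[addr - owner.base]

# Three nodes, each owning four addresses of a 12-word global memory.
nodes = [Node(i, base=i * 4, size=4) for i in range(3)]
for n in nodes:
    n.network = nodes
```

In a real machine the remote path is much slower than the local one, which is why programmers still try to keep each node working mostly on the data it owns, even though the shared address space makes the program easier to write.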
Since 1993, the price/performance ratio of machines based on off-the-shelf processors has fallen sharply, a trend reflected in the growing deployment of these systems in contrast to machines based, in part or in whole, on more specialized processors. The trend is clearly indicated by the following diagram, which shows the top 500 high-performance computing systems deployed from 1993 to 1995.
[Diagram: Top 500 HPCC Machines in Current Use]
In this way, systems are fully scalable; that is, they can be upgraded simply by adding processors, and there is less need to develop elaborate communications software. Programs developed on a workstation can, in many cases, run on a corresponding high-performance machine without modification.
Such scalable, microprocessor-based metacomputers employ groups of computers that use the same application development environment, the same operating system, the same memory distribution systems, and compatible microprocessors--all the way from the desktop PC to the high-performance computer.
Building so-called "homogeneous" systems makes metacomputing simpler. For example, a scientist at NCSA may develop programs on a desktop SGI workstation, send the programs to run on the SGI Power Challenge Array, and retrieve the data to be displayed on the workstation, the SGI Onyx, or in a virtual environment like The CAVE. Since all of the machines involved are made by SGI, the scientist doesn't need to recompile any programs.
Because it's likely that no one system of computers will satisfy the needs of every application, NCSA is presently experimenting with two families, or "pyramids", of scalable computing systems.