
Processors


Metacomputing depends on a whole cast of different computers, often called processors. The supercomputers (the fastest, most powerful computers available at any given time) are the stars of the show. Fed with an initial set of instructions in the form of a program, together with the appropriate data, they can perform any number of impressive tasks, like solving the mathematical equations governing the behavior of thunderstorms, the action of a drug on a cell, or the interactions of two colliding black holes.

By definition, metacomputing is not a one-computer show. The scientist typically uses a desktop workstation to develop and debug the program before it gets shipped off to the supercomputer. Some supercomputers use a separate front-end computer to run the compiler, the program that translates the scientist's code into machine language. The data may be stored on yet another computer. The researcher makes sense of the supercomputer's output data by processing it on a specialized graphics workstation, or by experiencing it in a virtual environment.

Vector and Parallel Processing
All in a Memory
Scalable Metacomputing


Vector and Parallel Processing

Underlying the functional differences in computers are architectural and operational differences as well. Making a metacomputer work requires choosing the best combinations of machines for a given application. Ideally this would be handled automatically through a standardized user interface and software.

From the standpoint of their architectures, supercomputers come in two basic varieties: vector and parallel processors. More recently, new terminology has come into vogue, namely uni-processors and multi-processors, which roughly speaking correspond to vector and parallel processors. For the time being, though, we'll stick with the older terms.

Vector processors perform numerical calculations much like workers on an assembly line build a car. A central processor hands the first subunit a piece of data, and that subunit carries out the first mathematical step in the task. The data then gets handed off to the second subunit, and so on, until the calculation is completed. Vector processing is especially well suited for problems with inherently well-organized datasets, like calculating how fluids flow. For several years, supercomputing almost always meant vector processing, and Cray Research, Inc. was synonymous with vector machines.
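
To make the idea concrete, here is a minimal sketch in C (illustrative only, not code for any particular machine) of the kind of loop a vector processor thrives on: the same multiply-and-add is applied to every element of an array, so each stage of the arithmetic pipeline can be kept busy with a different element at once.

    /* Illustrative sketch: a vectorizable loop.  The array size N is
       arbitrary; the point is that every iteration performs the same
       arithmetic on a different piece of data. */
    #include <stdio.h>

    #define N 1000

    int main(void)
    {
        double x[N], y[N];
        double a = 2.5;
        int i;

        /* initialize the data */
        for (i = 0; i < N; i++) {
            x[i] = (double) i;
            y[i] = 1.0;
        }

        /* the "assembly line": each y[i] = a*x[i] + y[i] passes through the
           same sequence of arithmetic stages, so a vector unit can keep every
           stage busy with a different array element at the same moment */
        for (i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];

        printf("y[N-1] = %f\n", y[N - 1]);
        return 0;
    }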

The early 1990s saw the rise of a new kind of processing: parallel processing. Parallel processors rely on dozens to thousands of ordinary microprocessors -- integrated circuits identical to those found in millions of personal computers -- that simultaneously carry out identical calculations on different pieces of data. Massively parallel machines can be dramatically faster and tend to possess much greater memory than vector machines, but they tax the programmer, who must figure out how to distribute the workload evenly among the many processors. Massively parallel machines are especially good at simulating the interactions of large numbers of physical elements, such as those contained within proteins and other biological macromolecules -- the types of molecules that computational biologists are interested in modeling.
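
One programmer's chore mentioned above, dividing the work evenly among many processors, can be sketched in a few lines of C. The names rank and nprocs below are illustrative assumptions, standing for a processor's identity and the total processor count; on a real parallel machine, each processor would compute its own slice of the data this way.

    /* Illustrative sketch: splitting N pieces of data evenly among P
       processors.  Here we simply print the slice each processor would own. */
    #include <stdio.h>

    /* compute the half-open range [lo, hi) of elements owned by one processor */
    static void block_range(int n, int nprocs, int rank, int *lo, int *hi)
    {
        int base  = n / nprocs;      /* minimum elements per processor  */
        int extra = n % nprocs;      /* leftover elements to spread out */

        *lo = rank * base + (rank < extra ? rank : extra);
        *hi = *lo + base + (rank < extra ? 1 : 0);
    }

    int main(void)
    {
        int n = 10, nprocs = 4, rank, lo, hi;

        for (rank = 0; rank < nprocs; rank++) {
            block_range(n, nprocs, rank, &lo, &hi);
            printf("processor %d handles elements %d..%d\n", rank, lo, hi - 1);
        }
        return 0;
    }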

Massively parallel machines such as Thinking Machines' Connection Machine 5 (CM-5) contain thousands of low-cost microprocessors (just like those found in a standalone Sun Microsystems workstation), connected by a special-purpose, high-speed internal network. But it's also possible to achieve high performance by connecting a group of individual workstations -- a cluster of more economical and flexible Hewlett-Packard workstations, for example.

Improvements in microprocessor technology have begun to blur the line between vector and parallel processing. For instance, just one of the Convex Exemplar's 64 microprocessors is equal in computational power to two-thirds of a Cray Y-MP, a vector machine with eight processors that for several years represented the apex of vector processing technology.

All in a Memory

One of the challenges in designing parallel, multi-processor systems is managing memory and communication among the individual processors. In multi-processor supercomputers, memory typically takes one of three forms.

Symmetric Multiprocessors (SMP)

These systems typically possess a relatively small number of processors (usually fewer than 16) which share a common block of memory, much like office workers drawing files from a central filing cabinet. It's easy to develop fast, efficient programs for this design because every processor has direct access to all data. The disadvantage of this arrangement is that it doesn't scale beyond a few dozen processors. After all, only a limited number of office workers can be expected to share one filing cabinet; the same is true for processors. Also, the technologies needed to connect the processors are rather expensive.
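
As a rough illustration of the shared-memory style, here is a minimal C sketch using POSIX threads (one common way to program an SMP; the exhibit itself doesn't name a library). Every thread reads the same shared array directly, like office workers pulling from the central filing cabinet, and a lock keeps their updates to the shared total from colliding.

    /* Illustrative sketch of shared-memory programming with POSIX threads.
       The thread count and array size are arbitrary choices for the example. */
    #include <pthread.h>
    #include <stdio.h>

    #define N        1000
    #define NTHREADS 4

    static double data[N];             /* shared memory: visible to all threads */
    static double total = 0.0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *partial_sum(void *arg)
    {
        int id = *(int *) arg;
        int lo = id * (N / NTHREADS);
        int hi = lo + (N / NTHREADS);
        double local = 0.0;
        int i;

        for (i = lo; i < hi; i++)      /* direct access to the shared data */
            local += data[i];

        pthread_mutex_lock(&lock);     /* serialize updates to the shared total */
        total += local;
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NTHREADS];
        int ids[NTHREADS];
        int i;

        for (i = 0; i < N; i++)
            data[i] = 1.0;

        for (i = 0; i < NTHREADS; i++) {
            ids[i] = i;
            pthread_create(&threads[i], NULL, partial_sum, &ids[i]);
        }
        for (i = 0; i < NTHREADS; i++)
            pthread_join(threads[i], NULL);

        printf("total = %f (expected %d)\n", total, N);
        return 0;
    }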


Message-passing Distributed Memory (MDM)

Distributed memory systems employing so-called message passing can accommodate thousands of processors. Each processor has a private block of memory from which to draw data. The drawbacks? Such systems tend to be slower, because data often must be shuffled back and forth between the processors, and they're more difficult to program. Also, less software is available compared with the number of programs that can run on SMP and DSM architectures. As a result, many applications either have to be "ported" from other types of processors or custom-built -- an expensive proposition.
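
For contrast, here is a minimal message-passing sketch in C using MPI (one widely used message-passing library, chosen here only for illustration). Each process keeps its data in its own private memory, so results have to be communicated explicitly; in this example, process 0 collects partial sums from all the others.

    /* Illustrative message-passing sketch using MPI. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        double local, total;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* each process computes on data that only it can see */
        local = (double) rank;

        /* the data must be shuffled between processors explicitly:
           every process sends its partial result to process 0 */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of ranks across %d processes = %f\n", nprocs, total);

        MPI_Finalize();
        return 0;
    }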


Distributed Shared Memory (DSM)

New, hybrid distributed shared memory systems are emerging that combine the strengths of the previous two designs. Groups of processors (called nodes) share a local memory; the nodes are networked so that any processor can access any portion of memory. These systems should have the advantage of being both scalable and relatively easy to program. Though they are still somewhat experimental, more and more software for them is becoming available.

Scalable Metacomputing

Since 1993, the price/performance ratio of machines based on off-the-shelf processors has fallen sharply, and this is reflected in their growing deployment in contrast to machines based, in part or in whole, on more specialized processors. The market trend is clearly indicated by the following diagram, which shows the top 500 high-performance computing systems deployed during the past two years (1993-95).

Diagram: Top 500 HPCC Machines in Current Use (Larry Smarr, NCSA)

In step with these market trends, a number of supercomputing centers, NCSA included, have decided to get more "bang for the buck" by moving away from expensive, specialized-processor machines and toward a scalable approach based on ascending pyramids of compatible microprocessors.

In this way, systems are fully scalable: they can be upgraded simply by adding processors, and there is less need to develop elaborate communications software. Programs developed on a workstation can, in many cases, run on a corresponding high-performance machine without modification.

Such scalable, microprocessor-based metacomputers employ groups of computers that use the same application development environment, the same operating system, the same memory distribution systems, and compatible microprocessors--all the way from the desktop PC to the high-performance computer.

Building so-called "homogeneous" systems makes metacomputing simpler. For example, a scientist at NCSA may develop programs on a desktop SGI workstation, send the programs to run on the SGI Power Challenge Array, and retrieve the data to be displayed on the workstation, on the SGI Onyx, or in a virtual environment like the CAVE. Since all of the machines involved are made by SGI, the scientist doesn't need to recompile any programs.

Because it's likely that no one system of computers will satisfy the needs of every application, NCSA is presently experimenting with two families, or "pyramids", of scalable computing systems.

NCSA's Two "Pyramids" of Scalable Metacomputing



Silicon Graphics Pyramid
HP/Convex Pyramid


Copyright (c) 1995, Board of Trustees, University of Illinois


NCSA. Last modified 11/4/95