Expo/Computation/The Metacomputer


Metacomputing: Past to Present

A Short History of the Metacomputer

Birth of a Concept
Forward to Grand Challenges
From HPCC to NII
It Pays to Shop Around
Test, Test and Demo

Birth of a Concept

The term "metacomputing" was coined around 1987 by NCSA Director Larry Smarr. But the genesis of metacomputing at NCSA took place earlier, when the center was founded in 1986. Smarr's goal was to provide the research community with a "Seamless Web" linking the user interface on the workstation to the supercomputers behind it.

By 1988, NCSA's vision of the metacomputer, as far as hardware was concerned, consisted of vector multi-processors integrated with the newly-emerging massively parallel architectures.

An early metacomputer diagram showed a gigabit/second local area network linking the massively parallel Connection Machine-2 (CM-2) with a Cray-2 vector supercomputer, a file server mainframe computer and a workstation. But it soon became apparent that, while supercomputer technology was advancing at a rapid pace, the other components of the metacomputer--the network, storage systems, software and visualization--were not. Researchers had to customize their applications so heavily that those applications were then impossible to adapt to new technologies or other scientific uses.

Forward to Grand Challenges

Nonetheless, a number of trends combined to favor further development of metacomputing technology: advances in parallel computing, and accelerating growth in computing resources linked to rapidly falling price-performance ratios. These trends inspired the scientific community to coin the concept of "Grand Challenges": major problems of science and society whose solutions required 1,000-fold or greater increases in the power and speed of supercomputers and their supporting cast of networks, storage systems, supporting software and virtual environments. In short, tackling Grand Challenge problems would demand faster, more flexible and more interactive metacomputing.

From HPCC to NII

In the late '80s, American scientists, engineers and government and industry leaders began to recognize that advanced computer and communications technologies could benefit not just the research community but the entire U.S. economy. The term "metacomputer" hadn't fully surfaced yet, but the idea that networks of powerful computers could improve U.S. economic and technological competitiveness was beginning to take hold.

In 1991, Congress passed the High Performance Computing and Communications Act, introduced by then-Senator Albert Gore, Jr. The act formally established the High Performance Computing and Communications (HPCC) Program, whose mandate was to accelerate the development of future generations of high-performance computers and networks and their applications to Grand Challenge research.

It Pays to Shop Around

Aiding the implementation of the HPCC Program was the rapid convergence in performance between mass-produced microprocessors and their much more expensive vector counterparts.

"Victory of the Microprocessor"

Larry Smarr, NCSA
In the past decade, cheap, mass-produced microprocessors have become dramatically faster, while vector processors, though also increasing in speed, have gained ground much more slowly, and have remained costly to produce.

It was against this national backdrop that in 1992, NCSA began to comparison shop--evaluating new network, data storage, software and load balancing technologies.

At NCSA, computer scientists and applications researchers studied the performance of a number of science and engineering codes on a variety of platforms, and found that most users simply weren't using the vector machines to their best advantage. At the same time, parallel machines built from commodity microprocessors were rapidly closing the performance gap.

That finding, coupled with the rising cost of vector supercomputers and the falling price of microprocessor-based parallel systems, led NCSA to make a major shift toward scalable metacomputing. NCSA wasn't operating in a vacuum, of course; the other NSF supercomputing centers were developing their own metacomputing strategies.

Test, Test and Demo

Between 1990 and 1995, in addition to exploiting novel computing architectures, NCSA also participated in a number of significant testbeds and demonstration projects, each designed to probe the technologies needed for effective metacomputing.

BLANCA: Testing Coast-to-Coast

In 1992, NSF, the Corporation for National Research Initiatives (CNRI), the Advanced Research Projects Agency (ARPA) and AT&T Bell Laboratories established the BLANCA testbed, one of four gigabit networking testbeds spanning the country.


BLANCA provided high-speed (622 megabit/second) connections between scientists at the University of Illinois at Urbana-Champaign and the University of Wisconsin-Madison; and between scientists at the University of California-Berkeley and the Lawrence Berkeley Laboratory. Together with its partner testbeds, BLANCA gave computer scientists a chance to learn which network software protocols and operating systems could support applications in a wide-area, near-gigabit (billion bit) per second network; and BLANCA gave astronomers, biologists and atmospheric scientists an opportunity to conduct collaborative research over high-speed networks.

Showcase '92: A Step Up for Metacomputing

Electronic Visualization Laboratory

Of the many exhibits at SIGGRAPH '92 (SIGGRAPH is the Association for Computing Machinery's (ACM's) Special Interest Group on computer GRAPHics), Showcase alone demonstrated nearly 50 leading-edge high-performance computing applications.

A collaborative venture among a score of academic, government and private institutions, Showcase showed that both computation and data display could be handled in realtime among remote, networked computers. NCSA's PATHFINDER (Probing ATmospHeric Flows in an INteractive and Distributed EnviRonment) project, for example, allowed users on the exhibition floor in Chicago to explore visualizations of severe thunderstorm phenomena through a system that coupled model initiation (setting the starting conditions for a calculation), simulation, data management, analysis and display, with computation and rendering of the simulation data performed on remote machines.

NCSA's CAVE added a whole new dimension to virtual reality, offering an environment in which multiple viewers could enter and interact with simulated universes, molecules, thunderstorms or mathematical shapes. However, there was no coupling of this prototype virtual environment to the actual computations behind the data. In this early version of the CAVE, pre-computed simulations were rendered locally for interactive display, enabling conference-goers to experience immersion in data first-hand.

Ramping Up with VROOM

Two years and a lot of experience later, NCSA, along with a number of other major research institutions, demonstrated the extraordinary potential of the CAVE as a window into computation. A special exhibit at SIGGRAPH '94 called VROOM (Virtual Reality Room) showed how on-site supercomputers could be linked to the CAVE to display simulation output graphically in an immersive virtual environment, all in realtime.

Electronic Visualization Laboratory

Linking on-site supercomputers to the virtual environments meant that VROOM users could now interact directly with simulation data and "steer" the calculations as they happened.

For example, visitors to VROOM could experience the sounds of mathematical chaos. This project required real-time, distributed computing among the CAVE, an SGI workstation and an SGI Onyx (a powerful mini-supercomputer).

The next step in metacomputing--marrying the remote networking capabilities demonstrated at Showcase with the interactive immersion in data highlighted at VROOM--was recently demonstrated at Supercomputing '95!

Forward to Tomorrow's Metacomputer
Return to the Metacomputer Home Page


Copyright (c) 1995, Board of Trustees, University of Illinois

NCSA. Last modified 11/4/95.