
Handling the Data Crunch

The problem of handling data in the information age is that there's so much of it. Storing, moving, and distilling information out of the data efficiently poses a major challenge to effective metacomputing.

[Image: Data Storage at NCSA]

Storing Data

A typical Grand Challenge application run on the CM-5 may produce hundreds of gigabytes of data, and real-time visualization and feedback within virtual environments demand nearly instantaneous command of large data sets. The data must be stored either on disk attached to the compute machines themselves or in efficient mass storage systems. In either case, huge volumes of data may have to be moved in and out of storage at breakneck speeds.
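
To get a feel for the scale, consider a back-of-the-envelope estimate (a minimal sketch in Python; the run size and staging window below are illustrative assumptions, not measured NCSA figures):

    # Estimate the sustained bandwidth needed to stage one run's output
    # into mass storage. All inputs are illustrative assumptions.
    GIGABYTE = 1024 ** 3

    dataset_bytes = 500 * GIGABYTE   # assumed output of one Grand Challenge run
    window_seconds = 60 * 60         # assumed one-hour staging window

    rate = dataset_bytes / window_seconds
    print(f"Required sustained rate: {rate / 1024 ** 2:.0f} MB/s")
    # -> roughly 142 MB/s, beyond any single mid-1990s disk or network link

Rates like that are why high-performance storage systems stripe data across many disks and tape drives working in parallel rather than relying on any one device.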

Sharing Filespace

Metacomputer users expect their files to be accessible from any computer they (or a collaborator) are using. Manually running time-consuming file transfers is simply not acceptable; that's why metacomputing technology will include shared filespace systems like the Andrew File System (AFS).
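
The practical difference is easy to see in code. In this minimal sketch (the AFS cell and path are hypothetical, chosen for illustration), a program running on any AFS client reads the same file under the same global path, with no explicit transfer step:

    # With AFS, every client machine mounts one global namespace, so the
    # same path works on every host; the file system fetches and caches
    # data on demand. The cell and path here are hypothetical.
    shared_path = "/afs/ncsa.uiuc.edu/projects/crunch/run042/results.dat"

    with open(shared_path, "rb") as f:    # identical code on every machine
        header = f.read(64)

    print(f"Read {len(header)} bytes from the shared filespace")

Without such a system, the same result takes an explicit copy (via ftp, for example) to every machine before the work can begin.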

Serving Data

Within a year after NCSA released Mosaic, traffic on the World Wide Web increased more than 1,000 percent, placing enormous strain on the servers that provide the Web's ever-changing tree of HTML documents, images, audio files, and movies. NCSA's solution to the problem is to build a local metacomputer of Web servers.
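
One common way to make a pool of identical servers behave like a single fast one is to rotate incoming requests across the pool, the idea behind round-robin DNS. Here is a minimal sketch of the rotation (the host names are hypothetical, and a real deployment would rotate answers in the name service rather than in application code):

    # Round-robin distribution of requests across interchangeable Web
    # servers. Host names are hypothetical, for illustration only.
    from itertools import cycle

    servers = cycle(["www1.example.edu", "www2.example.edu", "www3.example.edu"])

    def assign(request_path: str) -> str:
        """Hand the next request to the next server in the rotation."""
        return next(servers)

    for path in ["/index.html", "/images/banner.gif", "/docs/mosaic.html"]:
        print(f"{path} -> {assign(path)}")

Because every server in the pool must return the same answer for the same URL, this kind of cluster pairs naturally with the shared filespace described above.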


Copyright © 1995 Board of Trustees, University of Illinois


NCSA. Last modified 11/4/95.