The problem of handling data in the information age is that there is so much of it. Storing, moving, and distilling useful information from all that data efficiently pose a major challenge to effective metacomputing.
[Figure: Data Storage at NCSA (JPEG image, 37.1 KB)]
A typical Grand Challenge application run on the CM-5 may produce hundreds of gigabytes of data; real-time visualization and feedback within virtual environments demand nearly instantaneous command of large data sets. The data must be stored either on disk space on the compute machines themselves or on efficient mass storage systems. In either case, huge volumes of data may have to be moved in and out of storage at breakneck speed.
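This two-level arrangement can be pictured as a stage-in/stage-out step wrapped around each run. The mount points and helper names in the sketch below are hypothetical, chosen only to illustrate moving files between a mass store and a compute machine's local scratch disk; a real system would use whatever transfer tools its mass storage system provides.

    # A minimal sketch of staging data between a mass storage system and
    # local scratch disk on a compute machine. Paths are hypothetical.
    import shutil
    from pathlib import Path

    MASS_STORE = Path("/mass_store/grandchallenge")  # hypothetical archive mount
    SCRATCH = Path("/scratch/run042")                # hypothetical local scratch disk

    def stage_in(name: str) -> Path:
        """Copy one archived file onto fast local disk before a run."""
        SCRATCH.mkdir(parents=True, exist_ok=True)
        dest = SCRATCH / name
        shutil.copy2(MASS_STORE / name, dest)
        return dest

    def stage_out(name: str) -> Path:
        """Copy a result file back to the mass store after a run."""
        dest = MASS_STORE / name
        shutil.copy2(SCRATCH / name, dest)
        return dest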
Metacomputer users expect their files to be accessible from any machine they (or a collaborator) are using. Manually running time-consuming file transfers is simply not acceptable; that is why metacomputing technology will include shared file space systems like the Andrew File System (AFS), as the sketch below illustrates.
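What a shared file space buys the user can be shown in a few lines. The AFS cell and file path below are hypothetical; the point is that the same path name is valid on every machine that mounts the shared space, so no file transfer is needed before work can begin.

    # A minimal sketch of reading a dataset through a shared file space.
    # The AFS cell and path are hypothetical, not actual NCSA paths.
    from pathlib import Path

    SHARED_DATASET = Path("/afs/example.edu/project/grandchallenge/run042.dat")

    def open_shared_dataset():
        """Open the dataset through the shared file space.

        The call is identical on a desktop workstation, a visualization
        host, or a compute front end, because each machine sees the same
        AFS tree.
        """
        return SHARED_DATASET.open("rb")

    if __name__ == "__main__":
        if SHARED_DATASET.exists():
            with open_shared_dataset() as f:
                print(f"Read {len(f.read(64))} bytes from {SHARED_DATASET}")
        else:
            print(f"{SHARED_DATASET} is not mounted on this machine")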
Within a year after NCSA released Mosaic, traffic on the World Wide Web increased more than 1,000 percent, placing enormous strain on the servers set up to provide information from the Web's ever-changing tree of files: HTML documents, images, audio files, and movies. NCSA's solution to the problem is to build a local metacomputer of Web servers.
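One simple way such a pool of servers might share the load is to rotate incoming requests across its members. The sketch below uses hypothetical hostnames and request paths and is only an illustration of the idea, not a description of NCSA's actual server configuration.

    # A minimal sketch of spreading Web requests across a pool of servers
    # by round-robin rotation. Hostnames and paths are hypothetical.
    import itertools

    SERVER_POOL = [
        "www1.example.edu",
        "www2.example.edu",
        "www3.example.edu",
        "www4.example.edu",
    ]

    # Cycling through the pool hands successive requests to successive
    # servers, so no single machine carries all of the traffic growth.
    _rotation = itertools.cycle(SERVER_POOL)

    def assign_server(request_path: str) -> str:
        """Return a URL that directs the next request to the next server."""
        return f"http://{next(_rotation)}{request_path}"

    if __name__ == "__main__":
        for path in ["/index.html", "/images/logo.gif", "/docs/overview.html"]:
            print(assign_server(path))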