
Monday, 3 November 2014

Grid computing

Grid or cluster computing involves the creation of a single computer architecture that consists of many separate computers that function much like a single machine. The computers are usually connected using fast networks (see local area network). The purpose of the arrangement can be to provide redundant processing in case of system failures, to dynamically balance a fluctuating workload, or to split large computations into many parts that can be performed simultaneously. This latter approach to “high-performance computing” creates the virtual equivalent of a very large and powerful machine (see supercomputer).

Architecture

Grid and cluster architectures often overlap, but the term grid tends to be applied to a more loosely coordinated structure in which the computers are dispersed over a wider area (not a local network). In a grid, the work is usually divided into many separate packets that can be processed independently, without the computers having to share data. Each task can be completed and submitted without waiting for the completion of any other task. Clusters, on the other hand, couple computers more tightly so that they act more like a single large machine.
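To make the idea of independent work packets concrete, here is a minimal sketch in Python (my illustration, not something from the original article): a large computation is split into self-contained packets, each processed without reference to any other, and the partial results are combined at the end. The packet format and work function are hypothetical.

# Minimal sketch of independent work packets: each packet is processed
# on its own, no data is shared between tasks, and results can arrive
# in any order. (Illustrative only; the packet format is hypothetical.)
from multiprocessing import Pool

def work_packet(packet):
    # Process one self-contained packet: here, sum the squares of a range.
    start, end = packet
    return sum(n * n for n in range(start, end))

if __name__ == "__main__":
    # Split one large computation (sum of squares below 1,000,000)
    # into 100 packets of 10,000 numbers each.
    packets = [(i, i + 10_000) for i in range(0, 1_000_000, 10_000)]

    with Pool() as pool:
        # Each packet completes independently; no task waits on another.
        partial_sums = pool.map(work_packet, packets)

    print("total:", sum(partial_sums))

On a real grid the packets would be shipped to machines over a network rather than to local processes, but the key property is the same: no packet depends on another packet’s results.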

The first commercially successful product based on this architecture was the VAXcluster, released in the 1980s for DEC VAX minicomputers. These systems implemented parallel processing while sharing file systems and peripherals. In 1989 an open-source cluster solution called Parallel Virtual Machine (PVM) was developed. These clusters could mix and match any computers that could connect over a TCP/IP network (i.e., the Internet).

Current Implementations and Applications

Clusters made from hundreds of desktop-class computer processors can achieve supercomputer levels of performance at comparatively low prices. An example is the System X supercomputer cluster at Virginia Tech, which generates 12.25 TFlops (trillion floating-point operations per second) from 1,100 dual-processor Apple Xserve G5 machines running Mac OS X.
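As a rough back-of-the-envelope check (my arithmetic, not a figure from the article), that aggregate rate works out to only a few GFlops per processor, which is why commodity parts can add up to supercomputer performance:

# Rough per-processor throughput implied by the System X figures above.
total_tflops = 12.25        # aggregate performance, in TFlops
processors = 1100 * 2       # 1,100 dual-processor nodes

gflops_per_processor = total_tflops * 1000 / processors
print(f"about {gflops_per_processor:.1f} GFlops per processor")  # ~5.6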

Additional savings and flexibility can be found in Beowulf clusters, which use standard commodity PCs running open-source operating systems (such as Linux) and software such as the Globus Toolkit.
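Programs on such clusters are commonly written with a message-passing library; MPI and the mpi4py Python binding used below are my example, not something mentioned in the article. A minimal sketch launches one copy of the program per processor, and each copy learns its own rank within the job:

# Minimal message-passing program of the kind typically run on a
# Beowulf cluster. Requires an MPI implementation plus the mpi4py
# package, and is launched with something like:
#   mpirun -np 4 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the cluster job
size = comm.Get_size()   # total number of cooperating processes

print(f"Hello from process {rank} of {size}")

# A simple reduction: every process contributes its rank, and the
# root process receives the sum, illustrating coordination across nodes.
total = comm.reduce(rank, op=MPI.SUM, root=0)
if rank == 0:
    print("sum of ranks:", total)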

Another type of implementation is the “ad hoc” computer grid. These are projects where users sign up to receive and process work packets using their PCs’ otherwise idle time. Examples include SETI@Home (search for extraterrestrial intelligence) and Folding@Home (protein-folding calculations). For more on this type of arrangement, see cooperative processing.
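The ad hoc model boils down to a simple client loop: fetch a work packet from a coordinating server, process it while the machine is otherwise idle, and send back the result. The sketch below is purely illustrative; the server URL, endpoints, and packet format are hypothetical, and real projects rely on dedicated middleware (such as BOINC) rather than hand-rolled clients.

# Illustrative-only sketch of a volunteer-computing client loop.
# The server URL, endpoints, and packet format are all hypothetical.
import json
import time
import urllib.request

SERVER = "http://example.org/grid"   # hypothetical coordinator

def fetch_packet():
    # Ask the coordinator for the next unit of work.
    with urllib.request.urlopen(f"{SERVER}/next-packet") as resp:
        return json.load(resp)

def process(packet):
    # Placeholder for the real scientific computation.
    return sum(packet["numbers"])

def submit(packet_id, result):
    # Return the finished result to the coordinator.
    data = json.dumps({"id": packet_id, "result": result}).encode()
    req = urllib.request.Request(f"{SERVER}/submit", data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

while True:
    packet = fetch_packet()       # receive a work packet
    result = process(packet)      # crunch it using idle CPU time
    submit(packet["id"], result)  # send the answer back
    time.sleep(5)                 # pause briefly before asking again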

Although there has been some recent interest in enterprise grids, most grid computing applications are in science. The world’s most powerful computer grid, TeraGrid, is funded by the National Science Foundation and ties together major supercomputing and advanced computing installations at universities and government laboratories. Current applications for TeraGrid include weather and climate forecasting, earthquake simulation, epidemiology, and medical visualization.
