Distributed shared memory computers (DSMs) have arrived (G. Bell, 1992; 1996) to challenge mainframes. DSMs scale to 128 processors built from two- to eight-processor nodes. As shared memory multiprocessors (SMPs), DSMs provide a single system image and maintain a "shared everything" model. Large-scale UNIX servers using the SMP architecture challenge mainframes in legacy use and applications; these have up to 64 processors and more uniform memory access. In contrast, clusters both complement and compete with SMPs and DSMs, using a "shared nothing" model. Clusters built from commodity computers, switches, and operating systems scale to almost arbitrary sizes at lower cost, while trading off the SMPs' single system image. Clusters are required for high-availability applications, and the highest-performance scientific computers use the cluster (or MPP) approach. High-growth markets, e.g., Internet servers, online transaction processing (OLTP), and database systems, can all use clusters. The mainframe-like future of DSM may be questionable because: small SMPs are not cost-effective unless built from commodity components; large SMPs can be built without the DSM approach; and clusters are, for most applications, a cost-effective alternative to SMPs (including DSMs) over a wide scaling range. Nevertheless, commercial DSMs are being introduced that compete with SMPs over a broad range.
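The "shared everything" versus "shared nothing" distinction above can be sketched in code. The following is a minimal, illustrative sketch (the function names and the use of Python threads and queues are assumptions for exposition, not taken from the article): in the SMP/DSM style, workers coordinate through one common address space, while in the cluster style, workers hold only private state and exchange explicit messages over an interconnect.

```python
import threading
import queue

# "Shared everything" (SMP/DSM style): all workers read and write one
# shared variable, coordinated with a lock over the common address space.
def shared_everything_sum(values, n_workers=4):
    total = 0
    lock = threading.Lock()

    def worker(chunk):
        nonlocal total
        partial = sum(chunk)      # local computation
        with lock:                # coordination via shared memory
            total += partial

    chunks = [values[i::n_workers] for i in range(n_workers)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total

# "Shared nothing" (cluster style): workers keep private state and
# communicate only by sending messages; the queue stands in for the
# cluster interconnect (e.g., a commodity switch).
def shared_nothing_sum(values, n_workers=4):
    inbox = queue.Queue()

    def worker(chunk):
        inbox.put(sum(chunk))     # explicit message passing only

    chunks = [values[i::n_workers] for i in range(n_workers)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(inbox.get() for _ in range(n_workers))
```

Both functions compute the same result; the difference is where coordination happens. The shared-everything version depends on hardware-coherent shared memory (what SMPs and DSMs provide), whereas the shared-nothing version needs only a message channel, which is why clusters scale to almost arbitrary sizes at the cost of the single system image.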