In evaluating alternatives in large supercomputer complexes, determining when NFS-type distribution of data or the more recent 'cluster' or Storage Area Network (SAN) approaches are optimal can require careful analysis. Further, many HPC (High Performance Computing) workloads are not limited to large files accessed with large-block I/O requests; some loads can be very mixed. File systems can fill up rapidly, yielding performance that deviates substantially from that observed in relatively empty file systems, where most accesses fall on the outer tracks and metadata accesses are most efficient. Establishing the optimal file system parameters for a group of homogeneous supercomputers attempting to exchange data is challenging enough. When heterogeneity is added to the HPC complex, for example when using one of the commercially available heterogeneous SAN software products, defining a configuration that ensures the desired behavior, both under steady-state load and during surge conditions, can become daunting. In the work described in this paper, we investigated two different system and storage architecture situations. The first was a relatively homogeneous system: more than 10 supercomputers from the same vendor, at the same operating system level, with similar or identical capabilities, which needed to share primarily large-file, large-block data at several different sustained rates, yet with very predictable performance for certain transactions even under the heaviest surges. There are, however, some smaller files in this workload. The goal was to evaluate whether a SAN, NFS over high-bandwidth local links, or some combination of both could best provide the desired system behavior.
An additional goal was to identify the tuning options available to system administrators and application developers to preserve the required performance as the system grew or various initial conditions changed substantially. The second system evaluated was heterogeneous, consisting of servers and supercomputers of different capabilities and product generations, running different operating systems on hardware from different vendors. Connections between these systems ranged from high-performance local links to WANs (wide area networks). Some of the issues considered in this system related to the proximity of the dataset to the intended compute server, since the goal was optimal workload distribution as well as deadline assurance.
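As a concrete illustration of the kind of measurement such an evaluation rests on, the sketch below times sustained large-block sequential writes to a given path. The path, block size, and transfer size here are illustrative assumptions, not values from the study; pointed at an NFS mount versus a SAN-backed volume, it gives a first-order throughput comparison for the large-file, large-block portion of the workload.

```python
import os
import tempfile
import time


def measure_write_throughput(path, block_size=4 * 1024 * 1024,
                             total_bytes=64 * 1024 * 1024):
    """Write total_bytes to `path` in block_size chunks; return MB/s.

    Values are placeholders: a real evaluation would sweep block sizes
    and run long enough to reach the file system's steady state.
    """
    block = os.urandom(block_size)
    start = time.perf_counter()
    with open(path, "wb") as f:
        written = 0
        while written < total_bytes:
            f.write(block)
            written += block_size
        f.flush()
        os.fsync(f.fileno())  # include flush-to-storage in the timing
    elapsed = time.perf_counter() - start
    return (written / (1024 * 1024)) / elapsed


if __name__ == "__main__":
    # Replace the temporary file with a path on the mount under test.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        target = tmp.name
    try:
        rate = measure_write_throughput(target)
        print(f"sustained write throughput: {rate:.1f} MB/s")
    finally:
        os.remove(target)
```

A comparable read-side loop, and a mixed small-file pass, would be needed to cover the mixed workloads the text describes; this sketch addresses only the sequential-write case.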