Modern massively parallel computers (MPCs) are characterized by a scalable architecture: they offer corresponding gains in performance as the number of processors increases. Such computers often consist of self-contained processing nodes, each with its own memory and supporting devices. This design approach has many advantages. The repetition of identical components leads to scalability, modularity, greater reliability, and opportunities for fault tolerance. However, parallel computing in such systems requires extensive communication between otherwise independent nodes, so that data and instructions can be redistributed periodically to keep all processors busy performing useful tasks. Because memory is not shared between node processors, interprocessor communication is achieved by passing messages between nodes through a communications network. This network is implemented as a set of interconnected routers, each connected to its local processor.
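The distributed-memory model described above can be illustrated with a minimal sketch. The example below is not the interface of any particular machine; it merely mimics message passing between nodes with private memory using Python's standard-library multiprocessing pipes (a real MPC would use an interface such as MPI over the router network).

```python
from multiprocessing import Process, Pipe

def worker(node_id, conn):
    # Each "node" has private memory; data arrives only via explicit messages.
    data = conn.recv()                 # receive a message from another node
    conn.send((node_id, sum(data)))    # send a result back as a message

if __name__ == "__main__":
    parent_end, child_end = Pipe()     # a point-to-point channel between two nodes
    node = Process(target=worker, args=(0, child_end))
    node.start()
    parent_end.send([1, 2, 3])         # no shared memory: the list is serialized and sent
    print(parent_end.recv())           # (0, 6)
    node.join()
```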
According to Ken Batcher (Kent State University), “A supercomputer is a device for turning compute-bound problems into I/O-bound problems.” Indeed, several of the most advanced supercomputers, such as Titan (Cray XK7), Sequoia, Mira, and Vulcan (IBM), have 2D and 3D toroidal interprocessor network topologies. These topologies are driven by the applied problems the machines were designed to solve, from weather forecasting to nuclear fusion, from cryptanalysis to biological macromolecules, and from quantum chromodynamics (QCD) to the nature of turbulence. Large-scale problems can be mapped onto these topologies efficiently. Such network configurations provide convenient modularization and low latencies for small messages; they reduce the path length between nodes and simplify routing algorithms, whether static or dynamic.

There is therefore an urgent need to predict network behavior, in particular the relationships among network load, buffer capacity, queue length, latency, and the point at which the network saturates. The experimental approach (running workloads on real machines or on some combination of software-hardware emulation) is difficult and expensive, and it is hard to achieve a controlled environment in which network parameters can be isolated and varied to analyze their effect on performance. In such situations, analysis through theoretical models and controlled simulations is crucial.
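The claim that toroidal topologies reduce path length can be made concrete. In a k-ary torus, the wrap-around links mean the minimal hop count in each dimension is the shorter of the direct and the wrapped route. A small sketch (the dimension sizes below are illustrative, not those of any machine named above):

```python
def torus_hops(a, b, dims):
    """Minimal hop count between nodes a and b in a torus.

    a, b : coordinate tuples of the two nodes
    dims : size of the torus in each dimension
    Per dimension, the distance is min(direct, wrap-around).
    """
    return sum(min(abs(x - y), k - abs(x - y))
               for x, y, k in zip(a, b, dims))

# In a 4x4x4 3D torus, (0,0,0) -> (3,2,1): the first dimension
# wraps (1 hop instead of 3), giving 1 + 2 + 1 = 4 hops total.
print(torus_hops((0, 0, 0), (3, 2, 1), (4, 4, 4)))  # 4
```

Without the wrap-around links (a plain mesh), the same pair of nodes would be 3 + 2 + 1 = 6 hops apart, which is why torus networks cut worst-case latency roughly in half per dimension.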