Research Assistant: K. Y. Wang
Sponsors: Intel Corporation, NSF, CNPq Brazil, SIO
This project is part of the Scalable I/O Initiative (SIO), sponsored by NSF, ARPA, DOE, and NASA. Its aim is to characterize the access patterns of parallel programs; to provide language extensions, compiler analysis, and program transformations; to design adaptive file options; and to support distributed run-time optimization of I/O accesses.
Massively Parallel Processors (MPPs) are viewed today as expensive scientific and engineering instruments. Their primary use is the numerical simulation of complex physical phenomena. The performance/usability trade-offs of such systems are heavily tilted in favor of performance: most distributed memory MIMD (DMIMD) systems have rather primitive operating systems with restricted functionality and rudimentary management of system resources. Only recently have MPPs been announced that run a commodity operating system on all Processing Elements (PEs), e.g., IBM's SP1 and SP2 (running AIX) and the Intel Paragon (running OSF/1 on a Mach microkernel). Such systems are easier to use but less efficient than their counterparts that run only lightweight communication kernels (e.g., SUNMOS, NX).
Virtual memory is a convenience provided by the operating system that allows users to design applications without immediate concern for the amount of real memory available on a given system. The operating system maps the virtual (user) address space onto the real memory available. If the application exhibits good locality of reference, the performance penalty associated with virtual memory is low, even when the virtual address space is considerably larger than the real memory.
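The effect of locality on that penalty can be made concrete with a toy simulation (not part of the project's methodology): demand paging with LRU replacement over a real memory much smaller than the virtual address space, comparing a high-locality sequential reference string against a low-locality random one. All names and parameters below are illustrative assumptions.

```python
# Toy demand-paging simulation: fault rate under LRU replacement when the
# virtual address space (PAGES) greatly exceeds real memory (FRAMES).
import random
from collections import OrderedDict

def fault_rate(refs, frames):
    """Fraction of references that page-fault under LRU replacement."""
    resident = OrderedDict()                 # resident pages, ordered by recency
    faults = 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)       # hit: refresh recency
        else:
            faults += 1                      # miss: demand-load the page
            if len(resident) >= frames:
                resident.popitem(last=False) # evict least recently used page
            resident[page] = None
    return faults / len(refs)

random.seed(0)
PAGES, FRAMES, N = 1000, 64, 50_000          # virtual pages >> real frames
# Good locality: each page is referenced 8 times in a row before moving on.
sequential = [(i // 8) % PAGES for i in range(N)]
# Poor locality: uniformly random pages.
scattered = [random.randrange(PAGES) for _ in range(N)]

print(f"sequential fault rate: {fault_rate(sequential, FRAMES):.3f}")
print(f"random fault rate:     {fault_rate(scattered, FRAMES):.3f}")
```

With good locality the fault rate stays near 1/8 (one fault per 8-reference run) regardless of how small real memory is relative to the address space, while the random workload faults on almost every reference.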
Support for demand paging is an important step towards making massively parallel systems more usable and appealing to a broader class of applications. Yet existing distributed memory MIMD systems are unbalanced: their I/O and communication bandwidths are insufficient to sustain the request rates generated by their powerful processors. There is a legitimate concern that paging activity may impose a significant performance penalty by increasing both the I/O and the communication load.
The goal of our research is to observe and understand the paging activity of parallel programs. We want to answer questions such as: (a) How can the paging activity of a parallel program be characterized? (b) How is the paging activity affected by changes in the number of processing nodes and the size of the data space, and how does it change with the system configuration, e.g., the placement and/or the number of I/O nodes? (c) How can knowledge of the paging activity of an application be used to improve its performance? (d) How can knowledge of the paging activity of several applications be used to improve their concurrent scheduling in different partitions of a large system? Such questions can only be answered by studying the paging activity of representative applications running on existing MPPs. Therefore, our first objective is to develop a methodology for studying paging activity, one that includes program monitoring and the analysis of the collected data.
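A minimal sketch of the kind of trace analysis such a methodology involves (the trace format here is hypothetical, not the project's actual instrumentation): given one record per page fault as (time in ms, node id, page), summarize faults per processing node and the fault rate over fixed time windows, the two basic views behind questions (a) and (b) above.

```python
# Hypothetical page-fault trace analysis: per-node fault counts and
# fault counts per fixed-size time window.
from collections import Counter, defaultdict

def summarize(trace, window_ms=100):
    """trace: iterable of (time_ms, node_id, page) page-fault records."""
    per_node = Counter(node for _, node, _ in trace)     # faults per PE
    per_window = defaultdict(int)
    for t, _, _ in trace:
        per_window[t // window_ms] += 1                  # faults per window
    return per_node, dict(per_window)

# Tiny synthetic trace: node 0 faults steadily, node 1 bursts early.
trace = [(t, 0, t % 7) for t in range(0, 300, 30)] + \
        [(t, 1, t % 5) for t in range(0, 60, 10)]
per_node, per_window = summarize(trace)
print(per_node)     # faults per processing node
print(per_window)   # faults per 100 ms window
```

From such summaries one can see, for example, whether faults cluster on a few nodes or in short bursts, which is exactly the behavior that determines the extra load placed on the I/O nodes and the interconnect.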