Current trends in microprocessor design indicate that chips are approaching their packaging thermal limits, and the power-related costs of high-performance clusters and multiprocessors continue to grow as a quadratic function of peak execution rates and clock frequencies. Although a faster scientific simulation, such as one obtained by exploiting quality-performance tradeoffs, is often also one that consumes less power by using fewer compute cycles, a major challenge is developing explicitly power-aware scientific computing tools that can exploit the energy-saving features of the circuit fabric. Such tools are perhaps most natural when scientific computing involves sparse or irregular computations, for example, simulations based on partial differential equations in two or three spatial dimensions solved using implicit or semi-implicit schemes. Sparse kernels typically cannot execute near the CPU's peak rates, so there is potential for tuning them to co-manage their power and performance characteristics. Furthermore, each sparse kernel often has a variety of implementations offering a wide range of tradeoffs in solution quality (e.g., accuracy, reliability, and scalability) and performance (e.g., execution time/rate and parallel efficiency/speedup). Consequently, proper method selection to meet changing application quality-of-service requirements and changing technologies can provide dramatic performance improvements and energy savings by using circuit-fabric features such as dynamic voltage scaling. Our goal is to design adaptive tools for sparse computations that deliver reduced-energy realizations without adversely affecting application performance. In this paper, we provide an overview of our project and discuss some initial results.