State-of-the-art run-time systems are a poor match for diverse, dynamic distributed applications because they are designed to support a wide variety of applications without much customization to the specific requirements of each one. As a result, performance is often disappointing. To address this problem, we propose SMART APPLICATIONS (SmartApps). In the executable of a smart application, the compiler will embed most run-time system services, novel speculative and adaptive run-time optimization techniques, and a performance-optimizing feedback loop that monitors the application’s performance and adaptively reconfigures both the application and the OS/hardware platform. This total customization of resources to the application’s own needs should lead to major speedups.
SmartApps builds on the foundation of our speculative run-time techniques developed in the context of parallelizing, or restructuring, compilers. This talk will present an overview of software run-time techniques for speculatively parallelizing loops, adaptive algorithm selection for reduction parallelization, and the corresponding architectural support. These methods target irregular, dynamic applications that resist traditional static optimization. Typical examples include complex simulations such as SPICE for circuit simulation, DYNA-3D for structural mechanics modeling, and CHARMM for molecular dynamics simulation of organic systems. We present experimental results on loops from the PERFECT, HPF, and other benchmarks, showing that these techniques can indeed yield significant speedups.
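To make the flavor of these run-time techniques concrete, the sketch below shows a simplified, conservative form of speculative loop parallelization in C with OpenMP: the loop is executed optimistically in parallel while shadow arrays record which elements each iteration reads and writes, a post-execution test looks for possible cross-iteration dependences, and on failure the checkpointed data is restored and the loop is re-executed sequentially. The code, its index arrays, and its conflict test are illustrative assumptions, not the exact LRPD test or the adaptive reduction schemes presented in the talk.

/*
 * Illustrative sketch only: a simplified, conservative run-time test in the
 * spirit of speculative loop parallelization (not the exact LRPD test).
 * All names here are hypothetical.
 */
#include <stdio.h>
#include <string.h>
#include <omp.h>

#define N 1000

int main(void)
{
    static double a[2 * N], backup[2 * N];
    static int wr_idx[N], rd_idx[N];
    static unsigned char rd_mark[2 * N];   /* element was read by some iteration  */
    static int wr_count[2 * N];            /* how many iterations wrote it        */

    /* In a real irregular code these index arrays come from input data
       (e.g. a sparse-matrix structure), so the compiler cannot prove the
       loop independent at compile time.  Here they are filled so that the
       demo run happens to be conflict-free. */
    for (int i = 0; i < 2 * N; i++) a[i] = (double)i;
    for (int i = 0; i < N; i++) { wr_idx[i] = 2 * i; rd_idx[i] = 2 * i + 1; }

    memcpy(backup, a, sizeof a);           /* checkpoint for rollback */

    /* Phase 1: execute the loop speculatively in parallel while marking,
       in shadow arrays, which elements each iteration reads and writes. */
#pragma omp parallel for
    for (int i = 0; i < N; i++) {
        int r = rd_idx[i], w = wr_idx[i];
#pragma omp atomic write
        rd_mark[r] = 1;
#pragma omp atomic
        wr_count[w]++;
        a[w] = 2.0 * a[r];                 /* the actual loop body */
    }

    /* Phase 2: analyse the shadow arrays.  This conservative check fails
       if any element was written by more than one iteration (a possible
       output dependence) or was both read and written (a possible flow or
       anti dependence); it cannot distinguish same-iteration accesses. */
    int conflict = 0;
    for (int i = 0; i < 2 * N && !conflict; i++)
        if (wr_count[i] > 1 || (wr_count[i] > 0 && rd_mark[i]))
            conflict = 1;

    /* Phase 3: commit or roll back.  On failure, restore the checkpoint
       and re-execute the loop sequentially, so correctness never depends
       on the speculation succeeding. */
    if (conflict) {
        memcpy(a, backup, sizeof a);
        for (int i = 0; i < N; i++)
            a[wr_idx[i]] = 2.0 * a[rd_idx[i]];
    }
    printf("speculation %s\n", conflict ? "failed: re-ran sequentially"
                                        : "succeeded: parallel result kept");
    return 0;
}

The checkpoint-and-rollback step is what makes the optimism safe: in the worst case the loop costs one wasted parallel attempt plus a sequential re-execution, while in the common case for irregular codes of this kind the speculation succeeds and the loop scales with the number of processors.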