In November 2018, Research Technologies (RT) at Indiana University, jointly with the University of Delaware, was awarded $300,000 by the National Science Foundation (NSF) to prepare real-world parallel scientific applications for integration into a benchmark under development at the High-Performance Group (HPG) of the Standard Performance Evaluation Corporation (SPEC). The two-year grant supports preparing these applications for inclusion in the new benchmark, including refactoring them so that they can be compiled on different hardware and software platforms – among them IU’s Jetstream cloud computing system, the first production cloud funded by the NSF. In addition, the project is defining datasets that allow the applications to scale from a small number of nodes to large-scale HPC systems. The new application benchmark will enable more realistic evaluation of HPC system designs and better rankings of next-generation computing systems.
In September 2019, a benchmarking workshop was held in Alexandria, Virginia. Organized by Robert Henschel and Junjie Li of RT, and Rudolf Eigenmann and Sunita Chandrasekaran of the University of Delaware, the workshop explored the use of application benchmarks for the next generation of high-performance computing (HPC) systems. The workshop featured talks by the organizers and by invited speakers working at the forefront of the field. Attendees represented major national labs and HPC centers, including Oak Ridge National Laboratory, Lawrence Livermore National Laboratory, and the National Center for Supercomputing Applications.
The two-day workshop began with an introduction from Henschel and Chandrasekaran, who presented the history of the SPEC High Performance Group and the benchmarks the group has created over the last 20 years. Henschel offered an overview of the High Performance Group and of SPEC’s benchmarking philosophy in general. He presented use cases for the benchmarks, such as comparing compiler performance and the performance of programming paradigms like OpenMP and OpenACC, and he gave an overview of the new benchmark that SPEC HPG is currently developing. Chandrasekaran focused on the use of the SPEC ACCEL benchmark suite for research and education.
Invited talks addressed topics such as existing application benchmarking efforts and benchmark metrics; proxy applications; holistic performance assessment of computational and data analysis systems; benchmarking HPC systems for NASA workloads; the system and applications of Sunway TaihuLight, a Chinese supercomputer; and the design process, implementation, and field experience of several benchmark-based HPC performance evaluations.