Oct 24, 2019 | Atlanta, GA
Researchers are beginning a three-year cross-institute project that aims to lower the barrier to entry for software engineers developing new high-performance applications on large-scale parallel systems.
The new $1.26 million National Science Foundation (NSF) project seeks to develop compiler tools and runtime systems for a framework, named Parallel Algorithms by Blocks (PAbB), built specifically to simplify the programming of scalable parallel systems, from high-performance computing (HPC) clusters to exascale machines.
“Current supercomputers and exascale machines are getting harder to program because of how technology is evolving,” said School of Computational Science and Engineering (CSE) Professor Ümit Çatalyürek.
Çatalyürek is Georgia Tech’s principal investigator (PI) for the project and joins the project’s lead PI, University of Utah Professor Ponnuswamy Sadayappan, and co-PIs Ananth Kalyanaraman, Aravind Sukumaran Rajam, and Sriram Krishnamoorthy of Washington State University. The team plans to combine user insights, new compiler optimizations, and advanced runtime support to create the PAbB framework, which will ultimately provide building blocks of parallel code for heterogeneous environments, usable across a range of applications in computational science and data science.
From caches to networks, modern architectures are designed to transfer multiple items at once. According to Çatalyürek, by viewing algorithms and problems in terms of blocks – packages of data – programmers can take advantage of this bulk data movement and better schedule communication and computation on these heterogeneous systems.
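The blocked style the project builds on can be illustrated with a classic example: tiling a matrix multiplication so each unit of work touches only a small, cache-sized block of data. This is a generic sketch of the idea, not PAbB's actual interface (which is still under development).

```python
# Sketch of "algorithms by blocks": a tiled matrix multiplication in
# which each unit of work operates on a bs x bs block, so data moves
# through the memory hierarchy in cache-friendly chunks.

def blocked_matmul(A, B, n, bs):
    """Multiply two n x n matrices (lists of lists) using bs x bs blocks."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):          # block rows of C
        for jj in range(0, n, bs):      # block columns of C
            for kk in range(0, n, bs):  # blocks along the shared dimension
                # Each (ii, jj, kk) triple is a block-level task that a
                # runtime could, in principle, schedule on a CPU core or GPU.
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + bs, n)):
                            C[i][j] += a * B[k][j]
    return C
```

The inner three loops stay within one block of each matrix, so the working set fits in cache regardless of how large `n` grows; the outer three loops enumerate block-level tasks, which is the granularity a scheduler can distribute.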
“We cannot make the single core in a computer much faster anymore, which is why sequential programs are not getting faster, and why we have to do everything in parallel computing,” he said.
“If you look at today’s supercomputers you will see that all of the architectures are becoming more and more heterogeneous. So, writing parallel code by itself without heterogeneity is difficult, but, when combined, it becomes a barrier for many engineers.”
Heterogeneous systems are made up of hardware and software components that require different programming languages, run different kinds of operations, and usually incorporate specialized processors to handle particular tasks.
Researchers, particularly those in the HPC and exascale spaces, need a way to ensure that their data and computation work in these heterogeneous environments, across multiple nodes, while still communicating effectively with the rest of a program and producing results quickly. PAbB aims to be the first framework to achieve both high productivity and high performance in such environments through block programming.
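The block-programming idea described above can be sketched abstractly: partition the data into blocks, express the computation as per-block tasks, and leave it to a runtime to map those tasks onto whatever resources are available. All names below are hypothetical illustrations, not part of PAbB; a thread pool stands in for the heterogeneous scheduler.

```python
# Toy sketch of block programming: data is partitioned into blocks and
# the computation becomes a set of independent per-block tasks that a
# runtime could map onto heterogeneous resources (CPU cores, GPUs, nodes).

from concurrent.futures import ThreadPoolExecutor

def partition(data, block_size):
    """Split a flat list into fixed-size blocks."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def block_task(block):
    """Per-block computation; a real runtime might offload this to a GPU."""
    return sum(x * x for x in block)

def run_blocked(data, block_size):
    blocks = partition(data, block_size)
    # A heterogeneous runtime would decide where each block task executes;
    # here a thread pool stands in for that scheduling decision.
    with ThreadPoolExecutor() as pool:
        partial_sums = list(pool.map(block_task, blocks))
    return sum(partial_sums)
```

Because each task sees only its own block, the programmer never writes device-specific communication code; productivity comes from expressing the algorithm once at block granularity, and performance comes from the runtime's freedom to place blocks where they execute fastest.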