University of Cambridge Strives for Zettascale

Cambridge Open Zettascale Lab researchers lay the foundation for future computing ecosystems with Intel® oneAPI Tools.

Heterogeneous computing is, by definition, the use of different types of processors, most commonly CPUs and GPUs, to solve complex computational tasks. The aim is to enhance performance by breaking larger tasks down into smaller units, then assigning them to the different processors available.
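
A minimal sketch of the idea, using SYCL (the open C++ model that underpins oneAPI, discussed below): one array-doubling task is split between a CPU queue and a GPU queue. The buffer size and half-and-half split are illustrative only, and the code assumes a machine that exposes both device types to the runtime.

```cpp
// Hedged sketch: split one task across a CPU and a GPU with SYCL.
// Assumes both devices are present; gpu_selector_v throws otherwise.
#include <sycl/sycl.hpp>
#include <cstdio>
#include <vector>

int main() {
  constexpr size_t n = 1 << 20;
  std::vector<float> data(n, 1.0f);
  const size_t half = n / 2;

  sycl::queue cpu_q{sycl::cpu_selector_v};
  sycl::queue gpu_q{sycl::gpu_selector_v};

  {
    // Each processor gets its own slice of the larger task.
    sycl::buffer<float> cpu_buf{data.data(), sycl::range<1>{half}};
    sycl::buffer<float> gpu_buf{data.data() + half, sycl::range<1>{n - half}};

    auto scale = [](sycl::queue& q, sycl::buffer<float>& buf) {
      q.submit([&](sycl::handler& h) {
        sycl::accessor a{buf, h, sycl::read_write};
        h.parallel_for(buf.get_range(), [=](sycl::id<1> i) { a[i] *= 2.0f; });
      });
    };
    scale(cpu_q, cpu_buf);  // first half on the CPU...
    scale(gpu_q, gpu_buf);  // ...second half on the GPU, concurrently
  }  // buffer destructors synchronize results back into `data`

  std::printf("data[0]=%.1f data[n-1]=%.1f\n", data[0], data[n - 1]);
  return 0;
}
```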

This can have a dramatic effect on the capability of a given system, and it has generated considerable interest and innovation in the sector, driven by the increasing demands of large-scale computation in a range of fields. These include AI and machine learning (ML), quantum mechanics, computational fluid dynamics, and enhanced modeling of complex systems, which can cover anything from climate modeling to the design of consumer goods.

By COZL

Towards Exascale and Zettascale Systems

Heterogeneous computing certainly isn’t a new topic, but it is becoming an increasingly polarizing one as exascale computing becomes a reality and zettascale becomes possible. While the historical approach to supercomputers involved significant amounts of proprietary IP, this has increasingly come under scrutiny. One particular challenge is interoperability, especially for high-cost exascale and zettascale systems, which need to be able to ingest data from a wide range of sources to offer maximum utility.

This data might come from NGOs or government sources; it might be anything from highly classified to entirely public; and it might arrive from public cloud service providers and other enterprise-grade, cloud-based storage implementations. Old, siloed, bespoke supercomputer installations have historically been unable to interface with this new, cloud-based world, and that creates a substantial challenge.

“Exascale and zettascale computing hold the promise to solve a lot of ‘grand challenge’ problems,” says Dr. Paul Calleja, Director of Research Computing Services and the Cambridge Open Zettascale Lab, University of Cambridge. “The computation provided by such systems is a big step forward compared to most systems today—but those systems have several different pain points in their usage. The Cambridge Open Zettascale Lab1 was set up to define those pain points and to address them within a design constraint of using—as much as possible—commoditized open standard solutions and open-source software.”

The Zettascale Lab was established in 2020 as a joint venture between the University of Cambridge, Pembroke College, Intel, and Dell Technologies. Its mission is to explore, test, and advance the next generation of high-performance computers. But it’s more than that. As Paul Calleja explains: “We’re really trying to democratize exascale and zettascale technologies, and then enable those technologies to trickle down to smaller systems—that’s the underlying driver.”

oneAPI – Democratizing Exascale and Zettascale Solutions

Dr. Paul Calleja and his team have already identified several key pain points on the road to zettascale, from inefficient data storage and non-proprietary middleware to network speeds and data visualization. But one of the core challenges for future computing systems will be porting application code to machines that incorporate heterogeneous hardware. In order to maximize computing power, software needs to become universal.

One part of the solution here—according to the Cambridge Open Zettascale Lab—is to use technologies such as oneAPI.2 Intel oneAPI is an open, unified, and cross-architecture programming model that can be deployed across CPUs and accelerator architectures (GPUs, FPGAs, and others). The programming model simplifies software development and delivers uncompromising performance for accelerated compute without proprietary lock-in, while enabling the integration of legacy code.
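
As a concrete, hypothetical sketch of that single-source model: the SYCL/DPC++ code below defines one kernel with no device-specific code paths and lets the runtime pick the device. It assumes the chosen device supports unified shared memory, and would typically be built with the oneAPI compiler (icpx -fsycl).

```cpp
// Hedged sketch: one kernel source, any oneAPI device.
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
  // The runtime selects a device: a CPU, GPU, or other accelerator.
  sycl::queue q{sycl::default_selector_v};
  std::cout << "Running on: "
            << q.get_device().get_info<sycl::info::device::name>() << "\n";

  // Unified shared memory, visible to both host and device.
  float* x = sycl::malloc_shared<float>(1024, q);
  for (int i = 0; i < 1024; ++i) x[i] = float(i);

  // One kernel definition; nothing here is architecture-specific.
  q.parallel_for(sycl::range<1>{1024}, [=](sycl::id<1> i) {
    x[i] = x[i] * x[i];
  }).wait();

  std::cout << "x[3] = " << x[3] << "\n";  // 9
  sycl::free(x, q);
  return 0;
}
```

The same source can be recompiled, unchanged, against different backends; which ones are available depends on the compiler and plugins installed.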

The Cambridge Open Zettascale Lab is using multiple Intel® oneAPI Tools, foundational among them the Base and HPC toolkits. For visualization, the Intel oneAPI Rendering Toolkit is a set of open-source libraries that enables the creation of high-performance, high-fidelity, and cost-effective visualization applications and solutions. The toolkit supports Intel CPUs and future Xe architectures (GPUs). It includes the award-winning Intel® Embree, Intel® Open Image Denoise, Intel® Open Volume Kernel Library, Intel® OSPRay, Intel® OpenSWR, and other components and utilities.
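
For a flavor of what those libraries look like in practice, here is a hedged sketch against Embree’s C API (version 3 here; Embree 4 changes the rtcIntersect1 signature): build a one-triangle scene and trace a single ray against it. The geometry and ray values are arbitrary.

```cpp
// Hedged sketch: minimal ray/triangle intersection with Intel Embree 3.
#include <embree3/rtcore.h>
#include <cstdio>
#include <limits>

int main() {
  RTCDevice device = rtcNewDevice(nullptr);
  RTCScene scene = rtcNewScene(device);

  // One triangle in the z = 0 plane.
  RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
  float* v = (float*)rtcSetNewGeometryBuffer(
      geom, RTC_BUFFER_TYPE_VERTEX, 0, RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
  v[0] = 0; v[1] = 0; v[2] = 0;   // vertex 0
  v[3] = 1; v[4] = 0; v[5] = 0;   // vertex 1
  v[6] = 0; v[7] = 1; v[8] = 0;   // vertex 2
  unsigned* idx = (unsigned*)rtcSetNewGeometryBuffer(
      geom, RTC_BUFFER_TYPE_INDEX, 0, RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
  idx[0] = 0; idx[1] = 1; idx[2] = 2;
  rtcCommitGeometry(geom);
  rtcAttachGeometry(scene, geom);
  rtcReleaseGeometry(geom);
  rtcCommitScene(scene);

  // Shoot a ray from z = -1 straight at the triangle.
  RTCRayHit rh{};
  rh.ray.org_x = 0.2f; rh.ray.org_y = 0.2f; rh.ray.org_z = -1.0f;
  rh.ray.dir_z = 1.0f;
  rh.ray.tfar = std::numeric_limits<float>::infinity();
  rh.ray.mask = (unsigned)-1;
  rh.hit.geomID = RTC_INVALID_GEOMETRY_ID;

  RTCIntersectContext ctx;
  rtcInitIntersectContext(&ctx);
  rtcIntersect1(scene, &ctx, &rh);

  std::printf("hit: %s (t = %f)\n",
              rh.hit.geomID != RTC_INVALID_GEOMETRY_ID ? "yes" : "no",
              rh.ray.tfar);  // tfar shrinks to the hit distance

  rtcReleaseScene(scene);
  rtcReleaseDevice(device);
  return 0;
}
```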

oneAPI adds value in this context because it enables programmers to ignore the hardware running their code and concentrate on the details of the functions themselves. In addition, it theoretically makes code “portable” between different hardware architectures, from FPGA to ASIC, CPU to GPU.
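
In SYCL terms, that portability might look like the hedged sketch below: the same hardware-agnostic lambda runs, unmodified, on every device the runtime exposes. Which devices appear varies per machine, and the sketch skips any that lack unified-shared-memory support.

```cpp
// Hedged sketch: run identical logic on every visible device.
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
  // Pure logic, with no knowledge of the hardware it will run on.
  auto bump = [](float v) { return v + 1.0f; };

  for (auto& dev : sycl::device::get_devices()) {
    if (!dev.has(sycl::aspect::usm_shared_allocations)) continue;

    sycl::queue q{dev};
    float* v = sycl::malloc_shared<float>(1, q);
    *v = 41.0f;
    q.single_task([=] { *v = bump(*v); }).wait();  // same kernel everywhere
    std::cout << dev.get_info<sycl::info::device::name>()
              << " -> " << *v << "\n";  // 42 on each architecture
    sycl::free(v, q);
  }
  return 0;
}
```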

The Cambridge Open Zettascale Lab is a designated “center of excellence” for courses and workshops related to oneAPI, offering a range of courses to train programmers. In addition, the team at Cambridge is porting significant exascale and zettascale candidate codes to oneAPI, including CASTEP, FEniCS, and AREPO.

As Robert Maskell, Director of High-Performance Computing, Modelling and Simulation at Intel, explains: “oneAPI is [the key to] getting people to think about open programming. So, when they’re writing a program, they can write it once and it runs on many different architectures. This is because oneAPI takes that code and says: ‘I want to run that on a CPU or a GPU, an ASIC or an FPGA,’ and then compiles that code to run on whatever architecture you choose.

“oneAPI is an open program, so others are invited to join. The idea at Cambridge is not only to train the Cambridge team on how to use oneAPI and Intel oneAPI tools, but also to get that team to go out and do industrial engagement.”

The journey towards zettascale computing is well underway: in June 2022 it was announced that the 1.1-exaflop Frontier supercomputer at Oak Ridge National Laboratory had demonstrated performance of more than 10¹⁸ (1 quintillion) operations per second in standardized tests.3

Not far behind, at the US Department of Energy (DOE) Argonne National Laboratory, engineers are building the Aurora supercomputer. Aurora is expected to exceed 2 exaflops of peak double-precision performance. It will be the first to showcase the power of pairing the Intel® Data Center GPU Max Series with Intel® Xeon® CPU Max Series processors, and it will utilize the capabilities of the Intel oneAPI toolkits.4

The challenges that face engineers, scientists, and programmers in the world of exascale are considerable and unprecedentedly broad. From battling the physics involved in such powerful machines (huge power demands, intense cooling requirements, and dedicated spaces to operate and maintain the hardware) through to the demands of middleware, networking, and storage, the field is extraordinarily complex in every respect.