White Paper: Software Scalability in Exascale Computing Systems

Migrating to exascale and, eventually, zettascale computing requires software optimized to make highly efficient use of multi-core, heterogeneous systems. The Open Zettascale Lab at the University of Cambridge is tackling software portability challenges as it transitions from petascale to exascale systems and beyond, opening the door to new scientific and engineering breakthroughs, with mainstream business applications not far behind.

At the microcomputer level, faster processors, new I/O efficiencies, and higher-capacity, higher-speed storage and RAM arrive in a steady stream to meet increasing application performance requirements on PCs and workstations. On a much larger scale, and far less frequently, come the exponential leaps in high-performance computing (HPC). HPC is now positioned to migrate from today's petascale systems to exascale-capable systems as the world's data multiplies and increasingly complex, often AI-driven computations require the parallel processing of massive data sets across hundreds or thousands of servers, grouped into clusters and connected by high-speed networks.
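
To make the cluster-level parallelism described above concrete, the sketch below distributes a large data set across MPI ranks, performs a local computation on each rank's slice, and combines the partial results with a reduction. It is a minimal illustration only; the problem size, the sum-of-squares workload, and the even data split are placeholder assumptions rather than anything prescribed by the Lab's benchmarks.

```c
/*
 * Minimal sketch (illustrative only): data-parallel processing across
 * cluster nodes with MPI. Each rank works on its own slice of a large
 * data set and the partial results are combined with a reduction.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Hypothetical global problem size, split evenly across ranks. */
    const long global_n = 100000000L;
    long local_n = global_n / nprocs;

    /* Each rank generates (or would load) only its own slice of the data. */
    double *local_data = malloc(local_n * sizeof(double));
    for (long i = 0; i < local_n; i++)
        local_data[i] = (double)(rank * local_n + i);

    /* Local computation: a stand-in for the real per-node workload. */
    double local_sum = 0.0;
    for (long i = 0; i < local_n; i++)
        local_sum += local_data[i] * local_data[i];

    /* Combine partial results across the cluster's high-speed network. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum of squares: %e\n", global_sum);

    free(local_data);
    MPI_Finalize();
    return 0;
}
```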

For crunching the exabytes and, eventually, zettabytes of data required by machine learning algorithms, advanced analytics, and predictive modeling programs, faster processors alone are not enough. Research and benchmarks underway at the Cambridge Open Zettascale Lab indicate that every component of the HPC ecosystem, including software development, must evolve concurrently so that applications can scale efficiently and take full advantage of the more powerful systems driving next-generation computing.
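
One way to see why more powerful hardware alone cannot deliver these gains is Amdahl's law: if a fraction s of a program remains serial, its speedup on p processing elements is bounded by 1 / (s + (1 - s) / p). The short sketch below tabulates that bound for a few assumed serial fractions; the values are illustrative placeholders, not measurements from the Lab.

```c
/*
 * Minimal sketch (illustrative only): Amdahl's law speedup bound,
 * 1 / (s + (1 - s) / p), for assumed serial fractions s and core counts p.
 * Shows how an unscaled serial portion caps speedup regardless of cores.
 */
#include <stdio.h>

static double amdahl_speedup(double serial_fraction, double p)
{
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p);
}

int main(void)
{
    const double procs[]  = { 1e3, 1e4, 1e5, 1e6 };   /* placeholder core counts */
    const double serial[] = { 0.10, 0.01, 0.001 };     /* placeholder serial fractions */

    for (int s = 0; s < 3; s++) {
        for (int i = 0; i < 4; i++) {
            double sp = amdahl_speedup(serial[s], procs[i]);
            printf("serial fraction %.3f, %8.0f cores: speedup %10.1f, "
                   "parallel efficiency %6.4f\n",
                   serial[s], procs[i], sp, sp / procs[i]);
        }
    }
    return 0;
}
```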