At the microcomputer level, faster processors, new I/O efficiencies, and more generous, higher-speed storage and RAM arrive in a steady stream to meet the growing performance demands of applications on PCs and workstations. On a much larger scale, and far less frequently, high-performance computing (HPC) takes exponential leaps. HPC is now poised to migrate from today's petascale systems to exascale-capable systems as the world's data multiplies and increasingly complex, often AI-driven computations require the parallel processing of massive data sets across hundreds or thousands of servers, grouped into clusters and connected by high-speed networks.
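To make that cluster-level parallelism concrete, here is a minimal sketch of data-parallel processing with MPI, the message-passing standard commonly used on such clusters. It assumes the mpi4py and NumPy packages and an MPI launcher; the data sizes and seeds are illustrative, not drawn from any particular system.

```python
# Minimal sketch: each process (rank) works on its own shard of a large data set,
# and the partial results are combined over the cluster interconnect.
# Run with an MPI launcher, e.g.:  mpirun -n 8 python reduce_sketch.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the cluster-wide job
size = comm.Get_size()   # total number of processes across all nodes

# Each rank generates (in practice, would load) its own shard of the data.
local_shard = np.random.default_rng(seed=rank).random(1_000_000)

# Each rank computes a partial result on its shard in parallel...
local_sum = local_shard.sum()

# ...and the partial results are reduced to a single global answer on rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Global sum computed across {size} ranks: {total:.3f}")
```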
Crunching the exabytes, and eventually zettabytes, of data required by machine learning algorithms, advanced analytics, and predictive modeling programs takes more than faster processors alone. Research and benchmarking underway at The Cambridge Open Zettascale Lab indicate that all components of the HPC ecosystem, including software development, must evolve concurrently so that applications can scale efficiently and take full advantage of the more powerful systems driving next-generation computing.
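As a rough illustration of why hardware advances alone fall short, the snippet below works through Amdahl's law, a textbook scaling model (a generic example, not a result from the Lab): if even 5 percent of an application's work remains serial, parallel efficiency collapses as processor counts grow, which is why software must evolve alongside the machines.

```python
# Illustrative only: Amdahl's law shows how a small serial fraction caps speedup,
# no matter how many processors the system provides.
def speedup(parallel_fraction: float, n_processors: int) -> float:
    """Amdahl's law: S = 1 / ((1 - p) + p / N)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_processors)

for n in (100, 1_000, 10_000):
    s = speedup(parallel_fraction=0.95, n_processors=n)
    # With 5% of the work serial, efficiency drops sharply as the machine grows.
    print(f"N={n:>6}: speedup {s:6.1f}x, parallel efficiency {s / n:6.2%}")
```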