The exascale moment finally arrives

There was big news in this week's update of the Top500 list of the world's fastest supercomputers, released at the ISC High Performance conference in Germany: the Frontier supercomputer at Oak Ridge National Laboratory not only tops the list, it is also the first machine to record an exaflop. In other words, it can perform a billion billion (10^18) 64-bit floating-point operations per second. We have been waiting for this milestone for a long time, and it's good to see it finally happen.

The Frontier system, based on the HPE Cray EX235a architecture, has 8,730,112 total cores, using 3rd-generation AMD EPYC 7A53 64-core processors running at 2 GHz along with AMD Instinct MI250X accelerators and the Slingshot-11 interconnect. It scored 1.102 exaflops on the High Performance Linpack (HPL) benchmark used to determine the ranking, while drawing 21 megawatts of power.

For the past two years, the top spot was held by the Fugaku system at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan. Fugaku, which uses 7,630,848 cores of Fujitsu's A64FX 48C 2.2GHz processors, retains its previous HPL score of 442 petaflops; its theoretical peak is around one exaflop, although it has never scored that high on the benchmark. It has now fallen to second place, though it still scores nearly three times higher than the number-three system.

That system is also new: the LUMI system at the EuroHPC center at CSC in Finland, which recorded 151.9 petaflops. It is part of the European High Performance Computing Joint Undertaking, in which European countries are working together to create exascale machines. LUMI was also built by HPE Cray and has a similar architecture to Frontier.

Rounding out the top five are two American machines that have been on the list for a few years now: Summit, also at ORNL, which topped the list in 2018 and 2019, and Sierra at Lawrence Livermore National Laboratory. They are followed by the Chinese Sunway TaihuLight.

Thus, for the first time in a few years, the United States is back at the top of the list. However, there have been rumors for some time that China does in fact have an exascale machine but has chosen not to disclose it; such machines are often used for defense purposes such as weapons simulations.

One thing the list shows is the growing importance of AMD and Nvidia accelerators. While AMD tops the list with Frontier, Nvidia accelerators appear in 154 of the top 500 systems, compared to just seven for AMD.

Of the top ten supercomputers, only China's Tianhe-2A, a version of which topped the list in 2013, is based on an Intel design. Intel's own exascale candidate, the Aurora system at Argonne National Laboratory, was originally announced in 2015 around the company's now-discontinued Xeon Phi architecture (and was then expected to deliver 180 petaflops). Instead, Aurora will now use around 10,000 blades, each with two Sapphire Rapids Xeon processors and six Ponte Vecchio GPUs (based on Intel's Xe graphics architecture), in an HPE Cray EX system. Installation has begun, and the system is expected to be online by the end of the year and fully operational by early 2023, with the goal of delivering more than two exaflops.


At the same time, there is a new Green500 list, which ranks machines by performance per watt. Here the Frontier test and development system takes first place, delivering 62.68 gigaflops per watt from a single cabinet with the same architecture as the full Frontier system, which itself came in second at 52.227 gigaflops per watt. The LUMI system comes in third.
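As a quick sanity check, the full-system efficiency figure follows almost directly from the HPL score and power draw quoted earlier for Frontier. A minimal sketch (the official Green500 number uses a slightly different measured power figure, so the result is approximate):

```python
# Rough efficiency check for the full Frontier system,
# using the HPL score and power figures quoted in this article.
hpl_exaflops = 1.102      # HPL result: 1.102 exaflops (10^18 FLOPS)
power_megawatts = 21      # reported power draw

flops = hpl_exaflops * 1e18
watts = power_megawatts * 1e6
gflops_per_watt = flops / watts / 1e9
print(f"{gflops_per_watt:.1f} GFLOPS/W")  # roughly 52 GFLOPS/W
```

The small gap between this back-of-the-envelope 52.5 GFLOPS/W and the official 52.227 comes from the Green500's more precise power measurement.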

It is interesting to note the huge increase in energy efficiency compared to the November list, which was topped by the MN-3 system in Japan, built by Preferred Networks around its MN-Core accelerator and MN-Core DirectConnect interconnect, with 24-core Xeon Platinum 8260M processors running at 2.4 GHz. (That machine comes in at number five on this year's Green500 list, but only 326 on the Top500.)

The demand for high-end computing has never been clearer, both in traditional high-performance computing (HPC) applications and in machine learning. It’s great to see machines becoming both more powerful and more efficient.
