Supercomputers: Obama orders world's fastest computer

  • Thread starter Deleted member 2197
Today U.S. Secretary of Energy Rick Perry announced that six leading U.S. technology companies will receive funding from the Department of Energy’s Exascale Computing Project (ECP) as part of its new PathForward program, accelerating the research necessary to deploy the nation’s first exascale supercomputers.

The awardees will receive funding for research and development to maximize the energy efficiency and overall performance of future large-scale supercomputers, which are critical for U.S. leadership in areas such as national security, manufacturing, industrial competitiveness, and energy and earth sciences. The $258 million in funding will be allocated over a three-year contract period, with companies providing additional funding amounting to at least 40 percent of their total project cost, bringing the total investment to at least $430 million.
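The $430 million total follows directly from the cost-share structure quoted above; a quick arithmetic sketch (the 60/40 split is inferred from the figures in the announcement):

```python
# DOE's $258M covers at most 60% of the total project cost,
# since the companies contribute at least 40% themselves.
doe_funding_musd = 258
company_share = 0.40
total_musd = doe_funding_musd / (1 - company_share)
print(f"total investment: at least ${total_musd:.0f}M")  # -> at least $430M
```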

The following U.S. technology companies are the award recipients:

· Advanced Micro Devices (AMD)

· Cray Inc. (CRAY)

· Hewlett Packard Enterprise (HPE)

· International Business Machines (IBM)

· Intel Corp. (Intel)

· NVIDIA Corp. (NVIDIA)

The Department's funding for this program is supporting R&D in three areas - hardware technology, software technology, and application development - with the intention of delivering at least one exascale-capable system by 2021.
https://energy.gov/articles/departm...-contracts-totaling-258-million-accelerate-us
 
Summit Up and Running at Oak Ridge, Claims First Exascale Application
The Department of Energy’s 200-petaflop Summit supercomputer is now in operation at Oak Ridge National Laboratory (ORNL). The new system is being touted as “the most powerful and smartest machine in the world.”

And unless the Chinese pull off some sort of surprise this month, the new system will vault the US back into first place on the TOP500 list when the new rankings are announced in a couple of weeks. Although the DOE has not revealed Summit’s Linpack result as of yet, the system’s 200-plus-petaflop peak number will surely be enough to outrun the 93-petaflop Linpack mark of the current TOP500 champ, China’s Sunway TaihuLight.

Assuming all those nodes are fully equipped, the GPUs alone will provide 215 peak petaflops at double precision. And since each V100 also delivers 125 teraflops of mixed-precision Tensor Core operations, the system's peak rating for deep learning performance is on the order of 3.3 exaflops.
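The peak-flops arithmetic can be reproduced from the system configuration; note that the node count and per-GPU numbers below are assumptions taken from public Summit/V100 specifications, not from this excerpt, and the nominal Tensor Core total comes out slightly above the article's "order of 3.3 exaflops":

```python
# Assumed configuration (public Summit/V100 specs, not from this article):
nodes = 4608
gpus_per_node = 6
fp64_tflops_per_gpu = 7.8      # V100 SXM2 double-precision peak
tensor_tflops_per_gpu = 125.0  # V100 mixed-precision Tensor Core peak

gpus = nodes * gpus_per_node
fp64_petaflops = gpus * fp64_tflops_per_gpu / 1_000
tensor_exaflops = gpus * tensor_tflops_per_gpu / 1_000_000

print(f"{gpus} GPUs: {fp64_petaflops:.1f} PF FP64, "
      f"{tensor_exaflops:.2f} EF Tensor Core")
```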

Those exaflops are not just theoretical either. According to ORNL director Thomas Zacharia, even before the machine was fully built, researchers had run a comparative genomics code at 1.88 exaflops using the Tensor Core capability of the GPUs. The application was rummaging through genomes looking for patterns indicative of certain conditions. “This is the first time anyone has broken the exascale barrier,” noted Zacharia.
...
The analytics aspect dovetails nicely with Summit’s deep learning propensities, inasmuch as the former is really just a superset of the latter. When the DOE first contracted for the system back in 2014, the agency probably only had a rough idea of what they would be getting AI-wise. Although IBM had been touting its data-centric approach to supercomputing prior to pitching its Power9-GPU platform to the DOE, the AI/machine learning application space was in its early stages. Because NVIDIA made the decision to integrate the specialized Tensor Cores into the V100, Summit ended up being an AI behemoth, as well as a powerhouse HPC machine.

As a result, the system is likely to be engaged in a lot of cutting-edge AI research, in addition to its HPC duties. For the time being, Summit will only be open to select projects as it goes through its acceptance process. In 2019, the system will become more widely available, including its use in the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.
https://www.top500.org/news/summit-up-and-running-at-oak-ridge-claims-first-exascale-application/

 
Interestingly, this is the first time the USA doesn't have the most computers in the TOP500; China now leads 202 to 143. I don't know if it's the 'Trump effect' (i.e. science is not important), but considering that just six months ago the USA had more than China, I wouldn't be surprised if it's playing a part.
 
No, in terms of both the number of systems and the combined performance of all systems (on the TOP500 at least), China surpassed the US in June 2016. It simply continues the trend of China's rise and the US's decline in the number of supercomputers.

However, one important caveat is that the TOP500 is based on LINPACK, which is a relatively easy benchmark: it can be seen as the "practical peak performance" of a supercomputer. Actual performance on real-world workloads can be vastly lower than that. For example, the HPCG benchmark, which covers a broader set of operations typical of supercomputer workloads, paints a different picture of the machines on the list (the current #1 on the TOP500 is only #5 on the HPCG list). HPCG stresses the memory subsystem and interconnect much more than LINPACK does (as do many real-world workloads), though since it's less prominent than the TOP500, supercomputing centres probably spend less time optimising for it.
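The compute-bound vs. memory-bound contrast can be illustrated with a toy sketch (illustrative only, not either benchmark's actual code): a dense solve does O(n^3) flops on O(n^2) data, LINPACK-style, while conjugate gradient on a sparse operator, HPCG-style, performs only a handful of flops per byte streamed from memory:

```python
import numpy as np

rng = np.random.default_rng(0)

# LINPACK-style work: dense solve, O(n^3) flops on O(n^2) data.
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonally dominant, well-conditioned
b = rng.standard_normal(n)
x_dense = np.linalg.solve(A, b)

# HPCG-style work: conjugate gradient with a sparse (tridiagonal
# Laplacian) operator, where each matvec mostly streams memory.
def laplacian_matvec(v):
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

def conjugate_gradient(matvec, rhs, tol=1e-8, maxiter=10_000):
    x = np.zeros_like(rhs)
    r = rhs - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

m = 500
x_cg = conjugate_gradient(laplacian_matvec, np.ones(m))
```

The dense solve keeps the floating-point units busy; the CG loop is dominated by reading the vectors, which is why machines that top LINPACK can rank much lower on HPCG.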

It'd be interesting to see how Summit does on HPCG's list though. I believe it'll be revealed soon.
 
I think the total number in the TOP500 isn't that impressive without knowing the utilisation rate of each supercomputer.
 