Supercomputers: Obama orders world's fastest computer

President Obama has signed an executive order calling for the US to build the world's fastest computer by 2025.

The supercomputer would be roughly 30 times quicker than the current leading machine, which is in China.
It would be capable of making one quintillion (a billion billion) calculations per second - a figure which is known as one exaflop.
A body called the National Strategic Computing Initiative (NSCI) will be set up to research and build the computer.

The US is seeking the new supercomputer, significantly faster than today's models, to perform complex simulations and aid scientific research.
Today's fastest supercomputer, the Tianhe-2 in China's National Computer Centre, Guangzhou, performs at 33.86 petaflops (quadrillions of calculations per second), almost twice as fast as the second-quickest machine, which is American.

For Prof Mark Parsons of the Edinburgh Parallel Computing Centre (EPCC), the latest US initiative is a clear attempt to challenge the dominance of the Chinese in this field.
"The US has woken up to the fact that if it wants to remain in the race it will have to invest," he told the BBC.
Chief among the obstacles, according to Parsons, is the need to make computer components much more power efficient. Even then, the electricity demands would be gargantuan.

"I'd say they're targeting around 60 megawatts, I can't imagine they'll get below that," he commented. "That's at least £60m a year just on your electricity bill."

Efforts to construct an exascale computer are not entirely new.

Recently, IBM, the Netherlands Institute for Radio Astronomy (ASTRON) and the University of Groningen announced plans to build one to analyse data from the Square Kilometre Array (SKA) radio telescope project.

SKA will be built in Australia and South Africa by the early 2020s.
http://www.bbc.com/news/technology-33718311
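A quick back-of-the-envelope check on that £60m quote, assuming an industrial electricity price of roughly £0.11/kWh (my assumption, not a figure from the article):

    # Rough annual electricity cost of a 60 MW facility running continuously
    power_kw = 60 * 1000                            # 60 MW
    hours_per_year = 24 * 365
    energy_kwh = power_kw * hours_per_year          # ~525.6 million kWh
    price_gbp_per_kwh = 0.11                        # assumed industrial rate
    print(f"~GBP {energy_kwh * price_gbp_per_kwh / 1e6:.0f}m per year")  # ~£58m

So the "at least £60m a year" ballpark holds up for continuous operation.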
 
I suspect it will be a race between Intel Xeon/Xeon Phi and IBM/Nvidia to win this bid.
Volta's successor, on TSMC 10nm and based on the Echelon / Exascale project, is the obvious candidate.
 
It was hoped to have an exascale supercomputer in 2018. Apparently it's impossible and now 2025 is the new timeframe...
The original projection by the US Department of Energy put a power cap at 20 MW, which this executive order ups to 30. The story has a quote of a guess at 60.

Is this separate from this? http://www.hpcwire.com/2015/07/28/doe-exascale-plan-gets-support-with-caveats/#/
The goals there are a bit more broad, with multiple exascale systems by 2023 at 20MW.
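To put those power caps in perspective, here is the energy efficiency an exaflop machine would need under each cap, compared against Tianhe-2 (its ~17.8 MW draw is the commonly cited Top500 figure; treat it as approximate):

    # GFLOPS per watt required to reach 1 exaflop under different power caps
    exaflop = 1e18                                   # FLOPS
    for cap_mw in (20, 30, 60):
        gflops_per_watt = exaflop / (cap_mw * 1e6) / 1e9
        print(f"{cap_mw} MW cap -> {gflops_per_watt:.0f} GFLOPS/W")
    # Tianhe-2 today: 33.86 PF at roughly 17.8 MW
    print(f"Tianhe-2 -> {33.86e15 / 17.8e6 / 1e9:.1f} GFLOPS/W")

That is roughly a 9-26x efficiency improvement over today's leader, depending on which power figure you believe, which is why the power budget dominates the discussion.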
 
The goals there are a bit more broad, with multiple exascale systems by 2023 at 20MW.
60GW is frighteningly huge. It's a large-sized Swedish town's worth of electricity. The city I'm in, the 2nd largest in the country, pulled around 280GW peak IIRC on very cold winter days in the late 1990s. I'm not sure what current consumption levels are, because the display that showed those figures is no longer there. :( The city has grown quite a bit since then of course, but things like district heating (and cooling, for that matter) have also been expanded a lot to reduce electricity (and fossil fuel) consumption.

Hopefully they don't locate this monster computer anywhere there's a shortage of water, because that would be totally irresponsible. :p Preferably, you'd cool it with something like deep-ocean water, and then capture the waste heat for household use for example.
 
Today's fastest supercomputer, the Tianhe-2 in China's National Computer Centre, Guangzhou, performs at 33.86 petaflops (quadrillions of calculations per second), almost twice as fast as the second-quickest machine, which is American.

For Parsons, the latest US initiative is a clear attempt to challenge the dominance of the Chinese in this field.
"The US has woken up to the fact that if it wants to remain in the race it will have to invest," he told the BBC.

And the easiest way to win the race is to kneecap the leader with export sanctions.
 
60GW is frighteningly huge.
It is a large estimate, and large enough that it might be an erroneous one by someone not aware of the numbers being given as goals.
There are pragmatic reasons for aiming for 20 or even the executive order's 30.

Hopefully they don't locate this monster computer anywhere there's a shortage of water, because that would be totally irresponsible. :p
Perhaps not responsible, but it's a consideration that is sometimes discounted, particularly for any systems not on the public HPC lists. The NSA's Utah data center consumed 6.2 million gallons of water in 2014, in a desert state with drought conditions. DP floating point might not really figure into its workload, but otherwise it's a very large collection of networked compute racks.
 
It is a large estimate, and large enough that it might be an erroneous one by someone not aware of the numbers being given as goals.
There are pragmatic reasons for aiming for 20 or even the executive order's 30.


Perhaps not responsible, but it's a consideration that is sometimes discounted, particularly for any systems not on the public HPC lists. The NSA's Utah data center consumed 6.2 million gallons of water in 2014, in a desert state with drought conditions. DP floating point might not really figure into its workload, but otherwise it's a very large collection of networked compute racks.

Is the water actually consumed, or does it simply pass through the cooling system?
 
The method suggested elsewhere is evaporative cooling when the temperatures rise high enough that the facility's standard cooling cannot dissipate heat fast enough. The amount varies with the season, although the periods of high temperature can unhelpfully coincide with drought.

The increasing difficulty in terms of facility-wide power delivery and power dissipation is why the Exascale initiative and the recent executive order have rather constraining MW limits to them.
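For a rough sense of scale, if a facility had to reject all of its waste heat by evaporation (an extreme upper bound; in practice only part of the heat, for part of the year, goes this route), the water numbers look like this. I'm using the executive order's 30 MW figure and the latent heat of vaporization of water:

    # Upper-bound water use if all waste heat were removed by evaporation
    heat_w = 30e6                        # 30 MW of heat to reject (assumed)
    latent_heat = 2.26e6                 # J/kg to evaporate water
    kg_per_second = heat_w / latent_heat             # ~13 kg/s
    litres_per_day = kg_per_second * 86400           # ~1.1 million litres/day
    print(f"~{kg_per_second:.0f} kg/s, ~{litres_per_day/1e6:.1f}M litres/day")

Even a fraction of that, sustained through a hot season, adds up quickly in a drought-prone region.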
 
The method suggested elsewhere is evaporative cooling when the temperatures rise high enough that the facility's standard cooling cannot dissipate heat fast enough.
Yes, and you also consume water in much the same way to cool the power plant generating the electricity the facility runs on. So water losses could more or less double by building this thing in the wrong state.
 
More specifically — here come the juicy parts — from a technical perspective, the order will establish the following:

  1. Accelerating delivery of a capable exascale computing system that integrates hardware and software capability to deliver approximately 100 times the performance of current 10 petaflop systems across a range of applications representing government needs. (emphasis added)
  2. Increasing coherence between the technology base used for modeling and simulation and that used for data analytic computing.
  3. Establishing, over the next 15 years, a viable path forward for future HPC systems even after the limits of current semiconductor technology are reached (the “post-Moore’s Law era”). (emphasis added)
  4. Increasing the capacity and capability of an enduring national HPC ecosystem by employing a holistic approach that addresses relevant factors such as networking technology, workflow, downward scaling, foundational algorithms and software, accessibility, and workforce development.
  5. Developing an enduring public-private collaboration to ensure that the benefits of the research and development advances are, to the greatest extent, shared between the United States Government and industrial and academic sectors.
July 29, 2015 articles:
http://www.extremetech.com/extreme/...er-to-build-first-ever-exascale-supercomputer

http://www.extremetech.com/extreme/210872-extremetech-explains-what-is-moores-law
 
Crysis

Or maybe they are trying to break 30fps in Arkham Knight. If so I have bad news for them...

I was about to make the exact same two jokes! :LOL:


On a more serious note, it's probably the usual suspects: simulations of climate models, molecular dynamics, nuclear explosions, fluid dynamics, etc.
 
Boring.

I want full benevolent AI taking over the lunatics that are running the world.
What if it is benevolent to all organic life on Earth as a whole and deems that all industrialized nations and populations must be exterminated? Then you'd only have a small farming population left, plus some remote Amazonian, African and Australian tribes surviving. Guerrilla Games should make a game around something like that.
 
The original projection by the US Department of Energy put a power cap at 20 MW, which this executive order ups to 30. The story has a quote of a guess at 60.

Is this separate from this? http://www.hpcwire.com/2015/07/28/doe-exascale-plan-gets-support-with-caveats/#/
The goals there are a bit more broad, with multiple exascale systems by 2023 at 20MW.

60GW is frighteningly huge. It's a large-sized Swedish town's worth of electricity.

One is talking about Megawatts and the other about Gigawatts...
 
One of the most immediate applications--and a giant reason why the DOE is so interested--is nuclear weapons modeling. The finer points of what makes atomic weapons do what they do best and how aging affects them are harder to verify since actually detonating a few for research purposes has been banned.
Besides that, materials science, particle physics, chemistry, climate modeling, and astronomy are noted areas where the demand is extreme and where exascale is still far from sufficient.


What if it is benevolent to all organic life on Earth as a whole and deems that all industrialized nations and populations must be exterminated? Then you'd only have a small farming population left, plus some remote Amazonian, African and Australian tribes surviving. Guerrilla Games should make a game around something like that.
That would be unsustainable. A giant computational device or network would need those industrialized nations to give it the highly pure materials, energy, and infrastructure it would take to last more than a few years, or potentially months if the HVAC goes down or an unattended municipal water supply shuts down.
The remnants of humanity would be back to where they were in some thousands of years. An entity that could only be born from a multibillion-dollar clean room with an obscenely dust-free atmosphere, precise chemistry, impurities measured in parts per billion, almost no vibration, and some of the rarest elements in the universe needs that civilization of biologics like a tribe on a coral atoll depends on billions of simple polyps.
The AI could build its own civilization equivalents, but then that's trading one type of grasping, emergent organization for another.

To the topic at hand, the article I linked had a list of challenges beyond just the consideration of a shiny new chip, which amounts to a fraction of several bullet points at best; a toy sketch of the resilience point follows the list.

  • Energy efficiency: Creating more energy-efficient circuit, power, and cooling technologies.
  • Interconnect technology: Increasing the performance and energy efficiency of data movement.
  • Memory technology: Integrating advanced memory technologies to improve both capacity and bandwidth.
  • Scalable system software: Developing scalable system software that is power- and resilience-aware.
  • Programming systems: Inventing new programming environments that express massive parallelism, data locality, and resilience.
  • Data management: Creating data management software that can handle the volume, velocity and diversity of data that is anticipated.
  • Exascale algorithms: Reformulating science problems and redesigning, or reinventing, their solution algorithms for exascale systems.
  • Algorithms for discovery, design, and decision: Facilitating mathematical optimization and uncertainty quantification for exascale discovery, design, and decision making.
  • Resilience and correctness: Ensuring correct scientific computation in face of faults, reproducibility, and algorithm verification challenges.
  • Scientific productivity: Increasing the productivity of computational scientists with new software engineering tools and environments.
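As a toy illustration of the "resilience-aware" and "resilience and correctness" items above, here is a minimal checkpoint/restart loop. The file name and interval are arbitrary placeholders, and real codes use parallel I/O libraries and multi-level, asynchronous checkpointing rather than pickle:

    import os
    import pickle

    CHECKPOINT = "state.ckpt"            # placeholder checkpoint file
    INTERVAL = 100                       # checkpoint every 100 steps (assumed)

    def load_or_init():
        # Resume from the last checkpoint if one exists, else start fresh.
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT, "rb") as f:
                return pickle.load(f)
        return {"step": 0, "value": 0.0}

    state = load_or_init()
    while state["step"] < 1000:
        state["value"] += 1.0            # stand-in for the real computation
        state["step"] += 1
        if state["step"] % INTERVAL == 0:
            with open(CHECKPOINT, "wb") as f:
                pickle.dump(state, f)    # periodic checkpoint to survive faults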
 
Research!

I have been following exascale computing for years... it is an extreme scientific challenge, not only on the hardware side but also on the software side: my code scales perfectly to about O(10^5) up to O(10^6) MPI ranks. But this is on Blue Gene systems, with O(petaflop) performance... exascale computing is basically everything times 1000 :)

But this would mean O(10^8)-O(10^9) processors for a Blue Gene type of architecture :oops:

How many of those will break down during a simulation due to hardware faults? Can your software handle failing MPI ranks?
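A rough feel for why that question matters, assuming (purely for illustration) a 5-year mean time between failures per node and independent failures:

    # Expected time between failures somewhere in the machine
    node_mtbf_hours = 5 * 365 * 24       # assumed 5-year MTBF per node
    for nodes in (1e5, 1e6, 1e8):
        system_mtbf_seconds = node_mtbf_hours * 3600 / nodes
        print(f"{nodes:.0e} nodes -> a failure roughly every "
              f"{system_mtbf_seconds:.0f} seconds")

At 10^5 nodes that's a failure every ~26 minutes; at 10^8 it's every couple of seconds, so any simulation that runs for hours has to expect failures mid-run rather than treat them as exceptional.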

One of the biggest problems: parallel filesystems and I/O! A million processors writing simultaneously to a single file?!
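One common mitigation for that last point is collective MPI-IO, where every rank writes its slice of a single shared file in one coordinated call instead of opening a million separate files (or funnelling everything through rank 0). A minimal sketch with mpi4py and NumPy, assuming both are installed; the file name is arbitrary:

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank produces one contiguous block of results (stand-in data here).
    block = np.full(1024, rank, dtype=np.float64)

    # All ranks open the same file and write at rank-dependent offsets in a
    # single collective call, letting the MPI-IO layer aggregate the requests.
    fh = MPI.File.Open(comm, "output.dat", MPI.MODE_WRONLY | MPI.MODE_CREATE)
    fh.Write_at_all(rank * block.nbytes, block)
    fh.Close()

Run it with something like "mpiexec -n 4 python write_demo.py" (hypothetical script name); at scale the same pattern usually sits behind a library such as parallel HDF5 rather than being called directly.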


Just to name a few issues :mrgreen:
 