Nvidia shows signs in [2021]

There never were any signs of supply issues on Nvidia's side. Demand, on the other hand, is still insane, and despite them saying that ~80% of shipments in Q2 were LHR cards, I still don't see who else could be creating that demand besides miners.
 
"The combination of Crypto to gaming revenue is difficult to quantify. CMP revenue, which is recognized in OEM was 266 million, lower than our original 400 million estimates on reduced mining profitability. And we expect a minimal contribution from CMP going forward."
 

There is still a pandemic and most people are not ready to do much outside, so they will keep spending their free money on home entertainment.
 
"The combination of Crypto to gaming revenue is difficult to quantify. CMP revenue, which is recognized in OEM was 266 million, lower than our original 400 million estimates on reduced mining profitability. And we expect a minimal contribution from CMP going forward."
Or maybe miners are just still buying GeForce cards, even LHR ones, since they can actually sell them on at some point.
 
Now, the first exascale computer in the United States will likely be a different machine at a different lab - Oak Ridge National Lab in Tennessee - built by Hewlett Packard Enterprise Co (HPE.N) with chips from AMD expected to be delivered later this year.

Nvidia must be salty. The first US exascale supercomputer will be running AMD, and the second Intel.
 
Yes, FP64, as that’s the standard for rating supercomputer performance today. Lower precisions don’t count.

The HPL-AI project built a benchmark that is able to use lower-precision units to compute FP64-precision results. For example, Fugaku is able to achieve 2.0 EFLOPS using HPL-AI. Summit did ~1.15 EFLOPS.
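
For anyone wondering how a benchmark gets FP64-accurate answers out of low-precision units: the usual trick is mixed-precision iterative refinement, i.e. do the expensive O(n³) factorization in low precision, then polish the solution with cheap FP64 residual corrections. A minimal NumPy/SciPy sketch of the idea, with float32 standing in for the FP16/tensor-core formats HPL-AI actually targets (an illustration of the technique, not the real HPL-AI code):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
b = rng.standard_normal(n)

# Expensive O(n^3) LU factorization done entirely in low precision.
lu, piv = lu_factor(A.astype(np.float32))

# Cheap O(n^2) refinement loop: FP64 residual, low-precision correction solve.
x = np.zeros(n)
for it in range(20):
    r = b - A @ x                                    # residual in FP64
    if np.linalg.norm(r) <= 1e-12 * np.linalg.norm(b):
        break
    dx = lu_solve((lu, piv), r.astype(np.float32))   # low-precision solve
    x += dx.astype(np.float64)

print(f"{it} refinement steps, relative residual "
      f"{np.linalg.norm(b - A @ x) / np.linalg.norm(b):.1e}")
```

As I understand it, the run is then credited with the operation count of the FP64 solve the refined result is equivalent to, which is why the "2 EFLOPS" figure is an effective number.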
 

It’s not being used yet in any official ranking, is it? The TOP500 is still using the classic HPL.
 
https://top500.org/lists/hpcg/2021/06/
HPL-AI Results

The HPL-AI benchmark seeks to highlight the convergence of HPC and artificial intelligence (AI) workloads based on machine learning and deep learning by solving a system of linear equations using novel, mixed-precision algorithms that exploit modern hardware.
It's already moving. You can't fight history...

Edit: Ninja'd by Troyan :runaway:
 
Sure, but the exaflop barrier has already been crossed for low-precision AI. So really none of this has anything to do with the first 64-bit exaflop computer.

Well, I guess it depends on how you consider the definition of the barrier. HPL-AI is able to compute FP64-precision results from lower-precision units; the "2 EFLOPS" is "effective" FLOPS, not a count of how many operations the low-precision units performed. So if we only look at the results, I'd say that counts as effectively an exaflops computer (for example, if a computer had a lot of integer units and were able to compute FP64 results quickly, would it really matter that it had no FP units at all?)

If we want to use the narrower definition of running LINPACK, then I guess we'll have to wait a few months for the first exaflops computer :)
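
Just to put numbers on what "effective" means: HPL-style runs credit the machine with the nominal FP64 operation count of the dense solve, roughly 2n³/3 (ignoring lower-order terms), divided by wall-clock time, whatever precision the hardware actually computed in. Toy arithmetic with made-up values (these are not Fugaku's actual run parameters):

```python
# Made-up illustration values, not any machine's real run parameters.
n = 10_000_000      # hypothetical problem size (matrix dimension)
time_s = 350.0      # hypothetical wall-clock time in seconds

# Nominal FP64 operation count of dense LU, lower-order terms ignored.
flops = (2.0 / 3.0) * n**3 / time_s
print(f"effective performance: {flops / 1e18:.2f} EFLOPS")  # ~1.90 EFLOPS
```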
 
It would make sense to use the generally accepted definition.

If the result has the same range and precision, then yeah, it should count. Is this truly apples to apples, though? If it were, then there should be a version of the original HPL benchmark that is accelerated using the same method.

Yup, it’s easier when the goal posts don’t move.
 

There were (are?) also debates on whether LINPACK is still a useful benchmark for HPC workloads. Generally it does not stress the interconnect enough. Some supercomputers with weaker interconnects perform pretty well on LINPACK but badly on HPCG (e.g. Sunway TaihuLight). The reason why the TOP500 keeps using LINPACK is mostly historical and for ease of comparison.
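
For context, HPCG is built around a preconditioned conjugate gradient solve on a sparse system, so each iteration is dominated by sparse matrix-vector products and global dot-product reductions, which is exactly what stresses memory bandwidth and the interconnect rather than dense FP64 throughput. A minimal unpreconditioned CG sketch in NumPy/SciPy (a toy 1D stencil; the real HPCG uses a 3D 27-point stencil with a multigrid preconditioner):

```python
import numpy as np
import scipy.sparse as sp

n = 100_000
# Diagonally dominant tridiagonal system as a tiny 1D stand-in
# for HPCG's 3D stencil problem.
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x = np.zeros(n)
r = b - A @ x
p = r.copy()
rs = r @ r
for it in range(500):
    Ap = A @ p                 # sparse mat-vec: neighbour exchange on a cluster
    alpha = rs / (p @ Ap)      # dot product: a global all-reduce on a cluster
    x += alpha * p
    r -= alpha * Ap
    rs_new = r @ r             # second global reduction per iteration
    if np.sqrt(rs_new) < 1e-8 * np.sqrt(b @ b):
        break
    p = r + (rs_new / rs) * p
    rs = rs_new

print(f"converged in {it + 1} iterations, residual {np.sqrt(rs_new):.2e}")
```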

HPL-AI was introduced rather recently (only in 2019, IIRC), so many HPC systems do not have HPL-AI results. And since the TOP500 does not officially include HPL-AI, newer HPC installations do not run it (a very limited list of results is here: https://hpl-ai.org/doc/results ).

Personally I think HPCG is a more useful benchmark. Considering the advances in AI (and the fact that more and more HPC systems are dedicated to AI workloads), it's probably better to have some AI-oriented benchmarks too.
 