There never were any signs of supply issues on Nvidia's side. Demand, on the other hand, is still insane, and despite them saying that ~80% of shipments in Q2 were LHR cards, I still don't see who else could be driving it besides miners.
Or maybe miners are just still buying GeForces, even LHR ones, since they can actually sell these at some point.

"The combination of Crypto to gaming revenue is difficult to quantify. CMP revenue, which is recognized in OEM was 266 million, lower than our original 400 million estimates on reduced mining profitability. And we expect a minimal contribution from CMP going forward."
Also, Ethereum isn't the only coin people are mining; there are apparently several "bubbling under" coins which have started growing, and they're not throttled by LHR.
https://www.reuters.com/technology/...its-delayed-intel-machine-sources-2021-08-24/

The U.S. Department of Energy is nearing a deal to purchase a supercomputer made with chips from Nvidia Corp (NVDA.O) and Advanced Micro Devices Inc (AMD.O) as a key lab waits for a larger supercomputer from Intel Corp (INTC.O) that has been delayed for months, two people familiar with the matter told Reuters.
Looks like even the government is tired of waiting for Intel...
https://www.reuters.com/technology/...its-delayed-intel-machine-sources-2021-08-24/
Now, the first exascale computer in the United States will likely be a different machine at a different lab - Oak Ridge National Lab in Tennessee - built by Hewlett Packard Enterprise Co (HPE.N) with chips from AMD expected to be delivered later this year.
FP64. The first exaflop calculation was done on Nvidia.
Yes, FP64 as that’s the standard for rating supercomputer performance today. Lower precisions don’t count.
The HPL-AI project built a benchmark which is able to utilize lower precision units to calculate FP64 precision results. For example, Fugaku is able to achieve 2.0 EFLOPS using HPL-AI. Summit did ~1.15 EFLOPS.
https://top500.org/lists/hpcg/2021/06/

It’s not being used yet in any official ranking, is it? Top500 is still using the classic HPL.
HPL-AI Results
The HPL-AI benchmark seeks to highlight the convergence of HPC and artificial intelligence (AI) workloads based on machine learning and deep learning by solving a system of linear equations using novel, mixed-precision algorithms that exploit modern hardware.
It's already moving. You can't fight history...
Sure, but the exaflop barrier has already been crossed for low-precision AI. So really none of this has anything to do with the first 64-bit exaflop computer.
Well, I guess it depends on how you define the barrier.
HPL-AI calculates FP64-precision results from lower-precision units, so the "2 EFLOPS" figure is "effective" FLOPS, not a count of the operations the low-precision units actually performed. If we only look at the results, I'd say that counts as an effectively-exaflops computer. (For example, if a computer had a lot of integer units and could calculate FP64 results quickly, would it really matter that it had no FP units at all?)
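To make the "FP64 results from low-precision units" point concrete, here's a minimal NumPy sketch of the mixed-precision iterative refinement idea that HPL-AI is built around: factor/solve the system in low precision, then refine the residual in FP64 until the solution reaches FP64-level accuracy. The matrix size and conditioning here are my own illustration, not the actual benchmark code, and a real implementation would factor once and reuse the factors.

```python
# Sketch of mixed-precision iterative refinement (assumptions: NumPy only,
# dense well-conditioned system; not the actual HPL-AI implementation).
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test system
b = rng.standard_normal(n)

A32 = A.astype(np.float32)  # the "low-precision unit" does the heavy lifting
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

for _ in range(10):
    r = b - A @ x  # residual computed in full FP64
    if np.linalg.norm(r) / np.linalg.norm(b) < 1e-14:
        break
    # correction solved in low precision, accumulated in FP64
    dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x += dx

print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))  # down at FP64 level
```

The point is that almost all the arithmetic happens in float32, yet the final answer is as accurate as a pure-FP64 solve, which is why HPL-AI reports "effective" FP64 FLOPS.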
If we want to use the narrower definition of running LINPACK, then I guess we'll have to wait a few months for the first exaflops computer.
It would make sense to use the generally accepted definition.
If the result has the same range and precision, then yeah, it should count. But is this truly apples to apples? If it were, there should be a version of the original HPL benchmark accelerated using the same method.
Yup, it’s easier when the goal posts don’t move.