We don't know about gaming yet, but am I the only one seeing MA-SSI-VE architecture changes that provide a HU-GE performance jump, and therefore an efficiency jump, in the intended workloads?
A 3x to 7x real-world performance gain on BERT training/inference is above expectations.
Isn't HPC an intended workload too? I mean, of course they focused more on the AI stuff this time around, but it's still their big HPC chip for everything else as well.
There the improvements aren't that impressive, considering the 2.5x transistor budget and higher power consumption.
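Just to make that efficiency point concrete, here's a back-of-the-envelope sketch in Python. The 3x speedup is purely hypothetical and the 400 W vs. 700 W SXM TDPs are my assumption of the commonly quoted figures; only the ~2.5x transistor budget comes from above.

```python
# Back-of-the-envelope normalization of a raw speedup by transistor count
# and board power. All inputs here are illustrative placeholders, not
# official benchmark results.
def normalized_gains(speedup: float, transistor_ratio: float, power_ratio: float) -> dict:
    """Return speedup per transistor and per watt relative to the older part."""
    return {
        "per_transistor": speedup / transistor_ratio,
        "per_watt": speedup / power_ratio,
    }

# Hypothetical 3x HPC speedup, the ~2.5x transistor budget mentioned above,
# and an assumed 400 W -> 700 W SXM TDP increase.
print(normalized_gains(speedup=3.0, transistor_ratio=2.5, power_ratio=700 / 400))
# -> {'per_transistor': 1.2, 'per_watt': ~1.71}
```

A 3x headline gain looks a lot less dramatic once it's only ~1.2x per transistor.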
Also, the AI comparisons don't seem to be apples to apples: they're using the lower-precision TF32 mode with the A100 in the BERT-Large FP32 training comparison.
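If anyone wants to see how much that precision setting alone moves an "FP32" number on their own hardware, here's a minimal PyTorch sketch (assuming an Ampere-or-newer GPU and a CUDA build of PyTorch) that times the same FP32 matmul with the TF32 tensor-core path disabled and then enabled.

```python
# Minimal TF32 vs. true-FP32 matmul timing sketch (PyTorch, CUDA GPU assumed).
import time
import torch

def time_matmul(n: int = 4096, iters: int = 20) -> float:
    """Average wall-clock time of an n x n FP32 matmul on the current GPU."""
    a = torch.randn(n, n, device="cuda")
    b = torch.randn(n, n, device="cuda")
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    return (time.time() - start) / iters

# True IEEE FP32: keep matmuls off the TF32 tensor-core path.
torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False
fp32_t = time_matmul()

# TF32: still FP32 storage, but matmuls run on tensor cores with a reduced mantissa.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
tf32_t = time_matmul()

print(f"true FP32: {fp32_t * 1e3:.1f} ms/matmul   TF32: {tf32_t * 1e3:.1f} ms/matmul")
```

Both runs use FP32 tensors, which is why a slide can label both sides "FP32 training" even though only one of them is running at the classic FP32 rate.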