And if you look at the progression from then through Turing, you may see why the top end can be 3-slot now, without any mystical not-so-good reasons.
0, [no show], 0, 1A, 1A, 1A, 1A, 1A, 1A, 2B, 2B, 2A, 2B, 2B, 2B, 2B, 2B, 2B, 2B, 2B, 2B, 2A, 3A? Yeah, I'm sure seeing a progression, but no pattern.
But Nvidia changed the cooler from the radial design to the dual-axial design. That was a "radical" decision; otherwise the Titan RTX would be as loud as a jet.
Did you check Quadro RTX 8000?
I'm having a very hard time believing that Nvidia totally fucked their power draw advantage over AMD while doing a node shrink. The 5700 XT drew almost as much power as a 2080 FE, and Turing was 12nm while the 5700 XT was 7nm. I know people are saying Samsung 8nm isn't great, but it seems very weird to me that a 2080 FE would be 210W max and suddenly the 3080 is some kind of thermal monstrosity, even with a node shrink that should give some advantages.
Possible explanation: they had to up the clocks considerably at the last minute, going way outside of the optimal curve. Because RDNA2.
It's Moore's Law is Dead nonsense by origin. Just think how much bandwidth inside the chip it would take to pass everything through the tensor cores in addition to all the other traffic, and from what I've heard (I don't really have a clue, but I think the guy who said it does), tensor cores aren't even really suited for compression/decompression.
And this surely isn't just a fancy name for Ampere's sparsity feature?