Nvidia Ampere Discussion [2020-05-14]

Let's wait for the real boost clocks before crediting any actual IPC increase. That would certainly explain the higher power consumption vs. the RTX 2070. I just don't think Ampere's CUDA core is as huge a jump over Turing as Nvidia wants us to believe; there are just more of them. Nothing wrong with that per se.

Yeah, clearly twice the number of FP units per SM wouldn't explain the higher power consumption at all; only clocks do. I'm seeing the light of truth shining on me now.
 
It is interesting that ASUS mentions half the number of CUDA cores in their press release compared with NVIDIA.

This: NEW SM – 2X FP32 THROUGHPUT

... might just mean that NVIDIA counts that change as a doubling of CUDA cores, never mind the SM count. Int32 replaced and quickly forgotten?
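
For what it's worth, the arithmetic supports that reading: the spec-sheet TFLOPS number is just SMs × FP32 lanes per SM × 2 (FMA) × clock, so doubling the lanes doubles the quotable "CUDA core" count without touching the SM count. A quick Python sketch with illustrative numbers (none of these are confirmed Ampere specs):

```python
# Peak FP32 throughput as usually quoted on spec sheets:
# TFLOPS = SMs * FP32 lanes per SM * 2 ops/clock (FMA) * clock (GHz) / 1000.
# SM count, lane count, and clock below are illustrative assumptions only.

def peak_fp32_tflops(sms: int, fp32_per_sm: int, boost_ghz: float) -> float:
    """Peak FP32 TFLOPS, counting a fused multiply-add as 2 ops."""
    return sms * fp32_per_sm * 2 * boost_ghz / 1000.0

turing_style = peak_fp32_tflops(sms=46, fp32_per_sm=64, boost_ghz=1.7)   # ~10.0
ampere_style = peak_fp32_tflops(sms=46, fp32_per_sm=128, boost_ghz=1.7)  # ~20.0

# Same 46 SMs either way; only the per-SM FP32 lane count changes, so
# one vendor quoting 2944 "cores" and another 5888 describes the same chip.
print(f"{turing_style:.1f} vs {ampere_style:.1f} TFLOPS")
```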
 
The RTX 3070, with ~10 TFLOPS, delivers 36%+ more performance than the RTX 2080 FE (10.5 TFLOPS) and could be on par with the 2080 Ti FE at around 14.2 TFLOPS.
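
Quick sanity check on those numbers in Python; every input below is a speculative figure from this post, not a measurement:

```python
# Sanity check of the claim above; all inputs are the speculative
# figures from the post, not measured numbers.

tflops_3070 = 10.0     # rumored RTX 3070
tflops_2080 = 10.5     # RTX 2080 FE
tflops_2080ti = 14.2   # RTX 2080 Ti FE
claimed_gain_vs_2080 = 1.36

# Implied performance per TFLOP relative to the 2080 FE:
per_tflop = claimed_gain_vs_2080 / (tflops_3070 / tflops_2080)
print(f"~{per_tflop:.2f}x perf per TFLOP vs Turing")          # ~1.43x

# Matching the 2080 Ti from 10 TFLOPS needs 14.2 / 10.0 = ~1.42x,
# so the "36%+ over 2080" and "on par with 2080 Ti" claims agree.
print(f"needed for 2080 Ti parity: {tflops_2080ti / tflops_3070:.2f}x")
```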

Int32 was never a thing in the spec sheets.
 
Well, that was a fun couple of hours, and it seems to me that a ~70% performance gain for the top two cards is better than anyone was hoping for.
We don't know the real performance or power consumption characteristics, but yeah. The 70% figure is a bit more than the "nV standard 60%" for "node jump" architectures. Maybe that's why the TDP is seemingly higher.
 
Not a match.
2080 Ti: 250 watts.
Same performance 3070: 220 watts.
That's less than the theoretical per-node wattage reduction would give; it seems efficiency got worse.
The price reduction is another hint that AMD could have a winner with RDNA2.
The 2080 Ti was 280 W.
The price reduction is for the 3070 only, which is like a 2080 Ti plus 15% or so.
A winner over that? Sure.
A winner over a 40 TF 3090, especially in cases with RT (and DLSS) where the performance actually counts? I kinda doubt that.
The real question is whether they will be able to beat the 3080.
 
The 3070 is faster, not the same performance. But yeah, the 1.9× perf/watt claim is stretching it.
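
Back-of-envelope on that in Python, using the +15%-over-2080 Ti estimate from above (an assumption) and both of the 2080 Ti board-power figures people are quoting:

```python
# Implied perf/watt gain of a 220 W 3070 over a 2080 Ti, trying both
# board-power figures quoted in the thread. The +15% performance delta
# is the rough estimate from above, not a measured result.

perf_ratio = 1.15   # "like a 2080 Ti plus 15% or so"
watts_3070 = 220.0

for watts_2080ti in (250.0, 280.0):
    gain = perf_ratio * watts_2080ti / watts_3070
    print(f"2080 Ti at {watts_2080ti:.0f} W -> {gain:.2f}x perf/watt")

# ~1.31x or ~1.46x: nowhere near 1.9x. NVIDIA's 1.9x figure was taken
# at iso-performance, low on the voltage/frequency curve, which is a
# far more flattering point than comparing cards at their stock TDPs.
```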
 
I don't see AMD even beating the 3070. Going from the 5700 XT to 2080 Ti+ performance in one generation jump, on the same process? And it will be their first RT GPU...
You don't think AMD can make a GPU 50% faster than the 5700 XT?

Do we know when the Ampere review embargo lifts?
 
https://www.nvidia.com/es-es/geforce/graphics-cards/rtx-2080-ti/
The product reference is 250 watts. Even at 280 watts, it's less than the theoretical node wattage reduction would predict.
The 2080 was 799 dollars; the 3080 is 699.
Ampere is the new Fermi, that is, an architectural regression in perf/watt, clocked too hot to compensate.
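
A sketch of that node argument in Python. The iso-performance power-cut percentages are generic, assumed foundry-style figures; the real Samsung 8N vs. TSMC 12nm numbers aren't public, so treat this purely as an illustration:

```python
# Sketch of the node-scaling argument. The power-cut percentages are
# assumed, generic foundry-style figures, not published 8N-vs-12nm data.

def iso_perf_power(old_watts: float, power_cut: float) -> float:
    """Power the same performance 'should' cost after the shrink."""
    return old_watts * (1.0 - power_cut)

watts_2080ti = 250.0
# 3070 normalized to 2080 Ti performance (220 W, ~15% faster), assuming
# linear power/perf scaling; a real V/F curve would give a lower number.
implied_3070_iso = 220.0 / 1.15   # ~191 W

for cut in (0.30, 0.40):
    print(f"{cut:.0%} cut -> ~{iso_perf_power(watts_2080ti, cut):.0f} W expected "
          f"(vs ~{implied_3070_iso:.0f} W implied)")

# ~175 W or ~150 W expected vs ~191 W implied: that gap is the
# poster's "new Fermi" complaint, with all the caveats above.
```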
 