Speculation: GPU Performance Comparisons of 2020 *Spawn*

Status
Not open for further replies.
NVIDIA GeForce RTX 3080 ‘Ampere’ Graphics Card Maxes Out at 2.1 GHz GPU Clock, Features 19 Gbps GDDR6X Memory
August 16, 2020
Additional specifications of NVIDIA's GeForce RTX 3080 Ampere gaming graphics card have leaked out at UserBenchmark and were spotted by Rogame.
...
Moving over to the specifications, the NVIDIA GeForce RTX 3080 was spotted with 10 GB of VRAM which was running 19 Gbps GDDR6X memory dies (4750 MHz QDR effective) across a 320-bit bus interface. This should deliver a memory bandwidth of 760 GB/s. This is almost a 53% jump in the memory bandwidth over the GeForce RTX 2080 SUPER which is very impressive. As for the GPU clocks, the card has a BIOS limit set to 2100 MHz so we should be looking at clock speeds similar to the Turing GPUs which also peak around 2.1 GHz.
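The bandwidth figures in the leak are easy to sanity-check yourself. A quick sketch (the 15.5 Gbps / 256-bit baseline is the RTX 2080 SUPER's known GDDR6 configuration; the RTX 3080 numbers are the leaked ones from above):

```python
# Sanity-check the leaked memory figures: peak bandwidth is
# per-pin data rate (Gbps) x bus width (bits) / 8 bits per byte.

def bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

rtx_3080 = bandwidth_gbs(19.0, 320)    # leaked GDDR6X config
rtx_2080s = bandwidth_gbs(15.5, 256)   # RTX 2080 SUPER (GDDR6)

print(rtx_3080)                           # 760.0 GB/s
print(rtx_2080s)                          # 496.0 GB/s
print(f"{rtx_3080 / rtx_2080s - 1:.1%}")  # 53.2% uplift, matching the article
```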
...
Moving on to the GeForce RTX 3080, the rumor reports that the card will be featuring the GA102-200-KD-A1 SKU. This cut down SKU will feature the same 4352 CUDA cores as the RTX 2080 Ti that will be arranged in a total of 68 SMs. The card is reportedly going to feature up to 20 GB of memory that is also going to be GDDR6X. Assuming the memory is running at 19 Gbps across a 320-bit bus interface, we can expect a bandwidth of up to 760 GB/s.

...
In addition to the GeForce RTX 3080 graphics card specifications leak, Chiphell has posted what seems to be an alleged performance chart of the GeForce RTX 30 series lineup, which includes the GeForce RTX 3090, GeForce RTX 3080, GeForce RTX 3070 Ti, GeForce RTX 3070 & the GeForce RTX 3060. The chart claims to be an average performance measurement of the graphics cards purely in gaming benchmarks at various resolutions, from 1080p all the way up to 4K. It also showcases the performance-per-watt gains for each respective generation. Again, this chart should be taken with a grain of salt as the information on performance is yet to be verified.


[Image: NVIDIA GeForce RTX 3090 / 3080 / 3070 Ti / 3070 / 3060 gaming performance benchmark chart]
https://wccftech.com/nvidia-geforce-rtx-3080-graphics-card-specs-leak-2100-mhz-gpu-19-gbps-gddr6x-memory/
 
I'm not buying that performance chart. There's been no mention to date, that I'm aware of, of standard and Ti versions at each performance tier.
Agreed. Even wccftech mentioned to take the performance chart "with a grain of salt".

The UserBenchmark score might have some validity, even though it only reflects DX9 performance. The score has since been deleted from the UserBenchmark database.
 
So, if true, they're adamant about 390-400 Watts for Titan Ampere?

edit: Which in turn means - following the linked graph - that Titan Ampere is roughly 50% faster than 2080 Ti but will use >55% more power (390+ vs. 250 watts) despite being on a more advanced node and using more power efficient memory? I find that hard to believe.
 

Me too, especially considering A100 on PCIe has a TDP of 250W. Maybe the consumer GPUs really are on some process other than TSMC 7nm, and whatever it is provides the density increases over 16/12nm required to increase perf by 50% without ridiculous die sizes, but not much in terms of perf/power benefits? Seems like a potentially big misstep on their part, if so.
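To put numbers on the skepticism above: taking the chart's ~50% performance uplift and the rumored 390 W figure at face value (both unverified), perf/watt versus the 2080 Ti barely moves.

```python
# Rough perf/watt comparison using the rumored figures quoted above
# (~1.5x the RTX 2080 Ti's performance at 390 W vs. 250 W; all unverified).

perf_uplift = 1.50      # alleged Titan Ampere vs. RTX 2080 Ti performance
power_2080ti = 250.0    # W, RTX 2080 Ti TDP
power_titan = 390.0     # W, rumored Titan Ampere TDP

power_increase = power_titan / power_2080ti - 1
perf_per_watt_ratio = perf_uplift / (power_titan / power_2080ti)

print(f"{power_increase:.0%}")        # 56% more power
print(f"{perf_per_watt_ratio:.2f}x")  # ~0.96x: perf/watt essentially flat
```

Essentially flat perf/watt on a newer node is exactly what makes the chart look dubious.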
 
Bold.

Lisa likes winning too much, sorry.
Either way the perf wars thread is two blocks down.

Let me move this to proper thread.
Series X pretty much revealed what RDNA2 looks like; I don't see major changes besides the addition of RT.
Expecting the 3080 to be around 15-20% better than the 2080 Ti, which is where I expect Big Navi to land as well.
 
Maybe you should explain how AMD can improve efficiency by 2x with Navi #2 instead of writing those one-liners...
Describing their very arcane VLSI flow is hard and dumb (and you'll get your ISSCC presentation next year anyway).
Also, I like cheesy one-liners.
Can you come up with any better reason than Lisa's "will to win"?
Engineering.
A whole load of engineering, including an inhumane boatload of physical design (physdes) work.
 
So, AMD can change their transistor layout and improve efficiency by 2x without increasing the transistor count? And how do they get rid of the power consumption from twice the bandwidth? HBM?
 