Nvidia Ampere Discussion [2020-05-14]

Please note: anyone going for a 3080 would be best off waiting for a 20GB version if there is one. The new consoles already have 10GB of (effective) GPU RAM; in a few years they'll be using it more efficiently than PCs do, and your shiny new card could end up seriously slowed down at max settings.

Doubt it. That 10GB of GDDR6X is already heaps faster to begin with, and aside from that, the 10GB of VRAM on a PC can be used efficiently too.
 
I have many in-house ML algorithms that would benefit greatly from a higher FP32 CUDA core count, but not so much from more tensor cores, so if Nvidia's new FP32 CUDA "cores" really are just as capable as the old ones, then I am impressed with the product (I actually felt like skipping this generation completely after I saw the disappointing Tesla A100 specs; what a waste of 54 billion transistors and TSMC's expensive 7nm process).

Anyway, I am waiting for more benchmarks and hoping for a dual-slot Titan solution for GA102.
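
For what it's worth, the kind of microbenchmark I'd use to sanity-check the "new FP32 cores are as capable as the old ones" question is a long dependent FMA chain, which isolates FP32 ALU throughput from memory bandwidth. This is just a rough sketch; the kernel name, grid size, and iteration count are all arbitrary:

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Each thread runs a long dependent chain of FMAs, so with enough
// threads in flight the kernel is bound by FP32 ALU throughput,
// not memory bandwidth.
__global__ void fma_burn(float *out, int iters)
{
    float a = 1.000001f, b = 0.999999f, c = threadIdx.x * 1e-7f;
    for (int i = 0; i < iters; ++i) {
        c = fmaf(a, c, b);   // each FMA counts as 2 FLOPs
        a = fmaf(b, a, c);
        b = fmaf(c, b, a);
    }
    // Store the result so the compiler can't eliminate the loop.
    out[blockIdx.x * blockDim.x + threadIdx.x] = a + b + c;
}

int main()
{
    const int blocks = 2048, threads = 256, iters = 100000;
    float *out;
    cudaMalloc((void **)&out, (size_t)blocks * threads * sizeof(float));

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);

    fma_burn<<<blocks, threads>>>(out, iters);   // warm-up run
    cudaEventRecord(t0);
    fma_burn<<<blocks, threads>>>(out, iters);   // timed run
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, t0, t1);
    // 3 FMAs per loop iteration, 2 FLOPs per FMA.
    double flops = 3.0 * 2.0 * (double)iters * blocks * threads;
    printf("~%.1f TFLOPS FP32\n", flops / (ms * 1e-3) / 1e12);

    cudaFree(out);
    return 0;
}

The dependent chain means per-thread ILP is low, so you need a big grid for latency hiding; with half a million threads in flight that isn't a problem. If the doubled-FP32 SMs behave like the old cores, the measured number should land reasonably near the advertised figure.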
 
Doubt it. That 10GB of GDDR6X is already heaps faster to begin with, and aside from that, the 10GB of VRAM on a PC can be used efficiently too.


Not that I should even need to appeal to authority. The average user can see how RDR2 can use more RAM just on the GPU than the consoles even have available altogether (the PS4 only has a bit over 4GB guaranteed), and part of that RAM is taken up by non-GPU assets. The new consoles have 13.5GB of RAM available; they can squeeze the non-high-speed RAM usage into the extra 3.5GB pool that the Series X has without touching the 10GB pool.

Speaking of which, while 10GB might be the minimum requirement in the near future, that means 8GB could easily fall below it. Definitely don't buy a 3070 till the 16GB version comes out.
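
If you want to see the real headroom on your own card, the CUDA runtime reports it directly via cudaMemGetInfo; a trivial check (just a sketch) looks like this:

Code:
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    size_t free_b = 0, total_b = 0;
    // Reports free/total memory on the current device; run it while a
    // game or workload is loaded to see actual headroom rather than
    // what an overlay claims is "used".
    cudaMemGetInfo(&free_b, &total_b);
    printf("VRAM: %.2f GiB free of %.2f GiB total\n",
           free_b / 1073741824.0, total_b / 1073741824.0);
    return 0;
}

Worth remembering that games often allocate far more VRAM than they actively touch per frame, so raw "usage" numbers tend to overstate the real requirement.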
 
PCI Express 4 looks like it will be relevant for those buying these new cards:


EDIT: Oh and I forgot to say, some of what was shown in the Digital Foundry "3080 performance preview" may have been slower specifically because it's a PCI Express 3 system...
 

Not that I should even need to appeal to authority. The average user can see how RDR2 can use more RAM just on the GPU than the consoles even have available altogether (the PS4 only has a bit over 4GB guaranteed), and part of that RAM is taken up by non-GPU assets. The new consoles have 13.5GB of RAM available; they can squeeze the non-high-speed RAM usage into the extra 3.5GB pool that the Series X has without touching the 10GB pool.

To get RDR2 anywhere near 8GB you need to be running well above console settings.

I think you'd struggle to find a game on the PC today that can't match console settings with only 4GB VRAM, let alone 8GB.
 
PCI Express 4 looks like it will be relevant for those buying these new cards:


EDIT: Oh and I forgot to say, some of what was shown in the Digital Foundry "3080 performance preview" may have been slower specifically because it's a PCI Express 3 system...

Meh, there's no hard evidence there to attribute performance differences to PCIe bandwidth. We're talking about 2-3% here, which is well within the margin of error for PC benchmarking.
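
If anyone wants to check their own board rather than eyeball 2-3% deltas, a rough host-to-device copy benchmark makes the PCIe generation visible; with pinned memory, 3.0 x16 tops out around ~13 GB/s and 4.0 x16 around ~26 GB/s. Sketch only; the buffer size and repeat count below are arbitrary:

Code:
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    const size_t bytes = 256ull << 20;   // 256 MiB per copy
    const int reps = 20;

    float *host, *dev;
    cudaMallocHost((void **)&host, bytes);  // pinned, needed for full PCIe speed
    cudaMalloc((void **)&dev, bytes);

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);

    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);  // warm-up

    cudaEventRecord(t0);
    for (int i = 0; i < reps; ++i)
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, t0, t1);
    printf("Host->Device: %.1f GB/s\n",
           (double)bytes * reps / (ms * 1e-3) / 1e9);

    cudaFree(dev);
    cudaFreeHost(host);
    return 0;
}

Of course, raw copy bandwidth only matters to a game if the engine is actually streaming that much per frame, which is exactly why a 2-3% delta proves little.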
 
Take a step back from the $700 price tag and the 3080 is as if Nvidia cut one more memory channel off the 1080 Ti, clocked it to the max, and then claimed their biggest generational leap ever, even though Pascal's leap was easily more than 2x the 980. The perf/W improvement, the real one rather than the joke that Jensen presented, is also quite lackluster.


The 3090 should be about 15% faster than the 3080, and looks like a poor substitute for the former Ti designations, which were far better, mostly being cut-downs of 50% larger chips.

Objectively, this is a far worse improvement than the last node change, the saving grace being that Nvidia didn't go all out with Pascal, with only ~450mm2 for the biggest chip.
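
For what it's worth, the back-of-the-envelope math behind that ~15%: going by the published specs, the 3090 has 10496 / 8704 ≈ 1.21x the FP32 units and 936 / 760 ≈ 1.23x the memory bandwidth of the 3080, but only 350 / 320 ≈ 1.09x the power budget, so at similar clocks it ends up power-limited well short of the raw ~20% the unit counts would suggest.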
 
phama, do you know how the iChill X3 differs from the iChill X4 (3070)? There's a £20 difference; does one support PCIe 4 and the other PCIe 3?
PS: at Overclockers most of the 2070 Supers are the same price as, or more expensive than, the 3070.
 
Take a step back from the $700 price tag and the 3080 is as if Nvidia cut one more memory channel off the 1080 Ti, clocked it to the max, and then claimed their biggest generational leap ever, even though Pascal's leap was easily more than 2x the 980. The perf/W improvement, the real one rather than the joke that Jensen presented, is also quite lackluster.

The 3090 should be about 15% faster than the 3080, and looks like a poor substitute for the former Ti designations, which were far better, mostly being cut-downs of 50% larger chips.

Objectively, this is a far worse improvement than the last node change, the saving grace being that Nvidia didn't go all out with Pascal, with only ~450mm2 for the biggest chip.

I think this is where the cheaper Samsung 8nm process comes into play. I believe that if they had gone with TSMC 7nm, the perf/W would be better, but at $799 instead of $699. Which would you choose?
 
phama, do you know how the iChill X3 differs from the iChill X4 (3070)? There's a £20 difference; does one support PCIe 4 and the other PCIe 3?
Not sure, but it could be anything from boost clocks to slightly different capacitors on the board. I imagine more information will be forthcoming around Sept. 17. They should all be PCIe 4.0.

Edit: They just got a lot of new cards.
 
I think we should wait for reviews before assuming the typical clocks on the new cards. Nvidia tends to sandbag a bit when claiming those boost clocks.

Take a step back from the $700 price tag and the 3080 is as if Nvidia cut one more memory channel off the 1080 Ti, clocked it to the max, and then claimed their biggest generational leap ever, even though Pascal's leap was easily more than 2x the 980. The perf/W improvement, the real one rather than the joke that Jensen presented, is also quite lackluster.

The 3090 should be about 15% faster than the 3080, and looks like a poor substitute for the former Ti designations, which were far better, mostly being cut-downs of 50% larger chips.

Objectively, this is a far worse improvement than the last node change, the saving grace being that Nvidia didn't go all out with Pascal, with only ~450mm2 for the biggest chip.

There's a 34% efficiency uplift from the 2080 to the 3080. Part of it comes from the TSMC 12N to Samsung 8N transition, and part comes from adopting GDDR6X (i.e. they don't need a 384-bit or wider bus for higher bandwidth).
It doesn't look like much of that 34% is coming from architectural improvements.
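
For reference, that kind of figure is just the ratio (perf_new / perf_old) / (power_new / power_old). To illustrate with a made-up performance number: if the 3080 were 90% faster than the 2080 at 4K while drawing 320W against the 2080 FE's 225W, that would be 1.90 / 1.42 ≈ 1.34, i.e. a 34% perf/W uplift.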
 