Nvidia Ampere Discussion [2020-05-14]

When you say this, are you referring to the "L2 Throughput" number that appears in the graphs, or some other number?
I think he means that L2 throughput is considerably higher than VRAM throughput in some of the scenarios, which likely implies a higher L2 hit rate.
 
When you say this, are you referring to the "L2 Throughput" number that appears in the graphs, or some other number?

It’s a different number.

The throughput gives you a sense of how much traffic is going through L2, independent of hit rate. Low throughput could be due either to high hit rates in L1 (so fewer L2 queries) or to stalls waiting on loads from VRAM after L2 misses. High throughput corresponds to high L2 hit rates, of course.

[Attached screenshots: low-L2-usage.png, l2-hit-rate.png]
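To make the inference concrete, here's a back-of-the-envelope sketch of how you could relate the two throughput numbers to an approximate hit rate (the figures in it are made-up placeholders, not profiler readings):

```python
# Rough illustration only: assumes L2 traffic that misses is refetched from
# VRAM, so the DRAM/L2 throughput ratio hints at the miss rate.

def approx_l2_hit_rate(l2_gbps: float, dram_gbps: float) -> float:
    """Approximate L2 hit rate as the fraction of L2 traffic not served from VRAM."""
    if l2_gbps <= 0:
        raise ValueError("L2 throughput must be positive")
    return max(0.0, 1.0 - dram_gbps / l2_gbps)

# Hypothetical numbers: 1200 GB/s through L2 but only 400 GB/s from VRAM
# would suggest roughly two thirds of L2 requests hit in the cache.
print(f"{approx_l2_hit_rate(1200.0, 400.0):.0%}")  # -> 67%
```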
 
The overlay he's using now shows "dedicated VRAM":

In my book, that's the same distinction Task Manager makes: dedicated vs. shared (i.e. system memory). So it's still the memory allocated, not the memory absolutely needed.
And this is not gonna change, since games and/or drivers allocate memory very liberally in order to potentially help performance, for example by having the same data in different places at once. This is nice to have, but becomes detrimental once you run out of abundant free space, since the gains do not outweigh the cliff you fall off going through PCIe.
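For what it's worth, here's a minimal sketch of reading that kind of number out yourself, assuming the NVML Python bindings (pynvml); like the overlay and Task Manager, it reports how much dedicated VRAM is allocated, not how much the game actually needs:

```python
# Minimal sketch using the NVML bindings (pip install pynvml).
# nvmlDeviceGetMemoryInfo reports dedicated VRAM allocation, not the
# working set a game actually needs to avoid spilling over PCIe.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)    # dedicated VRAM only, no shared/system memory
print(f"allocated: {mem.used / 2**30:.1f} GiB of {mem.total / 2**30:.1f} GiB")
pynvml.nvmlShutdown()
```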
 
In my book, that's the same distinction Task Manager makes: dedicated vs. shared (i.e. system memory). So it's still the memory allocated, not the memory absolutely needed.
And this is not gonna change, since games and/or drivers allocate memory very liberally in order to potentially help performance, for example by having the same data in different places at once. This is nice to have, but becomes detrimental once you run out of abundant free space, since the gains do not outweigh the cliff you fall off going through PCIe.
Yes, I hope 0.1% lows will show us the truth here...

3080 Ti with 20 GB versus 3090 with 24 GB and 3080 with 10 GB will be interesting...
 
Rumor: NVIDIA GeForce RTX 3080 Ti with 10496 Shader cores?
November 4, 2020
At least, according to Kopite7kimi. The GPU used is the GA102-250-KD-A1, and it would get 10,496 shader cores joined with GDDR6X memory on a 384-bit wide bus. The RTX 3080 has 8,704 cores and a 320-bit memory bus; the RTX 3090 has 10,496 cores and a 384-bit bus width. With the same bus width as the RTX 3090, which has 24 GB of GDDR6X, that would mean a SKU with 12 GB of graphics memory.

https://www.guru3d.com/news-story/r...80-ti-with-12-gb-gddr6x-in-the-pipleline.html
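The 12 GB figure follows directly from the bus width: GDDR6X chips sit on 32-bit channels, so a 384-bit bus means 12 chips, and with the 1 GB (8 Gb) parts used on the launch Ampere cards that's 12 GB unless the board goes clamshell like the 3090. A quick back-of-the-envelope check:

```python
# Capacity from bus width: one GDDR6X chip per 32-bit channel,
# 1 GiB (8 Gb) per chip as on the launch Ampere cards.

def vram_capacity_gib(bus_width_bits: int, gib_per_chip: int = 1, clamshell: bool = False) -> int:
    chips = bus_width_bits // 32
    if clamshell:                       # chips on both sides of the board
        chips *= 2
    return chips * gib_per_chip

print(vram_capacity_gib(320))                   # RTX 3080: 10 GiB
print(vram_capacity_gib(384))                   # rumored 3080 Ti: 12 GiB
print(vram_capacity_gib(384, clamshell=True))   # RTX 3090: 24 GiB
```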
 
Desperation move that would alienate almost all (2? 3?) initial 3090 buyers.
Life's a bitch for all 2? 3? initial buyers!

Edit: The 3080 Ti was never in doubt, only a matter of time.
 
I agree with you that foolish early adopters of Nvidia cards have had it coming.
And the professional niche seems to narrow down to those who specifically need anywhere from 20 to 24 GByte, which seems like a niche inside a niche.

NVLink is still on the RTX 3090 only. Also, in practice I'd assume there is a time/cost benefit for professional use cases, so the biggest advantage would be that they already have the RTX 3090 and are already making money off it.

But in general I don't really understand the sentiment regarding this. The RTX 3090, purely within Nvidia's product stack, was clearly positioned in a segment that shouldn't even be considered by those looking for "value" in any sense. Personally I'd place myself in the value-for-money category, and hence purchasing something in that segment wouldn't even cross my mind for personal entertainment use.
 
Value clearly wasn’t a priority for anyone buying a 3090. If there is a cheaper 3080 Ti with slightly lower performance I expect most 3090 buyers won’t care.
 
AMD saves the day IMO. They're finally competing again, at least in normal performance. NV seems to be panicking a bit. Good; this mega company will have to adjust pricing and finally also innovate and improve performance even more.
 
There are things I don't like about Nvidia but I don't think they're very short on innovation or performance.

True, that's not what I mean. I meant they will have to work even harder to attain even more performance for their next GPU than they would without AMD's competition.
I assume both want that performance crown (even Intel), and with the heat of serious competition, prices will be better and performance too.
 
True, that's not what I mean. I meant they will have to work even harder to attain even more performance for their next GPU than they would without AMD's competition.
I assume both want that performance crown (even Intel), and with the heat of serious competition, prices will be better and performance too.
I'm not so sure, as AMD seems to need a node advantage to remain marginally competitive. Would the existing lineup be as competitive if it were on the same node?
 
I'm not so sure, as AMD seems to need a node advantage to remain marginally competitive. Would the existing lineup be as competitive if it were on the same node?

Yes, what I mean is, NV is still the performance king in all regards, especially RT and DLSS, and probably more advanced overall. But AMD is getting much closer than before. The NV team will most likely offer another generational leap with their RTX 4000 lineup.
We can at least say AMD is trying this time around. We have 20+ TFLOPS GPUs now at reasonable prices; I never expected that to happen.
Both NV and AMD are doing great, I think.
 
I'm not so sure, as AMD seems to need a node advantage to remain marginally competitive. Would the existing lineup be as competitive if it were on the same node?

Turing and Ampere don't seem as efficient at gaming as RDNA. And I don't mean in power, but in terms of performance per transistor.
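As a rough illustration of that metric (the transistor counts are the commonly quoted die figures; the relative performance values are placeholders, not benchmark results):

```python
# Illustrative sketch only: GA102 ~28.3B and Navi 21 ~26.8B transistors are the
# commonly quoted die figures; relative_perf is a made-up index, not a benchmark.

def perf_per_billion_transistors(relative_perf: float, transistors_b: float) -> float:
    return relative_perf / transistors_b

# Hypothetical: if both dies landed at the same performance index,
# the smaller Navi 21 die would come out ahead on this metric.
print(f"GA102:   {perf_per_billion_transistors(1.00, 28.3):.4f}")
print(f"Navi 21: {perf_per_billion_transistors(1.00, 26.8):.4f}")
```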
 