Nvidia Ampere Discussion [2020-05-14]

We know that these newest process nodes have fairly high variability in clocks/power - the silicon lottery only gets worse as time goes by.

84SM + best silicon + ~8% better bandwidth?

If you're interested in a 3090, are you not going to wait to find out what the 3090Ti does? Or is that going to be the Titan, which gamers just ignore because $4000?
I guess that might depend on how much you could win in the silicon lottery. I was under the impression that things got a bit less variable after the introduction of FinFET, but honestly I haven't made an effort to stay up to date with 7nm and beyond yet. Were there vastly different clock speeds in Navi 10 on standard 7nm already?

And frankly, if the 3090 already has 24 GiB of G6X, and since GA100 has no RT cores, I could see this round going without a Titan-branded card. Just a few pro features enabled via driver for SPECviewperf narrows the niche even more.
 
 
On ray tracing performance: the details given for Xbox Series X put RDNA 2 below Turing, and Ampere could be significantly faster than both. DLSS 2 is another unknown, as is mesh shader performance.
We don’t know what we don’t know yet. It’s not like we are benchmarking release candidates. The NVIDIA RTX version of Minecraft could be very well optimized compared to the four-week DXR port of Minecraft.


Granted, I'm still in the same boat as you, but I don’t want to make a definitive statement yet on the whole of Navi 2 based on an unoptimized product with an immature devkit and possibly immature drivers. It is unfair to dog them before they have had a chance to showcase it.
 
I am not basing this on the Minecraft demo alone, but also on the technical specs. RDNA 2's RT hardware is shared twice: first with the texture units for intersection tests, and second with the shader cores for BVH traversal. It's a system optimized for die-area savings. NVIDIA's solution in Turing is fully independent, and hence should logically be faster.
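To make that argument concrete, here's a toy throughput model. Every number in it is invented for illustration (rays per frame, ops per ray, unit throughputs); none of them are real RDNA 2 or Turing figures. It just demonstrates the mechanism: when traversal borrows the shader cores and intersection borrows the TMUs, RT work lengthens the frame, while dedicated units can overlap with shading and texturing.

```python
# Toy model: RT work sharing shader cores / texture units vs. dedicated RT units.
# Every constant below is an invented illustration, not a real RDNA 2 or Turing spec.

RAYS = 100e6            # assumed rays cast per frame
TRAV_PER_RAY = 20       # assumed BVH node visits per ray
ISECT_PER_RAY = 4       # assumed box/triangle tests per ray
SHADE_OPS = 2e9         # assumed shading work per frame (abstract ops)
TEX_OPS = 1e9           # assumed texturing work per frame (abstract ops)

SHADER_TPUT = 1e12      # assumed shader-core throughput (ops/s)
TMU_TPUT = 2e11         # assumed texture-unit throughput (ops/s)
RT_TPUT = 1e12          # assumed dedicated RT-unit throughput (ops/s)

trav_ops = RAYS * TRAV_PER_RAY
isect_ops = RAYS * ISECT_PER_RAY

# Shared (RDNA 2-style, in this toy): traversal competes with shading on the
# shader cores, intersection competes with texturing on the TMUs.
shared_ms = max((SHADE_OPS + trav_ops) / SHADER_TPUT,
                (TEX_OPS + isect_ops) / TMU_TPUT) * 1e3

# Dedicated (Turing-style, in this toy): RT work runs on its own units and
# overlaps with shading and texturing.
dedicated_ms = max(SHADE_OPS / SHADER_TPUT,
                   TEX_OPS / TMU_TPUT,
                   (trav_ops + isect_ops) / RT_TPUT) * 1e3

print(f"shared units:    {shared_ms:.1f} ms/frame")    # 7.0 ms with these numbers
print(f"dedicated units: {dedicated_ms:.1f} ms/frame") # 5.0 ms with these numbers
```

Under these made-up numbers the dedicated design wins simply because RT work stops stealing cycles from shading and texturing; whether that holds in practice depends on the real unit throughputs and workloads.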
 
I think that you're oversimplifying something that's not so simple. We'll find out soon enough how the two compare.
 
1.4 (TGP increase, 250 W -> 350 W) * 1.9 (perf per watt) = 2.66x (best case), no?
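For anyone plugging in other numbers, the arithmetic is just the power ratio times the claimed perf/W gain; a sketch, and note the 1.9x perf-per-watt figure is the rumored number from above, not a confirmed spec:

```python
# Best-case uplift = power ratio * claimed perf-per-watt gain.
def best_case_uplift(old_tgp_w: float, new_tgp_w: float, perf_per_watt_gain: float) -> float:
    return (new_tgp_w / old_tgp_w) * perf_per_watt_gain

# Rumored numbers from the post above: 250 W -> 350 W TGP, 1.9x perf/W.
print(f"{best_case_uplift(250, 350, 1.9):.2f}x")  # -> 2.66x
```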
Powering 8K is certainly a very bold claim that should be substantiated by a significant performance increase. Claiming the greatest generational leap in NVIDIA's history is also very bold; I have not seen NVIDIA use that kind of language in the last 10 years.
 
Not really that bold to be honest. Context means everything here.
It could be only slightly faster than Turing at 8K and that claim would still be correct.
Bold enough that the same was not mentioned during the Turing unveiling.
 