Nvidia Ampere Discussion [2020-05-14]

I think you got that backwards; a GA103 would probably be a smaller die than GA102, and hopefully more power efficient, while providing the same number of SMs as the scaled-down part used in the 3080 now.

Looks like I did, which would make more sense. Though this also assumes the yield on the GA102 chip goes up enough that they don't have to bin off all the chips they're currently using for the 3080.

On another note, while all the problems with the 30xx series so far validate my hypothesis that it was a compute-focused arch rushed into gaming duty, I didn't expect Nvidia to push anything like these problems onto their customers. Is it really as bad as this thread is making out? Are there just a small percentage of bad boards, or is it more widespread?

Either way, the problems explain why Anandtech's review is taking so long. Good on them for, presumably, putting in the extra work rather than rushing a review out the door to be relevant.
 

I think this is a pretty good explanation of wtf is going on with the crashing.

 
Looks like the driver fixed the most prominent issue. No need to go hunting for a board with specific capacitors. PCWorld had a card that consistently crashed with the old driver and saw no crashes with the new one.

Nvidia's new Game Ready 456.55 drivers fix the issues we've had with a GeForce RTX 3080 crashing during games, but slightly limits the top GPU Boost clock speed.

To put this in proper context, these maximum GPU Boost clock speeds remain well above the rated boost speeds for these cards. The “fix” results in such a minor speed difference that you won’t practically notice it in the game itself.

https://www.pcworld.com/article/3583894/nvidia-fix-rtx-3080-crashes-new-drivers-clock-speed.html
 
NVIDIA RTX 30 Fine Wine: Investigating The Curious Case Of Missing Gaming Performance
Rendering applications are designed to use a ton of graphics horsepower. In other words, their software is coded to scale exponentially more than games (there have actually been instances in the past where games refused to work on core counts higher than 16). If *a* rendering application can demonstrate the doubling in performance, then the hardware is not to blame. The cores aren't inferior. If *all* rendering applications can take full advantage, then the low-level driver stack isn't to blame either. That would point the finger at APIs like DirectX, GameReady drivers, and the actual code of game engines. So without any further ado, let's take a look.
...
Because the problem with the RTX 30 series is very obviously one that is based in software (NVIDIA quite literally rolled out a GPU so powerful that current software cannot take advantage of it), it is a very good problem to have. AMD GPUs have always been praised as "fine wine". We posit that NVIDIA's RTX 30 series is going to be the mother of all fine wines. The level of performance enhancement we expect to come to these cards through software in the year to come will be phenomenal. As game drivers, APIs, and game engines catch up in scaling and learn how to deal with the metric butt-ton (pardon my language) of shading cores present in these cards, and DLSS matures as a technology, you are not only going to get close to the 2x performance levels but eventually exceed them.

While it is unfortunate that all this performance isn't usable on day one, this might not be entirely NVIDIA's fault (remember, we only know the problem is on the software side; we don't know for sure whether the drivers, the game engines, or the API is to blame for the performance loss), and one thing is for sure: you will see chunks of this performance get unlocked in the months to come as the software side matures. In other words, you are looking at the first NVIDIA fine wine. While previous generations usually had their full performance unlocked on day one, the NVIDIA RTX 30 series does not, and you would do well to remember that when making any purchasing decisions.

Fine wine aside, this also has another very interesting side effect. I expect next to no negative performance scaling as we move down the roster. Because the performance of the RTX 30 series is essentially being software-bottlenecked, and the parameter around which the bottleneck revolves appears to be the number of cores, less powerful cards should experience significantly less bottlenecking (and therefore higher scaling). In fact, I am going to make a prediction: the RTX 3060 Ti, for example (with 512 more cores than the RTX 2080 Ti), should experience much better scaling than its elder brothers and still beat the RTX 2080 Ti! The lower the core count, the better the scaling, essentially.
https://wccftech.com/nvidia-rtx-30-fine-wine-investigating-the-curious-case-of-missing-gaming-performance/
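
For what it's worth, the article's scaling argument reduces to a simple saturation model: if today's game software only spreads work across so many cores, everything above that point sits idle, and smaller parts waste less silicon. A toy Python sketch of that logic (my own illustration with an invented saturation point, not the article's methodology):

```python
# Toy model of the article's scaling claim: game software only parallelizes
# up to some effective core count, so extra cores beyond it sit idle.
# The saturation point (6000) is an invented parameter for illustration.

def effective_speedup(cores: int, saturation: int = 6000) -> int:
    """Usable parallelism, flat once the software stops scaling."""
    return min(cores, saturation)

for name, cores in [("RTX 2080 Ti", 4352),
                    ("RTX 3060 Ti", 4864),
                    ("RTX 3080", 8704)]:
    util = effective_speedup(cores) / cores
    print(f"{name}: {cores} cores, {util:.0%} utilized in this toy model")
```

Under any model of that shape, the 3060 Ti sits below the knee while the 3080 sits well above it, which is exactly the prediction being made.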
 
Let's assume that NVidia has a basically perfect GPU simulator that was used to design Ampere. This would mean the compiler is already "fully optimised" for game shaders' use of the ALUs. So it seems unlikely that existing games will "get better" due to the shader compiler and it also seems unlikely that console games ported to PC in the near future will be transformed by unlocked-math on Ampere, aging fine-wine style as the months go by.

Of course, in using this "perfect simulator", NVidia would have acted rationally in making Ampere. It's very tempting to be dismissive, but you'd have thought that a few hundred (at the very least) software guys who aren't fully committed to AI research were put on the job of coming up with things to do with Ampere's FLOPS.

Maybe ray tracing will turn out to consume lots of ALU cycles as devs get deeper into it. Cyberpunk 2077 could be a real showcase?
 
No need to go hunting for a board with specific capacitors.
That hunt is still on, though, partially because there is still up to a 10-15% spread in achievable boost clocks, independent of the TDP limit.

And if someone claims that the driver fixed "everything": there are still a few weird reports of audio glitches (in analog sound cards) triggered by Gigabyte and some MSI models around the corresponding models' top boost steps, even though the cards themselves run stable at that point now.

I'm still curious as to what NVidia actually did to the boost mechanic in order to patch it, though. Reducing the noise by introducing a cooldown period between frequency switches? Or just a simple "if crash, don't" logic which keeps the GPU in a failsafe state during the brownout?
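
If it's the cooldown theory, conceptually it's just rate-limiting the V/F state switches. A speculative Python sketch of what that logic might look like (the internals are pure guesswork on my part, and every number is invented):

```python
# Speculative sketch of a "cooldown" boost governor: after each frequency
# switch, hold the new state for a minimum dwell time so the VRM sees
# fewer abrupt load transients. Structure and numbers are guesses.

class CooldownBoost:
    def __init__(self, dwell_s: float = 0.005, start_mhz: int = 1900):
        self.dwell_s = dwell_s      # minimum time between V/F switches
        self.last_switch = 0.0      # timestamp of the last switch
        self.clock_mhz = start_mhz

    def request(self, target_mhz: int, now: float) -> int:
        # Only honor a new boost request once the dwell time has elapsed;
        # otherwise stay put, filtering out rapid back-and-forth switching.
        if now - self.last_switch >= self.dwell_s:
            self.clock_mhz = target_mhz
            self.last_switch = now
        return self.clock_mhz

boost = CooldownBoost()
for t, target in [(0.001, 2010), (0.002, 1905), (0.006, 2010)]:
    print(t, boost.request(target, t))   # early requests get filtered
```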
 
While all the problems with the 30xx series so far validate my hypothesis that it was a compute-focused arch rushed into gaming duty...

You are giving me deja vu of Charlie claiming that Fermi was a compute-focused architecture quickly adapted for graphics, which was a huge pile of shit, as it was revealed to be a geometry monster that shockingly outperformed the HD 5870 in tessellation.
 

Fermi was a compute revolution too... I don't know what Charlie said at the time, but compute-focused doesn't mean it won't perform well in games. For Ampere, my feeling is yeah, it's a compute monster, but Nvidia is pushing things that need compute / tensor cores, like DLSS. Maybe their bet is that gaming and compute will get closer and closer in future years, while AMD decided to have two different architectures (but at what cost for them?). And right now it doesn't matter a lot, since Nvidia is king in compute AND gaming, so...
 
Ampere is "compute architecture" only in a sense that it's compute performance isn't fully tapped by current gen games yet.

And as for next-gen consoles being on AMD again, this is mostly irrelevant; what matters is the change in compute-to-bandwidth ratio which these consoles will have in comparison to current-gen ones.
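
You can put rough numbers on that ratio shift using commonly cited launch specs (FP32 TFLOPS and memory bandwidth; treat these as ballpark figures, and the Series X number uses its fast 10 GB pool):

```python
# FLOPs-per-byte ratios from commonly cited launch specs (ballpark figures).
specs = {
    "PS4":           (1.84, 176),    # TFLOPS FP32, GB/s
    "PS5":           (10.28, 448),
    "Xbox Series X": (12.15, 560),   # fast 10 GB memory pool
    "RTX 2080 Ti":   (13.45, 616),
    "RTX 3080":      (29.77, 760),
}
for name, (tflops, gbps) in specs.items():
    print(f"{name:13s} ~{tflops * 1000 / gbps:5.1f} FLOPs per byte")
```

Ampere's compute-to-bandwidth ratio comes out nearly double that of the next-gen consoles, which is another way of saying that games tuned for those machines won't come close to saturating its ALUs.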
 
https://www.igorslab.de/wundertreib...-gleich-noch-die-netzteile-verschont-analyse/
Analysis of what the driver update did.

Rough summary:
Driver update changed the boost strategy and applied undervolting to all cards.
With the most unstable cards, there were frequent, unfiltered, microsecond-range spikes of up to 600W all the way to the PSU. Frequency of these spikes has dropped significantly, and magnitude of the spikes has been reduced by ~70W (that's still 530W spikes if you have a bad model).
Impact is a 1-2% performance drop, and a moderate regression in frame times:
[Image: 33-Variances.png, frame time variance comparison from Igor's Lab]
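
The undervolting half of that fix also squares with first-order CMOS power scaling: dynamic power goes roughly as C·V²·f, so even a small voltage trim shaves transients more than the clock cap alone would. A back-of-envelope check in Python (the voltage and clock values are my own illustrative guesses, not Igor's measurements, but they land in the same ballpark as his ~70W spike reduction):

```python
# First-order estimate: dynamic power ~ C * V^2 * f.
# Voltage and clock values are illustrative guesses, not measured data.
def rel_power(v: float, f: float, v0: float = 1.081, f0: float = 2040.0) -> float:
    return (v / v0) ** 2 * (f / f0)

trim = 1 - rel_power(v=1.044, f=2000.0)   # ~35 mV undervolt, ~40 MHz cap
print(f"~{trim:.0%} lower peak dynamic power -> ~{600 * (1 - trim):.0f}W spikes")
```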
 
Earlier today we tested, analyzed (and confirmed) that NVIDIA has been tweaking the clock frequencies and voltages. Our homegrown AfterBurner can analyze and help here. Below you can compare the 456.38 and new 456.55 driver VF curves; the new one is slightly different and clearly shifted to precisely 2000 MHz in the upper range. So NVIDIA has taken the edge off the frequency, and a slightly lower voltage seems to be applied as well. The plot below is based on the FE card, not even an AIB one. So NVIDIA is applying this driver-wide, for their own Founders cards as well.

During testing, we also re-ran the benchmarks, and the offset effects are close to zero, meaning at 100 FPS you'd perhaps see a 1 FPS differential, but that can easily be assigned to random anomalies as well. The reason there is so little performance decrease is simple: not many games push the GPU all the way to the end of the spectrum at, say, 2050 MHz. That's isolated to very few titles, as most games are GPU-bound and hover in the 1900 MHz domain.
https://www.guru3d.com/news-story/g...ely-due-to-poscap-and-mlcc-configuration.html
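
Mechanically, what Guru3D describes is just a clamp applied to the top of the V/F curve table. A minimal sketch of that operation (the curve points are invented for illustration, not the actual dumped values):

```python
# Clamp the top of a voltage/frequency curve the way the 456.55 driver
# appears to: cap frequency at 2000 MHz and shave a little voltage off
# the upper entries. These curve points are invented, not dumped values.
vf_curve = [(0.900, 1800), (0.975, 1905), (1.050, 1995), (1.081, 2055)]

def cap_curve(curve, f_max=2000, dv=0.006):
    return [(round(v - (dv if f > f_max else 0.0), 3), min(f, f_max))
            for v, f in curve]

print(cap_curve(vf_curve))
# [(0.9, 1800), (0.975, 1905), (1.05, 1995), (1.075, 2000)]
```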
 
Well at least it's still performing very well.

Yes, the performance delta with the new drivers seems negligible and the RTX 3080 cards still seem like good purchases assuming you manage to find one at MSRP.
I do think it's a bad idea to buy one right now even for die-hard Nvidia fans, as Navi 21/22 may trigger Nvidia to lower their current prices and/or launch the Super/Ti versions with more VRAM at the same price as the current ones.
I guess it's like the most level-headed reviewers are saying: "if you really need a card right now and you can find one, then go ahead; otherwise you should wait".


Though the lapsed electrical engineer in me can't stop feeling wary of what seemed to be capacitors being saturated on day one, to the point of SoC failure. Now the SoC isn't failing, but I wonder if those capacitors are being driven at healthy load levels.
I didn't see der8auer's video, though. He might have provided further insight on this.
 
Though the lapsed electrical engineer in me can't stop feeling wary of what seemed to be capacitors being saturated on day one, to the point of SoC failure. Now the SoC isn't failing, but I wonder if those capacitors are being driven at healthy load levels.
There's something else I'm a bit worried about: EMC. We do have reports of glitches induced into sound cards, and just going by the numbers, this is still likely to be critical.

EDIT: It's not going to fail EMC under EU regulation that easily, as the casing is going to filter out just about all emissions. Compatibility with other parts inside the same system is still at risk, though.
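
On the capacitor saturation worry: the whole POSCAP-vs-MLCC debate boils down to impedance at transient frequencies, which you can eyeball with the textbook series-RLC capacitor model. The ESR/ESL values below are generic ballpark parasitics, not measurements of any actual board:

```python
import math

# |Z| of a real capacitor: ESR in series with capacitive and inductive
# reactance. ESR/ESL values are generic ballpark figures, not board data.
def impedance(f_hz: float, c: float, esr: float, esl: float) -> float:
    x = 2 * math.pi * f_hz * esl - 1 / (2 * math.pi * f_hz * c)
    return math.hypot(esr, x)

poscap = (330e-6, 40e-3, 1.5e-9)   # one 330 uF polymer: higher ESR/ESL
mlccs  = (47e-6, 3e-3, 0.4e-9)     # bank of small ceramics: lower ESR/ESL

for f in (1e5, 1e6, 1e7, 1e8):     # 100 kHz .. 100 MHz transients
    zp = impedance(f, *poscap) * 1e3
    zm = impedance(f, *mlccs) * 1e3
    print(f"{f:9.0e} Hz   POSCAP {zp:8.2f} mOhm   MLCC bank {zm:8.2f} mOhm")
```

The ceramics stay low-impedance well into the tens of MHz, which is the band where fast load transients live; that's consistent with the MLCC-heavy boards tolerating the boost spikes better.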
 