Nvidia Ampere Discussion [2020-05-14]

I would be curious to see how a 64SM, 8GPC, 128 ROP card stacks up to the 3080.
Yes, I've been wondering why Nvidia didn't go "wide" on the GPC count, with fewer SMs in each. Clearly that means less ALU throughput per pixel, but GA102 appears to be over-the-top in that respect for years to come.
 
Is it less taxing on transistor count/density?
 
GPC is geometry processing cluster; all NV GPUs since Fermi have been overkill in geometry processing power, so there's no need to go wider.
Doesn't Nvidia attribute their so-called PolyMorph Engines to the TPCs (texture processing clusters) - whatever sense that naming makes? GPCs are graphics processing clusters and are grouped around rasterizers, AFAIR.
 
Back in the G80 days, TMUs sat outside the SMs and were shared by multiple SMs within a TPC. That arrangement went away for a while, then the TPC was reintroduced with Pascal to group SMs that share a PolyMorph Engine. No idea why they didn't change the name at that point.
 
Lots of users are reporting reduced boost clocks, some report reduced default voltages, and there are consistent reports of fewer stability issues. It looks like boost clocks of 2.1 GHz have been disabled (for cards with stock clocks), and the cards are in general boosting far more conservatively.

Shouldn't the reviews be erm.. reviewed, then?
 
https://www.computerbase.de/2020-09/nvidia-geforce-rtx-3080-ctd-abstuerze/

Google translate:
One day after the GeForce 456.55 driver was released, Monday's first impression has solidified: the new driver appears to solve the problems previously reported by many GeForce RTX 3080 buyers, as affected ComputerBase readers unanimously report in the thread on this news.

Treatment with no known side effects

The adjustments Nvidia made have been the subject of discussion and speculation ever since. Reports range from reduced maximum clock rates under partial load to reduced voltages, and the exact opposite. MSI Afterburner shows a slightly shifted voltage/frequency curve: with the new driver, the same clock is possible at a lower voltage.

On Wednesday, the editors could not establish a clear picture using an Asus GeForce RTX 3080 TUF Gaming OC: the maximum clock rate reached by the graphics card was at the same level with the new and the old driver, while average clock rates, core voltage and power consumption under load were sometimes slightly higher, sometimes slightly lower, depending on the game and resolution. On average, performance does not appear to be affected. The complete GPU test suite could not yet be run due to holidays.

Gigabyte and MSI comment

Against the background of the symptoms that the driver appears to have eliminated, Gigabyte and MSI have now also issued public statements and pointed to the new driver.

Gigabyte also points out that all GeForce RTX 3080 graphics cards that have shipped meet the specifications published by Nvidia and have passed all tests. Assessing the reliability of a graphics card takes more than counting the number and type of the decoupling capacitors, which the manufacturer uses in various types for good reason.

MSI likewise refers to its deliberate decision to use different capacitor types and continues to regard it as correct. All models delivered so far are based on an unchanged design that includes one MLCC group for the RTX 3080 and two MLCC groups for the RTX 3090 - even if some product photos convey a different picture. The company continues to stand behind this decision.

What was the cause?

The symptoms seem to have been eliminated, but the question of how the problems arose remains open. Without a doubt, the manufacturers who created custom designs based on the design guidelines were surprised by the crashes and overwhelmed by the ensuing debate about good and bad decoupling capacitors, which quickly took on a life of its own: in many places it was, for days, conducted exclusively in terms of PCB component choices.

It should not be assumed that Nvidia or its board partners will officially comment in detail on the background, should the driver adjustments prove to be a lasting solution. Whether partners will quietly make further adjustments to the PCB design of certain models based on new design guidelines remains to be seen. There is no question, however, that adjustments were made at short notice in the run-up to the launch. In addition to product photos in press releases and at retailers that deviate from shipping cards, manufacturers such as EVGA and Gainward have confirmed over the past few days that there are differences between the pre-production boards sent to reviewers and the retail boards.
 
https://www.guru3d.com/news-story/g...ely-due-to-poscap-and-mlcc-configuration.html

Update 5: We've been examining post- and pre-driver status to observe what NVIDIA has been doing. Today I tested many games, like DOOM Eternal, Strange Brigade, Control, Battlefield V and the extremely high-FPS-pushing Resident Evil, all without crashes at the three resolutions tested. Apparently it's titles like Horizon Zero Dawn that seem to be affected the most, specifically at Quad HD resolution.

Reports from the web are that the driver does fix the issue at hand for most if not all people. So what did NVIDIA do?

Earlier today we tested, analyzed (and confirmed) that NVIDIA has been tweaking the clock frequencies and voltages. Our homegrown Afterburner can analyze and help here. Below you can compare the V/F curves of the 456.38 and the new 456.55 driver; the curve is now slightly different and clearly capped at precisely 2000 MHz in the upper range. So NVIDIA has taken the edge off the frequency, and a slightly lower voltage also seems to be applied. The plot below is based on the FE card, not even an AIB card, so NVIDIA is applying this driver-wide, including for its own Founders Edition cards.
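To illustrate the kind of shift described above, here is a minimal sketch of a V/F curve being capped and voltage-trimmed. The numbers are invented for illustration; they are not the actual curves read out of Afterburner.

```python
# Illustrative sketch of a driver-side V/F (voltage/frequency) curve shift.
# All values are made up; they are NOT the real 456.38 / 456.55 curves.

old_curve = {  # voltage (mV) -> boost clock (MHz)
    900: 1905,
    950: 1980,
    1000: 2025,
    1050: 2070,
    1081: 2100,
}

def shift_curve(curve, clock_cap_mhz=2000, voltage_trim_mv=25):
    """Cap the top boost bins at clock_cap_mhz and offer each clock at a
    slightly lower voltage, mimicking the reported driver adjustment."""
    return {v - voltage_trim_mv: min(clk, clock_cap_mhz) for v, clk in curve.items()}

new_curve = shift_curve(old_curve)
for (v_old, c_old), (v_new, c_new) in zip(old_curve.items(), new_curve.items()):
    print(f"{v_old:>4} mV / {c_old} MHz  ->  {v_new:>4} mV / {c_new} MHz")
```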

During testing we also re-ran the benchmarks, and the offset effects are close to zero, meaning at 100 FPS you'd perhaps see a 1 FPS differential, and even that can easily be attributed to random variance. The reason there is so little performance decrease is simple: not many games push the GPU all the way to the end of the spectrum at, say, 2050 MHz. That's isolated to very few titles, as most games are GPU bound and hover in the 1900 MHz domain.
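A quick back-of-the-envelope calculation shows why the cap is nearly invisible in benchmarks. The time-at-clock split below is an assumption for illustration, not measured data:

```python
# Rough estimate of the FPS cost of capping boost at 2000 MHz.
# Assumption (illustrative): the game spends only 5% of its time in the
# bins above 2000 MHz, averaging 2050 MHz there, and performance scales
# linearly with clock in that window.

time_above_cap = 0.05               # fraction of time formerly spent above 2000 MHz
old_clock, capped_clock = 2050.0, 2000.0

relative_perf = (1 - time_above_cap) + time_above_cap * (capped_clock / old_clock)
print(f"Relative performance: {relative_perf:.4f}")             # ~0.9988
print(f"Loss at 100 FPS: {100 * (1 - relative_perf):.2f} FPS")  # ~0.12 FPS
```

Under these assumptions the cap costs about a tenth of a frame per second at 100 FPS, well inside run-to-run variance.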

We think it's fixed at the driver level this way, but that of course leaves open the topic of AIB OC products and tweaking stability. Granted, I have been stating in our reviews that Ampere seemed very hard to tweak. That picture fits well with what we have seen and read over the past couple of days. Typically, in the past you had ~10% headroom for tweaking; these days it's just a few percent. I think the margins are so small these days (this goes for processors as well) that if something goes wrong, it falls outside that margin and immediately presents itself as the behavior we have seen over the past couple of days.

Should you still experience CTDs with a GeForce RTX 3080, we'd love to hear from you in the comment thread below. But at this point, things seem stabilized with the 456.55 driver band-aid.
 
Good to see it's fixed so quickly with drivers. Early drivers tend to be a bit problematic.
I'll still wait for Navi 21 and see what that does. And Zen 3, ofc.
 
I'm waiting on Zen 3 + a 3080 20 GB or Big Navi 16 GB, if performance is great. It's an interesting time. Happy to see that drivers are fixing the problem.
 

The testing process for launch is sketchy as hell.

Edit: One of the main points brought up (if this is TL;DR for you) is that the AIBs only had drivers that would run 3DMark and FurMark, not regular games. So they were able to properly test their cooler designs, but they did not have drivers for testing games, where they might see lighter loads with higher boost behaviour. It also doesn't sound like they had much time to test everything before launch. I'm actually pretty shocked at how rushed it all is for a product that's so expensive. I'll never buy a GPU right at launch.
 
All in the name of preventing leaks. What a dumb move.
 

Does that really make sense? Would they really ship drivers that look for an executable and refuse to run if the program was not 3DMark or FurMark? It's not like it's hard to fool that by renaming the executable. How else would they limit the drivers, and how much effort would that be? Calling bullshit on this one, honestly.
 
It's more likely they just whitelisted the executables' hashes. And this is the first time I've heard of 3DMark and FurMark being on the list; before, it was NV-internal tools, with NVPunish mentioned more than once.
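A hash-based whitelist would indeed survive renaming, unlike a filename check. A minimal sketch of the idea - purely hypothetical; a real driver would do this in kernel code, and the digest below is a placeholder:

```python
import hashlib

# Hypothetical allow-list of SHA-256 digests for approved executables
# (e.g. benchmark and internal stress tools). The digest below is a
# placeholder, not a real binary's hash.
ALLOWED_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def is_whitelisted(exe_path: str) -> bool:
    """Return True if the executable's contents hash to an approved digest.
    Renaming the file changes nothing, because only the bytes are hashed."""
    h = hashlib.sha256()
    with open(exe_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() in ALLOWED_SHA256
```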

edit: too many commata
 