Nvidia Ampere Discussion [2020-05-14]

Again, that's a normal mode of GPU Boost operation.

There's thermal throttling, which happens if the card goes above its max temp, but I doubt that these FEs will throttle like that. If they do, then these weird-looking new coolers on them are actually bad.

Was Turing throttling down from specified boost speeds in normal operation?
 
Eh, there's no way the 3090 is at 70 °C / 30 dB when pushing 350 W. Nobody would buy AIB cards if that's true.
I seem to be in the minority here, as I really like this design and have liked it from the very beginning. If ever there was a graphics card put in MoMA, it'd be this.

I also strongly suspect this will make buying a third-party triple- or quadruple-fan monstrosity of a cooler a mistake on sheer performance, and that the only way up from there is watercooling.

So I expect that the AIBs are left with their "stock-overclocked" models to get sales. Setting the power boost slider up by 10 or 15% with the Nvidia cooler will get the same performance, be quieter and cost less...
 
Was Turing throttling down from specified boost speeds in normal operation?
NV boost clocks aren't what a card "throttles down" from; they're basically what it starts boosting up from when thermals and power allow.
So in a worst-case scenario you could probably see a Turing GPU running at its specced boost clocks, but generally they run some 100 MHz higher than that.
Ampere is unlikely to be different, I guess.
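To make the "boosting up from" behaviour concrete, here's a toy model in Python. Everything in it (the clocks, the bin size, the per-bin headroom costs) is made up for illustration; it's just the shape of the behaviour, not Nvidia's actual algorithm:

```python
# Toy model of GPU Boost as described above: the advertised boost clock is a
# floor the card climbs up from, not a cap it throttles down to.
# All numbers are made up for illustration.

ADVERTISED_BOOST_MHZ = 1710  # hypothetical spec-sheet boost clock
FREQ_CAP_MHZ = 1920          # hypothetical hard cap of the boost table
BIN_MHZ = 15                 # one clock bin

def effective_clock(power_headroom_w: float, thermal_headroom_c: float) -> int:
    """Climb above the advertised boost clock one bin at a time while
    power and thermal headroom remain."""
    clock = ADVERTISED_BOOST_MHZ
    while (clock + BIN_MHZ <= FREQ_CAP_MHZ
           and power_headroom_w > 0
           and thermal_headroom_c > 0):
        clock += BIN_MHZ
        power_headroom_w -= 5.0    # assumed power cost per bin
        thermal_headroom_c -= 0.5  # assumed temperature cost per bin
    return clock

print(effective_clock(power_headroom_w=40, thermal_headroom_c=15))
# -> 1830, i.e. ~120 MHz above the advertised boost clock
```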
 
I seem to be in the minority here, as I really like this design and have liked it from the very beginning. If ever there was a graphics card put in MoMA, it'd be this.

You're not alone. I really like the FE aesthetic and hate the vomit that most AIBs are putting out. Hopefully the FE can back up its looks with good cooling performance.
 
You're not alone. I really like the FE aesthetic and hate the vomit that most AIBs are putting out. Hopefully the FE can back up its looks with good cooling performance.
The worst part about the FE is the pewter-ish hue it has; it won't match literally anything else on the market.
 
The German publication's sources claim that 30%, 60% and 10% of the GeForce RTX 3080's dies belong to Bin 0, Bin 1 and Bin 2, respectively. The data is reassuring, since it suggests that the majority of consumers should get a good sample, unless you're one of the unlucky ones who falls inside the 30%.

This should be the same for the GeForce RTX 3090 as well, except that the percentage of Bin 2 chips may be higher. This is due to the fact that manufacturers are only receiving a small number of GA102 dies from the initial production. In any case, the RTX 3090 will necessitate using the higher-binned GA102 chips in the first place, since it has 82 SMs instead of only 68.

That last part will likely impact the overall quality of RTX 3080 chips. If most of the Bin 2 chips end up going into the RTX 3090, and there's a reasonable chance that most of the Bin 0 chips will end up in the RTX 3080, your chances of getting a 'very good' RTX 3080 drop. Beyond that, the AIBs will further bin whatever chips they receive and offer varying models of each GPU, with the best chips being reserved for higher factory overclocks ... and higher prices.
https://www.tomshardware.com/news/o...3080-rtx-3090-samples-will-be-hard-to-come-by
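Some back-of-the-envelope arithmetic on that split. The 30/60/10 numbers are from the article; the 80% diversion of Bin 2 dies to the 3090 is purely an assumption to show the direction of the effect:

```python
# Bin 0 = worst, Bin 2 = best, per the article's 30/60/10 split.
bins = {"Bin 0": 0.30, "Bin 1": 0.60, "Bin 2": 0.10}

# Chance a random GA102 die is Bin 1 or better:
print(bins["Bin 1"] + bins["Bin 2"])  # 0.7

# Assume (hypothetically) 80% of Bin 2 dies are diverted to the RTX 3090.
# The pool left over for the RTX 3080 then skews worse:
remaining = {"Bin 0": 0.30, "Bin 1": 0.60, "Bin 2": 0.10 * 0.2}
total = sum(remaining.values())
print({k: round(v / total, 3) for k, v in remaining.items()})
# -> {'Bin 0': 0.326, 'Bin 1': 0.652, 'Bin 2': 0.022}
```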
 
Not all chips are equal, so of course some will run better than others...

Maybe I’m not being clear. Chip quality doesn’t explain why temperatures would be lower at the same voltage. Chip quality typically determines how much voltage you need. Better chip = lower voltage = lower temps.
 
Well that’s not helpful at all. It certainly doesn’t explain the physics behind lower temps at the same voltage and clocks.
In the past, you had high-leakage and low-leakage parts. Low-leakage parts would go to mobile or second-bin chips, where frequencies would not be pushed too high. High-leakage parts drew more power, but could scale better with higher voltage. Driven at the same parameters, there would be differences in power consumption.

I was under the impression, though, that this got remedied a lot with 16 nm process tech already, and that the much more granular power management thingies did their part as well to level that particular playing field.
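The usual first-order way to see it: total power splits into dynamic switching power, which depends on voltage squared and frequency, and static leakage power, which varies die to die. A rough sketch with made-up constants (not measured figures for any real GPU):

```python
# P_total ~ activity * C * V^2 * f  (dynamic)  +  V * I_leak  (static leakage)
# All constants are illustrative, not measurements of any real part.

def total_power_w(v: float, f_mhz: float, i_leak_a: float,
                  switch_cap_nf: float = 250.0, activity: float = 0.2) -> float:
    dynamic = activity * switch_cap_nf * 1e-9 * v ** 2 * f_mhz * 1e6
    static = v * i_leak_a
    return dynamic + static

# Same voltage and clocks, different leakage current:
print(total_power_w(v=1.0, f_mhz=1800, i_leak_a=10.0))  # 100.0 W (low-leakage die)
print(total_power_w(v=1.0, f_mhz=1800, i_leak_a=25.0))  # 115.0 W (high-leakage die)
```

That's why two dies at identical voltage and clocks can still differ in power and therefore temperature: the dynamic term is the same, the leakage term isn't.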
 
Was Turing throttling down from specified boost speeds in normal operation?
What is "throttling down" in the first place? Normal cards had an upper freq cap of 1920 MHz, if the card went above somewhere-in-the-40ish-i-cannot-remember-the-exact-value °C, it scaled back a bin or two. The next step occured at 63 °C IIRC. Is the first one downclocking already, is the second one? What if both are 100+ MHz above advertised boost clocks?
 
In the past, you had high-leakage and low-leakage parts. Low-leakage parts would go to mobile or second-bin chips, where frequencies would not be pushed too high. High-leakage parts drew more power, but could scale better with higher voltage. Driven at the same parameters, there would be differences in power consumption.

I was under the impression, though, that this got remedied a lot with 16 nm process tech already, and that the much more granular power management thingies did their part as well to level that particular playing field.
Yes, that is what was always funny about the "grading" or "rating" of GPU dies that GPU-Z(?) was doing.
Most overclockers/underclockers wanted a higher-leakage part, aka a lower-rated one.
Most regular people thought a higher-rated part was better. Well, it just depends on your definition of "better."

Like you said, much more granular/finer power management, with a lot more control over the power/speed states, greatly helped to even out the playing field between different bins. 16/14 nm helped because of FinFET, and the density improvements going from 28 nm to 16/14 nm FinFET allowed them to implement the increased power management.

A little bit of speculation here on my part, but from what I have heard, 7 nm became a bit of an issue with leakage and clock speeds again because of quad patterning. I believe AMD talked about how closely they worked with and integrated their designs for the process to break through some of those limitations.
 
FE lights via Reddit via Twitter.

 
Some history:

https://videocardz.com/70838/gpu-base-boost-typical-and-peak-clocks-whats-the-difference

The behaviour of cards under extended thermal load is something that isn't usually tested; Gamers Nexus is good at this. Sustained thermal load generally leads to throttling.

A lot of published tests use "open air" rigs where thermals are not representative of gamers' case setups. Gamers don't game for the duration of your typical benchmark; they game for much longer. The card clock specifications are an attempt to "guarantee" the user experience given the silicon lottery and the variation in case cooling found across gamers' systems.
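If you want to check this on your own system rather than trust open-air reviews, logging clocks and temperature over a long session is easy. A minimal Python wrapper around nvidia-smi (the query field names are standard nvidia-smi ones; the sampling interval is arbitrary):

```python
# Log GPU temperature, graphics clock and power draw every 5 seconds until
# interrupted, then eyeball whether clocks sag as temperature plateaus.
import subprocess

subprocess.run([
    "nvidia-smi",
    "--query-gpu=timestamp,temperature.gpu,clocks.gr,power.draw",
    "--format=csv",
    "--loop=5",
])
```

If clocks.gr drifts down while temperature.gpu flattens out at the card's limit during a multi-hour session, that's throttling a typical ten-minute benchmark run will never show.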

 