Nvidia GeForce RTX 4090 Reviews

22% is a huge uplift for just a single feature!

The delta can be much bigger depending on the scene; in this case the uplift is almost 40%.

SER Disabled
[screenshot]

SER Enabled
[screenshot]
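
(For anyone wondering how a scheduling feature alone buys 20-40%: SER lets the ray-tracing pipeline reorder threads so that rays needing the same hit shader/material execute together, which cuts SIMT divergence. The real feature is exposed through NVAPI/HLSL extensions; the toy Python below only illustrates the grouping idea, with made-up ray counts and shader IDs.)

```python
import random

WARP_SIZE = 32
NUM_RAYS = 1 << 16
NUM_HIT_SHADERS = 8  # made-up: distinct hit shaders / materials in the scene

# Each ray ends up needing one of several hit shaders, in no particular order.
random.seed(0)
hit_shader = [random.randrange(NUM_HIT_SHADERS) for _ in range(NUM_RAYS)]

def avg_shaders_per_warp(order):
    """Average number of distinct hit shaders a warp has to step through.

    Under SIMT execution a warp containing N distinct shaders pays roughly
    N serialized passes, so lower is better.
    """
    counts = []
    for i in range(0, len(order), WARP_SIZE):
        warp = order[i:i + WARP_SIZE]
        counts.append(len({hit_shader[r] for r in warp}))
    return sum(counts) / len(counts)

launch_order = list(range(NUM_RAYS))                           # as-traced order
reordered = sorted(launch_order, key=lambda r: hit_shader[r])  # SER-style grouping

print("unsorted :", avg_shaders_per_warp(launch_order))  # ~7.9 shaders per warp
print("reordered:", avg_shaders_per_warp(reordered))     # ~1.0 shader per warp
```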
 
Has the frequency/power curve changed with Ada or are they just pushing it faster because they can? Like if they’d kept the 4090 officially at 300W, how much slower would it be?
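
(Rough back-of-envelope, not measured data: dynamic power goes roughly as V²·f, and near the top of the boost range voltage rises roughly linearly with clock, so power scales something like the cube of frequency once the card is actually power-limited. Under that toy model a hard 300W cap costs on the order of 13% clock in the worst case, and nothing at all whenever the game wasn't pulling 450W anyway.)

```python
# Toy model only: assumes dynamic power ~ C * V^2 * f and V roughly linear in f
# over the boost range, so P ~ f^3 when the card is power-limited.
# Real cards have static power, memory power, and a non-linear V/F table,
# so treat this as a rough upper bound on the clock loss, not a prediction.

STOCK_LIMIT_W = 450

for cap_w in (400, 350, 300):
    clock_scale = (cap_w / STOCK_LIMIT_W) ** (1 / 3)
    print(f"{cap_w} W cap: ~{(1 - clock_scale) * 100:.0f}% lower clocks "
          f"(only when the workload would have hit {STOCK_LIMIT_W} W)")
```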
 
That’s what I suspected. It seems whack to me that they made the thing such a chungus for basically no reason.
 
It's been the norm on higher-end cards for a while; the 4090's extreme TDP just exaggerates the effect.
 
But is it necessary for the TDP to be so high? Seems they could have aimed for ~300W with hardly any noticeable performance loss. Let the AIBs go crazy with the 450W triple/quad slot absolute units.
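
(If you own the card, you can test that directly: cap the power limit and rerun whatever you play. A minimal sketch below, assuming nvidia-smi is on PATH and you have admin rights for the -pl call; the benchmark command is a placeholder, and the number that matters is your own FPS log, not anything this script prints.)

```python
import subprocess

# Placeholder benchmark command -- swap in whatever game/benchmark you actually run.
BENCHMARK_CMD = ["./run_my_benchmark.sh"]

def set_power_limit(watts: int) -> None:
    # Requires admin/root; -pl is nvidia-smi's stock power-limit switch.
    subprocess.run(["nvidia-smi", "-pl", str(watts)], check=True)

def sample_power_and_clock() -> str:
    # One-shot sample of board power draw and graphics clock.
    result = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=power.draw,clocks.gr",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()

for cap in (450, 350, 300):
    set_power_limit(cap)
    subprocess.run(BENCHMARK_CMD, check=True)  # record FPS however the benchmark reports it
    print(f"{cap} W cap -> {sample_power_and_clock()}")
```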
 
I think Nvidia wants to make AIBs irrelevant, so they made the FE a bit crazy: huge cooler plus an insane max power draw, to avoid variants like the Strix and HOF outshining it. The main difference between the FE and AIB 3090s was the higher TDP, which allowed slightly higher boost clocks and gave them the edge over the FE. It also makes sure that all the benchmark graphs show good numbers for the FE when comparing against AMD/Intel; I think most reviewers use the FE numbers for performance comparisons.
 
The TDP for the 4090 (and perhaps the 4080 as well) is a data point that inadequately describes the diverse power characteristics of the card. We've come to expect over the past several generations that GPUs will run up against their TDP more often than not. The 4090 does not do that for, I believe, three main reasons. One, the GPU is so damn wide that the typical gaming load doesn't come close to saturating it (either due to game engine/software limitations or bottlenecks in the CPU or other system hardware). Two, the TSMC process has tremendous voltage/clock scaling, and Nvidia appears to have tapped into the process's full potential. Three, Nvidia's driver smartly throttles down the 4090 when a workload cannot fully exploit the GPU (e.g. low load or a frame cap), guiding it to a more efficient position on the V/F curve.

Why the 450W TDP, then, when 300W or 350W would yield practically the same performance for most modern gaming workloads (which is certainly what I've seen in my personal gaming use of my 4090)? I expect the answer is multi-faceted and, among other things, involves the economics of the competitive scene and Nvidia paving the way for changes in PC infrastructure down the road. But one part of the answer is that a lower TDP would not allow future, more demanding games or software to use the 4090 to its fullest. There's no way around it: you need a lot of power to light up all those transistors at once. As CPUs and system memory get faster and games evolve to make more use of the 4090, I expect its power consumption will trend upward, but so will performance; at the very least, performance won't buckle the way it would if the card became power-limited at 300W or 350W.
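
(To watch the behavior described above on your own system, here is a minimal logging sketch using the pynvml bindings; the GPU index, sample interval, and formatting are arbitrary choices. In most games the reported draw sits well under the enforced limit.)

```python
import time
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumes the 4090 is GPU 0

limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(gpu) / 1000.0
print(f"enforced power limit: {limit_w:.0f} W")

try:
    # Log once a second while you play; watch how rarely draw approaches the limit.
    while True:
        power_w = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0  # reported in mW
        sm_mhz = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_SM)
        util = pynvml.nvmlDeviceGetUtilizationRates(gpu).gpu
        print(f"{power_w:6.1f} W  {sm_mhz:5d} MHz  {util:3d}% util")
        time.sleep(1.0)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```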
 