Nvidia GeForce RTX 4090 Reviews

Can some people here chill and stop being so defensive of a technology (DLSS3) that was just introduced and is being analysed? What do you want? For people, especially in a technical forum, to just accept anything and everything from one company without scrutiny? Enough with shooting down anyone who isn't yet on board with "DLSS3 is great!". The tech looks very complex and naturally raises a lot of questions. AMD's FSR1 and FSR2 were scrutinized to oblivion; DLSS3 should not be any different.
Yep. I mean this is brand new stuff, and to me it's perfectly logical that if a game doesn't properly account for such things, a complete sudden change of all the pixels on the screen will cause issues with this tech.

I fully expect as time goes on that games and engines will become better at interfacing with the technology and do a much better job. I'm still wildly impressed.
 
Well, unlike DLSS2, which "just" upscales an image based on the existing frames, DLSS3 is generating completely new frames using solely AI. From my understanding, it's a technology that is pretty much engine agnostic. It would kind of defeat its purpose if said engine had to do more work to feed it specific data other than the frames it outputs.
 
I expect camera cut issues to be an early teething / per-game implementation issue - by design, DLSS 3 should have the info (motion vectors) to make informed decisions about camera cuts.

There are flags in DLSS 2.x, for example, to make it camera-cut aware.
Another thing which I feel is being "lost in translation" a bit is that FG isn't a "fixed h/w unit", it's an NN program which uses a number of data inputs to generate a frame - with the NVOF h/w data being just one of said inputs. This means that the FG component of DLSS will likely see improvements over time similar to those which we've seen with DLSS2 reconstruction between 2.0 and 2.4.
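
FWIW, a minimal sketch of what that engine-side hint can look like, assuming a hypothetical wrapper around the upscaler/FG API (every struct and function name below is made up for illustration; the real DLSS integration simply exposes an equivalent reset/camera-cut flag for the engine to set):

```
// Hypothetical engine-side hook: when the camera teleports or a cutscene
// hard-cuts, temporal history and optical-flow data from the previous frame
// are meaningless, so we tell the upscaler/frame-generation pass to reset.
#include <cmath>

struct CameraState {
    float position[3];
    float forward[3];
};

struct TemporalFrameInputs {
    // ... color, depth, motion vectors, jitter, etc. would live here ...
    bool reset = false;  // "history invalid" hint, analogous to DLSS's camera-cut flag
};

// Crude heuristic: treat a large positional jump or a view-direction flip as a cut.
// A real engine would set this explicitly from gameplay/cutscene code instead.
bool DetectCameraCut(const CameraState& prev, const CameraState& curr,
                     float teleportThreshold = 10.0f /* metres, arbitrary */) {
    float dx = curr.position[0] - prev.position[0];
    float dy = curr.position[1] - prev.position[1];
    float dz = curr.position[2] - prev.position[2];
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);

    float dot = curr.forward[0] * prev.forward[0] +
                curr.forward[1] * prev.forward[1] +
                curr.forward[2] * prev.forward[2];

    return dist > teleportThreshold || dot < 0.0f;  // moved far or turned >90 degrees
}

void PrepareUpscalerInputs(const CameraState& prev, const CameraState& curr,
                           TemporalFrameInputs& inputs) {
    inputs.reset = DetectCameraCut(prev, curr);
    // EvaluateUpscaler(inputs);  // hypothetical call into the DLSS/FG wrapper
}
```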
 
3x pure raytracing is not enough. Would be barely better than Ampere.

/edit: Cyberpunk with Psycho RT setting:
4090 is 4x faster than 6900XT in a hybrid rendering game. And that is without SER and Overdrive RT...
Rich said 3.5X faster. If my maths is correct based on Rich's words, the 3090 Ti is ~89% faster than the 6900 XT in that benchmark. So a 3X increase would make RDNA 3 59% faster than the 3090 Ti.

Edit: Unless you mean 3X just for the raytracing portion, not for the overall performance.
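
(Sanity-checking that last step, reading the multipliers as straight ratios: if the 3090 Ti is ~1.89X the 6900 XT, then a 3.0X part lands at 3.0 / 1.89 ≈ 1.59, i.e. ~59% ahead of the 3090 Ti, so the maths holds.)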
 
A lot of those games at 8K were run with DLSS Ultra Perf mode, lol.
But the games that ran natively kind of tell a different story.

Forza Horizon 5: the 4090 delivers 2.1X the fps of the 3090 @8K.
Overwatch 2: the 4090 is 2.3X the 3090 @8K.
Warzone: the 4090 is 90% faster than the 3090 @8K.

So with 8K, removing the CPU bottleneck can really put these transistors to work on delivering the theoretical 2X performance uplift.
 

I'm commenting on the purpose of even referring to the resolution as "8K" when you're using ultra performance mode, not on its uplift over a 3090.
 
One thing I'm left wondering about after reading many reviews is whether the b/w "constraints" of the rebuilt memory subsystem are somewhat hidden by the fact that pretty much everything is CPU limited below 4K. There are some examples where scaling at 4K is worse than at 1440p, but admittedly, overall it looks like Nv has basically solved the b/w problem in Lovelace - the 8K results above kinda highlight that. Guess we'll have to wait for smaller chips (with smaller L2s and less prone to running into CPU limits at 1440p) to have a proper answer.
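
As a rough illustration of why a big L2 can hide a modest DRAM b/w increase (numbers here are purely made up): if a frame generates, say, 2 TB/s of memory traffic and the L2 absorbs half of it, DRAM only has to supply ~1 TB/s - effective DRAM demand scales with (1 - hit rate). Whether the hit rate holds up with 4K/8K-class working sets on the smaller chips is exactly the open question.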
 
Once upon a time there was a company that didn't care about certain features. It was called 3dfx, and it didn't last long.
They of course downplayed everything they didn't have, but they did care. They just couldn't get the new architecture out the door for whatever reason and ran out of money.
 
So with 8K, removing the CPU bottleneck can really put these transistors to work on delivering the theoretical 2X performance uplift.

Increasing resolution may remove CPU bottlenecks but it also shifts GPU bottlenecks, and should not be used to infer GPU-bottlenecked 4K results.
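
To make the bottleneck point concrete, here's a toy frame-time model (all numbers invented for illustration, not measurements): the displayed frame rate is roughly 1000 / max(CPU ms, GPU ms), so a 2X-faster GPU only shows its full uplift once the GPU term dominates.

```
// Toy model: fps is limited by whichever of CPU or GPU takes longer per frame.
// Numbers are invented purely to illustrate why a 2X-faster GPU shows its full
// uplift at 8K but not at CPU-limited lower resolutions.
#include <algorithm>
#include <cstdio>

double Fps(double cpuMs, double gpuMs) {
    return 1000.0 / std::max(cpuMs, gpuMs);
}

int main() {
    const double cpuMs = 5.0;  // hypothetical CPU cost per frame, resolution-independent

    // Hypothetical GPU frame times for an "old" and a "2X faster" GPU.
    const double gpu1440pOld = 6.0, gpu1440pNew = 3.0;
    const double gpu8kOld = 40.0, gpu8kNew = 20.0;

    std::printf("1440p: %.0f -> %.0f fps (%.2fx)\n",
                Fps(cpuMs, gpu1440pOld), Fps(cpuMs, gpu1440pNew),
                Fps(cpuMs, gpu1440pNew) / Fps(cpuMs, gpu1440pOld));
    std::printf("8K:    %.0f -> %.0f fps (%.2fx)\n",
                Fps(cpuMs, gpu8kOld), Fps(cpuMs, gpu8kNew),
                Fps(cpuMs, gpu8kNew) / Fps(cpuMs, gpu8kOld));
    // At 1440p the new GPU is capped by the 5 ms CPU (200 fps), so the measured
    // uplift is ~1.2x; at 8K both runs are GPU-bound and the full 2.0x shows.
    return 0;
}
```

The flip side, as noted above, is that at 8K the binding GPU resource may be different (b/w, ROPs, etc.), so the 8K ratio isn't automatically the ratio you'd see in a GPU-bound 4K scenario either.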
 
With the proper syncing going on, DLSS 3's added latency is pretty unimportant to me subjectively. My review shows quite well how much it is in a real game scenario. I think most will not notice it. NV just needs a solution to VSync for it; I think they are working on it.
Lol, I reread my comments here by chance and realised I typed "important" here even though I meant "unimportant", in a way.
I hate typing on a phone. :p
Basically - the added input latency? Not noticeable when not in a pure VSync situation. In a pure VSync situation ATM, you can notice it.
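
(Rough back-of-the-envelope on why it's small outside of VSync, as I understand the technique: frame generation has to hold the newest rendered frame back so it can interpolate between it and the previous one, so the added delay is on the order of one native frame - roughly 16.7 ms at a 60 fps internal rate, less at higher rates. With VSync the presentation queue stacks on top of that, which is presumably why it becomes noticeable there.)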
 
I fully expect as time goes on that games and engines will become better at interfacing with the technology and do a much better job.

DLSS3 is the same as 2: it just takes the required engine inputs and spits out the frames; there are no engine-side improvements to be made at all once it's functional. What you see is what you get.

It's neat though. Drops responsiveness, but you don't have to turn it on, and a new switch is always cool. That being said, who has a screen that goes above 4K 120Hz? And if you do, why would you want one instead of a better-image-quality OLED? DLSS3 seems like it'd be better for lower-end cards; going from 40 to 80 fps even with glitches would be a much more obvious jump. Hopefully the high overhead of Nvidia's fixed-function hardware doesn't get in the way of that.

Regardless, Nvidia's graphics engineering team needs an overhaul in goals. They need to eliminate work on weird vendor extensions that might never get used because they just assume they'll dominate the graphics scene profitably forever, and concentrate on delivering performance-per-dollar improvements. Jensen's lame "Moore's Law is dead" excuse is just that: we've known Moore's Law has been dead for over a decade now, and you've done absolutely nothing about it. AMD and Intel are both dealing with it; Nvidia should be able to as well.
 
RT h/w, mesh shaders, DLSS, and pretty much everything new Nv has done in GPUs since Turing is them "dealing" with the fact that "Moore's Law is dead". They have done way more to combat it than either AMD or Intel has so far, if you think about it. And most of it has actually become industry standard.
 
DLSS3 is the same as 2: it just takes the required engine inputs and spits out the frames; there are no engine-side improvements to be made at all once it's functional.
Ah, so it takes the required engine inputs you say.....
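
That's really the crux of the exchange: "engine agnostic" still means the engine hands over a fair amount of data every frame. A rough sketch of the kind of per-frame inputs a temporal upscaling / frame-generation integration consumes (field names below are mine for illustration, not the actual SDK's):

```
// Illustrative only: roughly the per-frame data an engine feeds a temporal
// upscaling / frame-generation integration. Field names are invented; the real
// SDK has its own structures, but the categories are the point.
struct TextureHandle { unsigned id = 0; };  // stand-in for an API texture handle

struct FrameGenInputs {
    TextureHandle hudlessColor;   // scene color before UI is composited
    TextureHandle uiColor;        // UI layer (so the HUD isn't warped/interpolated)
    TextureHandle depth;          // scene depth
    TextureHandle motionVectors;  // per-pixel motion, engine-provided

    float jitterOffset[2]      = {0.0f, 0.0f};  // TAA sub-pixel jitter this frame
    float motionVectorScale[2] = {1.0f, 1.0f};
    bool  reset = false;          // camera cut / history invalid
};

// The engine fills this every frame and the driver-side NN does the rest,
// which is why "engine agnostic" and "needs engine inputs" are both true.
```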
 
So for some RT-focused numbers: the 4090 is anywhere from 70% to 2X faster than the 3090 depending on scene complexity. Versus the 6900XT, however, the 4090 can be as much as 4X faster in heavy path-traced games, 3X faster in heavy RT titles, and 2.5X faster in other titles.

From PCGH, @4K the 4090 is:

Cyberpunk 2077: 2X faster than the 3090, 3.3X faster than the 6900XT
Dying Light 2: 2X faster than the 3090, 3.5X faster than the 6900XT
Lego Builder's Journey: 2X faster than the 3090, 3.2X faster than the 6900XT
F1 2022: 90% faster than the 3090, 2.6X faster than the 6900XT
Ghostwire Tokyo: 88% faster than the 3090, 2.7X faster than the 6900XT
Metro Exodus: 80% faster than the 3090, 2.6X faster than the 6900XT
Doom Eternal: 78% faster than the 3090, 2.9X faster than the 6900XT
The Riftbreaker: 75% faster than the 3090, 2.7X faster than the 6900XT
Guardians of the Galaxy: 57% faster than the 3090, 2.6X faster than the 6900XT


From comptoir-hardware, @4K the 4090 is:

Quake 2 RTX: 80% faster than the 3090, 4X faster than the 6900XT
Minecraft RTX: 77% faster than the 3090, 4.3X faster than the 6900XT
Cyberpunk 2077: 90% faster than the 3090, 3.7X faster than the 6900XT
Dying Light 2: 85% faster than the 3090, 3.3X faster than the 6900XT
Hitman 3: 90% faster than the 3090, 3X faster than the 6900XT
F1 2022: 90% faster than the 3090, 3X faster than the 6900XT
Metro Exodus: 94% faster than the 3090, 2.7X faster than the 6900XT
Watch Dogs Legion: 78% faster than the 3090, 2.6X faster than the 6900XT
Ghostwire Tokyo: 95% faster than the 3090, 2.9X faster than the 6900XT
Control: 75% faster than the 3090, 2.6X faster than the 6900XT
Doom Eternal: 70% faster than the 3090, 2.8X faster than the 6900XT
Spider-Man: 83% faster than the 3090, 2.4X faster than the 6900XT
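
If anyone wants a single aggregate figure out of the PCGH set above, the geometric mean of the per-game ratios is the usual way to do it; the sketch below just hard-codes the vs-3090 factors already listed (no new data):

```
// Geometric mean of the PCGH 4090-vs-3090 ratios quoted above
// (2X = 2.0, 90% faster = 1.9, etc.). Purely a convenience aggregation.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const std::vector<double> vs3090 = {
        2.0, 2.0, 2.0, 1.9, 1.88, 1.8, 1.78, 1.75, 1.57
    };
    double logSum = 0.0;
    for (double r : vs3090) logSum += std::log(r);
    std::printf("geomean vs 3090: %.2fx\n", std::exp(logSum / vs3090.size()));
    return 0;
}
```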


 
Yeah, interestingly gains in path traced applications don't seem to be out of the ordinary. I wonder if this is another indication of insufficient memory bandwidth increases.
 
In Justice @4K with max ray tracing, the 4090 is 2.25X faster than the 3090 Ti at native resolution. With DLSS3, performance is boosted by 5X, while power consumption goes down to 370W.

Yes, specific scenarios in obscure titles explicitly pointed to by a vendor tend to be outliers.
 
Jensen's lame "Moore's Law is dead" excuse is just that, we've known Moore's Law as dead for over a decade now and you've done absolutely nothing about it. AMD and Intel are both dealing, Nvidia should be able to just as well.

...maybe? I mean, if AMD significantly undercuts Nvidia with RDNA3, it will be a relatively recent development; we're very far from the 9800 Pro days. With the die size of Arc, it's pretty apparent that Intel's price target is largely a result of their driver situation and not necessarily a market they targeted from the outset - I think they really wanted a 3070/3070 Ti competitor, the end result just didn't meet their expectations, and they had to take this route.

Still, price vs. performance is ultimately what matters, and it's clear from Nvidia's profit margins that they're not coming to us with hat in hand, despite Jensen scolding us about Moore's Law. I just don't think you can point to AMD/Intel as bulwarks against Nvidia's swelling prices at this point, though; they haven't delivered yet. Arc is largely a curiosity for a small segment of the market that knows exactly what it's getting into, and AMD's raster price/performance is very good at retail right now for the sub-$600 segment, but you pay an obvious cost in RT performance. Both of those are pretty large qualifiers.

So you can't really say Nvidia has been frivolous in spending their silicon budget on proprietary features when no competitor has stepped up with a value offering that makes Nvidia's GPUs wholly dependent on that proprietary tech to warrant their asking price. The best AMD has done is ballpark raster at comparable prices; they've been 'dealing' with three of a kind at best. We'll find out soon enough if they decide to lay down a royal flush with RDNA3.
 
FG pretty much requires a 60 fps minimum prior to frame generation though, so if you aren't hitting that you will likely be seeing the artifacts too often.
Arstechnica.com did FG from 40-50 fps and they say it looks fine.
I have not watched any FG videos though.
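
(The arithmetic behind that rule of thumb, as I understand it: FG roughly doubles the output rate, so from a 60 fps base each generated frame sits on screen for ~8.3 ms of a 16.7 ms gap between real frames, while from 40 fps it sits there for ~12.5 ms of a 25 ms gap - any interpolation artifact is both larger, since there's more motion between the two real frames, and visible for longer. Whether 40-50 fps already looks "fine", as in the Ars piece, probably comes down to the game and the viewer.)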
 