Nvidia GeForce RTX 4090 Reviews

Got my RTX 4090 today, did some pure RT benchmarks.
(Note: Nothing has implemented SER here)

1. ReSTIR PT Sample
MSI RTX 3080 Ventus 3X OC - 21 FPS
INNO3D RTX 4090 X3 OC - 56 FPS
INNO3D RTX 4090 X3 OC 315W - 56 FPS

2. Ray Tracing In Vulkan - Scene 3
MSI RTX 3080 Ventus 3X OC - 40.8 FPS
INNO3D RTX 4090 X3 OC - 123.3 FPS
INNO3D RTX 4090 X3 OC 315W - 113.3 FPS

3. Same as above - Scene 5
MSI RTX 3080 Ventus 3X OC - 40.2 FPS
INNO3D RTX 4090 X3 OC - 119.7 FPS
INNO3D RTX 4090 X3 OC 315W - 106.3 FPS
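Purely as a sanity check, here's a quick Python sketch of the relative speedups from the numbers above (the dict labels are my own shorthand for the three tests):

```python
# 3080 vs 4090 FPS pairs copied from the results above.
results = {
    "ReSTIR PT Sample":     (21.0, 56.0),
    "RT In Vulkan Scene 3": (40.8, 123.3),
    "RT In Vulkan Scene 5": (40.2, 119.7),
}

for name, (fps_3080, fps_4090) in results.items():
    speedup = fps_4090 / fps_3080
    print(f"{name}: {speedup:.2f}x")
```

So the 4090 lands at roughly 2.7-3x the 3080 in pure RT, even before SER comes into play.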
Falcor might be worth trying too, as it's used in recent research by Nvidia. In their presentations they talk about the importance of the caches, so I'd expect the 4090 to benefit a lot.
 


Seems the 4090 can stretch its legs a bit more with a 13900K. Some of the results seem absurd, though. 47% extra performance in Far Cry 6? From what Intel calls a stopgap release until Meteor Lake is done?

Meteor Lake is supposedly the next true leap for Intel; Raptor Lake is still on 10nm.
 
Hmm. I tried testing Falcor and opened a .pyscene file, but it doesn't display anything except the default green background.
 



Those are some utterly insane speedups for a CPU.

Tomb Raider going from 285 fps to 351 fps in gameplay, and the internal CPU renderer going from 327 fps to 625 fps! How is this even possible?
 
TechPowerUp's review shows the same speedups: 20 to 30 fps improvements across games at minimum, using a 3080 Ti.

Timestamped to the benchmarks, this 13900K is just ridiculously capable: 40% faster in Blender. A mini-Threadripper, if you will; the fastest consumer CPU in the world.
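To put those Tomb Raider numbers from the earlier post in perspective, a quick back-of-the-envelope calculation of the relative gains:

```python
# Tomb Raider figures quoted above: gameplay FPS and the internal
# CPU-renderer FPS, before and after the 13900K upgrade.
gameplay_old, gameplay_new = 285, 351
cpu_render_old, cpu_render_new = 327, 625

gameplay_gain = gameplay_new / gameplay_old - 1        # roughly +23%
cpu_render_gain = cpu_render_new / cpu_render_old - 1  # roughly +91%

print(f"gameplay: +{gameplay_gain:.0%}, CPU renderer: +{cpu_render_gain:.0%}")
```

So the CPU-bound renderer nearly doubles while actual gameplay gains about a quarter, which suggests gameplay was already partly GPU-limited.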

 
I've seen some people suggest gaming performance is better with E-cores disabled because E-cores don't really get used in games, but maybe that has been fixed, either architecturally or in Windows updates?
 
Foremost, unlike the 12900K, the 13900K gets a virtually unlimited TDP on a high-end board. Those 300+ W surely make a difference in some cases.

Some games might like the additional threads provided by the flood of E-cores.

Some might profit from the bigger L2 cache.

And some games don't like the slow inter-cluster communication of Zen and the 12900K.
 
"80% seems like the best deal imo"

Absolutely, trading 2 fps for 50 W of TDP seems very reasonable. It's akin to CPUs these days, pushed beyond the sweet spot for a couple of percent more performance, all to stay ahead of the competition.
 
Undervolting is definitely interesting with this architecture. I tested various setups with Speed Way:

Stock - 94 fps - 413 W
OC (+230 core, +1400 mem) - 101 fps - 428 W
OC (same as above, undervolted to 1000 mV) - 99 fps - 398 W
OC (undervolted to 950 mV) - 97 fps - 353 W
OC (undervolted to 950 mV, memory at +0) - 92 fps - 338 W

So by undervolting and overclocking core/memory, you can end up with better performance than stock at 60 W less. But running at the same clocks at a lower voltage gives slightly lower results, as other people have already seen.
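The efficiency angle is easier to see as perf-per-watt. A quick Python pass over the table above (the config labels are my own shorthand):

```python
# FPS and board power from the Speed Way runs listed above.
runs = {
    "stock":             (94, 413),
    "OC":                (101, 428),
    "OC + 1000 mV UV":   (99, 398),
    "OC + 950 mV UV":    (97, 353),
    "950 mV UV, mem +0": (92, 338),
}

for name, (fps, watts) in runs.items():
    print(f"{name}: {fps / watts:.3f} fps/W")
```

The 950 mV undervolt with the memory OC comes out clearly ahead on fps/W, which matches the conclusion that the undervolt+OC combo is the sweet spot.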
 

Hmm, so there are already bandwidth concerns in some scenarios if the memory needs an overclock. I'm starting to doubt those rumors of a "4090 Ti", or at least doubt it would bring any appreciable performance increase. Of course, AMD did successfully ship overclocked memory in the severely limited-run 6900 XT LC. I suppose if Nvidia wanted it badly enough, they could do their own 600 W TDP, liquid-cooled, FE-only version.
 
Clearly the one thing that hasn't changed this gen is bandwidth... At least in Speed Way, most of the performance gains come from overclocking the memory. The cache probably mitigates the hit quite a bit.

I could see a 4090 Ti in 9-12 months with the full-fat AD102 and higher-binned/clocked memory, maybe a 10-15% uplift over the stock 4090.
 