Nvidia GeForce RTX 5090 reviews

So that includes the 2000 series? And what is Mega Geometry?

It's a new API that basically extends ray tracing to support streaming triangle clusters, so it's compatible with mesh shading and Nanite. It also seems the BVH rebuilds are accelerated on the GPU with lower CPU requirements. Not sure how it all works yet, but the intent is to fit detailed models into the BVH by streaming in clusters without a big CPU hit, so you won't have reflections that show simplified LODs of objects, etc. Not sure how well it'll work on older GPUs, because the new RT cores on Blackwell have support for cluster intersections, on top of triangle and box/volume intersections.
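
If it helps to picture the flow, here's a rough sketch of what I think "streaming clusters into the BVH" amounts to. Every type and function name below is made up for illustration; it's not the actual DXR/Vulkan cluster API, just the shape of it as I currently understand it:

```cpp
// Conceptual sketch only -- hypothetical names, not the real cluster API.
// Geometry arrives as small triangle clusters (meshlet/Nanite granularity),
// each cluster gets its own tiny acceleration structure built on the GPU,
// and the object-level BVH only references those clusters, so the CPU never
// has to walk individual triangles.
#include <cstdint>
#include <vector>

struct TriangleCluster { const void* verts; const void* indices; uint32_t triCount; };
struct ClusterAS  { uint64_t gpuAddress; };   // per-cluster BVH, built on the GPU
struct ObjectBLAS { uint64_t gpuAddress; };   // bottom-level AS that references clusters

// Stubs standing in for GPU-side build commands (the real work happens on-device).
ClusterAS  buildClusterASOnGpu(const TriangleCluster&)            { return {0x1000}; }
ObjectBLAS buildBlasFromClusters(const std::vector<ClusterAS>& v) { return {0x2000 + v.size()}; }

ObjectBLAS streamInDetailedModel(const std::vector<TriangleCluster>& residentClusters)
{
    std::vector<ClusterAS> clusterASes;
    clusterASes.reserve(residentClusters.size());

    // Only the clusters currently streamed in get an acceleration structure,
    // so full-detail geometry (not a simplified proxy LOD) lands in the BVH.
    for (const TriangleCluster& c : residentClusters)
        clusterASes.push_back(buildClusterASOnGpu(c));

    // Cheap object-level rebuild: re-reference the cluster ASes instead of
    // re-processing every triangle whenever streaming/LOD changes.
    return buildBlasFromClusters(clusterASes);
}
```

The win, if it works as advertised, is that the CPU only schedules builds for whatever clusters are resident instead of re-feeding full triangle soups every time streaming or LOD changes.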
 
Finally, the reason I specified all releases is that if you look at the top 50 most played games on Steam, that list is not remotely ruled by AAA games... So most people spend a lot of time playing games that are not considered AAA, which makes filtering by AAA entirely pointless.

Those games also don’t particularly benefit from cutting-edge GPU hardware, so how are they relevant to this convo?
 
Those games also don’t particularly benefit from cutting-edge GPU hardware, so how are they relevant to this convo?
They are very relevant because of high frame rates. Many people, including myself, buy high-end GPUs to play at high framerates in competitive games. For example, I purchased a 4K 240Hz monitor that I'd like to be able to run at native with DLAA in competitive games. When a 4K 480Hz monitor comes out, I'll purchase that as well. My 4080 Super is not cutting it and I need something more powerful. I would have purchased the 5090 if it were actually a meaningful upgrade over the 4090 while not blasting 575W of power. However, it's not a meaningful upgrade and I don't want a next-gen "Fermi" card in my house during the hot summers.
 
that list is not remotely ruled by AAA games... So most people spend a lot of time playing games that are not considered AAA, which makes filtering by AAA entirely pointless.
AAA games are the reason we buy high-end GPUs; nobody buys a 5090 to play games with basic graphics.

I very much doubt it... The devs most likely to adopt Mega Geometry first are the ones that will need it for their projects.
Alan Wake 2 already has it, and the NVRTX branch of UE5 already has it.
That's provided the consoles support a similar feature. If they don't, I expect an even slower uptake for multi-platform projects.
That's a very old, irrelevant misconception. Heavy RT and PT are not properly supported on consoles, yet their uptake on PC has been the quickest in recent memory.
 
AAA games are the reason we buy high-end GPUs; nobody buys a 5090 to play games with basic graphics.
...

This is actually not even remotely true. COD bros buy high-end GPUs, set the graphics to low and max out their frames. That's true of every competitive shooter. People even do that with Valorant to try to get uncapped 800 fps or whatever. Marvel Rivals is pretty GPU-demanding, so same deal: they'll turn the settings down as much as they can and throw a big GPU at it, mostly at 1440p now.
 
Yea, the 5090 is definitely a gaming card this time around.
$2000 gets you the 5090.

$3000 gets you Project Digits, which has 128GB of memory. That's way more useful for data science for only 50% more, especially if you're working with LLMs. 128GB is going to give you access to tune 220B parameter models.
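
Rough back-of-envelope on why 128GB lines up with that kind of parameter count. This is weights only, ignoring activations, KV cache and any optimizer state, so treat it as a floor:

```cpp
// Back-of-envelope weight footprint: params * bytes-per-param.
// Assumptions: weights only; activations, KV cache and optimizer state
// (which full fine-tuning would add on top) are ignored.
#include <cstdio>

int main()
{
    const double params = 220e9;   // the 220B figure above
    const struct { const char* name; double bytesPerParam; } precisions[] = {
        {"FP16/BF16", 2.0},
        {"INT8",      1.0},
        {"INT4/FP4",  0.5},
    };

    for (const auto& p : precisions)
        std::printf("%-10s : %6.1f GB of weights\n",
                    p.name, params * p.bytesPerParam / 1e9);
    // FP16/BF16 :  440.0 GB  -> does not fit in 128 GB
    // INT8      :  220.0 GB  -> does not fit
    // INT4/FP4  :  110.0 GB  -> fits, with ~18 GB left for everything else
    return 0;
}
```

So the headline number only really works at roughly 4-bit precision, and "tuning" at that size realistically means adapter-style fine-tuning rather than full fine-tuning, which would need several times more memory.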

I can't believe the cost of Digits. It's so much cheaper than the workstation cards.
 
They are very relevant because of high frame rates. Many people, including myself, buy high-end GPUs to play at high framerates in competitive games. For example, I purchased a 4K 240Hz monitor that I'd like to be able to run at native with DLAA in competitive games. When a 4K 480Hz monitor comes out, I'll purchase that as well. My 4080 Super is not cutting it and I need something more powerful. I would have purchased the 5090 if it were actually a meaningful upgrade over the 4090 while not blasting 575W of power. However, it's not a meaningful upgrade and I don't want a next-gen "Fermi" card in my house during the hot summers.

That’s fair and a valid use case. It’s not the primary target workload for these monster GPUs, though. 99% of games on Steam don’t need a 5090, which is an obvious fact but not a relevant one.
 
AAA games are the reason we buy high-end GPUs; nobody buys a 5090 to play games with basic graphics.
Please speak for yourself.
Alan Wake 2 already has it, and the NVRTX branch of UE5 already has it.
Congratulations, 1 game out of 19,000 released last year...
That's a very old, irrelevant misconception. Heavy RT and PT are not properly supported on consoles, yet their uptake on PC has been the quickest in recent memory.
Based on what? That's certainly not remotely true at all. The RT uptake is faster than PBR, TAA, SMAA, FXAA? Lol... In terms of technologies, the RT uptake has been one of the slowest in recent memory.
 
That’s fair and a valid use case. It’s not the primary target workload for these monster GPUs, though. 99% of games on Steam don’t need a 5090, which is an obvious fact but not a relevant one.

If you've got a 1440p 360Hz or 480Hz monitor, the big GPUs are definitely relevant for competitive games that people play on low settings. They're actually very necessary. Try getting 480 fps in Marvel Rivals. You would basically need a 9800X3D and a 4090 to hope to hit 300 at 1440p low settings.
 
That’s fair and a valid use case. It’s not the primary target workload for these monster GPUs, though. 99% of games on Steam don’t need a 5090, which is an obvious fact but not a relevant one.
Well, the primary use case for these GPUs is AI, not gaming, so...
 
If you've got a 1440p 360Hz or 480Hz monitor, the big GPUs are definitely relevant for competitive games that people play on low settings. They're actually very necessary. Try getting 480 fps in Marvel Rivals. You would basically need a 9800X3D and a 4090 to hope to hit 300 at 1440p low settings.

Yes, but how many games like that are there? 10? Certainly a lot fewer than blockbuster AAA fare. The idea that cutting-edge features don’t matter because low-fidelity twitch shooters don’t use them isn’t a particularly strong point. Low-fidelity twitch shooters don’t drive the high-end GPU market.
 
If you've got a 1440p 360Hz or 480Hz monitor, the big GPUs are definitely relevant for competitive games that people play on low settings. They're actually very necessary. Try getting 480 fps in Marvel Rivals. You would basically need a 9800X3D and a 4090 to hope to hit 300 at 1440p low settings.

And most of them have awful frametime consistency because they don’t tune their CPU and mem.
 
This is actually not even remotely true. COD bros buy high-end GPUs, set the graphics to low and max out their frames. That's true of every competitive shooter. People even do that with Valorant to try to get uncapped 800 fps or whatever. Marvel Rivals is pretty GPU-demanding, so same deal: they'll turn the settings down as much as they can and throw a big GPU at it, mostly at 1440p now.
With the current high-end GPUs being CPU-limited even at 4K, you won't need a 5090 for that; I'm guessing a 4080 will be more than enough at 1440p low.


The RT uptake is faster than PBR, TAA, SMAA, FXAA? Lol... In terms of technologies, the RT uptake has been one of the slowest in recent memory.
Compared to other high-end API features such as DX11 tessellation.

Congratulations, 1 game out of 19,000 released last year...

Again, this fallacy of 19K games... you are basically counting 2D games and visual novels on Steam, most of which don't even need a dGPU to run.
 
Are they? Do we know that FSR is optimized to run on Nvidia h/w as well as on AMD's or Intel's? Are there no h/w specific paths in it?
There aren't any hardware-specific paths in FSR. You may be thinking of XeSS having a different hardware implementation when running on Arc-based GPUs versus all others. FSR is the same shader code everywhere, which was billed by AMD as an overall software win because anyone can use it anywhere. Whether anyone should use it anywhere is obviously another matter :)
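
To make "the same shader code everywhere" concrete, here's a toy host-side sketch. The names are hypothetical and this is not AMD's actual FidelityFX source; the point is that the only per-GPU variation I'd expect is selected by a reported capability (e.g. packed FP16 support), not by vendor ID:

```cpp
// Illustrative sketch with made-up names: one shared shader source, and the
// only variant selection is a capability check, not a vendor check.
#include <cstdio>
#include <string>

struct DeviceCaps { bool shaderFloat16; };   // queried from the graphics API

std::string upscalerPrecisionDefine(const DeviceCaps& caps)
{
    // Any GPU that reports FP16 support (recent NVIDIA, AMD and Intel parts
    // all do) gets the packed-FP16 variant of the same source; everything
    // else falls back to the FP32 variant.
    return caps.shaderFloat16 ? "USE_PACKED_FP16=1" : "USE_PACKED_FP16=0";
}

int main()
{
    for (bool fp16 : {true, false})
        std::printf("FP16 supported: %d -> compile shared source with -D%s\n",
                    fp16, upscalerPrecisionDefine({fp16}).c_str());
    return 0;
}
```
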
Reviews are supposed to show the capabilities of the product the review is about.
This assumes a lot about the review. When comparing discrete performance, this scope is necessarily narrowed to performance where workload is consistent. If this isn't the case, then we're back to allowing DLSS and FG to become "comparable" to native rasterization, because hey, that's a capability of the product, so the extra "performance" is apples to apples, right?

No, it's not.
 
Again, this fallacy of 19K games... you are basically counting 2D games and visual novels on Steam, most of which don't even need a dGPU to run.
You're free to gatekeep however you'd like, but I'll be counting all games. As long as people play and pay money for the games, they need to be counted. Once we start getting into deliberations about which games need RT and which don't, the waters get muddy. I don't only play AAA games on my 4080 Super. I play old games, indie games, AA games, etc. Many people also have a variety of tastes and use cases. As it stands, we have people arguing that games should go back to worse graphics because of the effect the focus on graphics is having on the other parts of the game. No, what I wrote is not a joke. People are actually having that argument.

Please do not take this the wrong way, but trying to pass off your use case as the standard of evaluation is certainly not something I can get behind. Let everyone evaluate for themselves and arrive at their own conclusions. You're entitled to feel the way you do and I'm entitled to disagree. It's really not that deep.
 
There aren't any hardware-specific paths in FSR.
The last time I looked into FSR2, there were definitely h/w-specific paths for GPUs which support FP16 precision. Are these optimized to run on Nvidia's h/w?

This assumes a lot about the review. When comparing discrete performance, this scope is necessarily narrowed to performance where workload is consistent
By whom? I don't see any point in narrowing the scope of anything. A review should provide the maximum possible relevant data. Which also means that data which doesn't say much about the product being reviewed would be irrelevant to that review.

If this isn't the case, then we're back to allowing DLSS and FG to become "comparable" to native rasterization, because hey, that's a capability of the product, so the extra "performance" is apples to apples, right?

No, it's not.
From a visual perspective, it may well be. We've talked about this and how reviews which are using HL2 in 2025 to compare how GPUs perform aren't really telling us the whole truth. Reviews which use FSR on GeForce cards, or even FSR3 and DLSS3 at the same scaling ratios, aren't telling us the actual truth - and yet there are a lot of these, and some of them even include these results in their final "average performance" percentages or use those averages for relative value calculations.
TPU's average result is among the highest, and it's one of the reviews which managed to not use any upscaling at all. What does that tell us about the 5090 and the other reviews?
 