Nvidia GeForce RTX 5090 reviews

I will absolutely be doing this when I get one... Still gotta decide which one to get, although the FE still speaks to me with that swanky dual slot cooler.
I'd get the FE for sure. That's the kind of engineering that impresses me. Unfortunately it'll probably be impossible to purchase. I can't recall seeing any FE models of the 4090 listed on Newegg. Even finding any model of 5090 at MSRP should be an interesting challenge. I have access to distributors like D&H and Ingram Micro, and they are usually no better than Newegg or Amazon in terms of GPU stock.
 
I will absolutely be doing this when I get one... Still gotta decide which one to get, although the FE still speaks to me with that swanky dual slot cooler.

If I were in the market for a 5090 I'd very much lean towards one of the chunky traditional-design AIB variants. These cards are going to be viable for a very long time (>10 years), and sooner or later you might want to disassemble that thing to clean, repaste, or repad it. Nickel plating or not, I don't see much reason to be confident that the liquid metal TIM won't eventually soak in and degrade itself or the cooler, or that the gasket won't eventually fail. Granted, none of these designs have adequate power connectors, so they're kind of a sloppy/compromised design no matter how you slice it. The idea that one of the hottest parts in your computer is a plastic power connector that isn't temperature monitored or controlled is embarrassing.
 
If I were in the market for a 5090 I'd very much lean towards one of the chunky traditional-design AIB variants. These cards are going to be viable for a very long time (>10 years), and sooner or later you might want to disassemble that thing to clean, repaste, or repad it. Nickel plating or not, I don't see much reason to be confident that the liquid metal TIM won't eventually soak in and degrade itself or the cooler, or that the gasket won't eventually fail. Granted, none of these designs have adequate power connectors, so they're kind of a sloppy/compromised design no matter how you slice it. The idea that one of the hottest parts in your computer is a plastic power connector that isn't temperature monitored or controlled is embarrassing.
I forgot the FE uses liquid metal. I don't like that. Do we know if the AIB models will also use it?
 
Last time I looked into FSR2, there were definitely h/w-specific paths for GPUs that support FP16 precision. Are these optimized to run on Nvidia's h/w?
It's just FP16 math; it isn't specific to any vendor, unlike XeSS.
I don't see any point in narrowing the scope of anything. A review should provide the maximum possible relevant data.
Relevant data, when we're talking about performance comparisons, is NOT changing the software workload to generate incompatible performance results. Comparisons of FSR versus DLSS belong in a journalistic piece focused on upscaling, which absolutely should delve into performance and qualitative analysis -- but not in a physical, bare-metal comparison between vendors. We don't change drag races to half miles versus quarter miles because one car happens to have only been optimized for quarter miles versus another for half miles. A journalist with integrity can and should certainly mention the caveats while providing the data, but a consistent set of testing variables to produce a consistent set of results is absolutely required for rational performance comparisons.

You're welcome to disagree with me, but this isn't something I'm going to budge on. So much so, in fact, that this reply will be my last on this topic.

From a visual perspective it may well be.
They aren't; FSR and DLSS do not generate video output equivalent to native, nor does FG, and you understand this better than most people, so I'm completely lost as to why you even floated such a statement. Reminding us that there are journalists who somehow try to make those data comparisons doesn't make their comparisons right in that regard, and again, you know this.

I'm done having this specific conversation, because I fail to understand how you don't understand and ultimately I don't care enough to continue.
 
It's just FP16 math; it isn't specific to any vendor, unlike XeSS.
FP16 math is anything but "just". It can be packed, it can run with various output formats and different storage optimizations. It should in fact do all of that if the h/w allows for it, but then it wouldn't be "the same workload" being executed by all h/w.
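
To sketch what I mean (illustrative CUDA with made-up kernel names, not anything from the actual FSR2 source): the "same" FP16 multiply-add can be issued per scalar or packed two to a register, and hardware taking the packed path executes a measurably different instruction stream:

```
// Minimal sketch, not FSR2 code: two ways to issue "the same" FP16 math.
#include <cuda_fp16.h>

// Scalar path: one half-precision FMA per element.
__global__ void fma_scalar(const __half* a, const __half* b, __half* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = __hfma(a[i], b[i], out[i]);
}

// Packed path: two FP16 lanes per __half2 register, so half the
// instructions for the same arithmetic. Different instruction stream,
// different register layout - not literally "the same workload".
__global__ void fma_packed(const __half2* a, const __half2* b, __half2* out, int n2)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n2) out[i] = __hfma2(a[i], b[i], out[i]);
}
```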

Relevant data, when we're talking about performance comparisons, is NOT changing the software workload to generate incompatible performance results. Comparisons of FSR versus DLSS belong in a journalistic piece focused on upscaling, which absolutely should delve into performance and qualitative analysis -- but not in a physical, bare-metal comparison between vendors.
Then we should probably disqualify about 3/4 of the reviews from yesterday and today as valid assessments of Blackwell's performance.

They aren't; FSR and DLSS do not generate video output equivalent to native, nor does FG, and you understand this better than most people, so I'm completely lost as to why you even floated such a statement.
Because this statement actually makes a lot of sense when comparing GeForces to other GPUs, where DLSS Performance can be close to the Quality presets of other upscalers. Shouldn't this probably be mentioned somewhere in a good review?
 
That's one way to interpret that post... Obviously the incorrect way, but it's certainly a way... The correct way to interpret that post is that Nvidia will lose talented employees because they can afford to go seek other pursuits in life. It's hard to find and replace really talented employees because the base knowledge required to even compete for that type of position severely limits the available talent pool...

Anyway, it's often interesting how people can read the same thing and some can derive extremely ridiculous interpretations of it...

Have you actually looked at what Nvidia pays their engineers? Very few are going to think of vested options as a windfall worth leaving over when they're already earning up to $1M a year.
 
My 4090 never saw FSR usage nor DLSS Performance usage.
Ever.
Neither will my incoming 5090.
Those metrics are 100% irrelevant to me.

DLAA, FG, DLSS Quality, RT.
Those matter to me.

DF's video also had some game running with FSR... that was when I closed the video 🤷‍♂️
The point of benchmarks isn't "real world usage"; they're benchmarks, and they almost never reflect actual gameplay realities. The point is to normalize visual quality across a set of GPUs to see how they perform rendering the exact same set of frames.

If you use DLSS in a benchmark you basically cannot compare apples to apples across vendors. Perhaps a compromise is having a separate slide with just Nvidia cards (let's be real, that's all anyone cares about anyways lol).
 
Because this statement actually makes a lot of sense when comparing GeForces to other GPUs, where DLSS Performance can be close to the Quality presets of other upscalers. Shouldn't this probably be mentioned somewhere in a good review?
This is mentioned in every discussion of DLSS vs FSR ever. This is the exact reason why mixing upscalers is not going to happen lol.
 
AAA games are the reason we buy high-end GPUs; nobody buys a 5090 to play games with basic graphics.
I would wager over half of the 4090 (and 5090) buyers probably use them mostly for random older multiplayer titles. AAA games aren't very big compared to most of the market, and the 'popular' opinion of new AAA titles is that they're bland and uninspired.
 
Which are 1080, 1080Ti, 2080Ti, 3090, 3090Ti, 4090, 5090. No 2080s or 3080s there.


That depends on how you calculate Blackwell's gains.
TPU shows +35%, for example, while some others show +20%.
I tend to think that anything less than +30% is likely heavily CPU-limited, so looking at that isn't relevant.
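
As a toy illustration of how much the headline number depends on that filtering choice (host-side code with made-up uplift ratios, not measured data):

```
// Sketch with invented numbers: the same per-game results can yield a
// "+20%" or "+35%" headline depending on whether CPU-limited games are
// filtered out before averaging.
#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    // Hypothetical 5090-over-4090 uplift ratios for six games.
    std::vector<double> uplift = {1.05, 1.36, 1.10, 1.34, 1.06, 1.35};

    auto geomean = [](const std::vector<double>& v) {
        double s = 0;
        for (double u : v) s += std::log(u);
        return std::exp(s / v.size());
    };

    std::vector<double> gpu_bound;
    for (double u : uplift)
        if (u >= 1.30) gpu_bound.push_back(u); // crude "not CPU-limited" cut at +30%

    printf("all games:      +%.0f%%\n", (geomean(uplift) - 1) * 100);    // ~+20%
    printf("GPU-bound only: +%.0f%%\n", (geomean(gpu_bound) - 1) * 100); // ~+35%
    return 0;
}
```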


Not really. There was like 2% difference on Ampere between non-RT and RT scaling.
It's actually absurd to claim the 2080 and 3080 weren't higher-performing SKUs.



Ampere absolutely scaled higher when heavy RT was in use.
 
Microcenter: Please don’t camp out at the store for 50 series launch.

Also Microcenter: We expect lines to form before stores open so please arrive early :cautious:

No FE in store at launch. I'll give it a few weeks or months, then check the local Microcenter. If that doesn't work, this generation may be a skip.

 
Why do we need a video from them when the conclusion is always the same? You can go back to 2018 and it was always the same - ray tracing, DLSS 1, DLSS 2, FG, etc. Funny, we have never seen a video about Reflex. Guess you can't hate that...

I don't see the point in the slowdown. On PC, playing with a mouse means movement is a lot more rapid, so the difference between 30, 60, and 120+ frames is much bigger than just moving a character slowly forwards. So reducing the speed while only having 60 FPS can't even come close to capturing the "experience" of FG.
 
Why do we need a video from them when the conclusion is always the same? You can go back to 2018 and it was always the same - ray tracing, DLSS 1, DLSS 2, FG, etc. Funny, we have never seen a video about Reflex. Guess you can't hate that...

Tim’s coverage here is very good and not biased in any way. His conclusions make perfect sense. Frame gen works well with slow linear movement and not as well with rapid dynamic movement and benefits from higher base fps. Nothing surprising there. The killer app for framegen may be 250->1000fps interpolation one day.

I don't see the point in the slowdown. On PC, playing with a mouse means movement is a lot more rapid, so the difference between 30, 60, and 120+ frames is much bigger than just moving a character slowly forwards. So reducing the speed while only having 60 FPS can't even come close to capturing the "experience" of FG.

YouTube's 60 fps cap can't show the experience of FG, so you have to slow the footage down if you want to show each frame. DF has explained this multiple times, and Tim does here as well.
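
For anyone who doubts the arithmetic, a quick back-of-envelope sketch (illustrative capture rates, nothing from the video itself):

```
// Why reviewers slow FG footage down: YouTube caps playback at 60 fps,
// so a capture above that silently drops generated frames unless it is
// slowed enough for every frame to fit.
#include <cstdio>

int main()
{
    const double youtube_fps = 60.0;
    const double captures[] = {120.0, 240.0}; // e.g. 2x and 4x frame generation output
    for (double fps : captures) {
        double slowdown = fps / youtube_fps;
        printf("%.0f fps capture needs %.0fx slowdown to show every frame at 60 fps\n",
               fps, slowdown);
    }
    return 0;
}
```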
 
His conclusion is nonsense. He is faking his scene, like with the "ray tracing has a noise problem" video. Have you played Alan Wake 2 with a mouse? Nobody only moves forward or backwards. If FG only worked well with slow linear movement, then that would mean higher frame rates have no positive effect at all.

I played Spider-Man with FG on my 4090 from the beginning. Spider-Man would be a worst-case scenario for FG, and it just worked there. I find it interesting that everything has to be perfect when it comes to Nvidia.
 
Tim’s coverage here is very good and not biased in any way. His conclusions make perfect sense. Frame gen works well with slow linear movement and not as well with rapid dynamic movement and benefits from higher base fps. Nothing surprising there. The killer app for framegen may be 250->1000fps interpolation one day.
That's not my experience at all. I've just finished playing the COD MW3 campaign, and I used FG and upgraded to FG 4.0 mid-campaign. Both options were completely fine with "rapid dynamic movements" (I played on m+k); base FPS was about 100 and with FG it was locked to 138 all the way, with no noticeable change in input lag either. The new FG is a bit better on input lag I'd say (margin-of-error stuff though), a lot better on frametime health (even on the 40 series), and seems to produce different kinds of HUD artifacting - which are generally invisible during normal play anyway.

And 250->1000 is nice, of course. Why not up the level all the way to 1000->4000 while we're at it? I'm 100% sure that MFG can be used with a 60-100 FPS baseline; no 250 is needed, that's just HUB's FUD again.

Edit: While we're on this topic - FG 4.0 does seem to have some sort of compatibility with Reflex 2's frame warp tech. I'm still unsure how it would work with (M)FG though, considering how the frames are generated and presented. Maybe it's a "turn warp off when FG is active" sort of compatibility.
 