The thumbnail HUB, like many/most gaming reviewers, hasn't figured out that GPU "performance" only really scales with power now.
That's just not at all true, though.
Lovelace is massively more performant than Ampere at similar power draw.
And plenty of tests have shown that you can decrease the power limit on these latest GPUs quite a bit without losing much performance. All these high TDPs are really doing is driving up graphics card prices, because manufacturers have to engineer the boards to support the stock TDP. It's all getting fairly out of hand, simply to juice out the last little bits of performance for benchmarks/reviews. We used to get more reasonable-TDP parts, with higher-end variants that supported higher power draw for a premium, but now even baseline cards are the equivalent of one of those super-premium cards from 8-10 years ago.
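If you want to try it yourself, here's a rough sketch in Python using pynvml (the nvidia-ml-py bindings). The GPU index and the 20% cut are arbitrary choices for illustration, the allowed limit range varies by card, and the set call needs root/admin:

    # Sketch: read and lower a GPU's power limit via NVML (pip install nvidia-ml-py).
    import pynvml

    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU; adjust as needed

    current_mw = pynvml.nvmlDeviceGetPowerManagementLimit(gpu)  # milliwatts
    min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(gpu)
    print(f"Limit: {current_mw / 1000:.0f} W "
          f"(allowed {min_mw / 1000:.0f}-{max_mw / 1000:.0f} W)")

    # Cut the limit by ~20%, then re-run your benchmark; on recent cards the
    # performance loss is typically much smaller than the power saved.
    target_mw = max(min_mw, int(current_mw * 0.8))
    pynvml.nvmlDeviceSetPowerManagementLimit(gpu, target_mw)  # needs admin rights

    pynvml.nvmlShutdown()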
I know there's also been heavier transient spiking on modern GPUs for a couple of generations now, which might be contributing to the raised TDPs so people don't underbuy on the PSU side, but I don't think that's the main thinking here.
Either way, the idea that we've reached a wall in terms of efficiency, core scaling and IPC scaling is definitely not true.
Blackwell has the "benefit" (if you can call it that) of using the old, proven process, which means the cost of a similar-sized die should be considerably lower than it was for Lovelace at its launch, and we kind of see that in the launch prices of the 50-series SKUs. So the situation is similar, but for a different reason: Lovelace had to use smaller dies at higher prices, which limited the perf/price gain despite a huge process upgrade. Blackwell is using a "cheaper" node, but it's the same old node, so perf/price improves only through cost reductions on that node, not perf gains from a newer node. The end result in perf/price terms seems fairly comparable, really.

Lovelace had the benefit of a full two-node jump from Samsung's 8nm process.

On the other hand, wafer prices are >3x higher...
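To make that perf/price argument concrete, here's a toy calculation; the percentages are purely hypothetical, picked only to show how the two scenarios can net out similarly:

    # Toy perf/$ comparison. All numbers are made up for illustration.
    def perf_per_dollar_change(perf_gain: float, price_change: float) -> float:
        """Relative change in perf/$: (1 + perf_gain) / (1 + price_change) - 1."""
        return (1 + perf_gain) / (1 + price_change) - 1

    # "Lovelace-style": big node-driven perf gain, but pricier dies.
    print(f"{perf_per_dollar_change(0.30, 0.20):+.1%}")  # +8.3%
    # "Blackwell-style": modest perf gain, flat pricing on the mature node.
    print(f"{perf_per_dollar_change(0.10, 0.00):+.1%}")  # +10.0%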
…untenable to defend AMD/Intel as a noble, underrated underdog.
They’ve never done this, and I’m not even sure how you could claim they’ve done this with Intel considering their latest piece is about how Battlemage sucks lol.
I'd probably agree that FG is frame smoothing and their logic is sound (the additional frames don't reduce latency like 'real' frames would, which is why people never use FG for competitive games). That said, it's not a bad technology; I've used it in the past (I hate it on keyboard and mouse, but it works well on controller).
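To put toy numbers on that latency point (the fps figures here are hypothetical, just to show the shape of it):

    # Generated frames raise displayed fps, but input latency still tracks
    # the rendered-frame interval. Numbers are made up for illustration.
    render_fps = 30                          # "real" rendered frames per second
    mfg_factor = 4                           # 1 rendered + 3 generated frames
    displayed_fps = render_fps * mfg_factor  # 120 fps on screen

    input_interval_ms = 1000 / render_fps    # only rendered frames sample input
    print(f"Displayed: {displayed_fps} fps, but input is sampled every "
          f"{input_interval_ms:.1f} ms -- it still feels like ~{render_fps} fps.")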
FG has nothing to do with any kind of "smoothing". This is like saying that a CRT or plasma doesn't have better motion clarity because it may have higher input lag than an LCD or OLED. FG is like an advanced black frame insertion method, providing the benefit of interrupting the current frame on the display device. And with LCD and VRR it comes with the perk of letting the monitor run in its optimized frame range.
You aren't going to believe what people usually call BFI: motion smoothing.
Movie material runs at 24 fps at 24 Hz, so it will always look unnatural. With gaming it looks much better because we don't have a proper sync between frame time and refresh rate. And seeing a stuttering rendered character just feels not right.

First off, this isn't true unless your player supports refresh-rate matching. Many internal players don't, so you need 3:2 pulldown.
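For anyone unfamiliar with the cadence, here's a minimal sketch of how 3:2 pulldown maps 24 fps film onto a 60 Hz display (a generic illustration of the standard technique, not any particular player's implementation):

    # 3:2 pulldown: each 24 fps film frame is held for alternately 3 and 2
    # refreshes of a 60 Hz display (24 frames -> 60 refreshes). The uneven
    # hold times are the judder you notice without refresh-rate matching.
    def pulldown_3_2(num_film_frames: int) -> list[int]:
        schedule = []
        for i in range(num_film_frames):
            holds = 3 if i % 2 == 0 else 2   # 3, 2, 3, 2, ...
            schedule.extend([i] * holds)
        return schedule  # film-frame index shown at each 60 Hz refresh

    print(pulldown_3_2(4))  # [0, 0, 0, 1, 1, 2, 2, 2, 3, 3]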
We are. It's just going to cost so much to engineer them that the per-dollar gains are completely negated.

I think the keyword here is "now."
Ampere (consumer) is on Samsung's 8N process, while Lovelace is on TSMC's 4N. These processes have huge differences in performance.
The key question is whether we're going to get similar process advances in the future. Right now, all signs point to a big negative.
Is it "real" performance? No. Is it useless and shouldn't be mentioned when talking about new features of the new GPUs? No.
This sounds like HUB’s position, restated.
Using FG frames as 'performance' in a graph is at best misleading. I don't think the tech is useless or whatever (I've been using frame gen from before FG even came out, since Oculus implemented it years ago), but I dislike how Nvidia is blurring the line here.
Case in point: does the 5070 beat the 4090, or does it require the new 4x FG to do so?
If this is a comparison of 40-series FG to 50-series MFG, then it's a valid comparison. It would be misleading to compare any FG to non-FG, like what Nvidia did back at the 40-series launch. So, all in all, it's an improvement.
They add something, and if it's not performance, then I don't know what the language should be.

I mean, just call it motion smoothing and interpolation. We don't need to make this complicated; it's a motion interpolation system that works very well.
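If it helps to see what "interpolation" means at its most basic, here's a toy blend between two frames in Python; purely illustrative, since real frame generation uses motion vectors/optical flow rather than a plain crossfade:

    import numpy as np

    # Naive linear blend between two frames -- a crossfade, not real FG.
    def interpolate(frame_a: np.ndarray, frame_b: np.ndarray, t: float) -> np.ndarray:
        """Return an intermediate frame at fraction t between frame_a and frame_b."""
        blended = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
        return blended.astype(frame_a.dtype)

    a = np.zeros((4, 4, 3), dtype=np.uint8)        # black frame
    b = np.full((4, 4, 3), 255, dtype=np.uint8)    # white frame
    mid = interpolate(a, b, 0.5)                   # grey in-between frame
    print(mid[0, 0])                               # [127 127 127]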