Lol. Lowering resolution is fake performance
You can laugh all you want. It is fake performance if you claim that it renders at a higher resolution when it's not. Just like frame generation produces fake frames, upscaling produces fake pixels. Simple as that.
Even in that silly hypothetical, it absolutely does not change the value proposition of a GPU with a more powerful upscaling hardware component. The GPU with the inferior upscaling will still require more power to deliver a visual result equivalent to that of the card that can use a lower scaling value to reconstruct it at 8K.
Yes, and that would be measurable.
Also, a card performing 8K reconstruction is, in fact, rendering at that resolution - this is the precise difference between reconstruction and simple upscaling, despite the terms now being used interchangeably. That 8K reconstructed image has new detail added to it.
Is it really rendering?
Remember interlacing? Filling in the missing lines was also considered reconstruction, but it was never actually considered rendering.
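To make that analogy concrete, here is a minimal sketch (Python, with made-up frame data) of the classic line-averaging approach: the missing scanlines of an interlaced field are filled in by interpolating between the lines that were actually captured. The new lines are derived from existing data; nothing renders them.

```python
# Minimal sketch: filling in the missing scanlines of an interlaced field
# by averaging the neighbouring lines that actually exist.
# The frame data here is made up purely for illustration.

def deinterlace_field(field, total_lines):
    """field: list of scanlines captured on even line numbers only.
    Returns a full frame where the odd lines are interpolated."""
    frame = [None] * total_lines
    # Place the real (captured) lines on the even rows.
    for i, line in enumerate(field):
        frame[i * 2] = line
    # Reconstruct each missing odd row as the average of its neighbours.
    for y in range(1, total_lines, 2):
        above = frame[y - 1]
        below = frame[y + 1] if y + 1 < total_lines else above
        frame[y] = [(a + b) / 2 for a, b in zip(above, below)]
    return frame

# A 2-line field (4-line full frame), each line being 4 pixel intensities.
field = [[10, 20, 30, 40],
         [50, 60, 70, 80]]
full_frame = deinterlace_field(field, total_lines=4)
# Rows 1 and 3 now contain plausible values that were never rendered.
print(full_frame)
```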
As I mentioned as well, even that 'native' 4K image is likely composed of many different render targets with varying levels of internal resolution.
That has always been the case: textures and effects can all have different resolutions. Ultimately they all come together, which is called rasterization. DLSS, FSR, XeSS - none of them is rasterization.
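To illustrate where that leaves the upscalers, here is a rough, purely illustrative sketch of a frame pipeline (the pass names and resolutions are made up): rasterization and compositing happen at the internal resolution, with some buffers at even lower resolutions, and the reconstruction step runs afterwards as a separate post-process that maps the internal result to the output resolution. That separation is exactly why it can be measured on its own.

```python
# Illustrative only: where an upscaling pass sits relative to rasterization.
# Resolutions and pass names are made up for the sketch.

from dataclasses import dataclass

@dataclass
class RenderTarget:
    name: str
    width: int
    height: int

def rasterize_frame(internal_w, internal_h):
    # Different buffers are often rendered at different internal resolutions.
    return [
        RenderTarget("geometry/color", internal_w, internal_h),
        RenderTarget("ambient occlusion", internal_w // 2, internal_h // 2),
        RenderTarget("volumetrics", internal_w // 4, internal_h // 4),
    ]

def composite(targets, internal_w, internal_h):
    # All buffers are combined into one image at the internal resolution.
    return RenderTarget("composited frame", internal_w, internal_h)

def upscale(frame, output_w, output_h):
    # DLSS/FSR/XeSS-style reconstruction is a separate step after this point:
    # it takes the internal-resolution frame (plus motion vectors, history,
    # etc.) and produces an output-resolution image.
    return RenderTarget("reconstructed frame", output_w, output_h)

# "Quality"-style scaling: render internally at 1440p, present at 4K.
targets = rasterize_frame(2560, 1440)
frame = composite(targets, 2560, 1440)
output = upscale(frame, 3840, 2160)
print(output)  # RenderTarget(name='reconstructed frame', width=3840, height=2160)
```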
So if we are measuring rasterization performance, DLSS/FSR/XeSS is an additional variable that distorts results, meaning it should be measured separately.
If we are measuring rasterization performance, RT is an additional variable that distorts results, meaning it should be measured separately.
If we are measuring rasterization performance, frame generation is an additional variable that distorts results, meaning it should be measured separately.
For the record, I did not say that it should not be measured, or that it should not be taken into account. But rasterization is the main feature for the vast majority of games, and it is only fair to use that as a baseline, and consider the rest bonuses.
And the end result can be quite different, that's the point. It's not actually 'like for like' if the upscaling algorithm on one card is delivering a better result because it uses the hardware resources allocated to that GPU for that express purpose: making the rendering more efficient with respect to the final output image quality.
No disagreement here. That doesn't mean that it should be the basis for choosing a graphics card. And that statement doesn't mean that it should be ignored either.
I never said it should become the 'standard' way of benchmarking, in the sense that it should be the only way. What I did say, however, is that it needs to factor into the final conclusions and the overall value assessment in reviews. The numbers always have to be put into context, and it works both ways - you wouldn't simply refer to frame generation benchmarks either when comparing performance.
Glad to see we can agree.
You're not actually evaluating the reconstruction abilities of competing GPUs if you simply take the names given to each quality level and benchmark them against the equivalently named setting from the competitor. If they actually behave quite differently in delivering the final image they're designed to produce, that matters - it's their entire purpose. If it didn't, then AMD wouldn't even need FSR2; we could just use FSR1. Thankfully, most reviewers recognize that would be ridiculous, as the quality levels are so drastically different despite AMD also naming the internal res settings of FSR1 the same.
That is exactly why separate, dedicated articles & videos are created to point out these differences. Besides, some people claim that they need Ferraris, that everyone else needs Ferraris, and that if you don't have a Ferrari you can't drive. But most people simply need a car to get around.
As for the bizarre stipulation that you can't include proprietary technologies in reviews,
If that is what you got from what I said, that was not my intended message. It can be included, but it should not be the core of the benchmarking, in the interest of comparing apples to apples.
When you measure a car's performance, you measure acceleration and steering. That the other car has a backup camera is nice and might make you get that car instead, but it's not the primary measurement of a car's performance.
if the actual implementation is widespread enough, then it being 'proprietary' is completely irrelevant when evaluating the value of hardware. We buy these GPUs to play games; if the games commonly support a feature of a card that can deliver a superior experience from the same base resolution input, it doesn't matter whether it's open source or not. You are not making a political statement with what gaming GPU you buy.
Actually, you are making a political statement, whether you are aware of it or not. Voting with money always works. If you buy shoes that were made with child slavery, even if it was never your intention and even if you are unaware, you are literally funding child slavery. Whatever we buy influences the world, because where the money goes, that is what grows.
But I guess that is too raw for most. Everyone prefers to see themselves as good, rather than contributing to atrocities.
How does that relate to GPUs? Simple. The market adapts to what is bought, what is required, what is desired, and what is available. By nature, FSR and XeSS have a larger potential market than DLSS. At some point, anyone who bought an nVidia GPU is going to have to live with the fact that DLSS will no longer be used, just as happened with PhysX, HairWorks, Ansel and G-Sync. Obviously, if one pays a premium, one wants such features to be included and used as much as possible. People don't want to think they are throwing their money away, so obviously they were mad when Starfield wasn't going to have DLSS.
But it's they themselves who constantly buy products with features that are literally planned obsolescence. And as AMD and Intel compete more, there will be more and more instances where proprietary features go unused. Of the three, it's probably XeSS that's going to be the winner, while FSR and DLSS fade away.
Your purchases literally influence what nVidia, AMD and Intel do with their market policy. Choose wisely.
We compare the overall value proposition of hardware utilizing proprietary technologies all the time; in fact, one of the reasons this generation of mid/low-end cards is so derided is the relatively weak value proposition they offer compared to the price of consoles.
I don't see how things are different from the past regarding consoles. The PS5's GPU is roughly a 5700XT, or a 2070. The $269 RX 7600 is about 20% faster than the PS5's GPU, and a PS5 costs about as much as an RX 7800XT. The Arc A770 is also about 15% faster than the PS5 for about $300.
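As a back-of-the-envelope check on those numbers (a sketch only: the PS5 price is assumed here at its $499 launch MSRP, and the performance ratios are the approximate ones quoted above):

```python
# Back-of-the-envelope comparison using the approximate figures above.
# The PS5 price is assumed at its $499 launch MSRP; performance is
# expressed relative to the PS5's GPU (1.0).

cards = {
    "PS5":      {"price": 499, "relative_perf": 1.00},  # assumed MSRP
    "RX 7600":  {"price": 269, "relative_perf": 1.20},  # ~20% faster (quoted above)
    "Arc A770": {"price": 300, "relative_perf": 1.15},  # ~15% faster (quoted above)
}

for name, c in cards.items():
    perf_per_dollar = c["relative_perf"] / c["price"]
    print(f"{name:>9}: {perf_per_dollar * 1000:.2f} relative performance per $1000")
```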
Of course, if only nVidia is used as a point of reference, I can understand why one would think that this generation is derided. But I can't help but think that it's gamers themselves that created this situation by monetarily supporting constant price hikes on shiny new features with a short lifespan.
Hardware Unboxed's benchmarks are ultimately useful, and close to how they should be. I've not always agreed with them, but the fact that both AMD and nVidia fanboys find them biased is actually a good sign of their neutrality.