Value of Hardware Unboxed benchmarking *spawn

This thread is wild. Just find a reviewer you like and stick to it. If you like HU content, use it for your decision making, if you don’t like it, don’t use it, don’t read it, don’t watch it etc.

I don’t like the term reconstruction. All rasterized images are under-sampled. The trick is using your compute budget smartly to use more samples in the right places. TAA uses samples from previous frames. So do DLSS and FSR2. DLSS uses a trained network to select samples. FSR2 uses an algorithm. The difference between TAA and DLSS/FSR2 is DLSS and FSR2 include an upscale. They’re all constructing the current frame with temporal samples. There’s no “reconstruction.” So DLSS and FSR2 just have fewer samples per pixel than native with TAA. Some people prefer lowering settings instead of adding DLSS or FSR. I do both because I don’t do peasant gaming at 30 or 60 fps.
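The shared idea across TAA, DLSS, and FSR2 can be shown with a toy sketch: accumulate under-sampled frames into a history buffer. This is a bare exponential-history blend, not any vendor's actual algorithm; real TAA/DLSS/FSR2 add motion-vector reprojection and history rejection, and the upscalers additionally change the output resolution.

```python
import numpy as np

def temporal_accumulate(frames, alpha=0.1):
    """Blend each new under-sampled frame into a running history.

    This is the core of TAA-style accumulation: every output pixel is a
    weighted mix of the current sample and previous frames' samples
    (reprojection and rejection heuristics are omitted for brevity).
    """
    history = frames[0].astype(float)
    for frame in frames[1:]:
        history = alpha * frame + (1.0 - alpha) * history
    return history

# Noisy 1-sample-per-pixel "renders" of a constant 0.5 signal converge
# toward the true value as temporal samples accumulate.
rng = np.random.default_rng(0)
frames = [0.5 + rng.normal(0, 0.2, size=64) for _ in range(200)]
result = temporal_accumulate(frames)
print(result.mean())
```

The point of the sketch: whether you call it TAA, DLSS, or FSR2, the current frame is built from samples gathered over time, and the output quality depends on how many usable samples survive the blend.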
 
This thread is wild. Just find a reviewer you like and stick to it. If you like HU content, use it for your decision making, if you don’t like it, don’t use it, don’t read it, don’t watch it etc.

I don’t like the term reconstruction. All rasterized images are under-sampled. The trick is using your compute budget smartly to use more samples in the right places. TAA uses samples from previous frames. So do DLSS and FSR2. DLSS uses a trained network to select samples. FSR2 uses an algorithm. The difference between TAA and DLSS/FSR2 is DLSS and FSR2 include an upscale. They’re all constructing the current frame with temporal samples. There’s no “reconstruction.” So DLSS and FSR2 just have fewer samples per pixel than native with TAA. Some people prefer lowering settings instead of adding DLSS or FSR. I do both because I don’t do peasant gaming at 30 or 60 fps.

It's reconstruction in that it takes a frame and then uses data from multiple previous frames to reconstruct it into a new frame (often at a higher resolution than the original). Sometimes it gets it right when it inserts new data/details into the frame being worked on; sometimes it doesn't, which contributes to the image instability of various reconstruction techniques.

Regards,
SB
 
Those examples never muddied the concept of native resolution rendering for me.

I think it’s a relevant point when discussing IQ of native resolution rendering. The resolution of the primary view still has a huge impact, of course. However, there are a ton of off-screen buffers and views that contribute to the final image on screen, and they are very often rendered at reduced resolution.

Take Lumen for example. Sure primary visibility can be running at 4K but Lumen GI is rendering and updating at a much lower frequency. Another example is that you might be staring at a 4K mirror but the reflection rays bouncing off it may be running at 1/4 or 1/8 resolution so the image in the mirror certainly won’t be 4K.
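As a back-of-the-envelope illustration of how little data those secondary views can carry: "1/4 resolution" can mean a quarter of the pixel count or a quarter per axis, so this hypothetical sketch shows both readings (the numbers are my own illustration, not tied to any specific engine):

```python
# Hypothetical back-of-the-envelope numbers; not tied to any engine.
primary_w, primary_h = 3840, 2160        # "4K" primary visibility
primary_pixels = primary_w * primary_h   # total pixels in the primary view

# Reading "1/4 resolution" as 1/4 of the pixel count (half per axis):
quarter_count = (primary_w // 2, primary_h // 2)
# Reading "1/4 resolution" as 1/4 per axis (1/16 of the pixel count):
quarter_axis = (primary_w // 4, primary_h // 4)

print(primary_pixels)   # 8294400
print(quarter_count)    # (1920, 1080)
print(quarter_axis)     # (960, 540)
```

Either way, the mirror in the example is being fed far fewer rays than the 4K primary view, so the reflection cannot be "4K" in any meaningful sense.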

Given all the stuff happening offscreen “native” doesn’t really mean anything in today’s games and definitely shouldn’t be held up as a beacon of purity. There are efficiency hacks happening all over and upscaling is just another hack in the tool belt.

One thing we learned from all the upscaling reviews is that “native” or “native + TAA” struggles with some content that looks better upscaled. It’s weird that Nvidia’s promised DLAA hasn’t really shown up. Presumably it’s the best of both worlds.
 
It's correct, DLSS has fewer internal pixels per frame to work with.

Especially performance mode, which is 720p internally at 1440p output.
In this example it's native 1440p with TAA versus DLSS Performance at 4K (i.e., 1080p input). Native 1440p has 1.78x more pixels, but I think DLSS Performance at 4K actually has more usable information for reconstructing a good 4K image than native 1440p with TAA on a native display.
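The 1.78x figure checks out; here is the arithmetic as a quick sketch (resolutions only, no claim about image quality):

```python
def pixels(w, h):
    """Total pixel count of a w x h render target."""
    return w * h

native_1440p = pixels(2560, 1440)        # internal = output = 1440p
dlss_perf_4k_input = pixels(1920, 1080)  # DLSS Performance: 50% per axis of 4K
output_4k = pixels(3840, 2160)           # the upscaled output target

ratio = native_1440p / dlss_perf_4k_input
print(native_1440p, dlss_perf_4k_input, round(ratio, 2))  # 3686400 2073600 1.78
```

So per frame the 1440p native path does have ~78% more fresh samples; the upscaler's argument is that accumulated temporal samples plus the 4K output grid make up the difference.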
 
It’s weird that Nvidia’s promised DLAA hasn’t really showed up.
It's integrated into a dozen or so games right now.
Take Lumen for example
Not just that. Depth of field, motion blur, shadows, ambient occlusion, volumetrics, transparencies, alpha, even texture resolution: none of it is happening at native resolution. The whole pipeline is undersampled now.

Of course upscaling reduces those things further, but intelligent upscaling preserves most of them at close to native levels (or even above native by strengthening temporal stability), to the point that the loss of quality becomes imperceptible compared to either the increase in fps or the increase in image quality from things such as ray or path tracing.

In fact, ray tracing and path tracing often compensate for the lower-than-native effects, as shadows, lighting, and reflections become more defined, more present, and more dynamic in the image.
 
It's reconstruction in that it's taking a frame and then using data from multiple previous frames to then reconstruct that frame into a new frame (often higher resolution than the original frame). Sometimes it gets it right when it inserts new data/details into the frame being worked on, sometimes it doesn't which contributes to the image instability of various reconstruction techniques.

Regards,
SB
It's really just a semantic thing on my part. I think of reconstruction more like rebuilding, but in the case of DLSS and FSR2, they're outputting new frames, not rebuilding them. Ultimately DLSS and FSR2 are temporal upscaling. When people say they don't like reconstruction, they're saying they don't like upscaling. Native TAA has the temporal part, and it doesn't seem to bother them.
 
It's really just a semantic thing on my part. I think of reconstruction more like rebuilding, but in the case of DLSS and FSR2, they're outputting new frames, not rebuilding them. Ultimately DLSS and FSR2 are temporal upscaling. When people say they don't like reconstruction, they're saying they don't like upscaling. Native TAA has the temporal part, and it doesn't seem to bother them.
Actually, DLSS 2’s and FSR 2’s super-resolution performs spatial upscaling using temporal data.

Frame generation is temporal upscaling.
 
Actually DLSS2’s and FSR2’s super-resolution perform spatial upscaling using temporal data.

Frame generation is temporal upscaling.

Yes. Every frame is a brand new frame generated by combining a low-res image with temporal data from previous frames. That's why I don't like the term reconstruction. It's a new frame, not a rebuilt one. It'd be like saying, "We used a lot of reclaimed materials on the reconstruction of my new house." Everyone is going to think you tore your house down and rebuilt it. When people talk about not liking reconstruction techniques like DLSS, they're generally complaining about the fact that it's more under-sampled than native, because it had to be upscaled from a lower resolution. I don't think the temporal part is really what they're taking issue with; otherwise they would be playing native without TAA. Mind you, DLSS and maybe FSR2 can do a better job with thin geometry, like a chain-link fence, by selecting better samples, but in general DLSS will look softer than native without some sharpening added.
 
Thing is, developers are going to start making games with reconstruction as the default. Or basically - they'll expect gamers to be using it, with native rendering only really for those with either lower resolutions or extreme setups that just have an abundance of GPU overhead.

I also don't understand this idea that reconstruction needs to be 'perfect' to be worthwhile. It's such a weird mentality, given the general prevalence of 'performance is king' thinking, and the fact that we've generally always accepted slight compromises in visuals in order to push performance with all these other graphics settings. Yet somehow the same thinking doesn't get applied here, even when there are significant performance gains to be had in most cases.

As for the attacks on HUB, I find that in 95%+ of cases, those accusing HUB of being biased are themselves the biased ones. HUB ain't perfect (no reviewer/benchmarker is), but they don't deserve even a tiny fraction of the hate they've been getting from the completely embarrassing PC gaming community.
HUB self-admitted he is an AMD fanboy on his Discord.
 
The reviewer who has been saying for a couple of years that ray tracing doesn't matter and that's why they won't benchmark it to just silently start benchmarking RT now since wow it does matter?
Makes perfect sense to me.
The reviewer who has been the center of such controversies for so many times now that I can't even remember the count?
The mere existence of controversies does not tell us whether a reviewer is doing something wrong.
 
This thread is wild. Just find a reviewer you like and stick to it. If you like HU content, use it for your decision making, if you don’t like it, don’t use it, don’t read it, don’t watch it etc.

Especially considering we're living in a world where there are virtually zero PC games made to utilize the most modern GPUs, while the price trends make them even less relevant as a development target than they were a few years ago. It's like getting bent out of shape over reviews of different airlines' leather seats on first-class flights to Moscow.
 
I'd prefer “reconstruction techniques” that were not used to mislead gamers.

Instead of lowering the rendering resolution and making gamers think they can play just fine at 4K, when in reality they are playing at an upscaled 1080p, they should be offered as a means of image quality improvement:
  1. select rendering resolution (most of the time fitting your display's native res)
  2. select DLSS/FSR/XeSS as an addition (edit: not lowering resolution, but improving IQ) and see where that gets you in terms of IQ
Yes, I'm aware that's mostly semantics from a technical standpoint, but much of the marketing today is centered around “innovative techniques” that let you play at 120 fps in 4K as long as you spend 1,500 dollars on your shiny new graphics card, when in fact that card isn't capable of doing that at native res.

With DLSS 3 we're already at the next level, where not just pixels but entire frames are generated based on more or less educated guesswork (fancy-speak: specifically optimized and trained deep-learning techniques/deep neural networks).
 
Instead of lowering the rendering resolution and making gamers think they can play just fine at 4K, when in reality they are playing at an upscaled 1080p, they should be offered as a means of image quality improvement:
  1. select rendering resolution (most of the time fitting your display's native res)
  2. select DLSS/FSR/XeSS as an addition (edit: not lowering resolution, but improving IQ) and see where that gets you in terms of IQ

I'm already using them for that, downscaling using DLSS works amazingly well and so far beats native while offering better performance.
 
And performance is only one aspect. In RE4 Remake and The Last of Us, FSR 2 is totally broken and introduces so many artefacts that it should be the job of the reviewer to tell you that. But channels like this one will just withhold this information. Here is an example from The Last of Us:
 
And performance is only one aspect. In RE4 Remake and The Last of Us, FSR 2 is totally broken and introduces so many artefacts that it should be the job of the reviewer to tell you that. But channels like this one will just withhold this information. Here is an example from The Last of Us:
At least here they seem to have more or less the same performance:

[attached benchmark screenshot]
 
I'm already using them for that, downscaling using DLSS works amazingly well and so far beats native while offering better performance.
I really wish Nvidia exposed (DLSS, DLAA and DLDSR+DLSS) as a single continuum under the DLSS umbrella. I like @CarstenS's idea of just exposing the base render resolution as the parameter the user chooses instead of opaque labels. Or, for something that's easier to parse, it could even be a percentage of native, e.g. ranging from 12.5% to 400%. That way you get a nice Pareto-optimal frontier between render resolution and visual quality, and native (DLAA) is just a point in the middle of the space.

To add to @trinibwoy 's earlier point, having such a continuous scale may also help communicate that native isn't some magic reference resolution -- it's just another flawed, aliased, ugly point in the range of sampling options before you apply any temporal accumulation. Hopefully it will also communicate that most home-grown ad-hoc TAAs are just outdated alternatives to these newer-generation reconstruction algorithms.
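The percentage-of-native idea can be sketched concretely. In this toy mapping, the function and preset table are my own illustration; the label-to-scale values are the commonly cited per-axis DLSS preset ratios, given here only as approximate reference points:

```python
def internal_resolution(output_w, output_h, scale_pct):
    """Internal render size for a per-axis scale percentage of the output.

    100 is 'native' (DLAA-like), below 100 is upscaling, above 100 is
    supersampling (DLDSR-like). Function and presets are illustrative.
    """
    s = scale_pct / 100.0
    return (round(output_w * s), round(output_h * s))

# Commonly cited per-axis scales behind the opaque labels (approximate):
labels = {
    "Ultra Performance": 33.3,
    "Performance": 50.0,
    "Balanced": 58.0,
    "Quality": 66.7,
    "DLAA (native)": 100.0,
}
for name, pct in labels.items():
    print(name, internal_resolution(3840, 2160, pct))
```

On such a slider, today's named modes are just points on one axis, and "native" loses its special status: it is simply the 100% point between upscaling and supersampling.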
 
I guess those negative comments regarding their past review bias are having an effect, since HUB reviews have seemed less one-sided over the past two months.
Interesting to see if they can continue this model.
 