Value of Hardware Unboxed benchmarking

Actually, you are making a political statement, whether you are aware or not. Voting with money always works. If you buy shoes that were created by child slavery, even if it was never your intention, and even if you are unaware, you are literally funding child slavery. Whatever we buy influences the world, because where the money goes, the same things grow.
But I guess that is too raw for most. Everyone prefers to see themselves as good rather than as contributing to atrocities.

You made an analogy between not being concerned with a proprietary upscaling method in GPUs and...supporting child slavery.

Incredible. Like my god, man. There's a reason I said "you're not making a political statement by which gaming GPU you purchase" rather than deriding ethical consumer choice altogether (and for the record, while I think it's grossly overused, "no ethical consumption under capitalism" generally holds true; you cannot hope to buy your way to a better world, and actually effective political change needs to start deeper).

These GPUs are assembled by OEMs in the same factories, my man. The reason you have to go to clothing sweatshops for the comparison is that there is a laughably threadbare moral argument to be made for purchasing one particular GPU over another with that angle at all, especially on the basis that it came late to the reconstruction party. Regardless, no one is advocating for DLSS only either; the argument that I, Nixxes, and yes, Hardware Unboxed, have made is that there is little reason for most AAA developers not to support all three. Any game that supports only DLSS should be critiqued just as heavily as any AAA game that has only FSR.

To think you're a more responsible/ethical person, though, because you refuse to buy a GPU from a company that...took advantage of the R&D they've done in this area and built custom hardware to accelerate their superior reconstruction method is frankly a condemnation of what you value as 'ethics' and of your understanding of how meaningful change in the world is actually brought about.

Now hey, if you introduce evidence that AMD is actually incentivizing Radeons to be produced in factories with better compensation for workers than the OEMs Nvidia contracts out to, or that AMD's manufacturing process is more environmentally responsible than Nvidia's, then we can make an actual moral argument that isn't laughable. Albeit, with the far higher wattage of Radeons, that environmental argument for manufacturing had better be pretty solid!
 
So if we are measuring rasterization performance
There is no such thing as rasterization performance anymore; there hasn't been for a long time. Shadow maps, screen-space shadows, global illumination (probes/screen space), ambient occlusion, screen-space reflections, cubemaps, motion blur, and depth of field are all rendered at a fraction of the native resolution (1/2, 1/3, or 1/4). TAA uses past frames temporally to do anti-aliasing, and it's even integrated into the design of ambient occlusion and screen-space reflections to aid performance.
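To put rough numbers on that, here's a minimal sketch (hypothetical buffer names and divisors, not from any particular engine) of how those fractional-resolution effect buffers compare to a "native" 4K output:

Code:
#include <cstdio>

// Hypothetical illustration: many effects render into buffers that are a
// fraction of the native resolution per axis and are then combined/upsampled.
struct Extent { int w, h; };

static Extent scaled(Extent native, int divisor) {
    return { native.w / divisor, native.h / divisor };
}

int main() {
    Extent native = { 3840, 2160 };     // "native" 4K output
    Extent ao     = scaled(native, 2);  // e.g. ambient occlusion at 1/2 res
    Extent ssr    = scaled(native, 2);  // e.g. screen-space reflections at 1/2 res
    Extent dof    = scaled(native, 4);  // e.g. depth of field at 1/4 res
    std::printf("AO:  %dx%d\nSSR: %dx%d\nDoF: %dx%d\n",
                ao.w, ao.h, ssr.w, ssr.h, dof.w, dof.h);
}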

RT is an additional variable that distorts results
RT distorts what? RT is just another Ultra option available to gamers; you shouldn't even isolate RT games from non-RT games at all. You should just treat RT as another graphics preset and test GPUs against it like you do with any other game. Even Hardware Unboxed finally started doing this recently in their main reviews: they bundle RT games together with non-RT games and give an average score.
 
HUB gives three summaries now: RT, no RT, and mixed, which is probably the ideal way to go. DLSS and FSR should not be compared to native performance. They should be tested separately, with the statement that while FSR enables similar performance gains, it looks like crap.
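For what it's worth, bundling RT and non-RT games into one summary is just an averaging choice. A rough sketch of what that might look like, assuming made-up fps numbers and a simple geometric mean (not necessarily HUB's exact methodology):

Code:
#include <cmath>
#include <cstdio>
#include <vector>

// Geometric mean of per-game average fps results.
static double geomean(const std::vector<double>& fps) {
    double logSum = 0.0;
    for (double f : fps) logSum += std::log(f);
    return std::exp(logSum / fps.size());
}

int main() {
    std::vector<double> rtGames     = { 48.0, 62.0, 55.0 };    // made-up RT results
    std::vector<double> rasterGames = { 110.0, 95.0, 130.0 };  // made-up non-RT results

    std::vector<double> mixed = rtGames;
    mixed.insert(mixed.end(), rasterGames.begin(), rasterGames.end());

    std::printf("RT summary:     %.1f fps\n", geomean(rtGames));
    std::printf("Raster summary: %.1f fps\n", geomean(rasterGames));
    std::printf("Mixed summary:  %.1f fps\n", geomean(mixed));
}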
 
Lol, they're all generating pixels from samples; the samples are just collected differently. If I switch back and forth between trilinear filtering and anisotropic filtering, which one generates the real pixels, as they're sampling differently?
Doesn't matter; it's all rasterized in the end. They are different graphical settings, and nothing more.

Screen-space effects (depth of field, motion blur, AO, GI, shadows, etc.) are not rasterization, ray/path tracing is not rasterization, fragment/pixel shading is not rasterization. Rasterization in hardware is just one step that takes polygons (post-culling) and generates fragments to be shaded. If I play a game at 1440p "native" with full-resolution screen-space effects, or a game at 4K "native" with 1/4 resolution for screen-space effects, which one has the "real" pixels?
They are all real, with different graphics settings. The first one is 1440p native, the second is 4K native with low settings. Why is that so difficult? And pixel shading is a primary component of rasterization; without it, rasterization can be considered incomplete. The whole rendering pipeline is considered rasterization by the likes of Hardware Unboxed, Gamers Nexus and so on, and for good reason. After all, all those techniques like AO, dynamic shadows, etc. are there because of rasterization. If we used path tracing, all those separate techniques would not be necessary at all, because they were developed specifically for improving graphics while using rasterization. The final resolution directly influences the workload of rasterization, because more pixels need to be produced.

Also, I find it funny how you include ray/path tracing, as that is a completely different rendering technique with different requirements. You don't even really need pixel shaders for it, just compute shaders. This is where the general term "rasterization vs. path tracing" comes from.

The way rendering works is you sample data (geometry, textures, materials) and you produce pixels. You feed the data into a pipeline (collection of algorithms) and pixels come out the other side. You can change the sampling rates, and you can sample spatially and temporally. No matter what, the pixels that come out the other side are as "real" as any other. Subjectively quality will vary.
You've produced pixels twice in your explanation. But your explanation ignores the different number of pixels produced at different resolutions. A different number of pixels means a different amount of work. As long as you have enough VRAM, detailed textures aren't really taxing the GPU significantly; it's the sampling, i.e. the number of pixels that need to be drawn from said texture, that determines how heavily the GPU is taxed. If you have a quarter of the pixels from rendering at 1080p and pretend that it's the same as having 4K resolution because you have an algorithm in the middle, you're deluding yourself.
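Just to make the "quarter the pixels" point explicit, the raw per-frame pixel counts:

Code:
#include <cstdio>

// 1080p produces exactly one quarter of the pixels of 4K per frame.
int main() {
    long p1080 = 1920L * 1080;  // 2,073,600 pixels
    long p4k   = 3840L * 2160;  // 8,294,400 pixels
    std::printf("1080p: %ld pixels\n4K:    %ld pixels\nratio: %ldx\n",
                p1080, p4k, p4k / p1080);  // prints 4x
}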
 
Doesn't matter; it's all rasterized in the end. They are different graphical settings, and nothing more.
IMHO that's a very rigid and, honestly, a sad point of view in a graphics forum, to classify GPU performance by its rasterization only, which nowadays represents a fraction of what are mostly compute workloads. Twenty years ago you would be fine with this reasoning, but in 2023? Seriously?
The truth is that many people are against progress and evolution if it doesn't fit their wishes. They think RT will be a gimmick until a $300 GPU can run CP2077 in 4K PT at 60fps, and they forget that real-time 3D rendering has always been a story of incremental features and performance, starting from the high end and slowly moving down the product stack...
 
Doesn't matter; it's all rasterized in the end. They are different graphical settings, and nothing more.


They are all real, with different graphics settings. The first one is 1440p native, the second is 4K native with low settings. Why is that so difficult? And pixel shading is a primary component of rasterization; without it, rasterization can be considered incomplete. The whole rendering pipeline is considered rasterization by the likes of Hardware Unboxed, Gamers Nexus and so on, and for good reason. After all, all those techniques like AO, dynamic shadows, etc. are there because of rasterization. If we used path tracing, all those separate techniques would not be necessary at all, because they were developed specifically for improving graphics while using rasterization. The final resolution directly influences the workload of rasterization, because more pixels need to be produced.

Also, I find it funny how you include ray/path tracing, as that is a completely different rendering technique with different requirements. You don't even really need pixel shaders for it, just compute shaders. This is where the general term "rasterization vs. path tracing" comes from.


You've produced pixels twice in your explanation. But your explanation ignores the different number of pixels produced at different resolutions. A different number of pixels means a different amount of work. As long as you have enough VRAM, detailed textures aren't really taxing the GPU significantly; it's the sampling, i.e. the number of pixels that need to be drawn from said texture, that determines how heavily the GPU is taxed. If you have a quarter of the pixels from rendering at 1080p and pretend that it's the same as having 4K resolution because you have an algorithm in the middle, you're deluding yourself.

The entire pipeline: data -> transform -> pixels
Rasterization: one step (one transform) in the pipeline that transforms culled polygons into fragments/pixels to be shaded. What comes out of the rasterization stage is a fragment/pixel without materials or lighting.

Rasterization is a visibility test to see which polygons cover which pixels on the screen. Ray-tracing/path-tracing is a replacement for that visibility test which can also test for the visibility of reflections and lighting.

DX12 pipeline: Note that the rasterizer fits in the pipeline after geometry processing and before pixel shading.
[Image: pipeline.png]

Edit: Sometimes you'll hear people say "raster pipeline" and mean that the pipeline includes rasterization. The whole pipeline is not rasterization. For example, you could test primary visibility with ray tracing and then do screen-space reflections, screen-space shadows, etc., and a whole bunch of other stuff. I guess technically a lot of screen-space effects are done with screen-space rays, even if you're testing primary visibility with rasterization.

In the end there are tons of different ways to take samples from data and turn them into pixels, none of which is more "fake" than any other.
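To make the "rasterization is one stage" point concrete, here's a toy sketch (no real API, just the concept): culled, projected triangles go in, unshaded fragments come out, and everything else (materials, lighting, screen-space effects, post-processing) happens in later stages.

Code:
#include <cstdio>
#include <vector>

// Toy rasterizer stage: a visibility test that decides which triangles cover
// which pixel centers, producing fragments that are only shaded later.
struct Triangle { float x[3], y[3]; };           // projected screen-space vertices
struct Fragment { int px, py; int triangleId; }; // no material or lighting yet

// Edge-function test: the pixel center is inside if it lies on the same side
// of all three edges of the triangle.
static bool covers(const Triangle& t, float px, float py) {
    auto edge = [](float ax, float ay, float bx, float by, float cx, float cy) {
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    };
    float e0 = edge(t.x[0], t.y[0], t.x[1], t.y[1], px, py);
    float e1 = edge(t.x[1], t.y[1], t.x[2], t.y[2], px, py);
    float e2 = edge(t.x[2], t.y[2], t.x[0], t.y[0], px, py);
    return (e0 >= 0 && e1 >= 0 && e2 >= 0) || (e0 <= 0 && e1 <= 0 && e2 <= 0);
}

// The "rasterizer": turn culled triangles into fragments to be shaded.
static std::vector<Fragment> rasterize(const std::vector<Triangle>& tris, int w, int h) {
    std::vector<Fragment> frags;
    for (int i = 0; i < (int)tris.size(); ++i)
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                if (covers(tris[i], x + 0.5f, y + 0.5f))
                    frags.push_back({x, y, i});
    return frags;
}

int main() {
    std::vector<Triangle> tris = { { {1, 6, 1}, {1, 1, 6} } }; // one triangle
    auto frags = rasterize(tris, 8, 8);
    std::printf("%zu fragments covered, none shaded yet\n", frags.size());
}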
 
They think RT will be a gimmick until a $300 GPU can run CP2077 in 4K PT at 60fps
I mean, there's a bit of truth to this. Obviously your example is purposefully exaggerated, but it's not unreasonable to think that techniques requiring high-end, unaffordable-to-most hardware will not be adopted heavily by developers until this changes. The large majority of hardware RT implementations to date are not game-changing, visually. With the RDNA2 consoles being the general AAA baseline, developers are going to be reluctant to use any overly heavy RT effects, and while there's certainly the possibility on PC to do more, there's not necessarily the financial incentive to put great effort into going big there.

So I think until next-gen, RT will still remain a sort of 'neat, but ignorable' optional feature for a large number of games. Especially for folks who demand 60fps.

Something like software Lumen in UE5 titles will probably be the closest we get to games that build in any kind of major RT implementation as standard. And I think we can speculate that running UE5 with Lumen at 60fps and with very highly detailed AAA presentation values is not gonna be easy to do at all. I'm sure there will be other occasional exceptions, but I don't see RT becoming standardized this generation. And so yeah, it could still fairly be perceived by some as a sort of gimmick feature that just costs more than it's worth. We genuinely do need lower-end, mainstream hardware that can run it better before this perception changes and developers can feel safer adopting it as standard and pushing it further.
 
IMHO that's a very rigid and, honestly, a sad point of view in a graphics forum, to classify GPU performance by its rasterization only, which nowadays represents a fraction of what are mostly compute workloads. Twenty years ago you would be fine with this reasoning, but in 2023? Seriously?
The truth is that many people are against progress and evolution if it doesn't fit their wishes. They think RT will be a gimmick until a $300 GPU can run CP2077 in 4K PT at 60fps, and they forget that real-time 3D rendering has always been a story of incremental features and performance, starting from the high end and slowly moving down the product stack...

The entire pipeline: data -> transform -> pixels
Rasterization: one step (one transform) in the pipeline that transforms culled polygons into fragments/pixels to be shaded. What comes out of the rasterization stage is a fragment/pixel without materials or lighting.

Rasterization is a visibility test to see which polygons cover which pixels on the screen. Ray-tracing/path-tracing is a replacement for that visibility test which can also test for the visibility of reflections and lighting.

DX12 pipeline: Note that the rasterizer fits in the pipeline after geometry processing and before pixel shading.
[Image: pipeline.png]

Edit: Sometimes you'll hear people say "raster pipeline" and mean that the pipeline includes rasterization. The whole pipeline is not rasterization. For example, you could test primary visibility with ray tracing and then do screen-space reflections, screen-space shadows, etc., and a whole bunch of other stuff. I guess technically a lot of screen-space effects are done with screen-space rays, even if you're testing primary visibility with rasterization.

In the end there are tons of different ways to take samples from data and turn them into pixels, none of which is more "fake" than any other.

Yeah, that's great. But it wasn't me who came up with the term. Pretty much every single review site and video out there compares cards using "raster performance" and "RT performance".
 
With the RDNA2 consoles being the general AAA baseline, developers are going to be reluctant to use any overly heavy RT effects, and while there's certainly the possibility on PC to do more, there's not necessarily the financial incentive to put great effort into going big there.
We have to agree to disagree. I don't think consoles are the limiting factor in determining whether studios utilize full RT effects, since it's just as easy to give gamers the means to scale back RT effects; also, if that were true, we wouldn't currently have any RT games. As we have seen, obligations due to contractual sponsorships might have more to do with studios providing "watered down" RT effects in games, as a means to conceal IHV architectural weaknesses, than any financial development hardship. I believe the consensus is that the development workflow for "bells and whistles" RT effects requires not much more developer effort than providing a "watered down" RT version.

If specific platforms/GPU tiers are not in a position to visually enjoy fully enabled RT effects with current hardware, they will fall back to the weaker RT version. At a later date, once the hardware is capable, they can run the fully enabled RT options with, at most, minimal developer involvement.
 
We have to agree to disagree. I don't think consoles are the limiting factor in determining whether studios utilize full RT effects, since it's just as easy to give gamers the means to scale back RT effects; also, if that were true, we wouldn't currently have any RT games. As we have seen, obligations due to contractual sponsorships might have more to do with studios providing "watered down" RT effects in games, as a means to conceal IHV architectural weaknesses, than any financial development hardship. I believe the consensus is that the development workflow for "bells and whistles" RT effects requires not much more developer effort than providing a "watered down" RT version.

If specific platforms/GPU tiers are not in a position to visually enjoy fully enabled RT effects with current hardware, they will fall back to the weaker RT version. At a later date, once the hardware is capable, they can run the fully enabled RT options with, at most, minimal developer involvement.
Adding in completely different/new RT features is not simple, though. Sure, you can do full-res reflections on PC versus, like, quarter-res on console or something, but it's a whole different thing to add in RTAO just for PC, for example. It's nice when devs add in PC-only RT features, but I don't think most developers will feel compelled to do this. It's still gonna be viewed as an optional bonus, and I think this sets the tone for how gamers view it as well.
 
Sure, you can do full-res reflections on PC versus, like, quarter-res on console or something, but it's a whole different thing to add in RTAO just for PC, for example.
Not really. If your engine already has it, then adding it is trivial, unless you're unwilling to deal with the art issues which may arise from that. If not, then it's more complex, but again, if the engine does some RT already, it's not exactly hard.

The point about consoles being a limiting factor on RT works both ways, btw. It limits how much of each frame a naive console port may do with RT, sure, but it also simplifies RT enough for the majority of PC h/w to be able to run it at the mythical 4K/60 sooner rather than later, which means that in such naive console ports there soon won't be any reason not to use RT at all.
 
Not really. If your engine already has it, then adding it is trivial, unless you're unwilling to deal with the art issues which may arise from that. If not, then it's more complex, but again, if the engine does some RT already, it's not exactly hard.
It is never as simple as just turning on an effect, even if it's built into the engine. As you say, there will be art issues and other tech conflicts that all need to be balanced/ironed out to make everything work as desired. Point is, it is absolutely more work. Very little in game development is ever 'easy'.
 
No, ray tracing is not more work. It just works. If it were the other way around, no developer would ever have used SSAO, SSR, or any kind of real-time GI.
 
No, ray tracing is not more work. It just works. If it were the other way around, no developer would ever have used SSAO, SSR, or any kind of real-time GI.
I mean, that was the promise, but it's definitely not how it actually works in practice from what I've read from developers. Especially anything with lighting. The relationship between lighting and other areas of visuals can be fairly sensitive.

Not to mention that sometimes what is 'realistic' doesn't actually look as presentable as you'd like. Movies make extensive use of 'unrealistic' lighting and tricks to make scenes look better for the viewer for this reason. Games are still gonna have similar issues, even when we have more camera control. Maybe you discover that some area is now just far too dark and requires you to go in and manually adjust brightness levels, or perhaps add in a new light source somewhere. It may not be realistic, but it could still ultimately look better.
 
I don't know which developer has said something like this. 4A Games was very clear about why they went to ray-traced GI with Metro Exodus EE.

And there is no difference between ray tracing and screen space; both are calculated at runtime.
 
I don't know which developer has said something like this. 4A Games was very clear about why they went to ray-traced GI with Metro Exodus EE.

And there is no difference between ray tracing and screen space; both are calculated at runtime.
When they are calculated is really not relevant.

And I think you're still missing the point here: I'm talking about adding in extra RT features specifically for the PC version, meaning devs have to include both non-RT and RT implementations of whatever feature we're talking about. It is absolutely going to be more work. Your idea that RT can just be flicked on like a switch and everything will be good is simply not how it actually works.

Once we have more powerful hardware available at a mainstream price point, then yes, developers can start to use ray tracing as standard, at which point it could no longer be viewed as a gimmick and would involve less overall work.
 
It is never as simple as just turning on an effect, even if it's built into the engine. As you say, there will be art issues and other tech conflicts that all need to be balanced/ironed out to make everything work as desired. Point is, it is absolutely more work. Very little in game development is ever 'easy'.
Porting a game to PC is more work, yet nobody seems to discuss the idea that you could just not port anything because it's more work. The same thinking applies to RT: you could implement/improve it in your PC version, and it will make the game better, which in turn may help you sell more copies.
 
When they are calculated is really not relevant.

And I think you're still missing the point here: I'm talking about adding in extra RT features specifically for the PC version, meaning devs have to include both non-RT and RT implementations of whatever feature we're talking about. It is absolutely going to be more work. Your idea that RT can just be flicked on like a switch and everything will be good is simply not how it actually works.

Once we have more powerful hardware available at a mainstream price point, then yes, developers can start to use ray tracing as standard, at which point it could no longer be viewed as a gimmick and would involve less overall work.

And how do you explain Ghostrunner 2? Or every indie game with ray tracing? They are just activating ray tracing in UE4.
 
Porting a game to PC is more work, yet nobody seems to discuss the idea that you could just not port anything because it's more work. The same thinking applies to RT: you could implement/improve it in your PC version, and it will make the game better, which in turn may help you sell more copies.
Porting/developing a game for PC is obviously sensible from a sales perspective. The idea of putting in more work to include high-end options that only a small percentage of an already limited player base can actually use, much less will actually want to use, is a VERY different prospect. That's genuinely my whole point here. Like I said, it may make sense for studios who want to make their game a tech showcase title, but otherwise it will not be seen as worth it by many, or even most.
 