Value of Hardware Unboxed benchmarking *spawn

I do agree that there’s a terminology problem at play. Just in case I’m not helping: when I personally say I want “more raster”, I literally mean more innovation in how we feed and program the actual hardware rasteriser in the modern graphics pipeline, what happens in the hardware and software as a result of that pipeline stage, and how its role and functionality evolve to deliver better graphics now and in the future.
Technically, rasterising is just the mathematical process of transforming triangle (or HOS) geometry to a flat screen-space or buffer. Back in the early days of CG and drawing flat triangles, we either raytraced them or rasterised them. Rasterising was faster and how GPUs were designed to work, and so everything they did became 'rasterising' regardless of how complicated the process became. We can include texture sampling and shading in that process, but things like projecting shadows probably don't come under that. Calculating LOD probably doesn't either.
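To make that narrow definition concrete, here's a minimal sketch of what the rasteriser stage itself does: take a triangle already projected to screen space and decide which pixel centres it covers. The function names and the edge-function approach are purely illustrative, not any particular API's pipeline.

```python
# Minimal illustration of the narrow definition of "rasterisation" above:
# take a triangle already projected to screen space and decide which pixel
# centres it covers, using edge functions. No shading, texturing or shadows.

def edge(ax, ay, bx, by, px, py):
    # Signed area test: positive if (px, py) is on the left of edge a->b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterise(tri, width, height):
    (x0, y0), (x1, y1), (x2, y2) = tri   # counter-clockwise screen-space triangle
    covered = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5            # sample at the pixel centre
            w0 = edge(x1, y1, x2, y2, px, py)
            w1 = edge(x2, y2, x0, y0, px, py)
            w2 = edge(x0, y0, x1, y1, px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # inside all three edges
                covered.append((x, y))
    return covered

print(rasterise([(1, 1), (7, 2), (3, 6)], 8, 8))
```

Texture sampling, shading, shadows and LOD selection all happen around this step rather than inside it, which is the distinction being drawn above.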

It might be worth drawing up a reference list of everything a GPU does, or maybe everything a graphics pipeline does since the CPU gets involved too, then identifying the different hardware solutions to the different steps, and seeing what really is 'rasterisation', what's 'ray tracing', and what's 'something else presently with no name'. And maybe a name can be found for the other stuff and the overlap?
 
Technically, rasterising is just the mathematical process of transforming triangle (or HOS) geometry to a flat screen-space or buffer.

I think this definition is the only one that enables proper debate. It doesn’t make sense to me to bucket workloads under “raster” when those workloads are equally applicable to a fully raytraced implementation. You still need to texture and light your pixels with raytracing. Those aren’t rasterization-specific and use the same hardware either way.

If there are performance or quality tradeoffs when doing a texture lookup from a ray hit vs from a pixel shader, we should discuss that. But texturing isn’t a rasterization-specific workload.
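To put one concrete tradeoff on the table: in the raster pipeline the hardware hands a pixel shader screen-space UV derivatives (from the 2x2 quad) essentially for free, and those drive mip selection; a ray hit has no quad, so the footprint has to be estimated another way, for example with a ray cone. A rough sketch with invented numbers, assuming a square texture:

```python
import math

TEX_SIZE = 1024  # assumed square texture resolution (illustrative)

def mip_from_derivatives(dudx, dvdx, dudy, dvdy):
    # Raster path: the 2x2 pixel quad gives UV derivatives per pixel for free.
    footprint = max(math.hypot(dudx, dvdx), math.hypot(dudy, dvdy)) * TEX_SIZE
    return max(0.0, math.log2(max(footprint, 1.0)))

def mip_from_ray_cone(cone_width_at_hit, uv_per_world_unit):
    # Ray tracing path (one common workaround): track a cone per ray and
    # convert its width at the hit point into a texel footprint.
    footprint = cone_width_at_hit * uv_per_world_unit * TEX_SIZE
    return max(0.0, math.log2(max(footprint, 1.0)))

# Hypothetical numbers chosen so both paths land on a similar mip level (~2).
print(mip_from_derivatives(0.004, 0.0, 0.0, 0.004))
print(mip_from_ray_cone(0.02, 0.2))
```

Either way the actual texture fetch and filtering use the same hardware; only the footprint estimate differs.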
 
You’re free to run some shaders to compute an image using your ray tracing, but you still haven’t rasterised. This is effectively the whole communication problem (that I have, not saying you do) in a nutshell. Earlier in the thread we all played fast and loose with the word raster and it didn’t help, but now we’re being specific and using the definition in the graphics pipeline and we’re still a bit stuck.

The lack of agreed precision in the terminology is part of why I think it can often be problematic to discuss ray tracing. Maybe it’s (genuinely) just a problem I have and everyone else is following along with each other.
That's my point: people use the word "rasterization" as if it means "everything which isn't RT h/w", while "rasterization" is just one step of the traditional rendering pipeline in h/w, with most of the other steps (again, in h/w) being used even for pure PT. So the whole division is misleading: RT won't work without general purpose FP32 processing units (whatever you'd call them), it won't work without memory load/store units, without the very same caches, even without texture filtering h/w (you could do the same calculations via shaders of course, but this again won't be "RT h/w").

But when people are using this distinction they mean "using RT h/w" vs "not using RT h/w". Which is a rather pointless distinction as you can render the exact same graphics with or without any h/w - even without a GPU at all.
 
@DegustatoR I think most gamers are not aware of what a rendering pipeline is, or what a hardware rasterizer is. Technically the rasterizer is a fixed way of doing ray tracing for primary visibility, I guess.
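That equivalence is easy to show in toy form: for primary visibility you get the same answer whether you loop over triangles and ask which pixels they cover, or loop over pixels and ask which triangle their ray hits first. The coverage and depth callbacks below are placeholders, not anything from a real API:

```python
# Toy illustration of "the rasteriser is fixed-function ray tracing for
# primary visibility": both loops answer the same question - which triangle
# is closest at each pixel - they just iterate in a different order.

def rasterise_visibility(triangles, pixels, covers, depth_at):
    depth = {p: float("inf") for p in pixels}
    vis = {p: None for p in pixels}
    for tri in triangles:                 # outer loop over geometry
        for p in pixels:
            if covers(tri, p) and depth_at(tri, p) < depth[p]:
                depth[p] = depth_at(tri, p)
                vis[p] = tri
    return vis

def raytrace_visibility(triangles, pixels, covers, depth_at):
    vis = {}
    for p in pixels:                      # outer loop over pixels/rays
        hits = [t for t in triangles if covers(t, p)]
        vis[p] = min(hits, key=lambda t: depth_at(t, p), default=None)
    return vis
```

The rasteriser effectively hard-wires the first loop order (and the coverage test) into fixed-function hardware, which is why it is so cheap for that one job.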

I think you've basically got it right. In the context of this forum I've always thought of it as shorthand for the standard raster pipeline in OpenGL, Vulkan and D3D that includes the rasterizer stage. But like you said, it gets blurry, because we say ray tracing when we mean DXR, while I think pretty much all games still use the rasterization stage, including Nanite-enabled UE5 games that use it as a fallback for large triangles.

It's more like graphics pipeline without DXR = rasterization vs graphics pipeline with DXR = ray tracing. Most people wouldn't call a UE5 game with Software Lumen "ray tracing", even though it is. I think the settings menus in games are largely responsible for this, because in that context ray tracing almost always means DXR hardware-based ray tracing.
 
I find it ironic that the IHV that markets RT the hardest and dedicates the most die space to RT hardware also has the most die space dedicated to specific rasterization tasks. Nvidia's PolyMorph engine handles tessellation, vertex fetch, and other tasks. AFAIK AMD doesn't dedicate HW for that, which is forward-looking if mesh shaders or software rasterization take off since they don't utilize that dedicated HW. If future generations of Nvidia cards ditch the PolyMorph engine in response they could dedicate even more die space to RT. So paradoxically, new rasterization techniques like mesh shaders and SW rasterization could push ray tracing further. I think there will be a point at which most games will be using mesh shaders or SW rasterization for primary visibility and RT for lighting (with neural rendering techniques potentially added), and it will stay that way for a very long time.
I'm not sure I understand that line of thought that led to the conclusion in this post ...

Just because a particular IHV implements a lot of special HW states for specifically accelerated pipelines (GFX/compute/RT), it doesn't necessarily mean that they'll always have fastest implementation for those certain sets of pipelines. Developers will use those features if it makes sense (RT pipeline/texture sampling/etc) for their use case or other more programmable features (bindless/compute shaders) as well and IHVs can't force developers to use API/features that they don't like such as vertex buffers or tessellation ...

More specialized HW implementations aren't always better since it can make it harder for them to do work balancing/distribution compared to more flexible HW implementations as we saw in the past before the advent of unified shading pipelines or more recently async compute. At the end of it all, competitive hardware design isn't purely about mutually exclusively subscribing to diametrically opposed philosophies between special hardware acceleration or dynamic repartitioning schemes for hardware resources since in the real world IHVs are forced to find the optimal balance to match common API usage patterns ...
 
It doesn’t make sense to me to bucket workloads under “raster” when those workloads are equally applicable to a fully raytraced implementation
Take for example ray marching techniques, used nowadays to render stuff like volumetrics (clouds, fog, smoke, god rays, etc.), and also used for advanced shading, screen space shadows, screen space reflections and more. People still classify them under rasterization, despite them having very little to do with rasterization!
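For anyone following along, ray marching just means stepping along a ray and accumulating samples of some field (a density volume for fog and clouds, a depth buffer for screen-space shadows); there's no triangle-to-pixel mapping anywhere in it, which is the point. A minimal sketch with a made-up density function:

```python
import math

def fog_density(x, y, z):
    # Made-up analytic density field standing in for a volume texture.
    return max(0.0, 0.5 - 0.1 * y) * (1.0 + 0.2 * math.sin(x))

def march(origin, direction, steps=64, step_size=0.25):
    # Step along the ray, accumulating transmittance through the volume.
    transmittance = 1.0
    ox, oy, oz = origin
    dx, dy, dz = direction
    for i in range(steps):
        t = i * step_size
        density = fog_density(ox + dx * t, oy + dy * t, oz + dz * t)
        transmittance *= math.exp(-density * step_size)
    return 1.0 - transmittance   # how much fog this pixel "sees"

print(march((0.0, 2.0, 0.0), (0.0, -0.2, 1.0)))
```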
 
A lot more than the GTX 760.

Did you bitch about the GTX 1060's price?

Did you say that about the 9800GTX?

You can only drop prices so much before it's not worth making the product anymore. And without knowing exactly the cost of each GPU it's impossible to know how much their prices can be dropped, so yes, it's not that simple.

AMD have to drop prices to force sales, they have always done this.

Sure that's not DLSS?

Funny, I've never had an issue with either of those games, at native 4k.
 
Why would DLSS cause stuttering lol

Btw 960 was 200, 1060 was 250. I don’t care that neither of these cards did 4k, as even top end cards didn’t do 4k.

I understand liking Nvidia hardware but this generation was universally considered overpriced and that’s why the Super refreshes came out.
 
Why would DLSS cause stuttering lol

DLSS does cause stuttering, just like it did in the Witcher 3 next-gen update.

Btw 960 was 200, 1060 was 250.

1060 6GB was $300 in 2016.

Nearly $400 in 2024 when adjusted for inflation, which is 4060ti money.

So what's your problem with cost again?

I don’t care that neither of these cards did 4k, as even top end cards didn’t do 4k.

Top end cards still can't do 4k in the best AAA games.

I understand liking Nvidia hardware but this generation was universally considered overpriced and that’s why the Super refreshes came out.

Adjust the prices for inflation and then talk.
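For what it's worth, the arithmetic behind the inflation claim is straightforward; the CPI figures below are rough assumptions, not official numbers, and should be checked against an authoritative source:

```python
# Rough inflation adjustment. The CPI values are approximate assumptions
# (annual-average US CPI-U), not authoritative figures.
CPI_2016 = 240.0
CPI_2024 = 314.0

def adjust(price_2016):
    return price_2016 * CPI_2024 / CPI_2016

for msrp in (249, 299):
    print(f"${msrp} in 2016 is roughly ${adjust(msrp):.0f} in 2024 dollars")
```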
 
1060 6GB was $299
Only the Founders Edition; MSRP for the rest was $249, like the 760.

Edit: just because I know you won't take my word for it, since NVIDIA can't do wrong anywhere:
NVIDIA is pricing the GeForce GTX 1060 at a surprising $249 price point, which is just $20 more than the Radeon RX 480 8 GB. Its Founders Edition reference-design SKU, which we are reviewing today, is priced at a $50 premium, at $299.

 
Just because a particular IHV implements a lot of special HW states for specifically accelerated pipelines (GFX/compute/RT), it doesn't necessarily mean that they'll always have fastest implementation for those certain sets of pipelines. Developers will use those features if it makes sense (RT pipeline/texture sampling/etc) for their use case or other more programmable features (bindless/compute shaders) as well and IHVs can't force developers to use API/features that they don't like such as vertex buffers or tessellation ...

More specialized HW implementations aren't always better since it can make it harder for them to do work balancing/distribution compared to more flexible HW implementations as we saw in the past before the advent of unified shading pipelines or more recently async compute. At the end of it all, competitive hardware design isn't purely about mutually exclusively subscribing to diametrically opposed philosophies between special hardware acceleration or dynamic repartitioning schemes for hardware resources since in the real world IHVs are forced to find the optimal balance to match common API usage patterns ...
I'm not claiming specialized HW is inherently faster for a given task. Just that the current trend with games - including Nvidia-sponsored titles - is that the vertex shader pipeline is becoming less important and therefore the HW Nvidia dedicates to it will have less and less value but RT HW will only increase in value.

Nanite Skeletal Meshes and Nanite Foliage are leading to a future in which all UE5 geometry is Nanite, and Epic is doubling down on HWRT with Lumen and MegaLights. Alan Wake 2 uses mesh shaders on any GPU that supports it and has multiple RT options. Avatar: Frontiers of Pandora and (presumably) Star Wars Outlaws use mesh shading (on consoles) and have multiple RT options. AC Red will have ray tracing and some form of virtualized geometry. Capcom's next iteration of the RE Engine will support mesh shaders and double down on RT.

The logical conclusion is that Nvidia will eventually ditch the dedicated HW that only works with the vertex shader pipeline but continue to improve on RT performance.
 
Hasn’t AMD already ditched a lot of their legacy geometry hardware? Nvidia seems to be a few generations behind on that front.
 
For some reason Steve is trying to pass off DLSS balanced upscaling as “4K” when talking about CPU performance. No idea what he’s trying to accomplish with that.

“In CPU-limited games like Homeworld 3, even at 4K, the 9800X3D is still 34% faster than the 7700X, although the margin would be smaller without upscaling.”

Is this an attempt to normalize upscaled resolutions to avoid benchmarking native 4K in CPU reviews?

 
Is this an attempt to normalize upscaled resolutions to avoid benchmarking native 4K in CPU reviews?
I don't think they can normalize it. It seems impossible to explain to the masses why lower resolutions are needed to investigate differences between CPUs.
 
I don't think they can normalize it. It seems impossible to explain to the masses why lower resolutions are needed to investigate differences between CPUs.

I think everyone understands that lower resolutions are needed to show the raw impact of faster CPUs. The disconnect seems to be that people want to know if upgrading their CPU today will improve frame rates and Steve is claiming he wants to show the real performance differences because the faster CPU will provide benefits in the future on more CPU demanding games.
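The model behind the "lower resolution exposes the CPU" argument is simple enough to sketch: the frame rate you observe is roughly capped by whichever of the CPU or GPU is slower that frame, so a fully GPU-bound native 4K test hides the CPU gap while a lighter GPU load (lower resolution or upscaling) reveals it. All the numbers below are invented for illustration:

```python
# Toy model: observed frame rate is limited by the slower of CPU and GPU.
# Every number here is invented purely to illustrate the reasoning.

def observed_fps(cpu_fps, gpu_fps):
    return min(cpu_fps, gpu_fps)

cpus = {"slower CPU": 120, "faster CPU": 160}   # frames/s the CPU can prepare
gpus = {"1080p": 300, "native 4K": 90, "4K + upscaling": 140}

for res, gpu_fps in gpus.items():
    a = observed_fps(cpus["slower CPU"], gpu_fps)
    b = observed_fps(cpus["faster CPU"], gpu_fps)
    print(f"{res}: {a} vs {b} fps -> CPU gap {'visible' if b > a else 'hidden'}")
```

With numbers like these, native 4K shows 90 vs 90 fps (gap hidden) while 4K with upscaling shows 120 vs 140 fps, which is the kind of margin being debated above.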

He should just show results at different resolutions and make it clear in the conclusion (even though it’s already super obvious) when a CPU upgrade today won’t help with 4K gaming. Maybe it’s just Intel users, salty that their hardware is getting trounced, looking for excuses. But either way it doesn’t make sense to drop native 4K testing, and pretending DLSS Balanced is a good substitute is very misleading.
 
Steve is claiming he wants to show the real performance differences because the faster CPU will provide benefits in the future on more CPU demanding games.

It seems that in itself would be a hypothesis that needs data supporting it? That assumption has been brought up, but I don't recall ever really seeing a strong data set supporting it. A particular complication is that future games may leverage/stress the different pros/cons of various architectures.

Does the 5800X3D increasingly pull ahead of the 5800X in 2024 games with an RTX 4090 at 1440p/4K, as the 720p/1080p RTX 3090 tests from 2022 would suggest?

I still feel the general problem with CPU reviews is the heavy reuse of GPU test scenarios. There are already real usage scenarios today that can show CPU gaps, just not the same ones that apply to GPUs.
 