Value of Hardware Unboxed benchmarking

Why are you drawing equivalence between games that require RT to run, and games where you can turn settings up high enough to cause problems for a 12GB card? There's no equivalence there at all.

The number of games that "absolutely require 12GB VRAM" is precisely zero. Fewer than the number of games that "absolutely require RT".

There are probably about 5 games out there that will run into serious issues on the 4070Ti due to its VRAM at otherwise playable settings, where the 7900XT would not run into the same issues due to its larger frame buffer. And pretty much all of those can be resolved by turning textures from Ultra to High.

There are dozens of games which would run into serious issues on a 7900XT at settings which the 4070Ti would have no issue with due to its superior RT performance.



DLSS is a combination of hardware and software but that's entirely irrelevant to the end user experience. With a 4070Ti, you have it. With a 7900XT you don't. That absolutely needs to be factored into the value proposition.



I already agreed the price was too high for what you were getting compared to previous generations. The argument is that the 7900XT offered even less value when accounting for its lack of AI upscaling and performant RT.
I mean, ultimately it comes down to what games you play. If you play RT games then AMD is a nonstarter, but personally I would rather have more VRAM than better RT; that’s due to my own use cases.

I think any reviewer wishing to do right by their customers should absolutely not have recommended the launch 4070 Ti. The Ti Super, maybe, and I actually quite like the 4070 Super and I’ve recommended it to many people. The card is simply too expensive to be a 1440p card, and the 12GB of VRAM holds it back from 4K gaming even with DLSS.
 
I play a lot of call of duty and I’ve run into many scenarios where it stutters until I turn down texture resolution. I’m using DLSS Perf so 1080p -> 4k.
Does it run without such stutters on a GPU with more VRAM? I've seen no benchmarks which suggest that CoD of all games has VRAM issues.

I’ve had issues with Spider-Man as well, particularly the more RT I turn on.
So does it run better on 7900XT the more RT you turn on?
The comparison with RT must account for the fact that despite having more VRAM, similarly priced Radeon GPUs have slower RT. Over the last year or so I've yet to see a game where a 16GB 6800XT would end up being faster at any resolution with RT than a 10GB 3080. It's one of the reasons why the choice here isn't easy.

Friend of mine has had tons of issues with FH5.
Don't see many issues here even on 8GB cards.
Also note that this is a 2021 game. SM games are 2022, modern COD engine hasn't changed much since 2020 I think.
This is pretty much what I was talking about - from what I'm seeing there are actually fewer such games over the recent couple of years than there were previously. Some were fixed with patches (RE4, TLOU), some got better memory management (related to UE5 taking over from UE4 mostly), and there are fewer games now where VRAM usage is used as a marketing weapon (like FC6).
So when people say that "12GBs aren't enough!" I honestly don't understand what they mean, because for all intents and purposes 12GBs are enough, even in 4K - maybe not with RT, but running RT on a 4070Ti in 4K is likely a heavy task anyway.
You could argue that the card with more VRAM would "age better" - and it's true to a degree if you ignore the fact that to "age better" such a card would also need to run RT as well as the other card. From all the recent comparisons between 6800XT and 3080 (10GB one) I don't see any signs that the former is "aging better" yet - and both of them are starting to hit less than 30 fps in Ultra RT modes even in 1080p. There is no guarantee that the same thing won't happen between 4070Ti and 7900XT in a few years.
 
So I’m still in the same boat not understanding what people mean by “more raster”. Is there some raster technique out there that has the potential to give us better shadows, reflections and GI? And by raster I mean throwing triangles at a rasterizer vs casting rays at triangles.
(hopefully goes without saying but this is just my personal opinion)

Yeah, of course there is. We’ve had constant innovation in rasterised computer graphics hardware and software the entire time we’ve had GPUs (even NV1, even Larrabee). The pace ebbs and flows as the hardware and programming models evolve but the entire history of GPUs to date is your existence proof. Reductive I know, but UE5 looks better than UE4 in foundational ways if you turn ray tracing off. I confidently guess UE6 will keep going.

To reiterate what I was saying, you seem to be narrowing in on what new advances are left in rendering the triangles a rasteriser can see, versus rendering the triangles that ray tracing gets to see. I was hopefully clear that I don’t personally see it as a versus thing at all. I think anyone who does isn’t thinking properly about it (which is very easy to do, we’re almost forced into it being a this vs that).

Instead, to me personally anyway, the best uses of rasterisation and ray tracing in improving modern rendering are incredibly complementary, not mutually exclusive. I would argue that capitalising and leaning on the well-honed abilities of the GPU when rasterising, to then guide where and how best to optimally drive available ray flow (and also to guide where and how best to apply a denoiser, and/or an upscaler, and…) is the best combined use of both rendering approaches, and the best use of the overall GPU machinery we have today.
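
Here’s a rough sketch of the kind of guiding I mean, as a toy CPU loop (the tiny G-buffer, the roughness values and the thresholds are all invented for illustration; a real renderer would do this classification on the GPU from a rasterised G-buffer and then dispatch the actual rays):

```cpp
// Toy CPU sketch of "the raster pass guides where the rays go".
// Everything here is illustrative: the 8-texel "G-buffer", the roughness
// values and the 0.5 threshold are made up.
#include <cstdio>
#include <vector>

struct GBufferTexel {
    float roughness;      // from the raster pass: how glossy the surface is
    bool  screenSpaceHit; // did a cheap screen-space trace already succeed?
};

int main() {
    // Pretend this came out of an ordinary rasterisation pass.
    std::vector<GBufferTexel> gbuffer = {
        {0.05f, false}, {0.10f, true}, {0.90f, false}, {0.30f, false},
        {0.02f, false}, {0.70f, true}, {0.15f, false}, {0.95f, false},
    };

    int raysSpent = 0;
    for (size_t i = 0; i < gbuffer.size(); ++i) {
        const GBufferTexel& t = gbuffer[i];
        // Classification: only spend "real" rays where they visibly help -
        // glossy surfaces the cheap screen-space fallback couldn't resolve.
        bool wantsRay = (t.roughness < 0.5f) && !t.screenSpaceHit;
        if (wantsRay) {
            ++raysSpent; // stand-in for tracing a ray against the BVH
            std::printf("texel %zu: trace a reflection ray\n", i);
        } else {
            std::printf("texel %zu: keep the raster/screen-space result\n", i);
        }
    }
    std::printf("rays spent on %d of %zu texels\n", raysSpent, gbuffer.size());
    return 0;
}
```

The exact heuristic doesn’t matter; the point is that the rasterised data has already told you where the available ray flow is worth spending.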

There’s honestly so much road left before that runs out as a win for improving overall performance and image quality, and to maximise the utilisation and the balance of the (largely fixed function and hard to generally repurpose) logic being laid down to accelerate both ways of drawing.

There’s also lots and lots of room left to evolve how GPUs rasterise in hardware, and trace rays, and are programmed to do both together. The former still isn’t a solved problem after three decades or more of hardware approaches and programming model advances.

The latter is so inflexible in hardware and the programming model almost hilariously awkward (especially on PC) that it’s barely scratched the surface of what it could do and how it could work. The combination (and maybe even proper blend if you dream big enough) of both is the truly exciting part of innovating in real-time graphics hardware and software for me. I reckon I’ll see out my career witnessing innovation in both, in both hardware and software, and I’ve still got 20 years or so left.

So I kind of don’t follow along with anyone who believes ray tracing is the one true way to render real-time computer graphics, and that the industry should kind of just stop thinking about innovating in, and, I dunno, piecewise throw away, any other way now that we’ve added ray tracing acceleration. That’s too narrow.

“Thou must always rasterayse”, is actually a commandment if you squint closely at ye olde graphicks texts. Right next to “love thy denoiser”, and “thou shall not bear false witness against thy profiler”.
 
To reiterate what I was saying, you seem to be narrowing in on what new advances are left in rendering the triangles a rasteriser can see, versus rendering the triangles that ray tracing gets to see. I was hopefully clear that I don’t personally see it as a versus thing at all. I think anyone who does isn’t thinking properly about it (which is very easy to do, we’re almost forced into it being a this vs that).
When people talk about RT on GPUs these days they talk about RT h/w which is accessible through DXR API. Thus the comparisons are not in fact "RT" vs "no RT" but "h/w RT done via the GPU RT h/w" vs "some rendering done w/o using GPU RT h/w". The latter could easily be path tracing (wasn't there a game which used a "software" PT like that just recently?) but people would still say that it's "not RT" because when they say RT they mean that particular RT implementation using GPU RT h/w.

The funny part in that is that said RT h/w can't do jack shit without the "rasterization" h/w (i.e. shading, geometry processing, etc.). Which is the main reason why putting them as some opposites is completely misleading - to improve on RT h/w performance a GPU must also improve on its "rasterization" performance. So maybe it's just a naming thing and we should just stop using the "rasterization" word or something.
 
Does it run without such stutters on a GPU with more VRAM? I've seen no benchmarks which suggest that CoD of all games has VRAM issues.


So does it run better on 7900XT the more RT you turn on?
The comparison with RT must account for the fact that despite having more VRAM, similarly priced Radeon GPUs have slower RT. Over the last year or so I've yet to see a game where a 16GB 6800XT would end up being faster at any resolution with RT than a 10GB 3080. It's one of the reasons why the choice here isn't easy.


Don't see many issues here even on 8GB cards.
Also note that this is a 2021 game. SM games are 2022, modern COD engine hasn't changed much since 2020 I think.
This is pretty much what I was talking about - from what I'm seeing there are actually fewer such games over the recent couple of years than there were previously. Some were fixed with patches (RE4, TLOU), some got better memory management (related to UE5 taking over from UE4 mostly), and there are fewer games now where VRAM usage is used as a marketing weapon (like FC6).
So when people say that "12GBs aren't enough!" I honestly don't understand what they mean, because for all intents and purposes 12GBs are enough, even in 4K - maybe not with RT, but running RT on a 4070Ti in 4K is likely a heavy task anyway.
You could argue that the card with more VRAM would "age better" - and it's true to a degree if you ignore the fact that to "age better" such a card would also need to run RT as well as the other card. From all the recent comparisons between 6800XT and 3080 (10GB one) I don't see any signs that the former is "aging better" yet - and both of them are starting to hit less than 30 fps in Ultra RT modes even in 1080p. There is no guarantee that the same thing won't happen between 4070Ti and 7900XT in a few years.
Benchmarks don’t actually show real gameplay; they play a pre-made sequence that lasts a few minutes. So yes, you won’t see these issues in benchmarks, and the only people able to see the issue are those who can recognize stutter and think to check 1% lows.
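
For anyone unfamiliar, "1% lows" just means the average framerate over the worst 1% of frames. A rough sketch of the arithmetic (the frame times here are invented; real ones would come from a capture tool such as PresentMon):

```cpp
// Sketch of computing "1% low" fps from a list of per-frame times.
// The frame times are made up: mostly smooth 16.7 ms frames with a few
// 60 ms stutter spikes mixed in.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> frameMs(1000, 16.7);
    frameMs[100] = frameMs[400] = frameMs[750] = 60.0;

    // Average fps over the whole run.
    double avgMs = 0.0;
    for (double ms : frameMs) avgMs += ms;
    avgMs /= frameMs.size();

    // 1% lows: sort ascending, then average only the slowest 1% of frames.
    std::sort(frameMs.begin(), frameMs.end());
    size_t worstCount = std::max<size_t>(1, frameMs.size() / 100);
    double worstMs = 0.0;
    for (size_t i = frameMs.size() - worstCount; i < frameMs.size(); ++i)
        worstMs += frameMs[i];
    worstMs /= worstCount;

    std::printf("average: %.1f fps, 1%% low: %.1f fps\n",
                1000.0 / avgMs, 1000.0 / worstMs);
    return 0;
}
```

In this toy run the average barely moves (~59 fps) while the 1% low craters (~34 fps), which is exactly the stutter you feel but an average-fps bar chart hides.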
 
(hopefully goes without saying but this is just my personal opinion)

Yeah, of course there is. We’ve had constant innovation in rasterised computer graphics hardware and software the entire time we’ve had GPUs (even NV1, even Larrabee). The pace ebbs and flows as the hardware and programming models evolve but the entire history of GPUs to date is your existence proof. Reductive I know, but UE5 looks better than UE4 in foundational ways if you turn ray tracing off. I confidently guess UE6 will keep going.

To reiterate what I was saying, you seem to be narrowing in on what new advances are left in rendering the triangles a rasteriser can see, versus rendering the triangles that ray tracing gets to see. I was hopefully clear that I don’t personally see it as a versus thing at all. I think anyone who does isn’t thinking properly about it (which is very easy to do, we’re almost forced into it being a this vs that).

Instead, to me personally anyway, the best uses of rasterisation and ray tracing in improving modern rendering are incredibly complementary, not mutually exclusive. I would argue that capitalising and leaning on the well-honed abilities of the GPU when rasterising, to then guide where and how best to optimally drive available ray flow (and also to guide where and how best to apply a denoiser, and/or an upscaler, and…) is the best combined use of both rendering approaches, and the best use of the overall GPU machinery we have today.

There’s honestly so much road left before that runs out as a win for improving overall performance and image quality, and to maximise the utilisation and the balance of the (largely fixed function and hard to generally repurpose) logic being laid down to accelerate both ways of drawing.

There’s also lots and lots of room left to evolve how GPUs rasterise in hardware, and trace rays, and are programmed to do both together. The former still isn’t a solved problem after three decades or more of hardware approaches and programming model advances.

The latter is so inflexible in hardware and the programming model almost hilariously awkward (especially on PC) that it’s barely scratched the surface of what it could do and how it could work. The combination (and maybe even proper blend if you dream big enough) of both is the truly exciting part of innovating in real-time graphics hardware and software for me. I reckon I’ll see out my career witnessing innovation in both, in both hardware and software, and I’ve still got 20 years or so left.

So I kind of don’t follow along with anyone who believes ray tracing is the one true way to render real-time computer graphics, and that the industry should kind of just stop thinking about innovating in, and, I dunno, piecewise throw away, any other way now that we’ve added ray tracing acceleration. That’s too narrow.

“Thou must always rasterayse”, is actually a commandment if you squint closely at ye olde graphicks texts. Right next to “love thy denoiser”, and “thou shall not bear false witness against thy profiler”.
Agreed
 
The funny part in that is that said RT h/w can't do jack shit without the "rasterization" h/w (i.e. shading, geometry processing, etc.). Which is the main reason why putting them as some opposites is completely misleading - to improve on RT h/w performance a GPU must also improve on its "rasterization" performance. So maybe it's just a naming thing and we should just stop using the "rasterization" word or something.
You can draw using just DXR for all visibility, no rasteriser or traditional VS or mesh shader pipeline to feed it. I thought that was part of what we were talking about.
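
As a minimal illustration of what I mean (a toy CPU loop standing in for a DXR ray generation shader; the one-sphere scene and tiny ASCII framebuffer are obviously made up): every pixel’s visibility comes from casting a ray, and nothing is ever rasterised.

```cpp
// Toy "primary visibility by ray casting only" sketch: one camera ray per
// pixel, tested against a single sphere, printed as ASCII. No rasteriser,
// no vertex/mesh pipeline - just rays, as a DispatchRays launch would do.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Does a unit-length ray from `origin` along `dir` hit the sphere?
bool hitSphere(Vec3 origin, Vec3 dir, Vec3 center, float radius) {
    Vec3 oc = {origin.x - center.x, origin.y - center.y, origin.z - center.z};
    float b = 2.0f * (oc.x * dir.x + oc.y * dir.y + oc.z * dir.z);
    float c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - radius * radius;
    return b * b - 4.0f * c >= 0.0f; // a == 1 because dir is normalised
}

int main() {
    const int W = 32, H = 12;
    Vec3 eye = {0, 0, 0};
    Vec3 sphere = {0, 0, -3};
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // One primary ray per pixel.
            float u = (x + 0.5f) / W * 2.0f - 1.0f;
            float v = 1.0f - (y + 0.5f) / H * 2.0f;
            Vec3 d = {u, v, -1.0f};
            float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
            d = {d.x / len, d.y / len, d.z / len};
            std::putchar(hitSphere(eye, d, sphere, 1.0f) ? '#' : '.');
        }
        std::putchar('\n');
    }
    return 0;
}
```

You’d still shade whatever the rays hit with ordinary shader code, but the visibility step itself never touches the rasteriser.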
 
You can draw using just DXR for all visibility, no rasteriser or traditional VS or mesh shader pipeline to feed it. I thought that was part of what we were talking about.
Using just DXR (as a superset of DX12) sure but not by just using the GPU RT h/w.
 
You can draw using just DXR for all visibility, no rasteriser or traditional VS or mesh shader pipeline to feed it. I thought that was part of what we were talking about.

To Shifty’s point we can try to be more specific. When people complain about RT’s poor performance or poor IQ impact they’re talking specifically about RT for shadows, reflections or GI. AFAIK the only use of RT for primary visibility so far is Quake2RTX.

So it follows that when people ask for more raster and less RT they either believe there are raster based solutions to those specific problems or they’re happy with the current state of shadows, reflections and GI and want to see improvements in other areas.

As others have noted there is a bit of a terminology issue here. Everything that’s not RT isn’t necessarily rasterization. So when people say they want more raster it would be helpful if they indicated what specific aspect of rendering they’re actually referring to.
 
I find it ironic that the IHV that markets RT the hardest and dedicates the most die space to RT hardware also has the most die space dedicated to specific rasterization tasks. Nvidia's PolyMorph engine handles tessellation, vertex fetch, and other tasks. AFAIK AMD doesn't dedicate HW for that, which is forward-looking if mesh shaders or software rasterization take off since they don't utilize that dedicated HW. If future generations of Nvidia cards ditch the PolyMorph engine in response they could dedicate even more die space to RT. So paradoxically, new rasterization techniques like mesh shaders and SW rasterization could push ray tracing further. I think there will be a point at which most games will be using mesh shaders or SW rasterization for primary visibility and RT for lighting (with neural rendering techniques potentially added), and it will stay that way for a very long time.
 
I find it ironic that the IHV that markets RT the hardest and dedicates the most die space to RT hardware also has the most die space dedicated to specific rasterization tasks. Nvidia's PolyMorph engine handles tessellation, vertex fetch, and other tasks. AFAIK AMD doesn't dedicate HW for that, which is forward-looking if mesh shaders or software rasterization take off since they don't utilize that dedicated HW. If future generations of Nvidia cards ditch the PolyMorph engine in response they could dedicate even more die space to RT. So paradoxically, new rasterization techniques like mesh shaders and SW rasterization could push ray tracing further.

Nvidia’s fixed function pipeline has been overkill since Kepler. I assume it’s helpful in ProViz/CAD workloads but there’s no evidence that it helps much in games. Nvidia’s advantage back then was in triangle setup, tessellation and rasterization. I think AMD had them beat on fillrate though, and has caught up enough in the other areas for it to be a non-issue.

I think there will be a point at which most games will be using mesh shaders or SW rasterization for primary visibility and RT for lighting (with neural rendering techniques potentially added), and it will stay that way for a very long time.

That’s a very likely outcome. While UE5 is a huge leap forward for rasterizing primary visibility it’s trending toward RT for lighting.
 
To Shifty’s point we can try to be more specific. When people complain about RT’s poor performance or poor IQ impact they’re talking specifically about RT for shadows, reflections or GI. AFAIK the only use of RT for primary visibility so far is Quake2RTX.

So it follows that when people ask for more raster and less RT they either believe there are raster based solutions to those specific problems or they’re happy with the current state of shadows, reflections and GI and want to see improvements in other areas.

As others have noted there is a bit of a terminology issue here. Everything that’s not RT isn’t necessarily rasterization. So when people say they want more raster it would be helpful if they indicated what specific aspect of rendering they’re actually referring to.

I agree with the poster who stated it's a language issue. Gamers like prettier graphics. I don't believe it's them being content with current rendering at all. RT just doesn't differentiate itself enough in most cases for such heavy performance costs, so people attribute this to rasterization being better. Most people don't have the knowledge to fully dive into the details of how things are actually being rendered. If a majority of RT implementations offered an improvement akin to Metro EE, the discourse around RT would be completely different.
 
Using just DXR (as a superset of DX12) sure but not by just using the GPU RT h/w.
Yes, just using the ray tracing hardware to compute primary visibility, no rasterisation (specifically the part of the pipeline that solves primary visibility by taking geometry and turning it into pixel fragments).
To Shifty’s point we can try to be more specific. When people complain about RT’s poor performance or poor IQ impact they’re talking specifically about RT for shadows, reflections or GI. AFAIK the only use of RT for primary visibility so far is Quake2RTX.

So it follows that when people ask for more raster and less RT they either believe there are raster based solutions to those specific problems or they’re happy with the current state of shadows, reflections and GI and want to see improvements in other areas.

As others have noted there is a bit of a terminology issue here. Everything that’s not RT isn’t necessarily rasterization. So when people say they want more raster it would be helpful if they indicated what specific aspect of rendering they’re actually referring to.
I’m not sure why you’re so fixated on shadows, reflections and GI. What about other effects that can be helped by ray tracing?

Narrowing it down to just three types of things you can render doesn’t make much sense to me, nor does talking about it as being only a choice between more of one way of rendering and less of the other.

I do agree that there’s a terminology problem at play. Just in case I’m not helping: when I personally say I want “more raster”, I literally mean more innovation in how we feed and program the actual hardware rasteriser in the modern graphics pipeline, and what happens in the hardware and software as a result of that pipeline stage, and how its role and functionality evolves and results in better graphics now and in the future.

I then always want that in tight concert with “more ray tracing”. Not just more of one and less of the other. I personally believe more of both should always be the goal.
 
Yes, just using the ray tracing hardware to compute primary visibility, no rasterisation (specifically the part of the pipeline that solves primary visibility by taking geometry and turning it into pixel fragments).
You won't be able to do anything with these fragments with RT h/w; you'd still need to shade them.
 
I’m not sure why you’re so fixated on shadows, reflections and GI. What about other effects that can be helped by ray tracing?

Those are the applications of RT we have in games today that people deem to be unimpressive or too expensive. So then the obvious question is what’s the alternative?

I do agree that there’s a terminology problem at play. Just in case I’m not helping: when I personally say I want “more raster”, I literally mean more innovation in how we feed and program the actual hardware rasteriser in the modern graphics pipeline, and what happens in the hardware and software as a result of that pipeline stage, and how its role and functionality evolves and results in better graphics now and in the future.

No argument there. Better rasterization for primary visibility is awesome and I haven’t seen anyone suggest otherwise.

I then always want that in tight concert with “more ray tracing”. Not just more of one and less of the other. I personally believe more of both should always be the goal.

Is that true in a general sense though? Take shadow maps for example. That seems to be a cut-and-dried case of reducing the amount of raster (no more rasterizing the scene from the perspective of a few lights) and doing a lot more RT.
 
How much did the GTX 1060 cost?

A lot more than the GTX 760.

I am bitching that a 1440p card costs $800.

Did you bitch about the GTX 1060's price?

Either make it appropriate for 4k or charge less, until then I will bitch lol.

Did you say that about the 9800GTX?

I would love to hear why it’s ’not that simple’ for Nvidia to drop prices when they did literally just that for the 4070, and AMD does it with almost all of their releases lol.

You can only drop prices so much before it's not worth making the product anymore. And without knowing exactly the cost of each GPU it's impossible to know how much their prices can be dropped, so yes, it's not that simple.

AMD have to drop prices to force sales; they have always done this.

I play a lot of call of duty and I’ve run into many scenarios where it stutters until I turn down texture resolution. I’m using DLSS Perf so 1080p -> 4k.

Sure that's not DLSS?

I’ve had issues with Spider-Man as well, particularly the more RT I turn on.

Friend of mine has had tons of issues with FH5.

Funny, I've never had an issue with either of those games, at native 4k.
 
You won't be able to do anything with these fragments with RT h/w; you'd still need to shade them.
You’re free to run some shaders to compute an image using your ray tracing, but you still haven’t rasterised. This is effectively the whole communication problem (that I have, not saying you do) in a nutshell. Earlier in the thread we all played fast and loose with the word raster and it didn’t help, but now we’re being specific and using the definition in the graphics pipeline and we’re still a bit stuck.

The lack of agreed precision in the terminology is part of why I think it can often be problematic to discuss ray tracing. Maybe it’s (genuinely) just a problem I have and everyone else is following along with each other.
 
Those are the applications of RT we have in games today that people deem to be unimpressive or too expensive. So then the obvious question is what’s the alternative?
I don’t think there has to be an alternative. To me it’s more about effort from the whole ecosystem to give devs the hardware and programming model that allows them to do what they need to keep improving what’s possible, with those applications and all the others. Better ways to use the hardware, better hardware.
Is that true in a general sense though? Take shadow maps for example. That seems to be a cut and dry case of reducing the amount of raster (no more rasterizing the scene from the perspective of a few lights) and doing a lot more RT.
I think it’s true. A combination of rasterising and ray tracing shadows can be a win (perf and image quality) depending on how you’re drawing. It’s a great classify and execute example to me in fact. Rasterise your shadows, including classification of pixels where ray tracing can improve image quality, then you get to focus your available ray flow on improving those pixels, rather than wasting it doing the guts of what rasterisation is already better at.

You probably need the guts of that rasterisation path anyway for the non-RT fallback while we’re still in the era of it not being ubiquitous.
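
Concretely, something like this (a toy sketch of the classification only; the 1D screen and the 0.05/0.95 thresholds are invented, and the real version runs on the GPU against a rasterised shadow map):

```cpp
// Toy classify-and-execute shadow sketch: the rasterised shadow map gives every
// pixel a cheap answer, and shadow rays are only spent on pixels classified as
// penumbra, where the shadow map result looks worst.
#include <cstdio>
#include <vector>

int main() {
    // Per-pixel visibility as the shadow map pass estimated it:
    // 0 = fully shadowed, 1 = fully lit.
    std::vector<float> shadowMapVis = {1.0f, 1.0f, 0.8f, 0.4f, 0.1f, 0.0f, 0.0f, 0.6f};

    int raysSpent = 0;
    for (size_t i = 0; i < shadowMapVis.size(); ++i) {
        float v = shadowMapVis[i];
        // Fully lit or fully shadowed pixels keep the cheap raster result;
        // only the soft penumbra band gets refined with a ray.
        bool penumbra = (v > 0.05f && v < 0.95f);
        if (penumbra) {
            ++raysSpent; // stand-in for tracing a shadow ray toward the light
            std::printf("pixel %zu: refine with a shadow ray\n", i);
        } else {
            std::printf("pixel %zu: keep the shadow-map result (%.2f)\n", i, v);
        }
    }
    std::printf("shadow rays spent on %d of %zu pixels\n", raysSpent, shadowMapVis.size());
    return 0;
}
```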
 
A combination of rasterising and ray tracing shadows can be a win (perf and image quality) depending on how you’re drawing
We already have that in many games, for shadows and reflections; some games deploy screen-space effects and use ray tracing as a fallback where screen space fails. Some games deploy this scheme even for global illumination. We've had this since Battlefield V (the first game to deploy hardware ray tracing).
 