Value of Hardware Unboxed benchmarking

The AA implementation was much easier and cheaper to add to the hardware than RT
Blenders (also known as ROPs) had to be larger to accelerate AA.

RT hardware is very small, much smaller even than tensor cores, so it's very cheap for the huge acceleration it adds. NVIDIA did a comparison 6 years ago detailing how large GPUs would need to be to reach the same ray tracing performance without RT cores; they needed to be much, much larger.

It can be easily increased, decreased, forced, and turned off without drastically changing the rendering
The same is true for most ray tracing implementations, no one is forcing you to do anything.

making it a non-controversial feature
AA was a high-end-only feature; it literally cut fps in half. Most GPUs couldn't handle it.

You bring the controversy on yourself by thinking and acting like RT is forced on you. It isn't, even though it's part of DirectX and Vulkan.
 
Tim is enjoying path tracing in Indiana Jones. Steve still refuses to get on board
I enjoyed the heated discussion between the two. Tim is trying to make Steve understand that if Indiana Jones, Star Wars Outlaws and Avatar are any indication, ray-tracing-only games are becoming the norm, and as a result RT-capable cards are going to be better suited for the future, but Steve refused to acknowledge that, even partially!

The guy has an obvious axe to grind with ray tracing, I don't know what it is exactly, but it's very obvious.
 
If your claim is that a majority of average users were complaining that something looked off with visuals prior to ray tracing, that argument holds no water at all, and there's not an ounce of evidence to support it. On the contrary, a majority complain that they cannot see the difference between RT implementations in their current iteration, yet they can see a huge performance loss, >50% in many cases.
My claim is that the average user should be able to spot the lack of reflections on a given surface, or the absence of shadowing. Whether they "complain" about this or think it is worth the performance penalty to address it is an entirely different question. My position is neutral on this, so if you want to claim for example the majority of users don't care about having more accurate lighting, you are the one who will have to present evidence to support this. (I am curious what proportion of console players are using the RT modes in the Spider-Man games. The lack of proper reflections in the base mode is a pretty big eyesore to me).
 
I enjoyed the heated discussion between the two. Tim is trying to make Steve understand that if Indiana Jones, Star Wars Outlaws and Avatar are any indication, ray-tracing-only games are becoming the norm, and as a result RT-capable cards are going to be better suited for the future, but Steve refused to acknowledge that, even partially!

The guy has an obvious axe to grind with ray tracing, I don't know what it is exactly, but it's very obvious.

Weird dynamic between them in that video. Steve was being a bit of a bully (for lack of a better word) and it was obvious Tim didn't appreciate it.
 
The guy has an obvious axe to grind with ray tracing, I don't know what it is exactly, but it's very obvious.
Still angry he was rendering a Babylon 5 scene in LightWave on his Amiga 2000 back in the day and had a power outage after two weeks of rendering with one day left ;)

I'm obviously joking; Steve would have been very anti-Amiga, I would imagine (haha, not a pun but it tickled me).
 
My claim is that the average user should be able to spot the lack of reflections on a given surface, or the absence of shadowing.
I think there’s a huge disconnect between what you think the average user should be able to spot and what they actually spot. Linus Tech Tips did a video on this a few years ago and the users struggled to notice the difference in games. Their experiment was flawed in the sense that the people they used were often around people who discuss graphics intricacies, and even then the users struggled. You can sample other forums and discussion spaces where you might come across less informed users. The sentiment you find there is not in favour of RT.
Whether they "complain" about this or think it is worth the performance penalty to address it is an entirely different question. My position is neutral on this, so if you want to claim for example the majority of users don't care about having more accurate lighting, you are the one who will have to present evidence to support this. (I am curious what proportion of console players are using the RT modes in the Spider-Man games. The lack of proper reflections in the base mode is a pretty big eyesore to me).
In the console space, 75% of players use performance mode, and for most games on consoles that means no RT.

As an aside, I often wonder if people on here just pigeonhole themselves into highly technical forums like this one or if they wander out into less technical spaces. Sometimes I think you can get a hint based on the things that are prioritized.

The thing about graphics in general is that it’s inferior to gameplay, design, mechanics in every single way. It cannot hold your attention for long and often fades into the background. The most played games are not necessarily the ones that have the best graphics and you can see this by looking at Steam’s most played games.
 
They cover raytracing since Turing and benchmark in subsequent videos.
You're missing the point. When a DF video was asked for, the suggestion was a comparison of the performance hit of current ray tracing games to old raster games at max settings. The idea was to show that current RT performance hits are comparable to old max raster settings, in that people couldn't get a locked 60 back then on the latest and greatest at max settings either.

I believe the idea was to point out the hypocrisy: having to use lower settings now to achieve one's desired performance level is treated as an insult, yet it's pretty much what people have had to do since the days of kids shrinking their Wolfenstein window to postage-stamp size on their dad's 286 so it wasn't like playing Dungeon Master with 90-degree movement.
 

They cover raytracing since Turing and benchmark in subsequent videos.
That's not what was suggested. The idea is comparing the delta between the top-end GPUs and the mid-tier across the generations to see what spending more got you over the generations. e.g. if historically 5x the price netted you the best graphics of the era (4x MSAA, 16x AF, 4 dynamic light sources) at 40 fps, and now 2x the price gets you the best graphics of nowadays (path tracing, blah blah) at 90 fps, the current bang-per-buck would seem a much bigger deal.

The suggestion thus is a video exploring the different price brackets of all the solutions over the decades, including SLI and Titan etc., and what that got you. Without that overall grounding, criticism of current value might be somewhat idealised versus what people should realistically expect.
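To make that concrete, here's a minimal sketch of the sort of bang-per-buck calculation being suggested, using the hypothetical multipliers and frame rates from the example above (illustrative numbers, not measurements):

```python
# Minimal sketch (hypothetical numbers from the example above, not real data)
# of the bang-per-buck comparison being proposed.

def fps_per_price_multiple(price_multiple, fps_at_max_settings):
    """'Best graphics of the era' frame rate per unit of price premium."""
    return fps_at_max_settings / price_multiple

# Historical era: ~5x the mid-tier price bought the era's max settings at ~40 fps
old_value = fps_per_price_multiple(5.0, 40.0)   # 8.0

# Current era: ~2x the mid-tier price buys today's max settings at ~90 fps
new_value = fps_per_price_multiple(2.0, 90.0)   # 45.0

print(f"historical: {old_value:.1f} fps per price multiple")
print(f"current:    {new_value:.1f} fps per price multiple")
print(f"relative bang-per-buck: {new_value / old_value:.2f}x")  # ~5.6x
```

With those made-up figures, the top end today would deliver roughly 5-6x more "max settings" fps per unit of price premium than the historical case, which is exactly the kind of framing the proposed video could ground in real data.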
 
I believe the idea was to point out the hypocrisy ...
The idea shouldn't be to prove any point, but to get a better perspective. An honest investigation will reveal a valid insight, and the purpose of the research should be to establish that insight without going into it prejudiced with a desired outcome. This is the standard broken practice of most 'debate' and content creation (and I dare say science) these days, where instead of answering a question, the content is used to promote an existing way of thinking that people are either prejudiced in favour of or against, thereby nullifying any useful factual discussion that can be derived from the content.
 
I think there’s a huge disconnect between what you think the average user should be able to spot and what they actually spot. Linus Tech Tips did a video on this a few years ago and the users struggled to notice the difference in games. Their experiment was flawed in the sense that the people they used were often around people who discuss graphics intricacies, and even then the users struggled. You can sample other forums and discussion spaces where you might come across less informed users. The sentiment you find there is not in favour of RT.
Again, it's not a question of being "informed". That's the entire point of my argument. It's a question of whether your brain subconsciously notices the difference, and is able to present that to you, as something looking not quite realistic. You can know nothing about game technology or even games and still be highly sensitive to the artifacts produced by current lighting techniques. There is even an example of this given in this thread.

I watched the Linus video and in Tomb Raider the participants were looking for differences that weren't there, since the title only has RT shadows. In Minecraft RTX the girl (Nicole?) was saying that one version looked more artificial but she wasn't sure if that was intended or not, which makes sense for a cartoon-like title. And I didn't see any footage of Wolfenstein: Youngblood to know what was shown or how the participants reacted to it.

I agree that depending on how a technique is used, its contribution to how realistic a scene looks might be small. The contribution of slightly more precise shadows in Tomb Raider is likely to be minor. If select surfaces are reflective in an urban environment, you might not perceive their contribution. (You could make the same argument about volumetric lighting or particle effects, or any technique that contributes towards the realism of a scene.) I don't think it follows from this however, that no improvement in the quality of lighting is going to be perceptible. I would like to see the Linus experiment re-run with Spider-Man, Cyberpunk RT/PT and Indiana Jones PT.

In the console space, 75% of players use performance mode, and for most games on consoles that means no RT.

As an aside, I often wonder if people on here just pigeonhole themselves into highly technical forums like this one or if they wander out into less technical spaces. Sometimes I think you can get a hint based on the things that are prioritized.

The thing about graphics in general is that it’s inferior to gameplay, design, mechanics in every single way. It cannot hold your attention for long and often fades into the background. The most played games are not necessarily the ones that have the best graphics and you can see this by looking at Steam’s most played games.

That's why Spider-Man is the perfect example, because it has a performance RT mode. And as for graphics being "inferior", I think they should not be considered in competition with gameplay and design, but precisely as a tool for defining the gameplay and narrative space. When Valve invested so much time in getting character rendering to look right with Half-Life 2, they did it so you could feel like Alyx was a real person fighting alongside you, not because of some abstract love for "good graphics". Equally, I think the GTA 6 trailer is so compelling because of the consistency of world building that Rockstar can achieve through their crazy investment in environmental detail. The promise of that game is not "better graphics", but being able to explore a modern day rendition of Vice City.
 
That's not what was suggested. The idea is comparing the delta between the top-end GPUs and the mid-tier across the generations to see what spending more got you over the generations. e.g. if historically 5x the price netted you the best graphics of the era (4x MSAA, 16x AF, 4 dynamic light sources) at 40 fps, and now 2x the price gets you the best graphics of nowadays (path tracing, blah blah) at 90 fps, the current bang-per-buck would seem a much bigger deal.

The suggestion thus is a video exploring the different price brackets of all the solutions over the decades, including SLI and Titan etc., and what that got you. Without that overall grounding, criticism of current value might be somewhat idealised versus what people should realistically expect.
Ahh, ok I see. I misunderstood. Such a comparison would also need to factor in salary growth between then and now to be relevant. What people can realistically expect is also heavily influenced by what people can realistically purchase?
 
...

While this may seem controversial to some, I'm strongly of the opinion that RT in its current iteration is not about improving graphics. It's about helping devs save money and time. The hardware the average user has access to is only able to perform ray tracing with significant compromises. Frankly, whether a studio saves money or time is not my concern. However, if they deliver a substandard product in the quest to save time and money, then it becomes a huge problem for me.

...

I don't know how you can justify this, as pretty much every game has a fallback to a non-DXR alternative. Yes, devs are dumping shadow maps to save money and time, but they're also using pure software alternatives that do not require HW RT support. It's also a very cynical attitude, as if all of the devs that proudly showcase their work at conferences etc. are just lying about how proud they are to have improved lighting, materials etc.

As a gamer, you might not care about devs saving time, but if you actually want to play games and not have the industry go belly up, it's probably in your best interest that developers streamline their process. For one, baking lightmaps limits game design, because they do not support dynamic objects. I remember how much people complained about games becoming less dynamic and more static, especially during the PS4/Xbox One era. On the dev side, they just cost a ton of time and money, and considering we have a whole thread detailing industry collapse and layoffs, it's probably best if game devs find alternatives to the technologies that cost them time and money. I can't think of a specific example, but one of my favourite bugs is seeing baked shadows for objects that no longer exist. It's a rare thing, but I know I've seen it. Plus, baked lighting doesn't really work in large/open-world type games with time of day etc.
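As a rough illustration of why baked lighting and dynamic objects don't mix, here's a minimal sketch (all names are hypothetical, not any engine's actual API): a lightmap is just data precomputed offline for static geometry, so anything that moves, spawns, or gets destroyed at runtime has no entry in it and has to be lit some other way.

```python
# Minimal sketch (hypothetical names, not any engine's actual API) of the
# difference between baked and dynamic lighting paths.

# Offline bake: lighting is evaluated once per static surface sample and stored.
# Time of day can't change it, and removing the wall won't remove its shadow.
baked_lightmap = {
    ("wall_01", (0.25, 0.50)): (0.80, 0.70, 0.60),   # (surface, uv) -> RGB
    ("floor_02", (0.10, 0.90)): (0.30, 0.30, 0.35),
}

def shade_static(surface_id, uv):
    # Runtime cost is just a lookup into the precomputed data.
    return baked_lightmap[(surface_id, uv)]

def shade_dynamic(position, normal, evaluate_lighting):
    # A dynamic object has nothing baked for it, so its lighting must be
    # evaluated live every frame (light probes, shadow maps, or ray tracing).
    return evaluate_lighting(position, normal)
```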

Appeasing the market is tough. There's a crowd of multiplayer gamers that don't care about graphics at all, and they'll just set everything to low to get wins. Then there's a large crowd of kind of console-type gamers that mostly play story-based narrative games and they get mad if their PS4 or their PS5 launch games don't look better than the previous gen. There's a crowd of pc gamers that only care about resolution and sharpness and want 4k or even 8k displays. Then you have indie gamers that only care about art and games looking nice, but not necessarily caring about cutting-edge technology. Technology is just options, and it's up to a particular dev to figure out what best suits their game. There's no global singular answer for whether ray-tracing hardware is good or bad, or needed or not needed.
 
Ahh, ok I see. I misunderstood. Such a comparison would also need to factor in salary growth between then and now to be relevant. What people can realistically expect is also heavily influenced by what people can realistically purchase?
As a more extended investigation, yes. I suggested inflation adjusted, but it should maybe be considered in terms of buying power. However, the investigation only needs to compare GPUs within a generation. Subsequent investigation on pricing and value can be held independently as part of the discussion on the findings.

That is, focus the question so it doesn't require so much work that answering it becomes impossible, and a video can actually be created! ;)
 
Ahh, ok I see. I misunderstood. Such a comparison would also need to factor in salary growth between then and now to be relevant. What people can realistically expect is also heavily influenced by what people can realistically purchase?

Does that include salaries for employees of TSMC, Micron, Nvidia, AMD etc :sneaky:

The idea shouldn't be to prove any point, but to get a better perspective. An honest investigation will reveal a valid insight, and the purpose of the research should be to establish that insight without going into it prejudiced with a desired outcome. This is the standard broken practice of most 'debate' and content creation (and I dare say science) these days, where instead of answering a question, the content is used to promote an existing way of thinking that people are either prejudiced in favour of or against, thereby nullifying any useful factual discussion that can be derived from the content.

On one hand you have people who appreciate the technical advancements and progression of graphics tech and the gradual improvements they bring. I’m not sure what the other camp is arguing for. If you don’t like the performance hit of RT you can just not use it.

If their view is that we would have better visuals without RT they haven’t provided any evidence to that effect.
 
If their view is that we would have better visuals without RT they haven’t provided any evidence to that effect.
There'll never be any evidence. The argument is highly theoretical. It's kinda like suggesting if we hadn't invented silicon chips, we'd have a different, better tech by now. Maybe. ¯\_(ツ)_/¯

Prior to RT hardware in the current consoles, we had a lengthy thread looking at alternative technologies and questioning whether RTRT was really of value in the next-gen consoles. There were lots of promising options. None has really developed into anything, but at the same time far less effort has been invested into exploring alternatives than would have happened if RTRT hardware hadn't been introduced to GPUs. If the past 6 years didn't have RTX, who knows what developments would have happened.

There's a great deal of faith involved in the arguments.
 
There's a great deal of faith involved in the arguments.
I suppose this is true to a point; however, literal "ray tracing" follows (to the best of our ability) the physical model of how photons move through our universe. Once the photon is emitted, it follows a 1D trajectory through space and time until it is either fully absorbed or partially reflected / refracted during transit. The underlying math and methods replicate* how our eyeballs perceive the world. Of course, our current models and implementations of ray tracing are limited by the compute we have available; we've certainly optimized out or fully ignored a lot of the true physical model of what can happen to photons between when they're created and when they're absorbed.
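For what it's worth, here's a very rough sketch of the "follow the ray until it's absorbed or scattered" loop described above. All the helper names (scene.intersect, hit.scatter_direction, etc.) are hypothetical placeholders rather than any renderer's real API; it's only meant to show the shape of the algorithm:

```python
import random

def trace(ray_origin, ray_direction, scene, max_bounces=8):
    """Follow one ray, gathering light until it is absorbed or leaves the scene.

    'scene' is a hypothetical object providing intersect() and sky_emission().
    """
    throughput = 1.0   # fraction of energy the path still carries
    radiance = 0.0     # light gathered along the path so far

    for _ in range(max_bounces):
        hit = scene.intersect(ray_origin, ray_direction)
        if hit is None:
            # Ray escaped: pick up whatever the sky/environment emits.
            radiance += throughput * scene.sky_emission(ray_direction)
            break

        radiance += throughput * hit.emission   # emissive surfaces (lamps, fire)
        throughput *= hit.reflectance            # partial absorption at the surface

        # Russian roulette: probabilistically end low-energy paths, boosting
        # survivors so the estimate stays unbiased.
        survive_p = min(throughput, 1.0)
        if random.random() > survive_p:
            break
        throughput /= survive_p

        # Continue along the reflected/refracted direction.
        ray_origin = hit.position
        ray_direction = hit.scatter_direction()

    return radiance
```

Real renderers run many such paths per pixel with far more physics on top (importance sampling, spectral effects, denoising), but the core loop really is that simple, which is part of why dedicated intersection hardware pays off so well.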

However, does that unequivocally translate to "better visuals?" I suppose it depends on the game. There are countless games which do not depend, and arguably will never depend, on a "true to life" lighting model.

* - our replication of visible light photons is obviously flawed by the limits of our time and compute power. While most of us consider photons as "light", the reality is they cover a wide range of what we would describe as radiation. They start as very low-energy waves in the radio and microwave bands, move up the energy scale to what we consider visible and invisible light, with even higher energy forms becoming what we consider ionizing radiation -- X-rays and gamma rays. When we talk about ray tracing, we're limiting ourselves to visible light photons, but the rest can actually matter for emissivity reasons. Need a real world example of how a more accurate visualization from photon-based ray tracing could benefit from considering higher energy states? One word: fire.
 