Should full scene RT be abandoned?

How is graphics technology ever supposed to advance if you're stuck waiting for the hardware to be up to the task before trying it? You could very easily argue that RT is as good as it is today because the hardware isn't up to the task. You hack solutions together until eventually the hardware is capable of doing it without the hacks... then you move on to the next hacky solution.
The issue with this is that we're making a game first and foremost. So most users should be able to play the game at acceptable settings and acceptable visual quality/clarity. As a person who enjoys playing games, I've yet to see how this RT push meaningfully benefits me as a consumer. All I see is developers saving money and improving their workflows while passing the costs onto me. It's just another way of trying to create slack that allows their wasteful spending and poor project management to continue. In an era where we're getting the smallest jumps in graphics hardware coupled with aggressive increases in price, it doesn't seem wise to shrink your TAM based on the hardware users actually have.

As it stands right now, I can't think of any game where the presence of RT made it more fun or enjoyable to play. The best game in terms of RT implementation for me to date is Cyberpunk, and the addition of PT did not even make it 1% more fun to play. In fact, I'd argue that there's an inverse relationship between the enjoyment I derive from a game and the time spent implementing RT. Avatar, Star Wars Outlaws, Indiana Jones, Alan Wake 2 and Control are some of the most boring games I've ever played.
And I think they've done such a good job of it in such a short time that people forget just how intensive ray/path tracing anything is.. The fact that we're getting not just old games rendered with Path Tracing, but brand new current gen games with AAA visuals.. It's absolutely insane.
At the cost of visual clarity with poor upscaling algorithms upscaling from really low resolutions. But like you mentioned, you're ok with the tradeoffs. I am certainly not.
Nothing is being forced on anyone. I think the speed at which things are progressing is just as it's supposed to be. 10-15 years from now will be insane.
It is being forced on users because developers are forgoing the use of tried and tested methods like SSR with fallback perspective-correct cubemaps, planar reflections, etc. Now you have to pick between low-sampled RT with a high performance cost or really bad SSR and cubemap implementations at lower performance costs. The sad part about this is that instead of using the additional hardware resources to push the game design and visuals to be more expansive, they're instead being wasted on features that deliver very marginal impact. Most users can't even tell the differences because the hardware is not nearly powerful enough to ray trace at a level that would show significant differences. If you show an average user a game with baked GI and real-time GI, most can't even tell the difference. So, when you look at it from that perspective, it's hard on the consumer side to justify the additional cost.
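(For reference, the tried and tested hybrid being described usually resolves per pixel along these lines: take the SSR result where the screen-space march found valid on-screen geometry, and fade toward a cubemap fallback where it didn't. A minimal sketch; the names and the confidence heuristic are illustrative, not from any particular engine.)

```cpp
#include <algorithm>

// One resolved reflection sample, linear RGB.
struct Color { float r, g, b; };

// Blend the SSR result toward a cubemap fallback. ssrConfidence is in [0,1]:
// 1 = the screen-space march hit valid on-screen geometry, 0 = the ray left
// the screen (or hit nothing), so fall back entirely to the cubemap.
Color resolveReflection(Color ssr, Color cubemap, float ssrConfidence) {
    float t = std::clamp(ssrConfidence, 0.0f, 1.0f);
    auto lerp = [t](float a, float b) { return a + (b - a) * t; };
    return { lerp(cubemap.r, ssr.r),
             lerp(cubemap.g, ssr.g),
             lerp(cubemap.b, ssr.b) };
}
```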
 
I'll argue that that's in part because of how RT was heralded. It was supposed to be photoreal from the moment RTX 20xx appeared, with all sorts of previews, concepts, and demos ushering in a brave new world. The realities and complexities of RT meant these targets were never realistic.
I never felt that it was heralded like that at all. It was supposed to be more realistic.. of course.. because it's approximating how light actually works in the real world. I mean.. people claimed Gran Turismo 1 was the most photorealistic racing game back in 1997/98.. that kind of stuff always happens when there's some kind of advancement in technology. It's always "more realistic".. I honestly never got that impression and I feel like I had a good pulse on things around that time.

Not directly, but Nesh's point is that alternatives are being dropped. So where you might have a game that could use planar reflections to good effect, nowadays that option isn't present and you either have intensive RT requiring a high-end GPU or no reflection. If users could choose which techniques they prefer, there'd be no issues, but it's too much work to ask the devs to create two (or more) different rendering paths and let the players decide.
Oh for sure. I still don't think that means it should be abandoned. I think we need to just accept what developers choose for their own games. They're just as motivated to push and adopt new technologies for their own curiosities and education as we are. The comparison may be in favor of one technique over the other in a certain instance, but maybe not in another. I think it's normal that we accept some caveats and growing pains with the adoption of any new technologies to push through and eventually become better overall for it. I think the more you simulate instead of imitate, the better off we'll be, and thus some ugly spots here and there are worth it.
 
So where you might have a game that could use planar reflections to good effect, nowadays that option isn't present and you either have intensive RT requiring a high-end GPU or no reflection.

Was it actually the case that games used planar reflections to good effect before RT? I don’t remember those but I do remember the bad SSR implementations.
 
Should full scene RT be abandoned?
No.

The PC gaming world absolutely benefited from Crysis performing at single-digit framerates on the video cards and CPUs of the day. Nothing else at the time could boast so many thousands of discrete pixel shaders, so much use of high-resolution texturing, so much use of real-time lighting and shadow and interactive physics and effects post-processing all within a single game. Those of us who were around for the release remember what immediately came to the forums: so many people decided the only reason it could be so slow at max settings was "unoptimized code" and blah blah, but the original Crysis holds up reasonably well even to this day.

Pushing the envelope is what keeps hardware vendors moving. When someone in 2008 was building a new top-end PC, you bet your ass their first question was "Can it run Crysis?" And the graphics vendors were tripping over themselves to increase pixel shader power, multitexture throughput, total ROPs throughput, and working nights and weekends so their drivers could deliver the very best Crysis experience because it WAS the benchmark that all others were measured by.

IF you enable alllllllllllll the raytracing and it sucks your performance into "OMG only 30FPS HOWWW WILLL I LIIIIIIIVE?!?" territory, you should first consider turning some effects down just like we did back in the Crysis days. And then, the hardware vendors still have a new target to reach for their midrange and lower cards for the next go-round.

Pain is necessary for growth. :)
 
The issue with this is that we're making a game first and foremost. So most users should be able to play the game at acceptable settings and acceptable visual quality/clarity. As a person who enjoys playing games, I've yet to see how this RT push meaningfully benefits me as a consumer. All I see is developers saving money and improving their workflows while passing the costs onto me. It's just another way of trying to create slack that allows their wasteful spending and poor project management to continue. In an era where we're getting the smallest jumps in graphics hardware coupled with aggressive increases in price, it doesn't seem wise to shrink your TAM based on the hardware users actually have.
That's on the developer. Trust me.. I'd rather games not stutter than have every pixel path traced up to 25 bounces... I think we have bigger problems.. but we don't get to decide how things run and how things look. We only get to buy things or not buy them. The industry isn't going to stay stuck in the past because some people don't like a bit of ghosting, or whatever.
As it stands right now, I can't think of any game where the presence of RT made it more fun or enjoyable to play. The best game in terms of RT implementation for me to date is Cyberpunk, and the addition of PT did not even make it 1% more fun to play. In fact, I'd argue that there's an inverse relationship between the enjoyment I derive from a game and the time spent implementing RT. Avatar, Star Wars Outlaws, Indiana Jones, Alan Wake 2 and Control are some of the most boring games I've ever played.
I can think of a lot of games which look better with RT. Most games are pretty good about allowing you to turn stuff off. If you're looking for fun-to-play games, I suggest not worrying about graphics at all. Cyberpunk is a bad example because you're not being forced to play with RT or PT.
At the cost of visual clarity with poor upscaling algorithms upscaling from really low resolutions. But like you mentioned, you're ok with the tradeoffs. I am certainly not.

It is being forced on users because developers are forgoing the use of tried and tested methods like SSR with fallback perspective-correct cubemaps, planar reflections, etc. Now you have to pick between low-sampled RT with a high performance cost or really bad SSR and cubemap implementations at lower performance costs. The sad part about this is that instead of using the additional hardware resources to push the game design and visuals to be more expansive, they're instead being wasted on features that deliver very marginal impact. Most users can't even tell the differences because the hardware is not nearly powerful enough to ray trace at a level that would show significant differences. If you show an average user a game with baked GI and real-time GI, most can't even tell the difference. So, when you look at it from that perspective, it's hard on the consumer side to justify the additional cost.
This is the cost of advancing technology... it always has been. Devs/pubs are incentivized to utilize new technologies. It's what keeps the industry going, not just from a technological perspective, but it keeps the developers interested as well. People need reasons to buy new GPUs and CPUs.. Games with old technology start running too fast on all current GPUs, so some new graphics technology has to come out which improves things but drops the old GPUs to their knees, requiring people to purchase new ones to build back up again. Having GPUs capable of this level of ray tracing massively improves productivity... being able to do it in real-time for games is incredible. Yes, the average person may not be able to tell the difference between baked GI and RTGI in a game.. but we can always cherry-pick scenarios which work one way or the other.. at the end of the day, the dev makes that call.

Don't get me wrong though, I understand your perspective. I just think that it's always been the case where something advances which improves things in some ways and regresses things in others.. but eventually it all catches up for the betterment of the medium.
 
It is being forced on users because developers are forgoing the use of tried and tested methods like SSR with fallback perspective-correct cubemaps, planar reflections, etc. Now you have to pick between low-sampled RT with a high performance cost or really bad SSR and cubemap implementations at lower performance costs.

I don't see why we should blame RT for poor implementations of tried and tested methods like SSR. If they're tried and tested it should be easy enough to ship a good implementation. I think you're putting the blame in the wrong place.

How is graphics technology ever supposed to advance if you're stuck waiting for the hardware to be up to the task before trying it?

Yep and it also takes time for developers to wrap their heads around new stuff. Unfortunately they don't have the luxury of doing that in some offline incubator while shipping games based on mature tech. They're going to try and fail and learn and improve over time. As an end user I think the ROI on graphics improvements in the past 5-10 years has been really poor. Games in 2025 don't look "that" much better than games from 2015 yet perform far worse even without RT. But I also understand that we're in a period of diminishing returns with current graphics APIs.
 
Lighting and shadowing are paramount to creating the atmosphere the game designer intended and it’s an area where RT/PT delivers.

It's no different from people who don't like HDR because they're used to cranking up brightness in SDR and now perceive HDR as too dark, whether due to bad viewing habits or inadequate equipment.

RT + HDR is an absolute game changer compared to what we had for decades.
 
I really feel that the reflections in the Spider-Man games make you feel like you are actually IN the city and you are a part of the world. That kind of immersion makes the game more impactful.
 
The issue with this is that we're making a game first and foremost. So most users should be able to play the game at acceptable settings and acceptable visual quality/clarity. As a person who enjoys playing games, I've yet to see how this RT push meaningfully benefits me as a consumer. All I see is developers saving money and improving their workflows while passing the costs onto me. It's just another way of trying to create slack that allows their wasteful spending and poor project management to continue. In an era where we're getting the smallest jumps in graphics hardware coupled with aggressive increases in price, it doesn't seem wise to shrink your TAM based on the hardware users actually have.
Well, it is what it is. I agree, they are trying to make delivering games more cost effective, and yes, you have to pay for their experiments.
But the other side of the coin, to me at least, is that we would get fewer AAA games that try to evolve the tech if they did not.
At the same time, the money people are scared of straying too much from the narrow path everybody knows, i.e. GaaS, clones, and basically the same games over and over.

As for acceptable settings, you probably have different views on that from the next guy.

In general, people's entitlement has become so huge, like the pirate IPTV thing. People overcharge for TV of type A. Well, then am I within my rights to pirate it instead? That's not how the world works; if I can't afford a Ferrari, it's not okay to "steal" one. I have to settle for lesser quality or go without. I feel that it is the same with games: people expect that all games should have all the bells and whistles and they should be free as in beer. If you think a game is missing stuff, well, do not buy it.



As it stands right now, I can't think of any game where the presence of RT made it more fun or enjoyable to play.

Can you name any graphical tech that made a game more fun? Except for maybe the move from 2D to 3D, which basically created a new genre of games.
I am not disagreeing with you about the main point of a game, i.e. fun, but when you expect a graphical tech to make the game more fun, it feels like you're missing the mark.

I really feel that the reflections in the Spider-Man games make you feel like you are actually IN the city and you are a part of the world. That kind of immersion makes the game more impactful.

Immersion is a pet peeve of mine. To me, if you want immersion, just go and do it in the real world, because keyboard/mouse or controller and sitting watching a screen will always break the immersion. You want to be an SAS guy? Well, enlist and try out, or put on your football boots and play it. :p
 
Ok, one thing that I find troubling in such discussions is the inability to read the post, going instead into full-on disagreement mode.

Nowhere did I imply RT is bad, that we shouldn't have had RT hardware yet, etc. Some arguments have this tone.

Having RT implementations was a good thing even when the hardware first came out. The argument I am making isn't about why it was introduced before the technology was advanced enough.

The argument is: while the technology isn't yet powerful enough to fully support RT globally (thus giving either mixed results or a huge performance cost relative to the visual output or the perceived quality), games still need a degree of manual curation to produce some effects, without having to either abandon RT altogether or abandon other techniques altogether.

The RT roadmap would still have evolved to where we are today regardless, while at the same time giving consistent improvements in graphics and without the huge compromises that make it harder to decide whether it's best to turn RT on or off.

At some point, yes, RT hardware will be powerful enough to fully abandon all the manual work and the other imperfect solutions, and that would be a blessing.
 
I think the best examples of what Nesh is talking about are Hitman 3 and the Mafia remake. The first uses planar reflections very often, many times with multiple simultaneous planes in-frame. It works. The Mafia remake did parallax-corrected cubemaps on a per-pixel basis, and the results are surprisingly convincing.
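(For anyone unfamiliar, parallax-corrected a.k.a. box-projected cubemaps work roughly like this: instead of sampling the probe with the raw reflection vector, you intersect the reflection ray with the proxy box the probe was captured in and sample along the direction from the capture point to that hit. A sketch of the standard math, transcribed to C++ for illustration; it assumes IEEE floats so a zero ray component divides to infinity and drops out of the max/min.)

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static Vec3 vdiv(Vec3 a, Vec3 b) { return { a.x / b.x, a.y / b.y, a.z / b.z }; }
static Vec3 vmax(Vec3 a, Vec3 b) {
    return { std::max(a.x, b.x), std::max(a.y, b.y), std::max(a.z, b.z) };
}

// Box-projected ("parallax-corrected") cubemap lookup direction.
// posWS: shaded point (inside the proxy box), reflDir: reflection vector,
// boxMin/boxMax: proxy AABB the probe was captured in, probePos: capture point.
Vec3 parallaxCorrectedDir(Vec3 posWS, Vec3 reflDir,
                          Vec3 boxMin, Vec3 boxMax, Vec3 probePos) {
    // Per-axis distances along reflDir to both box planes; the farther of each
    // pair is that axis's exit plane, and the nearest of those is the exit point.
    Vec3 t1 = vdiv(sub(boxMax, posWS), reflDir);
    Vec3 t2 = vdiv(sub(boxMin, posWS), reflDir);
    Vec3 tFar = vmax(t1, t2);
    float t = std::min(tFar.x, std::min(tFar.y, tFar.z));
    Vec3 hitWS = add(posWS, mul(reflDir, t));
    // Sample the cubemap with the direction from the capture point to the hit,
    // instead of the raw reflection vector.
    return sub(hitWS, probePos);
}
```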

I agree with Nesh that games could theoretically look better than they currently do on the RT-less path. But I also realize that in the practical world, the budget to implement that work simply isn't there. In the industry of yesterday, that perhaps could have been the case.

There was a time when devs targeted both 16-colour EGA machines with a single beeper as well as 256-colour VGA machines with a phat Roland sound card, and all content was hand-crafted to squeeze the best use out of each set-up, and some of the in-betweens. That's long gone.

The latest example that I can recall of a modern game going to such extremes was the PC version of Tomb Raider: Legend, where the graphics features could go all the way from PS2-like feature sets to full DX9 PS360 rendering, all with the appropriate re-touched models, textures, lightmaps, SFX, hand-crafted to get the best result out of each rendering paradigm.

But that was the result of a transition period where it was common to support two completely different versions of a game, and the industry at large has done everything in its power to avoid that state of affairs, and I can't see them wishing to go back any time soon. In fact, I'm betting they will keep pushing forward until all games are fully path-traced, even if that means the end result is a noisy, ghosty, AI-upscaled mess on lower-end machines.
 
I think the best examples of what Nesh is talking about are Hitman 3 and the Mafia remake. The first uses planar reflections very often, many times with multiple simultaneous planes in-frame. It works. The Mafia remake did parallax-corrected cubemaps on a per-pixel basis, and the results are surprisingly convincing.
Hitman misses a lot of reflections without Raytracing.

This is like saying why do we need fully 3D-rendered graphics when we can just use pre-rendered backgrounds and 3D characters, like in the PS1 era with Resident Evil and co.
 
There are entire YouTube channels dedicated to how transformative path tracing is to the experience of Cyberpunk.

Here is a small snippet about secondary characters.


Yep. PT is full-on transformative. You can basically come up with endless examples. Down the road, when the hardware evolves and techniques mature, it'll just be the norm and there'll be less sour grapes.
 
Hitman misses a lot of reflections without Raytracing.

This is like saying why do we need fully 3D-rendered graphics when we can just use pre-rendered backgrounds and 3D characters, like in the PS1 era with Resident Evil and co.

You still missed the point of the whole conversation. You are debating an argument I have not made.

Neither I nor, I believe, Nesh is saying that RT should never exist because there are limited ways to "fake it" with other methods. The point is, because devs can rely on RT, they don't put as much effort into implementing an alternative "fake it" solution for users that are playing with RT off.

I concede that, given the realities of modern dev, that hypothetical scenario is not economically viable, yet I can still recognise that it would not be completely impossible to have more planar reflections (or other shader hacks) with good enough results in carefully crafted scenes and games if there was a will for it, and Hitman 3 is one such example. That's all.
 
You still missed the point of the whole conversation. You are debating an argument I have not made.

Neither I nor, I believe, Nesh is saying that RT should never exist because there are limited ways to "fake it" with other methods. The point is, because devs can rely on RT, they don't put as much effort into implementing an alternative "fake it" solution for users that are playing with RT off.

I concede that, given the realities of modern dev, that hypothetical scenario is not economically viable, yet I can still recognise that it would not be completely impossible to have more planar reflections (or other shader hacks) with good enough results in carefully crafted scenes and games if there was a will for it, and Hitman 3 is one such example. That's all.
Yes, I never implied RT should never exist. What I said is that since the hardware is not fully up to the task yet, games need to be checked for what they are actually outputting on screen. In places where RT has a noticeable improvement, keep it. Where RT costs too much without bringing any worthwhile perceivable results (or even resulting in negative perceivable results), keep some of the previous solutions even if they are imperfect. Or at least design the game around the fact that some hardware is not up to the task yet, thus providing the option of having either a mix, or full scene RT for those on the high-end spectrum.
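(A hypothetical illustration of what such a mix could look like from the settings side: per-effect toggles rather than one global RT switch, with defaults keyed to a coarse hardware tier. Every name here is invented for the example.)

```cpp
// Hypothetical per-effect graphics settings, sketching the "mix or full
// scene RT" idea from the post above; all names are illustrative.
enum class Tech { Raster, RayTraced };

struct GraphicsSettings {
    Tech reflections = Tech::Raster; // SSR + cubemaps vs. RT reflections
    Tech gi          = Tech::Raster; // baked/probe GI vs. RTGI
    Tech shadows     = Tech::Raster; // shadow maps vs. RT shadows
};

// Defaults keyed to a coarse GPU tier (0 = low .. 2 = high): enable RT first
// where it is most visible; full scene RT is reserved for the high end.
GraphicsSettings defaultsForTier(int tier) {
    GraphicsSettings s;
    if (tier >= 1) s.reflections = Tech::RayTraced;
    if (tier >= 2) { s.gi = Tech::RayTraced; s.shadows = Tech::RayTraced; }
    return s;
}
```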
 