GART: Games and Applications using RayTracing

It isn't the first time. Tessellation was the victim last generation. Consoles used the latest tech until last generation. Now it is even worse with Raytracing, DL and compute.

Hardware Tessellation was not flexible enough. Some titles, like the Demon's Souls Remake, use compute for tessellation.
 
Hardware Tessellation was not flexible enough. Some titles, like the Demon's Souls Remake, use compute for tessellation.
It was on nVidia. It wasn't on AMD because their geometry pipeline was just bad. This is exactly the same with Raytracing.
 
It was on nVidia. It wasn't on AMD because their geometry pipeline was just bad. This is exactly the same with Raytracing.
The same limitations apply to Nvidia too. There are predefined displacement patterns that you must use.

What are you trying to tell us? They showed us demos with RT in 2018, they had full support from NV engineers for the whole development time, and now they had to ditch it because of "some reason"?

Well.... nice
Most likely Nvidia just took some of the currently available assets and put them in their custom UE4 branch to make those demos. I doubt they had much of anything to do with the game in development at that point. I doubt you will have even a single room full of multiple mirrors, curved and convex, reflecting other mirrors many times into themselves with very few cutbacks in scene BVH quality.
 
One of the dev builds has RT enabled, showing a stark difference with reflections, of course.

Why is it so different this time?
Hardware T&L was controversial back in the day too.

The difference now is that a lot more people have a lot more say in what's going on, without knowing what they're actually talking about (on social media and the like). Also, consoles back then were separate from PC architecture, so their paths never crossed; they were also on outdated archs. But now consoles and PCs share the same path.
 
It was on nVidia. It wasn't on AMD because their geometry pipeline was just bad. This is exactly the same with Raytracing.
I'd disagree again: my complaint is that TS does not easily allow something like Catmull-Clark subdivision. So the primary problem was again not worse performance from one GPU vendor versus the other, but missing flexibility. (Mesh Shaders now solve this.)
Besides, tessellation is not very useful for enhancing detail in general, because without a seamless UV parametrization it only works on something like terrain, not on general 3D models.
And finally, what's the point of generating tiny triangles if the HW cannot rasterize them efficiently at all?
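To put a rough number on that last point: GPUs shade pixels in 2x2 quads, so a triangle covering very few pixels still pays for every full quad it touches. A back-of-envelope sketch (my own illustration, made-up numbers):

```cpp
#include <cstdio>

// GPUs shade in 2x2 quads: each touched quad runs 4 shader lanes,
// even if only one of its pixels is actually covered by the triangle.
double overshadeFactor(double pixelsCovered, double quadsTouched) {
    return (quadsTouched * 4.0) / pixelsCovered; // shaded lanes per useful pixel
}

int main() {
    // a 1-pixel micro-triangle still lights up one full quad: 4x the work
    std::printf("%.1fx\n", overshadeFactor(1.0, 1.0));
    // a big triangle, e.g. 10000 px spread over ~2600 quads: only ~1.04x
    std::printf("%.2fx\n", overshadeFactor(10000.0, 2600.0));
}
```

So pixel-sized triangles approach 4x the shading work per visible pixel, which is why tessellating down to tiny triangles fights the rasterizer itself.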

So all that fuss about tessellation was just made up by tech nerds, vendor fanboys and benchmark fanatics, while the practical interest in it was much lower than the fuss.
And AMD was not even stupid for not putting so much focus on the feature, imo.

There are surely some parallels to the current RT controversy, but RT is a far bigger promise, at much higher costs.
And the failure is not about HW limitations, but about shortsighted API limits, a lack of games to show a real uplift, and those costs in fps, power and $.
I don't think we can compare the situations, for those reasons.
For the first time, we see something like a niche evolving in a space where we did not expect this to happen.

But if I call it 'niche', I understand that RT fans feel annoyed. They list hundreds of RT games, and mention that even some mobile chips have it already. It's the future, not a niche.
On the other hand, you won't see modern games with Portal RTX lighting running on mobiles or even entry-level GPUs, and you know this.

So what should we call it then? How can we name things to avoid those constant misconceptions, which lead to the same cycles of argument?

Idk. But I can tell you what does not work:
Atomic Heart was never advertised as a console RT showcase. It was advertised, by NV, as an RTX showcase.
And they did not deliver. They fooled us with pretty mocked-up clips. Marketing lies.
Who's guilty? According to some NV fanboys here, the scapegoat is easy to find: AMD's terrible RT performance on those cheap mainstream consoles, only meant to hold back the glorious NV master race.
That's ridiculous, stupid, and much more annoying than the opposition. Because AMD was not even involved in the development or marketing of Atomic Heart.

Have fun waiting for the patch. I'll play the game and have fun right now. :D
 
Hardware T&L was controversial back in the day too.
I don't think so. To me it was just nice.
I was really only critical about AA, e.g. 3Dfx sticking 4 chips on the board just to get some meh 2x2 multisampling. That felt stupid to me.

The difference now is that a lot more people have a lot more say in what's going on, without knowing what they're actually talking about (on social media and the like).
This includes you and me too, or to be precise: everybody who does not align with our agenda.
So no, it's not that. It's not that easy.

Also, consoles back then were separate from PC architecture, so their paths never crossed; they were also on outdated archs. But now consoles and PCs share the same path.
If I were not a dev, I would wish for this separation to come back. Things were just more interesting with more architectures to compare.
But if it did, we would still see the same problems. Many PC players would not pay 2000 bucks just so devs can push the boundaries to no end.
We would see the same two-class society on the PC platform alone as well.

So we have no choice. We need to make both of them happy, using the same games.
To do this, or to at least raise the impression that we are capable of doing it at all, we need to converge on some agreement already here, at this place.
 
Most likely Nvidia just took some of the currently available assets and put them in their custom UE4 branch to make those demos.
Those demos weren't made by Nvidia. Why would Nvidia do demos for a 3rd party game?

I doubt they had much of anything to do with the game in development at that point.
There's a leaked dev build with what seems to be full RT support from a month ago or so.

I doubt you will have even a single room full of multiple mirrors, curved and convex, reflecting other mirrors many times into themselves with very few cutbacks in scene BVH quality.
What does this have to do with anything? The game will get RT support in a patch at a later date; this has been confirmed already, I believe. If it's not clear, it was cut at the last minute to give more time for QA of the PC version.
 
If I were not a dev, I would wish for this separation to come back. Things were just more interesting with more architectures to compare.

That would be a lot more interesting. It would be fun to see hardware (and games) optimized for micro polygons for example go up against another platform that was built around raytracing and a third ecosystem built around extremely flexible general compute. Platform wars would still ensue but at least there would be real differences to bicker over.

I was hopeful that RT would bring some of that differentiation and excitement but the cynicism around it has significantly killed the vibe. Someone claimed recently there are no PC games where RT has had an impact. Baffling. I suppose slightly higher resolutions or frame rates or shaving a few seconds off level loading times are all more impactful.

Maybe Nanite will finally deliver truly “next gen” experiences that we can all get behind. As long as it runs equally well on all of the branded x86 boxes out there of course.
 
I don't think so. To me it was just nice.
I was really only critical about AA, e.g. 3Dfx sticking 4 chips on the board just to get some meh 2x2 multisampling. That felt stupid to me.
Except it was rotated-grid supersampling, not multisampling, and it was 8x for 4 chips; two chips did 4x.
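Side note on why the rotated grid mattered: each sample gets a unique x and a unique y, so near-horizontal and near-vertical edges get four distinct coverage steps instead of two. A sketch with a common rotated-grid layout (illustrative offsets, not necessarily 3dfx's exact pattern):

```cpp
// Sample offsets within a pixel, in [0,1). Illustrative values only.
// Ordered grid: samples share x and y rows, wasting resolution on
// near-axis-aligned edges.
const float ordered4[4][2] = {
    {0.25f, 0.25f}, {0.75f, 0.25f},
    {0.25f, 0.75f}, {0.75f, 0.75f},
};
// Rotated grid: every sample has a unique x and a unique y, so an
// almost-horizontal edge crosses 4 distinct heights instead of 2.
const float rotated4[4][2] = {
    {0.375f, 0.125f}, {0.875f, 0.375f},
    {0.125f, 0.625f}, {0.625f, 0.875f},
};
```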

Those demos weren't made by Nvidia. Why would Nvidia do demos for a 3rd party game?
To get marketing material, with a promise of a related game in the future, before there was a game to build it from; just assets and tech that would eventually be turned into a game.
 
That would be a lot more interesting. It would be fun to see hardware (and games) optimized for micro polygons for example go up against another platform that was built around raytracing and a third ecosystem built around extremely flexible general compute.
Yes, but if we put nostalgic and nerdy emotions aside, we realize that we can do all of this on current HW.
All platforms converging on a similar or even equal architecture is just what we get after said architecture satisfies all our requirements. And that's totally the case, I would say.
Whining about some missing flexibility here and there, or worshipping one GPU vendor over the other, is whining at a very high level, considering the differences are actually subtle.

The situation is actually pretty good. We should be relaxed and satisfied. But we are not. Instead we blow up small issues and differences, just so we still have something to complain about and discuss.

I was hopeful that RT would bring some of that differentiation and excitement but the cynicism around it has significantly killed the vibe. Someone claimed recently there are no PC games where RT has had an impact. Baffling. I suppose slightly higher resolutions or frame rates or shaving a few seconds off level loading times are all more impactful.
The problem is, as said, that the current standard is high, so it's impossible for RT to add enough that everyone perceives it as impactful enough to be worth its cost.
We really have to respect each other's opinion on this, but that's hard if we blow everything up or tone it down to the max. We exaggerate, so we don't take each other seriously, but instead annoy each other.

For example, if RT gives you such a big benefit, then how could it be that others not feeling the same could kill your vibe?
I'll try to analyze... I guess the problem is that we assume tech progress is a common goal for all of us, and there are no doubts about what's good or bad. It looks better, so it is better.
We are used to this assumption, because it actually did hold from the Atari 2600 up to GeForce.

But now, at the present day, the assumption no longer holds. People no longer agree, but diverge.
That's no bad thing, not at all. It's even good. It causes the breaking up of a large community into multiple sub-communities. (Maybe those are better words than talking about 'niches'.) It gives us more options, and more offers for our specific demands.

And maybe, because this split-up is something new to us, we have not yet learned to deal with it.
We still all meet at a place like this forum, under the gaming umbrella, which we still have in common. But we have not realized the split, and we're confused that the other guy suddenly no longer shares our subjective status quo.
Intuitively we defend our own status quo, pointing out why it's superior, why it's the right way forward. While doing so, we unintentionally attack the other's status quo, generating the heat that leads to war.

I did not think about this in detail before. I'm just realizing it while typing those posts. But there is something to it, no?
There were discussions about dealing with this by changing the structure of the forum. Like having console sections, PC sections, left and right, or whatever. But that's childish and should not be needed at all.
I'll try to respect the other status quo better. Try to avoid attacking it.
Maybe, after some time, we all will have learned that things are more complicated and diverse now. There are AAA games besides indie games, there's Switch besides RTX 4090. There's belief besides disbelief in RT or AI.
Once we are used to that, it should be possible again to criticize or mention flaws and issues, without generating the impression of attacking the idea as a whole.
 
Besides, tessellation is not very useful for enhancing detail in general, because without a seamless UV parametrization it only works on something like terrain, not on general 3D models.
And finally, what's the point of generating tiny triangles if the HW cannot rasterize them efficiently at all?


nVidia showed tessellation being used to procedurally generate details and character deformation 12 years ago:

So all that fuss about tessellation was just made up by tech nerds, vendor fanboys and benchmark fanatics, while the practical interest in it was much lower than the fuss.
And AMD was not even stupid for not putting so much focus on the feature, imo.

There are surely some parallels to the current RT controversy, but RT is a far bigger promise, at much higher costs.
And the failure is not about HW limitations, but about shortsighted API limits, a lack of games to show a real uplift, and those costs in fps, power and $.
I don't think we can compare the situations, for those reasons.
For the first time, we see something like a niche evolving in a space where we did not expect this to happen.

Tessellation wasn't made up by fanboys. Like with Raytracing, these developers couldn't know Sony and Microsoft would go with GCN 1.1. So they put time and resources into Tessellation (like Raytracing) because they wanted to make their future games. After the release, Tessellation was scrapped because these consoles couldn't use it.
 
For example, if RT gives you such a big benefit, then how could it be that others not feeling the same could kill your vibe?

It doesn't affect my personal gaming enjoyment, of course. I meant that discussions in the community on RT aren't fun.

I'll try to analyze... I guess the problem is that we assume tech progress is a common goal for all of us, and there are no doubts about what's good or bad. It looks better, so it is better.
We are used to this assumption, because it actually did hold from the Atari 2600 up to GeForce.

But now, at the present day, the assumption no longer holds. People no longer agree, but diverge.
That's no bad thing, not at all. It's even good. It causes the breaking up of a large community into multiple sub-communities. (Maybe those are better words than talking about 'niches'.) It gives us more options, and more offers for our specific demands. And maybe, because this split-up is something new to us, we have not yet learned to deal with it.

It actually isn't a new phenomenon. It also happened with anti-aliasing, tessellation and dynamic compute. The PC community for some reason is allergic to feature differentiation. The only acceptable innovation is slightly higher fps, which can be very boring. Maybe "sameness" is in our DNA given the inherent nature of the Wintel ecosystem. We want all platforms to provide the same experience. That sentiment is gaining steam on consoles, which at the end of the day are almost Wintel boxes too.
 
To get marketing material, with a promise of a related game in the future, before there was a game to build it from; just assets and tech that would eventually be turned into a game.
Nvidia has enough marketing material, including their own demos. Why would they make anything with a 3rd party game from a completely unknown developer? The developer is the one who needed marketing material, and getting into Nvidia's marketing was the reason to use RT in their presentations. You got it completely backwards.
 
nVidia showed tessellation being used to procedurally generate details and character deformation 12 years ago:
They did this long before that already.
I may have told this story already, but some time before the year 2000 I was working on character modeling using Bezier patches. I wanted curvy, smooth skin while dealing with only a small number of vertices.
And then NV added HW acceleration of Bezier patches, the exact same thing I was using. I was excited.
It was crazy fast. The first shader I ever wrote was actually vertex skinning for those patches, and I had some rag doll simulation for the skeleton.
So I could throw those characters around, and the skin was high res, smooth, and deformed well. It was like 10 years ahead of what games of that time looked like.
But then they removed the feature for the next gen. (Maybe GeForce3 -> 4.)
So I emailed them and asked why. Cass Everitt himself replied, saying that nobody was using the feature.
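For context, this is roughly what the HW was doing per tessellated vertex: a minimal sketch of bicubic Bezier evaluation via de Casteljau, assuming a 4x4 control grid (my illustration, not NV's actual implementation):

```cpp
struct Vec3 { float x, y, z; };

static Vec3 lerp3(Vec3 a, Vec3 b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// de Casteljau: evaluate one cubic Bezier curve at parameter t
static Vec3 bezier3(const Vec3 c[4], float t) {
    Vec3 a = lerp3(c[0], c[1], t), b = lerp3(c[1], c[2], t), d = lerp3(c[2], c[3], t);
    Vec3 e = lerp3(a, b, t), f = lerp3(b, d, t);
    return lerp3(e, f, t);
}

// Bicubic patch: evaluate the 4 row curves at u, then the resulting
// column curve at v, once per vertex of the tessellated grid.
Vec3 evalPatch(const Vec3 cp[4][4], float u, float v) {
    Vec3 col[4];
    for (int i = 0; i < 4; ++i) col[i] = bezier3(cp[i], u);
    return bezier3(col, v);
}
```

Feed it skinned control points and you get the smooth, deforming, high-res skin described above, from only 16 points per patch.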

The tessellation modes (fractional or quantized) were exactly what later came back as geometry shaders.

Probably that was not their first attempt, as their very first GPU could only do patches, no triangles at all. But I don't know any details about that.

Too bad it was not adopted. When GS came out, it was already a bit late, because geometric detail was quite good already anyway.
One problem surely was the big success of Catmull-Clark subdivision surfaces. But that's a recursive algorithm, and not very attractive for realtime due to its nature (see the sketch at the end of this post).
For character modeling it's just way better. Modeling using patches is too technical and not artist-friendly.
So today the dream of HW NURBS has somewhat vanished, also because lighting became the dominating problem, so the focus has shifted.
Also, patches can only increase detail, but fail at reducing it. So overall, something like Nanite is much more general and powerful.
But for hard-surface and tech models it's still interesting imo, if only for compression reasons. It also would do well for the geometry of the Scorn game, for example.
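To show why it's recursive by nature, here are the standard Catmull-Clark rules for a single subdivision step, in sketch form (my paraphrase; all mesh-connectivity bookkeeping omitted). Every output vertex is an average over the input mesh, so each extra level of detail means another full pass over an ever-growing mesh:

```cpp
struct V3 { float x, y, z; };
static V3 add(V3 a, V3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static V3 mul(V3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Face point: centroid of a face's k corner vertices.
V3 facePoint(const V3* corners, int k) {
    V3 s{0, 0, 0};
    for (int i = 0; i < k; ++i) s = add(s, corners[i]);
    return mul(s, 1.0f / k);
}

// Edge point: average of the edge's endpoints and the two adjacent face points.
V3 edgePoint(V3 a, V3 b, V3 f0, V3 f1) {
    return mul(add(add(a, b), add(f0, f1)), 0.25f);
}

// Moved original vertex of valence n: (F + 2R + (n-3)P) / n, where
// F = average of adjacent face points, R = average of adjacent edge
// midpoints, P = the old position.
V3 vertexPoint(V3 P, V3 F, V3 R, int n) {
    return mul(add(add(F, mul(R, 2.0f)), mul(P, (float)(n - 3))), 1.0f / n);
}
```

Unlike a Bezier patch, which evaluates any (u, v) directly, there is no closed-form single-pass evaluation here, which is what makes it awkward for realtime HW.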
 
One of the dev builds has RT enabled, showing a stark difference with reflections, of course.


Hardware T&L was controversial back in the day too.

The difference now is that a lot more people have a lot more say in what's going on, without knowing what they're actually talking about (on social media and the like). Also, consoles back then were separate from PC architecture, so their paths never crossed; they were also on outdated archs. But now consoles and PCs share the same path.

UE4 RT reflections just don't look right in most games. This, Hogwarts, Returnal. It feels very obvious the materials were designed for non-RT, and then turning it on just turns them into mirrors.

Makes me wonder how on earth Call of Duty plans to stay dominant into 2024 while continuing to support PS4/XBO. The selling points of Call of Duty are things like great visuals and responsiveness, even if they drop it to 30fps on older-gen consoles. By the end of this year, games like Stalker 2 are going to be out, and the "average gamer" will be wondering why Call of Duty, "the fps you're supposed to get", looks so much worse. Some publisher with enough money could just drop a UE5 military shooter before the end of 2025 and grab a shit ton of attention and PR. Maybe they've spent so long without any competition that whoever's in charge has just lost it and doesn't care anymore.
 
UE4 RT reflections just don't look right in most games. This, Hogwarts, Returnal. It feels very obvious the materials were designed for non-RT, and then turning it on just turns them into mirrors.

I don't know why you've got this impression. Those RT-on shots look stupidly better to me, especially the right one.

From what little I've seen of Returnal it also benefits greatly.
 
I don't know why you've got this impression. Those RT-on shots look stupidly better to me, especially the right one.

From what little I've seen of Returnal it also benefits greatly.

From a technical standpoint. From an art standpoint, all of these are guaranteed to be designed to look like the non-RT versions. And personally, I'd prefer the art-directed ones: a giant, black, forbidding pool of water fits in with Returnal far better than a normal, mirror-like reflecting pool of water, no matter how much more "technically" correct it might be.
 