Impact of nVidia Turing RayTracing enhanced GPUs on next-gen consoles *spawn

Status
Not open for further replies.
Eh, pardon for asking, but why?
Does reality consist of “curved surfaces”? I just can’t see why this would be an inherently better way to represent the world...

Curves are better than flat tris. We eventually need to have a basic primitive, and curved patches at least are more organic, especially if also displaced by heightmaps. The CG industry hasn't been relying on them for decades for no reason.
 
OK. Personally I would say that is mostly about modelling, really, but then I understand where you’re coming from.
 
We are meandering too much off-topic now. I gave dynamic tessellation as an example of a hardware feature that was touted for years and still hasn't become ubiquitous in real-time rendering, while in offline it is.
To make my point clear about subdivision surfaces: it's not just about modeling, although it is that too. The key part is resolving the curve into tris dynamically every frame, in just the right amount necessary to represent the curve perfectly, without sharp edges. No more sharp corners in circular shapes, ever again. This is barely a detail today given how high-poly meshes already are, but even so, it's not uncommon to see the occasional sharp corner.
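To put a rough number on "just the right amount": the chord count needed for a circle to look perfectly round grows with its on-screen radius. A minimal sketch in Python, with a made-up half-pixel error tolerance (the sagitta formula is standard geometry; the pixel sizes are illustrative):

```python
import math

def segments_for_circle(radius_px: float, max_error_px: float = 0.5) -> int:
    """Smallest chord count n whose sagitta (max gap between a chord and
    the true circle, r * (1 - cos(pi/n))) stays under max_error_px."""
    if radius_px * (1.0 - math.cos(math.pi / 3)) <= max_error_px:
        return 3  # tiny on screen: a triangle already looks round enough
    n = 4
    while radius_px * (1.0 - math.cos(math.pi / n)) > max_error_px:
        n += 1
    return n

# The closer the camera gets (larger on-screen radius), the more chords
# the same circle needs to stay visually smooth.
for r in (25, 100, 500):
    print(r, segments_for_circle(r))
```

This is exactly what a fixed-tessellation mesh cannot do: its chord count is baked in, so walk close enough and the corners show.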
 
We need more info on mesh shaders, but with the concept of meshlets you can get very fine-grained LOD control over the model with significantly less impact on the CPU.
That's why I'm not sure tessellation would be fully required. Model it at high quality and let the system figure out which meshlets to render?
 
That's another viable way to address the issue. It trades memory for performance. You don't have to compute the dynamic subdivision, but you store many dozens of LODs, and even then there is always the risk of getting too close even for the highest LOD in the chain, but your textures already went to shit at that point anyway...
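The memory-for-performance trade can be sketched in a few lines of Python. Everything here is illustrative (the halving ratio, the distance metric, the counts), but it shows the two properties being discussed: a stored chain costs roughly 2x the base mesh, and past the finest level there is nothing left to swap in:

```python
import math

def lod_chain_tris(base_tris: int, levels: int, ratio: float = 0.5) -> list:
    """Triangle counts for a discrete LOD chain; each level keeps
    `ratio` of the previous level's triangles."""
    return [max(1, int(base_tris * ratio**i)) for i in range(levels)]

def pick_lod(chain: list, distance: float, full_detail_dist: float = 10.0) -> int:
    """Crude distance metric (a stand-in for screen-space error):
    drop one level per doubling of distance past the full-detail range.
    Below full_detail_dist we are pinned at level 0: get closer still
    and there is simply no finer mesh to switch to."""
    if distance <= full_detail_dist:
        return 0
    return min(int(math.log2(distance / full_detail_dist)), len(chain) - 1)

chain = lod_chain_tris(1_000_000, 8)
print(sum(chain) / chain[0])  # whole chain ~2x the base mesh's memory
print(pick_lod(chain, 5.0), pick_lod(chain, 160.0), pick_lod(chain, 1e9))
```

With a 0.5 ratio the geometric series 1 + 1/2 + 1/4 + ... converges to 2, which is where the "~2x memory" figure comes from.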
 
Last edited:
Tessellation would be a memory saver: fewer triangles to store in RAM (store the HOS instead), and less data to move from RAM to GPU (move the HOS instead and tessellate on the GPU). Similarly, processing procedural surface shaders instead of baking them into textures.

Best-case example: an SDS sphere is 8 points in a bounding cube, whereas for super-duper-smooth, even-up-close resolution you'd need god knows how many triangles.

All these considerations need to weigh the cost of memory against the cost of real-time processing, to see which is better value for money, as we can't afford to put in the best of all the technologies used by offline renderers because of the performance requirements.
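The 8-points-versus-triangles gap is easy to quantify. Catmull-Clark subdivision quadruples the face count each step, and a cube cage starts with 6 quads, so the subdivided mesh's memory grows fast while the control cage stays at 8 points. A Python sketch (the subdivision levels shown are arbitrary; positions only, no normals or UVs):

```python
def catmull_clark_quads(level: int) -> int:
    """Quad count after `level` Catmull-Clark steps on a cube cage:
    each step splits every quad into four, so 6 * 4**level."""
    return 6 * 4**level

def mesh_floats(quads: int) -> int:
    """Rough float count for positions of a closed quad mesh.
    By Euler's formula (V - E + F = 2, with E = 2F for a closed
    quad mesh), V = F + 2; three floats per vertex."""
    return (quads + 2) * 3

cage_floats = 8 * 3  # the entire control cage: 8 xyz points
for level in (3, 5, 7):
    q = catmull_clark_quads(level)
    print(level, q, mesh_floats(q) // cage_floats)  # memory multiplier
```

At level 5 the baked mesh already stores hundreds of times more position data than the cage, and every extra level quadruples it again.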
 
That's another viable way to address the issue. It trades memory for performance. You don't have to compute the dynamic subdivision, but you store many dozen LoDs, and even then, there is always the risk of getting too close to even the highest LOD in the chain, but your textures already went to shit at this point anyway...
Actually I think that’s where meshlets are better. I don’t believe you need tons of meshes for varying LOD.

I need to be careful here, because this is marketing talk:
The mesh shader gives developers new possibilities to avoid such bottlenecks. The new approach allows the memory to be read once and kept on-chip as opposed to previous approaches, such as compute shader-based primitive culling (see [3],[4],[5]), where index buffers of visible triangles are computed and drawn indirectly.

The mesh shader stage produces triangles for the rasterizer, but uses a cooperative thread model internally instead of using a single-thread program model, similar to compute shaders. Ahead of the mesh shader in the pipeline is the task shader. The task shader operates similarly to the control stage of tessellation, in that it is able to dynamically generate work. However, like the mesh shader, it uses a cooperative thread model and instead of having to take a patch as input and tessellation decisions as output, its input and output are user defined.
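The quoted description is abstract, so here is a toy CPU-side model of the idea in plain Python. Everything in it is made up for illustration (it is not the NVIDIA API): a "task stage" culls whole meshlets by a bounding sphere, and only the survivors would ever be expanded into triangles by the mesh stage:

```python
from dataclasses import dataclass

@dataclass
class Meshlet:
    center: tuple   # bounding-sphere center (x, y, z)
    radius: float   # bounding-sphere radius
    tri_count: int  # triangles this meshlet would expand into

def task_stage(meshlets, cam_pos, max_draw_dist):
    """Emit only meshlets whose bounding sphere lies within draw range.
    A stand-in for the real frustum / occlusion / backface-cone tests;
    culled meshlets cost no triangle work downstream."""
    survivors = []
    for m in meshlets:
        d = sum((c - p) ** 2 for c, p in zip(m.center, cam_pos)) ** 0.5
        if d - m.radius <= max_draw_dist:
            survivors.append(m)
    return survivors

# Ten meshlets strung out along the x axis; most are too far to keep.
meshlets = [Meshlet((float(x), 0.0, 0.0), 1.0, 124) for x in range(0, 100, 10)]
kept = task_stage(meshlets, (0.0, 0.0, 0.0), 35.0)
print(len(kept), sum(m.tri_count for m in kept))
```

The point of the cooperative model is that this per-cluster decision happens on the GPU, so the CPU never touches the index buffers at all.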
 
You should check DF's analysis of RT battlefield V again to see the render quality of reflected objects. What Hitman is delivering here is arguably more.
So after trying Hitman 2, I discovered many limitations in their planar reflection implementation, none of which are mentioned in the DF video.

For starters, foliage and grass are not reflected, even if the scene is covered in dense, tall foliage. Overflow water from sinks and taps is not reflected either, and neither are water splashes from bullets or explosions. Fire, smoke and explosions from grenades or bullet impacts are missing too. It's almost as if all transparencies are excluded from reflections altogether.
 
Some maps in BFV don't do RT justice, while others do. It is very impressive for what it is, and I recommend having DXR on low/medium as it isn't that big a difference from high/ultra.
 
So after trying Hitman 2, I discovered many limitations in their planar reflection implementation, none of which are mentioned in the DF video.

For starters, foliage and grass are not reflected, even if the scene is covered in dense, tall foliage. Overflow water from sinks and taps is not reflected either, and neither are water splashes from bullets or explosions. Fire, smoke and explosions from grenades or bullet impacts are missing too. It's almost as if all transparencies are excluded from reflections altogether.
Those are not really limitations, just optimizations. It is just a second camera with lower shading quality/resolution/LOD.
RT isn't perfect either. It is always a matter of optimization and performance: how much you are willing to invest in a "simple" reflection.
Currently, e.g. in BFV, there are more reflective surfaces than in real life (I've never seen water or mud that reflective when someone walks through it), surfaces that reflect way more than they ever would. Yes, it is an early shot at RT and needs more optimization. But overall I don't really see that big a difference when playing, only when standing still and looking for differences.
They should really not invest that much of the RT processing power in reflective surfaces. Correct lighting from many light sources would be much better (in my opinion).
 
I kinda agree there; they indeed overdid reflections in the game. Many of them were not truly necessary in a fast game. Some dynamic GI would have been better in my opinion, especially inside buildings, as they get destroyed quite a lot and are lit only by the baked GI solution even when completely demolished.
 
So after trying Hitman 2, I discovered many limitations in their planar reflection implementation, none of which are mentioned in the DF video.

For starters, foliage and grass are not reflected, even if the scene is covered in dense, tall foliage. Overflow water from sinks and taps is not reflected either, and neither are water splashes from bullets or explosions. Fire, smoke and explosions from grenades or bullet impacts are missing too. It's almost as if all transparencies are excluded from reflections altogether.
Expect similar compromises in RT reflections too; there's nothing intrinsically more efficient or faster about raytracing for drawing reflections. Every object you include in the reflection has a cost, whether you are tracing reflections or rasterising them, and shading results for surfaces reflected via tracing or rasterising will be the same. If tracing grass is too expensive, you'll leave it out. Transparencies are incredibly expensive in RT too (they need a secondary ray cast, resulting in multiple recursions, so they are often capped with a 'recursion limit' result). Every iteration in tracing is going to need to scale back visual quality to avoid exponential workloads.
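The exponential-workload point is just geometric growth. A Python sketch (the branching factor and cap are illustrative, not any particular engine's numbers):

```python
from typing import Optional

def rays_traced(depth: int, branch: int = 2, cap: Optional[int] = None) -> int:
    """Total secondary rays when every hit spawns `branch` new rays
    (say, one reflection plus one refraction) for up to `depth` bounces.
    A recursion cap truncates the tree; past the cap the renderer falls
    back to an approximate colour instead of tracing further."""
    if cap is not None:
        depth = min(depth, cap)
    return sum(branch**i for i in range(1, depth + 1))

print(rays_traced(8))          # uncapped: grows exponentially with depth
print(rays_traced(8, cap=2))   # capped at two bounces: tiny by comparison
```

And that is per pixel, which is why transparencies and reflections-in-reflections are the first things to get capped.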
 
Yeah, I was just returning to the subject of comparing the Hitman 2 and Battlefield V implementations. Looking into BFV, foliage is not excluded, and neither are explosions, fire, smoke, water splashes, etc. The game also has reflections within reflections and dynamic reflections on curved surfaces, which are obviously missing from Hitman 2; lighting and AO are toned down in Hitman 2, bullet decal reflections are missing, and none of that happens in BFV, etc.
 
Last edited:
I think I'm seeing DXR enabled here as well. It may also explain the performance. If this is alpha, this is pretty good.

Reflections and Shadows
4:30-4:40 edit: nvm, it's SSR

Shadows have incredible quality. Keep watching from that point and you'll see razor-thin shadows from the pipes above. Everything in the game is drawing shadows, all sorts of movement. Very clean.

Examining some more, I also think they are running GI and AO through DXR. Harder to prove, though.
 
Last edited:
I think I'm seeing DXR enabled here as well. It may also explain the performance. If this is alpha, this is pretty good.

Reflections and Shadows
4:30-4:40 you can see they aren't doing SSR

Shadows have incredible quality. Keep watching from that point and you'll see razor-thin shadows from the pipes above. Everything in the game is drawing shadows, all sorts of movement. Very clean.

Examining some more, I also think they are running GI and AO through DXR. Harder to prove, though.
Everything in there looks like regular Frostbite with SSSR + cubemaps and baked GI via Enlighten as usual.
 
Expect similar compromises in RT reflections too; there's nothing intrinsically more efficient or faster about raytracing for drawing reflections. Every object you include in the reflection has a cost, whether you are tracing reflections or rasterising them, and shading results for surfaces reflected via tracing or rasterising will be the same. If tracing grass is too expensive, you'll leave it out. Transparencies are incredibly expensive in RT too (they need a secondary ray cast, resulting in multiple recursions, so they are often capped with a 'recursion limit' result). Every iteration in tracing is going to need to scale back visual quality to avoid exponential workloads.
Except that planar reflections need to be rendered for every surface that needs them. If you have several surfaces at different angles and places that need accurate reflections, RT is a clear winner in performance.
 
Well, actually you must render them with RT as well, but with RT you just render what is really in the scene, and you can do this with tighter camera angles, etc.
But it just depends on the scene. RT is the brute-force method.
 
In RT you only need to render the pixels you need. With rasterization you'd have to render whole views for different surfaces.
 
Not necessarily. A camera fitted to the reflecting surface's viewport will only render as many pixels, and as much of the scene, as specified, exactly like the main camera. Think racing-game rear-view mirrors for historical examples. You don't render the whole scene and then cut out a small part for the reflection.
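The core of that mirrored-camera setup is just reflecting the eye position across the reflector's plane, then rendering from there with the frustum clipped to the mirror's rectangle. A minimal Python sketch of the reflection step (the plane form n·x = d and the example values are illustrative):

```python
def reflect_point(p, n, d):
    """Reflect point p across the plane n . x = d (n must be unit length):
    p' = p - 2 * ((p . n) - d) * n"""
    dist = sum(pi * ni for pi, ni in zip(p, n)) - d  # signed distance to plane
    return tuple(pi - 2.0 * dist * ni for pi, ni in zip(p, n))

# An eye 3 units above a floor plane at y = 0 mirrors to 3 units below it;
# rendering the scene from that mirrored eye gives the floor's reflection.
eye = (1.0, 3.0, -2.0)
floor_n, floor_d = (0.0, 1.0, 0.0), 0.0
print(reflect_point(eye, floor_n, floor_d))
```

In a real renderer the same reflection is applied to the whole view matrix (plus an oblique near plane so geometry behind the mirror is clipped), but the cost argument stands: the mirrored camera renders only the pixels and scene range you give it.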
 