The Complete List of PC Ray Traced Titles with Classification

NVIDIA has added two technologies to further accelerate ray tracing of complex geometry.

RTX Hair: Built on Linear-Swept Spheres (LSS), a new GeForce RTX 50 Series GPU-accelerated primitive that reduces the amount of geometry needed to render strands of hair and uses spheres instead of triangles for a more accurate fit to hair shapes. LSS makes ray-traced hair possible with better performance and a smaller memory footprint.
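To picture the primitive (this is just an illustrative formulation, not NVIDIA's implementation or API): a swept sphere is the volume a sphere covers as its center moves along a line segment, and LSS additionally lets the radius vary along the segment so a single primitive can hug a tapering hair strand. A minimal point-inside test for the constant-radius case (a capsule):

```cpp
// Illustrative only (not NVIDIA's LSS implementation): a swept sphere is the
// volume a sphere covers as its center moves along a segment. With a constant
// radius this is a capsule; LSS additionally lets the radius vary linearly
// along the segment, which is what lets one primitive fit a tapering strand.
#include <algorithm>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Point-inside test for the constant-radius case (a capsule from a to b).
bool insideSweptSphere(Vec3 p, Vec3 a, Vec3 b, float radius) {
    Vec3 ab = sub(b, a), ap = sub(p, a);
    // Parameter of the closest point on the segment, clamped to [0, 1].
    float t = std::clamp(dot(ap, ab) / dot(ab, ab), 0.0f, 1.0f);
    Vec3 closest = { a.x + t * ab.x, a.y + t * ab.y, a.z + t * ab.z };
    Vec3 d = sub(p, closest);
    return dot(d, d) <= radius * radius;
}

int main() {
    Vec3 a{ 0, 0, 0 }, b{ 0, 1, 0 };
    std::printf("%d\n", insideSweptSphere(Vec3{ 0, 0.5f, 0.05f }, a, b, 0.1f)); // prints 1
}
```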

RTX Mega Geometry: Intelligently updates clusters of triangles in batches on the GPU, reducing CPU overhead and increasing performance and image quality in ray-traced scenes. It also accelerates BVH building, making it possible to ray trace up to 100x more triangles than today's standard. RTX Mega Geometry is coming soon to the NVIDIA RTX Branch of Unreal Engine (NvRTX), so developers can use Nanite and fully ray trace every triangle in their projects.

 
Figure 2. RTX Mega Geometry enables hundreds of millions of animated triangles through real-time subdivision surfaces
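Purely as a conceptual sketch of the batching idea above (the real cluster and acceleration-structure types live in the graphics API and NVIDIA's SDK; the functions below are hypothetical stand-ins): only the clusters that actually changed this frame get collected and built in one batched submission, and the top-level structure is then refit, instead of rebuilding whole meshes.

```cpp
// Conceptual sketch only: the real cluster/acceleration-structure types and
// build calls belong to the graphics API and NVIDIA's SDK; these stand-ins
// just illustrate batching dirty clusters into one build submission.
#include <cstdio>
#include <vector>

struct Cluster { int id; bool dirty; };

// Hypothetical stand-ins for GPU acceleration-structure work.
void buildClusterAS(const std::vector<int>& clusterIds) {
    std::printf("building %zu cluster acceleration structures in one batch\n",
                clusterIds.size());
}
void refitTopLevelAS() { std::printf("refitting TLAS\n"); }

void updateFrame(std::vector<Cluster>& clusters) {
    std::vector<int> dirty;
    for (Cluster& c : clusters)
        if (c.dirty) { dirty.push_back(c.id); c.dirty = false; }
    if (!dirty.empty()) buildClusterAS(dirty);  // one batched GPU submission
    refitTopLevelAS();
}

int main() {
    std::vector<Cluster> clusters = { {0, true}, {1, false}, {2, true} };
    updateFrame(clusters);
}
```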

Are Nanite skeletal meshes actually more like displaced micromeshes than original Nanite? Otherwise that sentence doesn't make much sense.

It's been a year, but the only real documentation for Unreal 5.5 seems to be the code and I'm too lazy to look.

PS. If this does rely on animated displaced micromeshes, I wonder how much of AMD's work they borrowed for that. Of course, the Ada+ fixed hardware for static micromeshes would then be fairly useless if it existed (I think there's programmable hardware they aren't exposing).
 

Nanite supports tessellation and displacement now. It has some tradeoffs.


 
That's not really animated nor a subdivision surface. Presumably NVIDIA is talking about the Nanite Skeletal mesh when they say "animated triangles through real-time subdivision surfaces". Is the native representation in the engine a subdivision surface or is NVIDIA translating it to displaced micromeshes?
 

Native representation is still the same cluster format as before.
 
DOOM: The Dark Ages will be released with path tracing, but it will also use ray tracing during gameplay for hit detection.

"We also took the idea of ray tracing, not only to use it for visuals but also gameplay," explains Billy Khan, Director of Engine Technology at id Software.

"We can leverage it for things we haven't been able to do in the past, which is giving accurate hit detection. [In DOOM: The Dark Ages], we have complex materials, shaders, and surfaces

"So when you fire your weapon, the hit detection would be able to tell if you're hitting a pixel that is leather sitting next to a pixel that is metal," Billy continues.

"Before ray tracing, we couldn't distinguish between two pixels very easily, and we would pick one or the other because the materials were too complex.

"Ray tracing can do this on a per-pixel basis and showcase if you're hitting metal or even something that's fur. It makes the game more immersive, and you get that direct feedback as the player."

 
The hit information obtained from the ray tracing pipeline needs to be transmitted to the host. Is there an efficient way to do this?
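One common pattern (a sketch, not id Software's actual approach): have the ray generation shader write its gameplay hits into a small GPU buffer, copy that buffer to host-visible memory each frame, and only read it back a frame or two later so the CPU never stalls waiting on the GPU. A minimal CPU-side illustration of that frames-in-flight idea, with std::async standing in for the real fence and readback copy:

```cpp
// Minimal CPU-side sketch of one common pattern (not id Software's code):
// the GPU writes a small "hit info" buffer per frame, and the CPU reads it
// back a couple of frames later so it never stalls waiting on the GPU.
// std::async stands in for the real fence/readback of a graphics API here.
#include <array>
#include <cstdint>
#include <cstdio>
#include <future>
#include <vector>

struct HitInfo {              // what the ray generation shader would write
    uint32_t materialId;      // e.g. leather vs. metal, resolved per ray
    float    hitT;            // distance along the ray
};

constexpr int kFramesInFlight = 2;   // consume results with a 2-frame delay

// Stand-in for "GPU traces gameplay rays and fills a readback buffer".
std::vector<HitInfo> simulateGpuFrame(int frame) {
    return { HitInfo{ uint32_t(frame % 3), 12.5f } };
}

int main() {
    std::array<std::future<std::vector<HitInfo>>, kFramesInFlight> inFlight;

    for (int frame = 0; frame < 8; ++frame) {
        int slot = frame % kFramesInFlight;

        // Consume the results submitted kFramesInFlight frames ago (if any).
        if (inFlight[slot].valid()) {
            for (const HitInfo& hit : inFlight[slot].get())
                std::printf("frame %d: hit material %u at t=%.1f\n",
                            frame, hit.materialId, hit.hitT);
        }

        // Kick off this frame's trace; results are read back later.
        inFlight[slot] = std::async(std::launch::async, simulateGpuFrame, frame);
    }
}
```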
 
Half-Life 2 RTX will feature RTX Skin and RTX Neural Radiance Cache to increase the quality of its path-traced global illumination.

"Neural Radiance cache is an AI approach to estimate indirect lighting," says Nyle Usmani, GeForce Product Manager at NVIDIA.
"The way it works is that we train the model with live game data. So, while you're playing, a portion of the pixels are being fed to a ton of tiny neural networks, and they're all learning, and what they're learning is how to take a partially traced ray and then infer multiple bounces for each ray."
"This results in higher image quality, a night and day difference in quality when it comes to shadows, indirect lighting, and the way the scene looks," Nyle continues. "It looks way less washed out. Neural Radiance Cache is the first Neural Shader coming to RTX Remix alongside the new DLSS 'Transformer' model for Ray Reconstruction."

 
Is using ray tracing for primary visibility all-or-nothing, or is it feasible to have a renderer that uses rasterization to determine primary visibility for certain geometry and ray tracing for other geometry? Suppose that the renderer needs to ray trace primary visibility for certain types of geometry, such as:
  • Hair and other fibers, which are modelled using RTX Hair's linear swept spheres
  • Micropolygon geometry, which RTX Mega Geometry can handle better than hardware rasterization
  • Anything that uses opacity, so ray tracing can calculate accurate refractions and order-independent transparency without hacks
But the majority of the geometry can be handled perfectly well with regular hardware rasterization. Could a renderer be designed to use rasterization where it works and ray tracing where it doesn't, and beat the performance of a pure ray tracing renderer?
 
Nvidia's Zorah demo does this

We have the incredible work by @acmarrs on RTX Mega Geometry to thank for allowing us to hit roughly 500m triangles in zorah. Excluding VFX, the playable character and translucency- every nanite pixel is also using primary view raytracing 🤯
 

A visibility buffer renderer, such as Nanite, could slot in ray casting for the primary intersections without making any changes at all to shading. You could combine it however you want.

You lose the near-zero-overdraw advantage if you can't directly intersect variable-resolution geometry though, as Intel previously described. Dynamically creating BVHs for potentially visible clusters every frame is inelegant.
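A minimal sketch of the "shading doesn't care who filled the pixel" point (illustrative types only, not Nanite's actual data layout): both the raster path and a primary ray resolve to the same small visibility-buffer record, and the material pass consumes that record either way.

```cpp
// Conceptual sketch (not Nanite's actual code): in a visibility-buffer
// renderer, both rasterization and a primary ray can produce the same small
// "visible triangle" record, so the shading pass doesn't care which path
// filled a given pixel.
#include <cstdint>
#include <cstdio>

struct VisSample {            // one visibility-buffer entry per pixel
    uint32_t instanceId;
    uint32_t triangleId;
    float    baryU, baryV;    // barycentrics used later to fetch attributes
};

// Stand-ins for the two primary-visibility paths.
VisSample rasterizePixel(int /*x*/, int /*y*/)  { return { 1u, 42u, 0.3f, 0.4f }; }
VisSample tracePrimaryRay(int /*x*/, int /*y*/) { return { 7u, 13u, 0.1f, 0.2f }; }

// The deferred material/shading pass only ever sees a VisSample.
void shade(const VisSample& s) {
    std::printf("shade instance %u, tri %u (u=%.1f v=%.1f)\n",
                s.instanceId, s.triangleId, s.baryU, s.baryV);
}

int main() {
    // Hypothetical per-pixel choice: e.g. hair/translucency via rays, the rest rasterized.
    bool needsRay[2] = { false, true };
    for (int x = 0; x < 2; ++x)
        shade(needsRay[x] ? tracePrimaryRay(x, 0) : rasterizePixel(x, 0));
}
```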
 