Next gen lighting technologies - voxelised, traced, and everything else *spawn*

Those results look weird. Other sites have shown DX12 to be equal, at least with Intel CPUs. I've been using DX12 and performance seems roughly the same, but with lower input lag because you can turn future frame rendering off.
 
This would only be true if performance with ray tracing enabled were dependent on the rest of the rendering performance and not on ray tracing performance, and I'm pretty sure everything suggests it's the ray tracing performance that's the limiting factor, which would be just as slow no matter which API you're using.
Rasterization performance is an important part too. I am curious to know what made you think the game is limited only by its RT performance.
at least with Intel CPUs. I've been using DX12 and performance seems roughly the same, but with lower input lag because you can turn future frame rendering off.
They are using an i7-7700K. Though BFV has proven to demand a 6-core CPU for ray tracing and general gameplay.
 
Rasterization performance is an important part too. I am curious to know what made you think the game is limited only by its RT performance.
If it wasn't limited by RT performance, lowering RT quality wouldn't improve performance like it does now. Also note the huge performance drop when you enable RT and nothing else changes.
There's absolutely no indication of RT performance being hindered by anything other than RT hardware, and possibly developer skill and driver quality.
 
If it wasn't limited by RT performance, lowering RT quality wouldn't improve performance like it does now. Also note the huge performance drop when you enable RT and nothing else changes.
There's absolutely no indication of RT performance being hindered by anything other than RT hardware, and possibly developer skill and driver quality.

It seems you have a misunderstanding of how ray tracing works here. One of the biggest problems of ray-traced reflections is that they increase your shading workload massively. Shading for reflections on Turing takes more time than the ray tracing itself.
This talk from Remedy explains a lot about it:

Perf numbers for reflections on a Titan V:
RT: 5-9 ms
Shading: 3-5 ms
He explains that Turing is around 5x faster than the Titan V at RT, so 1-2 ms for RT, while the 3-5 ms for shading doesn't change. And this is already with some optimizations for faster shading in place; before, shading took even twice as long.
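The arithmetic behind those figures can be sketched quickly; this is only a back-of-the-envelope restatement of the talk's numbers, and the 5x Turing speedup is the speaker's estimate, not a measurement:

```python
# Back-of-the-envelope check of the Remedy numbers quoted above.
titan_v_rt_ms = (5.0, 9.0)    # ray traversal/intersection range on Titan V
shading_ms = (3.0, 5.0)       # reflection shading, roughly the same on both
turing_speedup = 5.0          # the talk's estimated RT speedup on Turing

turing_rt_ms = tuple(t / turing_speedup for t in titan_v_rt_ms)
total_ms = tuple(rt + sh for rt, sh in zip(turing_rt_ms, shading_ms))

print(turing_rt_ms)   # (1.0, 1.8) -> the "1-2 ms" quoted above
print(total_ms)       # shading now dominates the reflection budget
```

So even with the RT portion accelerated 5x, the total reflection cost only drops by a few milliseconds, because the shading term is untouched.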
 
Also note the huge performance drop when you enable RT and nothing else changes.
The huge drop happens because even when you think nothing is happening, the scene is constantly being ray traced so that when something does happen, an explosion or fire (for example) is reflected on your gun, or on metallic surfaces, water, ice, etc.
There's absolutely no indication of RT performance being hindered by anything other than RT hardware, and possibly developer skill and driver quality.
Incorrect; RT acceleration is only part of the process. A huge chunk of it is shading-heavy and is carried out by the ALUs.
 
The huge drop happens because even when you think nothing is happening, the scene is constantly being ray traced so that when something does happen, an explosion or fire (for example) is reflected on your gun, or on metallic surfaces, water, ice, etc.
It shouldn't be. If there are no surfaces requiring rays, which in this case means reflections only, then no ray tracing is happening. Every pixel that has a reflective shader attached casts a ray, and each ray that lands on a surface has that surface's shader evaluated. Thus each reflective pixel is equivalent to another pixel shaded, in the case where 100% of them cast rays. However, rays are undersampled, so you have something on the order of 20% of the reflective pixels being additionally shaded (reflective surface shaders evaluated).
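The undersampling argument above is easy to put numbers on. A toy estimate, using the ~20% ray rate from the post; the 30% reflective share is a made-up illustrative figure:

```python
# Toy estimate of the extra shading from undersampled reflection rays.
# The ~20% ray rate comes from the post above; the 30% reflective share
# of the frame is purely illustrative.
width, height = 1920, 1080
total_pixels = width * height

reflective_fraction = 0.30   # assumed share of pixels with reflective shaders
ray_rate = 0.20              # rays cast per reflective pixel (undersampled)

# Each ray that lands on a surface triggers one extra shader evaluation.
extra_shaded = total_pixels * reflective_fraction * ray_rate
print(extra_shaded, extra_shaded / total_pixels)  # ~124416 pixels, ~6% extra
```

By this arithmetic the extra shading is only a few percent of the frame, which is why the locality of that shading (discussed further down the thread) matters more than its raw amount.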
 
The huge drop happens because even when you think nothing is happening, the scene is constantly being ray traced so that when something does happen, an explosion or fire (for example) is reflected on your gun, or on metallic surfaces, water, ice, etc.

Incorrect; RT acceleration is only part of the process. A huge chunk of it is shading-heavy and is carried out by the ALUs.
Of course I know every reflection is constantly being ray traced, but I was under the impression that all of that is handled by the RT hardware, which further points to it being the limiting factor, and that the shading portion wouldn't be (notably) heavier with RT on vs. off.

Edit: small clarification
 
Ray tracing hardware is there to accelerate BVH traversal and triangle intersection, which are typically very expensive. Shading is still a very heavy cost in RT, and the dedicated hardware doesn't help there. The other new cost is updating the BVH.

Edit: Fixed my "Deficated hardware" auto-correct, which was kind of awesome.
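For a sense of what those fixed-function units are replacing, the basic building block of BVH traversal is a ray/axis-aligned-box "slab" test, run per node, per ray. A minimal sketch in Python (a real traversal would loop over a node hierarchy; this is just the per-node test):

```python
# Ray/AABB "slab" test: the per-node work a software BVH traversal does,
# and the part RT cores move into fixed-function hardware.
def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    t_near, t_far = 0.0, float("inf")
    for o, inv_d, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0, t1 = (lo - o) * inv_d, (hi - o) * inv_d
        if t0 > t1:
            t0, t1 = t1, t0          # handle negative direction components
        t_near, t_far = max(t_near, t0), min(t_far, t1)
    return t_near <= t_far           # slab intervals overlap -> box is hit

# Ray from the origin along (1, 1, 1): hits the box ahead, misses the other.
print(ray_hits_aabb((0, 0, 0), (1.0, 1.0, 1.0), (1, 1, 1), (2, 2, 2)))    # True
print(ray_hits_aabb((0, 0, 0), (1.0, 1.0, 1.0), (3, -1, -1), (4, 0, 0)))  # False
```

A BVH traversal runs dozens of these per ray, millions of rays per frame, which is why it pays to harden it into silicon; the shading that happens after the hit is still the shader cores' problem.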
 
Shading is still a very heavy cost in RT, and the dedicated hardware doesn't help there.

*shading is still a very heavy cost in RT REFLECTIONS. For shadowing, AO or some other possible uses, there is barely any shading involved, excluding filtering. Shadows will deficatedly be more efficient.

EDIT: definitely
 
It shouldn't be. If there are no surfaces requiring rays, which in this case means reflections only, then no ray tracing is happening. Every pixel that has a reflective shader attached casts a ray, and each ray that lands on a surface has that surface's shader evaluated. Thus each reflective pixel is equivalent to another pixel shaded, in the case where 100% of them cast rays. However, rays are undersampled, so you have something on the order of 20% of the reflective pixels being additionally shaded (reflective surface shaders evaluated).

The problem is that the reflection rays land all over the scene, with very little locality. This causes warp divergence, and completely trashes instruction caches since neighboring pixels can be executing code from a multitude of different shaders. Undersampling makes locality even worse.
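A crude way to picture that divergence cost: if the SIMD unit has to execute each distinct shader serially for the whole warp, a divergent warp pays roughly once per unique shader it touches, not once total. A toy model with purely illustrative costs:

```python
# Toy model of warp divergence: a warp of 32 rays whose hits land on
# different materials forces the SIMD unit to run each unique shader
# serially for the whole warp. Costs are purely illustrative.
def warp_cost(shader_ids, cost_per_shader=1.0):
    return len(set(shader_ids)) * cost_per_shader

coherent  = [7] * 32            # primary rays: whole warp on one material
divergent = list(range(32))     # reflection rays scattered across the scene
print(warp_cost(coherent), warp_cost(divergent))  # 1.0 32.0
```

That worst case, a 32x blow-up from fully scattered hits, is why collapsing many specialized shaders into a few generalized ones helps: it shrinks the number of unique shaders a warp can touch.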

Engines will have to focus on cutting overhead by using a small number of generalized shaders rather than a large number of specialized shaders. There are a number of ways to do this, with different pros and cons, and reworking your entire shader system is a nice chunk of work, so we can expect it to take a while before we see results in games.
 
Just theoretically, would it be an option to:
  • Use a generic hit shader which is solely responsible for re-dispatch, and records the actual hit, sorted by actual shader, in the history, with hit vector, list of dispatched vectors, normal and UV. Only distinguishing flags for specular, diffuse and translucent dispatch, at most based on vertex/triangle attributes. Maybe even try a pessimistic energy-conservation estimate for early cut-off. Need to form buckets, to avoid global atomics.
  • Delayed evaluation of all hit types, sorted by specialized shader, annotating each recorded hit as a 1×3 fp16 self-contribution plus a list of 3×3 fp16 color twist matrices. Use as many different shaders as you want, just iterate over all of them.
  • Delayed reduction of all ray trees based on computed PBR, depth first. Single shader again, pure random access + arithmetic, hopefully with very little register pressure, because it needs all the latency masking it can get.
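The bucketing idea in the first point can be sketched on the CPU side like this; all the names here are hypothetical, it only illustrates the "record now, shade later" grouping:

```python
from collections import defaultdict

# "Record now, shade later": a generic hit shader only records
# (shader_id, hit data); a later pass buckets the records per shader so
# each specialized shader runs once, over a coherent batch.
def bucket_hits(hit_records):
    buckets = defaultdict(list)
    for shader_id, hit in hit_records:
        buckets[shader_id].append(hit)
    return buckets

hits = [(2, "hit_a"), (0, "hit_b"), (2, "hit_c"), (1, "hit_d")]
for shader_id, batch in sorted(bucket_hits(hits).items()):
    # one dispatch per specialized shader instead of one divergent pass
    print(shader_id, batch)
```

On a GPU the buckets would be built per workgroup (as the point says, to avoid global atomics), but the effect is the same: each specialized shader sees a coherent batch instead of scattered hits.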
 
Sorry for just wading in here and asking something general:

Are these RT cores also useful for non-ray-tracing tasks? Can they be exploited for other computational tasks like sound, physics, or anything else? This would make the proposition of putting RT cores onto mainstream and lower-end cards actually sensible.
 
Side question: is there a relationship between RT cores and ALUs?

Edit: Clarification.
 
Are these RT cores also useful for non-ray-tracing tasks? Can they be exploited for other computational tasks like sound, physics, or anything else? This would make the proposition of putting RT cores onto mainstream and lower-end cards actually sensible.
Yes, they are; some people have used them for data lookups for complex lighting, and for screen-space physics (that is also world-aware).
https://blog.demofox.org/2018/11/16/how-to-data-lookups-via-raytracing/


The RTX 2060 is going to have RT cores. That would perhaps make it good for 900p60 or 1080p30 at low DXR settings.
 
Soon we can have entry level 2030's for DLSS upscaled 320x200! I wonder if I still have my old CGA monitor around...
 
Just theoretically, would it be an option to:
  • Use a generic hit shader which is solely responsible for re-dispatch, and records the actual hit, sorted by actual shader, in the history, with hit vector, list of dispatched vectors, normal and UV. Only distinguishing flags for specular, diffuse and translucent dispatch, at most based on vertex/triangle attributes. Maybe even try a pessimistic energy-conservation estimate for early cut-off. Need to form buckets, to avoid global atomics.
  • Delayed evaluation of all hit types, sorted by specialized shader, annotating each recorded hit as a 1×3 fp16 self-contribution plus a list of 3×3 fp16 color twist matrices. Use as many different shaders as you want, just iterate over all of them.
  • Delayed reduction of all ray trees based on computed PBR, depth first. Single shader again, pure random access + arithmetic, hopefully with very little register pressure, because it needs all the latency masking it can get.
Old is new again. From deferred rendering to deferred ray tracing...
 
How are those tech demos different from BFV's RT tech?


Design.
One is made for real-time gaming, with the challenges of trying to get RT working alongside some really optimized custom code for maximum performance/quality.

The other is just showcasing the limits of the technology.

The reason the discussion is valid, staying on topic for BFV, is that how RT is implemented in BFV has little to do with how it's accelerated. How it's accelerated directly impacts performance, sure, but the fact that we see bugs/artefacts and whatnot showcases other issues developers could be running into. And that's worth discussing.
 
Yeah, I was thinking that: a game tailored and coded from the ground up to look like that demo would be nice. Not saying BFV isn't, but the possibilities seem great.
 