> Rasterization performance is an important part too. I am curious to know what made you think the game is limited only by its RT performance.

That would only be true if performance with ray tracing enabled depended on the rest of the rendering pipeline rather than on ray tracing itself, and I'm pretty sure everything suggests the ray tracing performance is the limiting factor, which would be just as slow no matter which API you're using.
They are using an i7-7700K, though BFV has proven to demand a six-core CPU for ray tracing and general gameplay, at least with Intel CPUs. I've been using DX12 and performance seems roughly the same, but with lower input lag because you can turn Future Frame Rendering off.
> Rasterization performance is an important part too. I am curious to know what made you think the game is limited only by its RT performance.

If it weren't limited by RT performance, lowering RT quality wouldn't improve performance the way it does now. There's also the huge performance drop, with nothing else changing, when you enable RT.
> If it weren't limited by RT performance, lowering RT quality wouldn't improve performance the way it does now. There's also the huge performance drop, with nothing else changing, when you enable RT.

There's absolutely no indication of RT performance being hindered by anything other than RT hardware, and possibly developer skill and driver quality.
> There's also the huge performance drop, with nothing else changing, when you enable RT.

The huge drop happens because even when you think nothing is happening, the scene is constantly being ray traced in preparation for when something does happen: for an explosion or fire to be reflected on your gun (for example), or on metallic surfaces, water, ice, etc.
> There's absolutely no indication of RT performance being hindered by anything other than RT hardware, and possibly developer skill and driver quality.

Incorrect. RT acceleration is only part of the process; a huge chunk of it is shading-heavy and is carried out by the ALUs.
> The huge drop happens because even when you think nothing is happening, the scene is constantly being ray traced in preparation for when something does happen: for an explosion or fire to be reflected on your gun (for example), or on metallic surfaces, water, ice, etc.

It shouldn't be. If there are no surfaces requiring rays, which in this case means purely reflections, then there is no ray tracing happening. Every pixel with a reflective shader attached casts a ray as the shader dictates, and each ray that hits a surface has that surface's shader evaluated. Thus each reflective pixel is equivalent to another pixel shaded, in the case of 100% ray coverage. However, rays are undersampled, so something on the order of 20% of the reflective pixels are additionally shaded (their reflected surface shaders evaluated).
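A back-of-the-envelope sketch of that cost model; the resolution, reflective fraction, and sample rate below are illustrative assumptions, not measurements from BFV:

```cpp
#include <cstdio>

int main() {
    // Illustrative assumptions only: a 1080p frame where 30% of pixels
    // carry a reflective shader, with reflection rays undersampled at ~20%.
    const double totalPixels        = 1920.0 * 1080.0;
    const double reflectiveFraction = 0.30;  // pixels with a reflective shader
    const double raySampleRate      = 0.20;  // ~20% of those actually cast a ray

    // Each traced ray that hits a surface evaluates that surface's shader,
    // so every traced reflection ray is roughly one extra shaded "pixel".
    const double extraShadedPixels = totalPixels * reflectiveFraction * raySampleRate;

    printf("Extra shader evaluations per frame: ~%.0f (%.1f%% of the frame)\n",
           extraShadedPixels, 100.0 * extraShadedPixels / totalPixels);
    return 0;
}
```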
> The huge drop happens because even when you think nothing is happening, the scene is constantly being ray traced in preparation for when something does happen: for an explosion or fire to be reflected on your gun (for example), or on metallic surfaces, water, ice, etc.

Of course I know every reflection is constantly being ray traced, but I was under the impression that all of that is handled by the RT hardware, which further points to it being the limiting factor, and that the shading portion wouldn't be (notably) heavier with RT on versus off.
> Incorrect. RT acceleration is only part of the process; a huge chunk of it is shading-heavy and is carried out by the ALUs.

Shading is still a very heavy cost in RT, and the dedicated hardware doesn't help with it.
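To see why the dedicated hardware doesn't remove the shading cost, here is a minimal Amdahl's-law style sketch; the split between traversal and hit shading and the speedup factor are made-up numbers, not real measurements:

```cpp
#include <cstdio>

int main() {
    // Hypothetical per-frame costs in milliseconds; not real measurements.
    const double traversalMs   = 6.0;   // BVH traversal / intersection work
    const double hitShadingMs  = 9.0;   // material shading of ray hits, on the ALUs
    const double rtCoreSpeedup = 10.0;  // assumed speedup for traversal only

    // RT cores accelerate traversal, but hit shading still runs on the ALUs,
    // so shading quickly dominates the total ray tracing cost.
    const double withRtCores = traversalMs / rtCoreSpeedup + hitShadingMs;

    printf("Without RT cores: %.1f ms\n", traversalMs + hitShadingMs);
    printf("With RT cores:    %.1f ms (shading now %.0f%% of RT cost)\n",
           withRtCores, 100.0 * hitShadingMs / withRtCores);
    return 0;
}
```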
> Are these RT cores also useful for non-ray-tracing tasks? Can they be exploited for any other computational tasks like sound, physics, or anything else? That would make the proposition of putting RT cores onto mainstream and lower-end cards actually sensible.

Yes they are. Some people have used them for data lookups for complex lighting, and for screen-space physics that are also world-aware.
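As a toy illustration of reusing ray intersection machinery for a non-rendering task, here is a CPU-side physics-style ray query; on RTX hardware the analogous query would be issued against the acceleration structure through DXR, and everything below is a simplified stand-in:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };
struct Sphere { Vec3 c; double r; };

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

// Classic ray/sphere test (d must be unit length): returns the distance
// to the nearest hit along the ray, or -1 on a miss.
static double raySphere(Vec3 o, Vec3 d, const Sphere& s) {
    Vec3 oc = sub(o, s.c);
    double b = dot(oc, d);
    double disc = b*b - (dot(oc, oc) - s.r*s.r);
    if (disc < 0.0) return -1.0;
    double t = -b - std::sqrt(disc);
    return t >= 0.0 ? t : -1.0;
}

int main() {
    // The same intersection machinery used for reflections can answer a
    // physics question: "does this projectile path hit anything?"
    std::vector<Sphere> world = {{{0, 0, 5}, 1.0}, {{3, 0, 9}, 2.0}};
    Vec3 origin = {0, 0, 0}, dir = {0, 0, 1};

    double nearest = -1.0;
    for (const Sphere& s : world) {
        double t = raySphere(origin, dir, s);
        if (t >= 0.0 && (nearest < 0.0 || t < nearest)) nearest = t;
    }
    if (nearest >= 0.0) printf("Hit at distance %.2f\n", nearest);
    else                printf("No hit\n");
    return 0;
}
```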
> Just theoretically, would it be an option to:
>
> - Use a generic hit shader which is solely responsible for re-dispatch, and which records each actual hit in a history, sorted by the actual shader, with the hit vector, the list of dispatched vectors, the normal, and the UV. Only distinguishing flags for specular, diffuse, and translucent dispatch, at most based on vertex/triangle attributes. Maybe even try pessimistic energy-conservation estimation for early cut-off. Hits need to be formed into buckets, to avoid global atomics.
> - Delayed evaluation of all hit types, sorted by specialized shader, annotating each recorded hit as a 1×3 fp16 self-contribution plus a list of 3×3 fp16 color twist matrices. Use as many different shaders as you want; just iterate over all of them.
> - Delayed reduction of all ray trees based on the computed PBR, depth-first. A single shader again, pure random access plus arithmetic, hopefully with very little register pressure, because it needs all the latency masking it can get.

Old is new again. From deferred rendering to deferred ray tracing...
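For concreteness, here is a rough CPU-side sketch of the hit recording and bucketed delayed evaluation described in the quoted proposal; every structure and field name is a hypothetical stand-in for the GPU buffers it imagines:

```cpp
#include <array>
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical record written by the generic hit shader: enough to defer
// all material evaluation (hit point, normal, UV, and which shader owns it).
struct RecordedHit {
    std::array<float, 3> hitPoint;
    std::array<float, 3> normal;
    std::array<float, 2> uv;
    uint32_t shaderId;   // index of the specialized shader to run later
    uint32_t parentRay;  // link back into the ray tree for the reduction pass
};

int main() {
    std::vector<RecordedHit> history = {
        {{0, 1, 4}, {0, 1, 0}, {0.2f, 0.8f}, /*shaderId=*/1, /*parentRay=*/0},
        {{2, 0, 7}, {1, 0, 0}, {0.5f, 0.5f}, /*shaderId=*/0, /*parentRay=*/0},
        {{1, 1, 3}, {0, 0, 1}, {0.1f, 0.9f}, /*shaderId=*/1, /*parentRay=*/1},
    };

    // Bucket hits per shader so each specialized shader runs over a coherent
    // batch (avoiding per-hit global atomics and divergent shader switching).
    const size_t numShaders = 2;
    std::vector<std::vector<const RecordedHit*>> buckets(numShaders);
    for (const RecordedHit& h : history) buckets[h.shaderId].push_back(&h);

    // Delayed evaluation: iterate over every shader, processing its bucket.
    for (size_t s = 0; s < numShaders; ++s) {
        printf("shader %zu evaluates %zu hit(s)\n", s, buckets[s].size());
        // ...here each hit would be annotated with its fp16 self-contribution
        // and color twist matrices, then the ray trees reduced depth-first.
    }
    return 0;
}
```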
Side question ... is there a relationship between RT cores and the ALUs?
Edit: Clarification.
> How are those tech demos different from BFV's RT tech?

Design.