> Just some theory crafting about that slide since I am unhappy with the formulation. What if the local GPU HW provides a low ray count, say 1 ray ppx or even less for a standard effect... say reflections

How do you design hardware to trace a limited ray count? That's like suggesting a GPU is designed to shade only a fixed number of pixels in order to be faster at shading that number of pixels. You have a number of compute units; these run pixel shaders and shade pixels, whatever the resolution. You can't make a compute unit faster by limiting it to 1080p framebuffers. The ROPs draw the pixels, however many you want, as quickly as they can. You can't make a ROP faster by limiting it to 1080p framebuffers. Likewise, with ray tracing, you cast rays, however many you choose: a handful for AI, billions for total scene illumination. Once your hardware has traced all those rays, whether on the CPU, in compute, or on accelerated HW, you have your data to use however you want, such as constructing an image. The process of tracing a ray is independent of screen size.
I can't envision a hardware design that traces only a fixed number of rays, unless you literally have 2 million sampling units that each trace one ray per frame for a 1080p image. Realistically, HWRT is going to be some form of processor that takes ray workloads and produces results as quickly as it can, with those results used however the developer chooses.
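To make that concrete, here's a toy CPU sketch (purely my own illustration, nothing to do with any actual RT hardware design): the same trace() function serves a handful of AI line-of-sight rays or roughly two million camera rays for a 1080p image, and nothing in it knows or cares about a framebuffer. Only the size of the ray list changes.

```cpp
// Toy sketch: "tracing" is just a function applied to however many rays you
// hand it. The ray count comes from the workload (a few visibility checks for
// AI, one per pixel for an image), not from anything baked into the tracer.
#include <cstdio>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };   // dir assumed normalized
struct Hit  { bool hit; float t; };

// Intersect one ray with a single hard-coded sphere (centre c, radius r).
// The logic is the same whether it runs on a CPU, in compute, or in
// fixed-function hardware: ray in, hit record out.
Hit trace(const Ray& ray) {
    const Vec3 c{0.0f, 0.0f, -5.0f};
    const float r = 1.0f;
    Vec3 oc{ray.origin.x - c.x, ray.origin.y - c.y, ray.origin.z - c.z};
    float b    = oc.x * ray.dir.x + oc.y * ray.dir.y + oc.z * ray.dir.z;
    float cc   = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - r * r;
    float disc = b * b - cc;
    if (disc < 0.0f) return {false, 0.0f};
    return {true, -b - std::sqrt(disc)};
}

// The same trace() serves any workload; only the ray list differs.
int countHits(const std::vector<Ray>& rays) {
    int hits = 0;
    for (const Ray& ray : rays) hits += trace(ray).hit ? 1 : 0;
    return hits;
}

int main() {
    // Workload A: a handful of line-of-sight rays for AI.
    std::vector<Ray> aiRays(5, Ray{{0, 0, 0}, {0, 0, -1}});

    // Workload B: one primary ray per 1080p pixel (~2 million rays).
    std::vector<Ray> cameraRays;
    cameraRays.reserve(1920 * 1080);
    for (int y = 0; y < 1080; ++y)
        for (int x = 0; x < 1920; ++x) {
            float px  = (x / 1920.0f - 0.5f) * 2.0f;
            float py  = (y / 1080.0f - 0.5f) * 2.0f * (1080.0f / 1920.0f);
            float len = std::sqrt(px * px + py * py + 1.0f);
            cameraRays.push_back({{0, 0, 0}, {px / len, py / len, -1.0f / len}});
        }

    std::printf("AI rays hit: %d of %zu\n", countHits(aiRays), aiRays.size());
    std::printf("Camera rays hit: %d of %zu\n", countHits(cameraRays), cameraRays.size());
}
```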
Perhaps, thinking aloud as ideas come to me, the RT process is coarse-grained, not tracing down to the geometry level, which would make it suitable for lighting but not for sharp reflections? That would touch less memory, so caches would be more effective. Hardware cone tracing? Well, no, it's called ray tracing in the slide.
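Roughly what I mean by coarse-grained, as a sketch (again entirely my own illustration, not anything the slide confirms): the same traversal can either stop at a leaf's bounding box, which keeps the working set down to the small node data and gives an approximate answer that's fine for diffuse lighting, or descend into the triangle list for an exact hit, which is what sharp reflections would need but which drags far more geometry data through the caches.

```cpp
// Hypothetical two-precision trace: coarse mode stops at a leaf's AABB
// (tiny, cache-friendly data), fine mode intersects the triangles inside
// (exact hit point, much larger working set).
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <limits>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Ray  { Vec3 o, d; };          // d assumed normalized
struct Tri  { Vec3 v0, v1, v2; };
struct Leaf {
    Vec3 bmin, bmax;                 // tiny: stays cache-resident
    std::vector<Tri> tris;           // bulky: only touched in fine mode
};

// Slab test: does the ray enter the leaf's AABB, and at what distance?
static bool hitBox(const Ray& r, const Leaf& l, float& t) {
    float t0 = 0.0f, t1 = std::numeric_limits<float>::max();
    const float ro[3] = {r.o.x, r.o.y, r.o.z}, rd[3] = {r.d.x, r.d.y, r.d.z};
    const float lo[3] = {l.bmin.x, l.bmin.y, l.bmin.z}, hi[3] = {l.bmax.x, l.bmax.y, l.bmax.z};
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / rd[i];
        float tn = (lo[i] - ro[i]) * inv, tf = (hi[i] - ro[i]) * inv;
        if (tn > tf) std::swap(tn, tf);
        t0 = std::max(t0, tn); t1 = std::min(t1, tf);
        if (t0 > t1) return false;
    }
    t = t0;
    return true;
}

// Moller-Trumbore: exact ray/triangle distance.
static bool hitTri(const Ray& r, const Tri& tri, float& t) {
    Vec3 e1 = sub(tri.v1, tri.v0), e2 = sub(tri.v2, tri.v0);
    Vec3 p = cross(r.d, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;
    float inv = 1.0f / det;
    Vec3 s = sub(r.o, tri.v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(r.d, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > 0.0f;
}

// One query, two precisions: coarse stops at the box, fine tests the triangles.
static bool trace(const Ray& r, const Leaf& leaf, bool coarse, float& t) {
    if (!hitBox(r, leaf, t)) return false;   // both modes need the box test
    if (coarse) return true;                 // approximate hit: box entry distance
    bool found = false;
    float best = std::numeric_limits<float>::max();
    for (const Tri& tri : leaf.tris) {       // fine mode streams the triangle data
        float tt;
        if (hitTri(r, tri, tt) && tt < best) { best = tt; found = true; }
    }
    if (found) t = best;
    return found;
}

int main() {
    // One leaf holding a single triangle near z = -5, with a loose bounding box.
    Leaf leaf{{-1, -1, -6}, {1, 1, -4}, {Tri{{-1, -1, -5}, {1, -1, -5}, {0, 1, -5}}}};
    Ray ray{{0, 0, 0}, {0, 0, -1}};
    float t;
    if (trace(ray, leaf, true,  t)) std::printf("coarse hit at t = %.2f\n", t); // box entry: 4.0
    if (trace(ray, leaf, false, t)) std::printf("fine hit at t = %.2f\n",  t);  // triangle: 5.0
}
```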