AMD made a mistake letting NVIDIA drive Ray Tracing APIs

Haven’t read the full paper yet, but SWRT research from NVIDIA shouldn’t be surprising; they’ve been doing this for ages, leading up to OptiX etc.

Intro is interesting:

“It is generally believed that employing ray tracing over rasterization is the key to achieving photorealism, which is provably easier and more consistent to simulate global shading effects such as ambient occlusion and indirect illumination. However, with only hardware ray tracing (HWRT) using specific hardware acceleration support, one can barely achieve real-time frame rates. More crucially, even today, only high-end platforms support HWRT. So we still need a HWRT alternative, software ray tracing (SWRT) solution, as a reasonable approximation to HWRT for providing users with the option to scale down: allowing trade-offs between quality and performance for low-end platforms, e.g., mobile devices and VR headsets.”
 
It can be faster, too, and for global illumination, speed won't stop mattering any time soon.

Or, in the case of Deep Appearance Prefiltering, it can do things HWRT can't without running full Monte Carlo to convergence.
 
ROMA is a fairly interesting scheme for hardware. With some logic gates you could trivially derive a ray-aligned occupancy map from the base occupancy map: pick the face the ray hits first and make that the UV plane; after that, the other planes along the W direction just have to be shifted on the UV plane. That's a lot of binary permutations, but permutations are cheap in logic.
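The shifting idea above can be sketched in a few lines. This is my own toy 1D-slice illustration, not the paper's implementation: each depth slice along W is a bitmask over U, shifting slice w by w times the ray's per-step offset aligns the map to the ray direction, and then any ray of that direction reduces to scanning a straight column of bits.

```python
def ray_aligned(slices, du):
    """Shift each depth slice by its depth times the ray's per-step
    offset, so every ray of this direction becomes a straight column.
    slices: list of int bitmasks (one per W plane); du: integer UV
    shift per W step. A hypothetical software stand-in for the
    logic-gate shifter described above."""
    return [s >> (w * du) for w, s in enumerate(slices)]

def first_hit(aligned, u0):
    """Scan the straight column at bit u0 for the first occupied plane;
    return its depth, or None if the ray exits empty."""
    for w, s in enumerate(aligned):
        if (s >> u0) & 1:
            return w
    return None
```

For example, a ray entering at u = 1 with slope +1 per step against slices `[0b0000, 0b0100, 0b0010]` first hits at depth 1, since by then it sits at u = 2, where slice 1 is occupied.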

I'm surprised he didn't try cone tracing, it fits so well.
 