GPU Ray Tracing Performance Comparisons [2021-2022]

I'm fairly sure that all GPU vendors can control most of a GPU's innards through s/w (BIOS/MC and drivers). Whether this can be exposed through the drivers as a public API is another issue, and one that may not have a positive answer if such exposure would cause more problems than it wins in performance.
Strong possibility that the TSU might be fixed function logic ...

Any time you use RTPSOs/TraceRay, Intel HW will automatically reorder threads for RT shader dispatch for free. There's virtually no explicit programming involved with the TSU, since it's designed to be used implicitly by the RT pipeline. SER does have overhead, which means its HW implementation isn't free the way a fixed-function unit would be ...

Also, it's a bad idea to use either to solve shader permutations, since that increases register pressure and lowers occupancy. Intel wants developers to specialize their RT shaders with as little branching as possible for best performance ...
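To make the reordering idea concrete: the win (whether from Intel's TSU or NVIDIA's SER) is coherence, i.e. grouping hits that will invoke the same hit shader so neighbouring SIMD lanes stop diverging. Here's a minimal CPU-side C++ sketch of that idea; all the names and types are hypothetical stand-ins, and the real hardware does this implicitly inside the RT pipeline rather than in user code.

```cpp
// Toy sketch of the idea behind hardware thread reordering (TSU / SER):
// before shading, group hits by the hit shader / material they will invoke
// so that neighbouring "threads" run the same code.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

struct HitRecord {
    uint32_t materialId;  // which hit shader / material this hit will execute
    float    t;           // hit distance
};

int main() {
    // Fake a frame's worth of incoherent hits: random materials, random distances.
    std::mt19937 rng(1234);
    std::uniform_int_distribution<uint32_t> mat(0, 7);
    std::uniform_real_distribution<float> dist(0.1f, 100.0f);

    std::vector<HitRecord> hits(1 << 16);
    for (HitRecord& h : hits)
        h = {mat(rng), dist(rng)};

    // The software analogue of the hardware sorting unit: turn a divergent
    // stream of hits into contiguous runs that execute the same shader,
    // which is what improves SIMD utilisation and cache behaviour on a GPU.
    std::sort(hits.begin(), hits.end(),
              [](const HitRecord& a, const HitRecord& b) {
                  return a.materialId < b.materialId;
              });

    // After sorting there is at most one "shader switch" per material,
    // instead of (nearly) one per hit in the incoherent stream.
    size_t switches = 0;
    for (size_t i = 1; i < hits.size(); ++i)
        switches += (hits[i].materialId != hits[i - 1].materialId);
    std::cout << "shader switches after sorting: " << switches << "\n";
    return 0;
}
```

The same reasoning explains the advice about permutations above: many specialized hit shaders dispatched in coherent batches beat one branchy ubershader whose worst-case register usage drags occupancy down for every ray.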
 
Spatiotemporal ray tracing techniques were not required for non-realtime CGI rendering. After practical RTRT hardware hit the market, everything started to evolve, driven mainly by Nvidia.
It was never about practical hardware; if it was, we would have been running PowerVR hardware long before NVIDIA did anything. It's just that no one cared enough to license the IP and build it for commercial use back then.
 
Spatiotemporal ray tracing techniques were not required for non-realtime CGI rendering. After practical RTRT hardware hit the market, everything started to evolve, driven mainly by Nvidia.

I've seen plenty of research on denoising and spatial filtering (less on temporal reuse, but on that too) for offline rendering over the past decade. While not strictly necessary for offline CGI, better efficiency can mean thousands to millions of dollars saved in server use and in man-hours lost to slow render times and slower iteration.

I think it was ignored because it biases the renderer, but this obsession with 100% unbiased path tracing is misguided anyway.
 
Denoising for Monte Carlo rendering is used throughout the CG fx industry.
Its job is to reduce low to medium levels of noise, not to fill the huge gaps in a 1-path-per-pixel (or less) image typical of real-time path tracing.

BTW, the paper mentioned above is not about denoising in the traditional sense of post processing an image.
It's about making the best of the limited sampling budget of a real-time application, via computing better samples.
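To make "computing better samples" concrete, one common building block is resampled importance sampling with a weighted reservoir: stream many cheap candidates and keep a single survivor in proportion to a target function. The toy C++ sketch below assumes this ReSTIR-style idea is the kind of technique meant (the paper isn't named here); the scalar target is just a stand-in for something like unshadowed light contribution.

```cpp
// Toy resampled importance sampling with a weighted reservoir.
// Candidates come from a cheap source pdf (uniform on [0,1), pdf = 1) and
// one survivor is kept in proportion to an unnormalised target p_hat(x) = x*x.
#include <iostream>
#include <random>

static float targetPdf(float x) { return x * x; }  // stand-in target function

struct Reservoir {
    float y = 0.0f;     // the surviving sample
    float wSum = 0.0f;  // running sum of resampling weights
    int   m = 0;        // number of candidates seen

    void update(float candidate, float w, std::mt19937& rng) {
        wSum += w;
        ++m;
        std::uniform_real_distribution<float> u(0.0f, 1.0f);
        if (wSum > 0.0f && u(rng) < w / wSum)
            y = candidate;  // keep the new candidate with probability w / wSum
    }
};

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);

    Reservoir r;
    const int M = 32;  // candidate budget -- cheap to generate, only one survives
    for (int i = 0; i < M; ++i) {
        float x = uni(rng);              // candidate from the cheap source pdf (= 1)
        float w = targetPdf(x) / 1.0f;   // resampling weight p_hat(x) / p(x)
        r.update(x, w, rng);
    }

    // Unbiased contribution weight for the survivor: (1 / p_hat(y)) * (wSum / m).
    float W = (targetPdf(r.y) > 0.0f) ? (r.wSum / r.m) / targetPdf(r.y) : 0.0f;
    std::cout << "survivor = " << r.y << ", contribution weight = " << W << "\n";
    return 0;
}
```

The point is that the per-pixel shading cost stays at one surviving sample, while that survivor ends up distributed much closer to the integrand than a blind uniform pick would be.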
 
It boggles the mind how much progress Nvidia has made in the denoising area in a couple of years, when the entire CGI industry had done so little over decades.
I can recall, when I was really young, reading my first computer magazines (PC Actual, Micromania, PC Mania, etc.) featuring static images from SIGGRAPH with ray-traced graphics, which were absolutely gorgeous, and the articles mentioned that the most powerful computers of the time took a few months, or several weeks in the best case, to render a single static ray-traced image o_O . In fact, when the first RTX was shown I thought: "What? Are you kidding me? That can't be true RT".
 
I can recall, when I was really young, reading my first computer magazines (PC Actual, Micromania, PC Mania, etc.) featuring static images from SIGGRAPH with ray-traced graphics, which were absolutely gorgeous, and the articles mentioned that the most powerful computers of the time took a few months, or several weeks in the best case, to render a single static ray-traced image o_O . In fact, when the first RTX was shown I thought: "What? Are you kidding me? That can't be true RT".
This is seriously something I just don't understand. Imagination/PowerVR did it all years before NVIDIA, yet somehow NVIDIA doing it later was the miracle?
 
Denoising for Monte Carlo rendering is used throughout the CG fx industry.
Its job is to reduce low to medium levels of noise, not to fill the huge gaps in a 1-path-per-pixel (or less) image typical of real-time path tracing.

BTW, the paper mentioned above is not about denoising in the traditional sense of post processing an image.
It's about making the best of the limited sampling budget of a real-time application, via computing better samples.

I know all that. I'm using denoising as the blanket term academia, the industry and Nvidia themselves like to use.

But I remember even recent papers showing offline path-traced images with thousands of rays per pixel that still looked noisier than these 1-ray-per-pixel results. I'm obviously not suggesting Hollywood VFX adopt the shitty quality real time finds acceptable today, but surely there's more than a trick or two that these real-time solutions found which could benefit the higher-quality offline solutions.

If anything, for interactive previews at least, which still take a couple of seconds to integrate into anything half decent, while Nvidia is doing it 30 times a second.
 
This is seriously something I just don't understand. Imagination/PowerVR did it all years before NVIDIA, yet somehow NVIDIA doing it later was the miracle?

Unfortunately, this is not about who did it first; it's about who's able to bring it to the masses in a meaningful way. The same applies to almost every new technology: automobiles, smartphones, light bulbs, you name it.
 
This is seriously something I just don't understand. Imagination/PowerVR did it all years before NVIDIA, yet somehow NVIDIA doing it later was the miracle?
Is there proof showing to what extent that is the case? Meaning that they could create something very similar, but it couldn't be used or wasn't polished. I heard about nVidia working on RT GPUs like ten years ago, if my memory doesn't fail me, although I thought it was mostly gibberish 'cos I deemed it impossible, judging from the heavy processing needed to create a single ray-traced frame.
 
Is there proof showing to what extent that is the case? Meaning that they could create something very similar, but it couldn't be used or wasn't polished. I heard about nVidia working on RT GPUs like ten years ago, if my memory doesn't fail me, although I thought it was mostly gibberish 'cos I deemed it impossible, judging from the heavy processing needed to create a single ray-traced frame.
Caustic already had a working FPGA, which was even sold, in 2009. They were acquired by Imagination in 2010, and in 2013 they released an ASIC version of the FPGA as a PCIe add-on card (the R2500, which had two of the chips on one board), still carrying the Caustic name. They had a full PowerVR mobile GPU with Caustic-based HWRT on silicon in 2016.

Here's one of the related videos
 
This is seriously something I just don't understand. Imagination/PowerVR did it all years before NVIDIA, yet somehow NVIDIA doing it later was the miracle?
Imagination wasn't the first company designing HW ray tracing products:
AFAIK Imagination ray tracing IP hadn't shipped in any mass market product before Turing was released (not sure if this is still true).
 
Imagination wasn't the first company designing HW ray tracing products:
AFAIK Imagination ray tracing IP hadn't shipped in any mass market product before Turing was released (not sure if this is still true).
They weren't the first to do HW RT, but they were the first to do PC-compatible HW RTRT if we count Caustic, which they bought (as I mentioned in the previous post), and the first to do HW RT as part of a GPU.
No, they didn't ship the GPU, because for whatever reason no one cared about HW RTRT. Maybe the mobile space was the wrong place to try to push it. (Caustic, and Imagination under the Caustic brand, did ship HW RTRT products earlier, though, just not GPUs.)
 
Imagination wasn't the first company designing HW ray tracing products:
AFAIK Imagination ray tracing IP hadn't shipped in any mass market product before Turing was released (not sure if this is still true).
No, but they were the first to try pairing ray tracing hardware with a modern, proven, programmable GPU design, and one where you programmed the RT hardware through an API designed to fit in well with traditional rasterisation.

They didn't succeed in getting the technology into a proper product, but it was ahead of its time. Nvidia do deserve all of the credit for bringing it to the masses and bootstrapping the ecosystem as well (arguably even harder than designing hardware for it).
 