NVIDIA's Morgan McGuire predicts that the first AAA game to REQUIRE a ray tracing GPU ships in 2023

As shown above, this isn't limited to RT effects; it applies to other effects as well, so that argument isn't really a reason to suddenly dismiss all graphical advancements as useless and irrelevant.
Good examples here. It was hard to notice the difference; usually I could only spot something like foliage, or depth of field being lower quality, etc.

I guess the debate for me is whether low settings with RT enabled would be noticeably better than ultra settings. Most of the time we benchmark ultra vs. ultra + RT (low/med/high), but have we tried low settings + RT? @Dictator ?
 
Why does it matter? The cost of anything new will always be easier to bear at the top of the market. Margins are just too tight at the bottom.
It matters because if the technology cannot be deployed across different tiers of the market, it will remain a niche technology. Nothing inherently wrong with that - but it is of limited interest.
In your discussion about RTRT, the underlying theme is that technologies will move from the very expensive to become ubiquitous. My counterargument to that is that the technological foundation for assuming such a migration going forward is not solid. And that the motives for doing so at all are lacking from a consumer point of view. There haven't been consumers on the barricades demanding "arguably better shadowing at high expense". What we have seen is quite a bit of disgruntlement that price/performance development has slowed a lot. People want better performance, cheaper.

Neither was tessellation, or multi-texturing, or any incremental advancement in IQ. Everyone is entitled to their own opinion, but I'm certainly not hankering for the same tired old graphics at 250fps. If there's a better use of the transistor budget than RT, it'll be great to see that too if and when it appears.
Again you're arguing that "since other technologies in the past made it to mainstream, RTRT will too". I contend that this kind of reasoning does not carry across technologies or across time. Unifying vertex and pixel shading, for instance, brought flexibility and ultimately efficiency, and the cost in complexity was bought with advancements in lithography. But neither of those may apply equally to RTRT. Sure, everyone is entitled to their own opinions, but I'm not looking for other people's opinions. I'm looking for their arguments and analysis as they apply to RTRT specifically.

Also, an option when it comes to transistor budget is simply not to spend it, and save on die area, power draw, cooling, and total cost. It may make a better product for the end consumer.

That’s a pretty silly argument. New features take a long time to saturate the market whether IGPs get them or not. And as I said above it’s easier to absorb the cost of new tech when you have margins and a power budget to play with.
Of course. But if it doesn't migrate throughout the market, it is basically e-peen for those who like to spend their disposable income on tech toys just for the sake of it. And there it is again, the question of whether this will actually become the default way to handle some of the lighting in games. I can't say with any certainty if or when that might happen; my crystal ball is lousy when it comes to such things, particularly at the very long time scales we are likely talking about here. But I can easily make a case for why it shouldn't, as long as the efficiency isn't there.

Trying to refine this into a clear question I'd make this attempt: Does it make sense for consumers to trade transistors away from the mainstream path of today to dedicated RTRT hardware?
And my answer to that question, at this point in time, is no.
If the efficiency picture changes drastically, so would my answer.
 
Trying to refine this into a clear question I'd make this attempt: Does it make sense for consumers to trade transistors away from the mainstream path of today to dedicated RTRT hardware?
And my answer to that question, at this point in time, is no.
If the efficiency picture changes drastically, so would my answer.

The answer of course is, it doesn't really. Nvidia made some great advances with its latest architecture, but those advances have to do with rasterization. Moving to task/mesh shaders is great; it unifies and simplifies pipelines for developers, allowing ease of development for certain kinds of programming and faster operation on less silicon. Both are a clear hardware win, and I greatly hope AMD implements something with relative parity on the PS5/XBS.

But Nvidia's dedicated BVH/triangle hardware that takes up extra silicon, AND Microsoft's DXR, are both steps backward. They were designed by hardware enthusiasts wanting to overcome a very specific, narrow problem without regard for programmers or consumers. Programmers want flexibility, they want to be able to program things. Microsoft's ridiculous "do it our way by fiat!" has ruined and delayed far too much graphics hardware. Fixed tessellation pipelines have resulted in a barely used feature for a decade now, geometry shaders were an MS dictated feature that has proven entirely useless, and now they want to ruin raytracing through their tactic of being "too big to ignore" as well.

Raytracing was already coming. UE4 has had signed distance field tracing for years now; it's shipped in Gears 5 for distant shadows. The Hunt: Showdown and Kingdom Come both use CryEngine's voxel cone tracing. Control's signed distance field cone tracing, whatever that is specifically, runs on modern consoles, looks great, and is blazing fast.
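For anyone who hasn't looked at how those alternatives actually work, here's a rough sketch of the sphere-tracing loop at the heart of signed distance field tracing, written as plain C++. It's my own toy example, not code from UE4 or any of those games; the scene function is made up.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float length(Vec3 a)         { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }

// Hypothetical scene: distance from point p to the nearest surface.
// A real engine samples a precomputed 3D distance texture here instead.
static float sceneSDF(Vec3 p) {
    return length(p) - 1.0f;                        // a unit sphere at the origin
}

// Sphere tracing: step along the (normalized) ray direction by the distance
// to the nearest surface each iteration. No triangles and no BVH involved;
// the distance field itself acts as the acceleration structure.
bool traceSDF(Vec3 origin, Vec3 dir, float maxT, float& hitT) {
    float t = 0.0f;
    for (int i = 0; i < 128 && t < maxT; ++i) {
        float d = sceneSDF(add(origin, scale(dir, t)));
        if (d < 1e-3f) { hitT = t; return true; }   // close enough: call it a hit
        t += d;                                     // largest step guaranteed not to skip a surface
    }
    return false;                                   // ran out of steps or distance
}
```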

But now because the great dictator Microsoft has decreed it, many engines have abandoned fast, programmable tracing to concentrate solely on BVH triangle tracing using specialized hardware that costs consumers extra money, gives programmers a narrow corridor to achieve anything, and is frankly quite slow for what it actually achieves. If this continues then McGuire is right, it will take years before specialized RT hardware is required, because it's slow and painful and costly. The only other option is that AMD and Microsoft's actual game developers, not some isolated hardware lab dudes, convince MS that programmable, unified RT hardware is the way to go for XBS; then RTX itself will be seen as a curious, low-compatibility experiment, but consumers will ultimately win.
 
But Nvidia's dedicated BVH/triangle hardware that takes up extra silicon, AND Microsoft's DXR, are both steps backward. They were designed by hardware enthusiasts wanting to overcome a very specific, narrow problem without regard for programmers or consumers. Programmers want flexibility, they want to be able to program things.
Nvidia chose to go with BVH structure acceleration. It doesn't have to be done that way; handling ray intersection with that method is entirely Nvidia's choice. DXR is a generic API, an extension of DX12, for the purpose of casting rays, _any_ rays, calling on drivers to determine intersection and allowing a shader to be run against the intersected triangles. DXR is nothing more than a unified API for ray casting and intersection; it's purposefully designed to be flexible, which is why it's not locked to only shooting light rays. The type of ray cast is entirely up to the developer. It can be used for AI vision, audio, etc.
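To make that concrete, here's a conceptual mock of that model in plain C++. It is not the real DXR API (that lives in HLSL and D3D12, and every name below is made up); it just shows the shape of the contract: the app describes a ray and supplies hit/miss shaders, and the implementation is free to find the intersection however it wants, BVH or otherwise.

```cpp
#include <functional>
#include <optional>

// Conceptual C++ mock of the DXR model (not the real API; all names are made up).
struct Ray {
    float origin[3], dir[3];
    float tMin, tMax;                     // valid interval along the ray
};

struct HitInfo {
    float    t;                           // distance to the closest hit
    unsigned primitiveId;                 // which triangle was hit
};

struct Scene {
    // Stand-in for the vendor's acceleration structure: any callable that
    // answers "what does this ray hit first?" (BVH, grid, brute force, ...).
    std::function<std::optional<HitInfo>(const Ray&)> closestHit;
};

// Stand-in for DXR's TraceRay(): dispatch to the hit or miss "shader".
template <class OnHit, class OnMiss>
void traceRay(const Scene& scene, const Ray& ray, OnHit onHit, OnMiss onMiss) {
    if (auto hit = scene.closestHit(ray)) onHit(*hit);
    else                                  onMiss();
}

// The same mechanism serves non-lighting rays, e.g. an AI line-of-sight query.
bool canSee(const Scene& scene, const Ray& toTarget) {
    bool blocked = false;
    traceRay(scene, toTarget,
             [&](const HitInfo&) { blocked = true; },
             [&]()               { /* nothing in the way */ });
    return !blocked;
}
```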

Microsoft's ridiculous "do it our way by fiat!" has ruined and delayed far too much graphics hardware. Fixed tessellation pipelines have resulted in a barely used feature for a decade now, geometry shaders were an MS dictated feature that has proven entirely useless,
These are made in tandem with the vendors. It's funny you mention geometry shaders and the need to be flexible. GS are entirely flexible and programmable, but they are too slow to be worth using, which is why they were ignored. The trade-off for flexibility is, and always will be, speed. We started at fixed function, and over time, as silicon got more powerful, we moved to flexible compute. I think romanticizing the idea that we could have started GPU processing where we are today is laughable. That's like saying we should never have bothered with coal, oil, nuclear, and renewables, and should have figured out how to build fusion from the get-go.

Raytracing was already coming. UE4 has had signed distance field tracing for years now; it's shipped in Gears 5 for distant shadows. The Hunt: Showdown and Kingdom Come both use CryEngine's voxel cone tracing. Control's signed distance field cone tracing, whatever that is specifically, runs on modern consoles, looks great, and is blazing fast.
With large caveats and limitations. It does not function as robustly as what can be done with standard ray traversal. The fact that engines are moving towards ray tracing without ray tracing hardware wasn't a sign that they didn't need ray tracing acceleration. It was the signal that they should start making it. You're looking at it backwards. Developers see ray tracing as being the tool they need to continue to push the envelope further on the graphics side - the graphics vendors, MS, and the industry worked collaboratively to bring RT forward.

But now because the great dictator Microsoft has decreed it, many engines have abandoned fast, programmable tracing to concentrate solely on BVH triangle tracing using specialized hardware that costs consumers extra money, gives programmers a narrow corridor to achieve anything, and is frankly quite slow for what it actually achieves.
Once again, DXR is _only_ a ray casting and intersection API. It does not require the hardware to use a BVH or anything of that sort. How intersection is handled is entirely up to the vendor, and vendors are free to compete on their R&D into improving the speed at which intersections are found, whether the rays are coherent or not.
 
Nvidia chose to go with BVH structure acceleration. It doesn't have to be done that way; handling ray intersection with that method is entirely Nvidia's choice. DXR is a generic API, an extension of DX12, for the purpose of casting rays, _any_ rays, calling on drivers to determine intersection and allowing a shader to be run against the intersected triangles. DXR is nothing more than a unified API for ray casting and intersection; it's purposefully designed to be flexible, which is why it's not locked to only shooting light rays. The type of ray cast is entirely up to the developer. It can be used for AI vision, audio, etc.

Certainly DXR is flexible from a "trace rays" point of view. But no concept of tracing other shapes is included, and it takes a somewhat fixed view of how the acceleration structure can be built and used to begin with. And what I'm worried about is narrow, ray-only hardware showing up in Xbox Scarlett, discounting that well-designed hardware could potentially accelerate any number of other traversal mechanics at the trade-off of some speed. Recursive splitting of voxel cone tracing, for example, could make it fast enough to use if it had good hardware support. But if hardware is designed only to accelerate DXR-style tracing of rays into an acceleration tree structure, where the only data available is at the "bottom" of the tree, then that's not going to happen.
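To illustrate what I mean by data only being at the bottom, here's a toy acceleration-structure layout in plain C++. It's my own, for illustration only, not NVIDIA's hardware format or DXR's actual structures: interior nodes hold nothing but bounds and child links, so a traversal can't read anything useful until it reaches a leaf.

```cpp
#include <cstdint>
#include <vector>

// Toy BVH layout (made up for illustration).
struct AABB { float min[3], max[3]; };

struct Node {
    AABB    bounds;
    int32_t left  = -1;                  // index of left child, -1 if leaf
    int32_t right = -1;                  // index of right child, -1 if leaf
    uint32_t firstTri = 0, triCount = 0; // leaf only: range in triIndices
    bool isLeaf() const { return left < 0 && right < 0; }
};

struct BVH {
    std::vector<Node>     nodes;         // nodes[0] is the root
    std::vector<uint32_t> triIndices;    // indices into the mesh's triangle list
};

// Traversal skeleton: no user data is exposed at interior nodes, only at leaves.
// Schemes like voxel cone tracing want usable, prefiltered data at the interior
// levels too, which is exactly what this layout does not provide.
template <class LeafVisitor>
void walk(const BVH& bvh, int32_t nodeIdx, LeafVisitor visit) {
    const Node& n = bvh.nodes[nodeIdx];
    if (n.isLeaf()) {
        for (uint32_t i = 0; i < n.triCount; ++i)
            visit(bvh.triIndices[n.firstTri + i]);
        return;
    }
    walk(bvh, n.left, visit);            // a real traversal would cull children
    walk(bvh, n.right, visit);           // with ray/AABB tests instead of visiting all
}
```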

These are made in tandem with the vendors. It's funny you mention geometry shaders and the need to be flexible. GS are entirely flexible and programmable, but they are too slow to be worth using, which is why they were ignored. The trade-off for flexibility is, and always will be, speed. We started at fixed function, and over time, as silicon got more powerful, we moved to flexible compute. I think romanticizing the idea that we could have started GPU processing where we are today is laughable. That's like saying we should never have bothered with coal, oil, nuclear, and renewables, and should have figured out how to build fusion from the get-go.

Geometry shaders were an example of Microsoft adding a new pipeline stage that nobody used, which was my point about how badly building to a spec can go. And as one example, the PS2's vector unit pipeline was already better; that's an argument against the concept from before geometry shaders even existed. Hardware vendors have actually merged a lot of these Microsoft-defined stages into the same hardware pipeline themselves, after realizing that building something which mirrored MS's spec was too slow. I'm not saying the vendors weren't there with Microsoft going down the wrong road, developing the bad spec in tandem. I'm saying there's a wrong road to go down, and Microsoft is already a little off track.

With large caveats and limitations. It does not function as robustly as what can be done with standard ray traversal. The fact that engines are moving towards ray tracing without ray tracing hardware wasn't a sign that they didn't need ray tracing acceleration. It was the signal that they should start making it. You're looking at it backwards. Developers see ray tracing as being the tool they need to continue to push the envelope further on the graphics side - the graphics vendors, MS, and the industry worked collaboratively to bring RT forward.

What caveat? What limitation? You can do anything you want in compute shaders, trace anything you want. I'm arguing for that same flexibility to be transferred as much as possible to hardware. This demo was done in 4K on a Vega 56, with a voxel structure that's not really DXR-like. DXR specifies the "bottom level" as the place where the data other than the acceleration structure lives, and assumes that data consists of certain geometry primitives.
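As a concrete example of the kind of non-triangle tracing compute can do, here's a simplified cone-traced occlusion march written as plain C++. Everything in it is made up for illustration, and sampleOcclusion() stands in for reading a prefiltered voxel volume; it's not code from Control or any shipping engine.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Placeholder: a real version samples a mipmapped 3D texture at the given LOD.
static float sampleOcclusion(Vec3 /*pos*/, float /*mipLevel*/) {
    return 0.1f;
}

// March a cone: as distance grows, the cone gets wider, so we read coarser
// mip levels and take bigger steps. Front-to-back accumulation; none of this
// maps onto a leaf-only triangle acceleration structure.
float coneTraceOcclusion(Vec3 origin, Vec3 dir, float tanHalfAngle,
                         float voxelSize, float maxT) {
    float occlusion = 0.0f;
    float t = voxelSize;                       // start one voxel out to avoid self-occlusion
    while (occlusion < 0.99f && t < maxT) {
        float diameter = std::max(voxelSize, 2.0f * t * tanHalfAngle);
        float mip      = std::log2(diameter / voxelSize);
        Vec3  pos      = { origin.x + dir.x * t,
                           origin.y + dir.y * t,
                           origin.z + dir.z * t };
        float a = sampleOcclusion(pos, mip);
        occlusion += (1.0f - occlusion) * a;   // front-to-back compositing
        t += diameter * 0.5f;                  // step proportional to cone width
    }
    return occlusion;
}
```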

Once again, DXR is _only_ a ray casting and intersection API. It does not require the hardware to use a BVH or anything of that sort. How intersection is handled is entirely up to the vendor, and vendors are free to compete on their R&D into improving the speed at which intersections are found, whether the rays are coherent or not.

I was too harsh on Microsoft explicitly. But, as just one example of "building to the spec": an already-proposed update to DXR is to allow beam tracing, an odd custom shape trace that sweeps a pyramid-like shape and just adds all hit polys to a render queue, something DXR simply has no concept of at the moment. If the spec as-is is followed too closely, fast beam tracing wouldn't be possible. The point wasn't really to be harsh on MS over DXR, but to provide examples of how mirroring the current spec can limit speed, function, and cost. I'd love to see hardware that can accelerate screen-space tracing so ray independence doesn't mean sample counts get expensive for SSAO/SSR, or hardware that can accelerate cubemap tracing so isotropic cubemap reflections can be done, etc. If DXR is followed as the guide to how raytracing should be accelerated, then the hardware wouldn't make any of these faster than they are today.
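For anyone wondering what a beam trace even looks like, here's a hypothetical sketch: a pyramid defined by four side planes, and a conservative test that collects every triangle the beam might touch into a list. The names and the test are mine, not from any actual DXR proposal.

```cpp
#include <cstddef>
#include <vector>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };        // points with dot(n, p) + d < 0 lie outside
struct Beam  { Plane sides[4]; };         // the pyramid's four side planes
struct Tri   { Vec3 v[3]; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Reject a triangle only if all three vertices are outside the same side plane.
// Conservative: never misses a touched triangle, may keep a few extras.
static bool beamMightHit(const Beam& beam, const Tri& tri) {
    for (const Plane& pl : beam.sides) {
        int outside = 0;
        for (const Vec3& v : tri.v)
            if (dot(pl.n, v) + pl.d < 0.0f) ++outside;
        if (outside == 3) return false;
    }
    return true;
}

// The "render queue": indices of every triangle the beam touches.
std::vector<std::size_t> beamTrace(const Beam& beam, const std::vector<Tri>& scene) {
    std::vector<std::size_t> hits;
    for (std::size_t i = 0; i < scene.size(); ++i)
        if (beamMightHit(beam, scene[i])) hits.push_back(i);
    return hits;
}
```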
 
Nvidia chose to go with BVH structure acceleration. It doesn't have to be done that way; handling ray intersection with that method is entirely Nvidia's choice. DXR is a generic API, an extension of DX12, for the purpose of casting rays, _any_ rays, calling on drivers to determine intersection and allowing a shader to be run against the intersected triangles. DXR is nothing more than a unified API for ray casting and intersection; it's purposefully designed to be flexible, which is why it's not locked to only shooting light rays. The type of ray cast is entirely up to the developer. It can be used for AI vision, audio, etc.

These are made in tandem with the vendors. It's funny you mention geometry shaders and the need to be flexible. GS are entirely flexible and programmable, but they are too slow to be worth using, which is why they were ignored. The trade-off for flexibility is, and always will be, speed. We started at fixed function, and over time, as silicon got more powerful, we moved to flexible compute. I think romanticizing the idea that we could have started GPU processing where we are today is laughable. That's like saying we should never have bothered with coal, oil, nuclear, and renewables, and should have figured out how to build fusion from the get-go.


With large caveats and limitations. It does not function as robustly as what can be done with standard ray traversal. The fact that engines are moving towards ray tracing without ray tracing hardware wasn't a sign that they didn't need ray tracing acceleration. It was the signal that they should start making it. You're looking at it backwards. Developers see ray tracing as being the tool they need to continue to push the envelope further on the graphics side - the graphics vendors, MS, and the industry worked collaboratively to bring RT forward.


Once again, DXR is _only_ a ray casting and intersection API. It does not require the hardware to use a BVH or anything of that sort. How intersection is handled is entirely up to the vendor, and vendors are free to compete on their R&D into improving the speed at which intersections are found, whether the rays are coherent or not.

Geometry shaders were slow because their output was serialized (Intel's implementation aside), and the tessellator was not flexible enough to be very useful... From what devs say, the DirectX geometry pipeline wasn't very good until mesh shaders... There were some mistakes, but the important thing is to get rid of them, and now the DX geometry pipeline is back on track...
 
It matters because if the technology cannot be deployed across different tiers of the market, it will remain a niche technology. Nothing inherently wrong with that - but it is of limited interest.
In your discussion about RTRT, the underlying theme is that technologies will move from the very expensive to become ubiquitous. My counterargument to that is that the technological foundation for assuming such a migration going forward is not solid. And that the motives for doing so at all are lacking from a consumer point of view. There haven't been consumers on the barricades demanding "arguably better shadowing at high expense". What we have seen is quite a bit of disgruntlement that price/performance development has slowed a lot. People want better performance, cheaper.

I disagree. The foundation is quite solid since that's exactly what has happened for the last 30 years. People want better performance cheaper and that's exactly what we've gotten. What was considered cutting edge performance 10 years ago is now available on an IGP. I'm not sure why you think RT will be any different to all the other technologies that started high-end and trickled down over time. That's literally how the entire technology industry works whether it's computers, cars, phones or TVs.

Trying to refine this into a clear question I'd make this attempt: Does it make sense for consumers to trade transistors away from the mainstream path of today to dedicated RTRT hardware?
And my answer to that question, at this point in time, is no.
If the efficiency picture changes drastically, so would my answer.

I suspect your answer will change soon as RT efficiency can only increase from here. We've only seen the first iteration after all.

In terms of the best place to spend transistor budget I think RT makes as much sense as anything else. We throw away a lot of performance on things that have less impact and value than RT. E.g. imperceptible ultra settings, 4k resolution. So I say hell yeah let's try something that actually makes a difference.
 
I disagree. The foundation is quite solid since that's exactly what has happened for the last 30 years. People want better performance cheaper and that's exactly what we've gotten. What was considered cutting edge performance 10 years ago is now available on an IGP. I'm not sure why you think RT will be any different to all the other technologies that started high-end and trickled down over time. That's literally how the entire technology industry works whether it's computers, cars, phones or TVs.



I suspect your answer will change soon as RT efficiency can only increase from here. We've only seen the first iteration after all.

In terms of the best place to spend transistor budget I think RT makes as much sense as anything else. We throw away a lot of performance on things that have less impact and value than RT. E.g. imperceptible ultra settings, 4k resolution. So I say hell yeah let's try something that actually makes a difference.

For the last 30 years we have had lithography advancements to trickle that tech down. There isn't much more of that left.
 