Next gen lighting technologies - voxelised, traced, and everything else *spawn*

Hybrid RT is a viable strategy for games to adopt should they wish to, especially if hardware can accelerate that process.

Agree, people like what is being done so far with RTX hardware, and things like the Quake 2 RT mod are very impressive indeed. We can't expect a fully ray traced BF5 on ultra yet; maybe the next RTX series can come closer to something like that. Maybe PS5 can do some (limited) RT too by that time.
 
Yeah, I didn't exactly post my opinion on it. But where I thought the hidden awesomeness lies is that this is just a mod. And it occurred to me that seeing a lot of these old games being bolted with RT isn't some sort of coincidence: it would appear that if the render pipeline is straightforward enough, RT can be put in with relative ease (as there are no hacks to work around), compared to what we saw with Battlefield and some of the new AAA titles that are having issues integrating RT quickly (other discussion thread).
Possibly not a coincidence if it's a case of history repeating.
(from 2007 article on Larrabee) https://arstechnica.com/gadgets/2007/04/clearing-up-the-confusion-over-intels-larrabee/2/
"Intel employee Daniel Pohl was on hand at last week's IDF to give demonstrations of a version of Quake 4 that uses real-time ray tracing running on Intel hardware."
I'm curious what factors weighed on choosing one version of Quake over another, possibly resourcing or licensing?
 
This doesn't make any sense whatsoever. Without baking or any of those "nasty" "static" solutions (including all other forms of ray tracing which aren't pure RT, right?) you wouldn't have any "interactive" game to play right about now. But then again, who am I to judge? "It just works!", right? There comes a time when some topics just become tiresome.
Static worlds are by definition less interactive than dynamic worlds.

I'm all for that, but some folks seem to be hellbent on "RTRT for everything! It's easier!" It is not, and it's just one more tool in the giant toolbox that's already fully packed with, sometimes, great solutions so you don't have to re-invent the wheel every time. If it was so simple... where the hell is the DXR update for Tomb Raider, which was released 5 months ago (and playable even before that)? I mean... it's just shadows... Anyways..
Oh, and my posting about QB was in response to the "you can't do great pre-baked interior GI... RTRT is needed" argument.
There's a talk scheduled for GDC about ray tracing in Tomb Raider. Maybe they'll release it by then.

Also, full path-tracing games are coming :p

 
That was debunked by proper benchmarks; the 2080Ti maintained a consistent 50% uplift over Titan V in Battlefield V.
Consistent 50% uplift? Where? I can't find such results. The "best" results for the GeForce RTX 2080 Ti I was able to find are 16.9% over Titan V at 1440p / low and 36.0% over Titan V at 1440p / high (pc-better.com).
 
I'm curious what factors weighed on choosing one version of Quake over another, possibly resourcing or licensing?
IIRC, Quake 2 has information about light positions in the level data, but Quake 3 does not. Not sure about 1 and 4.

Thankfully.. light probes don't have to be placed manually one by one:
I think the video is only about diffuse probes for dynamic stuff. Reflection probes still need manual work (but I'm no UE4 user, so I'm not sure).
 
It's a collection of numbers taken from several discussion threads, measured on variously OCed GPUs ("proper benchmarks"?), which don't even include the 2080 Ti's numbers. The last table is taken from pc-better.com, and these results aren't even close to a "consistent 50% uplift":
https://www.pc-better.com/wp-content/uploads/2019/01/DXRPerformance1440p.png
It's a 17-36% difference at playable settings.
Anyway, even these results indicate that the redesigned cache (Pascal -> Volta) boosts gaming ray tracing performance more than the addition of RT cores (Volta -> Turing). Not to mention that the RTX 2080 Ti has a significantly higher fillrate than Titan V plus a more efficient rasteriser. Without these differences the performance would be even closer. Maybe RT cores could make some difference in professional applications or in future games, but their contribution for Battlefield is quite questionable. That's why I'm not sure that AMD should follow this direction.
 
It's a collection of numbers taken from several discussion threads
No, they are taken from a single 3D Center forum thread, where the original claim was made that TitanV performs about the same without RT cores.

Maybe RT cores could make some difference in professional applications or in future games, but their contribution for Battlefield is quite questionable. That's why I'm not sure that AMD should follow this direction.
You said TitanV is faster or equal to Turing in Battlefield V. A statement even you now find inaccurate.

The more RT effects you add, the more the RT cores come into play; at Ultra DXR the difference is still large. And once more, we already have a game that shows 3X improvements due to RT cores (Quake 2 with path tracing). Not to mention the 3DMark Port Royal benchmark.

(chart: DXRPerformance1440p.png)


Using this graph, we can see that on Ultra, the Titan RTX is 45% faster at 4K, and on Low the Titan RTX is 28% faster. If you have been following what the developers have said about their optimizations with DXR, we know that Ultra is going to cast more rays and have more affected materials than Low. These additional rays and effects are going to trigger more use of the RT cores, and this is where we see the gap grow even further between the two. The more rays that get cast, the bigger the difference in performance. The performance difference these RT cores give is also going to scale more with resolution. DXR casts rays based on the number of pixels in your resolution. 'Radolov' over at YouTube was nice enough to dig up performance settings for DXR in Battlefield V.

https://www.pc-better.com/titan-v-dxr-analysis/
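As a rough back-of-the-envelope illustration of that scaling (the per-pixel ray fractions below are made-up placeholders, not the actual Battlefield V presets):

```cpp
#include <cstdio>

// Hypothetical illustration only: hybrid DXR reflections trace a number of rays
// roughly proportional to the pixel count, scaled by a per-quality "ray fraction".
// The fractions below are placeholders, NOT the real Battlefield V settings.
int main() {
    struct Quality { const char* name; double rayFraction; };
    const Quality presets[] = { { "Low", 0.15 }, { "Ultra", 0.40 } };  // assumed values

    struct Res { const char* name; long long pixels; };
    const Res resolutions[] = { { "1440p", 2560LL * 1440 }, { "4K", 3840LL * 2160 } };

    for (const Res& r : resolutions)
        for (const Quality& q : presets)
            std::printf("%-5s @ %-5s -> ~%lld reflection rays per frame\n",
                        q.name, r.name,
                        static_cast<long long>(r.pixels * q.rayFraction));
    return 0;
}
```

The point being: both raising the quality preset and raising the resolution grow the ray count, which is where the RT cores earn their keep.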
 
The more RT effects you add, the more the RT cores come into play; at Ultra DXR the difference is still large. And once more, we already have a game that shows 3X improvements due to RT cores (Quake 2 with path tracing). Not to mention the 3DMark Port Royal benchmark.
But as we are in the 'hybrid era' now, as they say, there is likely not enough RT core utilization possible in actual games to justify the RT cores.
I do not doubt RT cores can be faster (according to NV researcher Christoph Schied, up to a factor of 7 in some unspecified cases).
But I'm pretty sure I can match RTX performance by compute-tracing my sample hierarchy. This is possible because of dynamic LOD and alternative geometry, which the restricted RT cores do not allow, thereby killing off future research in this direction. And the future begins now, not in 5 years with RTX opening up and being extended.
The downside of my idea is that perfectly sharp reflections are not possible, as my sample hierarchy only goes down to the detail of lightmap texels, not the material textures RTX can capture. This limitation is very acceptable, even for reflections on smooth materials IMO.
The reason for my optimism is this: I already reach RTX performance if I compare my GI compute tracing results against the numbers given in Remedy's video, so Fury X vs. RTX 2080 Ti. (The comparison is really rough, because the algorithms have different time complexity, and I don't really do 'classical' ray tracing here. So I compare based on results in practice.)
See the Minecraft video as a practical example, which beats path-traced Quake 2 quality and does not use RTX, but does use ray tracing. Alternative geometry seems to be the reason here too.

Sorry for the repetition, still fighting for flexible software solutions... AMD should not sacrifice compute performance for anything. Likely they will, but there's still hope. From my perspective their hardware is pretty much perfectly suited to solving open lighting problems.
 
Alternative methods have their place but they're not without limitations. Minecraft's voxel RT works great because its worlds themselves are made out of cubes, it's very low detail, and it's slow-paced, so the lighting taking several frames to update is not very noticeable or important.

Also, compute is not going away so it's a bit dramatic to say RTX kills the research into alternative techniques.
 
Minecraft's voxel RT works great because its worlds themselves are made out of cubes
Which makes it useless, yes. But he did this in a short time. It took me many years to move those advantages to the irregular triangle geometry used in games and everywhere else.
IMO, we need to solve the LOD problem first, before we go for fixed-function hardware for RT. Offline rendering has no need to be fast, but we do. RTX as it is now does not scale.
NV has 10 years of raytracing experience, but they never worked in the field of geometry processing, AFAIK. Almost nobody sees how those fields are related.

Also, compute is not going away so it's a bit dramatic to say RTX kills the research into alternative techniques.
That's not true. Trying to beat FF HW with software is against the flow of progress. It's just stupid.
RTX is here, so I HAVE to use it, which is what I'll do. It has potential and can solve problems I cannot.
The reflection tracing I have in mind would be maybe a week of work. But if next-gen consoles announce RT support backed by hardware, it's a waste of time, so I'll wait for that. (It's still possible AMD does it better than NV, with a more uniform and flexible approach. They usually do.)
And second, any FF hardware takes chip area and so reduces compute performance. (Which does not worry me, but still.)

But I've said all this already earlier... you guys know what I think :)
 
It's just the BVH acceleration. Everything else is standard compute as we know it. DXR is meant to be run on any compute-based card; it's up to the vendors to support it through drivers, which is a business decision they're making, not a performance one.

You could have DXR supported on any compute-based GPU if the IHV allowed it; you create your own data structure for ray parallelization management (so skip the BVH acceleration if it bugs you that much), and you run the DXR commands for everything else. It would work across any vendor that supports it.
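For what it's worth, a minimal sketch of what such a "roll your own" traversal could look like in plain software (node layout, names and the CPU host code are illustrative assumptions, not DXR's or any vendor's structures):

```cpp
#include <algorithm>
#include <cfloat>
#include <utility>
#include <vector>

// Plain software BVH traversal sketch. This is the kind of loop the RT cores
// accelerate in fixed function, but nothing here needs more than ordinary
// compute (or, as written, the CPU).
struct Ray  { float ox, oy, oz, dx, dy, dz; };
struct AABB { float min[3], max[3]; };
struct Node {                      // interior node: left child is stored at index+1
    AABB box;
    int  rightChild;               // index of right child, or -1 for a leaf
    int  firstPrim, primCount;     // leaf: range into a primitive array
};

// Standard slab test: does the ray hit the box before the current closest hit?
static bool hitAABB(const Ray& r, const AABB& b, float tMax) {
    float t0 = 0.0f, t1 = tMax;
    const float o[3] = { r.ox, r.oy, r.oz }, d[3] = { r.dx, r.dy, r.dz };
    for (int a = 0; a < 3; ++a) {
        float inv   = 1.0f / d[a];
        float tNear = (b.min[a] - o[a]) * inv;
        float tFar  = (b.max[a] - o[a]) * inv;
        if (tNear > tFar) std::swap(tNear, tFar);
        t0 = std::max(t0, tNear);
        t1 = std::min(t1, tFar);
        if (t0 > t1) return false;
    }
    return true;
}

// Iterative traversal with an explicit stack, as one would do in a compute
// shader. intersectPrim() stands in for a ray/triangle test and returns the
// hit distance, or FLT_MAX on a miss.
float traceClosest(const std::vector<Node>& nodes, const Ray& ray,
                   float (*intersectPrim)(const Ray&, int)) {
    float tBest = FLT_MAX;
    int stack[64];                                     // fixed-size stack is enough for a sketch
    int stackSize = 0;
    stack[stackSize++] = 0;                            // start at the root
    while (stackSize > 0) {
        const int   idx = stack[--stackSize];
        const Node& n   = nodes[idx];
        if (!hitAABB(ray, n.box, tBest)) continue;     // prune: box lies behind the current best hit
        if (n.rightChild < 0) {                        // leaf: test its primitives
            for (int i = 0; i < n.primCount; ++i)
                tBest = std::min(tBest, intersectPrim(ray, n.firstPrim + i));
        } else {                                       // interior: push both children
            stack[stackSize++] = n.rightChild;
            stack[stackSize++] = idx + 1;              // left child stored right after its parent
        }
    }
    return tBest;                                      // FLT_MAX if nothing was hit
}
```

The RT cores essentially do the box tests and the stack management of this loop in hardware; the rest of a DXR workload (ray generation, hit shading, denoising) is regular shader code either way.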
 
Sure, but I hope for things like device-side enqueue, or dynamic dispatch (whatever you want to call it), to finally arrive. It's key to processing trees without dispatch bubbles, and also to interleaving shading and tracing.
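A crude sketch of that dispatch-bubble point (the calls below are made-up stand-ins, not any real D3D12/Vulkan/CUDA API):

```cpp
#include <cstdio>

// Illustration only: these stand-ins are NOT real GPU API calls.
static int  nodeCountAtLevel(int level)                     { return 1 << level; }
static void dispatchCompute(const char* kernel, int items)  { std::printf("dispatch %s x%d\n", kernel, items); }
static void waitForGpu()                                    { /* a fence/sync would go here */ }

// Without device-side enqueue, a tree gets processed one level per host
// dispatch, with a sync in between: each sync is a potential "dispatch
// bubble", and small levels can't fill the GPU anyway.
static void processTreeLevels(int treeDepth) {
    for (int level = 0; level < treeDepth; ++level) {
        dispatchCompute("processLevel", nodeCountAtLevel(level));
        waitForGpu();
    }
    // With device-side enqueue (dynamic dispatch), the kernel itself could
    // launch the next level's work and interleave shading with tracing,
    // avoiding the host round-trips entirely.
}

int main() { processTreeLevels(4); return 0; }
```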
I assume AMD does support DXR on the next Xbox, likely mostly in software, but they could have known earlier than we think, and Navi might already have some FF RT stuff.
Lisa Su said in an interview, they would work on both hardware and software regarding RT.

However, the DXR API in software can't be fast. You need to tailor things to your specific needs. This is what makes little sense to me here.
 
Offline rendering has no need to be fast, but we do.
That's not true at all! In production, realtime visualisation is incredibly important, and rendering out is all server costs and expense. The only difference is offline rendering will prioritise visual quality first and then want to speed it up, whereas realtime will favour performance first to get something that can run at 60 fps, and then improve quality.

This again points to RTX being better suited to, or designed with, pro imaging in mind. It's hardware apparently designed to accelerate 'perfect path tracing' more than funky, fudgy game-rendering solutions. That's where there perhaps could (and should?) be different solutions for gaming hardware versus professional hardware, but these GPUs are designed for both.
 
... nails it!
 
Perhaps there's no more non-hardware DXR coming.
The D3D12 Raytracing Fallback Layer is a library that emulates the DirectX Raytracing (DXR) API on devices without native driver/hardware support.

[Windows October 2018 Update] The Fallback Layer emulation runtime works with the final DXR API in Windows October 2018 update. The emulation feature proved useful during the experimental phase of DXR design. But as of the first shipping release of DXR, the plan is to stop maintaining this codebase. The cost/benefit is not justified and further, over time as more native DXR support comes online, the value of emulation will diminish further. That said, the code functionality that was implemented is left in place for any further external contributions or if a strong justification crops up to resurrect the feature. We welcome external pull requests on extending or improving the Fallback Layer codebase.


https://github.com/Microsoft/DirectX-Graphics-Samples/tree/master/Libraries/D3D12RaytracingFallback

Access to DXR documentation is gated.

Closest thing I could find to a tutorial.
http://cwyman.org/code/dxrTutors/tutors/Tutor4/tutorial04.md.html

and
https://github.com/Microsoft/DirectX-Graphics-Samples/tree/master/Samples/Desktop/D3D12Raytracing

The last commit was last year so AMD definitely is further along than expected.
AMD: current/v1.5 revision of the Fallback Layer is not supported on AMD cards and will fail to run. Temporarily, you can use previous v1.2 source code snapshot with v1.1 SDK overlay binary snapshot which work on AMD:
 
So what are the hardware requirements for DXR support then? BVH acceleration? Is a hardware de-noising solution required?
 
So what are the hardware requirements for DXR support then? BVH acceleration? Is a hardware de-noising solution required?
From what I've been reading, the IHV has to determine if it's supported or not.
In this case, Nvidia supports it on Turing and Volta.
It's also up to the IHV to support the fallback emulation, but it would appear that they only want to use the fallback emulation for developers (even that won't get updated), so we won't see that for consumer usage.
 
So what are the hardware requirements for DXR support then? BVH acceleration? Is a hardware de-noising solution required?
None besides a DX12-class GPU & native driver support. The fallback layer allowed DXR to be run on DX12 GPUs which didn't have native driver support for it (everything besides Volta & Turing, basically).
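For reference, whether the native path is exposed comes down to a simple feature check against the driver; a minimal sketch assuming an already-created ID3D12Device (error handling mostly omitted):

```cpp
#include <windows.h>
#include <d3d12.h>

// Minimal sketch: ask the runtime/driver whether native DXR is exposed.
// Assumes 'device' was already created with D3D12CreateDevice; real code
// would also act on the HRESULT rather than just returning false.
bool SupportsNativeDXR(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;  // pre-October-2018 runtime: no DXR at all
    // TIER_NOT_SUPPORTED means no native driver support, i.e. only the
    // (now unmaintained) fallback layer could emulate DXR on this GPU.
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```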
 
You said TitanV is faster or equal to Turing in Battlefield V.
No, I didn't. I mentioned exactly Titan V and GeForce RTX 2080 Ti (not any other variant of Turing).

A statement even you now find inaccurate.
Do I? I made a statement and quoted a source. I'm not forcing anybody to take it as a fact.
You made a statement that "the 2080Ti maintained a consistent 50% uplift over Titan V", but you still haven't quoted a single source supporting this statement. You have linked a comparison of an OCed Titan RTX to Titan V, and you have linked an article showing that the GeForce RTX 2080 Ti can be 17-36% faster under certain playable settings - that's fine. But I still lack any source that would support your claim of a consistent 50% uplift of the RTX 2080 Ti over Titan V.
 