Next gen lighting technologies - voxelised, traced, and everything else *spawn*

None besides a DX12-class GPU & native driver support. The fallback layer allowed DXR to be run on DX12 GPUs which didn't have native driver support for it (everything besides Volta & Turing, basically).
Ah OK, thanks for the clarification. Makes sense, and I can understand why they'd get rid of the fallback then.
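For reference, the way an application tells whether the installed driver exposes native DXR (as opposed to needing the now-retired fallback layer) is a plain D3D12 feature query. A minimal sketch, assuming you already have an ID3D12Device; the helper name is mine:

Code:
#include <d3d12.h>

// Returns true if the driver reports native DXR support (tier 1.0 or higher).
// On GPUs/drivers without this, DXR only ever ran through the fallback layer.
bool SupportsNativeDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}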
 
No, I didn't. I mentioned exactly the Titan V and the GeForce RTX 2080 Ti (not any other variant of Turing).
Volta (without RT cores) offers the same (or even higher) performance as Turing (with RT cores)
Even the source you quoted disagrees with that statement.

You have linked a comparison of OCed Titan RTX to Titan V,
Both cards were OC'ed; the Titan V actually has the advantage when OCing because its base clock is much lower than the Titan RTX's. Both were OC'ed to 2.0 GHz. And those were the only comparisons available between a Titan V and Turing for a long time. I didn't know about pc-better until you linked it.
 
Here are some more goodies that corroborate what I've been saying for more than 6 months: the RTX branding (or whatever you want to call it) is total bollocks. Every feature under that umbrella (OptiX, RT, denoising, DLSS, etc.) works on Nvidia Maxwell, Pascal, Volta & Turing GPUs. It's just that if the GPU has Tensor Cores then those will be used for the specified task, and the same goes for RT Cores. Same with DXR... but unlike its proprietary stuff (OptiX/CUDA), Nvidia obviously won't enable it on Maxwell & Pascal, for obvious marketing reasons.

Summary:

Nvidia OptiX 6 Change log says: "RTX acceleration is supported on Maxwell and newer GPUs but require Turing GPUs for RT Core acceleration"

Nvidia's answer: Yeah, it's confusing... but here's how it works:

The release notes for OptiX 6.0.0 reads as follows:

"RTX acceleration is supported on Maxwell and newer GPUs but require Turing GPUs for RT Core acceleration"

I'm not sure I understand the part about "Maxwell and newer GPUs". I thought RTX acceleration wasn't available on GPUs older than Turing.

Yeah, that's formulated confusingly.

What it means is the new "RTX execution strategy" in OptiX, which is just a name for the new core OptiX 6.0.0 implementation. It's in contrast to the "mega-kernel execution strategy" used so far in all previous OptiX versions.

That new RTX execution strategy allows the use of the RT cores inside Turing RTX GPUs for BVH traversal and triangle intersection.
On GPUs without RT cores the bulk of that new code path will still be used for shader compilation, acceleration structure builds, multi-GPU support, etc., but the BVH traversal and triangle intersection routines run on the streaming multiprocessors as before.

To invoke the hardware triangle intersection routine or a fast built-in routine on boards without RT cores, you'd need to change your OptiX application to use the new GeometryTriangles and attribute programs.
GeometryTriangles have neither a bounding box nor an intersection program, but they have a new attribute program which lets you fill in your developer-defined attribute variables, so that you can use the same any-hit and closest-hit programs from custom primitives in Geometry nodes as well as from GeometryTriangles.

Unfortunately the OptiX Programming Guide is slightly behind in explaining that. The online version will hopefully be updated accordingly soon.
http://raytracing-docs.nvidia.com/optix/index.html
The OptiX API Reference document and of course the headers contain the necessary function explanations and the optixGeometryTriangles example demonstrates the new GeometryTriangles and attribute program usage.

https://devtalk.nvidia.com/default/...is-supported-on-maxwell-and-newer-gpus-quot-/
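To make the quoted answer a bit more concrete, here is a minimal sketch of the two pieces it talks about, written from memory of the OptiX 6.0 headers and the optixGeometryTriangles sample; the variable and program names are mine and details may be off, so treat it as illustrative rather than authoritative:

Code:
#include <optix.h>  // host C API; the device part below lives in a separate .cu file

// Host side: the RTX execution strategy is the default in OptiX 6.0, but it can be
// toggled globally (e.g. to fall back to the old mega-kernel path for comparison).
int rtx_on = 1;
rtGlobalSetAttribute(RT_GLOBAL_ATTRIBUTE_ENABLE_RTX, sizeof(rtx_on), &rtx_on);

// Device side: a GeometryTriangles node has no bounds or intersection program,
// only an attribute program that fills in developer-defined attribute variables,
// so the same any-hit/closest-hit programs work for both custom Geometry nodes
// and GeometryTriangles.
rtDeclareVariable(float2, my_barycentrics, attribute my_barycentrics, );

RT_PROGRAM void triangle_attributes()
{
    my_barycentrics = rtGetTriangleBarycentrics();
}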
 
... to remember:

Maxwell is not good for DXR (though it works by 'emulation' of course, which is now discontinued; the 1080 Ti had 10 fps in the Star Wars demo at native 4K, IIRC).

Volta is a lot better, because it has improved compute scheduling, likely including the ability to launch compute shaders directly from compute shaders, without the need for a static command buffer generated on the CPU.
People call this 'device-side enqueue', or 'dynamic dispatch', or whatever. NV has talked about 'work generation shaders', eventually to be exposed.
This is the final bit we need exposed to unleash the full power of GPUs. We have been asking for it for years. Other APIs like OpenCL 2.0 (and Mantle AFAIK; I don't know anything about CUDA) have had it for a long time already. But not the game APIs.
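For what it's worth, CUDA has had exactly this on the compute side since Kepler, under the name 'dynamic parallelism': a kernel can launch further kernels itself, with no CPU-built command buffer in between. A minimal sketch with my own toy kernels (build with nvcc -rdc=true -arch=sm_35 or newer):

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Child kernel: the follow-up work whose size was only known on the GPU.
__global__ void child(int* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2;
}

// Parent kernel: decides how much follow-up work is needed and launches it
// directly from the GPU, with no round trip to the CPU and no pre-recorded dispatch.
__global__ void parent(int* data, int n)
{
    if (threadIdx.x == 0 && blockIdx.x == 0) {
        int blocks = (n + 255) / 256;
        child<<<blocks, 256>>>(data, n);
    }
}

int main()
{
    const int n = 1 << 20;
    int* data = nullptr;
    cudaMalloc(&data, n * sizeof(int));
    cudaMemset(data, 1, n * sizeof(int));
    parent<<<1, 32>>>(data, n);
    cudaDeviceSynchronize();
    cudaFree(data);
    printf("done\n");
    return 0;
}

Whether the 'work generation shaders' NV has hinted at for graphics end up looking anything like this is anyone's guess, but it shows the concept the post is asking for.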

Personal opinion: very likely, if you tailor your RT algorithm to specific needs and do optimizations impossible with DXR, that is enough and better than the restrictive FF RT cores.
But I'm not alone, even if I felt so all the time!
See those top gfx coders criticizing RTX with the same arguments, like 'no LOD, does not scale, black-boxed, let us implement it ourselves instead': ... I found this just by coincidence here; @chris1515 had posted it in an older thread about dynamic render resolution.
My hope is that AMD does the right thing here.

Finally, there is Turing, which adds RT cores to the above.
 
Same results as the RTX Quake 2? 30 fps on a 960 is not bad. The 680 being much slower though? It's faster in most games otherwise.

Windows executable in YT link


They already had RT in 2017 :)
 
Personal opinion: very likely, if you tailor your RT algorithm to specific needs and do optimizations impossible with DXR, that is enough and better than the restrictive FF RT cores.
I get your need for advocacy, but without seeing the documentation I think you're entirely off base with your claims. The industry very much needs to walk before running. Ray tracing has been under research since the 1980s, and a great many people continue to work on how best to speed it up; everything has pointed towards acceleration data structures like the BVH for ray-triangle intersection as being the most beneficial for parallelization. It's something I know the whole industry can at least agree to do together before moving along, and MS is at least responsible for getting everyone onto the same page at the same time.

Give FF hardware a chance to provide the speedups to make a difference before you dive into a flexible pipeline that could very well be so slow that it is no longer a feasible feature. We didn't get to flexible pipelines in rasterization stages until much later, so I don't see the rush to jump to the end game today.
 
I think the counterargument to that fair suggestion is that ray-tracing is actually bogged down in legacy thinking. It's a visualisation concept first described in 1968, and implemented in 1979. Computation options were limited. Data storage was incredibly limited. The very notion of parallelisable workloads didn't exist - processors were single threaded rather than thousands of integrated cores.

Visualisation can take a step back from all the old ways of doing things, like representing everything as triangles, and see what other options are available, exploring the new paradigms presented by multicore hardware and vast quantities of fast storage. These developments are only years old, not decades old, and the argument would be to keep working on new ideas rather than trying to perfect old ideas. That doesn't mean the new ideas won't gravitate towards the old concepts, but it means they aren't tied to them. The moment the hardware prescribes a way of doing something, R&D for the following years/decades ends up being tied to that. What if SDF for graphics had appeared in 1982...would we be looking at whole games and hardware and tool solutions directed towards that, with decades of research solving the limitations of SDF, while someone starts exploring 'representing models as triangle meshes' and writing their own triangle modeller because none exists in a world of Maya SDF etc?

Offering the most flexible solutions, even if not the most performant, will provide the best opportunities for new, ideal paradigms to develop, which the hardware can then be tailored towards. Offering a fixed approach to a particular solution will instead get better performance for that solution now, but constrict the options being explored for years to come - at least if history and common sense are to be followed. It'll take devs to eschew the HW options, such as MM ignoring the rasterising portions of the GPU while exploring their splatter, to explore other options, which is counter-intuitive. You need to get a game out there looking good; the hardware provides XYZ features to do that; let's use those features and create an engine to run well using the hardware provided.
 
Hello, try gl_pt_enable in console.
Thanks! Unfortunately I have grid-like glitches over the screen. But it's very fast at 1080p. Seems like 60 fps. Setting bounces to 4, it goes down to maybe 15 fps. (I did not find how to show FPS or ms; GPU is a Fury X.)

I get your need for advocacy, but without seeing the documentation
I see there is no more need for advocacy, with people like Karis, MJP, Goldberg and Sebbie sharing my thoughts. Maybe they should have asked such people as well before presenting a new API and hardware as a big (rushed and jumped) surprise?
I do not criticize before reading the API documentation, if that's what you mean. I read it long before registering here. And I don't think new hardware needs me to give it a chance; it just forces me to do so. A historical overview of how RT works is unrelated to the given critique.
 
I think the counterargument to that fair suggestion is that ray-tracing is actually bogged down in legacy thinking. It's a visualisation concept first described in 1968, and implemented in 1979. Computation options were limited. Data storage was incredibly limited. The very notion of parallelisable workloads didn't exist - processors were single threaded rather than thousands of integrated cores.

Visualisation can take a step back from all the old ways of doing things, like representing everything as triangles, and see what other options are available, exploring the new paradigms presented by multicore hardware and vast quantities of fast storage. These developments are only years old, not decades old, and the argument would be to keep working on new ideas rather than trying to perfect old ideas. That doesn't mean the new ideas won't gravitate towards the old concepts, but it means they aren't tied to them. The moment the hardware prescribes a way of doing something, R&D for the following years/decades ends up being tied to that. What if SDF for graphics had appeared in 1982...would we be looking at whole games and hardware and tool solutions directed towards that, with decades of research solving the limitations of SDF, while someone starts exploring 'representing models as triangle meshes' and writing their own triangle modeller because none exists in a world of Maya SDF etc?

Offering the most flexible solutions, even if not the most performant, will provide the best opportunities for new, ideal paradigms to develop, which the hardware can then be tailored towards. Offering a fixed approach to a particular solution will instead get better performance for that solution now, but constrict the options being explored for years to come - at least if history and common sense are to be followed. It'll take devs to eschew the HW options, such as MM ignoring the rasterising portions of the GPU while exploring their splatter, to explore other options, which is counter-intuitive. You need to get a game out there looking good; the hardware provides XYZ features to do that; let's use those features and create an engine to run well using the hardware provided.
We had software rendering before that was entirely flexible. It stood no chance against 3D accelerators.
RT cores are hardware acceleration of an acceleration data structure. This should not be confused with an entire pipeline. The emulation layer of DXR asks developers to create their own data structure to point to for ray debugging. As far as we know, the entire concept of 'RT cores' is just a marketing term for a black box whose sole purpose is to accelerate updating the BVH tree.
 
I think the counterargument to that fair suggestion is that ray-tracing is actually bogged down in legacy thinking. It's a visualisation concept first described in 1968, and implemented in 1979. Computation options were limited. Data storage was incredibly limited. The very notion of parallelisable workloads didn't exist - processors were single threaded rather than thousands of integrated cores.

Visualisation can take a step back from all the old ways of doing things, like representing everything as triangles, and see what other options are available, exploring the new paradigms presented by multicore hardware and vast quantities of fast storage. These developments are only years old, not decades old, and the argument would be to keep working on new ideas rather than trying to perfect old ideas. That doesn't mean the new ideas won't gravitate towards the old concepts, but it means they aren't tied to them. The moment the hardware prescribes a way of doing something, R&D for the following years/decades ends up being tied to that. What if SDF for graphics had appeared in 1982...would we be looking at whole games and hardware and tool solutions directed towards that, with decades of research solving the limitations of SDF, while someone starts exploring 'representing models as triangle meshes' and writing their own triangle modeller because none exists in a world of Maya SDF etc?

Offering the most flexible solutions, even if not the most performant, will provide the best opportunities for new, ideal paradigms to develop, which the hardware can then be tailored towards. Offering a fixed approach to a particular solution will instead get better performance for that solution now, but constrict the options being explored for years to come - at least if history and common sense are to be followed. It'll take devs to eschew the HW options, such as MM ignoring the rasterising portions of the GPU while exploring their splatter, to explore other options, which is counter-intuitive. You need to get a game out there looking good; the hardware provides XYZ features to do that; let's use those features and create an engine to run well using the hardware provided.
Damn Shifty, sometimes you really know how to summarize and articulate pages' worth of blabber into one clear and comprehensive post. I would guess you approximated JoeJ's fears very well here, and I share the same fears, but I'm not as invested or affected by it all as he is. I'm just watching it all from afar as an enthusiast, after all.
 
We had software rendering before that was entirely flexible. It stood no chance against 3D accelerators.
But now we have accelerators so flexible and fast that we can even implement our own rendering; see Dreams, Claybook, etc. And we can also make self-driving cars, transfer money, and more. Things that were not thought about when accelerating the rasterization of pretty triangles.
Why would we need FF again just to intersect some triangles with some rays?

purpose is to accelerate updating the BVH tree
They say RT cores do BVH traversal with rays, not updating it. (So we can assume the update is a compute job.)
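For anyone wondering what 'traversal with rays' actually covers, this is roughly the loop the RT cores replace. A minimal software sketch with my own made-up node layout; NVIDIA's real BVH format and traversal hardware are a black box, so this is purely illustrative:

Code:
#include <cuda_runtime.h>

struct AABB { float3 lo, hi; };
struct Node { AABB box; int left, right, firstTri, triCount; };  // leaf if triCount > 0
struct Ray  { float3 o, d; float tmax; };

__device__ float3 sub(float3 a, float3 b) { return make_float3(a.x - b.x, a.y - b.y, a.z - b.z); }
__device__ float3 crs(float3 a, float3 b) { return make_float3(a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x); }
__device__ float  dt (float3 a, float3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Ray/box slab test: one of the two operations the RT cores evaluate in fixed function.
__device__ bool hitAABB(const AABB& b, const Ray& r, float tMax)
{
    float3 t0 = make_float3((b.lo.x - r.o.x) / r.d.x, (b.lo.y - r.o.y) / r.d.y, (b.lo.z - r.o.z) / r.d.z);
    float3 t1 = make_float3((b.hi.x - r.o.x) / r.d.x, (b.hi.y - r.o.y) / r.d.y, (b.hi.z - r.o.z) / r.d.z);
    float tn = fmaxf(fmaxf(fminf(t0.x, t1.x), fminf(t0.y, t1.y)), fminf(t0.z, t1.z));
    float tf = fminf(fminf(fmaxf(t0.x, t1.x), fmaxf(t0.y, t1.y)), fminf(fmaxf(t0.z, t1.z), tMax));
    return tn <= tf && tf >= 0.0f;
}

// Möller-Trumbore ray/triangle test: the other fixed-function piece.
__device__ bool hitTri(float3 v0, float3 v1, float3 v2, const Ray& r, float& t)
{
    float3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    float3 p = crs(r.d, e2);
    float det = dt(e1, p);
    if (fabsf(det) < 1e-8f) return false;
    float inv = 1.0f / det;
    float3 s = sub(r.o, v0);
    float u = dt(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    float3 q = crs(s, e1);
    float v = dt(r.d, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dt(e2, q) * inv;
    return t > 0.0f;
}

// Stack-based BVH traversal: this whole loop is what "RT cores do traversal" refers to.
// Building or refitting the 'nodes' array (the update) is a separate job, presumably compute.
__device__ int traverse(const Node* nodes, const float3* verts, const int3* tris,
                        const Ray& ray, float& tHit)
{
    int stack[64];
    int sp = 0, best = -1;
    tHit = ray.tmax;
    stack[sp++] = 0;                                   // push the root node
    while (sp > 0) {
        const Node n = nodes[stack[--sp]];
        if (!hitAABB(n.box, ray, tHit)) continue;      // prune this subtree
        if (n.triCount > 0) {                          // leaf: test its triangles
            for (int i = 0; i < n.triCount; ++i) {
                int id = n.firstTri + i;
                float t;
                if (hitTri(verts[tris[id].x], verts[tris[id].y], verts[tris[id].z], ray, t) && t < tHit) {
                    tHit = t; best = id;
                }
            }
        } else {                                       // inner node: descend into both children
            stack[sp++] = n.left;
            stack[sp++] = n.right;
        }
    }
    return best;                                       // closest triangle index, or -1
}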

Damn Shifty, sometimes you really know how to summarize and articulate pages' worth of blabber into one clear and comprehensive post. I would guess you approximated JoeJ's fears very well here
Yes, Shifty Geezer has rhetorical talent!
But I do not fear RTX. It's not fast enough to beat me at global light transport, and it can do all the things I cannot. I always expected RT to come to games, and I am prepared. But I want it done efficiently. O(n log n) is not good enough for me; I could do better at the price of acceptable approximations. Also, I don't want to be forced to have multiple data structures for the same purpose, etc., etc...
 
Offering a fixed approach to a particular solution will instead get better performance for that solution now, but constrict the options being explored for years to come - at least if history and common sense are to be followed.

Agree, and I'm sure Nvidia's current RT hardware won't be the same in a few years. Looking at history, fixed-function hardware was there when we needed it; software was more flexible but too slow (like the PS2).

We had software rendering before that was entirely flexible. It stood no chance against 3D accelerators.
RT cores are hardware acceleration of an acceleration data structure. This should not be confused with an entire pipeline. The emulation layer of DXR asks developers to create their own data structure to point to for ray debugging. As far as we know, the entire concept of 'RT cores' is just a marketing term for a black box whose sole purpose is to accelerate updating the BVH tree.

Just like Shifty explained very well.

But now we have accelerators so flexible and fast that we can even implement our own rendering; see Dreams, Claybook, etc.

Dreams and Claybook aren't really ray tracing games, if that was the subject, at least :) There's probably nothing to fear, as in a few years we will most likely see more flexible solutions in Nvidia/AMD and hopefully Intel GPUs.
 
But now we have accelerators so flexible and fast that we can even implement our own rendering; see Dreams, Claybook, etc. And we can also make self-driving cars, transfer money, and more. Things that were not thought about when accelerating the rasterization of pretty triangles.
I can't help but feel there is way too much romance for flexible programming here.
1) ASIC miners vastly outperform GPUs on BTC; there's no comparison on cost-performance or performance-per-watt. It started as CPU mining, which quickly moved to GPU mining because it was more performant.

2) Self-driving cars and the technology underlying them have been in development since as early as the 1980s. Deep learning data science only happened with the cheaper cost of storage, the amount of data we capture, and CUDA, which was released because they found data scientists repurposing pixel values in rasterization for compute values. NNs didn't take off because they were flexible and great; it was because CPUs were too slow compared to more specialized technology at the same price point.
Today our strongest convolutional neural networks and deep learning are driven by fixed-function Tensor Cores. Even the Tegra X1s you find in self-driving cars use 16-bit floats to accelerate neural networks as best they can, and that was quickly trumped by 16-bit tensor cores.

The title of this thread has been "Ray-Tracing, meaningful performance metrics and alternatives"
I've not seen a lot of posts that show meaningful ray tracing performance metrics for the alternatives.
I'm looking at a game that has bolted-on RT running at 60 fps at 1080p. For something that has largely been considered unachievable, I'm having a hard time seeing how this metric is so quickly dismissed compared to the alternatives (SDF +), which I have yet to see in a AAA title running at the graphical and world complexity of BFV - and then to be told that flexible programming is what we need instead of this solution.

Because all this time on PC we've had compute since the release of DX11, and no one tried it then. No one tried it through Kepler, Maxwell, Pascal..., GCN 1-4. No one tried this. Some hardware acceleration finally comes along and an API is released to support RT development across multiple IHVs, and suddenly everyone has an 'aha moment' that flexible compute was the answer we needed all along to solve RT? (If anything, a focus on PowerVR as a hardware-accelerated alternative would be more in line with the discussion.)

It's both baffling and frustrating to read this. It is an unrealistic position held by those enamoured with the romance of discovering some 'magic' algorithm that hundreds of PhD and master's researchers have not found in the last 40 years.
 
Tss, no one ever mentions Caustic Graphics & PowerVR Wizard, as if it wasn't a hybrid TBDR/ray tracer whose silicon released a decade ago...
I tried, over and over again (I think I tried it here too, not 100% sure), but no one cared, because RTX ON and everything else sucks.
 