Next gen lighting technologies - voxelised, traced, and everything else *spawn*

On a more technical point: there's always a chance that a specific implementation runs at sub-par performance on other IHVs' hardware when it's partly coded by a competing IHV (UE4's current DXR implementation, for instance). Which brings me to Metro Exodus. It will be really interesting to see if it ever works on non-NV hardware, and how it performs, once AMD & Intel finally have DXR support, given that some of the underlying RT code is, AFAIK, based on Nvidia's proprietary iRay.
That's to be expected though, if nVidia put in the investment. If you develop an engine to work on a particular architecture, and then an alternative comes along to try and run the same engine, you'd expect differences. The real concern is whether cross-platform engines end up favouring an architecture. If, for example, Unreal Engine's RT solution better favours nVidia hardware because nVidia have been involved in its development, that might result in a performance negative for other platforms. For the benefit of the art, we want solutions to really be platform agnostic, with it being the responsibility of the IHVs to develop optimal hardware solutions. At this juncture, a lot probably rests on what the next consoles bring as they'll define the RT target for the next few years of multiplat engines. Well, MS's side should be obvious as it's DXR, and on AMD hardware, so any DXR engine should, we hope, be as nice a fit for AMD's (XB) RT solution as any other. Sony is an unknown at this point. We really need any proprietary solution to be a good fit for what's happening in the PC space, and definitely not off on an independent tangent. Unless they're doing something mind-boggling.

Should also have commented on the paper! The separation of lighting and shadowing is exactly the sort of thing that offline renderers wouldn't consider because it clashes with their mandate for quality, and exactly the sort of thing we've been expecting game devs to toy around with to find better uses for RT techniques than just brute-forcing the image construction. Shadowing can be dealt with as a different task to lighting. Lighting solutions can focus on solving the temporal latency, shadowing can work on the most efficient tracing and denoising, and shading happens through the rasterisation of the polygons. Flexible solutions will allow us to combine and separate processes however best fits for optimal results. So, early results with RT performance won't be at all reflective of what games are achieving in a few years on the same hardware.
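To make the idea concrete (a toy numpy sketch of my own, with a box blur standing in for a real denoiser, not anything an engine actually ships): shade the analytic, unshadowed lighting as normal, treat the 1-sample-per-pixel traced shadow visibility as its own noisy signal, denoise that on its own, and only then multiply the two together.

[code]
# Toy sketch: analytic lighting and traced shadow visibility as separate signals.
# Assumes numpy only; the "denoiser" is just a box-blur stand-in.
import numpy as np

H, W = 256, 256
rng = np.random.default_rng(0)

# 1) Unshadowed direct lighting, e.g. a Lambert N.L term per pixel (rasterised/analytic).
n_dot_l = rng.uniform(0.2, 1.0, (H, W))            # placeholder shading buffer

# 2) Shadow visibility from ray tracing: 1 sample per pixel, so it's binary and noisy.
true_visibility = rng.uniform(0.0, 1.0, (H, W))     # placeholder "ground truth" coverage
shadow_1spp = (rng.uniform(0.0, 1.0, (H, W)) < true_visibility).astype(np.float32)

# 3) Denoise the shadow signal on its own (cheap box blur here).
def box_blur(img, r=4):
    pad = np.pad(img, r, mode='edge')
    acc = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += pad[r + dy : r + dy + img.shape[0], r + dx : r + dx + img.shape[1]]
    return acc / (2 * r + 1) ** 2

shadow_denoised = box_blur(shadow_1spp)

# 4) Combine: lighting stays sharp and analytic, shadows carry the traced signal.
final = n_dot_l * shadow_denoised
print(final.shape, final.min(), final.max())
[/code]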
 

Alternatively, increased R&D by game companies into the best uses of RT could lead to hardware changes by the IHVs that invalidate and/or render obsolete current hardware RT solutions, such that no one bothers to support them in a few years. As examples: early hardware T&L in PC GPUs, AMD's (ATi at the time) early work with tessellation, etc.

Now that NV have opened the door to real-time use of RT in games, the amount of R&D into RT by game developers and IHVs should ramp up rapidly. Which doesn't necessarily mean that the hardware that started it will be what gets used in the future.

I still think it's a crying shame that PowerVR wasn't present in PC hardware when they introduced RT accelerated hardware. Even more that they didn't re-enter the PC hardware market when they came out with their implementation. We could have seen this revolution many years earlier.

Regards,
SB
 
That's been one of the big debates, whether 'compute' will be a better solution. The argument against has been that compute hasn't enabled RTRT so far whereas RTX has, but the counter to that is there's a load of research still to do. I don't see that a GPU can be harmed in having ray-intersect acceleration though. It's a notable part of tracing and an order of magnitude acceleration of that, even if eventually just a small part of the overall pipeline, won't be wasted silicon. Eventually though, it might end up evolved out of the hardware.
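For anyone wondering what that ray-intersect work actually is, it boils down to tests like the one below (a Möller-Trumbore ray/triangle intersection, written in Python purely for readability; hardware obviously runs this in fixed function across BVH nodes and leaves, not one triangle at a time):

[code]
# Möller-Trumbore ray/triangle intersection: the per-triangle test that
# dedicated RT units accelerate (alongside the BVH/box tests that cull work).
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-8):
    """Return the hit distance t along the ray, or None on a miss."""
    e1 = v1 - v0
    e2 = v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                  # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det          # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det         # distance along the ray
    return t if t > eps else None

# Example: a ray straight down the z axis hitting a triangle at z = 5.
tri = [np.array([-1.0, -1.0, 5.0]), np.array([1.0, -1.0, 5.0]), np.array([0.0, 1.0, 5.0])]
print(ray_triangle_intersect(np.zeros(3), np.array([0.0, 0.0, 1.0]), *tri))  # ~5.0
[/code]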
 
In 2-3 years time we should have a good idea of where RT is headed and how hardware IHVs have decided to tackle it. Right now, it's way too early to say one way or another how things will be handled.

Some people on both sides of the debate seem far too certain that it will go one way or another.

The only thing we can say with any certainty right now is that RT is probably here to stay. And we can thank NV on the PC side of things for getting the ball rolling in a significant way WRT gaming.

Regards,
SB
 
* Edit - correction. The paper covers a full collaboration between Unity, Lucasfilm, and nVidia. There's a website on the technique which is far more relevant and interesting than which companies are involved!
https://eheitzresearch.wordpress.com/705-2/
Thank you, this is my original point: unlike what Ike implied, NVIDIA's contribution was far more significant than a mere marketing stunt or logo-slap job. Unity was crystal clear about this in their blog, which I quoted in the first paragraph of my previous post.

Everything else you posted (paragraphs 3 to 5) is just the benefits of raytracing.
Of course, though I felt the urge to include these bits in response to the completely skewed summary posted above, which would make it seem like Unity weren't really getting any benefit out of ray tracing, when in fact it was the opposite.
 
Fly-by quotes don't explain anything. I genuinely don't understand people who post quotes or video links and then expect people to get the message! You posted zero explanation, so it's no wonder your point was lost.

In schools, students are taught to phrase their argument as point, evidence, analysis. Please, please, please everyone, make sure your point is front and centre so everyone knows what's being discussed and how to interpret the following evidence! ;)
 
DD2019 presentation about Unity's early implementation of DXR:
https://auzaiffe.files.wordpress.co...ay-tracing-hardware-acceleration-in-unity.pdf

Super interesting insight on clever optimizations. Also interesting takeaways (Spoiler: RT isn't a miracle solution):



Another anecdotal bit is that there's not a single mention of Nvidia or acknowledgement of work/help from Nvidia (even though the official PR line at the time was that this was done in partnership with Nvidia... as usual, never trust PR mumbo jumbo). Unlike UE4's DXR/RTX implementation, which is partly based on code directly written by Nvidia, Unity's current DXR implementation seems to be done "in-house". Nvidia's only "contribution" is Morgan McGuire's work on this paper from last year alongside Unity & Lucasfilm (which is the stochastic shadows solution used in the Unity DXR path, but it can also be done in the regular HDRP raster path without DXR): https://eheitzresearch.wordpress.com/705-2/
They're not doing hybrid screen/world space ray tracing yet so that's another performance boost we can expect at some point.
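For anyone curious what that hybrid looks like, the control flow is roughly the sketch below (a toy 1D illustration with made-up names, not Unity's code): march the ray against the existing depth buffer first, and only fall back to the expensive world-space/BVH trace when the screen-space march runs off screen or ends up behind geometry the depth buffer can't resolve.

[code]
# Toy hybrid tracing sketch: try a cheap screen-space march against the depth
# buffer first, and fall back to a (stubbed) world-space BVH trace on a miss.
# All names and numbers here are made up for illustration.
import numpy as np

depth_buffer = 2.0 + 0.2 * np.arange(64)   # 1D "screen": a floor receding with x

def world_space_trace(x, z, dx, dz):
    # Stand-in for the expensive world-space/BVH trace used when the
    # screen-space march can't give an answer.
    return ('world_fallback', None)

def hybrid_trace(x, z, dx, dz, thickness=0.5, max_steps=64):
    for _ in range(max_steps):
        x += dx
        z += dz
        xi = int(round(x))
        if xi < 0 or xi >= depth_buffer.size:
            return world_space_trace(x, z, dx, dz)   # ray left the screen
        scene_z = depth_buffer[xi]
        if scene_z <= z <= scene_z + thickness:
            return ('screen_hit', xi)                # ray entered a visible surface
        if z > scene_z + thickness:
            return world_space_trace(x, z, dx, dz)   # behind geometry: depth buffer can't say
    return world_space_trace(x, z, dx, dz)

print(hybrid_trace(x=10, z=3.0, dx=1.0, dz=0.3))    # descends onto the floor -> screen hit
print(hybrid_trace(x=10, z=1.0, dx=-1.0, dz=0.0))   # runs off the left edge -> world fallback
[/code]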
 
That's been one of the big debates, whether 'compute' will be a better solution. The argument against has been that compute hasn't enabled RTRT so far whereas RTX has, but the counter to that is there's a load of research still to do.
Let's be a bit more restrained here. RTX has enabled a compromised RTRT for a given subset of the total rendering problem, at significant cost in performance and some cost in silicon that could otherwise have been used to enhance other capabilities.

General compute has produced overall convincing results at moderate computational cost already, and nothing says we're at road's end there at all. I'm more doubtful when it comes to silicon scaling and thus the growth in the amount of graphics resources available to throw around.

Raytracing as a word may have enough weight to sell hardware, but I’m not sure about that. The initial hype died off very quickly when the games that could demonstrate the benefits showed that gamers had difficulty differentiating the lighting approaches at all, and when they could the perception was that the renderings were different but without clear advantages to either.
Apart from cost, that is.

Now I’m fully aware that opinions differ on this issue, but it is undeniable that even graphics enthusiasts on forums have cooled off on the greatness of RT. Joe Average has no reason to care at all.

We’ll see. It may be something that is hyped to help push the new consoles. Or it may not. Cerny made a big deal about load times, and spoke about 3D audio, but not RTRT. Then again, he may be keeping that as an ace up his sleeve. Who knows? But when efficiency demands dedicated hardware, with inevitable costs to other resources and of limited general use, then to me it’s only natural to feel a bit doubtful about console viability.

But if total rendering efficiency improves, sure.
In which case future titles are bound to favour very shiny and reflective futuristic settings with rain puddles even in subway stations. ;-)
Looking forward to it.
 
Raytracing as a word may have enough weight to sell hardware, but I’m not sure about that. The initial hype died off very quickly when the games that could demonstrate the benefits showed that gamers had difficulty differentiating the lighting approaches at all, and when they could the perception was that the renderings were different but without clear advantages to either.
Apart from cost, that is.

Now I’m fully aware that opinions differ on this issue, but it is undeniable that even graphics enthusiasts on forums have cooled off on the greatness of RT. Joe Average has no reason to care at all.
The first examples of RT are way, way below what will be achieved long term, in the same way the first games on PS2 were way, way below what the hardware could ultimately achieve, as the new possibilities were completely unknown and the first applications of the new hardware were rudimentary advances on old techniques. Look at the sudden appearance of things like the Minecraft mods, the Quake mods, separating lighting and shadowing passes, and screen-space traced shadowing, to see how the field is exploding. With or without RT hardware, tracing has become a technique usable in enough places now (at GTX 1080+ level performance) for devs to explore its application in ways they never have before. You'd have to have been pretty much blind to how graphics technology has progressed over the years to believe that the first few nVidia-backed, RTX-enhanced games were all RT had to offer. That's like looking at the first application of programmable shaders and wondering what's the big deal versus fixed-function hardware that was achieving the same results. Tech like tessellation is limited in scope and application. Tech like tracing rays is akin to programmable shaders - a whole new resource to be used, and a whole new field of research to discover how to use it.
 
It's not a resource, it's a straightjacket. Problems which need to be solved going forward (geometry LOD for instance) become harder with these raytracing APIs.

There's still mostly just compute below it, with a much more expressive API than third-party developers have access to. I'd rather they gave access to that and not mix up the small bits of dedicated hardware with a ton of black-box software.
 
Okay. Er...I never said the APIs were a resource. I've only ever talked about the value of RT, whether attained through hardware acceleration or compute.
 
We’ll see. It may be something that is hyped to help push the new consoles. Or it may not. Cerny made a big deal about load times, and spoke about 3D audio, but not RTRT. Then again, he may be keeping that as an ace up his sleeve. Who knows? But when efficiency demands dedicated hardware, with inevitable costs to other resources and of limited general use, then to me it’s only natural to feel a bit doubtful about console viability.


The Wired journalist who talked with Cerny said on Twitter that they mostly talked about raytracing as a visual effect, but he decided to edit his article differently. He wasn't even aware that Nvidia's RTX cards existed...

EDIT:

https://patents.google.com/patent/US9633471B2/en

A 2015 Sony patent about hardware photon mapping on a GPU to complement direct illumination done with rasterisation.

http://casual-effects.com/research/Mara2013Photon/index.html

It uses the 2D tiling from this Nvidia paper.

In 2016, Sony hired this ex-employee of Imgtec/Caustic Graphics:

https://www.linkedin.com/in/carlovloet

https://carlovloet.wordpress.com/

He did a master's thesis on photon mapping:
https://carlovloet.files.wordpress.com/2011/03/portfolio-carlo-vloet.pdf

Sony has this patent referencing a KD-tree:

http://www.freepatentsonline.com/10255650.html

EDIT: Photon mapping is fully independent of the geometric complexity of the scene, and if you read the Nvidia study, the 2D tiling is not a problem for resolution. And one of the goals of the Sony patent was to be able to do global illumination at high resolution and high speed.
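For anyone unfamiliar with the technique, the core of a photon-map gather is just a nearest-neighbour density estimate over stored photons, which is why scene triangle count doesn't enter into it. A minimal sketch (my own toy code using scipy's KD-tree, nothing from the patent or the Nvidia paper):

[code]
# Minimal photon-map gather: irradiance at a point estimated from the k nearest
# stored photons (density estimation), independent of scene triangle count.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Fake photon map: positions on a plane plus per-photon power (flux), as if
# traced from the lights and stored at their hit points.
num_photons = 100_000
positions = rng.uniform(-5.0, 5.0, size=(num_photons, 3))
positions[:, 2] = 0.0                                  # photons landed on the z=0 plane
powers = np.full(num_photons, 1.0 / num_photons)       # total flux of 1, split evenly

kdtree = cKDTree(positions)                            # the KD-tree the patents keep citing

def estimate_irradiance(point, k=64):
    """Classic Jensen-style estimate: sum the k nearest photon powers over pi*r^2."""
    dists, idx = kdtree.query(point, k=k)
    radius = dists.max()
    return powers[idx].sum() / (np.pi * radius ** 2)

print(estimate_irradiance(np.array([0.0, 0.0, 0.0])))  # densest region of the toy map
print(estimate_irradiance(np.array([4.5, 4.5, 0.0])))  # edge of the photon distribution
[/code]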
 

At Gamelab 2013, Mark Cerny spoke about a realtime raytracing GPU that Sony did not use for PS4 because it would have meant every game team needing several years to rebuild their engines to use it.

Edit:

And he was talking about realtime raytracing in consoles back in 2010.
 
Woah woah woah! You've really put words into his mouth there. Raytracing was only presented as a hypothetical example of not disrupting devs' existing workflows. There was no RTRT option for PS4 - it was a hypothetical question on where devs stood regarding exotic architectures. Likewise, the remark in 2010 was only about how graphics were set to remain the same and devs weren't going to need to wrestle with another new architecture such as raytracing.

Both links are talking about game development and not lighting techniques. Both times RT is just mentioned as an alternative render tech where there aren't really that many to choose from. Post-Dreams, he could have replaced 'ray tracing' with 'splatted SDFs'.
 

I highly doubt Sony was not thinking about raytracing before 2015, the date of the photon mapping patent. As Phil Spencer said in 2014, there will be realtime raytracing on console. Likewise, the patent for dynamic traversal of a data structure was updated in 2015, the year they filed the photon mapping patent, and Carlo Vloet was hired just after that patent was filed.

EDIT: I think we will see a hardware global illumination solution on Xbox Scarlett and PS5...
 
Ultimately, one patent & one engineer don't mean much in the grand scheme of things... We can't assume anything based on this info.
AMD has had numerous engineers (& published papers) working on RT for years now, in addition to their own RT engine (Radeon Rays)... and still no "RT Cores" in sight.
 

They may well have other engineers, but I found this guy on Google by trying the search term "Photon mapping Sony Interactive Entertainment"; because of the KD-tree traversal in the other patent, I was searching for what could link KD-trees to Sony...

EDIT: And I found the 2015 patent one week after finding this guy; the patent itself comes just after the Nvidia study. Photon mapping was working very well for some complex test scenes at 1080p on a GeForce 670 using compute shaders. They could have improved the performance, but they expected that after two fab process generations it would be viable for videogames without improving the algorithm, and that it would be viable even sooner with some hardware and software improvements.

http://casual-effects.com/research/Mara2013Photon/Mara13Photon.pdf

We demonstrated sufficient performance to immediately enable photon mapping for interactive design applications, provided the photon trace is amortized over multiple frames and also performed efficiently. For limited BSDFs and specific scenes, this could perhaps be optimized enough to deploy on current high-end hardware in games. Yet, a more important goal is robust and accurate lighting in the near future without such limitations and simplifications. We cast our work as a step toward that goal.

We note that rendering bounding volumes over screen-space buffers is an increasingly popular technique for quantities other than indirect light (e.g., [Laine and Karras 2010; Maletz and Wang 2011; Kim 2012]). The approach demonstrated by our efficient 2.5D bounds and Tiled methods may be directly applicable to these.

2.5D scattering is easy to implement and fast in practice. However, both 3D and 2.5D scattering are quality limited by their inability to (efficiently) sample stochastically during shading and thus will not scale well to higher resolutions or fine detail. The Tiled gather algorithm supports stochastic sampling and already performs well; it seems the most likely candidate moving forward. Profiling revealed several opportunities for low-level optimization with its structure. The most significant may be reducing the required 40 registers for the full BSDF, which limits the number of concurrent threads and thus performance. We also observed memory bank conflicts and fairly weak coalescing across global memory read operations that may be addressable with more careful operation scheduling.

For stochastic sampling during shading, we used statistically independent random numbers at each pixel. Cooperative sampling methods have been shown to dramatically increase convergence by coordinating samples at adjacent pixels. A more sophisticated reconstruction filter may also reduce the number of samples required to achieve acceptable convergence.

Historical trends indicate that two GPU generations will yield approximately a 6× increase in parallelism. Even without further optimization, that would enable real-time applications of photon mapping in production applications using the methods described in this paper. However, we hope that algorithmic and hardware advances will enable this even sooner and at a higher quality.

And there aren't many references in the Sony patent: only the Jensen patent from when he created photon mapping, a reference to the Nvidia study, and another to a study by Zhou, a Microsoft Research guy who patented the fast creation of a KD-tree on a GPU.
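And since the quoted conclusion keeps coming back to the Tiled gather, here is roughly the idea stripped to the bone (my own toy illustration, not the paper's code): photons are binned into every screen tile their gather radius touches, so shading a pixel only has to walk its own tile's list.

[code]
# Toy "tiled gather" in the spirit of the Nvidia paper: photons are binned into
# every screen tile their gather radius overlaps, so a pixel's density estimate
# only reads its own tile. Illustration only, not the paper's actual algorithm.
import numpy as np

rng = np.random.default_rng(2)
W = H = 256
TILE = 16
RADIUS = 4.0                        # gather radius in pixels
num_photons = 50_000

# Fake photons already splatted to screen space: position (x, y) and power.
px = rng.uniform(0, W, num_photons)
py = rng.uniform(0, H, num_photons)
power = np.full(num_photons, 1.0 / num_photons)

# Conservative binning: a photon goes into every tile its radius overlaps.
tiles = {}
for x, y, p in zip(px, py, power):
    for tx in range(int(x - RADIUS) // TILE, int(x + RADIUS) // TILE + 1):
        for ty in range(int(y - RADIUS) // TILE, int(y + RADIUS) // TILE + 1):
            tiles.setdefault((tx, ty), []).append((x, y, p))

def gather(pixel_x, pixel_y):
    """Density estimate at a pixel using only the photon list of its own tile."""
    total = 0.0
    for x, y, p in tiles.get((int(pixel_x) // TILE, int(pixel_y) // TILE), []):
        if (x - pixel_x) ** 2 + (y - pixel_y) ** 2 < RADIUS ** 2:
            total += p
    return total / (np.pi * RADIUS ** 2)

print(gather(128.0, 128.0))         # interior pixel
print(gather(3.0, 3.0))             # pixel near the screen edge
[/code]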
 