Photon Mapping

Just some theorycrafting about that slide, since I am unhappy with the formulation. What if the local GPU HW provides a low ray count, say 1 ray per pixel or even less, for a standard effect... say reflections?
How do you design hardware to trace a limited ray count? That's like suggesting a GPU is designed to only shade a certain number of pixels to make it faster at shading that number of pixels. You have a number of compute units, and these process pixel shaders and shade pixels, whatever the resolution. You can't make a compute unit faster by limiting it to 1080p framebuffers. The ROPs draw the pixels, however many you want, as quickly as they can. You can't make a ROP faster by limiting it to 1080p framebuffers. Likewise, with ray tracing, you cast rays, however many you choose, a handful for AI, or billions for total scene illumination. Once your hardware has traced all those rays, whether on CPU or compute or accelerated HW, you have your data to use however you want, such as constructing an image. The process of tracing a ray is independent of screen size.

I am unable to envision a hardware design that only traces a fixed number of rays, unless you literally have 2 million sampling units that can each trace one ray per frame for a 1080p image. Realistically, HWRT is going to be a form of processor that'll take workloads and produce results as quickly as it can, to be used however they are used.

Perhaps, thinking aloud as ideas come to me, the RT process is coarse-grained, not tracing down to the geometry level, making it suitable for lighting but not sharp reflections? That would involve less memory and so allow caches to be more effective. Hardware cone tracing? Well, no, it's called ray tracing in the slide.
 
I was not talking about the HW limiting the ray count or something (though, as an example, AMD does limit the tessellation factor manually or automatically in its driver on PC), sorry if it came off that way. Rather, I was talking about the implementation in theoretical games. Like the devs limit the ray count, or AMD RT sponsored titles would design how many rays they cast based upon specific AMD HW limitations/features.
 
Not sure I understand. Devs can and should be free to cast however many rays they want. We already see RTX scaling ray counts depending on which box you have, and distributing rays based on importance.
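
As a rough illustration of what "scaling ray counts depending on which box you have" and "distributing rays based on importance" can look like on the developer side, here is a minimal C++ sketch of an importance-driven ray budget: the per-frame budget is the hardware-dependent knob, and rays are handed out per pixel in proportion to an importance value. All names and numbers are invented for the example, not taken from any actual engine or API.

```cpp
#include <cstdio>
#include <cstdint>
#include <numeric>
#include <vector>

// Distribute a fixed per-frame ray budget across pixels in proportion to a
// per-pixel importance value (e.g. surface roughness or temporal variance).
// The budget itself is the knob you scale per hardware tier; the tracer just
// receives "this many rays for this pixel" as an ordinary workload.
std::vector<uint32_t> allocateRays(const std::vector<float>& importance,
                                   uint64_t rayBudget)
{
    double total = std::accumulate(importance.begin(), importance.end(), 0.0);
    std::vector<uint32_t> raysPerPixel(importance.size(), 0);
    if (total <= 0.0) return raysPerPixel;

    for (size_t i = 0; i < importance.size(); ++i) {
        // Round the proportional share; a real allocator would also redistribute
        // the rounding remainder so the budget is met exactly.
        raysPerPixel[i] = static_cast<uint32_t>(rayBudget * (importance[i] / total) + 0.5);
    }
    return raysPerPixel;
}

int main()
{
    // Tiny 4-pixel "frame": rough pixels get more reflection rays than mirror-like ones.
    std::vector<float> importance = { 0.9f, 0.1f, 0.5f, 0.5f };

    // Hypothetical budgets for two hardware tiers -- the same code runs on both,
    // only the number of rays changes.
    for (uint64_t budget : { 4ull, 16ull }) {
        auto rays = allocateRays(importance, budget);
        std::printf("budget %llu:", (unsigned long long)budget);
        for (uint32_t r : rays) std::printf(" %u", r);
        std::printf("\n");
    }
    return 0;
}
```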
 
Like the devs limit the ray count, or AMD RT sponsored titles would design how many rays they cast based upon specific AMD HW limitations/features.
Of course you always adjust to what the hardware can do, but just take the Crytek demo, which gets good results even without HW. It seems there is no need to worry much.
If their RT were so weak that it only made sense with help from the cloud, they would just drop it, I guess.
Dealing with limitations / decisions like "can barely trace the characters" would cause a lot of complexity for little benefit, so they would drop it as well.
 

Because I doubt they will use point cloud rendering for geometry like in Dreams, it could come from the lighting solution: the data structure for photon mapping looks like a point cloud. Or, maybe far-fetched, but we never know, they could do like Pixar's RenderMan and approximate direct illumination with point cloud lighting.

https://graphics.pixar.com/library/HQRenderingCourse/paper.pdf

Photon emission from complex light sources

Emitting photons corresponding to simple light sources such as point lights, spot lights, and directional lights is fairly straightforward. However, movie production light source shaders are very complex: they can project images like a slide projector and they can have barn doors, cucoloris ("cookies"), unnatural distance fall-off, fake shadows, artistic positioning of highlights, etc. [5]. For photon emission corresponding to such a light source, we would need to compute a photon emission probability distribution function that corresponds to the light source shader. This can be difficult since the programmable fall-off requires evaluation of the shader not only at different angles but also at different distances.

However, there is a simple method to create photon emission distributions that exactly match very general light sources. The method starts by evaluating the light source shader on the surface points. This is done by rendering the scene with direct illumination from the light source(s) and storing a point cloud of the direct illumination. The point cloud contains an illumination point for each surface shading point (roughly one point per pixel). This is a sampling of the light source shader exactly at the positions where its values matter (namely at the surfaces in the scene), and ensures that the photon distribution exactly matches the illumination — no matter how complex and unpredictable the light source shader is. We basically treat the light source shader as a "black box" that is characterized only by its illumination on the surfaces in the scene.
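
To make the quoted idea concrete, here is a small, self-contained C++ sketch (my own toy example, not code from the paper): treat the stored direct-illumination point cloud as a discrete distribution and emit photons from points chosen in proportion to their energy, so the emission matches whatever the light source shader produced.

```cpp
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

// One entry of the direct-illumination point cloud: a surface position and the
// energy (illumination times represented area) that the light source shader
// deposited there.
struct IlluminationPoint {
    float x, y, z;
    float energy;
};

int main()
{
    // Hypothetical point cloud -- in the paper this comes from a first render
    // pass that bakes direct illumination at roughly one point per pixel.
    std::vector<IlluminationPoint> cloud = {
        { 0.0f, 0.0f, 0.0f, 0.2f },
        { 1.0f, 0.0f, 0.0f, 1.5f },   // brightly lit point: emits more photons
        { 0.0f, 1.0f, 0.0f, 0.3f },
    };

    // Build a cumulative distribution over point energies so photon emission
    // exactly matches the (arbitrarily complex) light source shader's output.
    std::vector<float> cdf(cloud.size());
    float total = 0.0f;
    for (size_t i = 0; i < cloud.size(); ++i) {
        total += cloud[i].energy;
        cdf[i] = total;
    }

    std::mt19937 rng(42);
    std::uniform_real_distribution<float> uni(0.0f, total);

    const int photonCount = 8;
    const float photonPower = total / photonCount;  // equal-power photons

    for (int p = 0; p < photonCount; ++p) {
        // Pick a surface point with probability proportional to its energy...
        size_t idx = std::lower_bound(cdf.begin(), cdf.end(), uni(rng)) - cdf.begin();
        if (idx >= cloud.size()) idx = cloud.size() - 1;
        const IlluminationPoint& pt = cloud[idx];
        // ...and start a photon there; a real implementation would then scatter
        // it through the scene and store the hits in the photon map.
        std::printf("photon %d starts at (%.1f, %.1f, %.1f), power %.3f\n",
                    p, pt.x, pt.y, pt.z, photonPower);
    }
    return 0;
}
```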
 
I don't think anymore that photon mapping will be in PS5, but it is possible to use it without ray tracing. It is more like Voxel Cone Tracing; I suppose this is better. This is what they used to do Davy Jones in Pirates of the Caribbean. I suppose they can use a 3D texture to replace the octree, like Voxel Cone Tracing.
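
A minimal sketch of what "a 3D texture replacing the octree, like Voxel Cone Tracing" could look like, assuming a pre-filtered (mipmapped) volume: the cone is marched front to back and samples coarser mip levels as it widens. The volume lookup here is a toy analytic stand-in so the example runs on its own; a real implementation would fetch from an actual mip chain on the GPU.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Stand-in for a mipmapped 3D texture lookup: returns an occlusion density at a
// position for a given mip level. In a real renderer this would be a hardware
// trilinear fetch from a pre-filtered voxel volume; here it is a toy analytic
// "blob" so the example is self-contained.
float sampleVolume(float x, float y, float z, float mipLevel)
{
    float d2 = (x - 0.5f) * (x - 0.5f) + (y - 0.5f) * (y - 0.5f) + (z - 0.5f) * (z - 0.5f);
    float density = d2 < 0.04f ? 1.0f : 0.0f;           // a small occluder at the centre
    return density / (1.0f + mipLevel);                  // crude stand-in for pre-filtering
}

// March a cone through the volume, sampling coarser mip levels as the cone
// widens, and accumulate opacity front-to-back. This is the core of voxel cone
// tracing; the octree of the original technique is replaced by mip levels of a
// 3D texture, which is what the post speculates about.
float traceCone(float ox, float oy, float oz,
                float dx, float dy, float dz,
                float coneAngle, float maxDist, float voxelSize)
{
    float opacity = 0.0f;
    float t = voxelSize;                                  // start one voxel out to avoid self-occlusion
    while (t < maxDist && opacity < 0.99f) {
        float radius = t * std::tan(coneAngle);           // cone footprint at this distance
        float mip = std::max(0.0f, std::log2(radius / voxelSize));
        float a = sampleVolume(ox + dx * t, oy + dy * t, oz + dz * t, mip);
        opacity += (1.0f - opacity) * a;                  // front-to-back compositing
        t += std::max(radius, voxelSize);                 // step grows with the cone
    }
    return opacity;
}

int main()
{
    // One ambient-occlusion-style cone pointing straight "up" from below the blob.
    float occ = traceCone(0.5f, 0.0f, 0.5f,  0.0f, 1.0f, 0.0f,
                          0.3f /*radians*/, 1.0f /*max dist*/, 1.0f / 64.0f /*voxel size*/);
    std::printf("cone occlusion: %.3f\n", occ);
    return 0;
}
```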



From a July job announcement:


Senior Graphics Researcher (12 month contract) - London

PlayStation London, GB


Research and develop graphics techniques like real time ray tracing and point cloud rendering for next generation console platforms.


https://graphics.pixar.com/library/PointBasedColorBleeding/


The second rendering part can use Point-Based Color Bleeding, and that at least seems more compatible with point cloud rendering.


https://graphics.pixar.com/library/PointBasedColorBleeding/paper.pdf


Section 7.2 of this document:


7.2 Final gathering for photon mapping


Another application of this method is for photon mapping [Jensen 1996] with precomputed radiance and area estimates [Christensen 1999]. The most time-consuming part of photon mapping is the ray-traced final gathering step needed to compute high-quality images. The point-based color bleeding method presented here can be used directly to speed up the final gathering step. In this application, the points in the point cloud represent coarse global illumination computed with the photon mapping method (instead of direct illumination).
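
For what section 7.2 describes, a point-based final gather roughly looks like the following C++ sketch: each point in the cloud is treated as a small disk carrying coarse GI radiance, and the receiver sums their cosine-weighted solid-angle contributions instead of shooting final-gather rays. Visibility and the rasterisation-onto-a-cube step from the paper are left out, and all values are invented for the example.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// A surfel from the coarse-GI point cloud: position, normal, area, and the
// radiance stored there (in the paper's section 7.2 this radiance comes from a
// photon map with precomputed radiance estimates, not from direct lighting).
struct Surfel {
    float px, py, pz;     // position
    float nx, ny, nz;     // normal
    float area;           // surface area the point represents
    float radiance;       // coarse GI at the point (a scalar here for brevity)
};

// Point-based final gather at a receiver: instead of tracing final-gather rays,
// sum each surfel's contribution weighted by the solid angle it subtends and
// the cosine terms at both ends. Visibility is ignored in this sketch.
float gather(float rx, float ry, float rz,       // receiver position
             float rnx, float rny, float rnz,    // receiver normal
             const std::vector<Surfel>& cloud)
{
    float result = 0.0f;
    for (const Surfel& s : cloud) {
        float dx = s.px - rx, dy = s.py - ry, dz = s.pz - rz;
        float d2 = dx * dx + dy * dy + dz * dz;
        if (d2 < 1e-6f) continue;
        float d = std::sqrt(d2);
        float cosR = (dx * rnx + dy * rny + dz * rnz) / d;        // at the receiver
        float cosS = -(dx * s.nx + dy * s.ny + dz * s.nz) / d;    // at the surfel
        if (cosR <= 0.0f || cosS <= 0.0f) continue;               // facing away
        float solidAngle = s.area * cosS / d2;                    // subtended solid angle
        result += s.radiance * cosR * solidAngle / 3.14159265f;
    }
    return result;
}

int main()
{
    // Toy cloud: one bright surfel above the receiver, one behind it.
    std::vector<Surfel> cloud = {
        { 0.0f,  1.0f, 0.0f,   0.0f, -1.0f, 0.0f,   0.05f, 2.0f },
        { 0.0f, -1.0f, 0.0f,   0.0f,  1.0f, 0.0f,   0.05f, 2.0f },
    };
    float indirect = gather(0.0f, 0.0f, 0.0f,  0.0f, 1.0f, 0.0f, cloud);
    std::printf("gathered indirect illumination: %.4f\n", indirect);
    return 0;
}
```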
 
Because I doubt they will use point cloud rendering for geometry like in Dreams
Using point clouds to calculate lighting does not mean you have to use points for primary visibility as well.

Ok, first you frightened me by speculating photon mapping hardware,
and now you frighten me, saying they may finish surfel GI one day before me!!! :O :)
 

Sorry...
 