Photon Mapping

It means you don't need tensor cores to do it; you use other things instead. You trade tensor cores for other hardware.

It is more a question of which approach gives the more efficient use of die area.

Tensor cores are not necessary for denoising; it can be done with regular pixel shaders and GPU compute, and so far that's how it's been done by most if not all released RTX games. Tensor-based denoising is something nVidia tried to make into a thing, I believe solely to justify the tensor cores' inclusion on consumer cards and artificially make them useful in that space, leveraging nVidia's relatively longer experience with tensors into an extra feature in the field of gaming, but it did not actually catch on.
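As a rough illustration of what a compute-based denoiser does, here is a minimal à-trous filtering pass in Python/NumPy. This is a toy sketch, not any shipped game's denoiser: real compute-shader denoisers of this family add edge-stopping weights from normal/depth buffers and run on the GPU, while this version is a plain CPU loop over a grayscale image.

```python
import numpy as np

def atrous_pass(img, step):
    """One pass of an a-trous ("holed") 5x5 filter, the building block
    of many compute-shader denoisers. `img` is a 2D grayscale array;
    `step` is the hole size between taps (1, 2, 4, ... per iteration)."""
    h, w = img.shape
    out = np.zeros_like(img)
    taps = [1.0, 4.0, 6.0, 4.0, 1.0]  # B3-spline weights
    for y in range(h):
        for x in range(w):
            acc, wsum = 0.0, 0.0
            for i, wy in enumerate(taps):
                for j, wx in enumerate(taps):
                    sy = y + (i - 2) * step
                    sx = x + (j - 2) * step
                    if 0 <= sy < h and 0 <= sx < w:
                        wgt = wy * wx
                        acc += wgt * img[sy, sx]
                        wsum += wgt
            out[y, x] = acc / wsum   # renormalize at image borders
    return out

def denoise(img, passes=3):
    """Repeated a-trous passes with doubling step widths, so the
    effective filter footprint grows without extra taps per pass."""
    for k in range(passes):
        img = atrous_pass(img, 1 << k)
    return img
```

The point is just that nothing here needs tensor hardware: it is a handful of multiply-adds per pixel, a natural fit for a regular compute shader.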
 
I agree denoising might compete with PM, but this was not yet known in 2015.
PM might have been more attractive before recent denoising progress, and maybe they had to start chip design at that time.

On the other hand, beating 'slow' PT is not so much of an argument either. It would be more interesting to know how PM compares to something like voxel GI in practice for games.
 

Knowing they want to go with 2D tiles, maybe a better data structure would be a spatial hashing method like in this paper:
https://www.csee.umbc.edu/~olano/class/635-11-2/sgupte1.pdf
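For example, a minimal spatial hash for photon storage might look like the following Python sketch. All names here are mine, and the fixed cell size is chosen to match the gather radius; the GPU version in the paper hashes into flat texture memory rather than a dictionary, but the idea of constant-time insert and a 27-cell neighborhood gather is the same.

```python
import math
from collections import defaultdict

CELL = 0.25  # hash cell size, roughly the gather radius

def cell_of(p):
    """Quantise a 3D position to an integer grid cell (the hash key)."""
    return tuple(int(math.floor(c / CELL)) for c in p)

class PhotonHash:
    """Minimal spatial-hash photon storage: O(1) insert, and a gather
    that only visits the 27 cells around the query point instead of
    walking a kd-tree."""
    def __init__(self):
        self.cells = defaultdict(list)

    def insert(self, pos, power):
        self.cells[cell_of(pos)].append((pos, power))

    def gather(self, pos, radius):
        """All photons within `radius` of `pos` (assumes radius <= CELL)."""
        cx, cy, cz = cell_of(pos)
        r2 = radius * radius
        found = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for p, power in self.cells.get((cx + dx, cy + dy, cz + dz), ()):
                        if sum((a - b) ** 2 for a, b in zip(p, pos)) <= r2:
                            found.append((p, power))
        return found
```

The appeal over a kd-tree is exactly the point made above: insertion is trivial to parallelise and lookups touch a small, predictable set of cells.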
 
I would not be surprised if Simon Brown of Media Molecule, who previously worked for Sony ATG, had been consulted for the raytracing part of PS5. He is a specialist in raytracing and global illumination (read his blog; he has implemented every GI algorithm), and in voxels and SDFs too. Before being at Media Molecule he was at Sony ATG, and in 2009 they did a test at work using a raytracer for direct lighting and a photon mapper, but they replaced the raytracer with lightcuts.


http://sjbrown.co.uk/2009/02/05/multiple-importance/

The 2009 test

http://sjbrown.co.uk/

On his blog he talks about every possible light transport algorithm and raytracing.

https://www.mobygames.com/developer/sheet/view/developerId,728325/#

This is where I found that he was at Sony ATG before Media Molecule; he only became a Media Molecule employee in 2013, but had worked on Dreams since 2011 as a Sony ATG employee. Read the SIGGRAPH 2015 presentation about Dreams...

Edit: Maybe lightcuts and a photon mapper.

Lightcuts can approximate everything except caustics...
 
Lightcuts

The lightcuts framework presents a new scalable algorithm for computing the illumination from many point lights. Its rendering cost is strongly sublinear with the number of point lights while maintaining perceptual fidelity with the exact solution. We provide a quick way to approximate the illumination from a group of lights and, more importantly, cheap and reasonably tight bounds on the maximum error in doing so. We also present an automatic and locally adaptive method for partitioning the lights into groups to control the tradeoff between cost and error. The lights are organized into a light tree for efficient partition finding. Lightcuts can handle non-diffuse materials and any geometry that can be ray traced. Having a scalable algorithm enables us to handle extremely large numbers of light sources. This is especially useful because many other difficult illumination problems can be simulated using the illumination from sufficiently many point lights. Lightcuts can compute the illumination from many thousands of lights using only a few hundred shadow rays. Reconstruction cuts can further reduce this to just a dozen or so.
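To make the cut-selection idea from that abstract concrete, here is a hedged Python sketch of the core loop: build a binary light tree, then keep refining whichever cluster on the cut has the largest error bound until every cluster's bound falls below a fraction of the current estimate. The bound here uses only inverse-square falloff against the cluster's bounding box; the actual paper also bounds the material and cosine terms and handles visibility with shadow rays. The names are mine, not the paper's.

```python
class LightNode:
    """Light-tree node: a single point light, or a cluster whose
    representative carries the summed intensity of its leaves."""
    def __init__(self, pos, intensity, bbox, children=()):
        self.pos = pos            # representative light position
        self.intensity = intensity
        self.bbox = bbox          # (min_corner, max_corner) of the leaves
        self.children = children

def leaf(pos, intensity):
    return LightNode(pos, intensity, (pos, pos))

def cluster(a, b):
    # representative taken from the brighter child (one common choice)
    rep = a if a.intensity >= b.intensity else b
    lo = tuple(min(p, q) for p, q in zip(a.bbox[0], b.bbox[0]))
    hi = tuple(max(p, q) for p, q in zip(a.bbox[1], b.bbox[1]))
    return LightNode(rep.pos, a.intensity + b.intensity, (lo, hi), (a, b))

def dist2(p, q):
    return sum((u - v) ** 2 for u, v in zip(p, q))

def estimate(node, x):
    # cluster contribution via its representative, 1/r^2 falloff only
    return node.intensity / max(dist2(node.pos, x), 1e-6)

def bound(node, x):
    # upper bound: total intensity over the squared distance to the
    # cluster's bounding box (geometric term only, no BRDF/visibility)
    lo, hi = node.bbox
    d2 = sum(max(l - c, 0.0, c - h) ** 2 for l, c, h in zip(lo, x, hi))
    return node.intensity / max(d2, 1e-6)

def lightcut(root, x, eps=0.02):
    """Refine the node with the largest error bound until every
    cluster's bound is within eps of the running total (the paper
    uses a 2% perceptual threshold)."""
    cut = [root]
    while True:
        total = sum(estimate(n, x) for n in cut)
        inner = [n for n in cut if n.children]
        if not inner:
            return cut
        worst = max(inner, key=lambda n: bound(n, x))
        if bound(worst, x) <= eps * total:
            return cut
        cut.remove(worst)
        cut.extend(worst.children)
```

The sublinear cost claim follows directly: distant clusters of thousands of lights pass the error test at the root level and never get opened.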

Second characteristic: it is decoupled from geometry.
Decoupling Lighting from Geometry.

Lightcuts [Walter et al. 2005; Walter et al. 2006] build a hierarchy over point samples and adaptively select the appropriate level for efficient synthesis of images with complex illumination, motion blur, and depth of field. The technique is hierarchical with respect to both the point light sources (senders) and geometry (receivers).

Long term, maybe the idea is rasterization just for the visibility check for direct illumination, and raytracing for the visibility check for indirect illumination. On the light transport side, direct illumination and specular/glossy with lightcuts, and photon mapping for indirect illumination and caustics.

There are some common ideas between lightcuts and photon mapping: both can be decoupled from geometry, and both reduce the number of rays launched, i.e. the part with incoherent memory access that depends on scene complexity...
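For the photon-mapping half of that hybrid, the classic density estimate is equally compact. A toy Python version (diffuse-only, with the BRDF folded into the photon power for brevity; a real gather would pull the photons from a kd-tree or the spatial hash discussed earlier):

```python
import math

def radiance_estimate(photons, x, radius):
    """Classic photon-map density estimate at surface point x: sum the
    power of photons landing within `radius` of x and divide by the
    area of the gather disc. `photons` is a list of (position, power)."""
    total = 0.0
    r2 = radius * radius
    for pos, power in photons:
        if sum((a - b) ** 2 for a, b in zip(pos, x)) <= r2:
            total += power
    return total / (math.pi * r2)
```

This is where the "decoupled from geometry" point bites: the estimate only needs photon positions and powers, not the surfaces they bounced off.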
 

If you have a system like lightcuts that can reduce work based on a LOD cut, then why an additional brute-force technique like photon mapping?
Why not the former for everything, and ignore caustics, which no one really needs yet?
If you extend the lightcuts idea to represent geometry as well, you come closer to what I see as the ideal realtime GI algorithm. Or most likely you come to a completely different solution.
And that's why I do not buy the PM hardware acceleration idea. If Sony talked to devs, what might they have said? "Yes, please give us something for GI, no matter what, because we never get there ourselves." or "FF PM??? We want more options, more flexibility! Every game will look the same! Nooo!!" :)

So if they really do this there must be a very good reason, like awesome performance. But if it is not good enough and you still need lightcuts / something else on top, then I doubt it even more.
It makes much more sense to focus just on the common denominator of all of this, which is RT. Lots of chip area already.
My bet is both PS and XBox will have something very similar to RTX, maybe more flexible on console. (Like the ability to link your photons to BVH nodes because there is no restrictive DXR API)
 

But I never said it is the full solution; in the patent it seems to be only one part among other things. We will see when the console is shown.
 
(attached slide image: 3-1080.8a2c9622.jpg)

"Selected Lighting Effects" sounds less powerful than "Full Scene Ray Tracing".
Maybe this makes additional PM hardware more likely, because the RT would be too weak to do serious per-pixel work.

One could even interpret this as 'Skipping RT for dedicated gaming because it's dead - focus on cloud instead.'

Waiting another year to know more, still working on custom GI without being sure there will still be a need for it. I'm tired of this guessing... :|
 

They will reveal it before that, I suppose. It is more a problem of power for full GI; I am not sure that even with RT cores it would be enough to have full GI in a next-gen game...
 
"Selected Lighting Effects" sounds less powerful than "Full Scene Ray Tracing".
That's not a statement of performance but of application. HWRT does not enable full-on raytraced scenes; for that, you want cloud computing. What it does enable is hybrid renderers, the same as RTX, where you selectively use RT for some lighting effects in a game, such as reflections, or AO, or secondary lighting.

How could you design hardware acceleration of raytracing in a way that limits those rays to a subset of raytracing applications? How could you speed up memory traversal and/or intersect tests and have that only applicable to shadows, say, and not direct lighting, or reflections?
 
Yeah, makes perfect sense. Still, the formulation is a bit unfortunate; they should hire some NV marketing guys to make it sound more awesome :D
 
Just some theorycrafting about that slide, since I am unhappy with the formulation. What if the local GPU hardware provides a low ray count, say 1 ray per pixel or even less, for a standard effect... say reflections. Or 1 or fewer rays per pixel, but only moving skinned objects are traced against. Then the cloud, with the same scene representation, provides a much higher ray count per pixel for the rest of the static scene. The time lag for the information from the cloud matters less for objects that are not moving, so you can conserve your local hardware ray budget for the objects that need it the most.
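Purely to illustrate that hypothetical split, here is a sketch of how such a per-frame budget allocation might look. All names are invented; this is speculation about the slide, not any documented design.

```python
def split_ray_budget(pixels, local_budget):
    """`pixels` is a list of dicts with a 'dynamic' flag (pixel sees a
    moving/skinned object). Returns (traced_locally, served_from_cloud):
    the scarce local rays go to dynamic pixels first, any leftover
    budget tops up static pixels, and the rest reuse the laggier but
    higher-quality cloud result."""
    dynamic = [p for p in pixels if p["dynamic"]]
    static = [p for p in pixels if not p["dynamic"]]
    local = dynamic[:local_budget]
    leftover = local_budget - len(local)
    local += static[:leftover]
    cloud = static[leftover:] + dynamic[local_budget:]
    return local, cloud
```

In this toy model the local tier degrades gracefully: if the dynamic pixel count ever exceeds the budget, only dynamic pixels spill to the cloud, which is exactly where the lag hurts, so a real scheme would need some priority ordering there.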

Just some theorycrafting to make that slide make some sort of sense. Otherwise, it is very confusing why the tiers are written that way!
 
Wait, did AMD really just pull an Xbox One on us with "the power of the cloud" talk? Oh my...
 