Realtime reflections in Killzone Shadow Fall

If the tracing method achieves polygon-perfect hit detection and can shoot additional rays, is there really a difference from raytracing?

They might be creating an acceleration structure like a quad-tree and tracing to a final hit against a quad.
The only difference from normal raytracing would be that the source data is in screen space.
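
If that guess is right, the "quad-tree" would amount to a min-depth mip pyramid built from the depth buffer. Here's a minimal CPU-side sketch of how such a pyramid could be built; the names, the single-float depth layout, and the smaller-is-nearer depth convention are all my own assumptions, not anything from the presentation:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch: the "quad-tree" over screen space is just a mip
// pyramid where each texel stores the minimum (nearest) depth of its
// 2x2 children. A ray can then conservatively skip a whole NxN screen
// region whenever it stays in front of that region's minimum depth.
struct DepthPyramid {
    std::vector<std::vector<float>> mips; // mips[0] is full resolution
    std::vector<int> widths, heights;
};

DepthPyramid buildDepthPyramid(const std::vector<float>& depth, int w, int h) {
    DepthPyramid p;
    p.mips.push_back(depth);
    p.widths.push_back(w);
    p.heights.push_back(h);
    while (w > 1 || h > 1) {
        int nw = std::max(1, w / 2), nh = std::max(1, h / 2);
        const std::vector<float>& src = p.mips.back();
        std::vector<float> dst(static_cast<std::size_t>(nw) * nh);
        for (int y = 0; y < nh; ++y) {
            for (int x = 0; x < nw; ++x) {
                // Take the minimum depth of the 2x2 footprint so the coarse
                // cell is a conservative bound on everything beneath it.
                float d = 1.0f;
                for (int dy = 0; dy < 2; ++dy)
                    for (int dx = 0; dx < 2; ++dx) {
                        int sx = std::min(w - 1, x * 2 + dx);
                        int sy = std::min(h - 1, y * 2 + dy);
                        d = std::min(d, src[static_cast<std::size_t>(sy) * w + sx]);
                    }
                dst[static_cast<std::size_t>(y) * nw + x] = d;
            }
        }
        p.mips.push_back(std::move(dst));
        p.widths.push_back(nw);
        p.heights.push_back(nh);
        w = nw;
        h = nh;
    }
    return p;
}
```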
 
If the tracing method achieves polygon-perfect hit detection and can shoot additional rays, is there really a difference from raytracing?
That would be raytracing. Raytracing doesn't prescribe a method for calculating the surface intersection, so any method that determines the nearest surface along a vector is RT. If instead you're approximating the nearest surface by evaluating steps along a vector, that's more like space partitioning.
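
For contrast, here is a minimal sketch of that "evaluating steps along a vector" approach: march the ray in fixed increments and report the first sample that falls behind the depth buffer. Everything here (the names, the 64-step budget, the depth convention) is illustrative, not any particular engine's code:

```cpp
#include <optional>

struct Vec3 { float x, y, z; };

// Fixed-step screen-space march: an approximation, because the hit is
// quantized to the step size rather than an analytic surface intersection.
std::optional<Vec3> linearMarch(Vec3 origin, Vec3 dir,
                                float (*sceneDepth)(float px, float py),
                                int steps = 64, float stepSize = 0.05f) {
    Vec3 p = origin;
    for (int i = 0; i < steps; ++i) {
        p = {p.x + dir.x * stepSize,
             p.y + dir.y * stepSize,
             p.z + dir.z * stepSize};
        // The ray has gone behind the stored surface: call it a hit.
        if (p.z >= sceneDepth(p.x, p.y))
            return p;
    }
    return std::nullopt; // ran out of steps without finding a surface
}
```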
 
So, to be clear: 1) Is Guerrilla ray-tracing/ray-marching to find the reflecting surface mapped to screen space? Is this the secondary ray, or is it just the initial observer ray, with the reflection computed from the surface normal?

2) Is this performed with the scene geometry, or just on the G-buffer or some other buffer?
 
So, to be clear: 1) Is Guerrilla ray-tracing/ray-marching to find the reflecting surface mapped to screen space? Is this the secondary ray, or is it just the initial observer ray, with the reflection computed from the surface normal?

2) Is this performed with the scene geometry, or just on the G-buffer or some other buffer?
My guess is that they start rays from pixels in the G-buffer (or a surface within the screen) and march them against a mipmap pyramid of the screen-space depth.
This would give them an acceleration structure that lets them jump over large distances, plus the ability to get blurry reflections easily (basically two-dimensional cone tracing).
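
A hedged sketch of what marching against such a pyramid could look like, reusing the DepthPyramid from the sketch earlier in the thread: step coarsely while the ray stays in front of a cell's minimum depth, and drop to finer mips as it approaches a surface. Real hi-Z traversals clamp steps to exact cell boundaries; this simplified version just scales the step with the mip level and is not Guerrilla's actual code:

```cpp
struct Hit { float x, y, z; bool valid; };

// Coordinates are normalized: x, y in [0,1) across the screen, z is depth.
Hit traceDepthPyramid(const DepthPyramid& p, float x, float y, float z,
                      float dx, float dy, float dz, int maxIter = 128) {
    const int coarsest = std::max(0, static_cast<int>(p.mips.size()) - 2);
    int level = coarsest;
    for (int i = 0; i < maxIter; ++i) {
        // Step length scales with the cell size at the current mip; this
        // is what lets the ray jump large distances across empty regions.
        float step = static_cast<float>(1 << level) / p.widths[0];
        float nx = x + dx * step, ny = y + dy * step, nz = z + dz * step;
        if (nx < 0 || nx >= 1 || ny < 0 || ny >= 1)
            break; // ray left the screen: no hit information available
        int tx = std::min(p.widths[level] - 1,
                          static_cast<int>(nx * p.widths[level]));
        int ty = std::min(p.heights[level] - 1,
                          static_cast<int>(ny * p.heights[level]));
        float minDepth =
            p.mips[level][static_cast<std::size_t>(ty) * p.widths[level] + tx];
        if (nz < minDepth) {
            x = nx; y = ny; z = nz;                // cell empty: advance
            level = std::min(level + 1, coarsest); // and try coarser again
        } else if (level > 0) {
            --level;                               // near a surface: refine
        } else {
            return {nx, ny, nz, true};             // finest mip: accept hit
        }
    }
    return {0.0f, 0.0f, 0.0f, false};
}
```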

In the presentation he mentioned that they can do another bounce for reflections as well, but I'm sure that's only used in some rare scenes.
 
I was thinking about this today, and I realized this same algorithm could lend itself to reflective shadow maps easily. Just render the albedo and normal of surfaces when generating your shadow map, and when doing the screen-space GI, test your rays against the reflective shadow map instead of the G-buffer. This is nice because it is in world space, so it isn't affected by screen-space discontinuities such as occlusion. It is terribly prone to light bleeding, though, and it would probably be limited to a single light source, or a few at most, for performance reasons.

I could see it being done this way: render your scene and do all the lighting except for the sun. Then render the sun's shadow map with albedo and normals, do the screen-space ray-traced reflections much like in Killzone but using samples from both the G-buffer and the reflective shadow map, then add the sun's lighting contribution in a deferred pass afterwards (so that things in screen space lit by the sun don't produce a double bounce). Some form of cheap world-space occlusion might be needed to avoid too much light bleeding, though; that could be a voxelized version of the scene, precomputed or dynamic, or distance fields as in the Samaritan demo, or whatever else. (See the sketch below.)
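
As a rough illustration of the "test your rays against the reflective shadow map" step: project a world-space ray sample into the sun's shadow map and, when its depth matches the stored depth, return that texel's albedo and normal as the bounce surface. All the structures and names below are hypothetical, and the depth bias is exactly the light-bleeding knob mentioned above:

```cpp
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };

// One texel of the reflective shadow map: the usual depth plus the albedo
// and normal needed to shade a bounce off that surface.
struct RsmTexel { Vec3 albedo; Vec3 normal; float depth; };

struct Rsm {
    int size;                        // square map, size x size texels
    const RsmTexel* texels;          // rendered from the sun's view
    Vec3 (*worldToLight)(Vec3 pos);  // hypothetical projection into light
                                     // clip space, returning (u, v, depth)
                                     // each in [0,1]
};

// Returns the RSM surface a world-space ray sample lands on, if any.
std::optional<RsmTexel> testRayAgainstRsm(const Rsm& rsm, Vec3 worldPos,
                                          float depthBias = 0.002f) {
    Vec3 ls = rsm.worldToLight(worldPos);
    if (ls.x < 0 || ls.x >= 1 || ls.y < 0 || ls.y >= 1)
        return std::nullopt; // outside the shadow map's footprint
    int tx = static_cast<int>(ls.x * rsm.size);
    int ty = static_cast<int>(ls.y * rsm.size);
    const RsmTexel& t = rsm.texels[ty * rsm.size + tx];
    // Match only when the sample sits on the stored surface; too loose a
    // bias causes exactly the light bleeding worried about above.
    if (std::fabs(ls.z - t.depth) < depthBias)
        return t;
    return std::nullopt;
}
```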
 