Game development presentations - a useful reference

Graphics Programming Conference 2024. 19 session videos.


The Graphics Programming Conference is a three-day event in November. We have keynotes from industry experts, presentations, and “masterclass” workshops that let you dive into specific technologies in a hands-on environment.
 
Wonder if Intel will actually push this. Until NVIDIA decides to push extrapolation or frameless rendering themselves, they likely won't take kindly to devs doing it, which will make it hard to get it into games. Maybe if Valve launches an Index 2 they could be convinced to showpiece it? VR has the greatest need.
Could they push ExtraSS as their engine-integrated solution to devs while introducing GFFE as a driver-level feature like AMD Fluid Motion Frames?

Edit: After reading further, probably not.
Note that we use the term "G-buffer free" to refer to the absence of G-buffers for extrapolated frames only. The depth buffer and motion vectors for rendered frames are used since they are usually readily available in the rendering engine without additional cost.
 
The game needs to render a slightly larger viewport to cover the de-occluded pixels expected along the edges. You also want the game engine to provide the view matrix for each rendered frame, and to give the view matrix for the desired extrapolated frame based on real-time user input (no need to dead reckon that; reading input is cheap to do at high FPS). It needs very little extra work in the game engine, but it does need a little.

PS: they say camera pose instead of view matrix; same difference.
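
Something like this is all I mean: a minimal sketch of giving the extrapolated frame its own camera pose from freshly polled input. The yaw/pitch camera, the poll_input() hook and the dt below are my own illustrative assumptions, not anything from the ExtraSS/GFFE papers.

```python
import numpy as np

def view_matrix(position, yaw, pitch):
    """Build a world-to-camera view matrix from a position and yaw/pitch (radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    forward = np.array([cy * cp, sp, sy * cp])
    right = np.cross(forward, [0.0, 1.0, 0.0])
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    view = np.eye(4)
    view[:3, :3] = np.stack([right, up, -forward])      # rows = camera basis vectors
    view[:3, 3] = -view[:3, :3] @ np.asarray(position)  # translate world into camera space
    return view

def extrapolated_view(camera, poll_input, dt):
    """Camera pose for the extrapolated frame.

    Apply the *latest* input sample directly instead of dead-reckoning from
    previously rendered frames; polling input is cheap even at high FPS.
    `poll_input` is a hypothetical engine hook returning (turn_delta, move_delta).
    """
    turn, move = poll_input()
    yaw = camera["yaw"] + turn[0] * dt
    pitch = camera["pitch"] + turn[1] * dt
    # Treat movement as a world-space velocity for simplicity.
    position = np.asarray(camera["position"]) + np.asarray(move) * dt
    return view_matrix(position, yaw, pitch)
```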
 
I'm really enjoying the lighting in Path of Exile 2, and the game runs great. Radiance Cascades seems like a really interesting approach. I believe that Alexander Sannikov at Grinding Gear Games is the inventor of this solution. It seems like there are a lot of people getting interested in radiance cascades, so it'll be cool to see where it goes.





Here's a video I took showing radiance cascades in Path of Exile 2.



I'm really curious to see if any work gets done to improve it with off-screen rays, either hardware ray tracing or some kind of SDF proxy like Lumen. Right now the main issue in 3D is having lighting disappear when the source is occluded.

Edit: The original paper
 
I'm really curious to see if any work gets done to improve it with off-screen rays, either hardware ray tracing or some kind of SDF proxy like Lumen. Right now the main issue in 3D is having lighting disappear when the source is occluded.
It has some interesting properties, but I think it's far from obvious how it would be practical in a 3D world, beyond just the screen space traces. As the paper notes:

For example, even though storing the "tail" of a full 3d radiance field takes only as much space as storing its cascade 0, just storing a cascade 0 is practically equivalent to voxelizing the entire scene – this is often a "dealbreaker" for large-scale scenes.
...
However, it is unclear how to calculate world space radiance intervals anywhere near as efficiently as by raymarching in screenspace.
There's really only a single 3D-world example in the paper, and the results are obviously far too quantized even in a small toy scene. The author notes that hierarchical methods could be applied, but history has shown that "just use sparse voxels!" is far from a silver bullet for the N^3 scaling of world space voxelization.
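
To put rough numbers on the "practically equivalent to voxelizing the entire scene" point, a quick back-of-the-envelope sketch (probe spacing, scene size and bytes per ray are my own illustrative assumptions, not figures from the paper):

```python
# Cascade 0 of a world-space radiance cascade needs a probe roughly every
# `spacing` units across the whole scene, so its cost scales with
# (extent / spacing)^3 -- the same N^3 wall as plain voxelization.
def cascade0_cost(extent_m, spacing_m, rays_per_probe=4, bytes_per_ray=8):
    probes_per_axis = extent_m / spacing_m
    probes = probes_per_axis ** 3
    return probes, probes * rays_per_probe * bytes_per_ray

probes, total_bytes = cascade0_cost(extent_m=1000.0, spacing_m=0.5)
print(f"{probes:.1e} probes, ~{total_bytes / 2**30:.0f} GiB")  # ~8e9 probes, ~238 GiB
```

Halving the probe spacing multiplies that by eight, which is the N^3 wall; screen-space probes sidestep it because they only scale with pixel count.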

I do think it's worth noting that the property of encoding high spatial resolution near contact points and shifting to high angular frequency in the far field is extremely desirable, not just for GI but even just for shadows (and indeed this work appears to follow on from the author's earlier shadow-specific work).

My main concern here though is that the data structure is optimizing a bit too much for sampling when the "how to generate it" part is far from obvious. The claims in the introduction about it not needing temporal reprojection and being able to fully recompute the structure every frame would appear to only apply to the flatland/2D case as implemented in PoE2; even with all the handwaving about 3D, the few examples they give are described as relying on various amounts of temporal reuse. And indeed if they didn't, the claims would be impossible - there's no getting around the number of rays required in arbitrary scenes with specular reflections. I vaguely even remember a paper that showed how to formulate arbitrary computation as path tracing problems in a scene with carefully positioned mirrors.

Already we're trending towards the cost of maintaining the various world space data structures being the main issue, beyond the actual tracing. Obviously both are important, but even if raytracing was infinitely fast, a lot of current games would not even get that much faster or prettier (although we could certainly get rid of some amount of ghosting).

Thanks for the link though - interesting to read through and definitely seems like a good tradeoff for PoE2 and similar type games.
 
Already we're trending towards the cost of maintaining the various world space data structures being the main issue, beyond the actual tracing. Obviously both are important, but even if raytracing was infinitely fast, a lot of current games would not even get that much faster or prettier (although we could certainly get rid of some amount of ghosting).
RT power would mean more rays, so less noise and fewer sampling artefacts. I guess that's the balance: RT resolving power is all about tracing rays and getting more info, while world updating is all about ray-traceable detail. One without the other would mean either a fabulously traced scene of simple, inaccurate geometry, or very sparse and noisy sampling of an accurate scene representation.
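
The "more rays means less noise" half of that balance is just Monte Carlo error falling off as 1/sqrt(N). A toy sketch (integrating sin(x), nothing renderer-specific, just to show the scaling):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 1.0 - np.cos(1.0)  # exact value of the integral of sin(x) over [0, 1]

for n in (16, 256, 4096):
    # 1000 independent n-sample Monte Carlo estimates, to measure typical error.
    estimates = np.sin(rng.random((1000, n))).mean(axis=1)
    rms_error = np.sqrt(np.mean((estimates - truth) ** 2))
    print(f"{n:5d} samples -> RMS error {rms_error:.4f}")
# Error drops ~4x for every 16x more samples (1/sqrt(N)), so "just trace more
# rays" gets expensive fast, while the world-space representation caps how
# accurate those rays can be in the first place.
```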
 