Not a game development presentation exactly, but a really nice blog post about CPU benchmarking and CPU utilization in games.
The Graphics Programming Conference is a three-day event in November. We have keynotes from industry experts, presentations, and “masterclass” workshops with the ability to dive into specific technology in a hands-on environment.
Wonder if Intel will actually push this. Until NVIDIA decides to push extrapolation or frameless rendering themselves, they likely won't take kindly to devs doing it, which will make it hard to get it into games. Maybe if Valve launches Index 2 they could be convinced to showpiece it? VR has the greatest need.
Could they push ExtraSS as their engine-integrated solution to devs while introducing GFFE as a driver-level feature like AMD Fluid Motion Frames?
Note that we use the term "G-buffer free" to refer to the absence of G-buffers for extrapolated frames only. The depth buffer and motion vectors for rendered frames are used since they are usually readily available in the rendering engine without additional cost.
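To make the extrapolation idea concrete: below is a minimal CPU-side sketch of forward-warping the last rendered frame along its motion vectors, scaled by how far ahead you want to extrapolate. Everything here (names, parameters, data layout) is illustrative rather than taken from the ExtraSS/GFFE papers; a real implementation runs as GPU passes and uses the depth buffer to resolve collisions and fill disocclusion holes.

    // Minimal sketch: extrapolate a frame by forward-warping the last
    // rendered frame along its per-pixel motion vectors, scaled by the
    // extrapolation factor (0.5 = half a frame ahead). Illustrative only:
    // no depth-based conflict resolution, no hole filling.
    #include <cstdint>
    #include <vector>

    struct Vec2 { float x, y; };

    void extrapolateFrame(const std::vector<uint32_t>& prevColor,
                          const std::vector<Vec2>& motion, // pixels/frame, from the renderer
                          std::vector<uint32_t>& outColor,
                          int width, int height,
                          float alpha)                     // fraction of a frame to extrapolate
    {
        // Start from the previous frame so pixels nothing warps onto are
        // not left empty (a crude stand-in for proper hole filling).
        outColor = prevColor;
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                int i = y * width + x;
                // Push each pixel forward along its motion vector.
                int dx = x + static_cast<int>(motion[i].x * alpha);
                int dy = y + static_cast<int>(motion[i].y * alpha);
                if (dx >= 0 && dx < width && dy >= 0 && dy < height)
                    outColor[dy * width + dx] = prevColor[i];
            }
        }
    }

The point of extrapolation, as opposed to interpolation, is that this only touches frames that already exist, so it adds no input latency; the hard part the papers deal with is the holes and overlaps this naive loop ignores.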
It has some interesting properties, but I think it's far from obvious how it would be practical in a 3D world, beyond just the screen space traces. There's really only a single example in the paper in a 3D world, and the results are obviously far too quantized even in a small toy scene. The author notes that hierarchical methods could be applied, but history has shown that "just use sparse voxels!" is far from a silver bullet for the N^3 scaling of world space voxelization. As the paper notes:

For example, even though storing the “tail” of a full 3d radiance field takes only as much space as storing its cascade 0, just storing a cascade 0 is practically equivalent to voxelizing the entire scene – this is often a “dealbreaker” for large-scale scenes.

...

However, it is unclear how to calculate world space radiance intervals anywhere near as efficiently as by raymarching in screenspace.

I'm really curious to see if any work gets done to improve it with off-screen rays, either hardware ray tracing or some kind of SDF proxy like Lumen. Right now the main issue in 3D is having lighting disappear when the source is occluded.
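To put rough numbers on the "storing a cascade 0 is practically equivalent to voxelizing the entire scene" point quoted above, here's a back-of-the-envelope calculation. The probe density, direction count, and per-direction format are all assumptions for illustration, not values from the paper:

    // Back-of-the-envelope for the quote above, with made-up but plausible
    // numbers: cascade 0 places probes at roughly per-voxel density, so its
    // storage scales as N^3, just like voxelizing the scene.
    #include <cstdio>

    int main() {
        const long long N = 256;          // probes per axis (assumed)
        const long long dirs = 8;         // directions per cascade-0 probe (assumed)
        const long long bytesPerDir = 8;  // e.g. RGBA16F radiance per direction (assumed)
        long long total = N * N * N * dirs * bytesPerDir;
        std::printf("%.2f GiB for a %lldx%lldx%lld cascade 0\n",
                    total / (1024.0 * 1024.0 * 1024.0), N, N, N);
        // ~1 GiB at 256^3; doubling resolution to 512^3 costs 8x (~8 GiB),
        // which is the N^3 "dealbreaker" for large scenes.
        return 0;
    }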
RT power would mean more rays, so less noise and sampling artefacts. I guess that's the balance: RT resolving power is all about tracing rays and getting more info, while world updating is all about ray-traceable detail. One without the other would mean either a fabulously traced scene of simple, inaccurate geometry, or very sparse and noisy sampling of an accurate scene representation.

Already we're trending towards the cost of maintaining the various world space data structures being the main issue, beyond the actual tracing. Obviously both are important, but even if ray tracing were infinitely fast, a lot of current games would not even get that much faster or prettier (although we could certainly get rid of some amount of ghosting).
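The "more rays, so less noise" half of that tradeoff is just standard Monte Carlo convergence: RMS error falls as 1/sqrt(N), so quadrupling the ray count only halves the noise. A toy demonstration with a known integral (nothing renderer-specific; the integrand and sample counts are arbitrary):

    // Toy illustration of why more rays mean less noise: a Monte Carlo
    // estimate's RMS error falls as 1/sqrt(N), so 4x the samples only
    // halves the noise. Here we estimate the integral of x^2 over [0,1]
    // (true value 1/3) at increasing sample counts.
    #include <cstdio>
    #include <cmath>
    #include <random>

    int main() {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> u(0.0, 1.0);
        for (int N : {16, 64, 256, 1024}) {
            const int trials = 1000;
            double sumSqErr = 0.0;
            for (int t = 0; t < trials; ++t) {
                double est = 0.0;
                for (int i = 0; i < N; ++i) { double x = u(rng); est += x * x; }
                est /= N;
                double err = est - 1.0 / 3.0;
                sumSqErr += err * err;
            }
            // RMS error should roughly halve each time N quadruples.
            std::printf("N=%5d  rms error ~ %.4f\n", N, std::sqrt(sumSqErr / trials));
        }
        return 0;
    }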