Not a game development presentation exactly, but a really nice blog post about CPU benchmarking and CPU utilization in games.
The Graphics Programming Conference is a three-day event in November. We have keynotes from industry experts, presentations, and "masterclass" workshops that let you dive into specific technology in a hands-on environment.
Wonder if Intel will actually push this. Until NVIDIA decides to push extrapolation or frameless rendering themselves, they likely won't take kindly to devs doing it, which will make it hard to get it into games. Maybe if Valve launches Index 2 they could be convinced to showpiece it? VR has the greatest need.
Could they push ExtraSS as their engine-integrated solution to devs while introducing GFFE as a driver-level feature like AMD Fluid Motion Frames?
Note that we use the term "G-buffer free" to refer to the absence of G-buffers for extrapolated frames only. The depth buffer and motion vectors for rendered frames are used, since they are usually readily available in the rendering engine without additional cost.
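To make the "G-buffer free" idea concrete, here's a minimal CPU-side sketch of the basic operation an extrapolated frame relies on: forward reprojection of the last rendered frame along its own motion vectors. Function names and the (missing) hole filling and occlusion handling are my own illustration, not the paper's pipeline.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };

// Toy sketch: synthesize an extrapolated frame by pushing the last *rendered*
// frame's pixels forward along their per-pixel motion vectors. Only data that
// already exists for the rendered frame (color, motion vectors) is consumed;
// nothing is rendered or stored for the extrapolated frame itself.
std::vector<uint32_t> ExtrapolateFrame(const std::vector<uint32_t>& color,
                                       const std::vector<Vec2>& motion, // pixels per frame
                                       int width, int height,
                                       float t) // how far past the rendered frame, e.g. 0.5
{
    std::vector<uint32_t> out(color.size(), 0u); // holes stay black here; real
                                                 // schemes fill them from neighbours
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const int src = y * width + x;
            // Push the pixel forward along its screen-space motion vector.
            const int dx = x + static_cast<int>(std::lround(motion[src].x * t));
            const int dy = y + static_cast<int>(std::lround(motion[src].y * t));
            if (dx >= 0 && dx < width && dy >= 0 && dy < height)
                out[dy * width + dx] = color[src];
        }
    }
    return out;
}
```

The point is that everything the extrapolated frame consumes was already produced while rendering the previous frame; the hard part in practice is dealing with the disocclusion holes this leaves behind.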
"I'm really curious to see if any work gets done to improve it with off-screen rays. Either hardware ray tracing or some kind of SDF proxy like Lumen. Right now the main issue in 3D is having lighting disappear when the source is occluded."

It has some interesting properties, but I think it's far from obvious how it would be practical in a 3D world beyond just the screen-space traces. There's really only a single example of a 3D world in the paper, and the results are obviously far too quantized even in a small toy scene. The author notes that hierarchical methods could be applied, but history has shown that "just use sparse voxels!" is far from a silver bullet for the N^3 scaling of world-space voxelization. As the paper notes:

For example, even though storing the "tail" of a full 3D radiance field takes only as much space as storing its cascade 0, just storing a cascade 0 is practically equivalent to voxelizing the entire scene – this is often a "dealbreaker" for large-scale scenes.
...
However, it is unclear how to calculate world space radiance intervals anywhere near as efficiently as by raymarching in screenspace.
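Back-of-the-envelope for that storage claim, assuming the common 3D scaling where each successive cascade has 8x fewer probes but 4x more directions, i.e. half the memory of the one below it (my arithmetic, not a quote from the paper):

$$S_i = \frac{S_0}{2^i} \quad\Rightarrow\quad \sum_{i \ge 1} S_i = S_0\left(\tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots\right) = S_0$$

So the whole hierarchy costs roughly $2 S_0$, and $S_0$ is already a dense probe grid at close to per-voxel resolution, which is why storing it is "practically equivalent to voxelizing the entire scene".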
"Already we're trending towards the cost of maintaining the various world space data structures being the main issue, beyond the actual tracing. Obviously both are important, but even if raytracing were infinitely fast, a lot of current games would not even get that much faster or prettier (although we could certainly get rid of some amount of ghosting)."

RT power would mean more rays, so less noise and sampling artefacts. I guess that's the balance. RT resolving power is all about tracing rays and getting more info; world updating is all about ray-traceable detail. One without the other would mean either a fabulously traced scene of simple, inaccurate geometry, or very sparse and noisy sampling of an accurate scene representation.
"Radiance Cascades seems like a really interesting approach."

The naive radiance cascade leaks all over the place because the interpolation of higher cascades, while conceptually appealing, is a really poor approximation. They have a "bilinear fix" for that where they trace from the lower cascade's interval end to the higher cascade's interval starts, but that destroys a bit of the elegance.
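For anyone who hasn't stared at the merge step: a rough 2D sketch of the difference, with invented names and no claim to match the paper's exact formulation. The naive merge just bilinearly blends the parent probes' pre-traced intervals; the "bilinear fix" re-traces the short gap from this probe's interval end to each parent's interval start before merging.

```cpp
// Hypothetical 2D sketch of the two merge strategies (all names invented).
// An Interval is the radiance/transmittance accumulated over one probe's ray
// segment; TraceSegment() stands in for raymarching the scene between two points.
struct Vec2 { float x, y; };
struct Interval { float radiance; float transmittance; };

Interval TraceSegment(Vec2 /*from*/, Vec2 /*to*/) {
    return {0.0f, 1.0f}; // stub: a real version raymarches occluders/emitters
}

// Front-to-back compositing: the far interval is attenuated by the near one.
Interval Composite(Interval nearPart, Interval farPart) {
    return { nearPart.radiance + nearPart.transmittance * farPart.radiance,
             nearPart.transmittance * farPart.transmittance };
}

// Naive merge: bilinearly blend the four parent probes' intervals and pretend
// the result lines up with this probe's ray. The spatial gap between the child
// interval's end and the parents' interval starts is what causes the leaking.
Interval MergeNaive(Interval child, const Interval parent[4], const float w[4]) {
    Interval blended = {0.0f, 0.0f};
    for (int k = 0; k < 4; ++k) {
        blended.radiance      += w[k] * parent[k].radiance;
        blended.transmittance += w[k] * parent[k].transmittance;
    }
    return Composite(child, blended);
}

// "Bilinear fix": explicitly trace the connecting segment from the child
// interval's end to each parent interval's start, composite per parent, and
// only then blend with the bilinear weights.
Interval MergeBilinearFix(Interval child, Vec2 childEnd,
                          const Interval parent[4], const Vec2 parentStart[4],
                          const float w[4]) {
    Interval result = {0.0f, 0.0f};
    for (int k = 0; k < 4; ++k) {
        Interval gap  = TraceSegment(childEnd, parentStart[k]);
        Interval full = Composite(child, Composite(gap, parent[k]));
        result.radiance      += w[k] * full.radiance;
        result.transmittance += w[k] * full.transmittance;
    }
    return result;
}
```

The per-parent TraceSegment calls are what you pay for plugging the leak, which is also where the elegance goes.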
"RT power would mean more rays, so less noise and sampling artefacts."

Yes, although on the GI side you also need to shade the rays at some frequency. Diffuse GI is - as always - the "easy"/cheap case here, while with specular there's a limit to how much cheating you can do. I think a lot of the tradeoffs really do come down to how you cheat/approximate the theoretically unbounded specular cases in a way that tends to work with a given set of content, because basically all real-time techniques break down spectacularly in various situations right now; places where offline can still just brute-force a zillion rays.
"Where does shader LOD on rays fit into the pipeline? Are simplified shaders part of the surface's material, or are they substituted on a ray evaluation? Does RT lead to more shaders, or more complex shaders?"

It depends a lot on the implementation, but very often the shading of the RT scene is simpler. In games that do mostly diffuse GI it's often not even shaded; it's just a flat color. The more specular, the more you have to care, but you'll notice in most games that both geometry and shading are still quite simplified on the RT side.
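A made-up illustration (not any particular engine's API) of what "simpler shading on the RT side" usually means in practice: the same surface exposes a full evaluation for primary shading and a far cheaper one for GI ray hits.

```cpp
struct Vec3 { float x, y, z; };

struct Material {
    Vec3 albedo;

    // Primary/raster path: in a real renderer this samples textures and normal
    // maps, evaluates the specular BRDF, etc. Stubbed here.
    Vec3 ShadeFull() const { return albedo; }

    // Ray-hit path: no normal maps, no specular, often lower-res textures,
    // sometimes literally just a flat color modulated by incoming light.
    Vec3 ShadeForGIRay(Vec3 incomingLight) const {
        return { albedo.x * incomingLight.x,
                 albedo.y * incomingLight.y,
                 albedo.z * incomingLight.z };
    }
};
```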
"…played games with AO that looks worse and costs about as much."

Please note ball lightning doesn't act as a light source (for example), and terrain AO might miss the mark too much even if it's an ideal scenario for SS. I'd still prefer Carmack's idea of how they could've shipped Rage with separate diffuse and swapped it on the fly, claiming no downside compared to real-time lighting.