Next gen lighting technologies - voxelised, traced, and everything else *spawn*

Watching it again today, I think it's more just classical ray tracing of a lower-LOD triangle version of the scene. Your idea would show more irregular tessellation given the random orientation of the shells, and my ideas above would also show similar artifacts from the world-space grid alignment.
Their mention of Total Illumination is maybe just a bit misleading towards voxels. I think they use voxels only as an acceleration structure here, if at all.
At distance there does indeed seem to be a fallback to a voxel approximation, but there is only one spot in the video where I can see this. Otherwise it's too perfect.

The missing noise also made me lean toward alternative image-based tech like VSM, but now I guess the reason is simply this:
Instead of perturbing the ray directions with random angles for AA (as usual, which causes noise), they use a globally uniform angular offset for the whole frame (or just per normal direction, whatever).
As a result, rays sharing the same normal (common for smooth man-made materials with sharp reflections, and also for water) stay parallel and are much more likely to traverse the same nodes of the acceleration structure.
And instead of noise you get the kind of banding/ghosting visible in the video.
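If I had to sketch the difference in code, it would be something like the following. This is just my guess at the idea in CPU-side C++; none of these names are from CRYENGINE, and a real implementation would live in a shader and renormalize the perturbed direction:

```cpp
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

Vec3 operator*(const Vec3& v, float s) { return {v.x * s, v.y * s, v.z * s}; }
Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Ideal mirror reflection of incoming direction I around normal N.
Vec3 reflect(const Vec3& I, const Vec3& N) { return I - N * (2.0f * dot(I, N)); }

// Usual approach: every pixel gets its own random angle -> noise, and rays
// from neighbouring pixels diverge, so acceleration-structure traversal is
// incoherent.
Vec3 reflectJitteredPerPixel(const Vec3& I, const Vec3& N, std::mt19937& rng,
                             const Vec3& tangent, float maxAngle) {
    std::uniform_real_distribution<float> dist(-maxAngle, maxAngle);
    Vec3 r = reflect(I, N);
    return r + tangent * std::tan(dist(rng));   // small-angle tilt, unique per pixel
}

// Guessed alternative: one angular offset for the WHOLE frame (rotated across
// frames and resolved by TAA). Rays sharing a normal stay parallel, so they
// tend to walk the same acceleration-structure nodes -> coherence, and the
// error shows up as frame-wide banding/ghosting instead of per-pixel noise.
Vec3 reflectJitteredPerFrame(const Vec3& I, const Vec3& N,
                             const Vec3& tangent, float frameAngle) {
    Vec3 r = reflect(I, N);
    return r + tangent * std::tan(frameAngle);  // same tilt for every pixel this frame
}
```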

So the missing noise is no hint of any spectacular new tech either. It really seems to be mostly triangle ray tracing?

Maybe they limit themselves to only sharp reflections for now because coherent rays mean no need for denoising, but the quality shown is better than needed, and glossier reflections could be faked with a bilateral blur (rough sketch below) / by falling back to voxels earlier?
I'd be mostly interested in options to trade accuracy vs. performance. The video is 30 fps at 2K; upscaling reflections could work to get a 4K game on next gen, I guess.
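For the glossy part, a depth-aware bilateral blur over the reflection buffer could look roughly like this toy 1D version. The weights and the roughness-to-radius mapping are made up just to illustrate the idea:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct ReflPixel { float color; float depth; float roughness; };

// Blur one row of a (hypothetical) reflection buffer: the kernel radius grows
// with surface roughness to fake glossiness, while large depth differences
// suppress contributions so reflections don't bleed across geometry edges.
std::vector<float> blurReflectionsRow(const std::vector<ReflPixel>& row) {
    std::vector<float> out(row.size());
    for (std::size_t i = 0; i < row.size(); ++i) {
        int radius = (int)std::round(row[i].roughness * 8.0f);  // rougher -> blurrier
        float sum = 0.0f, wsum = 0.0f;
        for (int k = -radius; k <= radius; ++k) {
            std::size_t j = std::min(row.size() - 1,
                                     (std::size_t)std::max<long>(0, (long)i + k));
            float spatial = std::exp(-(float)(k * k) / (2.0f * radius * radius + 1e-5f));
            float range   = std::exp(-std::abs(row[j].depth - row[i].depth) * 4.0f);
            sum  += row[j].color * spatial * range;  // edge-aware weight
            wsum += spatial * range;
        }
        out[i] = sum / wsum;  // radius 0 degenerates to a pass-through
    }
    return out;
}
```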
Well, yes, I also think they ray-trace a 'voxelized' triangle mesh and use the voxels for empty-space skipping during ray tracing and for storing triangles. There are various ways to do this efficiently, but basically per voxel you have a flag that tells whether it is empty or not. A non-empty voxel would have a limited number of triangles (like 1-4) associated with it, similar to what marching cubes produces. So instead of a BVH, 3D textures are used to accelerate the ray tracing.
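In case it helps, here's roughly how I picture that traversal as a CPU-side sketch (an Amanatides-Woo style 3D-DDA over an occupancy grid). All names and the exact data layout are my own assumptions, not Crytek's actual code:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

struct Tri { /* three vertices, material id, ... */ };

struct VoxelGrid {
    int res;                               // grid is res^3; a 3D texture on the GPU
    std::vector<uint8_t> occupied;         // 1 = voxel holds geometry, 0 = empty
    std::vector<std::vector<Tri>> tris;    // up to ~1-4 triangles per solid voxel
    bool solid(int x, int y, int z) const {
        return occupied[(z * res + y) * res + x] != 0;
    }
};

// 3D-DDA walk: empty voxels are skipped in constant time per step, and
// ray/triangle tests only run inside voxels flagged as solid. Assumes the
// origin is already in grid space and no direction component is exactly zero
// (real code would clamp with an epsilon).
bool trace(const VoxelGrid& g, float ox, float oy, float oz,
           float dx, float dy, float dz) {
    int x = (int)ox, y = (int)oy, z = (int)oz;
    int sx = dx > 0 ? 1 : -1, sy = dy > 0 ? 1 : -1, sz = dz > 0 ? 1 : -1;
    float tdx = std::abs(1.0f / dx), tdy = std::abs(1.0f / dy), tdz = std::abs(1.0f / dz);
    float tx = tdx * 0.5f, ty = tdy * 0.5f, tz = tdz * 0.5f;  // simplified init

    while (x >= 0 && x < g.res && y >= 0 && y < g.res && z >= 0 && z < g.res) {
        if (g.solid(x, y, z)) {
            for (const Tri& t : g.tris[(z * g.res + y) * g.res + x]) {
                // rayTriangleIntersect(t, ...) would go here (e.g. Moller-Trumbore);
                // with only a handful of triangles per voxel this stays cheap.
                (void)t;
            }
            return true;  // real code would return the nearest hit, not just a flag
        }
        // Step to whichever voxel boundary the ray crosses first.
        if (tx < ty && tx < tz) { x += sx; tx += tdx; }
        else if (ty < tz)       { y += sy; ty += tdy; }
        else                    { z += sz; tz += tdz; }
    }
    return false;  // left the grid without hitting anything
}
```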
Anyway, impressive that they managed to make it that fast using general-purpose compute.
 
Well, yes, I also think they ray-trace a 'voxelized' triangle mesh and use the voxels for empty-space skipping during ray tracing and for storing triangles. There are various ways to do this efficiently, but basically per voxel you have a flag that tells whether it is empty or not. A non-empty voxel would have a limited number of triangles (like 1-4) associated with it, similar to what marching cubes produces. So instead of a BVH, 3D textures are used to accelerate the ray tracing.
Anyway, impressive that they managed to make it that fast using general-purpose compute.
https://github.com/CRYTEK/CRYENGINE/commit/e661217049eeaad116b9e796b1357743fa4dc3d5
 
How is this possible without RT?
Not sure I understand. While the Infiltrator demo is still amazing, it doesn't have ray tracing tech running - as far as I understand? It certainly doesn't have RT running on compute.

So my question was very much about how we can have a demo with RT running that well on compute, without RTX? What's stopping anyone from using that in the real world? What am I missing?
 
Yes, curious how they achieved anything remotely as good as that without RT hardware, on a Vega. It is very close to proper hardware-accelerated RT, and it is amazing that it manages to run at that framerate even if not butter smooth.
It looks outstanding and very promising.
That Infiltrator demo reminded me why I am so disappointed with this gen. What hardware was the Infiltrator demo running on? The quality is beyond anything we could get on current reasonably priced hardware.
This and the Final Fantasy demo by Square Enix were a hint of what was to come, not what was achievable this gen.
 
So my question was very much about how we can have a demo with RT running that well on compute, without RTX? What's stopping anyone from using that in the real world? What am I missing?
If this is borrowing from their other dynamic GI technology, and assuming some of the restrictions are still in place, I believe it's still a single light source for the dynamic GI portion. But in this video, it doesn't look that way. Not sure if there are limitations like mixing light sources for GI. At least the last time I checked, both Crytek and UE were running a single source, and usually that meant for most games implementing the technology the single source was the 'Sun'.

Not sure how they did reflections; I'm going to assume by shooting rays.

I guess the big part is whether they can actually be implemented in a game. That's generally been my barometer for these types of things, but this looks promising.
 
That Infiltrator demo reminded me why I am so disappointed with this gen. What hardware was the Infiltrator demo running on? The quality is beyond anything we could get on current reasonably priced hardware. This and the Final Fantasy demo by Square Enix were a hint of what was to come, not what was achievable this gen.

Both the Infiltrator and Agni's Philosophy Final Fantasy tech demos were running on a single GTX 680.
 
That Infiltrator demo reminded me why I am so disappointed with this gen. What hardware was the Infiltrator demo running on? The quality is beyond anything we could get on current reasonably priced hardware.

Both the Infiltrator and Agni's Philosophy Final Fantasy tech demos were running on a single GTX 680.

Yup, the PC running Agni's Philosophy in 2012 had a single GTX 680, which was about 3 TFLOPS (Nvidia Kepler architecture), an i7-3770K at 3.5 GHz, and 32 GB of RAM.
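(That ~3 TFLOPS figure checks out: Kepler does 2 FLOPs per CUDA core per clock, and the GTX 680 has 1536 cores at roughly 1 GHz, so 1536 × 2 × ~1.0 GHz ≈ 3.1 TFLOPS.)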
 
Yup, the PC running Agni's Philosophy in 2012 had a single GTX 680, which was about 3 TFLOPS (Nvidia Kepler architecture), an i7-3770K at 3.5 GHz, and 32 GB of RAM.
So what prevents our consoles from outputting something similar? Is it memory constraints?
I had the impression they had stacked multiple GPUs in one of the presentations.
 
So what prevents our consoles from outputting something similar? Is it memory constraints?
I had the impression they had stacked multiple GPUs in one of the presentations.

A lot of it is the need to build an actual practical GAME engine and GAMEPLAY-ready assets instead of a tech demo animation.
 
These demos will always have far superior graphics compared to a full-blown game; you know exactly what you do and don't have to render.
A lot of it is the need to build an actual practical GAME engine and GAMEPLAY-ready assets instead of a tech demo animation.
Yes, I understand that with demos you can pull off more detail, since you can focus on what you want to show and everything is under the control of the artists and directors, but I still doubt these visuals could be achieved by our consoles even as tech demos.
 
Yes, I understand that with demos you can pull off more detail, since you can focus on what you want to show and everything is under the control of the artists and directors, but I still doubt these visuals could be achieved by our consoles even as tech demos.

There you go, running on a base PS4.
 