Next gen lighting technologies - voxelised, traced, and everything else *spawn*

Scott_Arm

The reflections demo in Battlefield V was the best, because it was demoing something that screen-space reflections just can't do. The other demos were kind of trash, especially the GI demo. They compared GI to no GI in a static scene, instead of comparing it to any of the current GI solutions that are available.
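Just to illustrate the "can't do" part: screen-space reflections only have the depth buffer to march against, so anything off-screen, or hidden from the camera, simply isn't there to reflect. A deliberately tiny sketch in plain C++ (orthographic, made-up depth values, nothing to do with Frostbite's actual SSR):

```cpp
// Toy sketch of the SSR failure mode: the march can only consult the depth
// buffer, so off-screen or camera-occluded geometry cannot be reflected.
#include <cstdio>

const int W = 8, H = 8;
// Tiny synthetic depth buffer: larger value = further away. A "pillar" at
// depth 2 occupies column 6; everything else is a far wall at depth 10.
float depth[H][W];

// March from pixel (x, y) one pixel per step along (dx, dy), with the ray
// moving away from the camera by dz per step. Returns true if the march
// found on-screen geometry to reflect; false means SSR has no data and a
// renderer must fall back to a cube map (or actual ray tracing).
bool ssr_march(int x, int y, int dx, int dy, float ray_depth, float dz) {
    for (int step = 0; step < 32; ++step) {
        x += dx; y += dy; ray_depth += dz;
        if (x < 0 || x >= W || y < 0 || y >= H)
            return false;                 // left the screen: nothing to sample
        if (depth[y][x] <= ray_depth)
            return true;                  // ray passed behind visible geometry: hit
    }
    return false;
}

int main() {
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            depth[y][x] = (x == 6) ? 2.0f : 10.0f;

    // Reflection ray heading right: eventually crosses the visible pillar.
    printf("ray toward on-screen pillar : %s\n",
           ssr_march(1, 4, +1, 0, 1.0f, 0.3f) ? "hit" : "no data");
    // Reflection ray heading left/up: walks off the screen, SSR fails.
    printf("ray toward off-screen object: %s\n",
           ssr_march(1, 4, -1, -1, 1.0f, 0.3f) ? "hit" : "no data");
    return 0;
}
```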
 
Just to be clear, there's no "secret sauce" about Nvidia's RTX GPUs. Every DX12-class GPU can do it, just slower. As a matter of fact, all of the DXR demos before last week's Turing reveal were running on a DX12 GPU without "real" HW RT acceleration (Volta; its Tensor cores were only used for denoising). The RT blocks in Turing, which accelerate and handle the BVH, are what give it the nearly 10x boost in performance when tracing rays. It's up to AMD (and later Intel) to add similar blocks to their GPUs, but there's no new special feature set that prevents a DX12 GPU from supporting DXR, RT in Vulkan, etc.
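To put the "any DX12 GPU can do it, just slower" point in concrete terms, here's a rough sketch in plain C++ (the textbook algorithm, not DXR and not Nvidia's implementation) of the BVH walk that Turing's RT cores handle in fixed function and that a GPU without them has to grind through on its shader cores, ray after ray:

```cpp
// Minimal sketch of stack-based BVH traversal: dependent, divergent memory
// reads per ray, which is exactly the work dedicated RT hardware offloads.
#include <cstdio>
#include <vector>
#include <algorithm>
#include <utility>

struct Vec3 { float x, y, z; };

struct Ray {
    Vec3 o;     // origin
    Vec3 inv_d; // 1/direction, precomputed for the slab test
};

struct Aabb { Vec3 lo, hi; };

// Node of a flattened binary BVH. Leaves index triangles; interior nodes
// store the index of their right child (the left child is the next node).
struct BvhNode {
    Aabb bounds;
    int right_child;             // -1 for leaves
    int leaf_first, leaf_count;  // triangle range for leaves
};

// One axis of the standard slab test.
static void slab(float o, float inv_d, float lo, float hi, float& t0, float& t1) {
    float ta = (lo - o) * inv_d;
    float tb = (hi - o) * inv_d;
    if (ta > tb) std::swap(ta, tb);
    t0 = std::max(t0, ta);
    t1 = std::min(t1, tb);
}

// Does the ray hit the box before t_max?
bool hit_aabb(const Ray& r, const Aabb& b, float t_max) {
    float t0 = 0.0f, t1 = t_max;
    slab(r.o.x, r.inv_d.x, b.lo.x, b.hi.x, t0, t1);
    slab(r.o.y, r.inv_d.y, b.lo.y, b.hi.y, t0, t1);
    slab(r.o.z, r.inv_d.z, b.lo.z, b.hi.z, t0, t1);
    return t0 <= t1;
}

// Walk the tree for one ray, counting how many boxes get tested.
int trace(const Ray& r, const std::vector<BvhNode>& nodes, float t_max) {
    int boxes_tested = 0;
    int stack[64];
    int sp = 0;
    stack[sp++] = 0;                          // start at the root
    while (sp > 0) {
        int idx = stack[--sp];
        const BvhNode& n = nodes[idx];
        ++boxes_tested;
        if (!hit_aabb(r, n.bounds, t_max)) continue;
        if (n.right_child < 0) {
            // Leaf: a real tracer would run ray/triangle tests here
            // and shrink t_max on a hit.
        } else {
            stack[sp++] = idx + 1;            // left child sits right after the parent
            stack[sp++] = n.right_child;
        }
    }
    return boxes_tested;
}

int main() {
    // Tiny tree: a root whose two leaf children cover the halves of a box.
    std::vector<BvhNode> nodes = {
        { { {-1, -1, -1}, { 1, 1, 1} },  2, -1, -1 },  // root, right child at index 2
        { { {-1, -1, -1}, { 0, 1, 1} }, -1,  0,  1 },  // left leaf
        { { {  0, -1, -1}, { 1, 1, 1} }, -1,  1,  1 }, // right leaf
    };
    // Ray along +x; very large inv_d components stand in for 1/0.
    Ray r{ { -5, 0, 0 }, { 1.0f, 1e8f, 1e8f } };
    printf("BVH nodes visited: %d\n", trace(r, nodes, 100.0f));
    return 0;
}
```

Every single ray repeats a walk like this, which is why dedicating silicon to it pays off so disproportionately.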
 
As an aside on the PVR stuff, that was a tile-based renderer IIRC, which let you do all sorts of funky stuff for cheap that classic GPUs can't, but was rubbish at other things (e.g. coloured lighting back in the PVR1 days), so I'd be very sceptical as to how generalisable their solution is to "normal" GPUs.
 
Just to be clear, there's no "secret sauce" about Nvidia's RTX GPUs. Every DX12-class GPU can do it, just slower. As a matter of fact, all of the DXR demos before last week's Turing reveal were running on a DX12 GPU without "real" HW RT acceleration (Volta; its Tensor cores were only used for denoising). The RT blocks in Turing, which accelerate and handle the BVH, are what give it the nearly 10x boost in performance when tracing rays. It's up to AMD (and later Intel) to add similar blocks to their GPUs, but there's no new special feature set that prevents a DX12 GPU from supporting DXR, RT in Vulkan, etc.
A feature needs to be fast enough to be used in real applications. A 10x difference in performance is the difference between having a workable feature and just a paper spec. It's the difference between a DX7-class GPU and running a game on the software rasteriser. ;) AMD needs hardware acceleration that's fast enough, and until they show something, we have to be a little cautious that their hands may be somewhat tied by existing IP that prevents them from using some acceleration methods, hampering what they can achieve.

In short, it's not a given that AMD will absolutely have comparable RT acceleration ready for release with their next architecture, even though they knew all about MS's DXR (not the only RT API).
 
It's nice that it's scalable; the APIs work in that regard, at least from where we stand today. The polish may not be fully there, but in time we can see it progressing to be a standard for sure. I agree with the principle that this should be a selling point for next-gen consoles: resolution is a moot point after 4K, HDR has been implemented, frame rate is supported and down to developer choice, and I'm willing to bet that to some degree moving to RT could reduce the CPU load vs a faked rasterized scene. The mid-gen refresh buys time to delay next gen until the price point is feasible.

The SoC is large; I believe they said it was on 12nm? So 7nm would make for a smaller SoC and bring the price down, and memory prices should drop as well. I do agree prices need to come down, but it's not as far out of reach as we earlier speculated.
 
Maybe ray tracing is the feature for the Pro version next time. As long as the software is designed with it in mind from the start (even if performance is very limited in the first iteration), that can be the Pro differentiator. It's not going to be 8K resolution.
 
Do you guys think that Sony/Microsoft will be able to deliver a console that provides both RT and a generational leap in other graphics/CPU departments, such as proper hair/cloth simulation, polygon density, AA, AI, etc., AND at a reasonable price?

I love RT, I really do, but honestly I prefer better physics simulations and more complex graphics overall instead of just more accurate shadows and reflections. Yes, I know RT can produce prettier graphics, but I'd rather see a room full of interactions between complex objects, even if the shadows and reflections are "fake", than an empty room with a beautiful, shiny marble on a mirror floor. ¬_¬ I'm oversimplifying the examples just to make my point. And at any rate, I think we've reached a point where rasterization can produce very pretty/photorealistic graphics. I wouldn't trade so many other aspects of graphical beauty/simulation for the personal satisfaction of just knowing that a reflection is perfect.
 
I can actually see a PS5 Pro including the RT hardware a few years down the line, when the tech has matured, is bug free, more games support it, and there's enough juice ("20+ TFs") to run everything else. Ray tracing is just not worth the hardware cost at the moment; if you look at Book of the Dead or the Deep Down demo, neither of which uses RT, they looked far more impressive to me than everything Nvidia has shown. If the PS5 could devote all its transistors to traditional rendering techniques, then I believe the end result would still be better overall. For me, I would take superior geometry, shaders, fluid sims, GPU particles, deformations and whatnot over some noisy but accurate reflections and shadows. And yes, Voxel GI is a reasonable substitute for straight up RT GI.
 
How does RT affect bandwidth requirements? These monster GPUs aren't massively endowed with bandwidth. Could a shift of workloads to raytracing reduce bandwidth requirements over similar quality achieved through rasterisation, thus allowing more of the BOM budget to be spent on silicon rather than RAM? Or will BW requirements be as high as we expect them?

It'd make for an interesting next gen if one machine came with raytracing on a fairly meagre RAM topology, while the other used conventional graphics on a massive TB/s stacked-RAM something-or-other solution. ;)
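Purely as a back-of-envelope (every number below is an assumption, not a measurement), the raw BVH traffic doesn't look scary on paper next to the several hundred GB/s these cards already have; the nastier problem is that secondary rays hit memory incoherently, so the effective bandwidth is well below the peak figure:

```cpp
// Back-of-envelope only: every constant is an assumption for illustration.
#include <cstdio>

int main() {
    const double width = 1920, height = 1080;   // assumed render resolution
    const double rays_per_pixel = 2.0;          // e.g. one reflection + one shadow ray (assumption)
    const double fps = 60.0;
    const double nodes_per_ray = 24.0;          // assumed average BVH nodes visited per ray
    const double bytes_per_node = 32.0;         // assumed compact BVH node size

    double rays_per_sec     = width * height * rays_per_pixel * fps;
    double bvh_bytes_per_sec = rays_per_sec * nodes_per_ray * bytes_per_node;

    printf("rays/s     : %.0f million\n", rays_per_sec / 1e6);
    printf("BVH traffic: %.1f GB/s (worst case, if nothing were cached)\n",
           bvh_bytes_per_sec / 1e9);
    return 0;
}
```

With those assumed numbers it lands around 250M rays/s and roughly 190 GB/s of uncached BVH reads; caches cut that down a lot, but the access pattern is the painful part.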
 
And yes, Voxel GI is a reasonable substitute for straight up RT GI.
I doubt this aspect.

There's a lot of confusion about what RT can accomplish vs what voxel GI can accomplish. One is an entire family of ray tracing methods that can be employed for a variety of graphical (and even audio) effects that bring us much closer to realism. The other is just GI, and voxel GI, aside from its performance cost and depending on how it's implemented, can range from somewhat worse to vastly inferior to ray traced global illumination.

There's no value in having all this computational power if all that's being done with it is faking RT. You may as well just build the hardware for RT and get what you want from it. I don't see where this argument is going; the industry, now that it has embarked towards ray tracing, will never go back. Doubling down on rasterization doesn't make a lot of sense when the platform holders (Sony and MS) are the ones in control of their own platforms.

You're not going to get a 20 TF chip in a console, ever. Even if you did, when it comes down to applications, Turing was whooping the DGX in ray traced applications.
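For what it's worth, here's a hedged sketch of the single cone march at the heart of voxel GI (in the spirit of voxel cone tracing; sample_voxels() is a stand-in for a mip-mapped 3D texture fetch, and all the numbers are made up). It shows both why it's cheap and why it's coarse: the cone only ever sees a pre-filtered voxelisation of the scene, never the actual geometry a ray tracer would intersect.

```cpp
// Sketch of one voxel cone trace: march along the cone, pick a mip level
// from the cone's footprint, composite pre-filtered colour/occlusion.
#include <cstdio>
#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };
struct Vec4 { float r, g, b, a; };

static Vec3 march(Vec3 a, Vec3 d, float s) { return { a.x + d.x * s, a.y + d.y * s, a.z + d.z * s }; }

// Stand-in for a 3D texture SampleLevel(): pretend there is a dim grey
// blocker a couple of units along +x, getting blurrier at higher mips.
Vec4 sample_voxels(Vec3 p, float mip) {
    float occupancy = (p.x > 2.0f) ? 0.6f / (1.0f + mip) : 0.0f;
    return { 0.3f * occupancy, 0.3f * occupancy, 0.3f * occupancy, occupancy };
}

// March one cone: the diameter grows with distance, which selects ever
// coarser mips; colour and alpha are composited front-to-back until the
// cone saturates or reaches its maximum distance.
Vec4 trace_cone(Vec3 origin, Vec3 dir, float half_angle, float max_dist, float voxel_size) {
    Vec4 acc = { 0, 0, 0, 0 };
    float dist = voxel_size;                       // start one voxel out to avoid self-lighting
    while (dist < max_dist && acc.a < 0.95f) {
        float diameter = std::max(voxel_size, 2.0f * dist * std::tan(half_angle));
        float mip = std::log2(diameter / voxel_size);
        Vec4 s = sample_voxels(march(origin, dir, dist), mip);
        float w = (1.0f - acc.a) * s.a;            // front-to-back compositing
        acc.r += w * s.r; acc.g += w * s.g; acc.b += w * s.b; acc.a += w;
        dist += 0.5f * diameter;                   // step scales with the cone footprint
    }
    return acc;
}

int main() {
    Vec4 gi = trace_cone({ 0, 0, 0 }, { 1, 0, 0 }, 0.3f, 20.0f, 0.25f);
    printf("accumulated radiance: %.3f %.3f %.3f (occlusion %.2f)\n", gi.r, gi.g, gi.b, gi.a);
    return 0;
}
```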
 
How does RT affect bandwidth requirements? These monster GPUs aren't massively endowed with bandwidth. Could a shift of workloads to raytracing reduce bandwidth requirements over similar quality achieved through rasterisation, thus allowing more of the BOM budget to be spent on silicon rather than RAM? Or will BW requirements be as high as we expect them?

It'd make for an interesting next gen if one machine came with raytracing on a fairly meagre RAM topology, while the other used conventional graphics on a massive TB/s stacked-RAM something-or-other solution. ;)
Yeah, bandwidth requirements didn't seem major here. The 2080 Ti only has 11 GB. Not exactly breaking new ground here.
Looking at something like the 2070, where a high-end console spec could potentially land:
8 GB GDDR6

12 GB - 16 GB GDDR6 seems an ideal spot to land on.
 
No sir. Book of the Dead was already running on a PS4 Pro at 1080p/30fps. The Deep Down reveal trailer was real-time, using voxel GI and fluid-dynamics fire, but it later got cut down a notch to actually run on the PS4 hardware.
Highly controlled scenes. Not applicable to real gameplay.
We talk about faked graphics a lot; things can be faked like that because it's not real gameplay: the camera and direction are locked, so they can set up their GI to work for the scene.

Real games don't work like that, unfortunately.
 
I doubt this aspect.

There's a lot of confusion about what RT can accomplish vs what voxel GI can accomplish. One is an entire family of ray tracing methods that can be employed for a variety of graphical (and even audio) effects that bring us much closer to realism. The other is just GI, and voxel GI, aside from its performance cost and depending on how it's implemented, can range from somewhat worse to vastly inferior to ray traced global illumination.

There's no value in having all this computational power if all that's being done with it is faking RT. You may as well just build the hardware for RT and get what you want from it. I don't see where this argument is going; the industry, now that it has embarked towards ray tracing, will never go back. Doubling down on rasterization doesn't make a lot of sense when the platform holders (Sony and MS) are the ones in control of their own platforms.

You're not going to get a 20 TF chip in a console, ever. Even if you did, when it comes down to applications, Turing was whooping the DGX in ray traced applications.
While I totally agree that RT does GI better in the end, the GI effect perceived by human eyes from a high-quality voxel GI solution still impresses on a similar wow level. It still totally makes sense to go with a rasterizer given the hardware grunt we're limited to currently; it may sound contradictory, but compromise is sometimes needed for a better overall result, that's my theory. Of course I'm not saying we stick with it for good; at some point we shall fully embrace it, but I don't believe right now is the time, especially when a $1200 GPU is only running Shadow of the Tomb Raider at 1080p 30-50fps.
http://www.pcgameshardware.de/Grafi...ormance-in-Shadow-of-the-Tomb-Raider-1263244/
 
Highly controlled scenes. Not applicable to real gameplay.
We talk about faked graphics a lot; things can be faked like that because it's not real gameplay: the camera and direction are locked, so they can set up their GI to work for the scene.

Real games don't work like that, unfortunately.
Was this post directed at me?
I was just answering Shifty's question... The Deep Down trailer was pre-rendered; Book of the Dead was/is not. Whether it's applicable to gameplay wasn't really the subject of my post.
 
Highly controlled scenes. Not applicable to real gameplay.
We talk about faked graphics a lot; things can be faked like that because it's not real gameplay: the camera and direction are locked, so they can set up their GI to work for the scene.

Real games don't work like that, unfortunately.
Even if it's fake, the point is that you don't need RT to produce mind-blowing visuals; a rasterizer with a heavy portfolio of other high-end techniques still does a wonderful job.
 
Eeep, ~30 FPS in the brightly lit, complex mechanical area vs up to 70 staring at a wall in shade. I thought RT was more or less a constant load (per light source) once you had decided how many rays you were going to trace?
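The budgeted ray count per pixel may be fixed, but the work per ray isn't: what a ray hits decides whether further rays (reflections, shadow rays) get spawned and how deep each traversal goes. A toy sketch with an invented two-sphere scene (nothing to do with the game's actual renderer) just to show the count moving around:

```cpp
// Counts how many rays a single primary ray ends up costing, depending on
// what it hits. Directions are assumed normalised.
#include <cstdio>
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere { Vec3 c; float r; bool mirror; };

static const Sphere scene[] = {
    { { 0, 0,  5 }, 1.0f, true  },   // shiny sphere straight ahead
    { { 0, 0, -5 }, 1.0f, false },   // matte sphere behind the camera
};

// Returns the index of the closest sphere hit, or -1 on a miss.
int intersect(Vec3 o, Vec3 d, float& t_hit) {
    int best = -1; t_hit = 1e30f;
    for (int i = 0; i < 2; ++i) {
        Vec3 oc = sub(o, scene[i].c);
        float b = dot(oc, d), c = dot(oc, oc) - scene[i].r * scene[i].r;
        float disc = b * b - c;
        if (disc < 0) continue;
        float t = -b - std::sqrt(disc);
        if (t > 1e-3f && t < t_hit) { t_hit = t; best = i; }
    }
    return best;
}

// Counts rays actually traced for one primary ray, up to 'depth' bounces.
int trace_count(Vec3 o, Vec3 d, int depth) {
    float t; int rays = 1;                 // the ray we are tracing now
    int hit = intersect(o, d, t);
    if (hit < 0) return rays;              // miss: nothing else to do
    rays += 1;                             // count one shadow ray toward a single light
    if (scene[hit].mirror && depth > 0) {
        Vec3 p = { o.x + d.x * t, o.y + d.y * t, o.z + d.z * t };
        Vec3 n = sub(p, scene[hit].c);
        float k = 2.0f * dot(d, n) / dot(n, n);
        Vec3 refl = { d.x - k * n.x, d.y - k * n.y, d.z - k * n.z };
        rays += trace_count(p, refl, depth - 1);   // recursion only on shiny hits
    }
    return rays;
}

int main() {
    printf("pixel looking at mirror sphere: %d rays\n", trace_count({0,0,0}, {0,0,1}, 3));
    printf("pixel looking at empty sky    : %d rays\n", trace_count({0,0,0}, {0,1,0}, 3));
    return 0;
}
```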
 