Next gen lighting technologies - voxelised, traced, and everything else *spawn*

If AMD half-asses RT then they will surrender both the IQ AND the performance advantage to NVIDIA.

They surrendered that to NV many years ago. RT doesn't do anything to change that.

And despite that the vast majority of games aren't implementing much that can't be done on AMD hardware. Why? Because the vast majority of games aren't sold on PC, they are sold on consoles.

As PC users, if we're lucky we'll get some small graphical add-ons that generally aren't terribly efficient and don't dramatically change how games look.

RT changes that to some extent, but it's still hit and miss. Metro: Exodus occasionally looks quite good with RT on, but for the most part it doesn't look significantly far ahead of any other game you can play on PC.

Whether AMD comes out with dedicated hardware RT or not doesn't change the fact that the vast majority of games are going to target whatever consoles can do. A few (read: a small niche of) game developers will implement advanced PC-only graphics technology. Just like there's only a small niche of developers implementing RT via RTX in their games right now.

NV will maintain their lead in the PC space in that case, but no game made will fail to run on AMD hardware.

And that all comes back to what will be on the next round of consoles.

I fully expect some sort of RT support, but I highly doubt it will be RT-specific hardware. I fully expect more general-purpose hardware that might have some tweaks to help with RT-related work but is available to use however the developer wishes.

I also predict that while it may be slower at the very specific types of RT that RTX accelerates, the end result of a more flexible structure is that RT will end up looking better on the next-gen consoles than it does on the RTX hardware that is currently available for purchase.

It's entirely possible I might be wrong on that. But then no one outside of AMD, Microsoft, and Sony knows what's going to be in the next round of consoles and what they might include to help out with RT.

Regards,
SB
 
You don't know that with next-gen compute. I guess it all depends on what people mean by 'ray tracing hardware'. It is a pretty silly argument over whether consoles have a hardware feature or not when no one can even say what exactly that hardware is. ;)
Do we know what sort of computational power and memory resources are needed to do RTRT? I ask because Sony invested in async compute for the PS4. If (and it's a big if) RTRT could be carved out of the compute budget without fixed hardware, we could see the consoles aim for performance above that threshold and allow developers to choose which features to implement: RTRT with upscaled 1080p at 30 fps, or 4K at 60 fps, or some other optimum (many more fps) for VR, for example.

I'd prefer this sort of approach as it leaves design decisions to the software teams and we as consumers get more variety and the industry gets to experiment and eventually adopt best use cases.

The other consideration, in my opinion anyway, is that over the next 36 months I'd wager 4K will be in most homes. But other than resolution, I'm not sure there's enough of a difference between the current generation and what the next generation would offer if we're not using more sophisticated lighting techniques. I doubt many consumers can articulate or even recognize the differences between a PS4 and a One X that justify the additional cost.

The next generation needs more than 4K to justify an upgrade; we need to see next-generation gaming in terms of AI and the immersion offered by the virtual world, or I'll be disappointed.
 
Is 70 fps in Minecraft at 1440p top-notch performance? It uses 2 bounces vs. 1 for Metro. It is also just a demo vs. a production implementation, so I'm not sure it would really be fair to compare performance anyway. It does look very nice, and I do like his idea of storing the average colour of a particular texture for GI purposes: instead of actually sampling a point on the texture for colour information, he can just use the same average sample for the whole texture. There must be some performance improvement there, and it might actually lead to a less noisy and "better" look when you're dealing with one ray per pixel or fewer.
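In code terms, the idea would look something like this (my own rough sketch, not the demo's actual implementation; the Material layout is made up):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical material record (not the demo's actual data layout): the
// average colour is computed once, at load time, by averaging every texel
// of the albedo map.
struct Material {
    std::vector<uint32_t> albedoTexels; // full-resolution albedo texture
    float avgAlbedo[3];                 // precomputed mean of all texels
};

// Shading a GI bounce hit: instead of computing UVs and filtering the
// texture, just reuse the per-material average. With one ray per pixel or
// fewer, the result is lower variance (less noise) at the cost of detail
// that diffuse GI would blur away anyway.
inline void shadeGIHit(const Material& m, const float irradiance[3], float out[3]) {
    for (int c = 0; c < 3; ++c)
        out[c] = m.avgAlbedo[c] * irradiance[c];
}
```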

Edit: It's a GTX 1080 on a very simple map, even by Minecraft standards. It would be interesting to see how the BVH affects performance as the world gets bigger. Geometric complexity here is pretty low, but the GTX 1080 isn't exactly a ray tracing GPU, even by raw compute standards.

Also, Minecraft is written in Java ... wtf ... so I imagine there's some lost performance there.
Tracing unit-sized cubes must be incredibly fast, as it's basically the classic Wolfenstein 3D raycasting DDA with a third dimension added.
Some empty-space skipping should be easy and fast as well.
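For reference, a minimal sketch of that 3D DDA (illustrative only; the solid() lookup stands in for whatever block storage the engine actually uses):

```cpp
#include <cmath>

// Minimal 3D DDA through a grid of unit cubes (Amanatides & Woo style), i.e.
// the Wolfenstein-style raycast extended to a third dimension. solid() is a
// placeholder for the world's block lookup; bounds checks are omitted.
bool solid(int x, int y, int z); // assumed world query, not defined here

bool traceVoxels(const float o[3], const float d[3], int maxSteps, int hit[3]) {
    int   cell[3];   // current voxel coordinates
    int   step[3];   // +1 or -1 per axis
    float tMax[3];   // ray parameter t at the next voxel boundary on each axis
    float tDelta[3]; // t needed to cross one whole voxel on each axis

    for (int a = 0; a < 3; ++a) {
        cell[a]    = (int)std::floor(o[a]);
        step[a]    = (d[a] >= 0.f) ? 1 : -1;
        float next = (d[a] >= 0.f) ? (cell[a] + 1.f - o[a]) : (o[a] - cell[a]);
        tDelta[a]  = (d[a] != 0.f) ? std::fabs(1.f / d[a]) : INFINITY;
        tMax[a]    = (d[a] != 0.f) ? next * tDelta[a] : INFINITY;
    }

    for (int i = 0; i < maxSteps; ++i) {
        if (solid(cell[0], cell[1], cell[2])) {
            hit[0] = cell[0]; hit[1] = cell[1]; hit[2] = cell[2];
            return true;
        }
        // Step along the axis whose next boundary is closest.
        int a = (tMax[0] < tMax[1]) ? (tMax[0] < tMax[2] ? 0 : 2)
                                    : (tMax[1] < tMax[2] ? 1 : 2);
        cell[a] += step[a];
        tMax[a] += tDelta[a];
    }
    return false;
}
```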
 
Do we know what sort of computational power and memory resources are needed to do RTRT?
We don't know. I've never seen a serious approach to doing RT in games yet. Things like OptiX or Radeon Rays do not tell us anything, and neither do low-poly Quake or Minecraft.
Personally I do RT in compute for GI, and from this I'm sure it's doable, but I use a hierarchy of discs instead of triangles; also, for diffuse GI, LOD can be used very aggressively.
I'm very optimistic and will try to use this for sharp specular reflections as well ASAP. Whatever performance I get, the same should be possible on next-gen consoles using triangles too, because with fp16 the instruction count and register usage would be almost the same.
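For illustration, the per-primitive test such a disc-based tracer could use is roughly one plane intersection plus a radius check (a simplified sketch, not the actual implementation; names and layout are made up):

```cpp
#include <cmath>

// Rough sketch of a ray vs. disc (surfel) test, the kind of primitive test a
// disc-hierarchy GI tracer could use instead of ray-triangle intersection:
// intersect the disc's plane, then check the hit point against the radius.
struct Disc { float center[3]; float normal[3]; float radius; };

static inline float dot3(const float a[3], const float b[3]) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

bool intersectDisc(const float ro[3], const float rd[3], const Disc& d, float& tOut) {
    float denom = dot3(rd, d.normal);
    if (std::fabs(denom) < 1e-6f) return false;          // ray parallel to disc plane

    float toCenter[3] = { d.center[0] - ro[0], d.center[1] - ro[1], d.center[2] - ro[2] };
    float t = dot3(toCenter, d.normal) / denom;
    if (t <= 0.f) return false;                          // plane is behind the ray

    float p[3] = { ro[0] + rd[0] * t - d.center[0],
                   ro[1] + rd[1] * t - d.center[1],
                   ro[2] + rd[2] * t - d.center[2] };
    if (dot3(p, p) > d.radius * d.radius) return false;  // hit point outside the disc
    tOut = t;
    return true;
}
```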

I think in the end it is just a question of detail. If next gen has no RT hardware, specular reflections might end up less sharp than with RTX, or shadows will be less exact, but for GI there's no visible difference.
No RT hardware could even lead to better results, because devs would be forced to solve the problems with better algorithms instead of raw but restricted brute-force power.
Additionally, if Volta can do RT fast enough, then next gen can do it too. And doing RT at half res is enough anyways.

Who knows - I guess next-gen games using RT are in the works, and we might see some things at the next-gen announcement that end up much more impressive than anything we've seen on PC so far. But even if not, it's just a matter of time.
 
No RT hardware could even lead to better results, because devs would be forced to solve the problems with better algorithms instead of raw but restricted brute-force power.
This is why I favor putting resources in the hands of development and letting teams determine how to allocate the compute and bandwidth. These new memory speeds create additional possibilities too; the relatively slow memory we've had till now could be the equivalent of putting a quarter in a vending machine when we need a dollar to get anything back out.
 
They surrendered that to NV many years ago. RT doesn't do anything to change that.
Only in select visual areas, like shadows (VXAO, HFTS) and some AA or WaveWorks effects. RT changes that by introducing a massive, sweeping change that involves the rest of the visual spectrum: reflections, lighting, materials, more shadows, etc.
Because the vast majority of games aren't sold on PC, they are sold on consoles.
I disagree; PC sales now equal or sometimes surpass XO sales. They've become an integral part of overall game sales. Now EVERY game is released on PC except first-party Sony exclusives, because the porting process is easier now and the sales are good.
As a PC user, if we're lucky we'll get some little graphical addons for PC that generally aren't terribly efficient and don't dramatically change how games look.
That's not entirely true either; most AAA titles now have much higher-fidelity graphics than even the One X. Just look at how Metro and Battlefield V on PC compare to the One X: the One X usually lacks SSR, tessellation, draw distance, a myriad of particle effects, etc.
but for the most part it doesn't look significantly far ahead of any other game you can play on PC.
I urge you to watch DF analysis on the tech, might change your mind there.
NV will maintain their lead in the PC space in that case, but no game made will fail to run on AMD hardware.
Of course not. Every game will run; no one is doubting that. However, AMD sales on PC will suffer because of that: they will be seen as the lower-performing, lower-IQ option, and this will degrade their presence on PC even further. That's not good for AMD, or even for the industry. It will also further impact their presence in consoles in subsequent cycles.
 
Not sure if this has been noticed yet, but RTX does a piss-poor job on human skin sometimes; it looks dramatically worse than rasterization here.
d8fMHPY.jpg
 
Not sure if this has been noticed yet, but RTX does a piss-poor job on human skin sometimes; it looks dramatically worse than rasterization here.
d8fMHPY.jpg
RT doesn't do all the rendering here. It is a hybrid solution, and the lighting conditions and models are more or less optimized for normal rasterization techniques. This can result in a very different scene.
E.g. just like those Uncharted screenshots that show how ugly the models are: they weren't really ugly, it was just a different time of day and therefore the lighting was not flattering.
But in this scene it looks more like they did something wrong in their algorithm.
 
Not sure if this has been noticed yet, but RTX does a piss-poor job on human skin sometimes; it looks dramatically worse than rasterization here.
It's the effect of indirect RT AO; contact shadows are much stronger. You need to watch this in motion, not in screenshots. Skin renders normally with RT outside of shadowed areas.
 
It's the effect of indirect RT AO; contact shadows are much stronger. You need to watch this in motion, not in screenshots. Skin renders normally with RT outside of shadowed areas.
Even in motion, the old dude especially just looks wrong and overly darkened at times. But yeah, outside of the shadowed areas it's quite fine.
 
Even in motion, the old dude especially just looks wrong and overly darkened at times. But yeah, outside of the shadowed areas it's quite fine.
Am I very strange if I think that maybe the issue is the hair (it looks like it's lit with ambient light at value 1) and not the skin? o_O And I don't see what the WTF is with the girl.
 
Am I very strange if I think that maybe the issue is the hair
It seems they do RT AO, which attenuates the lighting from probes so the faces become dark, and after that they render the hair without AO.
The hair geometry is likely too dense to be raytraced, so it does not cast AO, but it should still be possible for it to receive AO. (This is visible for smaller objects like a cup on a table: it does not cast shadows, but it is darkened.) I guess they did not want to start another RT pass after the transparency pass.
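Guessing at what that would look like in shading terms (purely illustrative, not the game's actual code):

```cpp
// Purely a guess at the composition being described: RT AO attenuates the
// indirect probe term on opaque surfaces, while the hair, drawn in a later
// transparency pass, never has the AO factor applied, so it stays bright
// against a darkened face.
struct Vec3 { float r, g, b; };

Vec3 shadeOpaque(Vec3 albedo, Vec3 direct, Vec3 probeIndirect, float rtAO) {
    return { albedo.r * (direct.r + probeIndirect.r * rtAO),
             albedo.g * (direct.g + probeIndirect.g * rtAO),
             albedo.b * (direct.b + probeIndirect.b * rtAO) };
}

Vec3 shadeHair(Vec3 albedo, Vec3 direct, Vec3 probeIndirect) {
    // No rtAO factor here -- the mismatch being discussed above.
    return { albedo.r * (direct.r + probeIndirect.r),
             albedo.g * (direct.g + probeIndirect.g),
             albedo.b * (direct.b + probeIndirect.b) };
}
```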
I wonder how RTX behaves in dense scenes like a detailed forest...
 
Additionally, if Volta can do RT fast enough, then next gen can do it too.

Volta being much slower than even a 2060, you'd better hope the console packs more power than that $3,000 compute monster; it has much more compute power than even a 2080 Ti.
 
Volta being much slower than even a 2060, you'd better hope the console packs more power than that $3,000 compute monster; it has much more compute power than even a 2080 Ti.
Volta is a 2017 GPU that wasn't designed with raytracing in mind. The fact that it can still raytrace effectively shows that there's potential for compute to be used for raytracing, and it may well be possible to just augment compute to make it that little bit more effective. That's exactly how compute came to be. People found they could use pixel and vertex shaders to run non-graphics workloads. The GPU was then modified to enable these workloads more effectively. People are currently finding they can raytrace on compute...
 
Last edited:
We know raytracing can be done with compute, and we don't really know how Nvidia's RT cores work. There could be quite a bit of compute being used.

What kind of fixed-function hardware would benefit raytracing and could be added to existing shader-core architectures relatively cheaply in order to help (mainly compute-based) raytracing? Ray-triangle intersection? AABB bounding-box generation for triangle strips, sets, and meshes? Support for hierarchical tree-like structures like a BVH?
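As one example from that list, the ray vs. AABB "slab" test at the heart of BVH traversal is small and regular enough to look like a cheap fixed-function candidate (rough sketch, assuming a precomputed reciprocal ray direction):

```cpp
#include <algorithm>
#include <utility>

// Ray vs. AABB "slab" test: one of the small, regular operations a BVH
// traversal evaluates over and over, and so a plausible candidate for a
// cheap fixed-function unit next to the shader cores.
// invDir = 1/dir is precomputed once per ray.
bool rayAABB(const float org[3], const float invDir[3],
             const float bmin[3], const float bmax[3], float tMaxRay) {
    float tNear = 0.f, tFar = tMaxRay;
    for (int a = 0; a < 3; ++a) {
        float t0 = (bmin[a] - org[a]) * invDir[a];
        float t1 = (bmax[a] - org[a]) * invDir[a];
        if (t0 > t1) std::swap(t0, t1);
        tNear = std::max(tNear, t0);
        tFar  = std::min(tFar,  t1);
        if (tNear > tFar) return false; // slab intervals do not overlap: miss
    }
    return true;
}
```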
 
I thought it was understood already that the RT cores in Turing are accelerating BVH traversal?
 