Next gen lighting technologies - voxelised, traced, and everything else *spawn*

BFV isn't a great reference point as it was a first title with RTX added as an afterthought.
Just like showing a very first RT demo integrated into an engine not designed for RT.
Look at the raw specs of how many rays can be cast (and what devs will one day be able to do with those beyond glossy reflections) in compute-based versus RTX-based approaches.
According to RTX vs. GTX benchmarks, based on emulation that can only be suboptimal, the speed-up was only 3x at best (Exodus), but lower in most games (1.5x in TR and 2x in BFV, IIRC).
So if RTX + Tensor cores are indeed just 10% of the chip area as people assume, with the Tensor cores taking the larger part of that, then RT HW is justified now.
But this does not mean the long-term restrictions are justified too, and because we won't continue to work on compute RT, we will never know when, or whether at all, the unlimited flexibility could have been the better alternative.
Control is a better comparison as Remedy included the best non-RT alternatives.
Control has alternatives to RT just like any other game does.

However, all I want to say is: to compare Crytek vs. RTX in a fair way, we have to look at more than just current-day performance, because obviously fixed-function hardware is faster at what it does.
We have to look at what the HW can't do, what it prevents from being addressed at all without making new HW, and how long this process takes as well, because this affects performance too.
If a game like Dreams shows similar graphics quality to every other game, we have to question whether we still need rasterization hardware.
Same for raytracing, only now we will wait 20 years until that raytraced Dreams appears, mainly because the hardware has to be utilized.
(How competitive would compute solutions be with twice the available chip area, removing all those raster and RT units? How fast would such a platform evolve?)

So although I'm a bit tired of taking on the role of defending software > hardware again and again, it has to be said here and there. This isn't related to actual matters like next gen, or speed-ups of less than an order of magnitude.

That's actually how a lot of deferred-rendered games handle their LOD transitions. Many have been doing it since last gen even, before TAA was even a thing... They just accepted the noise.
Yeah, it's nice, but the transition is still very noticeable as it's restricted to the duration TAA takes to converge.
With RT it's no longer this duration but spatial distance that can control the blending, so the transition is totally hidden, and the effort is nothing. Devs will love it.
BTW, something like stochastic transparency can be solved with RT much better too, together with TAA and obvious things like DOF and motion blur. Too bad we are in the hybrid era :)
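To make that concrete, here is a minimal sketch (plain C++ with made-up names and distances, not from any real engine) of what I mean by letting spatial distance drive the blend: rays whose hit distance falls inside a transition band stochastically pick one of the two neighbouring LOD levels, so the cross-fade is a fixed spatial pattern rather than something that has to converge over frames.

```cpp
#include <cstdint>

// Hash a ray index to a pseudo-random value in [0,1) for the stochastic choice.
static float hash01(uint32_t n) {
    n = (n ^ 61u) ^ (n >> 16);
    n *= 9u;
    n ^= n >> 4;
    n *= 0x27d4eb2du;
    n ^= n >> 15;
    return (n & 0xFFFFFFu) / 16777216.0f;
}

// Pick which LOD an individual ray should intersect, based purely on distance.
// Rays inside the transition band randomly choose between the two neighbouring
// LODs, so the blend is spatial and needs no temporal accumulation to hide it.
int selectLodForRay(float hitDistance, uint32_t rayIndex,
                    float lodSwitchDistance, float bandWidth, int coarserLod)
{
    float t = (hitDistance - lodSwitchDistance) / bandWidth; // 0..1 across the band
    if (t <= 0.0f) return coarserLod - 1;  // near side of the band: finer LOD
    if (t >= 1.0f) return coarserLod;      // far side of the band: coarser LOD
    return (hash01(rayIndex) < t) ? coarserLod : coarserLod - 1; // stochastic blend
}
```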
 
So, Sony and MS, AMD, and Nvidia are just stupid? What's the problem anyway? Consoles are going to have hardware RT, it's confirmed; I assume it will be at least equal to RTX, otherwise why would they bother.
None of them are stupid; if anyone, it would be the customers / investors. But that's not what I mean or want to discuss. If they want RT in consoles now, HW is the best solution for them.

Consoles are a closed platform with a limited lifespan; after that, a completely different architecture might replace them.
On PC that's not possible, and I tried to point this out. HW standards have consequences that last much longer on this platform.
If AMD's TMU patent is the way to go for them, they will very likely support traversal shaders. Also, there is no reason to black-box anything on console, not even for MS. So that's not the platform we should worry about.

Now I could point to many blogs and posts from reputable game developers with similar complaints. Are they 'just stupid'?
 
Yeah, it's nice, but the transition is still very noticeable as it's restricted to the duration TAA takes to converge.

That's not exactly how it's usually implemented...
Most times I notice it, the transition is distance based. If the camera stops mid-transition, that object stays in that hybrid state for as long as the camera doesn't move. Essentially, the engine is rendering both LODs simultaneously, with a shader stochastically ignoring some pixels for one LOD and the complementary pixels for the other. The fact that both LODs' geometries are transformed and rasterised while the transition is going on is the overhead that incentivises keeping that transition range short. But it has nothing to do with TAA convergence time; in fact, as I said, this solution has been used multiple times in games with no TAA whatsoever.
This is off topic, but if you wanna talk about smooth LOD transitions in rasterisation, that is it.
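In case it helps, the per-pixel logic behind that stochastic cross-fade is roughly the following (a minimal sketch using an ordered-dither threshold; real engines do the equivalent in the pixel shader with clip/discard, and the names and the distance-driven fade formula here are only illustrative):

```cpp
// 4x4 Bayer matrix, values in [0,1), used as an ordered-dither threshold.
static const float kBayer4x4[4][4] = {
    { 0.0f/16,  8.0f/16,  2.0f/16, 10.0f/16},
    {12.0f/16,  4.0f/16, 14.0f/16,  6.0f/16},
    { 3.0f/16, 11.0f/16,  1.0f/16,  9.0f/16},
    {15.0f/16,  7.0f/16, 13.0f/16,  5.0f/16},
};

// fade = 0 -> only the old LOD is visible, fade = 1 -> only the new LOD.
// Both LODs are rasterised during the transition; each pixel keeps exactly one,
// because the two passes use complementary masks.
bool keepPixel(int x, int y, float fade, bool isNewLod)
{
    float threshold = kBayer4x4[y & 3][x & 3];
    bool newLodWins = fade > threshold;          // fixed screen-space dither pattern
    return isNewLod ? newLodWins : !newLodWins;  // complementary, no double coverage
}

// The fade factor itself is typically driven by camera distance, e.g.
//   fade = clamp((switchDistance - objectDistance) / blendRange, 0.0f, 1.0f);
// so if the camera stops mid-transition, the object simply stays in the mixed state.
```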
 
Let us adapt it to BFV:
BFV is a full-fledged game with dozens of moving objects, destruction, and alpha particles around the scene; IMO there is simply no comparison to this almost "static" demo.

-Some objects missing completely from reflections
Those are filled in through SSR though.

-Only materials with roughness under a certain threshold are traced, the rest is cube mapped
But they are traced nonetheless, and in any scene there are at least a dozen materials reflecting explosions, fire or muzzle flashes. These are the conditions that matter for reflections in a shooter game.
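As an aside, the roughness cutoff being described boils down to something like this (a hedged sketch with a made-up 0.5 threshold and hypothetical names, not DICE's actual code); the surfaces below such a cutoff are exactly the glossy ones where explosions, fire and muzzle flashes show up in the first place:

```cpp
// Hypothetical reflection-path selection: smooth materials get a traced
// reflection ray, rough ones fall back to a prefiltered cube map.
enum class ReflectionPath { TraceRay, CubeMap };

ReflectionPath chooseReflectionPath(float roughness)
{
    const float kRoughnessCutoff = 0.5f;  // made-up value; exposed as a quality setting in practice
    return (roughness < kRoughnessCutoff)
        ? ReflectionPath::TraceRay   // sharp/glossy surface: worth spending a ray
        : ReflectionPath::CubeMap;   // very rough surface: cheap prefiltered fallback
}
```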


but lower in most games (1.5x in TR and 2x in BFV, IIRC).
It's 2x in BFV and Tomb Raider. Using reflections in Atomic Heart gave a 3x speed-up; Control would probably deliver the same thing too, but no one has bothered to test it at this point.

[Chart: Battlefield V Ultra DXR performance at 2560x1440 across GeForce GPUs]


https://www.nvidia.com/en-us/geforce/news/geforce-gtx-dxr-ray-tracing-available-now/
 
Most times I notice it, the transition is distance based. If the camera stops mid-transition, that object stays in that hybrid state for as long as the camera doesn't move.
Interesting! Do you know any game which does this?
Never spotted this myself and thought rendering twice would be too expensive to be worth it.
(EDIT: To be really continuous, the whole scene would need to be rendered twice; otherwise we get noticeable phases of going in and out of transitions... I need to look closer :) )

BFV is a full-fledged game with dozens of moving objects, destruction, and alpha particles around the scene; IMO there is simply no comparison to this almost "static" demo.
Sure, but if there were no RTX, it would be an option to exclude some or many dynamic objects, particle effects, etc. from reflections. (Just like RTX does too, but less aggressively.)
People would be impressed and happy, the tech would improve with optimizations, better algorithms and faster hardware as usual, and we would have RT in games.

Maybe some other people would work on interesting machine learning applications for games as well...
And after all this, HW vendors could decide to offer hardware acceleration, tailored to the now-known needs of games, including all required features and flexibility.

But what happened here is the opposite. And I still think that's the wrong way to do it, even if I have made peace with RTX in other aspects.

https://www.overclock3d.net/news/gp...le_ray_tracing_is_a_reaction_to_geforce_rtx/1
 
Interesting! Do you know any game which does this?
Never spotted this myself and thought rendering twice would be too expensive to be worth it.
(EDIT: To be really continuous, the whole scene would need to be rendered twice; otherwise we get noticeable phases of going in and out of transitions... I need to look closer :) )


That doesn't seem like a very realistic scenario, the stutter step that is.
 
Now, Crytek's work is just good for some time while people upgrade to RT GPUs. Sad but true, but don't think it is inferior, because it is not.
It is superior, because it already has LOD built in by supporting voxel mipmaps, and it can do cone tracing as well, which hardware RT can't do and never will.
But it is proof that hardware RT would NOT have been necessary. Please think about that. And be sure Crytek's work could be improved upon, if it made any sense.
I am not sure I fully agree with that, since the demo from Crytek is nice, but it is dramatically less complex (it is more or less static) and less physically accurate than any of the games or demos we have seen with DXR or VKRT HW acceleration. And even then, it runs rather poorly in comparison on AMD hardware.
Crytek themselves in that interview mentioned how porting it to HWRT would dramatically speed it up and allow for it to be much more dynamic.
 
Also, there's not much to worry about in the PC space. Besides, the AMD HW RT in consoles will also reside in AMD GPUs on PC, probably even faster ones. Most games will be cross-platform.
 
Just like showing a very first RT demo integrated into an engine not designed for RT.
No. The Crytek demo builds on years and years of compute work and RT-on-compute work. It's not first-generation. During the whole 'do consoles need RTRT hardware?' discussion, I've been linking to previous work in compute-based lighting. This latest demo is the state of the art, the best possible so far in compute raytracing.

However, all I want to say is: to compare Crytek vs. RTX in a fair way, we have to look at more than just current-day performance, because obviously fixed-function hardware is faster at what it does.
We have to look at what the HW can't do, what it prevents from being addressed at all without making new HW, and how long this process takes as well, because this affects performance too.
Yes. I was arguing that side of the argument a year ago, remember. ;)
If a game like Dreams shows similar graphics quality to every other game, we have to question whether we still need rasterization hardware.
Dreams uses rasterisation hardware. That old presentation using just splatting was superseded by the latest Bubble Bath engine, which uses, as I understand it (can't find the link), conventional rasterisation for the solid objects. And while the lighting can be gorgeous, it's also limited, including a lot of lighting latency like other volumetric solutions.

Same for raytracing, only now we will wait 20 years until that raytraced Dreams appears, mainly because the hardware has to be utilized.
(How competitive would compute solutions be with twice the available chip area, removing all those raster and RT units? How fast would such a platform evolve?)
Those are some crazy hypothetical numbers you're wielding. The redundant components 1) aren't redundant in Dreams and 2) don't take up half the chip.

Additionally, Dreams is a special case for engine development because Sony have financed it over a decade without MM having to worry about a clear product release. They've been allowed to experiment and prototype with virtually academia-like freedom, including a complete start-from-scratch for the existing game. Other devs don't have this luxury, save CIG. And if they did, gamers wouldn't see many games because the devs would be constantly working on engines. ;) On top of an idealistic software solution, we also have a realistic one: achieving actual results with methods that, even if not hypothetically perfect, are 'good enough' and get results without needing years of research.
 
Interesting! Do you know any game which does this?
Never spotted this myself and thought rendering twice would be too expensive to be worth it.

Off the top of my head, games I clearly remember noticing this in were the AC games on PS360, GTAV and MGSV. No TAA on any of them; they just lived with the noise, and maybe MLAA would smooth the dither up somewhat.
This gen, U4 was the first time I realized they were doing it, but TAA helped hide it. Very hard to spot LOD transitions in that game, but there are moments...
 
it would be an option to exclude some or many dynamic objects, particle effects, etc. from reflections. (Just like RTX does too, but less aggressively.)
In Battlefield V, only grass is excluded from RT and done through SSR; the rest of the scene is ray traced, even particles and smoke are ray traced. Did you notice anything different?

I don't know about Control, but my observation is that reflections reflect everything, even plants and grass.
 
I am not sure I fully agree with that, since the demo from Crytek is nice, but it is dramatically less complex (it is more or less static) and less physically accurate than any of the games or demos we have seen with DXR or VKRT HW acceleration. And even then, it runs rather poorly in comparison on AMD hardware.
Of course I do not expect it could compete with current HW RT, and I never said so. Tracing accuracy, however, seems as good as any other RT implementation. Physical shading correctness, methods of denoising, etc. are off topic if we only talk about hardware vs. software RT.
I strongly disagree that it runs poorly on AMD, because I tried it myself and 60 fps at 720p is good. Impressive even, considering any 'out of the box' UE4 game usually runs worse at 1080p without RT for me.
The scene complexity is high enough to show it's usable for games. Or at least it would be, if our expectations weren't to match RTX.

Also, we simply do not know how their methods scale with multiple objects.
I also assume many dynamic objects are a problem. I really think a BVH would be better than a regular grid as the acceleration structure, and I do not understand why almost everybody uses grids for GPU RT. A BVH means more divergence but less brute-force work, and less extra cost for dynamic scenes.
And I assume they have a problem with rays that are not parallel (incoherent rays). Likely the additional divergence in traversal would hurt performance a lot, and unlike switching from a grid to a BVH, this can't be fixed easily.
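To illustrate the trade-off I mean, here is a toy sketch (plain C++, assuming a flattened node array, with the per-primitive test supplied by the caller; this is not Crytek's code). Each ray walks its own path down the tree, which is exactly where the per-thread divergence on a GPU comes from, but in exchange whole empty subtrees are skipped instead of being marched through cell by cell as with a grid:

```cpp
#include <vector>
#include <algorithm>
#include <cstdint>

struct Aabb { float min[3], max[3]; };

struct BvhNode {
    Aabb    bounds;
    int32_t left;        // child indices; left < 0 marks a leaf
    int32_t right;
    int32_t firstPrim;   // leaf only: range of primitive indices
    int32_t primCount;
};

// Standard slab test against a node's bounds.
static bool rayHitsAabb(const float o[3], const float invD[3], const Aabb& b, float tMax)
{
    float tNear = 0.0f, tFar = tMax;
    for (int a = 0; a < 3; ++a) {
        float t0 = (b.min[a] - o[a]) * invD[a];
        float t1 = (b.max[a] - o[a]) * invD[a];
        if (t0 > t1) std::swap(t0, t1);
        tNear = std::max(tNear, t0);
        tFar  = std::min(tFar,  t1);
        if (tNear > tFar) return false;
    }
    return true;
}

// Caller-supplied primitive test, e.g. a ray/triangle routine.
using PrimHitFn = bool (*)(int primIndex, const float o[3], const float d[3], float& tHit);

// Toy stack-based closest-hit traversal.
bool traceClosestHit(const std::vector<BvhNode>& nodes, PrimHitFn hitPrim,
                     const float o[3], const float d[3], const float invD[3],
                     float& tClosest)
{
    bool hit = false;
    std::vector<int32_t> stack = {0};                 // start at the root node
    while (!stack.empty()) {
        const BvhNode& node = nodes[stack.back()];
        stack.pop_back();
        if (!rayHitsAabb(o, invD, node.bounds, tClosest))
            continue;                                 // prune the whole subtree
        if (node.left < 0) {                          // leaf: test its primitives
            for (int i = 0; i < node.primCount; ++i) {
                float t;
                if (hitPrim(node.firstPrim + i, o, d, t) && t < tClosest) {
                    tClosest = t;
                    hit = true;
                }
            }
        } else {                                      // inner node: push both children
            stack.push_back(node.left);
            stack.push_back(node.right);
        }
    }
    return hit;
}
```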

But those are just assumptions, and it does not matter how true they are, because they proved that RT in games is possible and practical regardless.

I'm aware there is no going back from HW RT and I'm fine with that.
As a journalist, your point of view of course is to compare against available RTX technology.
As a developer, mine is just different and less tied to the here and now, and I'm more worried about limitations that may persist into the future, and about BC.
To me there is zero doubt RT (or other techniques achieving the same image quality) would have come to games in any case, and soon, without hardware acceleration. This changes my rationale here too.


Crytek themselves in that interview mentioned how porting it to HWRT would dramatically speed it up and allow for it to be much more dynamic.
Of course they will use RTX - it's faster. And this will make their new triangle tracing efforts basically a waste of time.
This is because they already had the interesting things before, like the ability to cone trace voxels for rough materials or GI.
Will this demo tech ever make it into a game? Likely not - it's a high-end feature, and those who want max details invest in RT GPUs anyway. Shit happens.
So the recognition for this demo is likely all they'll get for the work, and this deserves more than just saying 'no thanks - RTX is faster' in a single-line comment and being done. Maybe I got triggered a bit too much here.


Also, there's not much to worry about in the PC space. Besides, the AMD HW RT in consoles will also reside in AMD GPUs on PC, probably even faster ones. Most games will be cross-platform.
There is always a need to worry about things you have no control over, if they affect your daily work, your long-term investments and decisions, etc.


No. The Crytek demo builds on years and years of compute work and RT-on-compute work. It's not first-generation. During the whole 'do consoles need RTRT hardware?' discussion, I've been linking to previous work in compute-based lighting. This latest demo is the state of the art, the best possible so far in compute raytracing.
I don't know how long they have worked on this, but I guess they started before the RTX announcement.
But their approach naturally extends their voxel tech and tells us nothing about alternatives like using a BVH, ray reordering, or alternative geometry. Personally I became quite pessimistic about the latter, as you remember, but I have no interest in developing high-end features.
I believe it would work for high end and beat Crytek's flexibility and performance. Maybe it would work for mid-range too, but I won't try and nobody else will either. Getting HW RT right instead is the way to go, and actually things don't look bad here.

I know Dreams uses raster to draw hole-filling tiny cubes... they could have done much better than that.

Those are some crazy hypothetical numbers you're wielding.
Yeah, but don't nail me down on the numbers being a bit off. Maybe I need to exaggerate a bit to illustrate the idea :)

Additionally, Dreams is a special case for engine development because Sony have financed it over a decade without MM having to worry about a clear product release. They've been allowed to experiment and prototype with virtually academia-like freedom, including a complete start-from-scratch for the existing game. Other devs don't have this luxury, save CIG. And if they did, gamers wouldn't see many games because the devs would be constantly working on engines. ;) On top of an idealistic software solution, we also have a realistic one: achieving actual results with methods that, even if not hypothetically perfect, are 'good enough' and get results without needing years of research.
Agreed, and doing the same, I regret I have to fund it myself. Also, working on tech meant having to stop working on games, for me.
But none of this would justify making FF HW just because devs have no time for research and progress. If that were the case, then we would just have another, even larger issue to discuss...


In Battlefield V, only grass is excluded from RT and done through SSR; the rest of the scene is ray traced, even particles and smoke are ray traced. Did you notice anything different?

Yes, initially BFV had very serious clipping issues before release - seen in work-in-progress videos.
This has been improved and bugfixed, but missing objects were still noticeable in videos from the release version. (I remember a whole car missing from reflections, for example, not that distant from the camera.)
LOD is the only way to solve it, no matter how fast RTX is. We cannot rasterize the whole world of BFV in full detail, and we cannot trace it either.
It is all good if all future GPUs support LOD, but first-gen RTX just does not.
 
There is always a need to worry about things you have no control over, if they affect your daily work, your long-term investments and decisions, etc.

Most games will be developed with consoles in mind, and since they have basically PC HW, it will scale.
 
Most games will be developed with consoles in mind, and since they have basically PC HW, it will scale.
No.
In the technically best case, we will soon have something like this on PC, some time after next gen (random numbers):
70% Non RT GPU
15% RT GPU with LOD
15% RT GPU without LOD
A game that targets all platforms will either hold RT back, or optimize both code and content individually (not going to happen).

The economically best case for game developers would have been no HW RT at all, because it helps sell GPUs but not games.

So the situation I consider the best case would be:
Use LOD to make performance adaptive and to bring RT to entry-level GPUs, even mobiles in the long run. Enforce RT in minimum specs and accept reduced PC sales. Do it quickly - people will upgrade if they can afford it.
Whether this works is another question, because the proposed LOD solution will also hurt performance, not only help it.
And it may well be that I overestimate the importance of LOD, which would change the situation only slightly.
 
I strongly disagree that it runs poorly on AMD, because I tried it myself and 60 fps at 720p is good. Impressive even, considering any 'out of the box' UE4 game usually runs worse at 1080p without RT for me.
I think you actually agree with Dictator then ;)

These techniques would not be usable in a game by your results, then.

However, all I want to say is: to compare Crytek vs. RTX in a fair way, we have to look at more than just current-day performance, because obviously fixed-function hardware is faster at what it does.
We have to look at what the HW can't do, what it prevents from being addressed at all without making new HW, and how long this process takes as well, because this affects performance too.

There has been a lot of movement in DXR 1.1 since we last had this discussion (a year ago). I can only suspect a 1.2 will continue to offer additional flexibility for the type of things you want. And it may not necessarily be bound to the hardware, which is why these features are slow to come out, but perhaps to gathering consensus among the different vendors on how things should work. I suspect we will see Turing supporting the full DXR 1.1.

Yearly advancement in the API seems fairly quick, IMO. I understand the allure of having everything now, and done your way (through compute), but yearly advancements towards that while keeping performance up are probably desirable until we reach the point that we no longer require FF hardware for RT.
 
To me there is zero doubt RT (or other techniques achieving the same image quality) would have come to games in any case, and soon, without hardware acceleration.

I believe it would work for high end and beat Crytek's flexibility and performance. Maybe it would work for mid-range too, but I won't try and nobody else will either. Getting HW RT right instead is the way to go, and actually things don't look bad here.

I know Dreams uses raster to draw hole-filling tiny cubes... they could have done much better than that.
I think you're being overly optimistic about the alternatives here. MM have worked for years trying to find the best solutions, and have happily gone with a clean-slate design based on their findings for their current attempts. That they haven't found a better solution while you think there is a much better solution out there... well, you've been saying much better is possible than everyone else is achieving since you came on this board, but if you remember, you abandoned your approach to RTRT because you realised it wasn't going to work.

Perhaps if RT hardware didn't exist, devs would find even better solutions on compute. I was arguing that before myself. But given what we're actually seeing from people investing lots into compute-based solutions, versus what we know RT hardware can bring both in performance and simplicity of solutions, it's a long ask to say to the industry, "abandon RTRT hardware and focus on finding new paradigms that'll eventually be better," rather than accepting RTRT hardware because we know RT ideally solves all those problems, and exploring how to use that to make new paradigms and algorithms to use those rays effectively. At this point, you or someone else needs to present proof that better is economically possible. Crytek's best effort so far, very commendable, shows significant limitations. As do all the other alternatives like voxelised lighting.
 
I think you actually agree with Dictator then ;)
These techniques would not be usable in a game by your results, then.
No, because AAA games using customized UE4 run very well, leaving some ms for RT for those willing to sacrifice resolution for stable FPS.
Does anybody seriously doubt that their RT would work for a complete game, just as the demo shows?

Yearly advancement in the API seems fairly quick, IMO.
Yes, progress here looks promising, and I did not expect it to happen so quickly. I was wrong on this point with my pessimism a year ago. Very happy about this!

I think you're being overly optimistic about the alternatives here. MM have worked for years trying to find the best solutions, and have happily gone with a clean-slate design based on their findings for their current attempts. That they haven't found a better solution while you think there is a much better solution out there... well, you've been saying much better is possible than everyone else is achieving since you came on this board, but if you remember, you abandoned your approach to RTRT because you realised it wasn't going to work.
I have been working on compute GI myself for over a decade. I use splatting to generate small environment maps per texel, and there I came up with a very simple solution to the holes problem, which is surely better than rasterizing cubes. I'm only referring to this detail and don't know how Dreams has changed from the paper of years ago.
The reasons I abandoned classic RT are: it would be slower than initially expected, for one, and there were rumors of upcoming consoles having HW RT. The idea itself is still interesting, because it has built-in LOD like voxels but no spatial quantization. If I were 'very optimistic', or had more time, I would try it.
Otherwise, my GI algorithm, which also uses RT for visibility tests, IS faster than any other method I know, including anything shown using RTX. Maybe that's the reason why I sound optimistic / unrealistic sometimes, but this is not related to classical RT and its strengths.


But in none of my recent posts did I say HW RT should be abandoned and replaced with compute RT, or did I?
accepting RTRT hardware because we know RT ideally solves all those problems, and exploring how to use that to make new paradigms and algorithms to use those rays effectively.
Said the same earlier:
Getting HW RT right instead is the way to go, and actually things don't look bad here.
I only pointed out some HW drawbacks to defend the Crytek demo, and accepting HW RT does not make those issues disappear.
At this point, you or someone else needs to present proof that better is economically possible. Crytek's best effort so far, very commendable, shows significant limitations.
Nothing else can be economically reasonable, knowing ALL GPU vendors will implement HW RT.
But pointing out limitations in Crytek's demo, even though HW RT has the exact same limitations, just compensated for with higher performance... this makes no sense to me as a developer and also seems unfair.
Many people still think RTX is too immature / too slow to be worth it yet. Likely that's the reason the 5700 XT sells pretty well even though you can get similar performance including HW RT. I do not understand why they think like that, but they do.
So I could just say 'RTX is too slow too', but I do not - we work with whatever performance we have. Agreed? If so, then this implies we could also work with Crytek's SW RT and push it as far as possible. No difference.
Additionally, I said: if we had to use SW RT, then we would also need to wait longer until RT is feasible in practice, and I mean some years by that.

This is said in my defense, which should not be necessary. It feels like we get stuck again saying the same things but having different timeframes in mind, different points of reference and even different motivations / emotions. Is it me, or the curse of the RTX discussion? I don't know :D
 
Does anybody seriously doubt that their RT would work for a complete game, just as the demo shows?
:oops::oops::oops::oops:
Perhaps I'm just pessimistic about it, but with a lot of moving parts happening, fast movement, and high triangle counts for a scene (like Death Stranding amounts, or the Beanie hat levels of detail and triangle fidelity), I think the amount of load could be enormous.
But if we're talking about something closer to Minecraft RT (the user-made versions), I'd agree that this is sufficient and probably pretty good.
 
:oops::oops::oops::oops:
Perhaps I'm just pessimistic about it, but with a lot of moving parts happening, fast movement, and high triangle counts for a scene (like Death Stranding amounts, or the Beanie hat levels of detail and triangle fidelity), I think the amount of load could be enormous.
But if we're talking about something closer to Minecraft RT (the user-made versions), I'd agree that this is sufficient and probably pretty good.
Come on, it's so easy to solve. Many games still rely on static, baked lighting. Is this great? No. Can we deal with it? Yes.
So if many dynamic objects are a problem, exclude them from reflections and fall back to SSR if you want.
If high triangle counts are a problem, reduce the count until it works. That's visible only in perfectly sharp reflections, which are rare.
... just do what games do all the time.

But as this is all a hypothetical and therefore pointless discussion, I'll leave it at this. No need to drag a thought experiment over multiple pages.
 
I think you're being overly optimistic about the alternatives here. MM have worked for years trying to find the best solutions, and have happily gone with a clean-slate design based on their findings for their current attempts. That they haven't found a better solution while you think there is a much better solution out there... well, you've been saying much better is possible than everyone else is achieving since you came on this board, but if you remember, you abandoned your approach to RTRT because you realised it wasn't going to work.


When sebbbi asked about it on Twitter, Alex Evans said they rasterized cubes because it was done like this in the brick engine. It was one version of the tests they had done; maybe they did not have time to try other solutions.

EDIT: Alex Evans said it would have worked with raytracing as well, in answer to sebbbi asking.



 