Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

That would be nice but would still be subject to Nanite’s significant limitations regarding which assets are supported. We need a general purpose geometry representation. Nanite isn’t that.
I missed this in the other reply but meant to poke a bit more here... the path to having Nanite support a broader range of geometry types is in some ways clearer to me than having fixed function hardware RT support it. Obviously we want both in the long run, but currently RT is at least as restrictive as Nanite (if not more so) in terms of what it can trace against, right? I guess if you count intersection shaders things can be slightly more general, but you could do similar things with Nanite if desired... the issue is really that it's all too slow to be useful and I don't see something as general as that being the path forward.
 
I'm not even willing to go so far as to say that we require continuous LOD in a topology-altering fashion in the future.
A semantic issue: personally I classify Nanite as 'continuous', although technically it's discrete. Truly continuous LOD seems to make little sense outside of parametric surfaces, so I would not criticize RT for lacking support there, since supporting parametric primitives would be a more attractive solution for that.
But a fine-grained and practical solution such as Nanite has to work, otherwise RT adds more restrictions than new possibilities, and I remain disappointed.

the issue is really that it's all too slow to be useful and I don't see something as general as that being the path forward.
I think this applies to RT as a whole, not just its intersection shaders. We say RT can be parallelized easily because pixels are independent, but per ray the traversal is still a sequential algorithm, and that is what's meant to be the future of realtime graphics.
Technically possible? Seems so, with upcoming 70 TF GPUs. But they will end up even more expensive than the current generation, I guess, and I don't see how we can sustain 2 billion active PC players if such hardware is the only option.
I expect we get two classes of gamers: a high-end niche and a mainstream with much less powerful HW. Or - if there is no such HW - streaming (assuming consoles can't do much better for long).
Idk what the future brings, but it's still not clear to me if we can expect RT to become a general feature everybody has at sufficient performance. It looks like this first attempt at realtime RT is about as successful as VR: too strong to die out, too weak to become mainstream.
 
Do you think in the next 5-6 years we might get to the point where the raster side of GPUs will start seeing reductions in performance and the RT side taking up more and more transistors?

Say PlayStation 6 for example: for PS5 backwards compatibility it would only need 10 TFLOPS.

In 5-6 years' time that will be such a small amount that Sony could dedicate a massive amount of the transistor count to RT?

More difficult to do this in the PC space mind you.
 
Do you think in the next 5-6 years we might get to the point where the raster side of GPUs will start seeing reductions in performance and the RT side taking up more and more transistors?
That's a bad question somehow, because it sounds like you think GPUs just rasterize triangles and now RT, which is not the case.
I like this quote from here:
What we need are compute shaders and the ability to spawn threads from shaders effectively. Everything else (including raytracing) can be easily implemented on Compute Shader level. Ten years ago, the performance of Intel Larrabee wasn’t enough for compute-only mode, but now it’s possible to discard all other shaders.
Compute is what really matters, because it can do anything precisely the way we want it to - past, present and future algorithms included.
Actually we already see compute beating ROPs, so the existence of ROPs is questionable. The same will happen to RT sooner or later.
Yes, fixed function traversal and intersection is 10 times faster, but what does that help if its restrictions prevent efficient solutions in other areas, so there is no net win?
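Just to make 'RT in compute' concrete: here is a minimal sketch (purely my own illustration, not taken from any engine - the struct and kernel names are made up) of a Möller-Trumbore ray/triangle test running as a plain CUDA kernel, one thread per ray. A real compute tracer would of course walk a BVH instead of brute-forcing every triangle, but nothing here needs fixed function units.
Code:
#include <cuda_runtime.h>
#include <math.h>

struct Ray { float3 o, d; };        // origin, direction
struct Tri { float3 v0, v1, v2; };  // one triangle

__device__ float3 sub3(float3 a, float3 b) { return make_float3(a.x - b.x, a.y - b.y, a.z - b.z); }
__device__ float  dot3(float3 a, float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
__device__ float3 cross3(float3 a, float3 b) {
    return make_float3(a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x);
}

// Moller-Trumbore: returns hit distance t, or -1 if the ray misses the triangle.
__device__ float intersect(const Ray& r, const Tri& tri) {
    float3 e1 = sub3(tri.v1, tri.v0);
    float3 e2 = sub3(tri.v2, tri.v0);
    float3 p  = cross3(r.d, e2);
    float det = dot3(e1, p);
    if (fabsf(det) < 1e-8f) return -1.0f;          // ray parallel to triangle plane
    float inv = 1.0f / det;
    float3 s  = sub3(r.o, tri.v0);
    float u   = dot3(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return -1.0f;
    float3 q  = cross3(s, e1);
    float v   = dot3(r.d, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return -1.0f;
    float t   = dot3(e2, q) * inv;
    return t > 0.0f ? t : -1.0f;
}

// One thread per ray: brute-force closest hit against all triangles.
// A BVH traversal loop would replace the inner loop in anything real.
__global__ void traceKernel(const Ray* rays, int numRays,
                            const Tri* tris, int numTris, float* hitT) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numRays) return;
    float best = 1e30f;
    for (int j = 0; j < numTris; ++j) {
        float t = intersect(rays[i], tris[j]);
        if (t > 0.0f && t < best) best = t;
    }
    hitT[i] = (best < 1e30f) ? best : -1.0f;
}
That traversal plus this intersection test is exactly the part HW RT makes 10 times faster; everything else (shading, BVH build, LOD) is compute anyway.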

Traditionally, visual progress was lazily coupled to GPU scaling and features. But it does not have to be like that - the alternative of better software is just harder and takes more time to develop.
Now, as shortage and cost issues expose a conflict of interest between the game and HW industries, we can hopefully expect a bit more than just using the latest GPU features and calling it a day.
 
If I understand correctly, you're saying that AMD, Nvidia, MS, Sony (and soon Samsung) are quite 'dumb' in this whole ray tracing stuff? Seems like they implemented it just for the PR then.
 
That's a bad question somehow, because it sounds like you think GPUs just rasterize triangles and now RT, which is not the case.

Yea I didn't word it the best.

What I should have said is: will we see a point where the only part that sees an increase is the RT units?

So for example, with an RTX 4090, instead of Nvidia increasing every aspect of the GPU, they leave it as it is now and use all the additional transistors purely to increase the RT units.

So it would be an RTX3090 but with loads more RT.

It's been a rough week already so I'm on the whiskey :runaway:
 
What I should have said is: will we see a point where the only part that sees an increase is the RT units?

So for example, with an RTX 4090, instead of Nvidia increasing every aspect of the GPU, they leave it as it is now and use all the additional transistors purely to increase the RT units.
I hear you, and this makes sense and might happen.
But if you spend most of the die area on RT fixed function units, compute performance will suffer in comparison. And we are back to HW vendors defining how software developers should solve their problems.
I wish we had more examples like UE5 to illustrate how good software can achieve generational leaps at lower requirements. Imagine if all other devs were that innovative as well.
It really depends on them. Will they focus on classic RT adoption, or will they innovate and come up with better ways to solve lighting? I guess now both will happen, so we'll see... ;)
 
So for example, with an RTX 4090, instead of Nvidia increasing every aspect of the GPU, they leave it as it is now and use all the additional transistors purely to increase the RT units.
This would just reduce the cost per frame (in time) spent doing RT. Once you trace rays you still have to compute and draw the stuff you just determined would be in the scene, which increases your compute cost per frame.
 
To fix the problem, we need to update the BVH together with the geometry

I can even see good use cases for updating the "BVH" during rendering, in which case, more than an API, it needs to be available to shaders. Though rather than a BVH, I think an SVO (or SV-64-tree) would be more suitable for on-the-fly acceleration structures.
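For context, the '64-tree' I have in mind is basically a node with 4x4x4 child cells and a 64-bit occupancy mask, so traversal and on-the-fly building come down to popcounts. A rough sketch of what such a node could look like on the GPU (the layout and names here are just my assumption for illustration, not a reference implementation):
Code:
#include <cstdint>

// Hypothetical 64-wide sparse voxel node: 4x4x4 child cells, one bit per cell.
struct SV64Node {
    uint64_t childMask;   // bit i set => child cell i is occupied
    uint32_t firstChild;  // index of this node's first child in a flat node array
};

// Children of occupied cells are stored compactly; cell i's child index is
// firstChild + popcount(mask bits below i). On the GPU this maps to a single
// __popcll, which is what makes the wide node cheap to traverse and to build.
__device__ uint32_t childIndex(const SV64Node& n, int cell) {
    uint64_t below = n.childMask & ((1ull << cell) - 1ull);
    return n.firstChild + (uint32_t)__popcll(below);
}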
 
This would just reduce the cost per frame (in time) spent doing RT. Once you trace rays you still have to compute and draw the stuff you just determined would be in the scene, which increases your compute cost per frame.

Isn't that the point?

To slowly start transitioning over to path tracing?
 
Until we find a replacement for silicon how do people expect we will come close to having enough performance to get rid of rasterization?
 
What lower requirements exactly? Have you tried running sw lumen and Nanite together on Pascal or GCN? It is a big old slideshow even at 1080p!
Yeah, I tried it on GCN (Vega 56). Seems I can enjoy next gen games at a stable 30fps. I hope the GPU still lasts many years, because I won't upgrade for >1500 bucks to get similar visuals at similar performance with RT games.
There is no game good enough to justify such prices - independent of gfx features, and I won't start mining some crypto shit to pay for my new shiny GPU.
That's not just my problem. It seems what 'hybrid era' truly means is that we get two camps: those willing to pay ridiculous amounts of money, and those who prefer to stick with current gen. The games industry won't let the latter camp down - RT remains an optional bolt-on feature, so the former camp loses too.
As a developer I won't upgrade either, because I can't use RT anyway due to its LOD restrictions.

I can even see good use cases for updating the "BVH" during rendering, in which case, more than an API, it needs to be available to shaders. Though rather than a BVH, I think an SVO (or SV-64-tree) would be more suitable for on-the-fly acceleration structures.
Yes, I meant the API has to be usable from the GPU side. Otherwise it would be useless.
I've used SV64 recently for CPU fluid simulations, and thought it's really attractive for GPU as well.
But for raytracing, octrees and the like have the big downside that we need to rebuild them on any movement, even if no LOD change or skinning etc. happens. We lose the nice option of refitting, and the desired advantage of streaming the acceleration structure would become pointless.
Depending on a global grid is bad, though we could address this by supporting movable grids. Not sure, would need to think about this...
One important argument is we often want consistent parent-child relationships. LOD is one such example, because we want (precomputed) clusters to remain in place and not swim over a moving mesh, which could cause visible popping.
Whatever - it seems the whole RT community settled on BVHs for good reasons, so this increases the chances we get some common standard. If there were outliers using octrees or kD-trees in hardware, the chances of a standard would decrease.
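To show what I mean by refitting: if only vertices move (skinning, simple animation) and the tree topology stays fixed, a refit is just a bottom-up AABB update. A rough sketch of how that could look on the GPU - node layout, names and the per-node atomic counter are my assumptions for illustration, not any particular API:
Code:
#include <cuda_runtime.h>
#include <math.h>

struct AABB    { float3 lo, hi; };
struct BVHNode { int parent, left, right; AABB box; };  // parent == -1 at the root

__device__ AABB mergeAABB(const AABB& a, const AABB& b) {
    AABB r;
    r.lo = make_float3(fminf(a.lo.x, b.lo.x), fminf(a.lo.y, b.lo.y), fminf(a.lo.z, b.lo.z));
    r.hi = make_float3(fmaxf(a.hi.x, b.hi.x), fmaxf(a.hi.y, b.hi.y), fmaxf(a.hi.z, b.hi.z));
    return r;
}

// One thread per leaf; visitCounters must be zeroed before launch.
// Each thread writes its leaf's new box (computed from the deformed geometry),
// then climbs toward the root. An internal node is refit by whichever child's
// thread arrives second, so every node is updated exactly once and only after
// both of its children hold valid boxes. The topology is never touched.
__global__ void refitBVH(BVHNode* nodes, const int* leafNodeIndex, int numLeaves,
                         const AABB* newLeafBoxes, int* visitCounters) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numLeaves) return;

    int node = leafNodeIndex[i];
    nodes[node].box = newLeafBoxes[i];
    __threadfence();                       // publish the box before signalling the parent

    int parent = nodes[node].parent;
    while (parent != -1) {
        if (atomicAdd(&visitCounters[parent], 1) == 0)
            return;                        // first child to arrive stops here
        BVHNode& p = nodes[parent];
        p.box = mergeAABB(nodes[p.left].box, nodes[p.right].box);
        __threadfence();
        parent = p.parent;
    }
}
With an octree or grid you can't do this, because a moving vertex can land in a different cell, so the structure itself has to change - that's the rebuild cost I mean.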
 
Until we find a replacement for silicon how do people expect we will come close to having enough performance to get rid of rasterization?

We don’t need to get rid of rasterization altogether. In this current generation rasterization is already being limited to just calculation of primary visibility in some games. I expect that will be the norm next generation with everything else (shadows, GI, reflections, particles) rendered via RT and compute.

I wish we had more examples like UE5 to illustrate how good software can achieve generational leaps at lower requirements.

We haven’t really seen Nanite and Lumen in action yet, so we don’t know what the requirements are. Once there are games shipping on the engine I hope we’re able to see head-to-head comparisons of Lumen with and without RT, VSM vs RT shadows, etc.
 
We don’t need to get rid of rasterization altogether. In this current generation rasterization is already being limited to just calculation of primary visibility in some games. I expect that will be the norm next generation with everything else (shadows, GI, reflections, particles) rendered via RT and compute.



We haven’t really seen Nanite and Lumen in action yet, so we don’t know what the requirements are. Once there are games shipping on the engine I hope we’re able to see head-to-head comparisons of Lumen with and without RT, VSM vs RT shadows, etc.
We have only seen RT used in games with mostly last-gen console quality visuals. We don't know what the performance implications are for using RT with more advanced visuals. We have 2 or 3 more shrinks left before we are at the end of silicon scaling. That's not much more performance.
 
That doesn't solve all the problems and isn't feasible for consoles/mass market prices.

Things will get bigger again. Look at cell phones. They started as giant bricks, then became super tiny and folded to get even smaller, and then they started to get bigger again; if you look at the folding devices, folding is mostly being used to get bigger. So the Duo, Fold and others just get bigger.
 
Things will get bigger again. Look at cell phones. They started as giant bricks, then became super tiny and folded to get even smaller, and then they started to get bigger again; if you look at the folding devices, folding is mostly being used to get bigger. So the Duo, Fold and others just get bigger.
Look at the prices of cell phones though. Upon the launch of the PS6 and NextBox, do people really expect consoles fast enough to ray trace most things outside of primary visibility for $400-500?
 