Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

Isn't the point?

To slowly start transitioning over to path tracing?
With the RTX4090? I don't know if current hardware is fast enough everywhere else for Nvidia to just speed up RT performance. Because as things are right now, if you start using more RT effects, it's going to increase the rendering load on the non-RT parts of the GPU. If we were going to see a GPU with RTX3090 performance except faster RT, it would probably be further down the product stack.
 
Never mind.
 
That's an interesting point. Presently, rendering cost is proportional to what's on screen. Ray tracing starts to move rendering cost toward scene complexity. However, that's neither linear nor uniform across RT uses, so we should be able to see some improvements in some areas.
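To make that shift concrete, here is a toy cost model (a minimal sketch; the constants and the log-style scaling are illustrative assumptions, not measurements). Raster work scales roughly with the pixels being shaded, while ray traversal also grows with how much scene there is to intersect, which is why piling on RT effects loads parts of the frame that raster barely noticed.

[CODE]
#include <cmath>
#include <cstdio>

// Toy cost model (illustrative assumptions only, not profiling data):
// raster cost ~ pixels shaded; RT cost ~ pixels * rays/pixel * log2(primitives),
// reflecting that BVH traversal depth grows with scene complexity.
int main() {
    const double pixels = 3840.0 * 2160.0;   // a 4K frame
    const double raysPerPixel = 2.0;         // e.g. one GI ray + one reflection ray
    for (double prims : {1e5, 1e6, 1e7, 1e8}) {
        const double rasterCost = pixels;                               // flat in scene size
        const double rtCost = pixels * raysPerPixel * std::log2(prims); // grows with scene size
        std::printf("%8.0e primitives: raster %.2e, RT %.2e (ratio %.1f)\n",
                    prims, rasterCost, rtCost, rtCost / rasterCost);
    }
}
[/CODE]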
 
Look at the prices of cell phones though. Upon the launch of the PS6 and NextBox, do people really expect consoles fast enough to ray trace most things outside of primary visibility for $400-500?

What about prices? Samsung has $200 phones that even 5 or 6 years ago people would drool over. But the high end has even more features, like folding phones where the screen size doubles plus a front screen so you don't even have to open it. You have phones with 4-5 cameras on them, and so on and so forth.

If we take a look at AMD's PC video cards, the issue with ray tracing is that they are behind the competition. RDNA 2 is their first ray tracing hardware and it's about as fast as Nvidia's first-generation ray tracing tech. So the question is what year the new systems hit and what iteration of AMD's ray tracing tech they have. If RDNA 3 is able to double ray tracing performance, I don't think that would be enough. But if they use RDNA 4 in, say, 2024, and that doubles RDNA 3's already doubled ray tracing performance, that could very well be enough for the consoles.

I think the important thing for next-gen consoles is the viability of 8K displays. Over this console generation, will they drop in price enough that people will expect next-gen consoles to fully support them? If not, then next-gen consoles should be fine for ray tracing when targeting 4K. They can use AMD's machine learning tech to render at a lower resolution and upscale. They can do the same for 8K, but I am not sure how much I would want to take a sub-4K image (sometimes right around 1080p levels) and DLSS-upscale it to 8K. 1440p to 4K, sure; 1440p to 8K... I think you're pushing it. I have actually used a 3090 with DLSS on an 8K screen and, well... I'd rather just play the game at 4K.

Edit:
Just want to say, I do hope they stick with a 4K target for next generation. All companies should leave 8K to the realm of PCs. I think next gen they should be able to hit 4K for real this time, guys.
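To put the upscaling ratios above in numbers, here is a quick sketch (assuming the usual 1920x1080, 2560x1440, 3840x2160 and 7680x4320 pixel counts): 1440p to 4K is a 2.25x reconstruction, while 1440p to 8K is 9x and 1080p to 8K is 16x.

[CODE]
#include <cstdio>

// Pixel counts for the resolutions discussed above, and the upscale factor a
// DLSS/FSR-style reconstruction would have to cover to reach 4K and 8K.
int main() {
    struct Res { const char* name; int w, h; };
    const Res res[] = {{"1080p", 1920, 1080}, {"1440p", 2560, 1440},
                       {"4K", 3840, 2160},    {"8K", 7680, 4320}};
    const double target4K = 3840.0 * 2160.0;
    const double target8K = 7680.0 * 4320.0;
    for (const Res& r : res) {
        const double px = double(r.w) * r.h;
        std::printf("%-6s %10.0f px -> x%.2f to reach 4K, x%.2f to reach 8K\n",
                    r.name, px, target4K / px, target8K / px);
    }
}
[/CODE]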
 
So for example, with an RTX4090, instead of Nvidia increasing every aspect of the GPU, they leave it as it is now and use all the additional transistors purely to increase the RT units.
I don't think I care about them increasing the raw throughput of the hardware accelerated bits at this point really... as I noted that's not really the practical limitation in most cases.

What we need for this to become more practical is general improvements to writing more complex code on GPUs. I covered a lot of things in my SIGGRAPH 2017 talk that are still highly relevant to this today:
https://www.ea.com/seed/news/seed-siggraph2017-compute-for-graphics

In terms of fixed-function hardware, we may need something to help with BVH building in the long run, but I don't know that anyone has enough confidence in any one solution to know what to build today. It's also hard to design hardware like that which doesn't interfere with the desire to provide flexibility for engines to use their own data structures and layouts (e.g. streaming BVH/ray tracing into Nanite data structures or whatever).

I feel pretty confident in saying the primary problems for general game workloads right now are on the BVH building/streaming/LOD side, but I don't think there's a silver bullet. They aren't the *only* problems either... stuff like foliage will continue to be very expensive for RT, as will really any system that relies on fine-grained, quasi-retained-mode data structures.
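For readers wondering what that BVH building work actually looks like, here is a deliberately naive median-split builder (a sketch only; real drivers and engines use SAH/LBVH-style builders, and every name here is invented for illustration). The point is that some version of this pass has to be redone or patched whenever geometry streams in, changes LOD or deforms, which is exactly where it rubs against fine-grained systems like Nanite or dense foliage.

[CODE]
#include <algorithm>
#include <vector>

// Hypothetical, deliberately naive BVH build over triangles (median split).
// Not any engine's or driver's actual implementation.
struct AABB {
    float mn[3] = {  1e30f,  1e30f,  1e30f };
    float mx[3] = { -1e30f, -1e30f, -1e30f };
    void grow(const float p[3]) {
        for (int a = 0; a < 3; ++a) { mn[a] = std::min(mn[a], p[a]); mx[a] = std::max(mx[a], p[a]); }
    }
};

struct Tri  { float v[3][3]; float centroid[3]; };
struct Node { AABB box; int left = -1, right = -1, first = 0, count = 0; };

// Builds a subtree over tris[first, first + count) and returns its node index.
int buildNode(std::vector<Node>& nodes, std::vector<Tri>& tris, int first, int count) {
    const int idx = int(nodes.size());
    nodes.emplace_back();

    AABB box;
    for (int i = first; i < first + count; ++i)
        for (int c = 0; c < 3; ++c) box.grow(tris[i].v[c]);
    nodes[idx].box = box;

    if (count <= 4) {                         // small leaf: just store the triangle range
        nodes[idx].first = first;
        nodes[idx].count = count;
        return idx;
    }

    // Split on the widest axis at the median centroid -- O(n log n) over the whole mesh,
    // paid again (or incrementally patched) every time the geometry changes.
    int axis = 0;
    for (int a = 1; a < 3; ++a)
        if (box.mx[a] - box.mn[a] > box.mx[axis] - box.mn[axis]) axis = a;

    const int mid = first + count / 2;
    std::nth_element(tris.begin() + first, tris.begin() + mid, tris.begin() + first + count,
                     [axis](const Tri& a, const Tri& b) { return a.centroid[axis] < b.centroid[axis]; });

    const int left  = buildNode(nodes, tris, first, mid - first);
    const int right = buildNode(nodes, tris, mid, first + count - mid);
    nodes[idx].left  = left;
    nodes[idx].right = right;
    return idx;
}
[/CODE]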
 
Those $200 Samsung phones are the equivalent of using a Switch as a primary home console.

Looking at a 3090, it already struggles trying to add multiple ray traced effects to last-gen visuals, usually requiring 1080p-1440p to attain 60 fps. If we assume the next three shrinks before we cap out at 1 nm each allow the typical 50% performance improvement, we end up with a GPU nearly 3.5x as fast as the 3090. I don't think that speedup is nearly enough to get us away from relying primarily on rasterization. What if these new GPUs keep increasing their power draw as Ampere did? Where do consoles slot in on the performance scale? What if prices continue to rise every generation and the GPU performance at a mass-market price point continues to decrease? All of these are legitimate issues.
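Spelling out the compounding behind that estimate (pure arithmetic under the stated 50%-per-shrink assumption):

[CODE]
#include <cstdio>

// Compounding a 50% gain per node shrink over three shrinks.
int main() {
    double speedup = 1.0;
    for (int shrink = 1; shrink <= 3; ++shrink) {
        speedup *= 1.5;
        std::printf("after shrink %d: %.3fx over a 3090\n", shrink, speedup);
    }
    // Prints 1.5x, 2.25x, 3.375x -- i.e. the "nearly 3.5x" figure above.
}
[/CODE]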
 
@Andrew Lauritzen I'm not certain some of the features you mentioned in your presentation are ever going to materialize in any of the shading languages ...

Function calls or virtual methods? Are you asking for SPIR-V/OpenCL-style unstructured control flow? I don't think the industry is interested in implementing SPIR-V kernel capabilities in their compilers, or in any standardized fashion.

Cooperative Groups? Do you need independent thread scheduling implemented as well? If you do, then only current Nvidia hardware will be able to offer the complete feature set, thus ruling out the other vendors.

Dynamic Parallelism? Interestingly, OpenCL 2.0 exposed this feature, but it's amusing how Nvidia will implement it in CUDA yet refuse to implement OpenCL 2.0 altogether.

The good news is that you got your wish granted regarding buffer pointers, since the Khronos Group standardized the feature in Vulkan and it can be used in GLSL. The Khronos Group also plans to expose texture handles in Vulkan. If you beg hard enough at Microsoft, you could see the equivalent feature in Direct3D 12/HLSL in the future, or maybe you'll find the Shader Model 6.6 dynamic resource binding feature good enough for you?

That being said, have you thought about contingency plans if you don't see some of these features ever being exposed in any shading language? It would make for some interesting presentation material if that alternate future turned out not much different from what we have now, compared to your ideal future ... (in some ways that alternate future could very well be closer to reality than you think)
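For context, the buffer pointer feature mentioned above is Vulkan's buffer device address mechanism (core in Vulkan 1.2), which GLSL consumes through GL_EXT_buffer_reference. Below is a rough host-side sketch; error handling and memory-type selection are omitted, and the helper name is mine, not part of the API.

[CODE]
#include <vulkan/vulkan.h>

// Sketch: create a buffer whose GPU virtual address can be handed to a shader
// as a raw pointer. Assumes the VkDevice was created with the
// bufferDeviceAddress feature enabled.
VkDeviceAddress makePointerBuffer(VkDevice device, uint32_t memoryTypeIndex,
                                  VkDeviceSize size, VkBuffer* outBuffer,
                                  VkDeviceMemory* outMemory) {
    VkBufferCreateInfo bufferInfo{VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO};
    bufferInfo.size = size;
    bufferInfo.usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT |
                       VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT;
    bufferInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
    vkCreateBuffer(device, &bufferInfo, nullptr, outBuffer);

    VkMemoryRequirements reqs;
    vkGetBufferMemoryRequirements(device, *outBuffer, &reqs);

    // The backing allocation must opt in to device addresses as well.
    VkMemoryAllocateFlagsInfo flagsInfo{VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_FLAGS_INFO};
    flagsInfo.flags = VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT;

    VkMemoryAllocateInfo alloc{VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO};
    alloc.pNext = &flagsInfo;
    alloc.allocationSize = reqs.size;
    alloc.memoryTypeIndex = memoryTypeIndex;
    vkAllocateMemory(device, &alloc, nullptr, outMemory);
    vkBindBufferMemory(device, *outBuffer, *outMemory, 0);

    // This address can be passed in push constants or another buffer and
    // dereferenced in GLSL via a GL_EXT_buffer_reference block.
    VkBufferDeviceAddressInfo addrInfo{VK_STRUCTURE_TYPE_BUFFER_DEVICE_ADDRESS_INFO};
    addrInfo.buffer = *outBuffer;
    return vkGetBufferDeviceAddress(device, &addrInfo);
}
[/CODE]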
 
Looking at a 3090 it already struggles trying to add multiple ray traced effects to last gen visuals. Usually requiring 1080p-1440p to attain 60 fps
I think you mean the 2080Ti by that, the 3090 can sustain 1440p with multiple ray traced effects at over 60fps with ease. For example it can do RT GI with RT reflections in Metro Exodus @90fps 1440p. By comparison the 3090 and 6900XT can hardly sustain 1080p60 in the latest Lumen demo.
 
No, I mean the 3090. In some games it's 1080p, in others it's 1440p. Is there a UE5 demo newer than Valley of the Ancients? That demo a 3090 can do at 1080p/60, no problem. UE5 also looks substantially better than any current RTX game, so it makes sense the demand is higher.
 
That demo doesn't look substantially better than anything, it's just desert with a bunch of rocks.

It's when we start to use things like the UE5 demo instead of actual games that discussions get a bit awkward. Someone has mentioned it before, but it's much more interesting to discuss and analyse when actual UE5-based games (with all the bells and whistles) appear. 'Software' RT might be the (long-ish) future, but it's way too slow for the nearer future; HW RT (as in NV, Intel etc.) is probably what we need for this generation. Next generation perhaps things might change.
A bit like vertex and pixel shaders, hardware T&L, 3D rendering....
 
I hope you have some examples for that 1080p thing.

barely.

That demo doesn't look substantially better than anything, it's just desert with a bunch of rocks.
Watch Dogs, The Ascent and Cyberpunk all require sub-1440p to achieve 60 fps. Even at 1080p a 3090 can't maintain 60 in Watch Dogs and Cyberpunk, actually. What games do you think have visuals that approach the Land of Nanite and Valley of the Ancients demos?
 
What specifically is impressive about those demos aside from the high poly rocks? It’s way early in the game to make judgements about what a UE5 game will look like.
 
That many polys alone is very impressive, especially when properly shaded, shadowed and lit by realtime, dynamic multi-bounce GI of such quality.
 
What specifically is impressive about those demos aside from the high poly rocks? It’s way early in the game to make judgements about what a UE5 game will look like.
Specifically, I'd say the massively reduced LOD pop-in is extremely impressive. Just as temporal anti-aliasing creates a more stable image quality, the same could be said for Nanite in UE5 with regards to "geometric stability".

Of course, I know that only applies to static and non-transparent objects... and alas, foliage and other objects would be just as susceptible to pop-in and LOD switching as normal. :(

We'll have to see... but I think there's going to be some pretty incredible looking games come from it. I can't wait to see what The Coalition has in store for Gears with UE5, as well as any other projects they have cooking up.
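To illustrate what that "geometric stability" means in practice, here is the textbook screen-space-error test that continuous-LOD schemes use (a generic sketch, not Epic's actual heuristic; the function and parameter names are made up): keep refining a cluster until its simplification error projects to less than about a pixel, at which point there is nothing left to visibly pop.

[CODE]
#include <cmath>

// Generic screen-space error test used by continuous-LOD schemes (illustrative
// only): a cluster is "done" once its simplification error projects to less
// than ~1 pixel, so switching LODs produces no visible pop.
bool isDetailedEnough(float geometricErrorMeters,  // error introduced by this LOD of the cluster
                      float distanceMeters,        // camera-to-cluster distance
                      float verticalFovRadians,
                      float screenHeightPixels,
                      float thresholdPixels = 1.0f) {
    // Pixels per world-space meter at that distance for this projection.
    const float pixelsPerMeter =
        screenHeightPixels / (2.0f * distanceMeters * std::tan(verticalFovRadians * 0.5f));
    const float projectedErrorPixels = geometricErrorMeters * pixelsPerMeter;
    return projectedErrorPixels < thresholdPixels;
}
[/CODE]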
 
Specifically, I'd say the massively reduced LOD pop-in is extremely impressive. Just as temporal anti-aliasing creates a more stable image quality, the same could be said for Nanite in UE5 with regards to "geometric stability".

That is nice. I’m playing Far Cry 4 and the weird pixelated LOD transition mechanic in that game is atrocious.

We'll have to see... but I think there's going to be some pretty incredible looking games come from it. I can't wait to see what The Coalition has in store for Gears with UE5, as well as any other projects they have cooking up.

Yeah Nanite should enable some amazing environmental detail. Other objects like weapons hopefully get the Nanite treatment too.
 
Really hoping for Nanite terrain and object tessellation/displacement support.
An additional displacement layer should be quite awesome for reducing disk space requirements and as a tool for artists.

Also, a Nanite water mesh would be awesome. (The new water system renders it as an opaque surface, so it should be feasible.)
 
I wouldn't be surprised if Epic quite quickly finishes R&D on multiresolution ways of rendering aggregates (foliage/fur).

It's the lesser problem compared to animation.
 