Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

That's a pretty big assumption to make based on a changelog.

I found this quite interesting; it sounds like a potential alternative to using shadow maps:

  • (Experimental) You can enable initial support for native ray tracing and path tracing of Nanite meshes by setting r.RayTracing.Nanite.Mode=1. This approach preserves all detail while using significantly less GPU memory than zero-error fallback meshes. Early tests show a 5-20% performance cost over ray tracing a low-quality fallback mesh, but results may vary based on content.
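For anyone wanting to try it: console variables like this are typically set in a project's Config/DefaultEngine.ini under the [SystemSettings] section (standard UE convention; the cvar name is taken verbatim from the changelog above):

```ini
[SystemSettings]
; Experimental: native ray tracing / path tracing of Nanite meshes
r.RayTracing.Nanite.Mode=1
```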

:oops:

Taking a stab: meshlet tracing is certainly a thing, as is LOD selection if you're clever enough (see AMD's recent paper). I suppose they could insert super-low-detail meshlet LODs as leaves into the BVH, since those are already present in Nanite. This could lower code overhead and make scaling easier. Consoles could eventually trace the lowest-LOD meshlets, while high-end PCs and virtual production could hypothetically scale up to the highest meshlet LOD representations.
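The LOD-selection idea can be sketched with a toy screen-space-error heuristic. All numbers and the error model here are made up for illustration; this is not Nanite's actual selection logic:

```python
import math

def select_meshlet_lod(view_distance, base_error, num_lods,
                       fov_y=1.0, screen_height=1080, max_pixel_error=1.0):
    """Pick a meshlet LOD by projecting its geometric error to screen space.

    Hypothetical sketch: each coarser LOD roughly doubles the geometric
    error, and we take the coarsest LOD whose projected error stays under
    a one-pixel threshold.
    """
    # Pixels per world unit at this distance (simple perspective projection).
    pixels_per_unit = screen_height / (2.0 * view_distance * math.tan(fov_y / 2.0))
    for lod in range(num_lods - 1, -1, -1):  # try coarsest first
        error_world = base_error * (2.0 ** lod)
        if error_world * pixels_per_unit <= max_pixel_error:
            return lod
    return 0  # fall back to the finest LOD

# Far away -> coarse LOD; up close -> finest LOD.
print(select_meshlet_lod(view_distance=500.0, base_error=0.01, num_lods=8))  # -> 5
print(select_meshlet_lod(view_distance=1.0, base_error=0.01, num_lods=8))    # -> 0
```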

Anyway, they also confirmed there was indeed overdarkening on foliage and added a fix for it. Tracing through the surface cache alpha is clever, and seems obvious in retrospect.
 
That is great. Lumen reaching 60fps is what most expected.
But...

"High scalability level is set for a 60 fps budget. However, note that achieving a 60 fps budget with acceptable quality is still a work in progress."

What does that mean? What is their acceptable quality? I want to see some pics/videos comparing the loss in quality between Lumen Epic (30fps) and High (60fps).
 
That is great. Lumen reaching 60fps is what most expected.
But...

"High scalability level is set for a 60 fps budget. However, note that achieving a 60 fps budget with acceptable quality is still a work in progress."

What does that mean? What is their acceptable quality? I want to see some pics/videos comparing the loss in quality between Lumen Epic (30fps) and High (60fps).

I think they're talking about indoor lighting. It will be interesting to see with Silent Hill 2 whether the game continues to use software Lumen.
 
That is great. Lumen reaching 60fps is what most expected.
But...

"High scalability level is set for a 60 fps budget. However, note that achieving a 60 fps budget with acceptable quality is still a work in progress."

What does that mean? What is their acceptable quality? I want to see some pics/videos comparing the loss in quality between Lumen Epic (30fps) and High (60fps).
I think it's in reference to the current "high" scalability. High drastically cuts Lumen GI and reflection quality to bring performance up (so as to attempt to target 60 fps on consoles).
[Attachment: high.png — screenshot of Lumen quality settings at the High scalability level]
1/16th res of 1080p internal resolution for lighting is not exactly going to look good for a lot of things.
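For concreteness, assuming "1/16th res" means 1/16 of the pixel count (so each axis divided by 4, since 4 × 4 = 16), the lighting buffer at a 1080p internal resolution works out to:

```python
import math

def traced_resolution(width, height, pixel_fraction):
    """Buffer size when tracing only `pixel_fraction` of the pixels,
    assuming rays are laid out on a uniform sparser grid."""
    scale = math.isqrt(round(1 / pixel_fraction))  # e.g. 1/16 -> 4 per axis
    return width // scale, height // scale

print(traced_resolution(1920, 1080, 1 / 16))  # -> (480, 270)
print(traced_resolution(1920, 1080, 1 / 4))   # -> (960, 540)
```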
 
1/16th res of 1080p internal resolution for lighting is not exactly going to look good for a lot of things.
Once devs have gotten a handle on it, I expect they'll be able to process the results for better information recovery (or fabrication). I remember a paper on how very sparse sampling can be reconstructed into very accurate shadowing by applying a smart algorithm. It's something ideally suited to ML too: feed it a 1/16th-scale reflection map and it can coarsely upscale it. Heck, apply DLSS to the RT buffer!
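As a strawman for what such reconstruction starts from, here is a plain bilinear upscale of a tiny low-res "reflection" grid (illustrative only; a real filter or ML upscaler would also use depth, normals, and temporal history, not just the four nearest low-res samples):

```python
def bilinear_upscale(low, factor):
    """Upscale a 2D grid of floats by `factor` with bilinear interpolation."""
    h, w = len(low), len(low[0])
    out = [[0.0] * (w * factor) for _ in range(h * factor)]
    for y in range(h * factor):
        for x in range(w * factor):
            # Map the high-res pixel back into low-res coordinates.
            fy = min(y / factor, h - 1)
            fx = min(x / factor, w - 1)
            y0, x0 = int(fy), int(fx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            ty, tx = fy - y0, fx - x0
            # Blend the four surrounding low-res samples.
            top = low[y0][x0] * (1 - tx) + low[y0][x1] * tx
            bot = low[y1][x0] * (1 - tx) + low[y1][x1] * tx
            out[y][x] = top * (1 - ty) + bot * ty
    return out

tiny = [[0.0, 1.0], [1.0, 0.0]]     # a 2x2 stand-in for a sparse reflection buffer
up = bilinear_upscale(tiny, 4)      # -> smooth 8x8 grid
```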
 
1/16th res of 1080p internal resolution for lighting is not exactly going to look good for a lot of things.
That seems pessimistic. Your screenshot is the worst-case scenario (all glossy/reflective surfaces) and it looks fine. Obviously PC will look better than consoles, but for fully dynamic GI on consoles, that resolution looks completely acceptable to me.
 
That seems pessimistic. Your screenshot is the worst-case scenario (all glossy/reflective surfaces) and it looks fine. Obviously PC will look better than consoles, but for fully dynamic GI on consoles, that resolution looks completely acceptable to me.

Worst-case scenario? I'm pretty sure something like Spiderman would show up those low-res reflections far worse.
 
That is great. Lumen reaching 60fps is what most expected.
But...

"High scalability level is set for a 60 fps budget. However, note that achieving a 60 fps budget with acceptable quality is still a work in progress."

What does that mean? What is their acceptable quality? I want to see some pics/videos comparing the loss in quality between Lumen Epic (30fps) and High (60fps).
I wonder if Tekken 8 is using Lumen or not. It's obviously 60fps, but hmm.
 
I think it's in reference to the current "high" scalability. High drastically cuts Lumen GI and reflection quality to bring performance up (so as to attempt to target 60 fps on consoles).
View attachment 7546
1/16th res of 1080p internal resolution for lighting is not exactly going to look good for a lot of things.

I'm sure they'll be adding generalized ReSTIR in the nearish future, which should be able to push up quality on static scenes / larger, relatively contiguous surfaces. That should probably get them to their "acceptable" quality level.

Off the top of my head, they could also switch to fused BVH LOD to go further with LOD selection, early-out specular traces to worldspace probes once the ray gets far enough away to not notice (probably with some sort of correction term), and possibly try switching from tris to quads for ray testing. Quads can be faster, but this may conflict with their goal of using the Nanite meshlet trees natively. They could still build quad HLOD proxies.
 
I think it's in reference to the current "high" scalability. High drastically cuts Lumen GI and reflection quality to bring performance up (so as to attempt to target 60 fps on consoles).
View attachment 7546
1/16th res of 1080p internal resolution for lighting is not exactly going to look good for a lot of things.
Couldn't they use CBR for the reflections like Insomniac successfully do in their games?
 
Which would add another 2ms and thus you're not really saving anything?
You're saving expensive hardware. Either you include enough RT performance to sample 1/4 rays per pixel, or stick with enough to sample 1/16th and upscale to similar quality. This can also be done independently of and in parallel to the rendering, with another frame of latency, so it's 2 ms of work that doesn't add 2 ms to every frame's generation time but, at worst, 2 ms of frame latency.
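The frame-time-vs-latency claim can be illustrated with a toy two-stage timing model. The 14 ms / 2 ms figures are made up, and it assumes the parallel queue genuinely has spare capacity, which is exactly the point being debated:

```python
def frame_times(render_ms, upscale_ms, pipelined):
    """Toy model: upscale the RT buffer either in-line with the frame,
    or pipelined one frame behind on a parallel queue."""
    if pipelined:
        frame_time = max(render_ms, upscale_ms)  # stages overlap across frames
        latency = render_ms + upscale_ms         # result arrives one stage late
    else:
        frame_time = render_ms + upscale_ms      # serial on one queue
        latency = frame_time
    return frame_time, latency

print(frame_times(14.0, 2.0, pipelined=False))  # -> (16.0, 16.0)
print(frame_times(14.0, 2.0, pipelined=True))   # -> (14.0, 16.0)
```

Same 2 ms of work either way; pipelining moves it out of the per-frame budget and into latency.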
 
This can also be done independently of and in parallel to the rendering
But doing things on another queue in parallel does not make them free. Not if you already do modern software which aims to saturate HW using multiple queues anyway.
Your argument only holds if, e.g., you have a single-threaded engine and some guy opens the door and excitedly shouts: 'Hey, we've had quad cores for 10 years! Let's use the other 3 cores too, so we can get stuff done for free!' :D
That's no longer the case. At least i hope so.

Couldn't they use CBR for the reflections like Insomniac successfully do in their games?
CBR cuts sample count in half. They already reduce it to 1/4 and 1/16.
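To put numbers on those rates (assuming a 1080p internal resolution purely for illustration):

```python
def rays_per_frame(width, height, rate):
    """Primary reflection rays per frame at a given sampling rate."""
    return int(width * height * rate)

full = rays_per_frame(1920, 1080, 1)           # 2,073,600
cbr = rays_per_frame(1920, 1080, 1 / 2)        # 1,036,800 -- checkerboard halves it
quarter = rays_per_frame(1920, 1080, 1 / 4)    #   518,400
sixteenth = rays_per_frame(1920, 1080, 1 / 16) #   129,600
print(full, cbr, quarter, sixteenth)
```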

I'm sure they'll be adding generalized ReSTIR in the nearish future, which should be able to push up quality on static scenes / larger, relatively contiguous surfaces. That should probably get them to their "acceptable" quality level.
Not sure how much ReSTIR can help with such a low sample count. Also, I guess they already have prefiltered / low-res geometry, so an approximation of cone tracing, which already gives them a better initial condition, and making point sampling more effective isn't their approach here. But really not sure.

If you ask me, their reflection results are already acceptable.
It's just that everybody is a bit spoiled by HW raytracing results, which have their price, but can be used as well where applicable.
Personally I'll enjoy the improvement over SSR and cubemap hacks. (If the frame rate is high enough for joy.)
For a game like Spiderman, a single planar reflection plane for the building in focus, plus some blending to alternatives for the rest, wouldn't be that bad either; see Hitman.
 
But doing things on another queue in parallel does not make them free. Not if you already do modern software which aims to saturate HW using multiple queues anyway.
Your argument only holds if, e.g., you have a single-threaded engine and some guy opens the door and excitedly shouts: 'Hey, we've had quad cores for 10 years! Let's use the other 3 cores too, so we can get stuff done for free!' :D
That's no longer the case. At least i hope so.
Firstly, GPUs aren't saturated yet. Secondly, I thought the ML stuff on GPUs was largely going unused save for DLSS. Are Nvidia Tensor cores largely saturated during games without doing RT buffer upscaling? Thirdly, what's the other option? RT performance can't be improved in existing GPUs; if it's not adequate to get higher quality, aren't workarounds the only option? I mean, upscaling is completely welcomed as a way to render fewer pixels and then do work to approximate the high-end target with less overall effort. Why is the same principle applied to RT reflections a bad idea? Obviously it'd be a choice for games to prioritise, where some will need every spare cycle for other work, but it's also a point where I think smart RT buffer processing will help get more overall quality per cycle by working smarter, not harder, once again.
 
If you ask me, their reflection results are already acceptable.
It's just that everybody is a bit spoiled by HW raytracing results, which have their price, but can be used as well where applicable.
Personally I'll enjoy the improvement over SSR and cubemap hacks. (If the frame rate is high enough for joy.)

In the same vein, upscaled high quality reflections can be had at high frame rates and are therefore equally (or more) acceptable.
 
Firstly, GPUs aren't saturated yet. Secondly, I thought the ML stuff on GPUs was largely going unused save for DLSS. Are Nvidia Tensor cores largely saturated during games without doing RT buffer upscaling? Thirdly, what's the other option?
1. Depends on the engine and the size of the GPU. I assume modern engines already do enough async stuff that adding one more async task can't be free. (That's all I wanted to say with my post.)
2. Tensor cores only do math ops. To use them, regular shader cores must run a program and call those instructions, so you can't get ML async work for free just because it uses Tensor cores. (But I fully agree upscaling should be applied to the expensive RT stuff, not the whole frame. Though that's what UE actually does: they have reflections at 1/16 and final image upscaling at something like 1/2, for example.)
3. Scratch Lumen and use a faster GI algorithm. I have the impression UE5 is mainly a 30fps engine, and Lumen seems to be the cause.

I mean, upscaling is completely welcomed as a way to render fewer pixels and then do work to approximate the high-end target with less overall effort. Why is the same principle applied to RT reflections a bad idea? Obviously it'd be a choice for games to prioritise, where some will need every spare cycle for other work, but it's also a point where I think smart RT buffer processing will help get more overall quality per cycle by working smarter, not harder, once again.
I agree in principle, but there may be a different cost/benefit ratio. DLSS, for instance, upscales the whole frame and gives a constant speedup pretty much independent of the scene and game. So the cost is predictable and well spent.
But if you use another ML pass just for reflections, probably at similar cost, the speedup depends on how many sharp reflections are currently visible. In most images, sharp reflections are not very noticeable, and it's very scene dependent: looking at a mirror at first, then some frames later at a wall of concrete. So the cost of reflections isn't constant but fluctuates over time, which I think is the primary problem. And ML can't help with this.
The best general option for now seems to be to put little focus on accurate reflections and instead invest in shadows and GI, which are both visually more important and cause more constant workloads, so optimization there is worth it.
The options for specific cases are then RT, planar reflections, dynamic cubemaps, or eventually some new ideas like spending accuracy only on light sources, because on most materials that's all we actually see in reflections. On the red robot image, we could search for bright spots in the reflection at the 'High' setting, refine resolution for those, and get higher quality only there. It looks like we could get the same quality as the 'Epic' setting by refining 10% of the image area.
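The bright-spot idea could be prototyped as a simple threshold mask over the low-res reflection buffer. The threshold and the sample data here are entirely made up:

```python
def refinement_mask(reflection, threshold):
    """Mark low-res reflection pixels bright enough to deserve extra rays,
    and report what fraction of the buffer would need refinement."""
    mask = [[v > threshold for v in row] for row in reflection]
    refined = sum(v for row in mask for v in row)
    total = len(reflection) * len(reflection[0])
    return mask, refined / total

# Toy luminance values standing in for a low-res reflection buffer.
buf = [
    [0.1, 0.2, 0.9, 0.1],
    [0.1, 0.8, 0.95, 0.1],
    [0.1, 0.1, 0.2, 0.1],
]
mask, fraction = refinement_mask(buf, threshold=0.5)
print(fraction)  # fraction of pixels needing extra rays -> 0.25
```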

I think there are many options left to try, including ML methods. We will see progress in some form, and that's what we want.
There's already a major uplift vs. prev-gen reflections in UE5, so we can be happy with the current state as well, imo.
Maybe I missed the proper context of the discussion, but it sounded like many people complained about blurry reflections.
Though, isn't that nitpicking? I mean, the image with the red robot is not really impressive or realistic at all. It's gamey, maybe by intent, maybe not. But it doesn't look like better reflections would make a big difference to this impression as a whole.
 