Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

Technically yes, but the claims here are that software-based Lumen specifically does not offer offscreen reflections.
Software Lumen totally offers offscreen reflections - but not of certain types (no skinned meshes, for example). They are also dramatically lower quality.
We badly need footage of these reflections on cards without HW RT. I've looked up so many videos of non-HW-RT cards, and nobody bothers to get close to one of these frickin' windows :)

On a side note, Lumen also does reflections. It is that clever mix of SSR, RT and cube maps.
This is a rough shot and my shaders are still compiling for software Lumen, as you can see on the coat (this takes like 5 hours?), but this is the gist.
[Screenshot: software Lumen reflections, shaders still compiling]
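To make that "clever mix" concrete, here is a purely conceptual sketch of the fallback chain such a hybrid could use (hypothetical function names, not Lumen's actual code): try a cheap screen-space trace first, fall back to a scene trace (software distance fields or HW RT), and finally fall back to a cube-map style probe:

```cpp
#include <optional>

struct Ray      { float Origin[3]; float Direction[3]; };
struct Radiance { float R, G, B; };

// Stand-ins for the three reflection sources mentioned above; each returns a value
// only when it can actually resolve the ray. All names here are hypothetical.
std::optional<Radiance> TraceScreenSpace(const Ray&) { return std::nullopt; }     // cheap, on-screen only
std::optional<Radiance> TraceScene(const Ray&)       { return std::nullopt; }     // SW distance fields or HW RT
Radiance SampleReflectionProbe(const Ray&)           { return {0.f, 0.f, 0.f}; }  // cube-map last resort

// Conceptual fallback chain: use the cheapest source that can resolve the ray,
// falling through to the next one whenever the previous source misses.
Radiance ResolveReflection(const Ray& InRay)
{
    if (auto Hit = TraceScreenSpace(InRay)) return *Hit;
    if (auto Hit = TraceScene(InRay))       return *Hit;
    return SampleReflectionProbe(InRay);
}
```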
 
A Vega 56 is faster than an RTX 2060; there is no way this demo is using any sort of hardware ray tracing.

Yeah the results are pretty devastating.

I do think it is using HW-RT though. My possible explanation is that the BVH structure built with HW-RT is overflowing the VRAM, which is why the 5600XT performs a lot better especially at WQHD.

Still, that should not be a problem in a full game, as by then DirectStorage and Sampler Feedback will likely be used, as the PCGH article suggests too. Plus scalable texture settings of course. We already know the textures in that demo are for 8 GB cards and up.
 
A Vega 56 is faster than an RTX 2060; there is no way this demo is using any sort of hardware ray tracing.

Maybe it does use HW ray tracing, but perhaps not in a quantity or quality that differentiates between ancient Vega and modern RTX hardware.

Yeah the results are pretty devastating.

I do think it is using HW-RT though. My possible explanation is that the BVH structure built with HW-RT is overflowing the VRAM, which is why the 5600XT performs a lot better especially at WQHD.

Still, that should not be a problem in a full game, as by then DirectStorage and Sampler Feedback will likely be used, as the PCGH article suggests too. Plus scalable texture settings of course. We already know the textures in that demo are for 8 GB cards and up.

Test GPUs with the same amount of VRAM; do they show the same weirdness?
 
Test GPUs with the same amount of VRAM; do they show the same weirdness?

The 2070 Super also performs a tiny bit worse than the 5700XT. But not to this extent.

It also seems like my theory about VRAM overflowing has already been debunked by that PCGH article showing the out-of-memory-budget error on these cards.

The 5600XT is actually 300 MB more over budget than the 2060.

Now I am even more confused. Aaaah!

Okay, another possible explanation: The AMD drivers are simply more optimized for UE5 right now. Which does make sense, as Nvidia has not been releasing drivers specifically for UE5 yet.

But if HW-RT were being used properly, the 6600 should also be significantly faster than the 5700XT, yet it performs a lot worse. That could also be down to HW-RT tanking performance, but then why isn't the 2070 Super performing much worse than the 5700XT as well...??



Okay I give up. :confused:
 
The 2070 Super also performs a tiny bit worse than the 5700XT. But not to this extent.

It also seems like my theory about VRAM overflowing has already been debunked by that PCGH article showing the out-of-memory-budget error on these cards.

The 5600XT is actually 300 MB more over budget than the 2060.

Now I am even more confused. Aaaah!

Okay, another possible explanation: The AMD drivers are simply more optimized for UE5 right now. Which does make sense, as Nvidia has not been releasing drivers specifically for UE5 yet.

But if HW-RT were being used properly, the 6600 should also be significantly faster than the 5700XT, yet it performs a lot worse. That could also be down to HW-RT tanking performance, but then why isn't the 2070 Super performing much worse than the 5700XT as well...??



Okay I give up. :confused:

Indeed weird. I was giving VRAM a thought when you mentioned it, but apparently that's not it. Even accounting for non-HW RT, these GPUs should perform differently against each other than they do there.
The previous Land of Nanite (PC/XSX) demo didn't behave like this, right?
 
Indeed weird. I was giving VRAM a thought when you mentioned it, but apparently that's not it. Even accounting for non-HW RT, these GPUs should perform differently against each other than they do there.
The previous Land of Nanite (PC/XSX) demo didn't behave like this, right?

You are referring to Valley of the Ancient, right? Yes, that one did not behave that way. But it defaulted to software Lumen, so it doesn't tell us much when we want to inspect HW ray tracing performance.

Also really like Alex's screenshot above. The difference in reflection quality is pretty big. But yeah, we have to wait until all shaders are compiled.
 
You are referring to Valley of the Ancient, right? Yes, that one did not behave that way. But it defaulted to software Lumen, so it doesn't tell us much when we want to inspect HW ray tracing performance.

Also really like Alex's screenshot above. The difference in reflection quality is pretty big. But yeah, we have to wait until all shaders are compiled.

Yes, Valley of the Ancient (Land of Nanite was the PS5 demo; I mixed those up). I meant general GPU raster performance, say a 5700XT vs. a 2070/S. That was almost a year ago, so there should be enough tests out there.


Just looking across YT, it seems to run (much) better, and arguably looks more stunning and closer to the original UE5 PS5 tech demo. As someone else noted, the new Matrix demo isn't that impressive (it is, but not as much as the hype suggested); at least it doesn't blow away the previous tech demos, which had more of an awe effect, but ok :p
I played the Matrix Awakens demo on PS5, Valley of the Ancient on a 2080 Ti, and watched the PS5 UE5 demo via DF.
 
The difference in quality between HW-RT and non-RT is huge. That's okay; software Lumen is good for performance when hardware-accelerated ray tracing isn't available.
 
Patience is a far better path than tolerance. We all wait months to years for titles we are excited about so what’s a couple of hours (or days)? Devs should at least provide an option.
You're talking about gamers that pre-purchase and preload games to ensure they can play them at unlock time.

Which then makes me think: if they have a preload, then allow some kind of executable that generates the shaders. FOMO might actually have a benefit?
 
It's mostly the *driver* PSO/shader compilation that is the problem here. Since that depends on your specific SKU/driver combination, it can't easily be done in advance by the game (although IHVs will sometimes prepopulate caches for popular upcoming games). Sometimes it can be done at "load time" instead, but obviously making people wait for ages the first time they run the game (or whenever they update drivers) isn't ideal either. The other related issue is that there's not always a good way to know in advance which PSOs will actually be needed, so doing them *all* in advance is infeasible. That's why the PSO caching thing in Unreal basically logs which permutations are needed as you play the game, then lets you precache those. That said, it's still kind of tedious without significant automation (which is beside the point of this demo and would obfuscate the purpose to the developers somewhat).
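To make that record-then-precache idea concrete, here is a minimal conceptual sketch (not Unreal's actual API; every type and function name below is hypothetical): log which pipeline permutations get bound while playing, persist the set, and compile that set up front on the next launch.

```cpp
#include <cstddef>
#include <cstdint>
#include <fstream>
#include <unordered_set>

// Hypothetical description of one pipeline-state-object permutation.
struct PSODescription {
    uint64_t ShaderHash;       // which shader permutation
    uint64_t RenderStateHash;  // blend/depth/raster state that can affect compilation
    bool operator==(const PSODescription& Other) const {
        return ShaderHash == Other.ShaderHash && RenderStateHash == Other.RenderStateHash;
    }
};

struct PSODescriptionHasher {
    std::size_t operator()(const PSODescription& Desc) const {
        return static_cast<std::size_t>(Desc.ShaderHash ^ (Desc.RenderStateHash * 0x9E3779B97F4A7C15ull));
    }
};

// Stand-in for the real graphics-API pipeline creation call (D3D12/Vulkan);
// this is where the driver-side shader compilation actually happens.
void CompilePSO(const PSODescription&) {}

// During gameplay: every time the renderer binds a PSO, remember its description,
// then persist the set so the next run can precompile it on a loading screen.
class PSOUsageRecorder {
public:
    void OnPSOUsed(const PSODescription& Desc) { SeenPSOs.insert(Desc); }

    void SaveCache(const char* Path) const {
        std::ofstream Out(Path, std::ios::binary);
        for (const PSODescription& Desc : SeenPSOs) {
            Out.write(reinterpret_cast<const char*>(&Desc), sizeof(Desc));
        }
    }

private:
    std::unordered_set<PSODescription, PSODescriptionHasher> SeenPSOs;
};

// On the next launch (or after a driver update): replay the recorded set so the
// compilation hitches happen up front instead of mid-game.
void PrecompileRecordedPSOs(const char* Path) {
    std::ifstream In(Path, std::ios::binary);
    PSODescription Desc;
    while (In.read(reinterpret_cast<char*>(&Desc), sizeof(Desc))) {
        CompilePSO(Desc);
    }
}
```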

PSO caching on PC is also made more complex by the fact that you have to take the conservative set of all state that any GPU/driver ever needed to bake into a shader compilation, and have unique PSOs for all of that. Individual drivers/hardware will only need unique shaders for some subset of those different states, but which ones will vary from SKU to SKU. Drivers will typically try and hash things to avoid recompiling based on state that they know doesn't actually affect the output, but that doesn't really help the high level issue that we had to generate all those "potential" permutations in the first place.
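As a toy illustration of that last point (again hypothetical names and fields, not any real driver's behaviour): the game-side key carries the conservative superset of state, while one particular driver might key its own compile cache on just the subset it actually bakes into shader code, so several game-side permutations collapse into a single compile.

```cpp
#include <cstdint>

// Game-side PSO key: the conservative superset of state that *some* driver,
// somewhere, might need to specialize shader code on.
struct GameSidePSOKey {
    uint64_t ShaderHash;
    uint32_t BlendState;
    uint32_t DepthStencilState;
    uint32_t RasterizerState;
    uint32_t RenderTargetFormats;
};

// A hypothetical driver that programs blend/depth/raster state as fixed-function
// registers would only recompile when the shader or the render-target formats change,
// so its internal compile-cache key ignores the rest of the game-side key.
uint64_t DriverCompileKey(const GameSidePSOKey& Key) {
    return Key.ShaderHash ^ (static_cast<uint64_t>(Key.RenderTargetFormats) << 32);
}
```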

I agree with the commentary on the DF podcast - it would be great for there to be easier ways to automate this in Unreal though, as while IMO it isn't a big issue if a dev-focused tech demo doesn't have a precompiled PSO cache, some games have shipped in a similar state, which should never really be the case.

That said, on PC there's always a delicate dance between the game, the OS and the graphics driver. As I noted earlier, PSO compilation is a big piece of that, but there are similar problems with things like allocation patterns and the like. I imagine now that UE5 is out in the wild there will be significantly more attention on tuning it from all parties. As soon as anyone starts to benchmark anything the IHVs get really interested, really quickly ;)
That makes sense.
For a demo it is really not a big issue, but we are seeing that "struggle" more and more in retail games, including on consoles (which are a more fixed set of hardware/driver).
IMO I would rather have a button or option to compile it beforehand than have the issues during gameplay.

Edit - I wrote about consoles being a more fixed set of hardware/driver before reading that one.
It's actually one step further - on most consoles you can ship the actual compiled GPU code, because effectively the equivalent of the user-mode driver is compiled right into the application and can't be changed once you have shipped it without patching the game, so it's safe to bake it all down.
So that is what I thought.
The issue is that even in console games we are having more issues with shader compilation this gen than, for example, in the PS4/XB1 gen... on console it should be easily avoided, from what you say.

It is becoming rare for a console game DF reviews not to have at least some issues with shader compilation.

Saw the always reliable MJP speculating about some sort of peer-to-peer distributed/torrent kind of shader cache. So if someone with your setup compiles and uploads, you can just grab that and go. It'd be a more formal and easier system than what people already do for emulating more modern consoles. I'm sure Valve could manage it for Steam at the very least, and eventually Microsoft whenever they get their bureaucratic collective selves into gear.
That is probably a bit too advanced or hard for most users.
But I like the idea of an option where the developer itself shares the most common compiled shader caches on a server and the game automatically downloads them.
Something like the most common GPU/driver combinations.
I don't know how much bandwidth that would cost the developer... so maybe it is not a good idea either.
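As a rough sketch of what sharing caches for "the most common GPU/driver combinations" could mean in practice (hypothetical names and URL, not an existing store or launcher API), the downloadable bundles would have to be keyed on the exact device, driver build and game build, since a compiled cache is only valid for that exact pairing.

```cpp
#include <cstdint>
#include <string>

// A compiled shader/PSO cache is only valid for one exact GPU + driver + game build,
// so a shared-cache service would key its downloadable bundles on that identity.
struct CacheBundleKey {
    uint32_t VendorId;          // e.g. 0x10DE for NVIDIA, 0x1002 for AMD
    uint32_t DeviceId;          // the specific GPU SKU
    std::string DriverVersion;  // the exact user-mode driver build
    std::string GameBuildId;    // shaders change between game patches too
};

// Hypothetical lookup: the client asks the service for a prebuilt bundle matching its
// key; if none exists, it compiles locally (and could upload the result for others).
std::string MakeBundleUrl(const CacheBundleKey& Key) {
    return "https://example.com/shader-cache/" + std::to_string(Key.VendorId) + "/" +
           std::to_string(Key.DeviceId) + "/" + Key.DriverVersion + "/" + Key.GameBuildId;
}
```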
 
Still, it doesn't explain how the 5700XT is delivering the same or higher fps than RTX cards.
Not wanting to deny any theory, but did you guys compare the reflections between the 5700XT and the RTX cards? Performance can be the same while the RTX cards show better reflections.

Software Lumen totally offers offscreen reflections - but not of certain types (no skinned meshes, for example). They are also dramatically lower quality.

This is a rough shot and my shaders are still compiling for software Lumen, as you can see on the coat (this takes like 5 hours?), but this is the gist.
Thanks.
The difference is very very noticeable.
 
Not wanting to deny any theory, but did you guys compare the reflections between the 5700XT and the RTX cards? Performance can be the same while the RTX cards show better reflections.

If so, then I will be very curious how future games handle settings.

Will the Epic and High settings be locked to HW-RT cards only?

Or will both cards run max settings, but the RT-capable card simply run them with better reflections, like we see here in this sample? If so, that may be quite problematic for benchmarks, as RTX/RDNA2 cards would automatically perform worse compared to RDNA1/Pascal. Or they do perform similarly, like I have been suggesting, but even then max settings on the non-DXR card are not really max settings, are they?

Anyway, I do think the first option is the most likely scenario.

Settings in the future could look like this (see the sketch after the list):

Epic - HW RT Lumen, targeted at 30 FPS on next gen consoles @ 1080p with TSR to 4K
High - HW RT Lumen, targeted at 60 FPS on next gen consoles @ 1080p with TSR to 4K
Medium - Software Lumen for older hardware.
Low - Lumen disabled for really old hardware.
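
A minimal sketch of that preset mapping (hypothetical enum and function names, not Unreal's actual scalability system) could look like this:

```cpp
enum class GIQuality { Low, Medium, High, Epic };
enum class GIMethod  { Disabled, SoftwareLumen, HardwareLumen };

// Hypothetical mapping of the presets speculated above. In the "locked" scenario the
// two top presets would simply not be selectable on cards without HW-RT support.
GIMethod SelectGIMethod(GIQuality Quality) {
    switch (Quality) {
        case GIQuality::Epic:
        case GIQuality::High:   return GIMethod::HardwareLumen;
        case GIQuality::Medium: return GIMethod::SoftwareLumen;
        case GIQuality::Low:
        default:                return GIMethod::Disabled;
    }
}
```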

Yep, that makes the most sense.
 
You seem to be going somewhere with this; it makes sense, I think, and explains why there's not too much gain in performance when enabling HW RT. There just aren't enough ray-traced effects to see these gains. That's bound to happen in the future though.

Anyway, I truly hope there's someone out there remaking just this little part of Unreal 1 in UE5 on PC.


That's one remake I'd really like to see. The original Unreal remade with updated graphics but all gameplay kept the same.

Regards,
SB
 
That's one remake I'd really like to see. The original Unreal remade with updated graphics but all gameplay kept the same.

Regards,
SB

I truly hope we will see something like that someday. Going for max fidelity while staying as true to the original as possible. I think the opening scene, coming out of the spaceship in the very first level, is the most stunning graphics I'd ever seen at the time (of course).



Coming to PC as well; going to be something for DF to do some comparisons on ;)
 