Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

With the DLSS slider set to 100 (DLAA).
Even with these settings there is noticeable pop-in of vegetation and some other LOD stuff. I noticed it around the heads in the water and the tufts of grass after the river portion is over.

Yeah, there's a bit of vegetation pop-in at max settings. Ton of shadow artifacting and pop-in with RT off. It's hard to get a good look at the texture work because of the aggressive DoF. Geometry detail was inconsistent: some really detailed stuff and some really blobby stuff.

Overall it looks good and runs pretty well. Got to 60fps at 1440p, high settings, RT off, on a 3090. Probably not representative of gameplay though.
 
Looks pretty gorgeous to me; reminds me of that old 3DMark nature scene, was it 2001?

I get a 30fps average with a 26fps low at 3840x1600 DLSS Quality with everything at max (unplayable), but going down to DLSS Performance and turning frame gen on results in a buttery smooth 71fps with a low of 59fps.

Interestingly, the resolution scaling can be changed in 1% intervals. So while you can manually set DLSS Performance by selecting 50%, if like me you are a tiny bit below 60fps on the lows (not that it matters with G-Sync), you could bump it down to just 49% and likely clear it up.
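For a rough sense of what a single 1% step buys you, here is a minimal back-of-the-envelope sketch, assuming the slider is a straight per-axis percentage of the output resolution that rounds to the nearest pixel (an assumption about how the game handles it) and using the 3840x1600 output from the post above:

```python
# Internal-resolution arithmetic for a per-cent resolution-scale slider.
# Assumes the percentage applies per axis and rounds to the nearest
# pixel; the game's actual rounding behaviour may differ.
output_w, output_h = 3840, 1600  # ultrawide output used above

for scale in (0.50, 0.49):
    w, h = round(output_w * scale), round(output_h * scale)
    print(f"{scale:.0%}: {w}x{h} ({w * h / 1e6:.2f} MP)")

# 50%: 1920x800 (1.54 MP)
# 49%: 1882x784 (1.48 MP)  -> roughly 4% fewer pixels to shade
```

So one step on the slider is only about a 4% change in shaded pixels, which is why it works as a fine-grained lever for clearing a frame-rate target.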
 
Performance numbers for GPUs are out. A 4080 is 4x faster than a 7900 XTX at native 1440p, and the 4090 is probably 5x faster at this point, while a 3080 Ti is 2.3x faster than a 6900 XT at 1440p, so Ada definitely has a huge advantage here.


 
Gave it a quick run... first impressions so take with a grain of salt:

1) The geometric density is reasonable. Nothing earth shattering but not distractingly low either.
2) Performance in the non-RT mode is "ok" but not great. I suspect dropping some settings could help but I'm curious where most of it is going here.
3) Shadows in the non-RT mode are blurry and resolution transitions are obvious. I suspect this is just CSMs, which is pretty unfortunate from a quality perspective.
4) Shadow quality looks a lot better in the RT mode, but many of the foliage shadows do not animate in the RT mode.
5) In the RT path the denoising of foliage when it is initially revealed is pretty distracting even with a relatively slow flythrough. I'm worried how this will look with fast-paced camera movement.
6) Performance of the RT mode is not great, but kind of to be expected.

I'm not sure what they are actually using in terms of tech though, and presumably the final game will let us see it shown off in more environments. This demo flythrough is probably not the best example of any lighting or GI tech in particular, as it looks closer to the sort of thing you could accomplish last gen with baked lighting and good art.
 
Not only compared to Radeons but also in comparison to Turing and Ampere.

Yes, it's really interesting actually. In non-RT mode all the GPUs scale as per historical raster expectations. But once you turn on RT (to max, at least), Turing has a big jump over RDNA 2/3 (but there's no major change between the AMD generations), Ampere has a big jump over Turing, and Ada has a big jump over Ampere.

Nvidia's generational RT improvements are being highlighted like crazy in this game!
 
Gave it a quick run... first impressions so take with a grain of salt:

1) The geometric density is reasonable. Nothing earth shattering but not distractingly low either.
2) Performance in the non-RT mode is "ok" but not great. I suspect dropping some settings could help but I'm curious where most of it is going here.
3) Shadows in the non-RT mode are blurry and resolution transitions are obvious. I suspect this is just CSMs, which is pretty unfortunate from a quality perspective.
4) Shadow quality looks a lot better in the RT mode, but many of the foliage shadows do not animate in the RT mode.
5) In the RT path the denoising of foliage when it is initially revealed is pretty distracting even with a relatively slow flythrough. I'm worried how this will look with fast-paced camera movement.
6) Performance of the RT mode is not great, but kind of to be expected.

I'm not sure what they are actually using in terms of tech though, and presumably the final game will let us see it shown off in more environments. This demo flythrough is probably not the best example of any lighting or GI tech in particular, as it looks closer to the sort of thing you could accomplish last gen with baked lighting and good art.
I gotta agree with what you are saying - I prefer the full RT mode's reflections on water to the weird SSR stuff in non-RT mode, but otherwise there are some visual issues in all modes in this bench.
One positive from the CSM use, though, is that it may run better on consoles than something that uses VSMs. Perhaps 60 fps will require fewer sacrifices in performance and IQ here compared to the other UE games we have recently seen on console.
 
Yes, it's really interesting actually. In non-RT mode all the GPUs scale as per historical raster expectations. But once you turn on RT (to max, at least), Turing has a big jump over RDNA 2/3 (but there's no major change between the AMD generations), Ampere has a big jump over Turing, and Ada has a big jump over Ampere.

Nvidia's generational RT improvements are being highlighted like crazy in this game!
UE5 is just so outdated. This engine is full of bloated nonsense which can't use every flop on a GPU. In 720p "Full Raytracing" is free on a 4090, and yet the difference in performance between a 4090 and a 6950 XT is nearly the same in 720p and 1440p. Why does this engine not scale better with resolution? Why is a 4090 not 4x faster in rasterizing when "full raytracing" can use the full GPU?!

Going by PCGH numbers - scaling between the 4090 and 6950 XT at 720p and 1440p:
720p:
Rasterizing: 1.86x
Pathtracing: 5.25x

1440p:
Rasterizing: 1.96x (+5%!)
Pathtracing: 7.27x

4x the resolution, and yet the gap between a 4090 with 100 TFLOPs and a 6950 XT with 25 TFLOPs is basically the same. Doesn't make any sense...
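Taking those PCGH-derived ratios at face value, the asymmetry between the two modes is easy to quantify; a minimal sketch:

```python
# How much the 4090 vs. 6950 XT gap moves going from 720p to 1440p,
# using the PCGH-derived ratios quoted above.
ratios = {
    "Rasterizing": (1.86, 1.96),  # (720p, 1440p)
    "Pathtracing": (5.25, 7.27),
}

for mode, (r720, r1440) in ratios.items():
    print(f"{mode}: {r720}x -> {r1440}x ({r1440 / r720 - 1.0:+.0%})")

# Rasterizing: 1.86x -> 1.96x (+5%)
# Pathtracing: 5.25x -> 7.27x (+38%)
```

So the raster gap barely moves with resolution, while the path-tracing gap widens considerably; that is the scaling behaviour being argued about here.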
 
UE5 is just so outdated. This engine is full of bloated nonsense which can't use every flop on a GPU. In 720p "Full Raytracing" is free on a 4090, and yet the difference in performance between a 4090 and a 6950 XT is nearly the same in 720p and 1440p. Why does this engine not scale better with resolution? Why is a 4090 not 4x faster in rasterizing when "full raytracing" can use the full GPU?!

Going by PCGH numbers - scaling between the 4090 and 6950 XT at 720p and 1440p:
720p:
Rasterizing: 1.86x
Pathtracing: 5.25x

1440p:
Rasterizing: 1.96x (+5%!)
Pathtracing: 7.27x

4x the resolution, and yet the gap between a 4090 with 100 TFLOPs and a 6950 XT with 25 TFLOPs is basically the same. Doesn't make any sense...
Let's remember that teraflops are a useless metric in this comparison, since FP32 flops throw any such calculation out of the window. A 4090 isn't 4 times more powerful than a 6950 XT in raster.
 
According to PCGH.de the Wukong benchmark is running UE 5.0.0.0, which is pretty crazy. I assume the final version will be the same. Wouldn't that make it the only game shipped on that early a version?
 
Let's remember that teraflops are a useless metric in this comparison, since FP32 flops throw any such calculation out of the window. A 4090 isn't 4 times more powerful than a 6950 XT in raster.
No, they are not useless. A 4090 has 4x the compute performance and gets >4x the performance with Raytracing+. So it works out. Raytracing puts more work on the GPU.

The problem is that hardware doesn't matter in a world where software is as unoptimized as UE5. In 720p a 4070 with Full Raytracing is only 6% slower than a 6950 XT with rasterizing. And a 4070 is not CPU limited here...
 
No, they are not useless. A 4090 has 4x the compute performance and gets >4x the performance with Raytracing+. So it works out. Raytracing puts more work on the GPU.

The problem is that hardware doesn't matter in a world where software is as unoptimized as UE5. In 720p a 4070 with Full Raytracing is only 6% slower than a 6950 XT with rasterizing. And a 4070 is not CPU limited here...
So when, for example, the PS5 Pro comes out and it performs like a 16.7 teraflops FP16 GPU instead of a 33 teraflops FP32 GPU, should we say that all games are unoptimized? 'Cause I don't think a single piece of software has come out that uses "FP32 flops" in a way that matches the teraflops of the GPUs that support it.

I'm looking at some raster benchmarks, and the 4090 is barely 2 times faster in most cases; most of the time it's around 80% faster.

The reason the 4090 can be 4x faster or more in ray tracing is that it accelerates a lot of the ray tracing pipeline, while AMD barely has the hardware to do it.
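As a rough sanity check on the TFLOPs point, here is a minimal sketch comparing the commonly published boost-clock FP32 figures (about 82.6 TFLOPS for the 4090 and 23.7 TFLOPS for the 6950 XT; both are approximations, and the 4090 number counts Ada's dual-issue FP32) against the raster ratios quoted earlier in the thread:

```python
# Spec-sheet FP32 throughput vs. measured raster ratios from the thread.
# TFLOPS values are approximate boost-clock figures; the 4090 figure
# counts dual-issue FP32, which is part of why it overstates raster.
tflops = {"RTX 4090": 82.6, "RX 6950 XT": 23.7}
spec_ratio = tflops["RTX 4090"] / tflops["RX 6950 XT"]

measured_raster = {"720p": 1.86, "1440p": 1.96}  # PCGH ratios from above

print(f"Spec FP32 ratio: {spec_ratio:.1f}x")
for res, r in measured_raster.items():
    print(f"Measured raster at {res}: {r:.2f}x "
          f"({r / spec_ratio:.0%} of the spec ratio)")

# Spec FP32 ratio: 3.5x
# Measured raster at 720p: 1.86x (53% of the spec ratio)
# Measured raster at 1440p: 1.96x (56% of the spec ratio)
```

Delivered raster lands at roughly half the paper ratio, which is consistent with the point that FP32 TFLOPs alone don't predict raster performance across these architectures.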
 
Looking at TechPowerUp's GPU database, a 4090 is 66% faster than a 6950 XT in raster at 4K.

So running roughly 2x faster at lower resolutions is probably about right for the hardware.

When RT is turned on, GPUs are not raster limited, they're RT-pipeline limited.

And as Nvidia accelerates more of the RT pipeline, their cards naturally perform faster.

So the RT scaling results between the 4090 and 6950 XT make perfect sense, and are nothing we haven't seen in other titles with high RT loads on a different engine (Alan Wake 2 and CP2077).
 
This game has some of the ugliest motion blur I've ever seen. If you want to lose fine detail, be my guest, and this is at a very low benchmark speed.
 
According to PCGH.de the Wukong benchmark is running UE 5.0.0.0, which is pretty crazy. I assume the final version will be the same. Wouldn't that make it the only game shipped on that early a version?
If it's using 5.0, that's a shame. Was upgrading to a more recent version that time-consuming?
 
New Fortnite season this Friday, which means potentially more UE5.5 features are rolled in. Unfortunately they don't normally disclose stuff like that.
 
If it's using 5.0, that's a shame. Was upgrading to a more recent version that time-consuming?
They originally developed the game on UE4 and moved to a custom branch (NVRTX) of UE5 as soon as it became available just over 2 years ago. They can't afford to have potential regressions crop up with their game releasing in less than a week from now ...

If you're only using the main branch of the engine without any customizations, Epic Games will attempt to fix any such regressions in newer versions unless stated otherwise. Can you really trust newer versions of Nvidia's custom branch with your game at stake, now that the Xbox console version of your release is currently fubar'd?
 
I gotta agree with what you are saying - I prefer the full RT mode's reflections on water to the weird SSR stuff in non-RT mode, but otherwise there are some visual issues in all modes in this bench.
One positive from the CSM use, though, is that it may run better on consoles than something that uses VSMs. Perhaps 60 fps will require fewer sacrifices in performance and IQ here compared to the other UE games we have recently seen on console.
Yeah, I think these choices make more sense if the console version is 60fps. It's always nice to have more options on PC of course, but I respect focusing on a main content design point/performance target as well.

I'm curious how much Nanite they are using (e.g. perhaps not for the foliage), which may be part of the motivation for not using VSMs as well. With mostly Nanite, VSMs almost always outperform CSMs, but if you have a significant amount of non-Nanite geometry they can be a performance problem. Conversely, if your scene is mostly Nanite then local light shadow maps (non-VSMs) can be very expensive. That's why we generally recommend pairing the two as much as possible.

[Edit] I guess if they are truly still on 5.0, that would also be a reason they might not be using the new features as much, as those have improved a lot since then. I'm guessing this is more that they just didn't update whatever metadata gets embedded in the binaries as they pulled integrations, though.
 
They originally developed the game on UE4 and moved to a custom branch (NVRTX) of UE5 as soon as it became available just over 2 years ago. They can't afford to have potential regressions crop up with their game releasing in less than a week from now ...

If you're only using the main branch of the engine without any customizations, Epic Games will attempt to fix any such regressions in newer versions unless stated otherwise. Can you really trust newer versions of Nvidia's custom branch with your game at stake, now that the Xbox console version of your release is currently fubar'd?
I understand, it's just that 5.4 made some progress on CPU utilization and, even if it's not certain it would have fixed it, maybe it would have lowered the amount of traversal stutter in the game, which you can even see in the demo.

I'm not that confident about the final version of this game. I'm expecting the whole usual suite of Unreal Engine problems, and 40 to 60 fps on console.
 