Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

RTXGI used to be a super performant alternative to Lumen
Maybe, but you have to factor in feature parity, which isn't a given. Afaict RTXGI only gives an approximation of low-frequency diffuse GI, with just enough angular accuracy to do normal mapping. But Lumen gives pretty sharp reflections too, and its spatial sampling density is higher than, say, a 5×4×2 grid of volume probes for an entire room.
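To illustrate why such a coarse probe grid limits spatial density, here's a minimal sketch of the basic idea as I understand it (my own illustration, not RTXGI's actual API): diffuse GI at a shading point is a trilinear blend of the eight nearest probes, so lighting can't vary faster than the probe spacing allows. Real DDGI also stores directional irradiance and visibility data per probe, which this omits entirely.

```cpp
// Sketch only: coarse probe-grid GI, assuming a 5x4x2 volume for a room.
#include <algorithm>
#include <array>
#include <cstdio>

struct Vec3 { float x, y, z; };

constexpr int NX = 5, NY = 4, NZ = 2;              // probe counts per axis
std::array<Vec3, NX * NY * NZ> gProbeIrradiance{}; // updated by ray tracing

// p is the shading point in room-normalized [0,1]^3 coordinates.
Vec3 SampleProbeGrid(Vec3 p) {
    float fx = p.x * (NX - 1), fy = p.y * (NY - 1), fz = p.z * (NZ - 1);
    int x0 = (int)fx, y0 = (int)fy, z0 = (int)fz;
    float tx = fx - x0, ty = fy - y0, tz = fz - z0;
    Vec3 r{0, 0, 0};
    for (int dz = 0; dz <= 1; ++dz)
        for (int dy = 0; dy <= 1; ++dy)
            for (int dx = 0; dx <= 1; ++dx) {
                int xi = std::min(x0 + dx, NX - 1);
                int yi = std::min(y0 + dy, NY - 1);
                int zi = std::min(z0 + dz, NZ - 1);
                // trilinear weight of this corner probe
                float w = (dx ? tx : 1 - tx) * (dy ? ty : 1 - ty)
                        * (dz ? tz : 1 - tz);
                const Vec3& s = gProbeIrradiance[(zi * NY + yi) * NX + xi];
                r.x += w * s.x; r.y += w * s.y; r.z += w * s.z;
            }
    return r;  // anything between probes gets smeared out
}

int main() {
    gProbeIrradiance.fill(Vec3{0.5f, 0.4f, 0.3f});
    Vec3 gi = SampleProbeGrid(Vec3{0.5f, 0.5f, 0.5f});
    std::printf("GI at room center: %.2f %.2f %.2f\n", gi.x, gi.y, gi.z);
}
```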

But yes, it would be very interesting to do performance / visuals comparisons of various techniques in UE.
(Edit: Temporal lag needs to be compared too, which is rarely covered. If tech X only needs 1ms per frame but needs a whole second to turn the room black after turning off the light, it's still slower than tech Y needing 4ms with a lag of 5 frames.)
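To put numbers on that example: at 60fps, tech Y's 5-frame lag is about 83ms of stale lighting, while tech X's one-second convergence is 60 frames. So despite costing 4x more GPU time per frame, Y responds to lighting changes roughly 12x faster, which is arguably the more meaningful kind of "speed".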
When I had it installed, I didn't toggle Lumen on and off to see the cost. :(


until Nvidia decided to completely destroy its performance in its later iterations, making it useless in the process.
Maybe they just raised the default settings? Why would they make it slower?
Anyway, from the link I see RTXGI can be used without an RT GPU. But how? I guess it uses DXR software emulation on older NV GPUs, which AMD did not implement?
 
RTXGI used to be a super performant alternative to Lumen, until Nvidia decided to completely destroy its performance in its later iterations, making it useless in the process. What a damn shame. Now Lumen looks better and is faster, so there's no point in using RTXGI anymore.

I remember turning it on and RTXGI actually improved performance while looking better at the same time. (See here: https://www.neogaf.com/threads/rayt...aced-gi-for-no-little-to-no-fps-loss.1572981/)

Just sad how things turned out :/ In Warhammer, RTXGI completely destroys performance while it doesn't look much different.
In Warhammer, having it "Off" still uses RTXGI probes... they're just completely offline and static.
 
Most games use an alternative GI approach like Voxel GI. Lumen is just another software solution with a huge performance impact. Hardware RT is faster and provides better image quality, like in Metro Exodus EE or Warhammer Darktide (the non-RT version uses precalculated GI; I lose ~23% performance).

There is not enough discussion of the fact that on Ampere, and especially Lovelace, hardware RT is so fast that path tracing is possible in real time without upscaling. That is Cyberpunk in native 3440x1440:


That is Fortnite in native 3440x1440:


Even on a 4090 I can't get 100 FPS. And relying on upscaling (like DLSS Performance) shows the problem with these inefficient renderers. Cyberpunk and Fortnite with DLSS Performance in 3440x1440:
Cyberpunk:

Fortnite:
 
Sadly yes. It sucks that I could help counter the current arms race of buying visual improvements with too-expensive hardware, but actually can't, because I can't finish the tools.
But I don't say everybody is doing it 'wrong'; I say they do it inefficiently, by using brute-force methods. Some people will find better ways, whether I personally succeed or not.
No doubt of your abilities, but surely you have to agree that finishing the tools is the hard part!

Lumen is shipped, to a wide audience of non-expert users -- that puts a lot of pressure on it, above even internal custom engine tools, let alone proofs of concept / true cutting-edge advances. We shouldn't knock their accomplishments.
 
No doubt of your abilities, but surely you have to agree that finishing the tools is the hard part!

Lumen is shipped, to a wide audience of non-expert users -- that puts a lot of pressure on it, above even internal custom engine tools, let alone proofs of concept / true cutting-edge advances. We shouldn't knock their accomplishments.
There are tons of super smart people out there, inside and outside of Epic, and I'm sure they have hundreds of ideas to make things better. But to ship a product and execute, you need to narrow the scope and focus on the problems you are truly trying to solve, rather than boiling the entire ocean and solving everything at once.

I think that's just the difference between a business working in this space as their source of revenue, and someone who is dabbling, thinking of creative ways to make something go fast; one has to focus on solving their customers' problems, while the other only needs to solve the problems that matter to themselves.

It's why we often see a lot of patents and academic papers about the possibility of things, but we rarely see them implemented when it comes to real applications.
 
We shouldn't knock their accomplishments.
Hmm... you make me realize I sound disrespectful toward the creators of Lumen.
You're right, and now I feel sorry. I should express myself in better ways. I've worked alone for too many years already, and I'm starting to lack manners...
Calling it a failure isn't nice. But my own path is full of failures too. Failures are expected and often still good enough. I don't mean it as harshly as it may sound.
Calling it a failure also isn't right. It works. But in my opinion its cost is too high to call it a success.

There is not enough discussion of the fact that on Ampere, and especially Lovelace, hardware RT is so fast that path tracing is possible in real time without upscaling. That is Cyberpunk in native 3440x1440:

In the screenshots I see two games with similar image quality at similar performance. I can't see why PT should be better than Lumen in general.
To me, the realtime PT we currently get isn't superior just because of its name. Cyberpunk still looks like a game, not realistic.

And there are serious temporal glitches like this:
The whole image goes black and recovers only slowly after some disocclusion events. And this tends to happen a lot in games. That's quite a problem. I'd turn PT off at this point; it's worse than SSAO / SSR artifacts, which are at least local.

Even on a 4090 ...
Hehe :D
 
Taking Immortals as an example, it does not look better than prev gen to me. When I saw it, I wasn't sure if they use Lumen and Nanite at all, and I guessed that they do not.
But the minimum specs are pretty high. I would need to upgrade, which I won't do. I won't buy a console either, because I'm tired of the 60fps promise at every new gen, just to see them settle at 30fps again after the cross-gen transition is over.
So, adding to the problems of lacking innovation in game design, and an industry accused of caring just about profit and self-appointed trends while failing to release games in a working state,
we now have the additional new problem of very expensive HW requirements on top.

That's exactly the problem I have with UE5. It's extremely demanding, while you hardly see a difference outside of tech demos, and it's not only Lumen. Look at Remnant 2, which is using Nanite without any Lumen.
https://www.techpowerup.com/review/remnant-2-benchmark-test-performance-analysis/5.html

45 FPS at 4K with a 4090. That's not far off Cyberpunk's path tracing, and everyone knows path tracing is an extremely inefficient way of rendering. So is Nanite really efficient? I don't get that feeling. It's extremely scalable and has big advantages, but the base cost is extreme, and it doesn't seem to scale down well. I fear a lot of mediocre-looking games with terrible performance thanks to Unreal Engine 5.
 
I can't see why PT should be better than Lumen in general.
With Path Tracing, you get far better GI, dynamic lighting from every light, dynamic shadows from every light... you also get far better reflections, with much higher resolution and on many more surfaces.

Lumen has far worse reflections, often limited in the number of surfaces and in resolution; GI is also limited to the main lights, and shadows are not enabled for every light.
 
That's exactly the problem I have with UE5. It's extremely demanding, while you hardly see a difference outside of tech demos, and it's not only Lumen. Look at Remnant 2, which is using Nanite without any Lumen.
https://www.techpowerup.com/review/remnant-2-benchmark-test-performance-analysis/5.html

45 FPS at 4K with a 4090. That's not far off Cyberpunk's path tracing, and everyone knows path tracing is an extremely inefficient way of rendering. So is Nanite really efficient? I don't get that feeling. It's extremely scalable and has big advantages, but the base cost is extreme, and it doesn't seem to scale down well. I fear a lot of mediocre-looking games with terrible performance thanks to Unreal Engine 5.
How interesting. So we have a game not using Lumen but Nanite only, and performance is worse than I would have guessed.
You know what I hate the most about game dev? It is so hard to beat brute force, because we can rarely afford workloads large enough that the efficient but complex alternative wins. :( This truly sucks.
But I still doubt that's what's going on here. Now I want to see a Nanite game with a more moderate level of detail... :)

With Path Tracing, you get far better GI, dynamic lights from every light, dynamic shadows from every light .. you also get far better reflections with much higher resolution and on many more surfaces.
Yes, but the difference is not night and day in practice. I might agree if we were talking about Portal, but Cyberpunk does not convince me. And that's the only up-to-date example we currently have.
The lighting effect I personally pay most attention to is the correctness of diffuse GI. I want color bleeding and those nice subtle gradients. What I hate is wrong reflections, e.g. fresnel causing that ugly rim-lighting effect from reflected stuff which should not be visible. And for those things Lumen does pretty well.

Lumen has far worse reflections, often limited in the number of surfaces and in resolution, GI is also limited to main lights, shadows are not enabled for every light.
The main downside to me is dynamic objects like characters. They are not occluders or emitters, only receivers (without HW RT). Thus dynamic objects often pop out and feel wrong.
But that's not so bad either. If it runs on affordable HW, I prefer it.

What I do like is Fortnite running at 60fps on Series S (iirc). That's really impressive, so I still wonder why every demo I've tried, and the games discussed here, seem so demanding on PC.
Something makes no sense here.
 
What I do like is Fortnite running at 60fps on Series S (iirc). That's really impressive, so I still wonder why every demo I've tried, and the games discussed here, seem so demanding on PC.
Something makes no sense here.
Fortnite is quite compromised on XSS though. Resolution down to 540p, Lumen RT reflections replaced by SSR, fewer shadows on objects, Lumen lighting visibly downgraded. Compared to XSX, one could say Fortnite seriously underperforms on XSS.

 
Fortnite is quite compromised on XSS though. Resolution down to 540p, Lumen RT reflections replaced by SSR, fewer shadows on objects, Lumen lighting visibly downgraded. Compared to XSX, one could say Fortnite seriously underperforms on XSS.
Judging just from watching the video, I would be happy playing this on XSS. It's eye candy on all platforms; the difference is proportionate.
Looks like a perfect job to me. If I can have this on my PC, which has a GPU twice as powerful, I won't complain anymore. (I've never tried Fortnite; maybe I should.)
We know Epic did specific optimizations. So could it be that other studios just don't do such extra work, and that's why UE5 seems so demanding?

Cyberpunk with Raytracing vs Pathtracing
Ha, that's really a good shot. I was staring at it for a long time. You almost managed to reawaken the starved enthusiast inside me. :) Agreed, there is a big difference here.

But I found it more interesting to figure out what's still missing. To me that's high-frequency detail, like those small vent boxes on the lower left. If they cast detailed soft shadows, the lighting would be at CGI level.
Now I wonder if they excluded such small models from RT, or if the details get lost in the spatial denoising filter.
This gives me the idea of analyzing the depth buffer for variance and spending more rays where such details are likely to appear.
If I worked on RT, those gritty details are what I would try to get...
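A rough sketch of that variance idea (purely illustrative; all names and tuning constants here are invented for the example): compute the depth variance per screen tile and scale the ray budget where variance is high, since fine geometry like those vent boxes shows up as depth discontinuities.

```cpp
// Sketch: allocate more rays to screen tiles whose depth buffer shows
// high variance (likely fine geometric detail). Constants are assumed.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

int RayBudgetForTile(const std::vector<float>& tileDepths,
                     int minRays, int maxRays) {
    // mean depth of the tile
    double mean = 0.0;
    for (float d : tileDepths) mean += d;
    mean /= tileDepths.size();

    // depth variance; edges and small features raise this sharply
    double var = 0.0;
    for (float d : tileDepths) var += (d - mean) * (d - mean);
    var /= tileDepths.size();

    // map std deviation into [0,1] with an assumed scale, then lerp budget
    const double kStdScale = 0.05;  // tuning constant (made up)
    double t = std::min(1.0, std::sqrt(var) / kStdScale);
    return minRays + (int)std::lround(t * (maxRays - minRays));
}

int main() {
    std::vector<float> flatWall(64, 10.0f);          // uniform depth
    std::vector<float> ventBox(64, 10.0f);
    for (int i = 0; i < 16; ++i) ventBox[i] = 9.8f;  // small depth step
    std::printf("flat: %d rays, detailed: %d rays\n",
                RayBudgetForTile(flatWall, 1, 8),
                RayBudgetForTile(ventBox, 1, 8));
}
```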
 
Resolution down to 540p

I don't want to cross paths too much with the upscaling debate thread, but internal rendering resolution is a meaningless measure. Fortnite's 1080p temporal output looked clean and sharp enough to my middle-aged eyes. I keep meaning to give it a go on our X as well, but haven't got round to that yet.
 
Judging just from watching the video, I would be happy playing this on XSS. It's eye candy on all platforms; the difference is proportionate.
Looks like a perfect job to me. If I can have this on my PC, which has a GPU twice as powerful, I won't complain anymore. (I've never tried Fortnite; maybe I should.)
We know Epic did specific optimizations. So could it be that other studios just don't do such extra work, and that's why UE5 seems so demanding?
And others are happy to play Fortnite on Switch. But that doesn't mean Fortnite is technically impressive on Switch. Some would argue the difference is quite a bit bigger than the difference in specs between XSS and XSX. Based on the specs alone (compute, as Fortnite relies heavily on that), the heavily lowered resolution (around 3x) should suffice.

But that's far from the case here. If we look at other games on the Series platforms, one could say Fortnite displays the biggest performance gap between the Series consoles.
 
Not to derail too much, but I recently actually played through Remnant II... first just to see what the fuss was about, but then because it was actually pretty fun (and despite the complaining, completely playable :p). I agree the graphics in general are not exactly a showcase, but it's also clearly not a AAA game in terms of budget and team size. That said, some parts of the game look very interesting and have a lot more geometric detail than other games I've seen to date. In particular, the places with lots of "clutter", like some of the cave sections, look quite nice and add significantly to the realism vs. what we've seen before, with very sparse clutter that pops in close to the camera. I'd actually recommend a playthrough to folks, partially to see some unique stuff, but mostly because it's actually a pretty fun game.

45 FPS at 4K with a 4090. That's not far off Cyberpunk's path tracing, and everyone knows path tracing is an extremely inefficient way of rendering
... it's not even close to Cyberpunk path tracing, if that meant native-resolution samples and enough of them to denoise without spatiotemporal reuse (i.e. probably 10-100+ million traces per frame). PT is still at least 10x slower. People are going to have to get over this weirdness they have with reconstruction techniques. It makes almost as little sense to run Remnant 2 without TSR or equivalent as it does to run Cyberpunk PT without spatiotemporal reconstruction, which in itself makes so little sense it isn't even an option in the game.
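To put that trace count in perspective: 3440x1440 is about 5 million pixels, so even one path per pixel with two or three bounces is already 10-15 million ray segments per frame, and the tens of samples per pixel you'd want to denoise without any reuse is what lands in that 100+ million range.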
 
That's exactly the problem I have with UE5. It's extremely demanding, while you hardly see a difference outside of tech demos, and it's not only Lumen. Look at Remnant 2, which is using Nanite without any Lumen.
https://www.techpowerup.com/review/remnant-2-benchmark-test-performance-analysis/5.html

45 FPS at 4K with a 4090. That's not far off Cyberpunk's path tracing, and everyone knows path tracing is an extremely inefficient way of rendering. So is Nanite really efficient? I don't get that feeling. It's extremely scalable and has big advantages, but the base cost is extreme, and it doesn't seem to scale down well. I fear a lot of mediocre-looking games with terrible performance thanks to Unreal Engine 5.

Just an FYI, the first patch of Remnant 2 increased performance significantly.

The thing with Nanite is that it's very efficient if you compare its performance to using HW rasterizers to rasterize small triangles. I don't know exactly where the inflection point is where Nanite becomes faster than HW rasterizers, but I believe Nanite switches to hardware when a triangle covers six pixels (see the sketch below). Nanite is solving a technical problem that allows for much higher geometric complexity, which will improve visuals. It's handling a case that the fixed-function hardware can't handle without tanking performance. Essentially, games can probably be much faster if they massively reduce the complexity of their geometry.
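To illustrate the kind of decision being described, here's a sketch with assumed names (the six-pixel figure is the post's recollection, not a confirmed Nanite constant): estimate a triangle's screen-space coverage and route big triangles to the hardware rasterizer, tiny ones to the compute-shader rasterizer.

```cpp
// Sketch of a hybrid-rasterizer path decision based on triangle coverage.
// Threshold and names are illustrative, not Nanite's actual implementation.
#include <cmath>
#include <cstdio>

struct Vec2 { float x, y; };

// Area of a 2D triangle in pixels, via the cross product.
float TriangleAreaPixels(Vec2 a, Vec2 b, Vec2 c) {
    return 0.5f * std::fabs((b.x - a.x) * (c.y - a.y) -
                            (c.x - a.x) * (b.y - a.y));
}

enum class RasterPath { Software, Hardware };

RasterPath ChooseRasterPath(Vec2 a, Vec2 b, Vec2 c) {
    const float kHardwareThreshold = 6.0f;  // pixels, per the post
    return TriangleAreaPixels(a, b, c) > kHardwareThreshold
               ? RasterPath::Hardware   // big tris: fixed-function wins
               : RasterPath::Software;  // tiny tris: compute-shader raster
}

int main() {
    Vec2 a{0, 0}, b{2, 0}, c{0, 2};  // 2-pixel triangle -> software path
    std::printf("%s\n", ChooseRasterPath(a, b, c) == RasterPath::Software
                            ? "software" : "hardware");
}
```

The point being: fixed-function rasterizers amortize per-triangle setup over many pixels, so once triangles approach pixel size that advantage evaporates, and a compute-based rasterizer can win.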
 
Just an fyi, the first patch of Remnant 2 increased performance significantly.
DF thinks it's because they separated VSM shadows into a separate option (under High Quality Shadows) and had that set to off by default in the new patch; this could be responsible for the large fps increase people are reporting. (timestamped below)

 
DF thinks it's because they separated VSM shadows into a separate option (under High Quality Shadows) and had that set to off by default in the new patch; this could be responsible for the large fps increase people are reporting. (timestamped below)

It does affect it, but there's a large increase even with it on (larger than the on/off delta in most scenes, it seems), so they clearly did some work under the hood. Scott_Arm grabbed numbers in the thread about upscaling: https://forum.beyond3d.com/threads/...has-become-a-crutch.63342/page-3#post-2310171
 
DF thinks it's because they separated VSM shadows into a separate option (under High Quality Shadows) and had that set to off by default in the new patch; this could be responsible for the large fps increase people are reporting. (timestamped below)


Even with detail shadows enabled it ran faster than it did at launch.

 