Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

The developers of Immortals of Aveum have published revised system requirements for the game and a write-up of their extensive use of UE5.1 and its features, along with videos showing how these features were used and their impact both on the development of the game and on the game itself.
Interesting! Regarding their PC settings, that is a lot of granular controls to expose to users - should be fun for this community to play with :D
 
Great to see that kind of customization available and the budget system looks quite handy. Wouldn't the budget be relative to a target FPS though? Your budget will differ depending on whether you want 30, 60 or 120fps. So is it based on a set fps? Whatever your monitor refresh rate is? Or based on what the developers think is a good experience, which is likely 30 or 60fps?
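For reference, the arithmetic behind a frame-time budget is simple. Here's a rough C++ sketch of how a per-feature budget could fall out of a target frame rate; the split percentages are made up purely for illustration, not taken from the game:

Code:
#include <cstdio>

int main()
{
    const double targetFps[] = { 30.0, 60.0, 120.0 };
    for (double fps : targetFps)
    {
        double frameMs = 1000.0 / fps;   // total budget per frame in milliseconds
        // Example split: the percentages below are invented for illustration.
        printf("%.0f fps -> %.2f ms/frame (e.g. ~%.2f ms GI, ~%.2f ms shadows)\n",
               fps, frameMs, frameMs * 0.25, frameMs * 0.15);
    }
    return 0;
}

So a 30fps target gives roughly 33.3 ms to spend, 60fps gives 16.7 ms, and 120fps only 8.3 ms, which is why the answer to "what is the budget relative to?" matters so much.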

Also restart on setting change is really fucking annoying. Is that a limitation of UE5/Nanite/Lumen in general, or their implementation?
 
I wonder why they are not using UE's TSR on consoles instead. TSR has been shown to be a better upscaler than FSR2, and yet FSR2 is used here. Maybe TSR is more expensive to use.
 
Also restart on setting change is really fucking annoying. Is that a limitation of UE5/Nanite/Lumen in general, or their implementation?
Not really, although how annoying it is to do without a restart will depend somewhat on the setting. I think raytracing is one that tends to need a restart because it just affects so much of the shader and resource setup. Possibly something that swaps HW/SW Lumen might then require a restart too. Of course any setting could be made to toggle live, but there's definitely a cost/benefit band.

For stuff like Lumen and Nanite and VSMs - those can all be toggled live easily, so I'm not sure why they would require a restart. But I also didn't get the impression any of those are really optional anyway, so it's more about tweaking the options under them. We'll see which options the final game ties to restarts, I guess.
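To illustrate the cost/benefit point, here's a minimal, hypothetical settings-apply sketch (nothing to do with their actual code, and the setting names are made up): values that only tweak a quality knob can be applied live, while ones that change shader permutations or resource setup get flagged to require a restart.

Code:
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical setting entry: 'needsRestart' marks options whose change would
// invalidate shader permutations or resource setup (e.g. raytracing on/off,
// switching HW/SW Lumen), as discussed above.
struct Setting {
    std::string name;
    int value;
    bool needsRestart;
};

int main()
{
    std::vector<Setting> pending = {
        { "shadow.quality",     3, false },  // can be applied live
        { "raytracing.enabled", 1, true  },  // typically needs a restart
        { "lumen.hardware_rt",  1, true  },  // HW/SW Lumen switch
    };

    bool restartRequired = false;
    for (const Setting& s : pending)
    {
        if (s.needsRestart)
            restartRequired = true;          // defer until next launch
        else
            printf("applied live: %s = %d\n", s.name.c_str(), s.value);
    }
    if (restartRequired)
        printf("some changes take effect after restart\n");
    return 0;
}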
 
I wonder why they are not using UE's TSR on consoles instead. TSR has been shown to be a better upscaler than FSR2, and yet FSR2 is used here. Maybe TSR is more expensive to use.
I would be interested to know this as well. Hopefully TSR will at least be an option on the PC version.
 
Great settings and tuning options there but I don't hold much hope for how well this game is optimised on PC unless it's also running at terrible settings and resolutions on console.

The specs say it requires a 2080 Super for 1080p / 60fps with FSR Quality enabled. That's a 720p internal resolution at low settings.

They also state the consoles will run at 60fps and output up to 4K with upscaling. So unless they are upscaling all the way from 720p to 4K with FSR2 (yikes!) while simultaneously running at the PC's low settings, performance on PC looks worrying.
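For anyone checking the numbers: FSR 2's documented per-axis scale factors are 1.5x (Quality), 1.7x (Balanced), 2.0x (Performance) and 3.0x (Ultra Performance), so the internal resolutions work out as below. Going from 720p internal to a 4K output would be the 3x Ultra Performance case.

Code:
#include <cstdio>

// Internal render resolution for a given output resolution and FSR 2 scale factor.
void PrintInternal(const char* mode, int outW, int outH, double scale)
{
    printf("%-16s %4dx%4d -> internal %4dx%4d\n",
           mode, outW, outH, (int)(outW / scale), (int)(outH / scale));
}

int main()
{
    PrintInternal("Quality @1080p",  1920, 1080, 1.5);  // 1280x720, i.e. 720p
    PrintInternal("Quality @4K",     3840, 2160, 1.5);  // 2560x1440
    PrintInternal("Performance @4K", 3840, 2160, 2.0);  // 1920x1080
    PrintInternal("UltraPerf @4K",   3840, 2160, 3.0);  // 1280x720
    return 0;
}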

That said, AMD performance looks extremely good here vs Nvidia. Is that a trend we're seeing in UE5-powered games?
 
The developers of Immortals of Aveum have published revised system requirements for the game and a write-up of their extensive use of UE5.1 and its features, along with videos showing how these features were used and their impact both on the development of the game and on the game itself.
Just what I wanted to see!
 
That said, AMD performance looks extremely good here vs Nvidia. Is that a trend we're seeing in UE5-powered games?
Fortnite, Remnant 2, Layers of Fear, Satisfactory, DESORDRE and several other UE5 games and demos don't show that, and even the upcoming Lords of the Fallen, Tekken 8 and STALKER 2 don't behave that way (according to their published system requirements). So this game appears to be the exception so far.
 
The developers of Immortals of Aveum have published revised system requirements for the game and a write-up of their extensive use of UE5.1 and its features, along with videos showing how these features were used and their impact both on the development of the game and on the game itself.

https://store.steampowered.com/news/app/2009100/view/3648531811779214420


Isn't that the real problem? Using Nanite for traditional geometry content is inefficient, and because Nanite doesn't work properly with hardware raytracing you have to use another inefficient GI solution like Lumen...
That is nVidia's Endless City demo from 12 years ago:

Why would I be impressed by the video above when nVidia did it 12 years ago?!
 
Isn't that the real problem? Using Nanite for traditional geometry content is inefficient, and because Nanite doesn't work properly with hardware raytracing you have to use another inefficient GI solution like Lumen...
That is nVidia's Endless City demo from 12 years ago:

Why would I be impressed by the video above when nVidia did it 12 years ago?!
Which games have shipped without manually authored LODs?
 
Isn't that the real problem? Using Nanite for traditional geometry content is inefficient, and because Nanite doesn't work properly with hardware raytracing you have to use another inefficient GI solution like Lumen...
That is nVidia's Endless City demo from 12 years ago:

Why would I be impressed by the video above when nVidia did it 12 years ago?!
Hardware accelerated tessellation as introduced in DX11 was bad.

Also it doesn't work with current RT hardware either, so I kind of miss the point.
 
Why would I be impressed by the video above when nVidia did it 12 years ago?!
Completely different. That's taking limited-information models and interpolating the missing information as you get closer, to approximate what would be there. This falls apart in plenty of situations, resulting in swimming triangles in fine detail. It's also using lots and lots of repeated assets that fit in VRAM, whereas Nanite is streaming completely novel geometry for whatever is in view.
 
Why would I be impressed by the video above when nVidia did it 12 years ago?!
Adding to the above, the tessellation demo uses a low poly base mesh.
If the base mesh is a cube, the tessellated object will likely look like a sphere.
Notice the effect applies smoothing to the base mesh, but it does not add detail.

In other words: with an increasing tessellation factor we add more vertices, and those vertices appear at a higher frequency.
But we don't get detail at that same frequency; we only make our sphere smoother and smoother, so flat sections become round.
If we want to add detail, we can do so e.g. using displacement mapping. (It seems the demo does this, but I did not watch it.)
Now we can add detail, and we could also express our smoothing function in the height values of the displacement map. Then we get something like NV's new raytraced micromesh stuff, avoiding the need to implement parametric or subdivision surfaces.
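Roughly what displacement mapping does on top of the smoothed surface, as a CPU-side sketch with made-up structs (not the demo's actual shader): each tessellated vertex is pushed along its normal by a height sampled from a texture, and that push is what actually adds detail.

Code:
// Hypothetical CPU-side illustration of displacement after tessellation/smoothing.
struct Vertex {
    float px, py, pz;   // position
    float nx, ny, nz;   // normal (unit length)
    float u, v;         // texture coordinates in [0,1]
};

// Nearest-neighbour sample from a height map stored as w*h floats in [0,1].
float SampleHeight(const float* heights, int w, int h, float u, float v)
{
    int x = (int)(u * (w - 1));
    int y = (int)(v * (h - 1));
    return heights[y * w + x];
}

// Push the vertex along its normal by the sampled height; this is where the
// extra detail comes from - tessellation alone only added (smooth) vertices.
Vertex Displace(Vertex in, const float* heights, int w, int h, float scale)
{
    float d = SampleHeight(heights, w, h, in.u, in.v) * scale;
    in.px += in.nx * d;
    in.py += in.ny * d;
    in.pz += in.nz * d;
    return in;
}

int main()
{
    float flat[1] = { 1.0f };             // trivial 1x1 height map
    Vertex v = { 0, 0, 0,  0, 1, 0,  0, 0 };
    v = Displace(v, flat, 1, 1, 0.5f);    // vertex moves 0.5 units along its normal
    return 0;
}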

That's not bad at all, but it's still very limited.
The problem is that such methods cannot reduce below the base mesh level. Thus we fail at the biggest goal of having dynamic LOD: we want to reduce any geometry to the detail we actually need.
The base mesh can become pretty high poly for topologically complex objects, and at some distance any base mesh resolution is too high. So we waste performance and also memory.
Personally I call such methods 'detail amplification': they allow us to add detail dynamically, but we cannot reduce detail.

That's the problem Nanite solves efficiently. I mean, it's limited to instances of objects, and cannot merge many objects into a single one to reduce further. But for now I see no efficiency issues in practice; we can just cull such very distant objects.
Nanite cannot add detail; it is a pure reduction method. It can reduce a (potentially very detailed) model down to a single-digit triangle count, I guess.
So the NV demo is not the same and solves a different problem.
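A toy way to see the difference (all numbers and the reduction heuristic below are made up; this is not how Nanite actually selects clusters): tessellation multiplies a base mesh, so its triangle count has a floor at the base mesh, while a reduction scheme can keep shrinking a detailed source mesh as its on-screen footprint shrinks.

Code:
#include <algorithm>
#include <cstdio>

// "Detail amplification" (tessellation): count = baseTris * factor, factor >= 1,
// so it can never drop below the base mesh.
int AmplifiedTris(int baseTris, float tessFactor)
{
    return (int)(baseTris * std::max(1.0f, tessFactor));
}

// "Detail reduction" (Nanite-style, grossly simplified): pick a triangle count
// proportional to screen coverage, clamped between a handful and the full mesh.
int ReducedTris(int fullTris, float screenCoverage /* 0..1 */)
{
    int wanted = (int)(screenCoverage * 1000000.0f);   // ~1 tri per covered pixel-ish
    return std::clamp(wanted, 8, fullTris);
}

int main()
{
    const int baseTris = 50000;     // base mesh of the tessellated asset
    const int fullTris = 2000000;   // full-detail source mesh of the reduced asset

    printf("far away : amplified %7d tris, reduced %7d tris\n",
           AmplifiedTris(baseTris, 1.0f),  ReducedTris(fullTris, 0.00001f));
    printf("up close : amplified %7d tris, reduced %7d tris\n",
           AmplifiedTris(baseTris, 16.0f), ReducedTris(fullTris, 1.0f));
    return 0;
}

The far-away case is the point: the tessellated asset is stuck at its 50k-triangle base mesh, while the reduction method can drop to a handful of triangles.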

Using Nanite for traditional geometry content is inefficient, and because Nanite doesn't work properly with hardware raytracing you have to use another inefficient GI solution like Lumen...
Why do you say Nanite is inefficient? Have I missed something interesting?
I looked at the code and it seems efficient. Following the presentations as well, it seems well thought out and solves the hard problems in efficient and elegant ways. It's just good. Exactly what this industry needs.
From what I know about Lumen, it's pretty much the opposite: inefficient and a bunch of hacks. But it's not as if others in the industry do better.
My assumption is that Lumen, not Nanite, causes the high HW requirements of UE5 games. But maybe that's a dated impression, because it dates back to the discussion about the first demo on PS5.

Notice that the geometry in the Immortals game is not the crazy detail we saw in early UE5 demos. It's average geometry resolution, I would say. And this makes it less likely that Nanite is the bottleneck.

Also, please notice that Nanite not working with HW RT is entirely a problem of the RT APIs. It's the APIs that are broken. The people who designed the APIs did not think dynamic geometry would be required, it seems.
But any dynamic and fine-grained LOD solution that aims to reduce detail requires dynamic geometry. Thus the API designers have actively prevented a solution to the LOD problem, period.
Intent or not, it's their failure alone, not Epic's. As it stands, RT is currently not future-proof, but Nanite is.
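To make the 'dynamic geometry' point concrete, here is a conceptual sketch with made-up wrapper types (not real DXR/Vulkan code): current RT acceleration structures bake a fixed triangle set, and an update/refit can only move existing vertices, so a continuous-LOD mesh whose triangle count changes every frame forces a full rebuild with cost roughly proportional to its triangle count.

Code:
#include <cstdio>

// Made-up types standing in for a triangle mesh and a bottom-level acceleration
// structure; the point is the rebuild-vs-refit distinction, not a real API.
struct TriangleMesh { int triangleCount; };
struct Blas         { int builtTriangleCount; };

Blas BuildBlas(const TriangleMesh& m)           // expensive: cost ~ triangle count
{
    printf("full BLAS rebuild over %d triangles\n", m.triangleCount);
    return Blas{ m.triangleCount };
}

void RefitBlas(Blas& b, const TriangleMesh& m)  // cheap, but topology must match
{
    (void)m;
    printf("refit of %d triangles (same topology)\n", b.builtTriangleCount);
}

void UpdateForFrame(Blas& blas, const TriangleMesh& mesh)
{
    if (mesh.triangleCount != blas.builtTriangleCount)
        blas = BuildBlas(mesh);   // LOD changed the triangle set -> full rebuild
    else
        RefitBlas(blas, mesh);    // only vertex animation -> refit is enough
}

int main()
{
    TriangleMesh mesh{ 200000 };
    Blas blas = BuildBlas(mesh);

    mesh.triangleCount = 120000;  // continuous LOD dropped some clusters...
    UpdateForFrame(blas, mesh);   // ...so the BLAS must be rebuilt from scratch

    UpdateForFrame(blas, mesh);   // no LOD change this frame -> cheap refit
    return 0;
}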
 
From what I know about Lumen, it's pretty much the opposite: inefficient and a bunch of hacks. But it's not as if others in the industry do better.
Well, those 'hacks' are producing far better results than whatever everyone else is doing. Lumen-powered UE5 scenes actually look next-gen, with a sense of realism to the lighting. When you see indies trotting out their little creations, the UE5 ones are obvious and immediately look like a 'class act', even if they're just a bunch of store-bought assets plopped into a generic game. To the point that UE5 games immediately look like UE5 games simply on the merit of looking better!

If far better is attainable, no-one's doing it. Particularly not in a generic engine that anyone can use. And to engage in a bit of friendly smack-talk: you've been on this board a while talking about lighting solutions and how wrong everyone else is, but AFAIK you've yet to present anything of your own work that shows how to do it right... :p
 
If far better is attainable, no-one's doing it. Particularly not in a generic engine that anyone can use. And to engage in a bit of friendly smack-talk: you've been on this board a while talking about lighting solutions and how wrong everyone else is, but AFAIK you've yet to present anything of your own work that shows how to do it right... :p
Sadly yes. It sucks that I could help with the current arms race of buying visual improvements with too expensive hardware, but actually can't, because I can't finish the tools.
But I don't say everybody is doing it 'wrong'; I say they do it inefficiently, using brute-force methods. Some people will find better ways, no matter if I personally succeed or not.

When you see indies trotting out their little creations, the UE5 ones are obvious and immediately look like a 'class act', even if they're just a bunch of store-bought assets plopped into a generic game.
Yes, but there is a problem with this: If you play this indie game, you'll enjoy the better visuals and you will be impressed.
Maybe for minutes, maybe for an hour. But then you are used to the new and higher standard. You do not even need to play - watching a video on YT is enough.
After that moment of impression, you may be left with the following conclusion:
The visuals are better, but the game is still the same as former, similar games. It's not more fun; the experience is the same. So do I really want to spend 3000 on a new gaming PC to play the same old games, just with a decent visual upgrade?

Taking Immortals as an example, it does not look better than prev gen to me. When I saw it, I was not sure if they were using Lumen and Nanite at all, and I guessed that they were not.
But the minimum specs are pretty high. I would need to upgrade, which I won't do. I won't buy a console either, because I'm tired of the 60fps promise on every new gen, just to see them settle at 30fps again after the cross-gen transition is over.
So in addition to the existing problems of lacking innovation in game design, and an industry accused of caring only about profit and self-appointed trends while still failing to release games in a working state, we now have the additional new problem of very expensive HW requirements on top.

That's the real reason why I say Lumen is a failure. It's not about techniques; it's about this stubborn assumption that we can get better gfx for free from an everlasting Moore's Law.
If I were that indie dev you mentioned, I would need to decide: do I use Lumen and brag about realism, hoping everybody upgrades as they always have before?
Or do I not use Lumen, show no impressive lighting, but have a larger number of potential buyers?
That's a tough question, so I do not think that Lumen makes things easier for devs.
 
RTXGI used to be a super performant alternative to Lumen, until Nvidia decided to completely destroy its performance in its later iterations, making it useless in the process. What a damn shame. Now Lumen looks better and is faster, so there's no point in using RTXGI anymore.

I remember turning it on and RTXGI actually improved performance while looking better at the same time. (See here: https://www.neogaf.com/threads/rayt...aced-gi-for-no-little-to-no-fps-loss.1572981/)

Just sad how things turned out :/ In Warhammer, RTXGI completely destroys performance while it doesn't look much different.
 