Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

Oh, there is already a toggle in the UI. I was just saying it's the biggest performance eater for me in these demos.
The whole thing doesn't look that great anyway.
 
This Unreal Engine 5 shooter, which imitates the aesthetics of body cameras (body cams), is the most photorealistic and incredible thing you will see today.

unreal-engine-5-shooter-body-cam-2841845.jpg



Great concept and amazing execution. Makes for a very visceral experience. Many games in the 2010s aimed for that feel, but none got this close. COD MW campaigns always tried to capture that documentary/journalistic feel. Infinity Ward / Treyarch / whatever other studio makes CODs must have raised an eyebrow or two when they saw this demo on Twitter.
 

Silent Hill 2 uses Unreal Engine 5. They are using Nanite and Lumen.

Built with Unreal Engine 5

We are updating the Silent Hill 2 experience comprehensively. With the possibilities of the Unreal Engine 5, we’re bringing the foggy, sinister town to life in ways that were impossible up to this point. The game will delight PlayStation 5 players visually, auditorily, and sensorily.

Some of the Unreal Engine 5 features that really shine are Lumen and Nanite. With them we’re raising the graphics to new, highly-detailed and realistic levels, while turning the game’s signature nerve-racking atmosphere to eleven.

e0c502a163dd3e4af33f74c60f7727c343953de4-scaled.jpg



f27dd116bda5baa7d6945db124c2f16d83ee3601-scaled.jpg



EDIT: the YouTube video is 60 fps.
 

Unreal released the opening of Unreal Fest, and they talk about the improvements they made with Unreal 5.1. I took some screenshots, but it's better to see it for yourself, in particular the improvements to Nanite foliage. They show examples, and this is much better than seeing it inside a Twitter video.

EDIT: The Coalition helped with motion matching too.
 
The reality is whenever UE5 ships it is going to force the issue. Nanite uses a persistent threads style kernel as part of its culling that is - how shall we say - at the edge of what is spec-defined to work.
Sorry for the bit of necro, but I don't really keep up with the state of hardware very well, so I checked on dynamic parallelism support again after a few years, and it seems this is still the most practical portable way (AMD's bitrotting OpenCL support is useless, and how long Intel will really keep working on modern OpenCL and interop while they are trying to push SYCL is questionable).

I wonder, though: would Epic have done it differently even if that weren't the case? Persistent threads are hacky and ugly ... but you just can't beat that latency and control over memory, most importantly shared memory, which you aren't ever likely to get with any higher-level API.
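For anyone who hasn't seen the pattern, here is a minimal persistent-threads sketch in CUDA. It is a generic illustration, not Epic's code; the Item struct, processBatch() and the batch size are all made up. The idea is to launch only enough blocks to fill the GPU once and have each block loop, claiming batches from a global counter, so there is no per-batch launch latency and the block keeps its shared memory live across all the work it processes.

```cuda
// Minimal persistent-threads sketch (generic CUDA, not Epic's code).
// "Item" and processBatch() are hypothetical stand-ins for per-cluster work.
#include <cstdint>

struct Item { float data[16]; };                     // hypothetical work item

__device__ void processBatch(const Item* staged, uint32_t count,
                             uint32_t globalBase, float* results)
{
    // Placeholder per-item work; real code would do culling/LOD tests here.
    if (threadIdx.x < count) {
        float sum = 0.f;
        for (int k = 0; k < 16; ++k) sum += staged[threadIdx.x].data[k];
        results[globalBase + threadIdx.x] = sum;
    }
}

// Launch with e.g. <<<numSMs * blocksPerSM, 128>>> so every block stays
// resident; each block then loops, claiming batches until the list is empty.
__global__ void persistentWorker(const Item* items, uint32_t numItems,
                                 uint32_t* nextBatch, float* results)
{
    __shared__ Item     s_items[128];                // reused for every batch
    __shared__ uint32_t s_base;

    for (;;) {
        if (threadIdx.x == 0)                        // one thread claims a batch
            s_base = atomicAdd(nextBatch, blockDim.x);
        __syncthreads();
        if (s_base >= numItems) return;              // nothing left: retire block

        uint32_t idx   = s_base + threadIdx.x;
        uint32_t count = min(blockDim.x, numItems - s_base);
        if (idx < numItems)
            s_items[threadIdx.x] = items[idx];       // stage through shared memory
        __syncthreads();

        processBatch(s_items, count, s_base, results);
        __syncthreads();                             // keep shared memory valid until all threads finish
    }
}
```

Nanite-style culling is more involved because processing a node can push new work back onto the queue, which is where the scheduling assumptions discussed below come in.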
 
Sorry for the bit of necro, but I don't really keep up with the state of hardware very well, so I checked on dynamic parallelism support again after a few years, and it seems this is still the most practical portable way (AMD's bitrotting OpenCL support is useless, and how long Intel will really keep working on modern OpenCL and interop while they are trying to push SYCL is questionable).

I wonder, though: would Epic have done it differently even if that weren't the case? Persistent threads are hacky and ugly ... but you just can't beat that latency and control over memory, most importantly shared memory, which you aren't ever likely to get with any higher-level API.
I don't even think what they're doing is explicitly allowed by either the PC APIs or the shading languages. They admit to relying on some undefined behaviour in how certain GPUs will schedule their work so that threads won't be starved indefinitely ...

The major sticking point behind Nanite is that it needs some form of a forward progress model so that it can use persistent threads to do hierarchical culling. I don't see how dynamic parallelism will get us there, since that's mostly a software feature, while forward progress guarantees are mostly a hardware property of how GPU scheduling works ...
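To make the forward-progress point concrete, here is a deliberately tiny CUDA example (illustrative only, nothing to do with Nanite's actual kernels): block 1 spin-waits on a flag that block 0 sets. Nothing in the specs guarantees block 0 ever gets scheduled while block 1 is spinning, so this can in principle deadlock; it works in practice only because real GPUs keep both blocks of such a small grid resident and keep making progress on them, which is exactly the kind of assumption a persistent-threads producer/consumer queue bakes in.

```cuda
// Illustration of the scheduling assumption: works on real hardware today,
// but no API spec promises the producer block makes progress while the
// consumer block spins.
#include <cstdio>

__global__ void crossBlockHandoff(volatile int* flag, int* out)
{
    if (blockIdx.x == 0 && threadIdx.x == 0) {
        *flag = 1;                     // producer: publish the flag
        __threadfence();
    } else if (blockIdx.x == 1 && threadIdx.x == 0) {
        while (*flag == 0) { }         // consumer: spin-wait (forward-progress assumption!)
        *out = 42;
    }
}

int main()
{
    int *flag, *out;
    cudaMalloc(&flag, sizeof(int));
    cudaMalloc(&out, sizeof(int));
    cudaMemset(flag, 0, sizeof(int));
    cudaMemset(out, 0, sizeof(int));

    crossBlockHandoff<<<2, 32>>>(flag, out);   // both blocks fit on any modern GPU
    cudaDeviceSynchronize();

    int result = 0;
    cudaMemcpy(&result, out, sizeof(int), cudaMemcpyDeviceToHost);
    printf("out = %d\n", result);              // 42 only if block 0 was co-scheduled
    cudaFree(flag);
    cudaFree(out);
    return 0;
}
```

Whether the spin ever completes is purely a property of how the hardware schedules blocks, which is the "forward progress is a hardware property" point above.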
 
Oh, there is already a toggle in the UI. I was just saying it's the biggest performance eater for me in these demos.
The whole thing doesn't look that great anyway.
Sorry I wasn't clear... that cvar does not toggle off VSMs; it makes VSMs much less expensive in that scene (which has lots of non-Nanite geometry) without really any hit to quality. It should ideally have been done in the demo itself, but I'm just noting that VSMs do not *have* to be as expensive as they are in the stock demo.
 
The GI provided by AMD even in its first 1.0 version already runs faster and makes use of HW-RT.
As noted before though, HWRT will almost always "win" in static and/or simpler scenes where you don't need to update any significant amount of the geometry in the BVH. Stuff like architectural visualization is a perfect case for HWRT. Simple geometry, but want lots of accurate light bouncing.

The place where it falls apart is complicated/large scenes that need significant BVH updates every frame. It's not GI specifically that falls apart there, it's HWRT as it exists today. I'm sure we will get improvements there in the future, but I'd love to see it get more focus from the tech press since that is where some of the real limiting issues lie IMO, and tech demos/research never stresses those paths significantly.
 
I wonder, though: would Epic have done it differently even if that weren't the case? Persistent threads are hacky and ugly ... but you just can't beat that latency and control over memory, most importantly shared memory, which you aren't ever likely to get with any higher-level API.
It's more that by shipping persistent threads with the scheduling assumptions embedded, we forever constrain the hardware/drivers in a few ways. We would absolutely prefer to use some sort of higher-level interface that the IHVs could bake down into something safe and optimized (or even HW accelerated in the future) on a given GPU generation. Even taking a moderate performance hit to have "safe", spec-compliant code would probably be acceptable; it's just that the current option (chaining tons of dispatch indirects) is unusably slow, not just a bit slower.
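For contrast, a sketch of what the "safe" multi-pass route looks like: one dispatch per hierarchy level, consuming this level's node list and appending the next level's. The node layout and driver loop below are hypothetical, and the CUDA version uses a host readback where a PC API would chain indirect dispatches, but the cost structure is similar: every level adds a dispatch plus a barrier/dependency between passes, which is roughly why a deep hierarchy culled this way ends up so much slower than a single persistent kernel.

```cuda
// Multi-pass (one launch per level) hierarchy traversal sketch.
// Node layout and helper names are made up for illustration.
#include <algorithm>
#include <cstdint>

struct Node { uint32_t firstChild, childCount; };    // hypothetical node

__global__ void cullLevel(const Node* nodes,
                          const uint32_t* inList, uint32_t inCount,
                          uint32_t* outList, uint32_t* outCount,
                          uint32_t* visible, uint32_t* visibleCount)
{
    uint32_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= inCount) return;
    Node n = nodes[inList[i]];
    if (n.childCount == 0) {                         // leaf: record as visible
        visible[atomicAdd(visibleCount, 1u)] = inList[i];
    } else {                                         // interior: emit children for the next pass
        uint32_t base = atomicAdd(outCount, n.childCount);
        for (uint32_t c = 0; c < n.childCount; ++c)
            outList[base + c] = n.firstChild + c;
    }
}

// Host driver: ping-pong the level lists, one launch per level.
// d_counts = [inCount, outCount, visibleCount] in device memory.
void cullHierarchy(const Node* d_nodes, uint32_t root,
                   uint32_t* d_listA, uint32_t* d_listB,
                   uint32_t* d_counts, uint32_t* d_visible, int maxLevels)
{
    const uint32_t one = 1, zero = 0;
    cudaMemcpy(d_listA, &root, sizeof(uint32_t), cudaMemcpyHostToDevice);
    cudaMemcpy(&d_counts[0], &one,  sizeof(uint32_t), cudaMemcpyHostToDevice);
    cudaMemcpy(&d_counts[2], &zero, sizeof(uint32_t), cudaMemcpyHostToDevice);

    for (int level = 0; level < maxLevels; ++level) {
        uint32_t inCount = 0;                        // readback = an extra sync every level
        cudaMemcpy(&inCount, &d_counts[0], sizeof(uint32_t), cudaMemcpyDeviceToHost);
        if (inCount == 0) break;
        cudaMemcpy(&d_counts[1], &zero, sizeof(uint32_t), cudaMemcpyHostToDevice);

        uint32_t blocks = (inCount + 63) / 64;
        cullLevel<<<blocks, 64>>>(d_nodes, d_listA, inCount,
                                  d_listB, &d_counts[1],
                                  d_visible, &d_counts[2]);

        std::swap(d_listA, d_listB);                 // this level's output feeds the next
        cudaMemcpy(&d_counts[0], &d_counts[1], sizeof(uint32_t),
                   cudaMemcpyDeviceToDevice);
    }
}
```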
 
As noted before though, HWRT will almost always "win" in static and/or simpler scenes where you don't need to update any significant amount of the geometry in the BVH. Stuff like architectural visualization is a perfect case for HWRT. Simple geometry, but want lots of accurate light bouncing.

The place where it falls apart is complicated/large scenes that need significant BVH updates every frame. It's not GI specifically that falls apart there, it's HWRT as it exists today. I'm sure we will get improvements there in the future, but I'd love to see it get more focus from the tech press since that is where some of the real limiting issues lie IMO, and tech demos/research never stresses those paths significantly.

They probably need to improve the research papers and go further than the Sponza scene. Maybe we need a scene like the City Sample to help.
 