> Is there a decent quality feed of this? Twitter's video is .

He has a YouTube channel but this is the best we're going to get, apparently:

And that always adds to the realism.
> Hard to bench since you can move freely. Virtual shadows are killing the performance on my 3090 at 1440p. The CPU is doing almost nothing (it even stays at a low frequency); it seems strictly GPU-bound.

Set r.Shadow.Virtual.NonNanite.IncludeInCoarsePages to 0 in the console and it will be much faster with no real quality difference in this scene/case. More info in the docs if you're curious: https://docs.unrealengine.com/5.0/en-US/virtual-shadow-maps-in-unreal-engine/
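For anyone who wants to try it, the cvar can be typed into the in-game console or persisted in a config file. A minimal sketch, assuming the standard UE5 mechanisms (the [Startup] section of Engine/Config/ConsoleVariables.ini, or a [SystemSettings] section in the project's DefaultEngine.ini):

```ini
; Per-session: open the console (~) and type
;   r.Shadow.Virtual.NonNanite.IncludeInCoarsePages 0
;
; To persist it, add the line below under [Startup] in
; Engine/Config/ConsoleVariables.ini (or under [SystemSettings]
; in the project's DefaultEngine.ini):
[Startup]
r.Shadow.Virtual.NonNanite.IncludeInCoarsePages=0
```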
This Unreal Engine 5 shooter, which imitates the look of body camera footage, is the most photorealistic and incredible thing you will see today.
Built with Unreal Engine 5
We are updating the Silent Hill 2 experience comprehensively. With the possibilities of the Unreal Engine 5, we’re bringing the foggy, sinister town to life in ways that were impossible up to this point. The game will delight PlayStation 5 players visually, auditorily, and sensorily.
Some of the Unreal Engine 5 features that really shine are Lumen and Nanite. With them, we're raising the graphics to new, highly detailed and realistic levels, while turning the game's signature nerve-racking atmosphere up to eleven.
The big thing is that it doesn't need objects to be split.
This is funny. AMD is working on a scalable GI that is accelerated by Hardware RT and compares it to Software Lumen.
The GI provided by AMD even in its first 1.0 version already runs faster and makes use of HW-RT.
> The reality is whenever UE5 ships it is going to force the issue. Nanite uses a persistent-threads-style kernel as part of its culling that is - how shall we say - at the edge of what is spec-defined to work.

Sorry for the bit of necro, but I don't really keep up with the state of hardware very well, so I'm checking on dynamic parallelism support again after a few years and seeing that this is still the most practical portable way (AMD's bitrotting OpenCL support is useless, and how long Intel will really keep working on modern OpenCL and interop while they're trying to push SYCL is questionable).
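For anyone who hasn't looked at it in a while, "dynamic parallelism" here is CUDA's device-side kernel launch. A minimal sketch of the idea, not taken from the thread (the kernel names and the toy workload are made up); it needs relocatable device code (nvcc -rdc=true) and a compute capability 3.5+ GPU:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical child kernel: handles the children of one node.
__global__ void processChildren(int parent, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count)
        printf("parent %d -> child %d\n", parent, i);
}

// Parent kernel launches more work from the device, sized by data it only
// discovers at runtime -- the core idea of dynamic parallelism.
__global__ void processNodes(const int* childCounts, int numNodes)
{
    int node = blockIdx.x * blockDim.x + threadIdx.x;
    if (node >= numNodes)
        return;

    int count = childCounts[node];
    if (count > 0)
        processChildren<<<(count + 63) / 64, 64>>>(node, count);  // device-side launch
}

int main()
{
    const int numNodes = 4;
    int hostCounts[numNodes] = {2, 0, 3, 1};

    int* devCounts = nullptr;
    cudaMalloc(&devCounts, sizeof(hostCounts));
    cudaMemcpy(devCounts, hostCounts, sizeof(hostCounts), cudaMemcpyHostToDevice);

    processNodes<<<1, numNodes>>>(devCounts, numNodes);
    cudaDeviceSynchronize();
    cudaFree(devCounts);
    return 0;
}
```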
> Sorry for the bit of necro, but I don't really keep up with the state of hardware very well, so I'm checking on dynamic parallelism support again after a few years and seeing that this is still the most practical portable way (AMD's bitrotting OpenCL support is useless, and how long Intel will really keep working on modern OpenCL and interop while they're trying to push SYCL is questionable).

I don't even think what they're doing is explicitly allowed by either the PC APIs or the shading languages. They admit to relying on some undefined behaviour in how certain GPUs will schedule their work so that threads won't be starved indefinitely ...
I wonder though, would Epic have done differently even if that weren't the case? Persistent threads are hacky and ugly ... but you just can't beat that latency and control over memory, most importantly shared memory, which you aren't ever likely to get with any higher-level API.
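To make the pattern concrete: a persistent-threads kernel launches only about as many thread groups as the GPU can keep resident and loops them over a global work queue, producing and consuming items with atomics. The spin-wait only terminates if the scheduler keeps the warps that hold outstanding work running, which is exactly the forward-progress assumption being debated above. Here is a toy CUDA sketch of the idea under that assumption (my own illustration, not Epic's code; the binary-tree "node expansion" stands in for hierarchical culling):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

#define MAX_ITEMS  (1 << 16)   // queue capacity (toy-sized)
#define NODE_LIMIT 1000        // expand nodes 0..NODE_LIMIT-1 of an implicit binary tree

__device__ int s_queue[MAX_ITEMS];  // published work items
__device__ int s_ready[MAX_ITEMS];  // per-slot "payload is visible" flag
__device__ int s_head;              // next slot consumers will claim
__device__ int s_tail;              // next slot producers will reserve
__device__ int s_pending;           // items pushed but not yet fully processed

__device__ void push(int item)
{
    atomicAdd(&s_pending, 1);          // account for the child before its parent retires
    int slot = atomicAdd(&s_tail, 1);  // reserve a slot
    s_queue[slot] = item;
    __threadfence();                   // make the payload visible...
    atomicExch(&s_ready[slot], 1);     // ...before publishing the slot
}

__global__ void persistentCull(int* processedCount)
{
    for (;;)
    {
        int idx = atomicAdd(&s_head, 1);   // claim a slot; may overshoot what exists yet
        if (idx >= MAX_ITEMS)
            return;

        // Spin until the slot is published or the queue has fully drained. This is the
        // part that leans on the scheduler: it only terminates if the warps that still
        // hold unprocessed work keep making progress.
        while (atomicAdd(&s_ready[idx], 0) == 0)
            if (atomicAdd(&s_pending, 0) == 0)
                return;                    // nothing left anywhere: exit

        int node = s_queue[idx];
        atomicAdd(processedCount, 1);      // "process" the node (cull, emit clusters, ...)

        int left = 2 * node + 1, right = 2 * node + 2;
        if (left  < NODE_LIMIT) push(left);    // expand children back onto the queue
        if (right < NODE_LIMIT) push(right);

        atomicSub(&s_pending, 1);          // this node is done
    }
}

int main()
{
    // Zero the flags, then seed the queue with one root node.
    void* readyPtr = nullptr;
    cudaGetSymbolAddress(&readyPtr, s_ready);
    cudaMemset(readyPtr, 0, MAX_ITEMS * sizeof(int));

    int zero = 0, one = 1, root = 0;
    cudaMemcpyToSymbol(s_queue,   &root, sizeof(int));
    cudaMemcpyToSymbol(s_ready,   &one,  sizeof(int));  // slot 0 is ready
    cudaMemcpyToSymbol(s_head,    &zero, sizeof(int));
    cudaMemcpyToSymbol(s_tail,    &one,  sizeof(int));
    cudaMemcpyToSymbol(s_pending, &one,  sizeof(int));

    int* processed = nullptr;
    cudaMalloc(&processed, sizeof(int));
    cudaMemset(processed, 0, sizeof(int));

    // Launch only as many blocks as can be co-resident, so every block we rely on
    // for forward progress is actually running.
    int blockSize = 128, blocksPerSM = 0, smCount = 0;
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM, persistentCull, blockSize, 0);
    cudaDeviceGetAttribute(&smCount, cudaDevAttrMultiProcessorCount, 0);

    persistentCull<<<blocksPerSM * smCount, blockSize>>>(processed);
    cudaDeviceSynchronize();

    int result = 0;
    cudaMemcpy(&result, processed, sizeof(int), cudaMemcpyDeviceToHost);
    printf("processed %d nodes\n", result);  // expect NODE_LIMIT
    return 0;
}
```

On Volta and newer, independent thread scheduling makes the intra-warp spin reasonably safe in practice; on older hardware the same loop can livelock, which is the kind of scheduling assumption the comments above are talking about.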
> Oh there is already a toggle in the UI. I was just saying, it's the biggest performance eater for me in these demos.

Sorry I wasn't clear... that cvar does not toggle off VSMs, it makes VSMs much less expensive in that scene (which has lots of non-nanite geometry) without really any hit to quality. Should ideally have been done in the demo itself, but just noting VSM does not *have* to be as expensive as it is in the stock demo.
The whole thing doesn't look that great anyway.
Man UE5.1 is capable of some truly incredible stuff.
> I wonder though, would Epic have done differently even if that weren't the case? Persistent threads are hacky and ugly ... but you just can't beat that latency and control over memory, most importantly shared memory, which you aren't ever likely to get with any higher-level API.

It's more that by shipping persistent threads with the scheduling assumptions embedded, we forever constrain the hardware/drivers in a few ways. We absolutely would prefer to use some sort of higher-level interface that the IHVs could bake down into something safe and optimized (or even HW-accelerated in the future) on a given GPU generation. Even taking a moderate performance hit to have "safe" spec-compliant code would probably be acceptable; it's just that the current option (chaining tons of dispatch indirects) is unusably slow, not just a bit slower.
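For contrast, the "safe" route mentioned above splits the traversal into dependent passes: one dispatch per level, with the next level's work list and count produced by the previous pass (a chain of DispatchIndirect/ExecuteIndirect calls in the graphics APIs). A rough CUDA rendering of the same toy workload as in the earlier sketch, where a per-level count readback stands in for the indirect dispatch; it is that per-level dependency and launch overhead being criticized:

```cuda
#include <cstdio>
#include <utility>
#include <cuda_runtime.h>

#define MAX_ITEMS  (1 << 16)
#define NODE_LIMIT 1000

// Expand one level: every input node appends its in-range children to the output list.
__global__ void expandLevel(const int* inNodes, int inCount,
                            int* outNodes, int* outCount, int* processedCount)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= inCount)
        return;

    int node = inNodes[i];
    atomicAdd(processedCount, 1);              // "process" the node

    int children[2] = { 2 * node + 1, 2 * node + 2 };
    for (int c = 0; c < 2; ++c)
        if (children[c] < NODE_LIMIT)
            outNodes[atomicAdd(outCount, 1)] = children[c];
}

int main()
{
    int *bufA, *bufB, *count, *processed;
    cudaMalloc(&bufA, MAX_ITEMS * sizeof(int));
    cudaMalloc(&bufB, MAX_ITEMS * sizeof(int));
    cudaMalloc(&count, sizeof(int));
    cudaMalloc(&processed, sizeof(int));
    cudaMemset(processed, 0, sizeof(int));

    int root = 0, levelCount = 1;
    cudaMemcpy(bufA, &root, sizeof(int), cudaMemcpyHostToDevice);

    // One dependent dispatch per tree level. In a graphics API this would be a chain
    // of DispatchIndirect calls; here the count readback each level stands in for that
    // dependency and adds a sync point per level.
    while (levelCount > 0)
    {
        cudaMemset(count, 0, sizeof(int));
        int blocks = (levelCount + 127) / 128;
        expandLevel<<<blocks, 128>>>(bufA, levelCount, bufB, count, processed);
        cudaMemcpy(&levelCount, count, sizeof(int), cudaMemcpyDeviceToHost);
        std::swap(bufA, bufB);                 // next level's input is this level's output
    }

    int result = 0;
    cudaMemcpy(&result, processed, sizeof(int), cudaMemcpyDeviceToHost);
    printf("processed %d nodes\n", result);    // expect NODE_LIMIT, same as the persistent version
    return 0;
}
```

Deep hierarchies turn into long chains of small, serialized launches, which is where this approach falls off a cliff compared to the single persistent kernel.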
> The GI provided by AMD even in its first 1.0 version already runs faster and makes use of HW-RT.

As noted before though, HWRT will almost always "win" in static and/or simpler scenes where you don't need to update any significant amount of the geometry in the BVH. Stuff like architectural visualization is a perfect case for HWRT: simple geometry, but you want lots of accurate light bouncing.
The place where it falls apart is complicated/large scenes that need significant BVH updates every frame. It's not GI specifically that falls apart there, it's HWRT as it exists today. I'm sure we will get improvements there in the future, but I'd love to see it get more focus from the tech press, since that is where some of the real limiting issues lie IMO, and tech demos/research never stress those paths significantly.