Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

@Andrew Lauritzen I have an *optimization* idea for virtual shadow maps & texturing using the tiled resources API on modern Intel graphics hardware. They have a hardware feature called Tiled Resources Translation Tables (TR-TT), which lets you bypass the overhead of updating the OS page table mappings and makes the UpdateTileMappings API run fast ...
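For reference, a minimal sketch of the call in question, assuming D3D12's tiled resources API: mapping a single 64 KiB tile of a reserved shadow-map texture onto a physical heap page. The names `queue`, `vsmTexture`, and `pageHeap` plus all coordinates are illustrative, not from the thread; it is exactly this page-table update whose overhead TR-TT is meant to hide.

```cpp
#include <d3d12.h>

// Map one 64 KiB tile of a reserved (tiled) texture to a physical heap page.
// The driver turns this into GPU page-table updates; TR-TT's separate
// user-mode translation table is what makes this path cheap on Intel HW.
void MapOneShadowPage(ID3D12CommandQueue* queue,
                      ID3D12Resource* vsmTexture, // reserved/tiled resource
                      ID3D12Heap* pageHeap)       // pool of 64 KiB pages
{
    D3D12_TILED_RESOURCE_COORDINATE coord = {};
    coord.X = 5;            // tile x within mip 0 (illustrative)
    coord.Y = 3;            // tile y within mip 0 (illustrative)
    coord.Subresource = 0;

    D3D12_TILE_REGION_SIZE region = {};
    region.NumTiles = 1;

    D3D12_TILE_RANGE_FLAGS rangeFlag = D3D12_TILE_RANGE_FLAG_NONE;
    UINT heapPageIndex  = 42;  // which 64 KiB page of pageHeap to use
    UINT rangeTileCount = 1;

    queue->UpdateTileMappings(vsmTexture,
                              1, &coord, &region,
                              pageHeap,
                              1, &rangeFlag, &heapPageIndex, &rangeTileCount,
                              D3D12_TILE_MAPPING_FLAG_NONE);
}
```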
I'm well aware of it as you might imagine from my past life :D Been there since... I think Skylake but perhaps the one after it. The main issue with using TR for VSM or other sparse rendering is that the rasterizers in GPUs are not optimized with any sort of acceleration structure, or even early-outs related to the tile mappings. So it will still do all the work to rasterize the unmapped pixels and throw them out, in the best case before PS launch, but worst case in the ROP itself... Neither case is particularly useful when doing depth-only rendering.

And of course the hardware mappings don't necessarily match the optimal page sizes and so on.

TR unfortunately basically always falls into the realm of "theoretically cool and interesting, but in practice too many issues to get any sort of real gains vs. software methods". The main use it has these days is sadly just bypassing WDDM overheads and legacy "fragmentation" nonsense to be able to back resource heaps with smaller chunks of allocations.
 
TR unfortunately basically always falls into the realm of "theoretically cool and interesting, but in practice too many issues to get any sort of real gains vs. software methods". The main use it has these days is sadly just bypassing WDDM overheads and legacy "fragmentation" nonsense to be able to back resource heaps with smaller chunks of allocations.
I know one real use case for tiled resources: some games like Assassin's Creed Valhalla use them for mipmap streaming. That's about the only positive I can think of for the feature ...
 
State of Unreal 2024 keynote on March 20.


I'm hoping for a new Unreal singleplayer game or at least a glorious reinterpretation of the original. Never give up hope.

 
I know one real use case for tiled resources: some games like Assassin's Creed Valhalla use them for mipmap streaming. That's about the only positive I can think of for the feature ...
Yeah the problem is... they aren't even very good for mipmap streaming for a variety of reasons (PT update overhead, restrictive layouts, awkward mip tail handling on PC, etc). Pair that with the fact that the only real benefit they bring over software-managed atlases is you don't need the few pixel border for filtering and it gets pretty hard to justify. That's why most engines (Unreal included, to stay on topic) use atlases for virtual textures. TR can be made to work of course, but even in the best cases it's really not much of a win.
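For context, here is a rough sketch of the software-atlas indirection being compared against, with the few-pixel filtering border that TR would let you skip. None of these constants or names come from Unreal; the page and border sizes are assumptions for illustration. Each page is stored in the physical atlas with a small border duplicated from its neighbors so that hardware bilinear/aniso filtering never reads another page's texels.

```cpp
#include <cstdint>

// Illustrative software virtual-texture atlas lookup (not Unreal's code).
// A 128x128 payload page sits in the atlas with a 4-texel border on each
// side, so filtering taps stay inside the page's own region. This border
// (and the bandwidth to fill it) is the cost hardware TR would avoid.
constexpr uint32_t kPageSize   = 128;                      // payload texels per side
constexpr uint32_t kBorder     = 4;                        // filtering border per side
constexpr uint32_t kPagePitch  = kPageSize + 2 * kBorder;  // page footprint in atlas
constexpr uint32_t kAtlasWidth = 8192;                     // square atlas assumed

struct Float2 { float x, y; };

// pageX/pageY: physical page coords from the page-table lookup.
// inPageU/inPageV: 0..1 coordinates within the virtual page.
Float2 AtlasUV(uint32_t pageX, uint32_t pageY, float inPageU, float inPageV)
{
    // Offset past the border so all filter taps land inside this page.
    float texelX = pageX * kPagePitch + kBorder + inPageU * kPageSize;
    float texelY = pageY * kPagePitch + kBorder + inPageV * kPageSize;
    return { texelX / kAtlasWidth, texelY / kAtlasWidth };
}
```

The border typically wastes a few percent of atlas memory and copy bandwidth, which is usually an easy trade against TR's page-table update costs.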

I have very mixed feelings on the whole TR thing, having been part of the initial discussion when they were added. Having a "free" hardware indirection in various places could certainly be useful, but making that specifically the same indirection as the OS-controlled page mapping (that has many more important implications for performance and security!) just has too many issues. In practice it feels like another case where some detail of how the hardware and software interacted at the time was pushed through to the public-facing API and in the long run the only reasonable thing to do is what Intel did, i.e. make a whole separate user-mode page table translation just for TR. But I don't think very many people will argue that putting that hardware in is worth the relatively small benefits that it has brought in practice, which is why AMD and NVIDIA have yet to do anything like that I imagine.

To add even more frustration, for the cases where it is useful today it's mostly because the kernel-mode components of WDDM move at such a glacial pace that they still aren't even treating GPUs as if they have virtual memory for the purposes of allocation and residency. Thus TR ends up being more of a work-around for the fact that the kernel mode software is so dated rather than something that is actually that useful from a theoretical perspective at this point.
 
Kinda neat, but as always with colored shadows they end up being used like once in a game with some stained glass window thing a lighting artist specifically wanted for one area, and otherwise you never really notice they're missing (especially since translucency still = expensive).
 
Kinda neat, but as always with colored shadows they end up being used like once in a game with some stained glass window thing a lighting artist specifically wanted for one area, and otherwise you never really notice they're missing (especially since translucency still = expensive).
You can already do coloured translucent shadows by projecting an emissive decal as shown below in a separate video:

 
Some of the excited chatter seems to be "phew! We don't have to do that anymore".
If it works nearly as well (dynamic & integrates with Lumen), then why even risk your own project on an unofficial custom branch of the engine that has virtually no community support?
 
If it works nearly as well (dynamic & integrates with Lumen), then why even risk your own project on an unofficial custom branch of the engine that has virtually no community support?

Nvidia seems to give you PR money if you advertise "RTX" to the high heavens. For a super indie that's probably a good amount of money.

Otherwise, if it were really in demand, I imagine you could add moments to virtualized shadow maps and ray trace through them, but as stated I don't really see that much demand. Most triple-A games that have carnivals/county fairs/whatever that would even have colored balloons usually take place, like, at night after the zombies have eaten everyone or some such. Other than the "stained glass window" thing I can hardly name any scenario where you actually see colored shadows as such.
 
Nvidia seems to give you PR money if you advertise "RTX" to the high heavens. For a super indie that's probably a good amount of money.
Yes, but they can't directly "monetize" Unreal Engine since it isn't their product, so is indirectly advertising their hardware a strong enough impetus to fix project breakage?

If a game project is using an advertised feature that doesn't work, Epic Games does have an incentive to fix the problem, since they collect royalties on every project using their tools. And if it's a "low priority" for them to deliver, you can fall back on the community to contribute patches faster, depending on how much the issue affects them ...
 
Yes, but they can't directly "monetize" Unreal Engine since it isn't their product, so is indirectly advertising their hardware a strong enough impetus to fix project breakage?

It’s a strong enough incentive for them to create the branch in the first place. So yes?
 
And just how confident are you that it'll materialize into regular use in the context of big projects?

No idea but why does it matter? It’s an option for devs to consider for their projects should they find it useful. Do games even advertise use of Nvidia’s UE branches?
 
Brickadia is one developer that integrated PhysX into UE 5.1 early in development, and they are now upgrading to UE 5.3 with PhysX 5.3.
From their blog post of Dec 27, 2023:
Those changes include our PhysX integration, now targeting PhysX 5.3 (isn't that number a funny coincidence?), which was originally based on a version of Epic's code in Unreal Engine 4 but somehow strays further from the beaten path with every upgrade. There are also many rendering and shader modifications, bug fixes, networking and performance improvements, ... and that's before we can start fixing all the new bugs we just added.

For example, Proto-Lumen had broken in multiple ways during this upgrade. It was no longer getting any velocity information, which I finally managed to track down to be caused by Epic having allowed async compute on NVIDIA GPUs for the first time in 5.3, causing Lumen to go down a different code path that was previously never used.

In this path, Lumen was not compatible with not having a depth pre-pass, because that'd result in velocity information only being generated after Lumen work had been dispatched. Brickadia does not have a depth pre-pass, because that actually slows down the rendering with lots of meshes and only simple shaders being used. Having to process thousands of draw calls and millions of vertices twice, perhaps unsurprisingly, doesn't make the game faster.
...
On that note, I've received many requests from other indie developers if they could use our PhysX integration for other, non-competing projects. That sounds reasonable to me, but the changes have been slowly getting mixed up with other commits and fixes. Before we can share it, it would need to be separated out and cleaned up and tested more, and I don't currently know when there will be time.

At the moment, it doesn't even compile outside of the Brickadia project with its very specific set of enabled/disabled plugins and related changes.
 
No idea but why does it matter? It’s an option for devs to consider for their projects should they find it useful. Do games even advertise use of Nvidia’s UE branches?
Well, there was that one indie puzzle game made by a single developer, but more power to them if they find it useful to create more tech demos ...
 