Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

In one of the videos Brian Karis said the assets in the new demo were sized so that a machine with 64 GB of RAM can store the whole level in RAM. This bypasses any DirectStorage/streaming issues. We can't stream optimally on PC until Microsoft releases a Windows version with DirectStorage and Epic releases a UE5 version with DirectStorage support. Fitting in 64 GB of RAM is likely half the reason for the asset downgrade from the UE5 reveal (Lumen in the Land of Nanite); the other part is the storage space required.

We are looking at technology that is going to take maybe 1-2 more years before it ships in any game.


There is no downgrade. He never said that. Lumen in the Land of Nanite was unchanged and not downgraded in any way.

Secondly, the demo doesn't require 64 GB of RAM. It requires only 3 GB of RAM and up to 7 GB of VRAM allotted, with only about 4-5 GB actually used, according to @Dictator.

It's the editor that needs 32-64 GB. What makes the editor so heavy is that you are accessing all the source assets: source textures, source materials, source shaders, uncompressed, etc.
This lets you manipulate things: change colors, materials, renders. That is why you need 32-64 GB of memory.

When you compile/package the project, you have none of the source assets; everything is compressed and baked out and can't be changed. So you don't have uncompressed source assets and shaders hanging around in memory just in case you need to edit them. That's why the project is 100 GB but the compiled version is ~25 GB.
 
Did you watch the Nanite stream with Brian Karis? Valley of the Ancient is HEAVIER than Lumen in the Land of Nanite. Nanite costs roughly 2x the milliseconds in Valley of the Ancient compared with Lumen in the Land of Nanite. Hence the performance target for Epic settings is 1080p 30 and not the sub-1440p 30 average that Lumen in the Land of Nanite ran at. This is all readily available information on their wiki or from the livestreams.

Hey, thanks for posting this. Can you point to a link or timestamp where they mention it being heavier? I think it's pretty obvious looking at the demos, but I couldn't find a source.


----


Regarding the RAM talk: I wish we could analyze this granularly; I suspect a significant portion of it is textures and Lumen data. It doesn't make sense to store the whole tree for every object in the scene in RAM if your whole goal is to only access a few leaf levels every time the content changes.


The screenshots someone was showing with missing detail must have been from a slow HDD paired with a small VRAM pool.

It was; the poster artificially lowered their RAM or the streaming pool. Even so, it was the result of a bug:

Controlling Nanite Streaming Pool Size said:
For Early Access, if the pool is not large enough to fit all the data needed for a view, cache thrashing can occur where streaming never settles even for a static view.
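
For anyone who wants to poke at this themselves, a minimal sketch of the knobs involved, assuming the Early Access console variable names (I'm going from memory of the docs, so verify them with the console's auto-complete before relying on them):

    r.Nanite.Streaming.StreamingPoolSize 512
    r.Streaming.PoolSize 1000

The first sets the Nanite streaming pool budget in MB, the second the ordinary texture streaming pool in MB. Shrinking either far below the defaults should reproduce the thrashing described above; raising the Nanite pool should let a static view settle again.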
 
There is no downgrade. He never said that. Lumen in the Land of Nanite was unchanged and not downgraded in any way.

I'm not claiming any downgrade in engine/features/perf. All those should be better. The asset quality was downgraded, and the reason given for that was to make it work on all platforms. The original demo used 8K textures and extremely detailed assets. One reason for the asset downgrade is disk storage space; the other is that the demo fits in 64 GB of RAM.

It is difficult to stream many small blocks from an SSD efficiently on Windows until we get DirectStorage in Windows and DirectStorage integration in UE5, plus hardware/GPU-based decompression. Likely Epic has this done already, but we don't have DirectStorage available yet in consumer Windows.

Once UE5 ships, things should be fine. Consumer Windows with DirectStorage will get released and polished, just the same as UE5, which is still a WIP being developed and optimized.

Edit: there is also the discrepancy between the editor and the original demo. If someone just looks at the footage, it's easy to come to the wrong conclusion because the editor doesn't enable all features.

 
I'm not claiming any downgrade in engine/features/perf. All those should be better. The asset quality was downgraded, and the reason given for that was to make it work on all platforms. The original demo used 8K textures and extremely detailed assets.

What's the source on this? And why would this be a problem? Virtual texturing has been around for over ten years; we can put as big textures as we want on HDDs.
 
What's the source on this? And why would this be a problem? Virtual texturing has been around for over ten years; we can put as big textures as we want on HDDs.

Brian Karis was talking about the asset quality downgrade and the reasons behind it on one of the UE5 streams. Unfortunately I don't have the exact timestamped link available. I didn't think this would become some kind of argument, as it was clearly stated in the stream.

Streaming and decompressing small random blocks on Windows is not very good until Windows with DirectStorage ships. UE5 is all about dynamically streaming what is needed: allow huge source assets on disk and create the appropriate LOD dynamically. The new demo sidesteps the need to stream by letting the whole scene fit in RAM on machines with 64 GB or more.

About DirectStorage

About having the whole level in memory on PC to avoid streaming

Epic could have released a demo with more detailed assets. It would likely have worked fine, but it would have required more disk space and RAM, or DirectStorage-enabled Windows, which is not widely available yet.

This is kind of crazy in the original demo. It also shows how the original content works in free roam, debunking the 'on rails' arguments.
 
The asset quality was downgraded, and the reason given for that was to make it work on all platforms.

No it wasn’t.

The original demo used 8K textures and extremely detailed assets.

Valley of the Ancient uses more texture data than Lumen in the Land of Nanite.

And no, it's the same assets from Quixel, and Nanite isn't affected by the amount or quality of the assets you use.

One reason for the asset downgrade is disk storage space; the other is that the demo fits in 64 GB of RAM.

It is difficult to stream many small blocks from an SSD efficiently on Windows until we get DirectStorage in Windows and DirectStorage integration in UE5, plus hardware/GPU-based decompression. Likely Epic has this done already, but we don't have DirectStorage available yet in consumer Windows.

Once UE5 ships, things should be fine. Consumer Windows with DirectStorage will get released and polished, just the same as UE5, which is still a WIP being developed and optimized.

Stop spreading FUD. You have been corrected many times, even by the creator of Nanite.

The Nanite data for Lumen in the Land of Nanite is 6.14 GB.

You don’t need a super fast SSD.
You don’t need direct storage.
You don’t need RTX IO.
You don't need 32-64 GB of memory.

Valley of the Ancient needs only ~3 GB of RAM and 4 GB of VRAM used out of 7 GB allotted (in some cases).

Edit: there is also the discrepancy between the editor and the original demo. If someone just looks at the footage, it's easy to come to the wrong conclusion because the editor doesn't enable all features.

There are no discrepancies.
The demo is as it ran on Brian's PC. Period.
You either accept the facts or you don't.
 
Streaming and decompressing small random blocks on Windows is not very good until Windows with DirectStorage ships. UE5 is all about dynamically streaming what is needed: allow huge source assets on disk and create the appropriate LOD dynamically. The new demo sidesteps the need to stream by letting the whole scene fit in RAM on machines with 64 GB or more.

  1. It's not very good, no, but it's plenty good enough to access the tree for virtual textures or Nanite-style virtual geometry. That's one of the main problems Nanite is solving and why it's impressive: the load is small enough for SSDs to handle because it only has to pull a few leaf nodes per mesh. (You remember playing games with virtual textures on your HDD, right? Textures are bigger than meshes, especially Nanite meshes.)
  2. The 64 GB RAM claim is completely false and has been disproven repeatedly. The whole demo is less than 30 GB built, and VRAM loads are under ~5 GB.
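
(If anyone wants to check those numbers themselves rather than take anyone's word for it, the stock stat overlays in a packaged build are enough; these are standard UE console commands, nothing Nanite-specific:

    stat RHI
    stat Streaming

stat RHI reports GPU memory usage and stat Streaming reports the texture streaming pool, which is plenty to sanity-check RAM/VRAM claims.)
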
The demo is as it ran on Brian's PC. Period.
You either accept the facts or you don't.
You're totally right about everything in this post, but:
tbf, it's safe to assume his PC is much better than yours or mine. Dev computers on big engine teams are usually nuts: Threadrippers with 128 GB+ of RAM. (But that's so they can do fast builds, not because it's the minimum requirement to run a demo.)
 
Brian Karis was talking about the asset quality downgrade and the reasons behind it on one of the UE5 streams. Unfortunately I don't have the exact timestamped link available. I didn't think this would become some kind of argument, as it was clearly stated in the stream.

Streaming and decompressing small random blocks on Windows is not very good until Windows with DirectStorage ships. UE5 is all about dynamically streaming what is needed: allow huge source assets on disk and create the appropriate LOD dynamically. The new demo sidesteps the need to stream by letting the whole scene fit in RAM on machines with 64 GB or more.

About DirectStorage

About having the whole level in memory on PC to avoid streaming

Epic could have released a demo with more detailed assets. It would likely have worked fine, but it would have required more disk space and RAM, or DirectStorage-enabled Windows, which is not widely available yet.

This is kind of crazy in the original demo. It also shows how the original content works in free roam, debunking the 'on rails' arguments.
It was not Brian Karis talking about it; it was the artist responsible for the demo (Galen Davis) describing how they used tiling detail normal maps instead of discrete ones for disk-space reasons when distributing the demo's raw assets (it is 100 GB, after all). It was not because a wide range of devices needs to run it, but rather a bandwidth concern for distribution.

EDIT: 16:45
 
I got the compiled demo at max settings running on my RTX 2060 laptop at a rock-solid 30 FPS at 1440p (actually 67% of that via TSR) in every scenario. I've added CAS sharpening and it looks gorgeous:

[Screenshots: ancientgame2021-06-06ajkeo.png, ancientgame2021-06-06r8kx5.png, ancientgame2021-06-06n0jur.png, ancientgame2021-06-06fpjnm.png, ancientgame2021-06-066jjoz.png]


It's really blowing my mind how much picture quality I can extract from this little thing. TSR does wonders too. Motion blur ruined two of these shots, but it still looks great!

Just curious: is there any reason why the fire in the first shot is so pixelated? Also, everything appears a bit soft/blurry; I'm guessing that's the TSR?
 
I did (and do) say that the 2020 demo wouldn't run off a 2020 PC's storage IO, and from Tim Sweeney's statements it does not.
Why are people so convinced there are excessive or esoteric IO requirements for Nanite? Brian showed that demo running just fine on a regular PC in his live stream to try and quell some of these myths, but they appear to persist.

In general, why do people even get the impression that lots of IO is required? As noted in the live streams as well, the texture data in the demo exceeds the size of the geometric data, and no one seems to think virtual texturing systems that have been in use for many years require DirectStorage or super-high-bandwidth SSDs.

SSDs are definitely important (but not necessarily required) for latency reasons. Some amount of bandwidth is necessary to keep up with streaming while moving quickly through a world too, but in my experience it's not nearly as much as people seem to think it is. Where are people getting these impressions?

I have plenty of posts out there discussing how slower IO could be used to show similar results, and having a ton of RAM with pre-decompressed assets to stream to the GPU was definitely one of the ways.
I think it's safe for me to reiterate from the stream that there is nothing special going on in either of the demos on PC in terms of IO... it's doing the same thing as the consoles (consoles use the platform-specific final package compression stuff of course, but if anything that makes it even smaller there). You can go play with the cvars and arbitrarily limit bandwidth and so on yourself now and see how they affect things.
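
For example (a sketch only; I'm recalling the Early Access cvar names from memory, and the second one in particular is an assumption, so type "r.Nanite.Streaming." in the console to see the real list in your build):

    r.Nanite.Streaming.StreamingPoolSize 256
    r.Nanite.Streaming.MaxPageInstallsPerFrame 32

The first shrinks the Nanite streaming pool; the second, if it exists under that name in your build, throttles how many pages can be installed per frame, which is a crude way of capping effective streaming bandwidth.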

Also, to reiterate from the live stream: the new demo is heavier than the old one. The overdraw is pretty nuts, and there are more lights with full-screen coverage in the dark-world segment.

But most importantly: the full engine and source code are out now; there's no more need to speculate on stuff. Please go test it (and report bugs/issues) and see for yourselves! (Minor aside/reminder: if you're interested in representative game-like resource usage, be sure to be running a fully cooked build in shipping config with DX12.)
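
For reference, one way to get such a build, sketched with placeholder project names (the RunUAT flags shown are the common ones; your project may need more):

    RunUAT.bat BuildCookRun -project="MyProject.uproject" -platform=Win64 -clientconfig=Shipping -build -cook -stage -pak
    MyProject.exe -dx12

The first line builds, cooks, and packages a Shipping client for Win64; the second launches the packaged executable with the DX12 RHI.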
 
Quite the difference; look at the structure ahead of her.
I would be remiss if I didn't also note something that was already mentioned on Twitter: Nanite currently targets the output resolution *before* TSR upscaling in terms of its polygon sizes, so you will see differences in the default config. You can adjust the Nanite CVars to compensate if the goal is to compare TSR itself.

There's not necessarily a "right" way to do this per se and it may change in the future. There are similar questions for virtual shadow maps (which resolution do you target), which also currently target the pre-upscaled resolution (and can also be modified with cvars). These are all choices a specific game could make for various platforms in any case, so I wouldn't take the current defaults too seriously.

In any case, I just wanted to note that in case people are trying to compare "just TSR itself". If you want to do that, you should probably also modify Nanite's and VSM's resolution targets to be equivalent to what they would be at 4K native.
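
If you do want to attempt that, the knobs in question are console variables; a sketch, assuming the Early Access names (check them with console auto-complete, and the exact values you want depend on your screen percentage):

    r.Nanite.MaxPixelsPerEdge 0.67
    r.Shadow.Virtual.ResolutionLodBiasDirectional -0.5

The first is Nanite's screen-space error target (smaller values make it refine to smaller triangles, approximating what it would do at a higher native resolution); the second biases the resolution virtual shadow maps pick for directional lights.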
 
About having the whole level in memory on PC to avoid streaming
The whole point there was to point out there wasn't any trickery at all in the original demo. There was no "loading tunnel", and it can easily stream all of the Nanite geometry/textures for the whole level dynamically. What he meant by "I have the whole level loaded" is that the old demo did not use World Partition (it wasn't ready yet, as he mentions in other places); it instead used classic UE4-style sublevels. He just "loaded" all the sublevels (in terms of UE4 assets, not in terms of Nanite geometry/virtual texture streaming!) for the purposes of flying around all of it.

Nanite and virtual texture streaming still operate basically the same way in editor as they do at runtime, although the editor itself often keeps around more copies of uncompressed data for editing purposes. Hence the RAM cost in editor is higher, but it's not because of anything related to the cost of rendering the scene. As I noted above, if anything performance in editor is lower than a shipping build due to various factors.

All of this is pretty orthogonal to Nanite though and the tldr was meant to be "all of these platforms can happily stream all this data just fine, as you can see". Nothing fancy required.


Edit: there is also the discrepancy between the editor and the original demo. If someone just looks at the footage, it's easy to come to the wrong conclusion because the editor doesn't enable all features.
Yeah it looks better now than it did then because we've improved a lot of stuff! :)

Brian is just pointing out that the directional light direction isn't exactly the same when he was just fooling around, because that was part of the scripting in the original demo (e.g. as the roof opened and so on). That in no way would make performance better. Indeed, he had the directional light enabled the entire time he flew around, in addition to the spotlights the artists used for the cave sequence. In the original demo it was only ever one or the other, as our shadow caching and optimization were not at the same level they are now.

Why does everyone think there's a conspiracy here? Just go play with it yourselves and see! IMO it's pretty easy for even amateurs like me to drop in a bunch of Quixel geometry and get a similar sort of test case to the various demos. Indeed, the exact same assets are available to drop in for free! Sure, our test scenes will look like trash compared to real artist-authored stuff, but you can observe the performance characteristics for yourselves.
 
... for disk-space reasons when distributing the demo's raw assets (it is 100 GB, after all). It was not because a wide range of devices needs to run it, but rather a bandwidth concern for distribution.
And my poor SSD space on dev machines with multiple of these sorts of projects synced. 16 TB SSDs when....
 
In general, why do people even get the impression that lots of IO is required? As noted in the live streams as well, the texture data in the demo exceeds the size of the geometric data, and no one seems to think virtual texturing systems that have been in use for many years require DirectStorage or super-high-bandwidth SSDs.

SSDs are definitely important (but not necessarily required) for latency reasons. Some amount of bandwidth is necessary to keep up with streaming while moving quickly through a world too, but in my experience it's not nearly as much as people seem to think it is. Where are people getting these impressions?
Probably because the demo last year was all about how this was only possible due to the PS5's insanely fast IO.
 
Just because the current demo loads everything to VRAM doesn't mean all future games built on UE5 will keep everything resident that way.

It's a lot like Ridge Racer containing the whole game in the PS1's RAM. This isn't a game; it's an engine demo showcasing two brand-new technologies.

Games certainly will use the SSD. Fully saturate it? Dunno, we'll have to wait and see.

I want to see Wipeout on this engine.
 
In general, why do people even get the impression that lots of IO is required?
I guess it's things like these:
Nanite shows insane detail, indicating the storage cost is now insane too.
Next-gen console marketing made the SSD a big argument, indicating it is necessary for the next-gen experience.
Those two points sum it up well.
Traditionally, open-world games often stream in bigger chunks of the world. Because the chunks are not fine-grained, one could think increasing detail also increases those IO loads accordingly.

Edit: maybe it's just this: it's not really obvious that the advantages Nanite brings for graphics performance and memory requirements apply just as much to IO. Personally, I would not have thought about that until somebody mentioned it here a few days ago.
 
Next-gen console marketing made the SSD a big argument, indicating it is necessary for the next-gen experience.
Sure, but there's a difference between "we need/really want an SSD because they are hilariously faster and lower latency than the HDDs and optical drives that previous consoles relied on" and "we need tons of bandwidth and lots of fancy stuff on top of that". Don't get me wrong, all the new fancy IO stuff is great and I'm sure it will help improve things further across the board in games, but I feel like Epic has been pretty clear from the start that this stuff works well on all of the target platforms and will continue to get better.

Traditionally, open-world games often stream in bigger chunks of the world. Because the chunks are not fine-grained, one could think increasing detail also increases those IO loads accordingly.
That's the whole point of Nanite, after all: virtualized geometry in the same way as virtual textures. You need not think about loading big chunks of stuff at once; it just handles it automatically as you move around, sticking to a fixed physical memory budget.
 
In general, why do people even get the impression that lots of IO is required?

https://www.gamesradar.com/epics-un...-ps5-vision-that-sony-has-only-told-us-about/

Nick Penwarden, vice president of engineering at Epic, also touched on the capabilities of PS5 and what it means for in-game visuals. "There are tens of billions of triangles in that scene," he says, referring to a room of statues in the aforementioned Unreal Engine 5 demo, "and we simply couldn't have them all in memory at once. So what we ended up needing to do is streaming in triangles as the camera is moving throughout the environment. The IO capabilities of PlayStation 5 are one of the key hardware features that enable us to achieve that level of realism."


Is he wrong?
 
The video with the guy showing the original PS5 demo on a PC states that they changed the technology from a streaming one to something else. I can't recall the specifics and have a toddler climbing on me.
 