Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

So do you believe that a PS4 with the PS5 IO would offer superior results to a PS5 with an HDD?

I think a PS4 with the PS5's I/O would achieve better detail in games than a PS4 with 10 TFLOPS. I'm not talking about resolution or FPS here.

That's something I wanted to see improved in games: detail per scene, even if it's still under 4K and only 30 fps. When I see the Mass Effect remaster in 4K, I just see a very HD ugly game. I don't want last-gen games at a higher resolution with just RT.
 
I think a PS4 with the PS5's I/O would achieve better detail in games than a PS4 with 10 TFLOPS. I'm not talking about resolution or FPS here.

That's something I wanted to see improved in games: detail per scene, even if it's still under 4K and only 30 fps. When I see the Mass Effect remaster in 4K, I just see a very HD ugly game. I don't want last-gen games at a higher resolution with just RT.
Ehhh :) a bridge too far imo.
We needed I/O because of where we are headed with resolution and the texture detail that comes with supporting that. Without compute and the corresponding bandwidth you’ll have the ability to support high quality assets with no way to display them.
 
That doesn't mean we don't need a more powerful GPU, but people don't understand Nanite virtualized geometry at all. In the end they go from a source scene with over 16 billion polygons to about 20 million polygons for the rendering engine to process.

WCCFunrealengine52.jpg


image008.jpg

Assets with a million triangles each and 8K textures; over a billion triangles of source geometry in each frame, which the engine reduces losslessly to around 20 million drawn triangles.

For reference, Infamous Second Son rendered scenes of around 11 million polygons:

https://www.dualshockers.com/infamo...-11-million-rendered-regularly-by-the-engine/

In the end you can render the same scene at a higher resolution and framerate.
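To put those numbers in perspective, here is a quick back-of-the-envelope sketch (my own ballpark figures, not Epic's) comparing the post-Nanite triangle count with the pixel counts of common output resolutions:

```cpp
#include <cstdio>

int main() {
    // Ballpark figures for illustration only.
    const double source_triangles = 1e9;   // ~1 billion+ source triangles per frame
    const double drawn_triangles  = 20e6;  // ~20 million drawn triangles after Nanite

    const double pixels_1440p = 2560.0 * 1440.0; // ~3.7 M pixels
    const double pixels_4k    = 3840.0 * 2160.0; // ~8.3 M pixels

    printf("Source-to-drawn reduction: %.0fx\n", source_triangles / drawn_triangles);
    printf("Drawn triangles per 1440p pixel: %.1f\n", drawn_triangles / pixels_1440p);
    printf("Drawn triangles per 4K pixel:    %.1f\n", drawn_triangles / pixels_4k);
    return 0;
}
```

Either way, the drawn triangle count stays roughly on the order of the pixel count rather than the source geometry, which is the point of the virtualisation.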

And Nanite was already 60 fps ready in the PS5 demo. The problem was Lumen, whose cost is much higher than Nanite's.

Ec4t3ycU4AAIZqq


Ec4t67iVcAIK62c
 
I'd rather have a slower SSD/HDD in my PS5 (but with a modern RDNA GPU) than a PS5 with an HD 7870 GPU. Some seem to think bandwidth from storage is more important than GPU compute power, CPU processing power or even other hardware features such as ray tracing etc.
That only seems to fly with a small group though (thank god).

Out of all the components in any console (or PC etc.), the GPU is going to be the most important factor for delivering what you see on screen. There's still no future for Intel integrated GPUs in serious gaming.

If we're to believe what some think, the PC is by far the most capable graphics renderer since it has the fastest storage solutions around, especially considering DirectStorage and things like Optane.
 
So do you believe that a PS4 with the PS5 IO would offer superior results to a PS5 with an HDD?

With an HDD and no virtualised geometry, this would be impossible to do. No available GPU can render this scene exactly as shown with traditional methods, not even a 3090. It would mean tons of memory and a GPU running at very low utilisation because of the huge number of polygons. And without continuous LOD like in Nanite, you need multiple LODs of each asset.

optimizing-the-graphics-pipeline-with-compute-gdc-2016-51-638.jpg


1 polygon per pixel means only 6.25% GPU shading efficiency; this comes from the DICE presentation "Optimizing the Graphics Pipeline with Compute". If you want to render the same scene more efficiently without Nanite, you need to use more normal maps (which don't cast shadows), lose detail, and fall back on messy manual LODs. It would not have the same look at all and it would mean more work for the artists. The other choice is to keep all the assets in RAM with a compute rasterizer, but in the end that is probably impossible for a full game, even with reduced complexity, because every time you change biome or landscape you would need to refill the whole memory.
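To illustrate where a figure like 6.25% can come from, here is a minimal sketch of quad overshading (my own simplified model, not necessarily the exact methodology of the DICE slide): hardware rasterizers shade pixels in 2x2 quads, so a pixel-sized triangle that straddles a quad corner pays for up to 4 full quads, i.e. 16 shader invocations for 1 useful pixel.

```cpp
#include <cstdio>

// Simplified quad-overshading model: GPUs shade in 2x2 pixel quads, so every
// quad a triangle touches costs 4 pixel-shader invocations, even when the
// triangle only covers a single pixel of that quad.
double shading_efficiency(int covered_pixels, int touched_quads) {
    return static_cast<double>(covered_pixels) / (touched_quads * 4);
}

int main() {
    // A large triangle: ~1000 covered pixels spread over ~270 quads.
    printf("Large triangle:   %.1f%% efficiency\n", 100.0 * shading_efficiency(1000, 270));
    // A pixel-sized triangle sitting inside one quad: 1 useful pixel out of 4.
    printf("1-px triangle:    %.1f%% efficiency\n", 100.0 * shading_efficiency(1, 1));
    // Worst case: a pixel-sized triangle straddling a quad corner touches 4 quads.
    printf("1-px, worst case: %.2f%% efficiency\n", 100.0 * shading_efficiency(1, 4));
    return 0;
}
```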


bastian-hoppe-artstation-bh-05.jpg


bastian-hoppe-artstation-bh-07.jpg


You need everything: an NVMe SSD is now a must have, and DirectStorage will be a must have. The biggest gap between the two generations of consoles is the SSD and I/O system, and it is essential.

Without an SSD this level of geometry is impossible.

EDIT: Imagine that for each level you need to fill 64 GB of data. With an HDD that is about 10 minutes of loading time without compression, maybe 5 minutes with compression; around 2 minutes with a SATA SSD, or 1 minute with compression. It is 11 seconds with the PS5 SSD without compression and 5.5 seconds with compression. And there would be no possibility for the game designer to do a portal with high-end graphics, no Doctor Strange game, or, like the rumour, Doctor Strange in Spider-Man 2.
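A minimal sketch of that loading-time arithmetic; the drive bandwidths and the 2:1 compression ratio are my own ballpark assumptions:

```cpp
#include <cstdio>

// Time to fill 64 GB of level data at various bandwidths.
// Bandwidths and the 2:1 compression ratio are ballpark assumptions.
void report(const char* drive, double gb_per_s, double compression_ratio) {
    const double level_gb = 64.0;
    printf("%-26s %7.1f s\n", drive, level_gb / (gb_per_s * compression_ratio));
}

int main() {
    report("Fast HDD, raw",        0.11, 1.0);  // ~110 MB/s -> ~10 minutes
    report("Fast HDD, compressed", 0.11, 2.0);  //           -> ~5 minutes
    report("SATA SSD, raw",        0.55, 1.0);  // ~550 MB/s -> ~2 minutes
    report("SATA SSD, compressed", 0.55, 2.0);  //           -> ~1 minute
    report("PS5 SSD, raw",         5.5,  1.0);  // 5.5 GB/s  -> ~11.6 s
    report("PS5 SSD, compressed",  5.5,  2.0);  //           -> ~5.8 s
    return 0;
}
```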
 
DirectStorage will be a must have

Without an SSD this level of geometry is impossible.

Laptop without DS says hi.

Anyway, you're making very bold claims, and I wonder why you're reposting the game images in the same thread. Obviously, what's done above is possible without NVMe tech; you'd just need more VRAM/longer load times.

Without the new GPU, CPU, memory bandwidth etc., that SSD tech won't be doing all this. I would say the SSD tech would shine much more if there were a 6900 XT class GPU in there. Same for ray tracing; there's not much an SSD can do to lift it to 3090 performance.
 
Obviously, what's done above is possible without NVMe tech; you'd just need more VRAM/longer load times.
Their demo uses 'few' models as building blocks: they build structures by duplicating the same element over and over.
It's not clear to me if this will remain general practice with UE5 games, and what's the main future limit here: RAM / storage space / storage speed?
 
Their demo uses 'few' models as building blocks: they build structures by duplicating the same element over and over.
It's not clear to me if this will remain general practice with UE5 games, and what's the main future limit here: RAM / storage space / storage speed?

Like in Dreams you need to be clever with how you create the assets.

Storage space. This is virtualisation: it means streaming only the visible part of the geometry and textures. You never need the full assets in RAM, but you need low-latency, fast storage to stream the visible parts of the assets into RAM; an SSD is mandatory. The unoptimised buffer for Nanite is only 768 MB. This is a demo; for a game you need many more assets. Brian Karis said that at least the statue's level of detail is not very realistic for a game because of storage space.
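As a rough mental model of how a fixed-size streaming pool can work (a minimal, generic LRU page-cache sketch, not Epic's actual implementation), geometry and texture data are split into pages and only the pages the camera currently needs are kept resident; the SSD's job is to service the misses fast enough that pages arrive before they are visible on screen:

```cpp
#include <cstddef>
#include <cstdint>
#include <list>
#include <unordered_map>

// Minimal sketch of a fixed-size streaming pool (generic LRU page cache).
// Page IDs and the pool size are illustrative; this is not Epic's implementation.
class StreamingPool {
public:
    explicit StreamingPool(size_t capacity_pages) : capacity_(capacity_pages) {}

    // Called for every page the renderer determined to be visible this frame.
    // Returns true if the page was already resident, false if it had to be
    // requested from the SSD (evicting the least recently used page if full).
    bool Request(uint64_t page_id) {
        auto it = index_.find(page_id);
        if (it != index_.end()) {
            lru_.splice(lru_.begin(), lru_, it->second);  // mark as most recently used
            return true;
        }
        if (lru_.size() == capacity_) {
            index_.erase(lru_.back());   // evict the least recently used page
            lru_.pop_back();
        }
        lru_.push_front(page_id);        // in a real system: issue an async SSD read here
        index_[page_id] = lru_.begin();
        return false;
    }

private:
    size_t capacity_;
    std::list<uint64_t> lru_;
    std::unordered_map<uint64_t, std::list<uint64_t>::iterator> index_;
};
```

The important property is that resident memory stays bounded by the pool size no matter how much geometry is on disk.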

8wl1rua.png


And this was not very optimised; Brian Karis, the creator of Nanite, said so himself.

Again, this is in my other post: he mentioned only one limitation for the demo, storage space.

In the future he thinks it will be possible to support transparent materials and skinned animation for characters, but it is impossible to use Nanite for vegetation and hair.

Ec4toUTU8AQIj_N


EDIT: For RT you can use a separate data structure with LODs, once that is no longer a limitation in DXR.
 
Their demo uses 'few' models as building blocks: they build structures by duplicating the same element over and over.
It's not clear to me if this will remain general practice with UE5 games, and what's the main future limit here: RAM / storage space / storage speed?

True. It's also just that we haven't really seen any of this in actual games, and we're unable to test it on differently equipped systems. There's no doubt that NVMe was a dire need for new gaming systems, but to say the GPU isn't all that much of a jump anymore.... Obviously the GPU is still going to be the most important component. That said, all components are needed, be it SSD, CPU, GDDR RAM, GPU, etc.
And there I can agree: they made the best balance possible for the price at 500 dollars. Going with, say, a 15 TF GPU but a hampered SSD would have made a less capable machine. But going with a PS4 just with the PS5's I/O subsystem.... nah :p
 
Storage space. This is virtualisation: it means streaming only the visible part of the geometry and textures.
But you still need all of the game's content on disk; that's what I meant by 'space'. They did not give any info on that, e.g. how many GB their demo uses.
I guess it's the main limitation, and the reason why UE5 games may end up with lower detail than this demo.

The 768 MB streaming pool also does not tell me much. Does it mean that's only the buffer to load and unpack stuff, or does it include the RAM needed for all the final geometry, textures, SDF volumes etc. as well?
It's likely just the former. For the rest we only know it's less than the PS5's 16 GB of RAM. So this could be a reason for the heavy instancing as well.
 
Their demo uses 'few' models as building blocks: they build structures by duplicating the same element over and over.
It's not clear to me if this will remain general practice with UE5 games, and what's the main future limit here: RAM / storage space / storage speed?

I remember when I was dying on the hill pushing for ReRAM as the secret sauce; you, sir, were the first to tell everyone about the benefit of fast transfers between storage and RAM. This is what you said to me long before UE5 was shown.

Assuming magic compression exists, the CPU could uncompress insane detail to ReRAM so it is available quickly, e.g. when the player turns the view.
Then my initial Megatexture argument would make more sense again, if we make some assumptions:
* decompression is very expensive, so we need to cache the full environment around the player, not just what's currently on screen. 10x more data.
* sub-millimetre texture resolutions everywhere. 100x100 more data?


Obviously ReRAM didn't happen. But what you were saying about sub-millimetre, megatexture-like detail as the benefit of quickly available data was proven by the UE5 demo. :cool:
 
But you still need all of the game's content on disk; that's what I meant by 'space'. They did not give any info on that, e.g. how many GB their demo uses.
I guess it's the main limitation, and the reason why UE5 games may end up with lower detail than this demo.

I remember an argument I read somewhere that one way Epic could manage the SSD space is by knowing up to which point a detail, texture, or triangle is visible to the player when playing the game. Then they only store that amount of detail and data, to save space.

For example, in the UE5 demo, there's no point storing the up-close asset detail of the cave ceiling when the character can't fly up to see the rocky ceiling up close anyway. The engine, or failing that the devs, should know which assets can never be viewed up close by the player; then they don't need to store the full asset detail, which saves space. Does that make sense?
 
But you still need all of the game's content on disk; that's what I meant by 'space'. They did not give any info on that, e.g. how many GB their demo uses.
I guess it's the main limitation, and the reason why UE5 games may end up with lower detail than this demo.

The 768 MB streaming pool also does not tell me much. Does it mean that's only the buffer to load and unpack stuff, or does it include the RAM needed for all the final geometry, textures, SDF volumes etc. as well?
It's likely just the former. For the rest we only know it's less than the PS5's 16 GB of RAM. So this could be a reason for the heavy instancing as well.

Brian Karis said that, at least for the statue, it is not reasonable to have such huge assets on disk for a game. Likewise, clever instancing will be used, because I doubt games will go above 200 GB. If you want to fit the game into a reasonable size you will need to be clever.

For the moment Brian Karis has talked only about the storage-size limitation, and he said there is no other limitation.
With 16 GB of RAM and an SSD able to do on average 10-11 GB/s with Oodle Kraken and Oodle Texture compression, I doubt storage speed is the limit, and the latency is low enough.

Honestly I expect data delivery to be one of the biggest constraints in game graphics for next gen. Virtualization tech like Nanite, VT and fast SSDs make the run-time side a nonissue.



The 768 MB streaming pool is for the Nanite geometry. You never have the final geometry in RAM for the scenery, because it runs under Nanite virtualisation; you only have the streaming pool. They have a special on-disk format for this. Textures are virtual textures too, with their own streaming pool and nothing more. You do have the final geometry for the woman character and the bat, plus the vegetation with some elements at different LODs, and if one day they succeed in extending Nanite to transparent objects and animated objects, it will probably reduce memory further, because the woman and the bat would then just live inside the Nanite streaming pool.

Ec4yGs5UMAMCvZq


"With Nanite, we don't have to bake normal maps from a high-resolution model to a low-resolution game asset; we can import the high-resolution model directly in the engine. Unreal Engine supports Virtual Texturing, which means we can texture our models with many 8K textures without overloading GPU"

I suppose they only need LOD0 for the woman, and beyond that you only have a few elements like vegetation, sand, a little water, some flags, the bat, some insects and the portal in the demo. I don't think it takes tons of RAM.

Vegetation and hair are impossible to render with Nanite.

Lumen uses voxels for distant scenery, SDFs for mid-range scenery and screen-space GI for close-range GI. You need the voxel volumes and SDF volumes in RAM.
 
you, sir, were the first to tell everyone about the benefit of fast transfers between storage and RAM.
No, that was not me. I'm just following the discussion here myself to learn what the fuss is about with SSDs in games. I lack experience with open-world and streaming tech.

The megatexture example was brought up because it's largely about those things. Interestingly, UE5 did the opposite: achieving high detail through instancing, so its demands on storage and bandwidth are limited, at the cost of variety.
It's unthinkable that we could get another Rage game, without any duplication of content, but at UE5 quality levels.

I remember an argument I read somewhere that one way Epic could manage the SSD space is by knowing up to which point a detail, texture, or triangle is visible to the player when playing the game. Then they only store that amount of detail and data, to save space.

For example, in the UE5 demo, there's no point storing the up-close asset detail of the cave ceiling when the character can't fly up to see the rocky ceiling up close anyway. The engine, or failing that the devs, should know which assets can never be viewed up close by the player; then they don't need to store the full asset detail, which saves space. Does that make sense?
Yes, this makes a lot of sense. Likely any game can at least define a bounding box the player will never leave, so mountains in the background do not need high LODs on disk.
But I think UE5 cannot do whole mountains with millimetre detail at all. Instead they compose mountains from some Quixel scans of cliffs. That works well for table mountains like in the demo, but will have problems replicating things like erosion flow at all scales, as in my native Alps.
For the background mountains they seemingly use an upscaled rock, which in reality was probably much smaller than a mountain. That works pretty well because nature often behaves fractally.

I predict we'll soon see a tool that takes a heightmap and procedurally rebuilds it by splatting Quixel models onto its surface :D
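For fun, a minimal sketch of what the core of such a tool might look like (RockInstance, SampleHeight, SplatRocks and all constants are hypothetical names, purely for illustration): sample the heightmap to place each scanned-rock instance on the terrain surface, with randomised scale and rotation to hide the repetition.

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Purely illustrative sketch of scattering scanned-rock instances over a heightmap.
struct RockInstance { float x, y, z, scale, yaw; };

float SampleHeight(const std::vector<float>& heightmap, int size, float u, float v) {
    int xi = static_cast<int>(u * (size - 1));
    int yi = static_cast<int>(v * (size - 1));
    return heightmap[yi * size + xi];  // nearest-neighbour lookup, no filtering
}

std::vector<RockInstance> SplatRocks(const std::vector<float>& heightmap, int size,
                                     float world_extent, int count, uint32_t seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> uv(0.0f, 1.0f);
    std::uniform_real_distribution<float> scale(0.5f, 4.0f);   // wide scale range hides repetition
    std::uniform_real_distribution<float> yaw(0.0f, 6.2831853f);

    std::vector<RockInstance> rocks;
    rocks.reserve(count);
    for (int i = 0; i < count; ++i) {
        float u = uv(rng), v = uv(rng);
        RockInstance r;
        r.x = u * world_extent;
        r.y = v * world_extent;
        r.z = SampleHeight(heightmap, size, u, v);  // sit the rock on the terrain surface
        r.scale = scale(rng);
        r.yaw = yaw(rng);
        rocks.push_back(r);
    }
    return rocks;
}
```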
 
The 768 MB streaming pool is for the Nanite geometry. You never have the final geometry in RAM for the scenery, because it runs under Nanite virtualisation; you only have the streaming pool.
Yeah, but that's not what I meant, again. Of course they keep their geometry in their native renderable format, which differs from traditional mesh data structures. Let's call their format an octree for a moment. Then we have roughly two options:
1. Compressed octree from SSD to the 768 MB buffer. Decompression (possibly a background task, with latency). Decompressed octree from the buffer to (V)RAM, kept there as long as needed.
2. Octree from SSD to the 768 MB buffer, which is large enough for the whole visible scene at any moment during the demo.

You seemingly assume the second option? I don't. Notice they also talked about compression on disk being 'better / more aggressive'.
 
Lumen uses voxels for distant scenery, SDFs for mid-range scenery and screen-space GI for close-range GI. You need the voxel volumes and SDF volumes in RAM.
For UE4 they already merge per-model SDFs into a global static cascaded volume. I guess they keep this and do not really use 'voxels', because SDFs are faster to trace. On the other hand, merging SDFs is more expensive than merging density, so I could be wrong.

I tried SDF shadows recently with UE4. Very nice results. It's insane to me what brute-force work current GPUs can do.
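For anyone unfamiliar with why distance fields are so cheap to trace, here is a minimal, textbook sphere-tracing shadow sketch (a generic version, not UE4's distance-field shadow code): at every step the SDF tells you the radius of empty space around the ray, so the ray can jump that far safely and skips empty regions in a handful of iterations.

```cpp
#include <cmath>

// Generic sphere tracing against a signed distance field (textbook version,
// not UE4's implementation). Returns ~1.0 if the point is lit; lower values
// give soft, penumbra-like darkening near occluders.
struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Example SDF: a unit sphere at the origin. A real scene would sample a
// cascaded distance-field volume texture instead.
static float SceneSdf(Vec3 p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

float SdfShadow(Vec3 origin, Vec3 dir_to_light, float max_dist, float softness) {
    float shadow = 1.0f;
    float t = 0.02f;                       // small offset to avoid self-intersection
    for (int i = 0; i < 64 && t < max_dist; ++i) {
        float d = SceneSdf(add(origin, mul(dir_to_light, t)));
        if (d < 1e-4f) return 0.0f;        // ray hit an occluder: fully shadowed
        shadow = std::fmin(shadow, softness * d / t);  // near misses darken softly
        t += d;                            // safe step: nothing is closer than d
    }
    return shadow;
}
```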
 
I remember an argument I read somewhere that one way Epic could manage the SSD space is by knowing up to which point a detail, texture, or triangle is visible to the player when playing the game. Then they only store that amount of detail and data, to save space.

For example, in the UE5 demo, there's no point storing the up-close asset detail of the cave ceiling when the character can't fly up to see the rocky ceiling up close anyway. The engine, or failing that the devs, should know which assets can never be viewed up close by the player; then they don't need to store the full asset detail, which saves space. Does that make sense?
Rage did this for their textures.

Basically a check from the volume where the player camera can go, or where it usually goes, to each surface, limiting the resolution to what would be visible at 720p or similar.

So yes, for UE5 it should be quite feasible to do something similar for objects.
Track the closest the camera can get to an object type's bounding volume and, when exporting the game data, trim to the maximum quality actually needed.

That should help in cases where some random pencil in the background uses an 8K texture and a few million polygons.
And when it fails we get FF7r doors and jars.
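A minimal sketch of that export-time trimming idea (the pinhole-camera estimate and all constants are my own assumptions, not any shipping pipeline): given the closest distance the camera can ever get to an asset, estimate how many texels per metre the screen could actually resolve and drop every mip finer than that.

```cpp
#include <algorithm>
#include <cmath>

// Export-time mip trimming sketch (assumed constants, not from any shipping tool).
// Given the closest possible camera distance to an asset, estimate how many
// texels per metre the screen can actually resolve and keep only the mips
// at or below that density.
int HighestUsefulMip(float texture_size_px,      // e.g. 8192 for an 8K texture
                     float texture_world_size_m, // metres the texture spans on the asset
                     float min_camera_dist_m,    // closest the player camera can ever get
                     float vertical_fov_rad,     // e.g. ~1.0 rad
                     float screen_height_px) {   // e.g. 2160 for 4K output
    // Pixels per metre at the closest viewing distance (pinhole camera model).
    float px_per_m = screen_height_px /
                     (2.0f * min_camera_dist_m * std::tan(vertical_fov_rad * 0.5f));
    float texels_per_m = texture_size_px / texture_world_size_m;

    // Each mip halves texel density; drop mips whose density exceeds what is visible.
    float excess = texels_per_m / px_per_m;
    return static_cast<int>(std::floor(std::log2(std::max(excess, 1.0f))));
    // 0 = keep full resolution, 1 = drop the top mip, and so on.
}

// Example: an 8K texture on a 2 m prop that the camera can never approach
// closer than 10 m, rendered at 4K with ~57 degrees vertical FOV, only ever
// needs mip 4 and below.
```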
 
For UE4 they already merge per-model SDFs into a global static cascaded volume. I guess they keep this and do not really use 'voxels', because SDFs are faster to trace. On the other hand, merging SDFs is more expensive than merging density, so I could be wrong.

I tried SDF shadows recently with UE4. Very nice results. It's insane to me what brute-force work current GPUs can do.

For Lumen, that comes from the devs' explanation:


https://www.digitalfoundry.net/2020...-on-ps5-epics-next-gen-leap-examined-in-depth
 
With an HDD and no virtualised geometry, this would be impossible to do. No available GPU can render this scene exactly as shown with traditional methods, not even a 3090. It would mean tons of memory and a GPU running at very low utilisation because of the huge number of polygons. And without continuous LOD like in Nanite, you need multiple LODs of each asset.

optimizing-the-graphics-pipeline-with-compute-gdc-2016-51-638.jpg


1 polygon per pixel means only 6.25% GPU shading efficiency; this comes from the DICE presentation "Optimizing the Graphics Pipeline with Compute". If you want to render the same scene more efficiently without Nanite, you need to use more normal maps (which don't cast shadows), lose detail, and fall back on messy manual LODs. It would not have the same look at all and it would mean more work for the artists. The other choice is to keep all the assets in RAM with a compute rasterizer, but in the end that is probably impossible for a full game, even with reduced complexity, because every time you change biome or landscape you would need to refill the whole memory.


bastian-hoppe-artstation-bh-05.jpg


bastian-hoppe-artstation-bh-07.jpg


You need everything: an NVMe SSD is now a must have, and DirectStorage will be a must have. The biggest gap between the two generations of consoles is the SSD and I/O system, and it is essential.

Without an SSD this level of geometry is impossible.

EDIT: Imagine that for each level you need to fill 64 GB of data. With an HDD that is about 10 minutes of loading time without compression, maybe 5 minutes with compression; around 2 minutes with a SATA SSD, or 1 minute with compression. It is 11 seconds with the PS5 SSD without compression and 5.5 seconds with compression. And there would be no possibility for the game designer to do a portal with high-end graphics, no Doctor Strange game, or, like the rumour, Doctor Strange in Spider-Man 2.

No-one's suggesting that SSDs won't be important to next gen, or that they don't allow new approaches to real-time rendering. And yes, I'm aware that Nanite requires an SSD (I already stated that in an earlier post). However, none of that means the SSD is more important than a modern, powerful GPU, or that "next-gen level visuals" are impossible without one, or conversely, that an SSD enables next-gen visuals without a modern, powerful GPU.

But yes, there will be certain things that are impossible without an SSD, particularly if VRAM/RAM constrained.
 