Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

Did I say that GPU power doesn't count? I just said that without an SSD it is impossible to have this level of detail, and later on it will be impossible to fully reach this level because of storage size.
But that's not true. Nvidia's Marbles demo proved that other graphics engines can easily use pre-tessellated static geometry without the need for an SSD.

Small hint: the demo renders at 30 fps because of Lumen, not Nanite. And they are very confident they can reach 60 fps, but if the PS5 GPU were more powerful they would already be at 60 fps without optimization, or at 60 fps with a higher resolution.

Yeah, that's why I say the GPU will reach its limit before the SSD does.

Lumen is still using the old, tried-and-true pre-baked lighting setup, or a derivative of it. It's not using the accuracy of RT, and it shows in the demo. The theme of this new generation is RT. The transformation in lighting is way more important IMO than static, highly detailed objects that can't interact with the scene. I do like not having to bake out normal maps, though. Thankfully, UE can do Lumen or the more accurate RT lighting.

As an aside, this UE5 demo tech isn't going to show up on any of the Sony 1st party games. The other developers will have to make their own and it could be better or worse. I'm told that all the 1st party studios are making their own.
 
But that's not true. Nvidia's Marbles demo proved that other graphics engines can easily use pre-tessellated static geometry without the need for an SSD.



Yeah, that's why I say the GPU will reach its limit before the SSD does.

Lumen is still using the old, tried-and-true pre-baked lighting setup, or a derivative of it. It's not using the accuracy of RT, and it shows in the demo. The theme of this new generation is RT. The transformation in lighting is way more important IMO than static, highly detailed objects that can't interact with the scene. I do like not having to bake out normal maps, though. Thankfully, UE can do Lumen or the more accurate RT lighting.

As an aside, this UE5 demo tech isn't going to show up on any of the Sony 1st party games. The other developers will have to make their own and it could be better or worse. I'm told that all the 1st party studios are making their own.

You're comparing the Marbles demo to the scale of this demo; a bit of seriousness, please. Again, Nanite is new; they have ideas for skinned animation and transparent materials. The only case where they know it would not work is aggregates (hair, grass, and leaves), where you need sub-pixel precision with multiple polygons per pixel.

A very funny thing will be the moment someone tries to ray trace this level of geometric detail; I don't think current GPUs are able to do it at 30 fps or more, not even a 3090.

Did I talk about Sony first party? The only Sony studio to use Unreal Engine is Sony Bend, but that has nothing to do with the subject.

[attached image: Ec4toUTU8AQIj_N]


And for reference, for Lumen:
[attached image: Ec4t3ycU4AAIZqq]


[attached image: Ec4t67iVcAIK62c]


There is no baking at all. It uses voxel cone tracing for distant objects, SDF GI for mid-range objects, and screen-space GI near the player.
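The split described above can be sketched as a distance-based fallback chain. This is my own illustrative sketch; the 2 m / 20 m thresholds are made-up numbers, not Lumen's actual transition distances, which are internal to the engine:

```python
def pick_gi_method(distance_m, screen_space_max=2.0, sdf_max=20.0):
    """Choose which GI technique handles a sample at a given camera distance.

    The threshold values are illustrative assumptions only.
    """
    if distance_m <= screen_space_max:
        return "screen-space GI"     # near the player: reuse on-screen data
    if distance_m <= sdf_max:
        return "SDF GI"              # mid-range: trace signed distance fields
    return "voxel cone tracing"      # distant objects: coarse voxelized scene
```

The design point is that each technique trades accuracy for cost: the cheap, accurate screen-space data only exists near the camera, so coarser world-space structures take over with distance.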
 
You're comparing the Marbles demo to the scale of this demo; a bit of seriousness, please.

The Marbles demo is definitely doing more processing than the UE5 demo. You're talking about path tracing with every single BRDF using importance sampling. The UE5 demo is a display of streaming baked data into a scene fast, to test SSD->VRAM speed.

So don't think that on a per-frame basis the UE5 demo is more demanding than the Marbles demo.
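For readers unfamiliar with the term: importance sampling means drawing samples from a pdf shaped like the integrand. A toy example of my own (not the Marbles renderer) integrating cos(theta) over the hemisphere, the term that appears in every rendering-equation estimate:

```python
import math
import random

def mc_hemisphere_cosine_integral(n=50000, seed=7):
    """Integrate cos(theta) over the hemisphere two ways (analytic answer: pi).

    Uniform sampling (pdf = 1/(2*pi)) is noisy; cosine-weighted importance
    sampling (pdf = cos(theta)/pi) matches the integrand, so every sample
    contributes exactly pi -- zero variance for this toy integrand.
    """
    rng = random.Random(seed)
    uniform_sum = 0.0
    for _ in range(n):
        cos_t = rng.random()                  # uniform solid angle => cos ~ U[0,1]
        uniform_sum += cos_t * 2.0 * math.pi  # f(x) / pdf(x)
    uniform_est = uniform_sum / n
    importance_est = math.pi                  # f/pdf = cos * (pi/cos) = pi, always
    return uniform_est, importance_est
```

Real BRDF importance sampling is the same trick with a pdf matched to each material's lobe, which is why it is expensive but converges fast.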

Again, Nanite is new; they have ideas for skinned animation and transparent materials. The only case where they know it would not work is aggregates (hair, grass, and leaves), where you need sub-pixel precision with multiple polygons per pixel.

Only?? Those cases are HUGE and affect visual quality in a number of scenes. How often do you see foliage in a game? Nearly 100% of the time.

A very funny thing will be the moment someone tries to ray trace this level of geometric detail; I don't think current GPUs are able to do it at 30 fps or more, not even a 3090.

Correct.

And for reference, for Lumen:
[attached image: Ec4t3ycU4AAIZqq]


[attached image: Ec4t67iVcAIK62c]


There is no baking at all. It uses voxel cone tracing for distant objects, SDF GI for mid-range objects, and screen-space GI near the player.

Voxel cone tracing isn't new. The mere fact that you have to have voxels means it's an organized data structure where accuracy depends on the number of voxels for a reasonable approximation. A ray has infinite precision. They just aren't comparable. Also screenspace is what we are trying to get away from this gen. RT doesn't have the limitations of screenspace rendering.
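The "a ray has infinite precision" point can be made concrete: a voxel grid can only localize anything to the center of a cell, so its error floor is half a cell and shrinks only by adding voxels. A minimal sketch of my own:

```python
import math

def voxel_localize(x, voxel_size):
    """A voxelized representation snaps any position to its cell center."""
    return (math.floor(x / voxel_size) + 0.5) * voxel_size

def worst_case_error(voxel_size):
    """The error floor is half a cell; an analytic ray hit has no such floor."""
    return voxel_size / 2.0
```

Halving the voxel size halves the worst-case error but multiplies memory by 8x in 3D, which is exactly the accuracy-vs-storage trade-off the post describes.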
 
Aha, so GPUs still have their importance; SSD/IO tech is just complementing them, as that tech has seriously lagged behind (or rather lacked adoption).

Your first sentence is exactly what we've been saying all along here. I guess we are finally in agreement, then.
 
The Marbles demo is definitely doing more processing than the UE5 demo. You're talking about path tracing with every single BRDF using importance sampling. The UE5 demo is a display of streaming baked data into a scene fast, to test SSD->VRAM speed.

So don't think that on a per-frame basis the UE5 demo is more demanding than the Marbles demo.



Only?? Those cases are HUGE and affect visual quality in a number of scenes. How often do you see foliage in a game? Nearly 100% of the time.



Correct.



Voxel cone tracing isn't new. The mere fact that you have to have voxels means it's an organized data structure where accuracy depends on the number of voxels for a reasonable approximation. A ray has infinite precision. They just aren't comparable. Also screenspace is what we are trying to get away from this gen. RT doesn't have the limitations of screenspace rendering.

At least we agree on something; I know what screen space is. This is a compromise. Here they are rendering a high level of detail, and ray-tracing performance is not good enough on current GPUs.
 
As an aside, this UE5 demo tech isn't going to show up on any of the Sony 1st party games. The other developers will have to make their own and it could be better or worse. I'm told that all the 1st party studios are making their own.
Bend Studio used UE in Days Gone, so it's possible they will use UE5 in their new project.
 
Voxel cone tracing isn't new. The mere fact that you have to have voxels means it's an organized data structure where accuracy depends on the number of voxels for a reasonable approximation. A ray has infinite precision. They just aren't comparable. Also screenspace is what we are trying to get away from this gen. RT doesn't have the limitations of screenspace rendering.
Pretty sure that is the exact reason why they have the SDF tracing in between the two. (I wonder if it's in object or world space.)
Would be amazing to finally get more detailed information on both Nanite and Lumen, currently we just do not have much.

Will be very interesting to see what the improvements for future Nanite will be as well.
High-quality tessellation would certainly be quite nice.. (well.. DX11-level tessellation is pretty useless, so anything would be an improvement.)
 
That's a confusing statement, since the PS5 has a GPU. Are you saying no PC GPU will render a trillion visible triangles at 60 fps, but the city-scene vista (which clearly has a background layer that's unplayable) is rendering a trillion triangles? The GPU renders triangles to the screen, not the SSD.
Sorry for the confusion. I was saying no GPU now (or in 10 years) will be able to render a trillion tris a frame at 60 fps, so even if the scene does contain a trillion tris, you will have to find some way to reduce this number by orders of magnitude (visibility culling, LOD, etc.) before sending it to be rendered.
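To put a number on "orders of magnitude": at roughly one rendered triangle per pixel, the per-frame budget is just the pixel count, so the required reduction is simple arithmetic. A back-of-the-envelope sketch:

```python
def required_reduction(scene_tris, width=3840, height=2160, tris_per_pixel=1):
    """Per-frame triangle budget at ~1 triangle per pixel, and the factor
    by which culling/LOD must shrink the scene to fit it."""
    budget = width * height * tris_per_pixel
    return budget, scene_tris / budget

# a "trillion-triangle" scene at 4K: budget is ~8.3M triangles,
# so the scene must be reduced by a factor of roughly 120,000
budget, factor = required_reduction(10**12)
```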
Just watched this last night, most impressive gfx in existence ATM.
Perhaps not a trillion tris, but probably in the billions. I can see a lot of scenes where the visible tri count is in 8 figures (LOD0); I assume they are rendering 8 figures since I see nary an edge. Now, I assume the world is 100s of times bigger = a shit-ton of triangles in the world.
 
Let's not pollute this thread like we did the R&C one. ;)

Really disliked the Nanite presentation for boasting LOD0 / source triangle counts, as it's a meaningless metric as soon as any LOD is used.
So unless you actually use LOD0 for everything, do not count on it.

With proper continuous LOD for opaque objects, the way to billions of polygons will most likely come from complex surfaces like hair, moss, vines, trees, etc.,
which will need an alternative rendering method to be performant if we want high quality (something that handles possibly thousands of polygons per pixel).
 
R&C is far from pushing billions of triangles on screen. It's a lot more detailed than its last-gen iteration, but still uses traditional rendering. The UE5 demo shows a lot more detail on screen, worth billions of polygons in the look, but not actually rendering all of them.
 
Let's not pollute this thread like we did the R&C one. ;)

Really disliked the Nanite presentation for boasting LOD0 / source triangle counts, as it's a meaningless metric as soon as any LOD is used.
So unless you actually use LOD0 for everything, do not count on it.

With proper continuous LOD for opaque objects, the way to billions of polygons will most likely come from complex surfaces like hair, moss, vines, trees, etc.,
which will need an alternative rendering method to be performant if we want high quality (something that handles possibly thousands of polygons per pixel).

+1

The most interesting parts of Nanite are the authoring side and the continuous LOD; the polygon density from the compute rasterizer, where they say you can reach 1 polygon per pixel efficiently, is interesting.

For hair, DICE has an interesting solution, again using compute rasterization of lines and analytical AA.
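A rough sketch of the idea behind continuous LOD driving edges toward one pixel: project the LOD0 edge length onto the screen, then coarsen one level for each halving below a pixel. The FOV, edge length, and power-of-two scheme here are my own simplifying assumptions, not Nanite's actual cluster logic:

```python
import math

def projected_edge_px(edge_m, distance_m, fov_deg=90.0, screen_px=2160):
    """Approximate on-screen size (in pixels) of a world-space edge."""
    frustum_h = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return edge_m / frustum_h * screen_px

def pick_lod(edge_m, distance_m, fov_deg=90.0, screen_px=2160):
    """Coarsen one power-of-two level per halving below 1 projected pixel."""
    px = projected_edge_px(edge_m, distance_m, fov_deg, screen_px)
    if px >= 1.0:
        return 0   # LOD0: edges already cover at least a pixel
    return int(math.floor(math.log2(1.0 / px)))
```

The authoring win follows directly: if the renderer always picks detail from screen-space error, artists never have to hand-build discrete LOD chains.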
 
+1

The most interesting parts of Nanite are the authoring side and the continuous LOD; the polygon density from the compute rasterizer, where they say you can reach 1 polygon per pixel efficiently, is interesting.

For hair, DICE has an interesting solution, again using compute rasterization of lines and analytical AA.
The promise of importing source-quality objects into the editor and bypassing all sorts of baking from high to low poly is huge.
We can finally just bake at source level. ;)

Also, if it really is as easy as selecting an object and pressing convert, we should see games using it quite soon.

If Nanite's culling efficiency is as good as they say and we can get dithered transparency with it, it would be absolutely amazing for games like UFO.
Create art at decent quality and create maps with pieces as usual, while not worrying too much about occlusion, etc.

It could very well be a lot faster to author, and rendering could be quite a bit faster than it would be with the old HW rasterizer pipeline.
 
Sorry for the confusion. I was saying no GPU now (or in 10 years) will be able to render a trillion tris a frame at 60 fps, so even if the scene does contain a trillion tris, you will have to find some way to reduce this number by orders of magnitude (visibility culling, LOD, etc.) before sending it to be rendered.
Just watched this last night, most impressive gfx in existence ATM.
Perhaps not a trillion tris, but probably in the billions. I can see a lot of scenes where the visible tri count is in 8 figures (LOD0); I assume they are rendering 8 figures since I see nary an edge. Now, I assume the world is 100s of times bigger = a shit-ton of triangles in the world.

It's nowhere near a billion triangles. I took this screenshot just a bit ago.

[attached screenshot: 5JNsZHJ.png]
 
It's nowhere near a billion triangles. I took this screenshot just a bit ago.
No idea; we will have to wait for a tech paper, I suppose. Note I'm not claiming there is a billion triangles on the screen; I'm saying the world probably consists of at least a billion tris at LOD0. E.g., perhaps that grass tuft is 2,000 tris; now, if there are tens of thousands of them in the world, well, that's a lot right there. It rapidly adds up. The world in my personal game has ~30 million tris, and it's nowhere near the level of what I've seen in R&C, not even in the same ballpark.
 
This actually gets to the heart of my point.

"That lets us devote all of our system memory to the stuff in front of you right now,"

So if you believe what's being said there then they're using the full 16GB to render the current viewport only. Therefore with 32GB, they could do even more in that viewport.

Of course I don't take what's being said there at face value though given that would pretty much result in you consuming the entire game content by simply turning 360 degrees!

Let me give another scenario. If R&C truly is able to load data behind the character from the SSD in real time, then presumably turning speed must be limited in line with streaming capacity. This is similar to how last-gen games limit traversal speed in line with last-gen streaming capabilities. So you couldn't, for example, have a "look backwards" button, or enable mouse control. However, with 32GB of memory you could keep the data behind as well as in front of the character in memory, thus allowing the above scenarios, because there's no dependency on the SSD to load that data in when the character turns around/looks behind them. This is just one pretty basic example, but I'm sure good developers could come up with lots of scenarios where it's better to have more RAM than ultra-fast streaming speeds.

In fact, I'd bet that ultra-fast in-game streaming won't play a particularly big part in many games this generation because of the storage limitations. I've said it countless times before on this forum, but you're not going to be regularly streaming at the PS5's maximum IO rate, because if you were, you'd be consuming your entire game content in a matter of seconds. There will of course be in-game scenarios where you want the SSD to be maximising its transfer speeds, but unless we expect every game this generation to use Rift-like portal mechanics, I expect them to be quite a bit rarer than many here are assuming.
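The "consuming your entire game content in a matter of seconds" claim is simple arithmetic against the PS5's publicly quoted ~5.5 GB/s raw read rate (compressed throughput is higher, so this is conservative):

```python
def seconds_to_stream(game_size_gb, io_rate_gb_per_s=5.5):
    """Time for sustained max-rate streaming to read an entire game's data."""
    return game_size_gb / io_rate_gb_per_s

# a 100 GB game would be fully read in ~18 seconds of peak streaming,
# which is why peak IO is a burst tool rather than a steady state
```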
I think it's a poor example to use the 'as the player turns' comment - but it helps people easily picture how fast the data loads. Now, let's consider a random fast travel in a game to a new part of the map, or to another planet, or beaming down from a ship to a planet, or, in a massive open world, a super zoom to extreme detail miles away (etc.). The speed will be directly limited by the data transfer rate.

If Sony and MS hadn't put all this effort into their IO systems, then all games would have to be designed around slower data streaming, and that expensive extra memory would take even longer to fill up, which would mean game-design scenarios similar to those we had last gen.

Gotcha. Now I understand. Thanks for clearing that up.

I think the main draw this generation will be rendering power, though. RT is much more coveted than SSD->VRAM in my opinion. We've got several games using RT as opposed to 2 using the SSD.
I’d say all games are using SSD ;)
 