Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

In that case Epic lied in the interview with Geoff Keighley, where they stated that the capture was taken directly from the output of the PS5 devkit and that they could control the character with the PS controller.

"Press X to gameplay" :D

Demo looked really, really nice for the most part, but I know it's a linear, limited-interactivity, single-character demo.

When we do start seeing games, they're going to have to have things like traversable environments (for AIs as well as human players), cover areas, spawn points positioned for gameplay purposes, and far, far less predictable stress points (many enemies + AI + multiple players + grenades and special weapons), and so on.

I'm sure the engine will still be leading edge and produce amazing visuals, but when actual games land, the gameplay bits will often have to look rather more ... game-like.
 
Many PS3 games culled non-contributing polygons that didn't hit sample points.

Yes, but I specifically remember a talk where they showed a scene in which the number of triangles matched the number of pixels almost one to one as a result of their rendering optimizations, which was something like 483,000 triangles for 1280x720 (921,600 pixels). I remember it so well because it was the first time I realized that we were pretty close to the moment where we could have as much geometry detail as we have resolution.
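For what it's worth, a quick check of that arithmetic (the triangle count is from memory, so treat it as approximate):

// Sanity check of the triangle-to-pixel ratio recalled above.
#include <cstdio>

int main() {
    const long pixels    = 1280L * 720L; // 921,600 pixels at 720p
    const long triangles = 483000L;      // drawn triangles, as remembered
    printf("pixels: %ld, triangles: %ld, ratio: %.2f tri/px\n",
           pixels, triangles, (double)triangles / pixels);
    // -> about 0.52 triangles per pixel at 720p
    return 0;
}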

I did find this quote:

“Our characters they typically have eighty thousand polygons in you know, one character, like the main character. In total- what we try to push through to the graphics chip at one point two million triangles that we try to draw every frame. We do a lot using the cell processor.”

And this, which refers to the making-of video that is linked to in the article: https://www.eurogamer.net/articles/uncharted-2-mastering-the-cell-blog-entry

Anyway, amazing that Uncharted 2 will soon be two generations of hardware ago.

But this UE5 demo certainly restores hope that the PS5 will show a generational leap ... !
 
This looks dated now!

Usually the tech demo of one generation is the quality we see in games a generation afterwards.
Really, no game fully matched this one this generation.
No game fully matched Final Fantasy 7's tech demo on PS3 during its own generation.
No game fully matched the Final Fantasy 8 tech demo on PS2.

So I hope what we saw on Unreal will change this and will fully represent the quality we will get on PS5.
 
Usually the tech demo of one generation is the quality we see in games a generation afterwards.
Really, no game fully matched this one this generation.
No game fully matched Final Fantasy 7's tech demo on PS3 during its own generation.
No game fully matched the Final Fantasy 8 tech demo on PS2.

So I hope what we saw on Unreal will change this and will fully represent the quality we will get on PS5.
But in this context, many games surpassed the UE4 demo we saw at the PS4 launch, and not even at the end of the generation but relatively soon after.
 
I think you are wrong.
That Luminous demo has been surpassed; the only nice thing that might have been hard to do is the clothes, everything else has been matched or bested.
The FF8 tech demo was worse looking than Silent Hill 2 and 3.
The FF7 PS3 tech demo, well, I don't know, was it even running on a PS3? Still, FFXIII came at least pretty damn close.


 
Imo, this generation wasn't that impressive graphically, mainly due to hardware not advancing as before. Now we have ray tracing and DLSS-like tech, though, and powerful CPUs for a console.
 
I mean, there was literally the equivalent of 16 billion polygons just for the statues in that demo. Even if you cut that to a third to make room for more game logic, characters and whatnot, we're still left with a few billion per frame; no current-gen console game could even reach a tenth of that.
 
I mean, there was literally the equivalent of 16 billion polygons just for the statues in that demo. Even if you cut that to a third to make room for more game logic, characters and whatnot, we're still left with a few billion per frame; no current-gen console game could even reach a tenth of that.
Minor correction: those are the source assets. There are only about 20M triangles in the frame, which they cull down to 3.7M (or the same number of triangles as there are pixels) to render out. So 4K resolution would contain about 8.3M triangles, for instance.
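For reference, the pixel counts behind those one-triangle-per-pixel budgets (the demo ran at roughly 1440p):

// Pixel counts at each resolution, matching the triangle budgets above.
#include <cstdio>

int main() {
    printf("1440p: %d pixels\n", 2560 * 1440); // 3,686,400 (~3.7M)
    printf("4K:    %d pixels\n", 3840 * 2160); // 8,294,400 (~8.3M)
    return 0;
}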
 
Looks like the inspiration for this started with geometry images, then Carmack's sparse voxel octrees, and then moved on to other things. Progressive view-dependent meshes were also a topic of research. Maybe, based on distance to the camera, they can select different groupings of vertices so the polygons are roughly scaled to one pixel: basically dynamic LOD scaling (down) in a way that makes it simple and fast to reject vertices.
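A hypothetical sketch of that distance-based selection; selectLod and all its parameters and numbers are invented for illustration, not anything Epic has described:

// Pick how many times the finest LOD can be coarsened (each step
// doubling triangle edge length) while edges still project to >= 1 px.
#include <algorithm>
#include <cmath>
#include <cstdio>

int selectLod(float finestEdgeWorld, // LOD 0 triangle edge length (world units)
              float distance,        // camera-to-object distance
              float focalPx,         // focal length expressed in pixels
              int   maxLod)
{
    float projectedPx = finestEdgeWorld * focalPx / distance;
    if (projectedPx >= 1.0f)
        return 0; // finest LOD already at or above one pixel per triangle
    int coarsen = (int)std::ceil(std::log2(1.0f / projectedPx));
    return std::min(coarsen, maxLod);
}

int main() {
    // The same rock near and far: closer means a finer LOD level.
    printf("near: LOD %d\n", selectLod(0.001f,  1.0f, 2000.0f, 8)); // LOD 0
    printf("far:  LOD %d\n", selectLod(0.001f, 50.0f, 2000.0f, 8)); // LOD 5
    return 0;
}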
 
Minor correction: those are the source assets. There are only about 20M triangles in the frame, which they cull down to 3.7M (or the same number of triangles as there are pixels) to render out. So 4K resolution would contain about 8.3M triangles, for instance.
That's why I said equivalent. Because it still looks like it. And no current-gen game can touch the level of detail seen here.
 
Minor correction: those are the source assets. There are only about 20M triangles in the frame, which they cull down to 3.7M (or the same number of triangles as there are pixels) to render out. So 4K resolution would contain about 8.3M triangles, for instance.
So now I think I get it.
So basically we are looking at a max (or fixed?) 8.3M triangles in any given frame (assuming 4K res), with each triangle's minimum size at pixel level. Everything that is not visible is 100% culled, hence the REYES reference?
It is like a point cloud, only represented by micropolygons instead of data points, dictated by some new type of vertex shader?
There should be some kind of next-gen alternative to tessellation too, I suppose. Because I can imagine a rock taking an assumed 1 million polygons at pixel level at distance X, while the same rock at a closer distance Y (let's say 10 times closer) should take 10 times the polygons at pixel level to retain the detail. So what exactly happens with those polygons as we get closer and closer to a hyper-detailed object?
 
So now I think I get it.
So basically we are looking at a max (or fixed?) 8.3M triangles in any given frame (assuming 4K res), with each triangle's minimum size at pixel level. Everything that is not visible is 100% culled, hence the REYES reference?
It is like a point cloud, only represented by micropolygons instead of data points, dictated by some new type of vertex shader?
There should be some kind of next-gen alternative to tessellation too, I suppose, because I can imagine a rock taking an assumed 1 million polygons at pixel level at distance X; at a closer distance Y (let's say 10 times closer), it should take 10 times the polygons at pixel level to retain the detail?
You got the right idea. It is dictated by a compute shader or mesh shader; the vertex shader feeds the fixed-function pipeline, which chokes on small triangles. The 10x polygon requirement is slightly off, though. 4K resolution is 3840x2160 px.

But:
4K textures are 4096x4096 px,
8K textures are 8192x8192 px,
and
16K textures are 16384x16384 px.

They are powers of 2.
So if you're wrapping 16K textures around a mesh, chances are fairly reasonable that even close up, 4K output may not be sufficient to bring it to a 1:1 texel-to-pixel ratio. I guess it depends on how large that object is.
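Rough texel-per-pixel arithmetic for those sizes, under the simplifying assumption that the whole texture spans the screen width exactly once:

// How many texels land on each screen pixel at 4K output?
#include <cstdio>

int main() {
    const int screenW = 3840;                   // 4K display width
    const int texSizes[] = {4096, 8192, 16384}; // "4K", "8K", "16K" textures
    for (int t : texSizes) {
        printf("%5d texels across %d px -> %.2f texels/pixel\n",
               t, screenW, (double)t / screenW);
    }
    // A 16K texture stays above 1:1 even magnified ~4x past full screen.
    return 0;
}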
 
You got the right idea. It is dictated by a compute shader or mesh shader; the vertex shader feeds the fixed-function pipeline, which chokes on small triangles. The 10x polygon requirement is slightly off, though. 4K resolution is 3840x2160 px.

But:
4K textures are 4096x4096 px,
8K textures are 8192x8192 px,
and
16K textures are 16384x16384 px.

They are powers of 2.
So if you're wrapping 16K textures around a mesh, chances are fairly reasonable that even close up, 4K output may not be sufficient to bring it to a 1:1 texel-to-pixel ratio. I guess it depends on how large that object is.
Ah yes, I wasn't thinking when I did the multiplication. But yes, I understand what you are saying :)
Thanks.
 
So now I think I get it.
So basically we are looking at a max (or fixed?) 8.3M triangles in any given frame (assuming 4K res), with each triangle's minimum size at pixel level. Everything that is not visible is 100% culled, hence the REYES reference?
It is like a point cloud, only represented by micropolygons instead of data points, dictated by some new type of vertex shader?
There should be some kind of next-gen alternative to tessellation too, I suppose. Because I can imagine a rock taking an assumed 1 million polygons at pixel level at distance X, while the same rock at a closer distance Y (let's say 10 times closer) should take 10 times the polygons at pixel level to retain the detail. So what exactly happens with those polygons as we get closer and closer to a hyper-detailed object?

It's possible they're doing subdivision when the object gets closer to the camera, but as I understand it, even small objects have enough polygons that you'd never practically be close enough to start seeing the shape of the polygons.

It sounds like they're just writing general compute shaders that do all of this, instead of taking advantage of primitive shaders, mesh shaders, or vertex shaders: some way of testing and rejecting polygons without using the fixed raster hardware. The "virtual" geometry part is interesting because it suggests caching subsets of meshes for rendering in RAM instead of loading entire meshes into RAM before culling, analogous to virtual texturing.

Since they're not using the fixed-function raster hardware on the GPU, and they're testing visibility in compute, they'd either have to ray cast or ray march. There's no way they can efficiently ray cast or ray march against hundreds of millions of polygons, so they must have some acceleration structure that makes it fast. Maybe ray marching an SDF representation first, to figure out which parts of the models to cull, and only loading from disk the parts that are needed. Maybe, based on distance to the camera, they load only particular vertices so polygons are no smaller than a pixel. Perhaps the models are stored on disk in a hierarchical fashion, so instead of having multiple LODs you have one LOD from which you can select pieces/pages/chunks/meshlets, or load certain levels of a hierarchy easily. I'm highly interested in reading about it.
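In the same speculative spirit, a toy sketch of that single-hierarchy idea; the Meshlet structure, selectMeshlets, and every number here are invented for illustration:

// Speculative: a tree of meshlets where each parent is a coarser merge
// of its children; keep the first node whose projected simplification
// error fits in a pixel, so only those chunks need streaming from disk.
#include <cstdio>
#include <vector>

struct Meshlet {
    float errorWorld;          // simplification error in world units
    std::vector<Meshlet> kids; // finer-grained children
};

void selectMeshlets(const Meshlet& node, float focalPx, float distance,
                    std::vector<const Meshlet*>& out)
{
    float errorPx = node.errorWorld * focalPx / distance;
    if (errorPx <= 1.0f || node.kids.empty()) {
        out.push_back(&node); // coarse enough: stream just this chunk
    } else {
        for (const Meshlet& kid : node.kids)
            selectMeshlets(kid, focalPx, distance, out);
    }
}

int main() {
    // Tiny two-level hierarchy: a coarse root with two finer children.
    Meshlet root{0.10f, {{0.01f, {}}, {0.01f, {}}}};
    std::vector<const Meshlet*> visible;
    selectMeshlets(root, 2000.0f, 500.0f, visible); // far away: root suffices
    printf("meshlets selected: %zu\n", visible.size());
    return 0;
}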
 
I suppose when Epic said there are some cases where hardware rasterization will be faster, it's when the triangle-to-pixel ratio is closer to normal. So if you hit a low-poly mesh while running at high resolution, suddenly each triangle covers up to, say, 16 pixels, and fixed function goes warp speed again. So there could definitely be quite a few instances where it switches over to the hardware rasterizers, mainly when you walk close up to an object for which the developer didn't provide high-enough-fidelity assets.
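A hedged sketch of what such a switch-over heuristic could look like; chooseRasterizer and the 16-pixel threshold come from the guess above, not from anything Epic has published:

// Route clusters of big triangles to fixed-function raster; keep tiny
// (pixel-sized) triangles on the compute/software path.
#include <cstdio>

enum class Raster { Hardware, Software };

Raster chooseRasterizer(float clusterScreenAreaPx, int triangleCount) {
    float pxPerTri = clusterScreenAreaPx / triangleCount;
    // Tiny triangles choke fixed-function raster (poor quad occupancy).
    return (pxPerTri >= 16.0f) ? Raster::Hardware : Raster::Software;
}

int main() {
    // Low-poly mesh filling much of the screen: big triangles.
    printf("%s\n", chooseRasterizer(2000000.0f, 50000) == Raster::Hardware
                       ? "hardware" : "software"); // 40 px/tri -> hardware
    // Dense cluster at ~1 px per triangle.
    printf("%s\n", chooseRasterizer(4000.0f, 4000) == Raster::Hardware
                       ? "hardware" : "software"); // 1 px/tri -> software
    return 0;
}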
 
I suppose when Epic said there are some cases where hardware rasterization will be faster, it's when the triangle-to-pixel ratio is closer to normal. So if you hit a low-poly mesh while running at high resolution, suddenly each triangle covers up to, say, 16 pixels, and fixed function goes warp speed again. So there could definitely be quite a few instances where it switches over to the hardware rasterizers, mainly when you walk close up to an object for which the developer didn't provide high-enough-fidelity assets.

That makes sense, but they probably don't expect that to be the case often.
 