Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

I did not read the papers myself, but if what you said is correct, wouldn't that put hardware that lacks primitive shaders at a disadvantage? From my understanding, only certain AMD GPUs and APUs have the hardware for primitive shaders. What about Nvidia's RTX GPUs, the GTX 16 series, and, I'm guessing, the Xbox Series APUs?
The latest incarnation of primitive shaders is another term for mesh shaders.
 
Good optimizations, or the PC version lacking optimizations? The engine is in preview, not final. Per the documentation, the console target is 1080p, with TSR used to upscale to 4K. The PC version could also be enabling more features than the console versions use.
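For reference, a minimal sketch of that console-style setup (render at 1080p internally, TSR up to a 4K output). The cvar values are my assumption for UE5 Early Access builds, not something from Epic's docs, so verify against your build:

```
; ConsoleVariables.ini (or enter these in the editor console)
r.ScreenPercentage=50      ; 50% per axis of a 4K output -> 1920x1080 internal
r.AntiAliasingMethod=4     ; 4 should select Temporal Super Resolution (TSR)
```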

I guess detail is proportional to resolution. If they're planning on limiting polygons to one per pixel, then increased resolution would mean increased polygon counts.
 
Excessive geometry is the new ray-traced reflections: an in-your-face, "cheap" effect to wow casuals. I hope artists don't go crazy; this makes no sense.

I hope they do, actually. Not necessarily through the Nanite approach, though.

The latest incarnation of primitive shaders is another term for mesh shaders.
Not sure that this is correct.
Note that the PS5 doesn't support mesh shaders but does support primitive shaders.
Primitive shaders do run on RDNA2 but they are not the same thing as mesh shaders.
AFAIU primitive shaders are about geometry culling while mesh shaders are about geometry processing.
 
Excessive geometry is the new ray-traced reflections: an in-your-face, "cheap" effect to wow casuals. I hope artists don't go crazy; this makes no sense.

It's really a testament to how effective their compute shader setup is at culling triangles, something the hardware path cannot do.
This is really important for sure: being able to do something like this even on machines that don't support mesh shaders.
 
I guess detail is proportional to resolution. If they're planning on limiting polygons to one per pixel, then increased resolution would mean increased polygon counts.

That's exactly what they are trying to do. Another side effect is that the needed SSD bandwidth is likely tied directly to the rendering resolution: 4K probably needs about 4x the data from the SSD compared to 1080p.
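That 4x figure falls straight out of the pixel counts. A back-of-envelope sketch (my own numbers, assuming the one-triangle-per-pixel target; nothing here is from Epic):

```cpp
#include <cstdio>

// If Nanite targets roughly one rendered triangle per pixel, visible triangle
// count -- and, per the post above, streamed geometry -- scales with pixels.
int main() {
    const long long px1080 = 1920LL * 1080; // 2,073,600 pixels
    const long long px4k   = 3840LL * 2160; // 8,294,400 pixels
    std::printf("1080p: %lld pixels\n", px1080);
    std::printf("4K:    %lld pixels\n", px4k);
    std::printf("scale: %.1fx\n", (double)px4k / px1080); // prints 4.0x
}
```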
 
Not sure that this is correct.
Note that the PS5 doesn't support mesh shaders but does support primitive shaders.
Primitive shaders do run on RDNA2 but they are not the same thing as mesh shaders.
AFAIU primitive shaders are about geometry culling while mesh shaders are about geometry processing.
True. I did not mean they were synonymous, just that they had some similarities. Based on this read, mesh shaders seem to offer more flexibility in programmability.
So will the PS5 also support mesh shaders?
There are several noticeable differences between the two. With the primitive shader setup AMD described, you still have to "assemble" the input data of a predefined format (vertices and vertex indices) sequentially, whereas with mesh shaders the input is completely user-defined, and the launching of a mesh shader is not bound by the input assembly stage; it's more like a compute shader that generates data to be consumed by the rasterizer. Also, with primitive shaders the tessellation stage is still optionally present, whereas for mesh shaders tessellation simply doesn't fit.

There are also overlaps between the two. Primitive culling is done programmatically and can be performed in a less restricted order; the LDS is now (optionally) visible in user shader code, whereas in the traditional vertex-shader-based pipeline each thread is unaware of the Local Data Share's existence.
...
Here's my take: to support mesh shaders in their purest form, you need the GPU's command processor to be able to launch shaders the way mesh shaders demand. If the command processor somehow isn't able to, then as long as the API exposes a good level of hardware detail, developers should still be able to get most of mesh shaders' advantages. If it's like AMD's approach with in-driver shader transformation, the advantage will be limited compared to full mesh shader support, as programmability is greatly sacrificed.
Primitive Shader: AMD's Patent Deep Dive | ResetEra
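On the PC side this is at least queryable at runtime. A minimal sketch, assuming D3D12 and a Windows SDK recent enough to define the mesh shader options (error handling elided; consoles expose this differently):

```cpp
#include <windows.h>
#include <d3d12.h>

// Returns true if the device/driver report full D3D12 mesh shader support,
// i.e. the "purest form" launch path discussed above. An engine would fall
// back to a compute or primitive-shader path otherwise.
bool SupportsMeshShaders(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 options7 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7,
                                           &options7, sizeof(options7))))
        return false; // runtime too old to even know about OPTIONS7
    return options7.MeshShaderTier >= D3D12_MESH_SHADER_TIER_1;
}
```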
 
I'm getting really noticeable artifacting around the character when in motion. Looks like ghosting from the TAA or motion blur implementation.

Has anyone been able to package the demo as a standalone executable so that it’s runnable outside the editor? I’m getting an error that the SDK isn’t installed properly whenever I try.
 
Just playing around with Nanite :p

What Nanite can drive vs a "traditional" mesh with LODs:

[Image: nanite7uja3.png]
 
It's really a testament to how effective their compute shader setup is at culling triangles, something the hardware path cannot do.
It doesn't seem to be very effective at culling: culling time is the same with both r.NaniteComputeRasterization and r.NanitePrimShaderRasterization. Rather, the compute rasterization path is way faster at rasterizing triangles smaller than 4 pixels with r.NaniteComputeRasterization, roughly equal to HW rasterization at r.Nanite.MaxPixelsPerEdge=4, and then starts losing to r.NanitePrimShaderRasterization (which is supposedly the HW rasterization path) at larger triangles.
That's all at a 2560x720 rendering resolution (slightly below 1080p in pixel count). The default setting for this resolution is r.Nanite.MaxPixelsPerEdge=2, which would translate into 8 pixels per edge at 4K, so the HW rasterizer can still beat it at that resolution with such settings. I'm not sure whether the classic rasterizer path is optimized in UE5 to the same extent as the compute rasterizer. I tried NVIDIA's mesh shader Asteroids demo recently (without some of the fancy optimizations) and it was capable of rasterizing 300 million polygons of overlapping asteroid meshes at 4K at 30 FPS; in wireframe mode there were a few meshes with holes, but otherwise the screen was mostly opaque (close to per-pixel density).
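For anyone who wants to reproduce the A/B, the cvars named above can go in Engine/Config/ConsoleVariables.ini (or be entered in the editor console). The on/off semantics here are my guess from the names; check the cvar help text in your build:

```
; Compare the two Nanite rasterization paths
r.NaniteComputeRasterization=1     ; compute-shader (SW) rasterizer
r.NanitePrimShaderRasterization=0  ; primitive-shader (HW) path off
r.Nanite.MaxPixelsPerEdge=2        ; default at ~720p per the post; try 4 and 8
```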
 
Did someone run it at 720p upscaled to 1080p to see how their upscaler looks, and how the performance scales?
 
Nanite vs LOD0 mesh from close up and further away:

[Image: lod0vsnanite8vkdr.png]

[Image: lod0vsnanitefarypkt5.png]


Very impressive! Even for games that don't want to use billions of polys, this will be nice: you can just author LOD0 and let Nanite handle the rest (for static meshes).
 
That propagation update delay is going to be noticeable all generation.

It annoys me so much that I'm already trying to think of clever ways around it :nope:

Of course, in this case I haven't even figured out how Lumen does its updating, so...

I'm getting really noticeable artifacting around the character when in motion. Looks like ghosting from the TAA or motion blur implementation.

Has anyone been able to package the demo as a standalone executable so that it’s runnable outside the editor? I’m getting an error that the SDK isn’t installed properly whenever I try.

As a guess, this could be the screen-space GI they use for filling in holes and detail. The same can happen in Days Gone when that's enabled. Little pools of darkness moving along with objects, maybe, or am I totally off?
 
This engine does make me wish that CIG had gone with Unreal instead of CryEngine. This stuff would have worked great in space, giving a ton of detail to the ships, which are mostly what you see out there, plus the ability to get really detailed on the planets too.
Oh well.

I await the first games using this.
 
This engine does make me wish that CIG had gone with Unreal instead of CryEngine. This stuff would have worked great in space, giving a ton of detail to the ships, which are mostly what you see out there, plus the ability to get really detailed on the planets too.

SC is all about wishes. ;-)

It does make me wonder how you'd build planetary tech to make the most of Nanite, though.
 