Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

The downplaying by some is hilarious. To quote DF again:

This is next-gen: see Unreal Engine 5 running on PlayStation 5

A genuine generational leap that must be seen to be believed.

Article by Richard Leadbetter

We've seen the specs, we've heard the pitches - but what we haven't experienced is any demonstration of a genuine next-gen vision. That changes today with Epic Games' reveal of Unreal Engine 5, accompanied by an astonishing tech demo confirmed as running in real-time on PlayStation 5 hardware. The promise is immense with the quality and density of the visuals on display almost defying belief. Imagine a game world where geometric detail is unlimited, with no pop-in and huge draw distances. Now picture this unprecedented level of fidelity backed up by real-time global illumination that's fully dynamic. It sounds too good to be true, but watch the video on this page and that's what's on display. This is next-gen and it's enormously exciting.
 

So for each pixel you cast a primary visibility ray. The scene is organized as a BVH, with a bounding volume for each model in the scene. You find the nearest-hit intersection with a volume. Then how do you determine which piece of the mesh to load? Keep a hierarchy of volumes for each meshlet of that model/mesh? In the distance you could have a meshlet that's one pixel in size, and then I guess you cull that meshlet and place the remainder in the geometry cache?
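To make the question concrete: here's a minimal C++ sketch of what a per-meshlet LOD hierarchy walk could look like, assuming a tree of bounding spheres per mesh whose leaves are meshlets. Everything in it (the names, the 1-pixel threshold, the pinhole projection) is my own guess, not anything Epic has described.

#include <cstdio>
#include <vector>

struct MeshletNode {
    float centerZ;              // view-space depth of the bounding sphere
    float radius;               // bounding-sphere radius in world units
    std::vector<int> children;  // indices into the node array; empty = leaf
};

// Projected size in pixels under a pinhole camera; focalPx is the focal length in pixels.
float projectedSizePx(const MeshletNode& n, float focalPx) {
    return 2.0f * n.radius * focalPx / n.centerZ;
}

// Emit the coarsest node that is ~1 pixel or smaller, otherwise recurse into
// finer children. Emitted meshlets would be what gets streamed into the cache.
void selectMeshlets(const std::vector<MeshletNode>& nodes, int idx,
                    float focalPx, std::vector<int>& out) {
    const MeshletNode& n = nodes[idx];
    if (n.children.empty() || projectedSizePx(n, focalPx) <= 1.0f) {
        out.push_back(idx);
        return;
    }
    for (int c : n.children) selectMeshlets(nodes, c, focalPx, out);
}

int main() {
    // Toy two-level hierarchy: one coarse cluster with two finer children.
    std::vector<MeshletNode> nodes = {
        {100.0f, 1.0f, {1, 2}},
        {100.0f, 0.5f, {}},
        {100.0f, 0.5f, {}},
    };
    std::vector<int> visible;
    selectMeshlets(nodes, 0, 1000.0f, visible);
    for (int i : visible) std::printf("meshlet %d selected\n", i);
}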
 
I honestly don't know (but we will know soon enough once more info is shared). Ironically, RenderMan being a REYES renderer is one of the primary reasons why Pixar took so long to fully adopt ray tracing (REYES was finally dropped in version 21.0 in 2016 in favor of path tracing with RIS).
 
1440p/30 instead of 4K/120, no ray tracing, lousy last-gen character model/animation, no color, boring materials/shading, derivative Star Wars landscape ... other than that though, awesome and exciting stuff!

Yes, I have been thinking the same. The PS5 memory architecture is highly bandwidth-limited, the same cheap construction as the PS4, where CPU access reduces GPU performance or creates stalls. So no improvements there, and then it's no surprise you only get 1440p/30 and lots of artifacts. Personally I don't need 4K; I'm happy with 1080p but with 60 fps. And the hype over billions of polygons reminds me of all the megapixel hype with digital cameras, where other things are more important. We need better pixels, color dynamics, shader quality; that's more important than an unrealistic polygon count. Polygons alone don't make good graphics.
 

I'm curious which ZBrush format they're using. They talk about the statue having 33 million polygons, so I'm assuming it's not a movie format that uses quads or something, but I guess it could be.
 
It's either OBJ or FBX (there's no such thing as a movie format). Quads are automatically triangulated by the engine, so it doesn't matter whether the model uses quads or tris. Quads are mostly used for non-static models (animated meshes like characters, etc.). BTW, all Megascans models are tris, and the high-poly versions included in their packages are usually 1M polys max; they're simply decimated versions of the higher-poly source models created through photogrammetry, which are often 60M+ polys in OBJ (or more, but then in PLY), cleaned up in ZBrush.
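Side note: the triangulation itself is trivial, which is why importers just do it for you. A minimal sketch of the usual fan split of a quad (real importers like the FBX SDK or Assimp handle edge cases this ignores):

#include <array>
#include <vector>

using Quad = std::array<int, 4>;  // vertex indices v0 v1 v2 v3
using Tri  = std::array<int, 3>;

// Split each quad along the v0-v2 diagonal into two triangles.
std::vector<Tri> triangulate(const std::vector<Quad>& quads) {
    std::vector<Tri> tris;
    tris.reserve(quads.size() * 2);
    for (const Quad& q : quads) {
        tris.push_back({q[0], q[1], q[2]});
        tris.push_back({q[0], q[2], q[3]});
    }
    return tris;
}

int main() {
    std::vector<Quad> quads = {{0, 1, 2, 3}};
    return triangulate(quads).size() == 2 ? 0 : 1;  // one quad -> two tris
}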
 
Friendly reminder: this is still a technical discussion thread, so keep the baseless rumors out of it. Apologies to some of the technically driven responses that got caught up in the clean-up on aisle five.
 
So no improvements there, and then it's no surprise you only get 1440p/30 and lots of artifacts.
What artifacts? Care to point them out?

The downplaying by some is hilarious.
Indeed.

They're using a compute-driven approach, so GPU width matters as well. It's not clear whether this scales better with clock speed or width.
Yes and no?
They're saying they're using both mesh shaders and other hand-written compute shaders of their own, but AFAIK that's only for discarding hidden or sub-pixel triangles when defining LODs. In the end it still needs a geometry processor / tessellator for the triangles that aren't discarded, and that's dependent on clock speed.
Of course, we don't know if the Series X has the same number of geometry engines, or the same triangle throughput per clock, as the PS5.

But again, more weight on mesh shaders and compute throughput for effective geometry throughput puts the validity of Lockhart even more into question.
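Purely to illustrate the kind of compute-based discarding being talked about here, a hedged CPU-side sketch of back-face plus sub-pixel rejection; the actual UE5 shaders are unknown, and the winding convention and threshold below are just assumptions.

#include <algorithm>
#include <vector>

struct Vec2 { float x, y; };  // screen-space position in pixels
struct Tri  { Vec2 a, b, c; };

// Signed area; non-positive means degenerate or back-facing under this winding convention.
float signedArea(const Tri& t) {
    return 0.5f * ((t.b.x - t.a.x) * (t.c.y - t.a.y) -
                   (t.c.x - t.a.x) * (t.b.y - t.a.y));
}

// A triangle whose screen-space bounds are under a pixel may fall between sample points.
bool subPixel(const Tri& t) {
    float minX = std::min({t.a.x, t.b.x, t.c.x}), maxX = std::max({t.a.x, t.b.x, t.c.x});
    float minY = std::min({t.a.y, t.b.y, t.c.y}), maxY = std::max({t.a.y, t.b.y, t.c.y});
    return (maxX - minX) < 1.0f && (maxY - minY) < 1.0f;
}

// Keep only front-facing triangles at least roughly a pixel in size.
std::vector<Tri> cullPass(const std::vector<Tri>& in) {
    std::vector<Tri> out;
    for (const Tri& t : in)
        if (signedArea(t) > 0.0f && !subPixel(t))
            out.push_back(t);
    return out;
}

int main() {
    std::vector<Tri> tris = {
        {{0, 0}, {10, 0}, {0, 10}},      // large, front-facing: kept
        {{5, 5}, {5.4f, 5}, {5, 5.4f}},  // sub-pixel: discarded
    };
    return (int)cullPass(tris).size();  // 1
}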
 
Aren't they saying that virtual geometry uses hardware where available (I understand here they're talking about primitive/mesh shaders) and compute where not (Android, current gen...)?

What is clear is that their real-time lighting is compute-based.
 
Free a lot of resources? IT WILL FREE UP A LOT OF THE PRECIOUS TIME AND PAIN DEVELOPERS SPEND JUST MAKING THEIR ASSETS WORK IN-GAME!
I can't stress this enough!!!!
Have we really reached that point? It is a godsend!
In the case of photogrammetry-based assets and static meshes, which is what Nanite seems to be aimed at, it takes less than 10 seconds to bake a normal map... so, no. Many developers already use "one click" solutions like InstaLOD:

 
But again, more weight on mesh shaders and compute throughput for effective geometry throughput puts the validity of Lockhart even more into question.
I actually think this solution sounds like it would be good for Lockhart. It sounds like they can do fine-grained loading of geometry from disk instead of coarse loading of full models that require huge amounts of culling. With 1 polygon per pixel, even geometry will scale linearly with resolution. Lockhart would still be full-featured RDNA 2 in theory, so it may work out.
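The linear-scaling part is easy to sanity-check: at roughly 1 triangle per pixel, the visible-triangle budget per frame is just the pixel count. Back-of-the-envelope:

#include <cstdio>

int main() {
    struct { const char* name; int w, h; } modes[] = {
        {"1080p", 1920, 1080},
        {"1440p", 2560, 1440},
        {"4K",    3840, 2160},
    };
    for (auto& m : modes) {
        long long tris = 1LL * m.w * m.h;  // ~1 triangle per pixel
        std::printf("%s: ~%.1fM visible triangles per frame\n", m.name, tris / 1e6);
    }
}

So a machine targeting 1080p would only need roughly a quarter of the visible-triangle throughput of a 4K target (~2.1M vs ~8.3M per frame), which is the crux of the Lockhart argument.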
 
Aren't they saying that virtual geometry uses hardware where available (I understand here they're talking about primitive/mesh shaders) and compute where not (Android, current gen...)?
In this demo running on a PS5, they're using a mix of primitive shaders and hand-written compute shaders:

However, subsequent to our interview, Epic did confirm that the next-gen primitive shader systems are in use in UE5 - but only when the hardware acceleration provides faster results than what the firm describes as its 'hyper-optimised compute shaders'.

https://www.eurogamer.net/articles/...xt-gen-unreal-engine-running-on-playstation-5


In the case of photogrammetry-based assets and static meshes, which is what Nanite seems to be aimed at
I'm pretty sure I heard them talk about using Nanite for destruction and transformation, and you actually see a lot of that during the demo (entire walls turning into debris while falling down).


I actually think this solution sounds like it would be good for Lockhart. It sounds like they can do fine-grained loading of geometry from disk instead of coarse loading of full models that require huge amounts of culling. With 1 polygon per pixel, even geometry will scale linearly with resolution. Lockhart would still be full-featured RDNA 2 in theory, so it may work out.
But if the geometry is the same, it still needs compute for discarding triangles. What you're suggesting is that a lower resolution will need less tessellation performance, which might be true in this context (because they're discarding all sub-pixel triangles), but that part isn't compute-based. The number of triangles they need to discard is the same if the geometry is the same, and that's what uses precious compute throughput.
If anything, a lower resolution needs to discard more triangles, because there will be more sub-pixel triangles to discard.
 
I was wrong about RTRT. If they can achieve this kind of GI without an RT engine, this is more than enough in my eyes.
But with this insane number of polys per frame reaching per-pixel precision... how will they achieve foliage and vegetation? Will they still use transparency, or will it be a freaking insane number of polys?
 

You'll still need RTRT for any reflections that can't be done through screen-space reflections (i.e. anything that's being reflected from an object not present on screen).
They were very careful not to need it for the demo, though.
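The usual fallback chain implied here: attempt a screen-space trace first, and only fall back to ray tracing (or a probe/cubemap) when the reflection ray leaves the screen. A toy 1D illustration of that decision; a real SSR pass marches the depth buffer, which this stand-in skips entirely.

#include <cstdio>

enum class ReflectionSource { ScreenSpace, RayTraced };

// Toy 1D "trace": step along x and report whether the ray stayed on screen.
bool ssrTrace(float x, float dirX, int steps, float screenWidth) {
    for (int i = 0; i < steps; ++i) {
        x += dirX;
        if (x < 0.0f || x >= screenWidth) return false;  // ray left the screen
    }
    return true;
}

ReflectionSource pickReflection(float x, float dirX, float screenWidth) {
    return ssrTrace(x, dirX, 64, screenWidth) ? ReflectionSource::ScreenSpace
                                              : ReflectionSource::RayTraced;  // off-screen data needs RT (or a probe)
}

int main() {
    // A ray starting mid-screen and staying on screen can use screen-space data.
    bool ssr = pickReflection(960.0f, 2.0f, 1920.0f) == ReflectionSource::ScreenSpace;
    std::printf("%s\n", ssr ? "SSR" : "RT fallback");
}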
 
I'm pretty sure I heard them talk about using Nanite for destruction and transformation, and you actually see a lot of that during the demo (entire walls turning into debris while falling down).
Those are still static non-rigged meshes (unlike an animated character for example).
 
But if the geometry is the same, it still needs compute for discarding triangles. What you're suggesting is that a lower resolution will need less tessellation performance, which might be true in this context (because they're discarding all sub-pixel triangles), but that part isn't compute-based. The number of triangles they need to discard is the same if the geometry is the same, and that's what uses precious compute throughput.
If anything, a lower resolution needs to discard more triangles, because there will be more sub-pixel triangles to discard.
Discarding triangles, I think, is not the issue here.

I actually think the bottleneck is the aim of 1 triangle per pixel; that should have been a red flag for us right away.
(white paper here: https://www.amd.com/system/files/documents/rdna-whitepaper.pdf)
The RDNA rasterizer only outputs 1 triangle and emits 16 pixels per clock cycle. But if you're doing 1 triangle per pixel, you've dramatically reduced your output: you're heavily primitive-bound, running at 1/16th of peak. Rasterization efficiency is going to be super low. And it doesn't matter how much you cull your triangles; if you're using the fixed-function hardware this way, you're bound. I don't know how they got around this. If the aim is 100% effective rasterization, each triangle should cover 16 pixels. Among many things that need consideration, I'm not sure if you can render 1 triangle that takes up 3 pixels and use the individual vertices to represent 1 pixel each, so you claw back 3 pixels as opposed to 1.
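The 1/16 figure falls straight out of those whitepaper numbers; a tiny worked check:

#include <cstdio>

int main() {
    const float trisPerClock   = 1.0f;   // rasterizer primitive rate (RDNA whitepaper)
    const float pixelsPerClock = 16.0f;  // rasterizer pixel emit rate

    // Large triangles: all 16 pixel slots per clock do useful work.
    // 1-pixel triangles: the 1 triangle/clock limit leaves 1 of 16 slots busy.
    float efficiency = (trisPerClock * 1.0f) / pixelsPerClock;
    std::printf("raster efficiency at 1 tri per pixel: %.1f%%\n", efficiency * 100.0f);  // 6.3%
}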

https://frostbite-wp-prd.s3.amazonaws.com/wp-content/uploads/2016/03/29204330/GDC_2016_Compute.pdf
Graham Wihlidal, who wrote this GPU culling presentation while at DICE (he's now at Epic), has written a lot about GPU-based culling and GPU-driven workloads.
 