Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

Nanite is one aspect of UE; devs can add foliage over a Nanite-rendered terrain. I don't understand the false concern here, they even stated it in the video.
So if the Nanite portion of rendering took only 4.5 ms of frame time, there are plenty of resources left for other graphical features.
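A rough back-of-envelope in Python, assuming a 30 fps target like the demo (the 4.5 ms figure is the one quoted above; the rest is just my arithmetic):

target_fps = 30                       # assumed target frame rate
frame_budget_ms = 1000 / target_fps   # ~33.3 ms per frame at 30 fps
nanite_ms = 4.5                       # geometry cost quoted above
print(frame_budget_ms - nanite_ms)    # ~28.8 ms left for lighting, shading, gameplay, etc.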

Of course you can, but not with the same level of detail. The goal here is not to dismiss Nanite; it seems like a great technique. I just had a problem with the idea that geometry is a solved issue while there are still many shortcomings at the moment.

It always felt weird to me watching those demos. The scene was beautiful but felt dead. And then I found out why: because Nanite's rendering of billions of triangles could really only be applied to rigid structures.

Solve rendering richer billion-polygon scenes with grass, trees, sand, water, etc. and I'll be impressed.

Nanite is not rendering billions of polygons. The source assets have billions of polygons, but Nanite only renders what's needed for a given frame, so in this demo that is at most ~3.5 million polygons, since rendering polygons smaller than one pixel is useless.
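A quick sanity check on that number, assuming the demo's roughly 1440p output (this is just my arithmetic, not Epic's figure):

width, height = 2560, 1440        # assumed ~1440p output resolution
print(width * height)             # 3,686,400 pixels, so at ~one triangle per pixel
                                  # useful geometry tops out around 3.5-3.7 million triangles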
 
Nanite is not rendering billions of polygons. The source assets have billions of polygons, but Nanite only renders what's needed for a given frame, so in this demo that is at most ~3.5 million polygons, since rendering polygons smaller than one pixel is useless.

My bad ... not "rendering", but it has "access to" billions of polygons ...
 
They said this is not useful for foliage, but if it is only foliage, hair and grass that remain, we begin to see the end of the problem. And best of all, it seems the system is not a performance hog. But we will wait for future iterations of the Nanite technology.

Which was fully predictable, but the question they didn't answer is lighting. I'd still consider lighting far from solved: how long does the shadow pass take? Can you have dozens of shadow-casting lights, or hundreds? It's certainly possible. I suspect they're doing some sort of BVH crawl and splat, which could add shadow maps fairly cheaply, though not soft area-light shadows.
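For context on why the light count matters, a minimal sketch of the conventional per-light shadow-map approach that question is implicitly comparing against (render_depth_map is a hypothetical stand-in, not UE5 API):

def shadow_pass(lights, scene, render_depth_map):
    # Conventional shadow mapping: one depth-only render of the scene per shadow-casting
    # light, so the pass cost grows roughly linearly with the number of lights.
    shadow_maps = {}
    for light in lights:
        shadow_maps[light] = render_depth_map(scene, light)  # hypothetical renderer call
    return shadow_maps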

Of course you can, but not with the same level of detail. The goal here is not to dismiss Nanite; it seems like a great technique. I just had a problem with the idea that geometry is a solved issue while there are still many shortcomings at the moment.



Nanite is not rendering billions of polygons. The source assets have billions of polygons, but Nanite only renders what's needed for a given frame, so in this demo that is at most ~3.5 million polygons, since rendering polygons smaller than one pixel is useless.

Technically more than one poly per pixel is needed, as screen pixels represent discrete view frustums, rather than discrete infinitely small points.

Thus Nanite still needs normal maps, which you can compact down into things like scratch maps and BRDFs to get sub-pixel detail. It's also what LEADR mapping does for you.

Interesting that it doesn't do that for you.
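To illustrate the idea of folding sub-pixel normal detail into the BRDF mentioned above: averaging the normals that fall inside one pixel shortens the mean vector, and that lost length is what Toksvig/LEAN/LEADR-style filtering converts into extra roughness. A toy Python example with made-up normals:

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

# a few unit normals of a bumpy surface that all land inside a single pixel (made-up values)
normals = [normalize((0.3, 0.1, 1.0)), normalize((-0.2, 0.2, 1.0)), normalize((0.0, -0.3, 1.0))]
avg = tuple(sum(n[i] for n in normals) / len(normals) for i in range(3))
print(math.sqrt(sum(c * c for c in avg)))   # < 1.0: the shortfall encodes sub-pixel roughness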
 
Which was fully predictable, but the question they didn't answer is lighting. I'd still consider lighting far from solved: how long does the shadow pass take? Can you have dozens of shadow-casting lights, or hundreds? It's certainly possible. I suspect they're doing some sort of BVH crawl and splat, which could add shadow maps fairly cheaply, though not soft area-light shadows.



Technically more than one poly per pixel is needed, as screen pixels represent discrete view frustums, rather than discrete infinitely small points.

Thus Nanite still needs normal maps, which you can compact down into things like scratch maps and BRDFs to get sub-pixel detail. It's also what LEADR mapping does for you.

Interesting that it doesn't do that for you.

Lighting is far from being solved. Funny they did not talk about the virtual shadow map system, where they sometimes use 16k textures.
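Rough arithmetic on why a 16k shadow map has to be virtual, i.e. sparsely allocated in pages (assuming a 32-bit depth format; the numbers are mine, not Epic's):

side = 16 * 1024                               # 16k texels per side
bytes_per_texel = 4                            # assumed 32-bit depth format
print(side * side * bytes_per_texel / 2**30)   # 1.0 GiB if a single map were fully resident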
 
I would like, like the good old times of PS1 demo discs, for such demos to be released on the PS Store (and other platforms' stores) so we could play with them.
 
I honestly do not know how they will get around light leaking or offer up mirror reflections at all (that seems at odds with using screen space, voxels, or SDFs). Perhaps their final version will combine the SDF and voxel tracing with some form of triangle ray tracing to clean up light leaking.
Nevertheless, their method of computing GI delivers good results; IMO the only way RTX GI can compete with this is if it provides faster performance through hardware acceleration.

This also means RTX will be mostly relegated to Reflections and Shadows in the near future, or complete path tracing models as in Quake 2 RTX and Minecraft RTX.
 
Is there obvious light leaking in the demo? It wasn't obvious to me. There's presumably some, if they mention it in their presentation.

Since they're pimping Lumen as a general solution, I wouldn't have thought they'd be designing around light leaks.
 
Is there obvious light leaking in the demo? It wasn't obvious to me. There's presumably some, if they mention it in their presentation.

Since they're pimping Lumen as a general solution, I wouldn't have thought they'd be designing around light leaks.
They do not have many smaller features in the demo, or thinner facades, but if you look in between the statues, or the stalactites and stalagmites, you can see light leaking. They try to clean it up in screen space, but that only goes so far. It should show up more in other content with smaller portals and smaller assets (indoor light coming in through a human-sized window, for example). Similar problems to SVOGI or SDF GI seen elsewhere.
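A toy 1-D illustration of how that kind of leak happens when the tracing granularity is coarser than the geometry (a generic ray-march sketch, not Lumen's actual tracer):

def occluded(x):
    return 5.05 <= x <= 5.15            # a wall only 0.1 units thick

step = 0.25                             # coarse march step (think voxel / SDF cell size)
x, hit = 0.0, False
while x <= 10.0:
    if occluded(x):
        hit = True
        break
    x += step
print("wall detected:", hit)            # False: the march steps over the thin wall, so light leaks through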
 
Nevertheless, their method of computing GI delivers good results; IMO the only way RTX GI can compete with this is if it provides faster performance through hardware acceleration.

This also means RTX will be mostly relegated to Reflections and Shadows in the near future, or complete path tracing models as in Quake 2 RTX and Minecraft RTX.
Well, with the 3000 series just around the corner (Sept?), that power might be available soon, if the rumours are to be believed.
 
Nevertheless, their method of computing GI delivers good results; IMO the only way RTX GI can compete with this is if it provides faster performance through hardware acceleration.

This also means RTX will be mostly relegated to Reflections and Shadows in the near future, or complete path tracing models as in Quake 2 RTX and Minecraft RTX.
DLSS is what somewhat equalizes the performance gap for Nvidia here. I think RT will enable developers to do higher-quality GI and everything else, but the costs are going to be large, and a penalty to resolution is likely. Though, having said that, UE5 only got away with 1440p30 before having to upscale. Super RT heavy games run at 1080p30 without DLSS on 2080TI, so there's room there to surpass what UE5 has done with vendor-specific setups.
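Rough pixel-count ratios behind that resolution argument (just arithmetic, no vendor numbers):

res = {"1080p": 1920 * 1080, "1440p": 2560 * 1440, "4K": 3840 * 2160}
print(res["1440p"] / res["1080p"])      # ~1.78x the pixels of 1080p
print(res["1080p"] / res["1440p"])      # rendering internally at 1080p and upscaling to 1440p
                                        # shades only ~56% of the output pixels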
 
I just can't agree with people saying DLSS is a solution for anything. They've come a long way from the first titles, but they're still full of ringing artifacts and other anomalies which are just inexcusable, at least for me.
 
Super RT heavy games run at 1080p30 without DLSS on 2080TI,
That's actually 1440p @ 40 fps for path-traced games like Minecraft or Quake 2; 1080p will be locked to 60 fps on a 2080 Ti. Metro Exodus runs 1440p at 50 to 60 fps depending on the region, and it has far more AI, physics and action than the UE5 demo.

they're full of ringing artifacts and other anomalies which are just inexcusable, at least for me
TAA is full of blur, shimmering and instability during motion, so at the end of the day they equalize.
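For anyone wondering where that blur and ghosting come from, a toy sketch of the history blend most TAA implementations do (heavily simplified: no reprojection or neighbourhood clamping):

def taa_resolve(history_color, current_color, alpha=0.1):
    # alpha weights the new frame; a small alpha gives more stability but more blur/ghosting
    return tuple(h * (1 - alpha) + c * alpha for h, c in zip(history_color, current_color))

print(taa_resolve((1.0, 0.0, 0.0), (0.0, 0.0, 1.0)))   # the old red pixel ghosts into the new blue one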
 
That's actually 1440p @ 40 fps for path-traced games like Minecraft or Quake 2; 1080p will be locked to 60 fps on a 2080 Ti. Metro Exodus runs 1440p at 50 to 60 fps depending on the region, and it has far more AI, physics and action than the UE5 demo.


TAA is full of blur, shimmering and instability during motion, so at the end of the day they equalize.
It depends on the TAA implementation.
 
TAA is full of blur, shimmering and instability during motion, so at the end of the day they equalize.
As pointed out, it depends on the TAA implementation. And I'll take jaggies at 32" 1440p any day over DLSS* or bad TAA.

* (Until they fix ringing artifacts and preferably more)
 