Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

So they're aiming for 60fps for console on release, assuming around 1440p? That means the console could handle native 4k or very close to it at 30fps upon release then?
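(Back-of-envelope behind that question, assuming GPU cost scales roughly with shaded pixels, which is only a crude approximation: 1440p at 60fps and 4K at 30fps land in the same ballpark of pixel throughput.)

#include <cstdio>

int main() {
    // Rough pixel-throughput comparison; assumes cost scales with shaded pixels.
    const double px1440p60 = 2560.0 * 1440.0 * 60.0;
    const double px4k30    = 3840.0 * 2160.0 * 30.0;
    printf("1440p @ 60: %.0f Mpx/s\n", px1440p60 / 1e6);
    printf("4K    @ 30: %.0f Mpx/s (~%.0f%% more)\n",
           px4k30 / 1e6, 100.0 * (px4k30 / px1440p60 - 1.0));
    return 0;
}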
 
[Attached: screenshots of Brian Karis's tweets; unrolled thread linked below]
 
Unrolled Twitter Thread: https://threadreaderapp.com/thread/1283057629506367488.html?refreshed=yes

Yes, Nanite draws the gbuffer in 4.5ms on average! Many assumed this amount of detail can only be had at 30fps. Not true! This is well within typical 60Hz budgets. That doesn't even count optimizations I've made since.
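For reference, the frame-budget arithmetic behind that claim; only the 4.5ms figure comes from the tweet, the rest is just the 60Hz math:

#include <cstdio>

int main() {
    // 60Hz target -> per-frame budget in milliseconds.
    const double frameBudgetMs   = 1000.0 / 60.0;  // ~16.67 ms
    const double naniteGBufferMs = 4.5;            // figure quoted above

    // Time left for lighting, post-processing, and everything else in the frame.
    const double remainingMs = frameBudgetMs - naniteGBufferMs;

    printf("60Hz frame budget:    %.2f ms\n", frameBudgetMs);
    printf("Nanite g-buffer pass: %.2f ms\n", naniteGBufferMs);
    printf("Left for the rest:    %.2f ms (~%.0f%% of the frame)\n",
           remainingMs, 100.0 * remainingMs / frameBudgetMs);
    return 0;
}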

Also memory isn’t insane. This is super WIP and was immature for the demo’s release timeframe but it a top focus for us right now. It’s already not as bad as you think and it will get significantly better over the next year.

Definitely check out the last third of the video where @Feaneroh covers the art production of the demo. The last bit where she flies 100mph through a city? Yeah, that was all just as detailed as everything else, you just couldn't tell.



The statue we say is 33M triangles? This shows just what that means. Now I wouldn't recommend or expect this for a common game asset due to the unnecessary size it would take on disk. This is a stress test to show off that the tech scales like we say it does.
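To put a rough number on that disk-size concern, here is an uncompressed back-of-envelope for a 33M-triangle sculpt, with an assumed vertex layout (Nanite's actual on-disk format is compressed and looks nothing like this):

#include <cstdio>

int main() {
    // Uncompressed back-of-envelope for a 33M-triangle sculpt.
    // Vertex layout and index width are assumptions for illustration only.
    const double tris     = 33e6;
    const double verts    = tris / 2.0;   // closed mesh: roughly half as many verts as tris
    const double vertSize = 12 + 12 + 8;  // position + normal + UV, bytes
    const double idxSize  = 3 * 4;        // three 32-bit indices per triangle

    const double totalMB = (verts * vertSize + tris * idxSize) / (1024.0 * 1024.0);
    printf("Raw mesh data: ~%.0f MB for a single prop\n", totalMB);
    return 0;
}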



That doesn't mean you can't still import your sculpts straight from ZBrush. Just like with textures, you author high, and as you get closer to ship, Nanite will make it easy to decide to drop a mip or 2 where needed to manage your shipping package size. Perf was good regardless.
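The texture analogy is easy to quantify: every mip you drop quarters the footprint, and the tweet suggests Nanite offers a comparable knob for mesh detail. A small sketch with an assumed 4096x4096 RGBA8 texture:

#include <cstdio>

int main() {
    // Each dropped mip quarters the texture's top-level footprint.
    // Assumed: a 4096x4096 RGBA8 texture (4 bytes per texel).
    double topMipMB = 4096.0 * 4096.0 * 4.0 / (1024.0 * 1024.0);  // 64 MB
    for (int dropped = 0; dropped <= 2; ++dropped) {
        printf("mips dropped: %d -> top mip ~%.0f MB\n", dropped, topMipMB);
        topMipMB /= 4.0;
    }
    return 0;
}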

When disk size isn't a concern, say film or enterprise use cases, you can do this and more. Honestly I expect data delivery to be one of the biggest constraints in game graphics for next gen. Virtualization tech like Nanite, VT and fast SSDs make the run-time side a nonissue.

The kitbashing in the cave sections was nuts. There is literal archaeology to be done here. I've counted 15+ onion layers of mesh. The video doesn't even do it justice as many of the meshes are grouped so when he hides them many go away at once.



I'm sure there are bits that are covered many times over such that no part of that mesh is seen at all. Normally this would be shamefully sloppy env art. They should optimize it, right? Well, only if it costs meaningful frame time. With Nanite it's negligible.

No one would say to a painter, go back and delete those early paint strokes you completely covered later on. That's unnecessary and the request sounds ridiculous. Why shouldn't it be the same for set dressing?
 
So I'm guessing they'll still find a polygon density that matches something like 1 polygon per pixel at a reasonable viewing distance. I wonder how they'll figure that out at time of authoring, or if that comes back to some engine tool that will automatically scale for you if you want to reduce the asset for your scene.
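For a feel of what "1 polygon per pixel" means in world units, a rough sketch assuming a 3840px-wide output and a 90 degree horizontal FOV (illustration only, not Epic's authoring guidance):

#include <cstdio>
#include <cmath>

int main() {
    // How big "one triangle per pixel" is in world units at a given distance.
    // Assumed: 3840px-wide output, 90 degree horizontal FOV.
    const double kPi      = 3.14159265358979;
    const double screenPx = 3840.0;
    const double hFovRad  = 90.0 * kPi / 180.0;

    const double distancesM[] = { 1.0, 5.0, 20.0 };
    for (double d : distancesM) {
        const double viewWidthM = 2.0 * d * tan(hFovRad / 2.0);
        const double pixelM     = viewWidthM / screenPx;  // world width one pixel covers
        printf("at %5.1f m: one pixel covers ~%.1f mm, so triangle edges of that order\n",
               d, pixelM * 1000.0);
    }
    return 0;
}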
 
I'm sure there are bits that are covered many times over such that no part of that mesh is seen at all. Normally this would be shamefully sloppy env art. They should optimize it, right? Well, only if it costs meaningful frame time. With Nanite it's negligible.
Does this hint they have hidden surface removal built in? Pretty likely...

I wonder what this means for HW RT. They still don't mention any plans to support it, even though they work on compute-traced mirror reflections.
hmmm...
 
Does this hint they have hidden surface removal built in? Pretty likely...

I wonder what this means for HW RT. They still don't mention any plans to support it, even though they work on compute-traced mirror reflections.
hmmm...

I would expect it's still in there. It's obvious they're putting their eggs mostly in the Lumen basket.
 
Does this hint they have hidden surface removal built in? Pretty likely...

I wonder what this means for HW RT. They still don't mention any plans to support it, even though they work on compute-traced mirror reflections.
hmmm...

Triangle ray-tracing does not scale down well to less capable platforms. Lumen is perfect for a general-purpose 3D engine.
 
Here are some tidbits apparently from the latest Edge Magazine

The tech goes far beyond backface culling (which detects which polygons are facing away from the view and doesn't draw them, saving on processing power).
"It's in the form of textures," Karis explains. It's actually like, what are the texels of the texture that are actually landing on pixels in your view? So, it's in the frustum......It's a very accurate algorithm, because when you're asking for it, it's requesting it. But because it's in that sort of paradigm, that means as soon as you request it, we need to get that data in very quickly."
- Brian Karis (Nanite Inventor)
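For anyone unfamiliar with the parenthetical above, backface culling is just a facing test; a generic sketch, not Nanite's actual culling code:

#include <cstdio>

// A triangle whose normal points away from the viewer can be skipped.
// Generic illustration of the parenthetical above, not Nanite's culling code.
struct Vec3 { double x, y, z; };

static double Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static bool IsBackfacing(Vec3 faceNormal, Vec3 viewDir) {
    // viewDir points from the camera toward the triangle.
    return Dot(faceNormal, viewDir) >= 0.0;
}

int main() {
    const Vec3 viewDir      = { 0.0, 0.0,  1.0 };  // camera looks down +Z
    const Vec3 facingCamera = { 0.0, 0.0, -1.0 };
    const Vec3 facingAway   = { 0.0, 0.0,  1.0 };
    printf("front face culled? %d\n", IsBackfacing(facingCamera, viewDir));
    printf("back face culled?  %d\n", IsBackfacing(facingAway, viewDir));
    return 0;
}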

Sounds like a software analogue of sampler feedback.
 
Not really. SF is reactive: you query the GPU on what it's sampling and respond to that. This algorithm is predictive, computing texture use beforehand. That's what virtual texturing has relied upon to select which tiles to load; the better the algorithm, the fewer tiles you need to pre-cache.
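To make the "predictive" part concrete, here is a hedged sketch of classic virtual-texture tile selection from UV derivatives; the function name and 128-texel tile size are invented for illustration, not any engine's actual API:

#include <cstdio>
#include <cmath>
#include <algorithm>

// Which virtual-texture tile does a pixel need? Computed up front from the UV
// footprint the pixel covers (standard mip selection), rather than read back
// from the GPU afterwards as with sampler feedback.
struct TileRequest { int mip, tileX, tileY; };

TileRequest SelectTile(double u, double v,        // UV at the pixel
                       double dudx, double dvdx,  // UV change per screen pixel in x
                       double dudy, double dvdy,  // UV change per screen pixel in y
                       int texSize, int tileSize) {
    const double fx = std::hypot(dudx, dvdx) * texSize;
    const double fy = std::hypot(dudy, dvdy) * texSize;
    const double footprint = std::max(fx, fy);  // texels covered by one screen pixel
    const int mip = std::max(0, (int)std::floor(std::log2(std::max(footprint, 1.0))));

    const int mipSize = std::max(texSize >> mip, 1);
    const int tx = (int)(u * mipSize) / tileSize;
    const int ty = (int)(v * mipSize) / tileSize;
    return { mip, tx, ty };
}

int main() {
    // A pixel near UV (0.3, 0.7) where one screen pixel spans ~4 texels of a 16K texture.
    const TileRequest r = SelectTile(0.3, 0.7, 4.0 / 16384, 0.0, 0.0, 4.0 / 16384, 16384, 128);
    printf("request mip %d, tile (%d, %d)\n", r.mip, r.tileX, r.tileY);
    return 0;
}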
 