Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

Most of the demo is rocks, right? Presumably it'd be possible to program some procedural AI tech that'd do a fine job of randomising these types of structures.

If I were the guys that made No Man's Sky, I'd be rushing around creating some middleware for UE5 that creates procedural structures in the millions of polygons range.

I can't understand why choosing from a library of objects (especially rocks!) is in any way efficient.

I can only imagine there'll be a big rush to create decent procedural objects for this engine now.
 
I can only imagine there'll be a big rush to create decent procedural objects for this engine now.
Yeah, I've actually been working on terrain generation for quite some time now.
HQ heightmaps like those from the Gaea tool are quite easy to do. But as we zoom in more closely, the surface of rocks is formed mainly by fracture, which is much harder to simulate than erosion.
The modular composition workflow proposed by UE5 adds another obstacle: real-world terrain is shaped by global relations - you cannot easily reproduce the resulting flows and large-scale shapes from repetitive small-scale modules.
The alternative is to simulate at global scale, not using modules. But then detail becomes difficult because of storage limits.
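
To illustrate what "simulation" means here: below is a minimal sketch of a thermal erosion pass over a heightmap, where material creeps downhill wherever the slope exceeds a talus angle. Purely illustrative - the function names and constants are mine, and real tools like Gaea use far more elaborate hydraulic/thermal models.

```cpp
// Minimal thermal-erosion step on a w*h heightmap stored row-major.
// Material moves downhill wherever the slope exceeds the talus angle.
#include <vector>
#include <cstddef>

void ThermalErosionStep(std::vector<float>& height, int w, int h,
                        float talus = 0.01f, float rate = 0.5f)
{
    std::vector<float> delta(height.size(), 0.0f);
    const int dx[4] = { 1, -1, 0, 0 };
    const int dy[4] = { 0, 0, 1, -1 };

    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            const float center = height[y * w + x];
            for (int n = 0; n < 4; ++n)
            {
                const int nx = x + dx[n], ny = y + dy[n];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                const float diff = center - height[ny * w + nx];
                if (diff > talus) // slope too steep: move material downhill
                {
                    const float moved = rate * (diff - talus) * 0.25f;
                    delta[y * w + x]   -= moved;
                    delta[ny * w + nx] += moved;
                }
            }
        }

    // Apply all transfers at once so the pass is order-independent.
    for (size_t i = 0; i < height.size(); ++i)
        height[i] += delta[i];
}
```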

Generating Quixel-quality assets procedurally surely is hard. ML does quite well here (I've seen many papers), though it needs samples (expensive), which limits creativity if we want to create non-earth-like worlds.

So I'm into the simulation approach, because it needs no input and can generate from zero. (Knowing nothing about ML, I have no choice anyway.)
So far my experience is that this is very promising, mainly because computers have become so powerful. Traditional work (NMS included) was restricted to things like Perlin or Voronoi noise, but simulation can generate those same building blocks at much higher quality, with the realistic flow that was missing before. It feels like there are no limits on one side, but on the other side I'm constantly lacking ideas on what exactly to do to get the expected results.
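
Since Perlin and Voronoi noise keep coming up as the traditional building blocks, here's what the Voronoi variant (Worley / cellular noise) boils down to: the value at a point is the distance to the nearest jittered feature point. A minimal sketch - the hash function and constants are my own choices.

```cpp
// Minimal Worley (Voronoi / cellular) noise in 2D.
#include <cmath>
#include <cstdint>

// Tiny hash producing a pseudo-random float in [0,1) from grid coords.
static float Hash01(int x, int y, uint32_t seed)
{
    uint32_t h = seed;
    h ^= uint32_t(x) * 374761393u;
    h ^= uint32_t(y) * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return float(h & 0xFFFFFF) / float(0x1000000);
}

// Distance to the nearest feature point; one jittered point per grid cell.
// Checking the 3x3 neighbourhood is enough since points stay in their cell.
float WorleyNoise(float px, float py)
{
    const int cx = int(std::floor(px));
    const int cy = int(std::floor(py));
    float best = 1e9f;

    for (int oy = -1; oy <= 1; ++oy)
        for (int ox = -1; ox <= 1; ++ox)
        {
            const int gx = cx + ox, gy = cy + oy;
            const float fx = gx + Hash01(gx, gy, 1u); // feature point in cell
            const float fy = gy + Hash01(gx, gy, 2u);
            const float d = std::hypot(px - fx, py - fy);
            if (d < best) best = d;
        }
    return best; // remap to taste, e.g. 1 - saturate(best) for cell bumps
}
```
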
But mainly I feel we are at a paradigm shift where procedural generation is finally ready to take off, solving the problem of the boring, uniform results we've seen in the past. We don't have another option anyway - scanning and compositing is still too limited and too much effort in the long run, I think.

For UE5, automatic placement of Quixel assets over a heightmap really seems necessary, but even if detail is "unlimited", the results won't be convincing in many cases.
 
It looks like Hardware-RT is broken currently. I was testing it in a different project and it does absolutely nothing.
I caught some of their show yesterday, and it sounded like it's something they want to get in before its release.
 
It looks like Hardware-RT is broken currently. I was testing it in a different project and it does absolutely nothing.
It seems it enables RTAO, but you need to disable indirect diffuse lighting to see it - https://forum.beyond3d.com/threads/...ailability-2022-q1.61740/page-94#post-2206198
Otherwise RTAO is being completely overridden by the Lumen indirect lighting (which doesn't seem to use HWRT for now).
As we discussed with joej before, the main performance hog with HW RT is the Nanite proxies; you can disable them via the r.RayTracing.Geometry.NaniteProxies command and perf will increase 10x (RT will still work for non-Nanite geometry).
The issue is that NaniteProxies are separate instances of simplified Nanite meshes with tons of overlapped geometry (hence the low performance, because such overlapping is very suboptimal for BVH creation and tracing). These separate instances of geometry that form the Nanite clusters would have to be merged into larger meshes for appropriate RT performance.
Epic already did such merging for SDFs, so I am expecting them to address the same problem with HWRT soon. It really should not be that difficult; they have to merge Nanite clusters into a smaller number of individual proxy meshes with a few LODs.
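
For anyone who wants to flip that cvar from code rather than typing it into the console, UE's standard console-variable API should do it. A small sketch - the cvar name comes from the post above; the wrapper function is mine:

```cpp
// Disable Nanite ray tracing proxies from C++ instead of the console.
// Uses UE's console-variable API; RT still works for non-Nanite geometry.
#include "HAL/IConsoleManager.h"

void DisableNaniteRTProxies()
{
    if (IConsoleVariable* CVar = IConsoleManager::Get().FindConsoleVariable(
            TEXT("r.RayTracing.Geometry.NaniteProxies")))
    {
        CVar->Set(0); // same effect as the console command above
    }
}
```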
 
Really cool demo.
I think last year's demo, especially with the flying scene at the end, was more impressive. This demo is more dev-centric of course, since they wanted to show off all the tools and UI. The sound composition tool looks awesome.

Are they releasing the "Lumen in the Land of Nanite" demo with this UE5 early build?



I was arguing just a few pages back in this thread (and for months before) that additional system RAM could be used to mitigate the need for ultra-fast IO in many scenarios, and this seems to prove that point (with which not everyone agreed). It's obviously not ideal from a cost or initial-load point of view, but it's clearly a viable solution for those without fast SSDs, or until DirectStorage is available. DDR5 hitting the market later this year with double the density of DDR4 should make these RAM capacities much more commonplace going forwards too.

These specs scream lack of DirectStorage: high RAM to mitigate the slower IO, and additional CPU cores to mitigate the lack of a hardware or GPU-based decompressor.

I'd love to hear what kind of SSD utilisation the demo sees, both on initial load and then once the RAM is fully loaded up. My assumption would be that on high-RAM systems, once the RAM is filled, the IO requirements are pretty reasonable. But the less RAM you have, the higher they get.
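
Conceptually, what's being described is just a big RAM-side LRU cache of decompressed assets sitting in front of the SSD - something like the sketch below. Purely illustrative: all names are hypothetical, and this is not how UE5 actually structures its streaming.

```cpp
// Conceptual sketch (not engine code): keep decompressed assets in a large
// in-memory LRU cache so the SSD is only hit on a miss.
#include <cstdint>
#include <list>
#include <string>
#include <unordered_map>
#include <vector>

class DecompressedAssetCache
{
public:
    explicit DecompressedAssetCache(size_t budgetBytes) : budget(budgetBytes) {}

    // Returns the decompressed asset, loading + decompressing on a miss.
    const std::vector<uint8_t>& Get(const std::string& assetId)
    {
        auto it = index.find(assetId);
        if (it != index.end())
        {
            lru.splice(lru.begin(), lru, it->second); // mark most recently used
            return it->second->data;
        }

        lru.push_front({ assetId, LoadAndDecompressFromSSD(assetId) });
        index[assetId] = lru.begin();
        used += lru.front().data.size();

        // Evict least recently used entries once over budget (keep newest).
        while (used > budget && lru.size() > 1)
        {
            used -= lru.back().data.size();
            index.erase(lru.back().id);
            lru.pop_back();
        }
        return lru.front().data;
    }

private:
    struct Entry { std::string id; std::vector<uint8_t> data; };

    // Stub for the slow path: SSD read + CPU decompression.
    static std::vector<uint8_t> LoadAndDecompressFromSSD(const std::string&)
    {
        return std::vector<uint8_t>(1024); // placeholder payload
    }

    std::list<Entry> lru; // front = most recently used
    std::unordered_map<std::string, std::list<Entry>::iterator> index;
    size_t budget = 0;
    size_t used = 0;
};
```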

I think most of us agreed that gobbling up RAM to create a large buffer of pre-decompressed assets was going to be the most realistic method until Windows PCs' I/O catches up with the new-gen consoles. And what happens when there's not enough RAM is detailed geometry turning into blobs.
The bad news is that loading times aren't going away on the PC anytime soon, but at least we're getting the eye candy.


Either this is a bug, or Epic single-handedly made triangle-based raytracing and dedicated RT acceleration obsolete, because Lumen looks damn good.
Perhaps RT will make more sense to use on non-Nanite models? IIRC the characters aren't made using Nanite, as Epic is still working on getting Nanite to work on deformable geometry.
We're also only seeing RT shadows in there, right? Perhaps in UE5 the RT acceleration will only make sense for reflections, for example.
 
This engine does make me wish that CIG had gone with Unreal instead of CryEngine. This stuff would have worked great in space, giving a ton of detail to the ships (which are mostly what you see out there), and the ability to get really detailed on the planets too.
Oh well

I await the first games using this

Cue Duke Nukem-style engine switch in 3... 2...

Regards,
SB
 
Generating Quixel-quality assets procedurally surely is hard. ML does quite well here (I've seen many papers), though it needs samples (expensive), which limits creativity if we want to create non-earth-like worlds.

Wouldn't it be possible to train the ML engine using handcrafted, high-quality non-earth-like samples? I guess the question would be how many handcrafted samples would be needed to get decent procedural asset creation via ML.

Regards,
SB
 
Wouldn't it be possible to train the ML engine using handcrafted, high-quality non-earth-like samples? I guess the question would be how many handcrafted samples would be needed to get decent procedural asset creation via ML.

Regards,
SB

This question is one of the critical open questions in ML: how is it that humans can learn from very few samples while computers require many more?

I could see a game engine rendering in an ML-friendly way and then applying a style. Styles could be learnt from samples that artists make. How many artists, and how much time to create the samples, who knows.
 
UE5 uses SM5 by default, with an optional "SM6 Experimental" mode hidden in the settings. It also works with both D3D11 and D3D12 (both Nanite and Lumen!).

When you select H/W RT for Lumen it defaults to SM6, which probably explains why it recompiles shaders and requires a restart. I still haven't spent much time exploring those options, but I'm surprised they are supporting so many configs with this :yes:
 
Maybe I'm doing something wrong or not clicking certain developer options, but the demo is visually underwhelming. The terrain/canyon texturing isn't any better than certain areas of RDR2's Austin. And truth be told, the Quixel texturing has that poor/rough/stretched/uneven photography look at times. Mind you, I'm not bad-mouthing the tech (not yet, anyhow), just the demo's overall presentation as it stands.
 
Wouldn't it be possible to train the ML engine using handcrafted, high-quality non-earth-like samples? I guess the question would be how many handcrafted samples would be needed to get decent procedural asset creation via ML.
I share the question. But content creation seems a very promising application of ML; I guess we all have high hopes here.
Also, human artists have no idea what alien worlds should look like either. Photos from the Moon, Mars and Venus all show the same rocks. So maybe some Deep Dream in 3D could create stuff even beyond human imagination... :O

Edit: I realize I'm already bored of Nanite detail. Ready for the next hype! :D
 
I think last year's demo, especially with the flying scene at the end, was more impressive. This demo is more dev-centric of course, since they wanted to show off all the tools and UI. The sound composition tool looks awesome.

Absolutely, this one's more impressive than last year's, by default, since we can actually play it instead of watching a scripted corridor event in a video on the internet. Even purely technically it's still more impressive in many aspects.
 