Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

I'm not sure it's a fully reliable translation, as he says "Did not watch the whole 135 min lol."
We need more translations.

Anyway, this is very interesting, as the 970 Evo is a PCIe 3.0 drive, not even 4.0.
The careful disk placement mentioned could be like Spider-Man, with assets duplicated 200 times to reduce random access time.
 
...1M-tri meshes are not dense enough to retain all micro-details.
I think the problem there is actually in the sampling. If the sampler drew all 1 million (or however many are visible) triangles, it'd look like the normal-mapped version, but only the pixel-sized triangles are being drawn, ignoring all the variance at the sub-pixel level. It could also be an issue with the Quixel scan lacking the depth accuracy in its vertices that the normal map captures. In principle, sufficiently high-res geometry won't need normal maps, but of course for games that's not going to be true, as normal maps are a far more efficient way to embody that data.
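To make that sampling argument concrete, here's a toy numpy sketch (purely illustrative, nothing to do with Nanite's actual pipeline): point-sampling one micro-triangle per pixel throws away the sub-pixel shading variance, while averaging over the pixel footprint keeps it; that average is roughly the information a baked normal/roughness map preserves.

```python
# Toy 1D example: micro-geometry varies well below pixel size.
import numpy as np

SUBSAMPLES = 64                      # micro-triangles per pixel
PIXELS = 8
x = np.linspace(0, 2 * np.pi, PIXELS * SUBSAMPLES, endpoint=False)

# Hypothetical micro-detail: surface slope oscillates at sub-pixel frequency.
slope = 0.8 * np.sin(40 * x)
normals = np.stack([-slope, np.ones_like(slope)], axis=1)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

light = np.array([0.6, 0.8])                  # normalized directional light
shading = np.clip(normals @ light, 0, None)   # Lambert term per micro-sample

# (a) point sampling: keep only the micro-triangle under the pixel centre
point_sampled = shading[SUBSAMPLES // 2::SUBSAMPLES]
# (b) filtered: average over the pixel footprint, a stand-in for what baking
#     the detail into a filtered normal/roughness map achieves
filtered = shading.reshape(PIXELS, SUBSAMPLES).mean(axis=1)

print("point-sampled:", np.round(point_sampled, 3))
print("filtered     :", np.round(filtered, 3))
```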
 
I'm not sure it's a fully reliable translation, as he says "Did not watch the whole 135 min lol."
We need more translations.

Anyway, this is very interesting, as the 970 Evo is a PCIe 3.0 drive, not even 4.0.
I didn't watch the whole thing because a lot of it was not Lumen- or Nanite-related. But I did watch the more relevant parts, especially around the 53:00 mark.
 
This is exactly the type of thing Cerny wants to avoid in the future.
Virtualised geometry does that without needing to stream those assets in full. Virtualised geometry is the same as virtualised texturing in intent: it allows you to have a unique object (texture) per pixel while storing only the sub-mesh/texture tile that the pixel's data is part of.
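A minimal sketch of that residency idea, with made-up page sizes and function names; the point is that I/O is proportional to what's visible on screen, not to how big the asset is on disk:

```python
# Hedged sketch of virtual texturing (the same intent applies to geometry
# clusters): only tiles actually touched by visible pixels become resident.
PAGE_SIZE = 128          # texels per page side (an assumption)
resident = {}            # (mip, page_x, page_y) -> page data

def load_page(key):
    # Stand-in for the real I/O path: read one small tile from disk.
    return f"tile bytes for {key}"

def sample(u, v, mip):
    key = (mip, int(u // PAGE_SIZE), int(v // PAGE_SIZE))
    if key not in resident:           # page fault -> stream just this tile
        resident[key] = load_page(key)
    return resident[key]

# Shading three pixels that all land in one tile streams that tile once,
# regardless of how large the source texture (or mesh) is on disk.
for u, v in [(10, 20), (40, 90), (100, 100)]:
    sample(u, v, mip=0)
print(len(resident), "page(s) resident")   # -> 1
```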
 
Virtualised geometry does that without needing to stream those assets in full. Virtualised geometry is the same as virtualised texturing in intent: it allows you to have a unique object (texture) per pixel while storing only the sub-mesh/texture tile that the pixel's data is part of.

The engineers in that livestream implied that they avoided needing a high-speed SSD partly through careful disk layout. I have no idea what they meant specifically there.
 
Apparently it runs very well on an RTX 2080 laptop GPU (about 2070 Super performance) with a Samsung 970 Evo SSD; performance was in the range of 1440p at 40+ fps.

Since you mentioned Tutomos, he also said this:

Someone asked whether the 8 GB/s bandwidth of the PS5's SSD is true, and the developer said that's a question to ask Sony. He then said a high-performance SSD will definitely help streaming, because Nanite and Lumen require good I/O, but running the demo they showed doesn't require specs as high as the PS5's SSD.

https://www.resetera.com/threads/ti...-awesome-on-both.206223/page-10#post-34178337

So, XSX would maybe run this demo at a higher res, but the Nanite geometry and Lumen lighting tech would be scaled down somehow?
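For a sense of scale, some back-of-envelope arithmetic (the drive figures are typical published numbers, assumptions rather than anything stated in the talk):

```python
# Per-frame streaming budget at the demo's ~40 fps for two drives.
GB = 1e9
for name, read_gbps in [("970 Evo (PCIe 3.0)", 3.5), ("PS5 SSD (raw)", 5.5)]:
    per_frame_mb = read_gbps * GB / 40 / 1e6
    print(f"{name}: ~{per_frame_mb:.0f} MB streamable per frame at 40 fps")
```

Even the PCIe 3.0 drive leaves roughly 87 MB of fresh data per frame, which goes a long way when virtualised geometry and textures only stream what is actually visible.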
 
The engineers in that live cast implied that they avoided high speed SSD partly through careful disk layout. I have no idea what they meant specifically there.
Like with Spider-Man, where a bin is duplicated in 200 places on the disk so it can be accessed as soon as possible; with an SSD I suppose you can do something similar to reduce latency when accessing the required data through the flash controller. That's why the 12 channels of Sony's custom flash controller come in handy.
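A toy simulation of why that duplication trick pays off on a spinning disk (the numbers are illustrative, not from Insomniac): with the head at a random position, duplicating an asset at N spots shrinks the expected seek distance to the nearest copy.

```python
# Model the platter as the interval [0, 1]; seek cost ~ distance travelled.
import random

def mean_seek(copies, trials=20_000):
    positions = [random.random() for _ in range(copies)]
    total = 0.0
    for _ in range(trials):
        head = random.random()
        total += min(abs(head - p) for p in positions)
    return total / trials

for n in (1, 10, 200):
    print(f"{n:>3} copies -> mean seek distance {mean_seek(n):.4f}")
# (Whether anything similar helps on an SSD is debated a few posts below.)
```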
 
Nobody in their right mind will use such a dense model in a real game anyway. The bloody thing is a tech demo.
The whole "we don't need normal maps" is also marketing bollocks (to their credit they only mention this when talking about the 33M statue) given that 1M-tri meshes are not dense enough to retain all micro-details.
Here's a 1M-tri Quixel rock without & with the normal map applied.
[Image: Annotation-2020-05-16-182651.jpg]
And what about the textures?
The statue was also textured with color and variable roughness. Have they applied UVs to those super-high-poly models? Or did it just read the color info as polypaint?
The environment looked highly detailed. The shadows are probably popping out the geometric detail. The Quixel example you show above lacks shadows, and that's probably why the detail is not as visible. The normal is also a bit over-applied, hence why the top part of the rock doesn't look very natural (a problem I often get when trying to pop some details out in my renders when I have no height maps).
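For what it's worth, a common way to dial back an over-applied normal map (standard practice, not anything from the demo) is to blend the sampled tangent-space normal toward flat before renormalizing:

```python
# Blend a sampled tangent-space normal toward (0, 0, 1) by a strength factor.
import numpy as np

def scale_normal(n, strength):
    flat = np.array([0.0, 0.0, 1.0])
    blended = flat + strength * (n - flat)   # lerp(flat, n, strength)
    return blended / np.linalg.norm(blended)

n = np.array([0.5, 0.3, 0.81])               # over-strong sampled normal
print(scale_normal(n, 0.5))                   # gentler bump at half strength
```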
 
And what about the textures?
The statue was also textured with color and variable roughness. Have they applied UVs to those super-high-poly models? Or did it just read the color info as polypaint?
The environment looked highly detailed. The shadows are probably popping out the geometric detail. The Quixel example you show above lacks shadows, and that's probably why the detail is not as visible. The normal is also a bit over-applied, hence why the top part of the rock doesn't look very natural (a problem I often get when trying to pop some details out in my renders when I have no height maps).

Reddit post: Pixels? Triangles? What's the difference? - How (I think) Nanite renders a demo with 10¹¹ tris


I think this explanation is a good one regarding the UVs. It is long, but read the whole thing.
 
with an SSD I suppose you can do something similar to reduce latency when accessing the required data through the flash controller.
That makes no sense to me. SSDs access data directly, so position within the NAND won't affect seek time. With HDDs, or spinning disks, you have to move the head to the data, which has a significant influence on latency, so physically having the data closer by reduces seek times.

There should be zero reason to duplicate data on SSDs, unless I don't understand some underlying intricacies.
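Some rough access-time arithmetic backs this up (typical figures, assumed rather than measured):

```python
# Random-read rates implied by typical access times.
hdd_seek_s = 0.008       # ~8 ms average seek + rotational latency
ssd_read_s = 0.00008     # ~80 us NVMe random read

for name, t in [("HDD", hdd_seek_s), ("NVMe SSD", ssd_read_s)]:
    print(f"{name}: ~{1 / t:,.0f} random reads/s")
# HDD: ~125/s vs NVMe: ~12,500/s -- duplication only buys back the former.
```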
 