Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

I believe all SVT systems, SFS or not, will contain a contingency plan for guessing the colour of a texture that hasn't arrived in time. It would only be for a frame or two at most, but the alternative is stalling the whole process while waiting for that texture to arrive.

By my understanding, SFS will just use the resident lower LOD tile until the required one is present.
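To picture that fallback, here's a tiny C++ sketch of the idea (my own toy code, not UE or DirectX code; ResidencyMap and the field names are invented): the sampler clamps the requested mip to the finest mip that is actually resident for that tile, so you get a coarser tile instead of a stall.

#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical per-texture residency map: for each tile, the finest mip level
// currently held in the physical tile pool. A very low mip is assumed to
// always be resident, so there is always something to fall back to.
struct ResidencyMap {
    int tilesX = 0, tilesY = 0;
    std::vector<uint8_t> finestResidentMip; // indexed [tileY * tilesX + tileX]

    int FinestResident(int tileX, int tileY) const {
        return finestResidentMip[tileY * tilesX + tileX];
    }
};

// A higher mip index means lower resolution; clamping the request to the
// finest resident mip returns a coarser, already-loaded tile instead of
// stalling the frame while the ideal mip streams in.
int SelectMipToSample(const ResidencyMap& map, int tileX, int tileY, int requestedMip) {
    return std::max(requestedMip, map.FinestResident(tileX, tileY));
}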
 
A technique called Sampler Feedback Streaming - SFS - was built to more closely marry the memory demands of the GPU, intelligently loading in the texture mip data that's actually required with the guarantee of a lower quality mip available if the higher quality version isn't readily available, stopping GPU stalls and frame-time spikes. Bespoke hardware within the GPU is available to smooth the transition between mips, on the off-chance that the higher quality texture arrives a frame or two later.

This probably isn't available on PC. Yes, I would agree: it should use the resident lower-quality mip data. This would only apply in the situation where you're generating mips offline as opposed to at runtime. I'm unsure whether this applies to what UE5 did. It is possible that they could be running some hybrid SVT/RVT system.
 

Yep. And that's the point of sampler feedback: there is no guesswork required, as you know exactly what was sampled and where. Sampler feedback is already available in some Turing cards. It's only the method for updating residency maps and the bespoke texture filters that are patented MS technology.
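Roughly, the data flow could look like this (a CPU-side sketch of the concept only, not the actual D3D12 sampler feedback API; FeedbackEntry and the other names are invented): the GPU records which tile it sampled and which mip it wanted there, the CPU reads that back and only requests tiles that were genuinely sampled and aren't resident yet.

#include <cstdint>
#include <vector>

// Hypothetical feedback entry written by the GPU: which tile was sampled and
// which mip the sampler actually wanted. Nothing has to be guessed on the CPU.
struct FeedbackEntry {
    uint32_t tileX, tileY;
    uint8_t  requestedMip;
};

struct TileRequest {
    uint32_t tileX, tileY;
    uint8_t  mip;
};

// Turn last frame's feedback into streaming requests: only tiles that were
// really sampled, and only if the wanted mip isn't already resident.
std::vector<TileRequest> BuildStreamingRequests(
    const std::vector<FeedbackEntry>& feedback,
    const std::vector<std::vector<uint8_t>>& finestResidentMip) // indexed [y][x]
{
    std::vector<TileRequest> requests;
    for (const FeedbackEntry& f : feedback) {
        const uint8_t resident = finestResidentMip[f.tileY][f.tileX];
        if (f.requestedMip < resident) // a finer mip was wanted than what we hold
            requests.push_back({f.tileX, f.tileY, f.requestedMip});
    }
    return requests;
}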
 
So if it doesn't arrive in time, it's going to stall.

But I don't worry about this happening on PC; it's a much bigger concern on console with the shared memory pool.
Once again, not sure if this will apply to UE5.

Some of what we saw out there, and the 'discrepancies' people are identifying, might be the RVT system here.

You should watch this video here on RVT.
Combine this with SVT, and you can sort of see how the geometry might be dividing.

You can use RVT and SVT together in a hybrid setup to mitigate each other's drawbacks.

***
Streaming Virtual Texture Build
When an RVT covers a large world with many Actors, rendering to the low resolution mips of the RVT can be a slow operation. Also, in this scenario, world Actors need to be permanently resident to be available to render to low mips that represent distant parts of the world, which can be expensive for memory.

In this situation, it's more efficient to bake and stream the low resolution mips of an RVT. The higher resolution mips can still be rendered at runtime. In this way, a single virtual texture can make the best use of both Streaming Virtual Texturing and Runtime Virtual Texturing approaches.

Enabling SVT with RVT
To add streaming virtual texture support to an RVT:
  1. In the RVT Asset, set the number of low mips that you would like to stream with the Number of low mips to stream to the virtual texture property.
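To make the split concrete, here's a small sketch (my own, assuming the usual convention that mip 0 is the finest level and that numStreamedLowMips matches the value set on the RVT asset): the coarse end of the mip chain comes from baked, streamed data, and everything finer is rendered into the RVT at runtime.

// Sketch of the SVT/RVT split for a single virtual texture.
// Assumption: mip 0 is the finest level and mip (numMips - 1) the coarsest.
enum class MipSource { RenderedAtRuntime, StreamedFromDisk };

MipSource GetMipSource(int mip, int numMips, int numStreamedLowMips) {
    // The last 'numStreamedLowMips' levels of the chain are baked and streamed;
    // they cover distant parts of the world, so those Actors don't have to stay
    // resident just to render coarse mips.
    const int firstStreamedMip = numMips - numStreamedLowMips;
    return (mip >= firstStreamedMip) ? MipSource::StreamedFromDisk
                                     : MipSource::RenderedAtRuntime;
}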
 
@Shifty Geezer
Unreal also sets limits on its tile pools, as I can see here:

[/Script/Engine.VirtualTexturePoolConfig]
+Pools=(SizeInMegabyte=36, TileSize=136, Format=PF_DXT1)
+Pools=(SizeInMegabyte=72, TileSize=136, Format=PF_BC5)
+Pools=(SizeInMegabyte=72, TileSize=136, Format=PF_DXT5)
+Pools=(SizeInMegabyte=34, TileSize=264, Format=PF_DXT1)
+Pools=(SizeInMegabyte=68, TileSize=264, Format=PF_BC5)
+Pools=(SizeInMegabyte=68, TileSize=264, Format=PF_DXT5)
This configuration translates to pools of 64x64 128-size tiles, or 32x32 256-size tiles, at a cost of approximately 100MB.

So I think that to ensure your texture pools aren't spilling out of control, they are defined in UE: you can have a pool of 64x64 128-size tiles, or 32x32 256-size tiles, and that pool would be around 100MB.
If we think geometry is doing something similar, then no matter how fast your SSD is, you will still want to define your pool maximums to ensure that the GPU has enough room to do its rendering. And this might be why you saw what you saw, with the textures/geometry changing from one frame to the next: something else needed to get out before it was allowed in.
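The pool sizes do line up with the tile maths. A quick back-of-envelope check (assuming the 136/264 tile sizes are 128/256 plus a border, DXT1/BC1 at 0.5 bytes per pixel, and BC5/DXT5 at 1 byte per pixel):

#include <cstdio>

// Back-of-envelope check of the pool config above.
int main() {
    struct Pool { const char* format; int sizeMB; int tileSize; double bytesPerPixel; };
    const Pool pools[] = {
        {"PF_DXT1", 36, 136, 0.5}, {"PF_BC5", 72, 136, 1.0}, {"PF_DXT5", 72, 136, 1.0},
        {"PF_DXT1", 34, 264, 0.5}, {"PF_BC5", 68, 264, 1.0}, {"PF_DXT5", 68, 264, 1.0},
    };
    for (const Pool& p : pools) {
        const double tileBytes = p.tileSize * p.tileSize * p.bytesPerPixel;
        const double tiles = (p.sizeMB * 1024.0 * 1024.0) / tileBytes;
        std::printf("%-8s %3d MB, %dpx tiles -> ~%.0f tiles\n",
                    p.format, p.sizeMB, p.tileSize, tiles);
    }
    return 0;
}

The 136 pools come out to roughly 4096 tiles (64x64) and the 264 pools to roughly 1024 tiles (32x32), which matches the interpretation above.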

@Graham where art thou. Please give info on VT on UE ;) Fix my inaccuracies....
 
Agreed
If a short tech demo weighs hundreds of GB, then we will not see those kinds of assets in a real game, because they ship games, not tech demos.

The way they are propping up their Nanite tech reminded me of the first tech demos of UE3. There, they often talked about individual models with millions of polys of source geometry each. What they really meant was that the high-poly mesh had millions of polys, and that UE3 had some magic sauce that made their low-poly meshes look as if those millions of polygons were in there, even if technically they weren't. That magic sauce was normal maps.

It's the same thing here. They are talking of source geometry numbers to generate buzz and get artists impressed. It does not necessarily mean all that data is present in the final game, just that artists feed that large data to Unreal and it takes care of the rest (sort of, kinda, I assume).
 
They say outright that it's scaled down in-game.
 
So if it doesn't arrive in time, it's going to stall.

No? The whole point of adding texture filters is to address this particular scenario.
 

They say that Nanite is processing billions of triangles and converting them down to 20 million triangles, which are then somehow sampled to generate around one triangle per pixel across most of the screen. So the assets do have their tens of millions of polygons; it's just that, in a Reyes-like manner, they are converted to what the screen can actually display. If you have 100 polygons per pixel it doesn't matter, since the single pixel is the fundamental unit, the minimum unit that can be seen.
 
It would be the opposite of Reyes.

Reyes is all about tessellating polygons/patches until they are pixel-sized.
Nanite sounds like it is decimating objects until polygons are around pixel-sized.
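A toy version of how such a decimation-driven picker might work (pure speculation on my part, not Epic's algorithm; LodLevel and the error metric are assumptions): pick the coarsest precomputed level whose geometric error still projects to under about a pixel.

#include <cmath>
#include <vector>

// Hypothetical precomputed LOD chain for a mesh: each level is a decimated
// version of the previous one, with a stored geometric error in world units.
struct LodLevel {
    int    triangleCount;
    double worldError; // max deviation from the full-detail surface
};

// Pick the coarsest LOD whose projected error stays under ~1 pixel, so drawn
// triangles end up roughly pixel-sized instead of tessellating up like Reyes.
int SelectLod(const std::vector<LodLevel>& lods, double distance,
              double screenHeightPx, double verticalFovRadians)
{
    // World-space size covered by one pixel at this distance.
    const double pixelWorldSize =
        2.0 * distance * std::tan(verticalFovRadians * 0.5) / screenHeightPx;

    // lods[0] is the full-detail mesh; later entries are progressively coarser.
    int chosen = 0;
    for (int i = 1; i < static_cast<int>(lods.size()); ++i) {
        if (lods[i].worldError <= pixelWorldSize)
            chosen = i; // a coarser level still looks identical at this distance
    }
    return chosen;
}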

Most likely Nanite also has the ability to load objects in very efficiently, so the full-detail version doesn't have to be in memory all the time, or it is compressed so much that memory use is not a problem (I really doubt the latter).

It will be interesting to see what the software rasterizer will output.
My guess would be just IDs, some barycentric/UV coordinates and such.
A separate pass would be responsible for texturing, materials, etc.
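If that guess is right, the rasterizer's per-pixel output could be as small as a packed 'visibility buffer' entry, something like this sketch (again my speculation, not Epic's actual format): depth plus an instance/triangle ID squeezed into 64 bits, with texturing and materials resolved by a later pass.

#include <atomic>
#include <cstdint>
#include <vector>

// Pack depth into the high 32 bits so comparing the whole 64-bit value also
// compares depth first. Assumes 'depth' is in [0,1] and encoded so that a
// larger value means closer (e.g. reverse-Z), letting a max double as the
// depth test. The ID layout in the low bits is a toy example.
inline uint64_t PackVisibility(float depth, uint32_t instanceId, uint32_t triangleId) {
    const uint32_t depthBits = static_cast<uint32_t>(depth * 4294967295.0);
    const uint32_t id = (instanceId << 16) | (triangleId & 0xFFFF);
    return (static_cast<uint64_t>(depthBits) << 32) | id;
}

// One pixel of the software rasterizer's output. A later, separate pass would
// fetch the triangle via the ID and do the texturing/material work there.
void WritePixel(std::vector<std::atomic<uint64_t>>& visBuffer, int pixelIndex,
                float depth, uint32_t instanceId, uint32_t triangleId) {
    const uint64_t packed = PackVisibility(depth, instanceId, triangleId);
    uint64_t current = visBuffer[pixelIndex].load(std::memory_order_relaxed);
    while (packed > current &&
           !visBuffer[pixelIndex].compare_exchange_weak(current, packed)) {
        // Another triangle raced us; 'current' now holds its value, retry only
        // if we are still the closer surface.
    }
}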
 

Just like in 2004, Epic said their engine was taking multi-million-triangle assets and converting them to lower-poly, normal-mapped models. And it wasn't a lie. Their engine did do that, just not all in real time, which is smart.
I don't doubt Nanite is taking billion-triangle meshes and converting them to something that can be rendered on screen. They said that is what it does, and I believe them. But part of it may be done offline as a pre-process. They never specified the details of how it works, including how much of the magic is done in real time and how much is pre-processed. Stay open-minded.
 

Whatever format assets are held in, part of the pipeline is surely converting them as they're imported into the editor. Magic done! There's then little difference between the in-editor and in-game performance characteristics, including the amount of storage being used.

Since they're "inspired by" but not the same as Dreams' engine, I'm half expecting assets to be stored as creation steps instead of a compressed model. The difference being that they're automating working through the steps, using the imported asset as a guide template.
 
There's no way you could recreate the creation steps on the fly. You'd need to recreate the entire model rather than just part of it, at which point you need all the triangles in RAM. I think the 'inspired by' part is lighting and SDFs. Dreams has nothing in common with the triangulated assets of Quixel, which are the massive datasets being drawn here.
 

I wasn't suggesting they store the Megascan creation process, more that UE5's native asset format is akin to Dreams'. Dreams 'stores' assets as creation steps, which gives them a very low footprint for their complexity. The difference with UE5 could be that it derives its creation steps by 'matching' an existing asset, rather than an artist doing it manually. Some sort of whizzy shape-fitting algorithm.

Just an idea. Or SDF lighting as you say. :D
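Purely to illustrate what 'creation steps' could mean as a storage format (a made-up toy representation, nothing to do with how UE5 or Dreams actually store assets): the asset is a short recipe of operations that gets replayed to reconstruct the shape, which can be far smaller than the triangles it produces.

#include <algorithm>
#include <cmath>
#include <vector>

// Toy "creation steps" idea: instead of storing every triangle, store the
// operations that produced the shape and replay them on demand.
struct Vec3 { float x, y, z; };

static float Length(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

struct CreationStep {
    enum class Op { AddSphere, SubtractSphere } op;
    Vec3  center;
    float radius;
};

// An asset is just the recipe: a few bytes per step, no matter how dense the
// geometry eventually generated from it is.
using AssetRecipe = std::vector<CreationStep>;

// Replay the recipe as a signed distance field; a mesher could then produce
// triangles at whatever density the target platform wants.
float EvaluateRecipe(const AssetRecipe& recipe, const Vec3& p) {
    float d = 1e9f; // "empty space" until the first step lands
    for (const CreationStep& s : recipe) {
        const Vec3 rel{p.x - s.center.x, p.y - s.center.y, p.z - s.center.z};
        const float sphere = Length(rel) - s.radius;
        d = (s.op == CreationStep::Op::AddSphere) ? std::min(d, sphere)   // union
                                                  : std::max(d, -sphere); // carve out
    }
    return d; // negative == inside the final shape
}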
 