Unreal Engine 5, [UE5 Developer Availability 2022-04-05]

How could sampler feedback help here?

By only loading textures that are viewed by the player and streaming them much more efficiently than ever before. Virtual Texturing sucks in comparison, SFS does so much more.

UE5 is still built with old texture streaming in mind.
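Roughly, the idea boils down to something like this (a CPU-side sketch with invented names, not the real D3D12 sampler feedback API): the feedback pass records which mips/tiles were actually sampled, and the streamer loads only those instead of whole mip chains.

```cpp
// Rough sketch of the concept only - FeedbackEntry and the tile granularity are
// invented here; the real thing uses D3D12 sampler feedback plus tiled resources.
#include <set>
#include <tuple>
#include <vector>

struct FeedbackEntry { int textureId; int mip; int tileX; int tileY; };

// Collect the set of tiles the frame actually touched; everything else can stay on disk.
std::set<std::tuple<int, int, int, int>>
requestedTiles(const std::vector<FeedbackEntry>& feedback)
{
    std::set<std::tuple<int, int, int, int>> tiles;
    for (const FeedbackEntry& f : feedback)
        tiles.insert({ f.textureId, f.mip, f.tileX, f.tileY });
    return tiles; // the streamer loads these (plus a coarse fallback mip) instead of full mip chains
}
```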

 
Oh, I have overlooked one thing: overlaps of many objects. This means BLASes will overlap like crazy, and this might thrash DXR performance.
(Though this applies to SDF too!)

...but for SDF, you can just choose the one with minimum distance and reject all other candidates.
With BVH, you have to traverse all of them, and only after all hits are known do you know the closest.

So that's a true SDF advantage. I hadn't thought about this earlier - interesting. UE5 + HW RT might never do well?

Still, IMO the proper solution is to remove overlaps offline, so they don't hurt runtime performance at all. That's what I do, but it has the big downside of breaking easy instancing.
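The asymmetry in made-up code (nothing here is engine code, just to illustrate the point): overlapping SDF instances fold into one distance with a running min, while with a BVH every overlapping BLAS the ray enters has to be traversed before the closest hit is known.

```cpp
#include <algorithm>
#include <cfloat>
#include <vector>

struct Hit { float t = FLT_MAX; int instance = -1; };

// SDF style: evaluate each overlapping candidate, keep the minimum, discard the rest on the fly.
float sceneDistance(const std::vector<float>& distancePerOverlappingInstance)
{
    float d = FLT_MAX;
    for (float di : distancePerOverlappingInstance)
        d = std::min(d, di); // one comparison per candidate, no further work on rejected ones
    return d;
}

// BVH style: each overlapping BLAS must be traversed to completion (the expensive part,
// omitted here); only once all candidate hits exist can the closest one be selected.
Hit closestHit(const std::vector<Hit>& hitPerOverlappingBlas)
{
    Hit best;
    for (const Hit& h : hitPerOverlappingBlas)
        if (h.t < best.t)
            best = h;
    return best;
}
```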
 
By only loading textures that are viewed by the player and streaming them much more efficiently than ever before. Virtual Texturing sucks in comparison, SFS does so much more.
But this does not help RT - you need offscreen and occluded stuff anyway for that.
Notice this makes the whole idea of streaming based on visibility a lot less attractive, and for streaming based on distance you don't need sampler feedback, which has a cost of its own.

EDIT: Instancing is also an argument for the latter. With many instances of the same data, you likely have use for it somewhere else, even if one instance is completely occluded.
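As a toy example of why distance-based streaming is so cheap (constants and names are made up): the mip a surface needs can be estimated directly from distance and texel density, no feedback pass required.

```cpp
#include <algorithm>
#include <cmath>

// Estimate which mip a surface needs purely from distance - no sampler feedback involved.
// texelsPerMeter and maxMip are asset properties; fovY is the vertical field of view in radians.
int requiredMip(float distance, float texelsPerMeter, float screenHeightPx, float fovY, int maxMip)
{
    // Approximate on-screen size of one texel at this distance.
    float pxPerTexel = screenHeightPx / (2.0f * distance * std::tan(fovY * 0.5f) * texelsPerMeter);
    // Each mip halves the resolution; if a texel already covers >= 1 pixel, mip 0 is needed.
    float mip = std::floor(-std::log2(std::max(pxPerTexel, 1e-6f)));
    return std::clamp(static_cast<int>(std::max(mip, 0.0f)), 0, maxMip);
}
```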
 
But this does not help RT - you need offscreen and occluded stuff anyway for that.
Notice this makes the whole idea of streaming based on visibility a lot less attractive, and for streaming based on distance you don't need sampler feedback, which has a cost of its own.

EDIT: Instancing is also an argument for the latter. With many instances of the same data, you likely have use for it somewhere else, even if one instance is completely occluded.

SFS is just about textures, nothing else! You don't trace rays against textures but against object geometry, so offscreen and occluded stuff can still be traced - remember, we are just talking about textures here. And SFS helps RT indirectly: when less VRAM is occupied by textures, there is more space left for RT. But you are right about streaming over long distances; Epic came up with a solution for that, I believe it's called World Partition?
 
SFS is just about textures, nothing else! You don't trace rays against textures but against object geometry, so offscreen and occluded stuff can still be traced - remember, we are just talking about textures here. And SFS helps RT indirectly: when less VRAM is occupied by textures, there is more space left for RT. But you are right about streaming over long distances; Epic came up with a solution for that, I believe it's called World Partition?
??? Maybe I misinterpret your post, but you need textures for RT. You need to know whether the texture is transparent and what color it has (at that position, etc.). Otherwise you wouldn't get colored light, and you would have to do things multiple times. If you only do hit detection, then you might not need the textures, because you might only want to know what is in view space at the end. But alpha textures become a problem again there.
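To illustrate the alpha point with a toy example (stand-in types, not DXR or engine code): a ray hitting an alpha-tested surface can only be accepted or rejected after fetching opacity at the hit UV, so at least some texture data must be resident even for offscreen or occluded geometry.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Toy opacity texture: 8-bit alpha, point-sampled.
struct OpacityTexture { int width = 1, height = 1; std::vector<uint8_t> alpha{ 255 }; };

float sampleOpacity(const OpacityTexture& t, float u, float v)
{
    int x = std::clamp(static_cast<int>(u * t.width),  0, t.width  - 1);
    int y = std::clamp(static_cast<int>(v * t.height), 0, t.height - 1);
    return t.alpha[y * t.width + x] / 255.0f;
}

// "Hit or no hit" on foliage, fences etc. is only decidable after this texture fetch,
// regardless of whether the surface is visible in the primary view.
bool acceptHit(const OpacityTexture& opacity, float u, float v)
{
    return sampleOpacity(opacity, u, v) >= 0.5f;
}
```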

But yes, SFS is strictly about textures, while the geometry side is more about mesh shaders etc.

BTW, I don't think the Unreal Engine demo uses that much texture data for the terrain. It looks more or less like the same texture used all over the place (just morphed a bit over the terrain structure). And it's all in the same color space, so texture compression should be very efficient.

What is really missing in this demo are cities and ... well, different stuff in the scene.
 
??? Maybe I misinterpret your post, but you need textures for RT. You need to know whether the texture is transparent and what color it has (at that position, etc.). Otherwise you wouldn't get colored light, and you would have to do things multiple times. If you only do hit detection, then you might not need the textures, because you might only want to know what is in view space at the end. But alpha textures become a problem again there.

But yes, SFS is strictly about textures, while the geometry side is more about mesh shaders etc.

BTW, I don't think the Unreal Engine demo uses that much texture data for the terrain. It looks more or less like the same texture used all over the place (just morphed a bit over the terrain structure). And it's all in the same color space, so texture compression should be very efficient.

What is really missing in this demo are cities and ... well, different stuff in the scene.

You are right, I completely forgot about that.

Wouldn't it be possible to store that information in some sort of dummy texture that is extremely low res, so it barely uses memory? There should be some ways to make SFS work efficiently with ray tracing, although some clever programming would be needed.
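Something along those lines seems plausible - a purely hypothetical sketch: keep a tiny, always-resident mip tail per texture as the fallback for ray hits, and let the feedback-driven tiles cover only what the primary view needs.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical layout: a small mip tail that never leaves memory, plus on-demand tiles.
struct StreamedTexture
{
    std::vector<uint8_t> residentMipTail;  // e.g. the 64x64-and-smaller mips, always loaded
    std::vector<uint8_t> streamedTiles;    // high-res tiles, loaded based on sampler feedback
};

// Ray-hit shading falls back to the mip tail whenever the requested tile is not resident,
// so occluded/offscreen hits still get approximate color and opacity.
const std::vector<uint8_t>& textureDataForHit(const StreamedTexture& t, bool tileIsResident)
{
    return tileIsResident ? t.streamedTiles : t.residentMipTail;
}
```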
 
But you are right about streaming over long distances; Epic came up with a solution for that, I believe it's called World Partition?
That's no new invention. It's the most trivial and obvious way to stream open worlds, used for ages in many games. (But IDK what UE4 did here - maybe manual partitions.)
but you need textures for RT.
For GI they probably use the 'surface cache' over the full-res textures. I guess also for the reflections. I can't imagine they would execute material shaders for hit points either - that feels inefficient. Likely the surface cache has some lower-res textures and a standard PBR model.
BTW, I don't think the Unreal Engine demo uses that much texture data for the terrain. It looks more or less like the same texture used all over the place (just morphed a bit over the terrain structure). And it's all in the same color space, so texture compression should be very efficient.
What is really missing in this demo are cities and ... well, different stuff in the scene.
The textures are per model and not shared across multiple models. Sharing would be hard to achieve with photogrammetry.
Instead they use many instances of those scanned models and composite a scene from that. Thus it's necessary to have similar colors, so the patchwork does not become visible. Overlapping many models is also necessary to achieve good compositions.

Yeah, it would have been nice to show something other than rocks this time. But even owning Quixel, the available models are still limited. And modeling such detail manually is lots of work.
Curious how this evolves to games. Not everyone might be able to utilize those insane detail levels. I want to see a detailed spaceship. :D
 
I liked this video about photogrammetry workflow and terrain modeling:
It's a bit old, and some things can be automated. E.g. Unity's de-lighting tool seems good.
But it's still a lot of work to make those assets.
 
I have some Nanite assets now. Awesome :D

Looking at the debug view of triangle clusters, I can see the hierarchy when moving the camera back and forth.
It seems cluster boundaries are shared with the next lower LOD, but not with the one after that. So the red X edges are no longer relevant after switching to the green LOD. But all red clusters within the green island have to switch at once to prevent cracks.
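The 'switch at once' rule could look something like this (names and the decision rule are guessed from the debug view, not taken from UE5 source): clusters sharing boundary edges form a group, and the LOD decision is made per group so the shared edges always stay consistent.

```cpp
#include <vector>

struct Cluster { float screenSpaceError = 0.0f; };        // error if this cluster drops to the coarser LOD
struct ClusterGroup { std::vector<Cluster> clusters; };   // clusters sharing boundary edges

// One decision for the whole group: either every member switches to the coarser LOD,
// or none does - otherwise the shared boundary edges would no longer line up and crack.
bool groupCanUseCoarserLod(const ClusterGroup& group, float errorThreshold)
{
    for (const Cluster& c : group.clusters)
        if (c.screenSpaceError > errorThreshold)
            return false; // one cluster still needs the finer LOD, so all stay fine
    return true;
}
```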

At least on my Vega the resolution is not insane. Looks like the HW rasterizer would handle this quite well. Triangles are mostly still quite 'large'.
Culling happens at cluster level. I can imagine they sort clusters by distance, draw a batch of clusters, then build a Z pyramid to compute-cull the next batch.
Not sure if there is indeed drawing from within compute shaders going on.
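If it really works like that, the loop might look roughly like this (every function here is a placeholder stub for what would be GPU work): sort clusters front to back, draw a batch, rebuild the Z pyramid from the depth so far, and use it to cull the next batch.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Cluster { float distance = 0.0f; /* bounds, triangle data, ... */ };

// Placeholder stubs for what would really run on the GPU.
static bool occludedByZPyramid(const Cluster&) { return false; } // test bounds vs current depth mips
static void drawCluster(const Cluster&) {}                       // SW or HW rasterization of the cluster
static void rebuildZPyramid() {}                                 // mip chain of the depth rendered so far

void drawAllClusters(std::vector<Cluster>& clusters, std::size_t batchSize)
{
    std::sort(clusters.begin(), clusters.end(),
              [](const Cluster& a, const Cluster& b) { return a.distance < b.distance; });

    for (std::size_t first = 0; first < clusters.size(); first += batchSize)
    {
        const std::size_t last = std::min(first + batchSize, clusters.size());
        for (std::size_t i = first; i < last; ++i)
            if (!occludedByZPyramid(clusters[i])) // clusters hidden by nearer batches are skipped
                drawCluster(clusters[i]);
        rebuildZPyramid();                        // depth from this batch culls the next one
    }
}
```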

Can't wait for somebody to do a frame analysis with a gfx debugger...
 
At least on my Vega the resolution is not insane. Looks like the HW rasterizer would handle this quite well. Triangles are mostly still quite 'large'.

Haven't seen anyone play with them yet, but it looks like there are console commands for how much Nanite is rendered in SW vs. HW. Best performance probably depends on the platform?


Edit: to others, regarding ray tracing: Lumen is great but has enormous drawbacks. It doesn't do sharp, high-quality bounces from offscreen (maybe not noticeable in most cases), the shadow maps are amazing but nowhere near as good as RT shadows, the bounces accumulate slowly, etc. Lumen is great because it looks amazing, and it's a bundle of older techniques that are easily adaptable to work with Nanite, but it doesn't "kill" RT.

I think if there's an RT killer, it will be Nanite - if everyone switches over to rendering schemes that value triangle counts over data structures that are easy to raytrace against.
 
Tbh Lumen looks a lot like tech aimed at running on PS4/XBO gen hardware, which explains nicely why it does everything this way with such drawbacks.
I'm pretty sure we'll see a Lumen 2.0 down the line which will drop most of the DX11 compatibility in favor of more modern DX12/RT approaches. They kinda started implementing these features already, but they aren't really a prime path in Lumen yet.
 