How could sampler feedback help here?

> How could sampler feedback help here?
Another reason to finally implement Sampler Feedback - it would free up a ton of VRAM for raytracing.
Oh, I have overlooked one thing: overlaps of many objects. This means BLASes will overlap like crazy, and this might thrash DXR performance.
(Though this applies to SDF too!)
But this does not help RT - you need offscreen and occluded stuff anyways for that.

> But this does not help RT - you need offscreen and occluded stuff anyways for that.
By only loading textures that are viewed by the player and streaming them much more efficiently than ever before. Virtual Texturing sucks in comparison; SFS does so much more.

Notice this makes the whole idea of streaming based on visibility a lot less attractive, and for streaming on distance you don't need sampler feedback, which has a cost too.
EDIT: Instancing also adds an argument for the latter. With many instances of the same data, you likely have a use for it somewhere else, even if one instance is completely occluded.
??? Maybe I misinterpret your post, but you need textures for RT. You need to know if the texture is transparent, what color it has at that position, etc. Otherwise you wouldn't get colored light, and you would need to do things multiple times. If you just do hit detection, then you might not need the textures, because you might only want to know what is in view space at the end. But there alpha textures might become a problem again.

> ??? Maybe I misinterpret your post, but you need textures for RT. [...]
SFS is just about textures, nothing else! You don't trace rays against textures, but against object geometry, so offscreen and occluded stuff can still be traced - remember, we are just talking about textures here. And SFS helps RT indirectly: when you have less VRAM occupied by textures, you have more space for RT. But you are right about streaming in the long distance - there Epic invented a solution, I believe it's called World Partition?
But yes, SFS is strictly about textures, while the geometry side is more about mesh shaders etc.
Btw, I don't think that the Unreal Engine demo uses that much texture data for the terrain. It looks more or less like the same texture used all over the place (just warped a bit over the terrain structure), and all in the same color space, so texture compression should be very efficient.
What is really missing in this demo are cities and... well, different stuff in the scene.
> But you are right about streaming in the long distance - there Epic invented a solution, I believe it's called World Partition?
That's no new invention. It's the most trivial and obvious way to stream open worlds, used for ages in many games. (But IDK what UE4 did here - maybe manual partitions.)
> but you need textures for RT.
For GI they probably use the 'surface cache' rather than the full-res textures - I guess also for the reflections. Can't imagine they would execute material shaders for hit points either, which feels inefficient. Likely the surface cache has some lower-res textures and a standard PBR model.
> btw, I don't think that the Unreal Engine demo uses that much texture data for the terrain. [...]
> What is really missing in this demo are cities and... well, different stuff in the scene.
The textures are per model and not shared across multiple models. Sharing would be hard to achieve with photogrammetry.

At least on my Vega the resolution is not insane. Looks like the HW rasterizer would handle this quite well - triangles are mostly still quite 'large'.
> We need the ability to partially update BVHs to efficiently handle this scale.
Yes, yes, yes, yes!!!! Tell them!