Curious... whatever is off-screen is supposed to be culled to save performance. But with ray tracing, reflective surfaces receive information from whatever is off-screen.
A funny bug caused by the fact that they use occlusion data to decide whether or not to animate some characters.
Also, the reflections often appear to use lower-LOD models instead of what's actually on screen.
Like in this photo. See the trees, the shadows on cars, and the geometry and normal-map detail on Spiderman.
So how does this work exactly?
Is there some kind of "cheating" going on?
The BVH structure from which RT information is taken is normally at a lower LOD than the rest of the scene that is rendered normally. Think of it as a second world in a parallel dimension (at lower LOD) which is only visible in the reflections. You can choose how many objects are in this BVH world, how far you want your reflections to reach, etc., to optimise performance.
This is why the trees have less detail in that example: they are referenced from a lower LOD just for the RT reflections.
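The "second world" idea above can be sketched in a few lines. This is a hypothetical illustration, not any engine's actual code: all names (`raster_pass`, `build_rt_world`, the object list) are made up, and real engines do this with acceleration-structure builds on the GPU. The point is just that the raster pass frustum-culls and uses top LODs, while the RT world keeps off-screen objects at a lower LOD, within a chosen reflection distance.

```python
# Hypothetical sketch of the "two worlds" idea: the rasterizer draws only
# what is inside the camera frustum at full detail, while ray-traced
# reflections query a separate BVH "world" that also contains off-screen
# objects, but at a lower LOD and only within a limited radius.

from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    distance: float   # distance from the camera, in metres
    in_frustum: bool  # is it visible on screen?
    lods: tuple       # mesh variants from highest to lowest detail

def raster_pass(objects):
    """Frustum culling: only on-screen objects, at the highest LOD."""
    return {o.name: o.lods[0] for o in objects if o.in_frustum}

def build_rt_world(objects, max_reflection_distance=200.0, lod_drop=2):
    """The BVH world keeps off-screen objects too, but at a lower LOD,
    and only up to a chosen maximum reflection distance."""
    world = {}
    for o in objects:
        if o.distance <= max_reflection_distance:
            lod_index = min(lod_drop, len(o.lods) - 1)
            world[o.name] = o.lods[lod_index]
    return world

objects = [
    SceneObject("spiderman", 5.0, True, ("hero_lod0", "hero_lod1", "hero_lod2")),
    SceneObject("tree_behind_camera", 15.0, False, ("tree_lod0", "tree_lod1", "tree_lod2")),
    SceneObject("skyscraper_far", 500.0, True, ("tower_lod0", "tower_lod1", "tower_lod2")),
]

print(raster_pass(objects))
# → {'spiderman': 'hero_lod0', 'skyscraper_far': 'tower_lod0'}
print(build_rt_world(objects))
# → {'spiderman': 'hero_lod2', 'tree_behind_camera': 'tree_lod2'}
```

Note that the off-screen tree is absent from the raster pass but present (at low LOD) in the RT world, which is exactly why it can still show up in a reflection, while the distant skyscraper falls outside the reflection radius and never enters the BVH.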
Or you can play The Medium. That kind of shit could be a whole new kind of adventure/horror game. Make it in VR and we'll forgive that the whole idea has been done in sci-fi before. Even The Flash had a season in a similar vein.
But what about the culling? That means the application of culling is also limited, as the engine must also keep what's off-screen around to reflect it on surfaces.
Of course you spent all day looking at your reflection!
And speaking of the reflections, I must say that this is by far the most next gen thing I’ve experienced so far.
I just can’t believe how you can see the whole city in the reflection of a building, far far away in the distance, with pedestrians and cars moving around all the way into the distance! I’ve played with that a lot today.