Be careful what you wish for …. I just realized that 120fps is also a multiple of 24 …
The true Sony cinematic experience...
Not to mention it is 5 times 24.
> Doesn't Metro Exodus: Enhanced Edition use probes to accumulate global illumination?
Bounces after the first one trace back from the hit position to world-space probes in the area to see if they are lit by them. The first bounce is full per pixel to other pixels. The probes are driven by ray traces like those in RTXGI.
> What do you mean by "to other pixels" on the first bounce? Is it a screen-space trace, or is it doing one reflection bounce into the BVH before tracing into the probe grid? If it's the latter, then it's not really to other pixels.
Phone typing - it is supposed to say "objects", not "pixels".
The grid itself, of course, is casting rays into the BVH and tracing recursively into itself to emulate infinite bounces, accumulating color into each probe.
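As a rough illustration of that scheme (a toy 2D scene with invented constants in Python, not the engine's actual code): probes on a grid cast rays into the scene, a ray that hits the emissive wall picks up light directly, and a ray that hits anything else reuses the previous frame's value of the nearest probe, so repeated updates accumulate multi-bounce lighting into each probe.

import math, random

GRID = 8                    # probes per axis inside a unit-square "room"
ALBEDO = 0.5                # diffuse reflectance of the non-emissive walls
EMISSIVE = 1.0              # the wall at x == 0 acts as an area light
RAYS_PER_PROBE = 64
BLEND = 0.1                 # temporal blend factor (exponential moving average)

probes = [[0.0] * GRID for _ in range(GRID)]   # scalar irradiance per probe

def probe_pos(i, j):
    return ((i + 0.5) / GRID, (j + 0.5) / GRID)

def nearest_probe(x, y):
    i = min(GRID - 1, max(0, int(x * GRID)))
    j = min(GRID - 1, max(0, int(y * GRID)))
    return i, j

def trace_to_wall(px, py, dx, dy):
    # Intersect a ray with the four walls of the unit square (the toy's "BVH").
    candidates = []
    if dx > 0: candidates.append(((1.0 - px) / dx, 'x1'))
    if dx < 0: candidates.append((-px / dx, 'x0'))
    if dy > 0: candidates.append(((1.0 - py) / dy, 'y1'))
    if dy < 0: candidates.append((-py / dy, 'y0'))
    t, wall = min(candidates)
    return px + t * dx, py + t * dy, wall

def update_probes(probes):
    new = [[0.0] * GRID for _ in range(GRID)]
    for i in range(GRID):
        for j in range(GRID):
            px, py = probe_pos(i, j)
            total = 0.0
            for _ in range(RAYS_PER_PROBE):
                a = random.uniform(0.0, 2.0 * math.pi)
                hx, hy, wall = trace_to_wall(px, py, math.cos(a), math.sin(a))
                if wall == 'x0':
                    total += EMISSIVE                 # ray hit the emissive wall directly
                else:
                    ni, nj = nearest_probe(hx, hy)
                    total += ALBEDO * probes[ni][nj]  # further bounces reuse probe data
            frame_value = total / RAYS_PER_PROBE
            new[i][j] = probes[i][j] * (1.0 - BLEND) + frame_value * BLEND
    return new

for frame in range(200):     # repeated updates accumulate multi-bounce lighting
    probes = update_probes(probes)

print(round(probes[0][GRID // 2], 3), round(probes[GRID - 1][GRID // 2], 3))

After a couple of hundred updates the probes near the emissive wall should end up noticeably brighter than those on the far side, which is the accumulated indirect light.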
> I'm intrigued by texture space lighting driven by ray tracing. I suspect that beyond aliasing (solved with multiple samples per texel and denoising) it can drive importance sampling.
What relationship / opportunity do you see between texture space shading and importance sampling?
We could say "tracing a ray gives us the same result as rasterizing a 1x1 pixel framebuffer". This is true, and because of that we cannot say RT includes a better model of the scene or lighting. The difference is only about efficiency.
Rasterization works under a few assumptions which are all about efficiency: it's good for rasterizing large triangles onto large planar surfaces, that's it.
To add one more assumption: I would not say it's so efficient for that either, because basic rasterization lacks hidden surface removal. If the depth complexity of the scene is high enough, heavy overdraw would make it slower than RT using an acceleration structure.
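A back-of-envelope way to see this, with invented relative costs (one shaded fragment = 1 unit, one BVH node test = 0.05 units, ~2 node tests per BVH level for a primary ray); illustrative only, not a benchmark:

import math

pixels = 3840 * 2160
triangles = 10_000_000
node_test_cost = 0.05                                               # assumed, relative to a fragment shade
rt_per_pixel = 1.0 + node_test_cost * 2.0 * math.log2(triangles)    # one shaded hit plus traversal

for depth_complexity in (2, 4, 16):                                 # average fragments per pixel
    raster = pixels * depth_complexity                              # without HSR, every fragment gets shaded
    rt = pixels * rt_per_pixel
    print(f"depth complexity {depth_complexity:2d}: raster ~{raster:.2e}  rt ~{rt:.2e}")

With these assumed constants the crossover sits at a depth complexity of around three or four; the exact number is meaningless, but the trend (raster work grows with overdraw, RT work does not) is the point being made above.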
> It doesn't make a lot of sense to use it even for the rasterization of subpixel geometry (because it won't be able to beat RT here).
To get subpixel detail from rasterization you would just increase the frame buffer resolution and downscale. The pinhole advantage is still there, so it would likely still beat RT with primary rays for the same count of samples.
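A minimal sketch of that "render bigger, then downscale" idea, assuming a placeholder shade() function (a stripe about 1/8 of a pixel wide) standing in for any renderer: sub-pixel geometry that one sample per pixel misses still contributes after supersampling and a box-filter downscale.

def shade(u, v):
    # Placeholder scene: a thin bright stripe, roughly 1/8 of a pixel wide at 128x128.
    return 1.0 if abs(u - 0.5) < 0.0005 else 0.0

def render(width, height, scale):
    # Shade a (width*scale) x (height*scale) buffer, then box-filter down to width x height.
    w, h = width * scale, height * scale
    out = [[0.0] * width for _ in range(height)]
    for y in range(h):
        for x in range(w):
            out[y // scale][x // scale] += shade((x + 0.5) / w, (y + 0.5) / h)
    norm = scale * scale
    return [[v / norm for v in row] for row in out]

print(max(max(row) for row in render(128, 128, 1)))   # 0.0: pixel-centre sampling misses the stripe
print(max(max(row) for row in render(128, 128, 8)))   # > 0: 8x8 supersampling + downscale catches it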
> On the other hand, I find your mirror ball example not representative at all of how an actual GI implementation would work in practice.
Read the Many LODs paper for an example of doing just that, and surprise - it works.
> That would only require processing scene geometry a few million times. I'm sure you will acknowledge that this is nonsensical from any practical point of view.
The Many LODs paper again would be an example of this being practical.
> To get subpixel detail from rasterization you would just increase the frame buffer resolution and downscale.
Agree, rendering at larger resolutions would certainly help (HW rasterization particularly), but what would be the benefit of downscaling the larger frame buffer without shading it in the higher-resolution space?
> To fix that, we would start to explain Monte Carlo integration, and then people would already throw eggs at us claiming "using randomness can't work, you liars!"
Agree, explaining Monte Carlo integration would be perfect, but this might be tough content for a wide audience.
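For anyone following along, the core Monte Carlo idea fits in a few lines: estimate an integral as the average of f(x)/p(x) over random samples x drawn from a density p. Here the integral of sin(x) over [0, pi] (exact value 2) with uniform sampling, so p(x) = 1/pi:

import math, random

def mc_estimate(n):
    total = 0.0
    for _ in range(n):
        x = random.uniform(0.0, math.pi)         # x_i drawn from p(x) = 1/pi on [0, pi]
        total += math.sin(x) / (1.0 / math.pi)   # f(x_i) / p(x_i)
    return total / n

for n in (16, 256, 4096):
    print(n, round(mc_estimate(n), 3))           # noisy estimates converging toward 2.0

The estimates are noisy but converge toward the true value as the sample count grows, which is exactly what a path tracer does per pixel.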
> Shading the additional pixels would have large negative performance implications (even if these are only samples on edges, i.e. MSAA style), and filling the higher-res frame buffer would require more bandwidth (not good either), even if processing time is the same at the higher resolution.
Oh, so you had some adaptive method in mind, like using RT to do AA only where needed. Agree then.
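A sketch of that adaptive idea under made-up assumptions (toy shade() scene, arbitrary contrast threshold, 16-ray budget): take a one-sample-per-pixel image, flag pixels whose neighbours differ strongly, and spend extra jittered rays only on those.

import random

W = H = 64

def shade(u, v):                       # placeholder scene: a diagonal edge
    return 1.0 if v > u else 0.0

base = [[shade((x + 0.5) / W, (y + 0.5) / H) for x in range(W)] for y in range(H)]

refined = [row[:] for row in base]
extra_rays = 0
for y in range(1, H - 1):
    for x in range(1, W - 1):
        contrast = max(abs(base[y][x] - base[y + dy][x + dx])
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        if contrast > 0.25:            # "edge" detected: spend 16 extra rays here only
            samples = [shade((x + random.random()) / W, (y + random.random()) / H)
                       for _ in range(16)]
            refined[y][x] = sum(samples) / len(samples)
            extra_rays += 16

print(f"refined edge pixel: {refined[32][32]:.2f}")
print(f"extra rays: {extra_rays} vs full 16x supersampling: {W * H * 16}")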
> Would simplify some things. And I'd like some fisheye for high FOV but low distortion.
That's something you can already try in Quake RTX, for example, since primary visibility is done with RT there.
Would be awesome to properly scan the monitor surfaces and fit the projection to them, especially if one would track the viewer and create the "window" effect.
I tried cylindrical projection on a curved 32:9 monitor and it looked great at 27° FOV, lol (for an orthographic projection without distortion on the cylindrical surface).
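For context, cylindrical primary-ray generation is only a few lines in a ray tracer; something along these lines (the FOV and resolution below are arbitrary example values, not from this exchange): the horizontal angle varies linearly with the pixel column, so a very wide FOV avoids the edge stretching of a pinhole projection.

import math

def cylindrical_ray(x, y, width, height, hfov_deg=150.0):
    # Map the pixel column to a yaw angle (linear on the cylinder) and the row to a
    # height on a unit-radius cylinder, then normalize to a world-space direction.
    hfov = math.radians(hfov_deg)
    vertical_extent = hfov * height / width           # keep square "pixels" on the cylinder
    phi = ((x + 0.5) / width - 0.5) * hfov            # yaw, from -hfov/2 to +hfov/2
    h = (0.5 - (y + 0.5) / height) * vertical_extent  # cylinder height, positive is up
    length = math.sqrt(1.0 + h * h)
    return (math.sin(phi) / length, h / length, math.cos(phi) / length)

# Leftmost, centre and rightmost columns of a 3440x1440 frame, middle row:
for px in (0, 1720, 3439):
    print(px, cylindrical_ray(px, 720, 3440, 1440))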
> What relationship / opportunity do you see between texture space shading and importance sampling?
I view this as being able to drive IS via camera-to-texel distance; it informs sample density and bounce count.
I see a potential TSS advantage for denoising (if we can resolve all adjacent texels which are close in world space but not in texture space), but it won't affect IS at all?
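One guess at what "driving IS via camera-to-texel distance" could mean in practice (the budget function and falloff distance below are invented for illustration): give each texel a ray and bounce budget that falls off with its distance from the camera, so nearby, screen-dominant texels get most of the work.

def texel_budget(distance, max_rays=64, max_bounces=4, full_detail_distance=2.0):
    # Texels closer than full_detail_distance get the full budget; farther texels
    # get proportionally fewer rays and a shallower bounce depth.
    weight = min(1.0, full_detail_distance / max(distance, 1e-6))
    rays = max(1, round(max_rays * weight))
    bounces = max(1, round(max_bounces * weight))
    return rays, bounces

for d in (0.5, 2.0, 10.0, 50.0):
    rays, bounces = texel_budget(d)
    print(f"distance {d:5.1f}: rays={rays:2d} bounces={bounces}")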