Watching the Pascal demo/announcement, I was pretty impressed by Simultaneous Multi-Projection. I think it's the most interesting and compelling feature Pascal offers by far, outside of the simple jump in performance. Even though I don't game on multiple monitors, as soon as they started talking about fixing the side projections it was obvious why it's a big deal there, and applying it to VR to reduce oversampling is a very clever solution.
After reading NV's white paper, particularly the discussion of how SMP can reduce the oversampling inherent in rendering through a lens distortion, a few questions and possibilities come to mind.
Could this be used for foveated rendering? It seems it could selectively induce undersampling in parts of the frame buffer, and with careful placement of the projections it could save significant render time on the pixels outside the center of the view. Imagine running 4K where only the center of the screen is actually at 4K dot pitch, and the outer edges scale smoothly down to an effective 1080p. This could even be a simple user-tweakable setting: adjust the size of the full-res center projection. You could also flip it around to supersample the center while leaving the outside a bit more aliased, or use it in concert with MSAA.
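As a back-of-the-envelope sketch of the potential savings: assume a hypothetical layout where a center tile keeps full 4K pixel density and the periphery is shaded at half density per axis (roughly 1080p-like). The tile size and density ratio below are illustrative assumptions, not NVIDIA's actual SMP configuration.

```python
# Hypothetical foveated-rendering arithmetic: how many pixels would
# actually be shaded if only the center tile keeps 4K density?
# All parameters here are illustrative assumptions.

FULL_W, FULL_H = 3840, 2160   # 4K target resolution
CENTER_FRAC = 0.5             # center tile spans 50% of each axis
PERIPH_SCALE = 0.5            # periphery shaded at half density per axis

full_pixels = FULL_W * FULL_H

# Center tile shaded at full density.
center_pixels = (FULL_W * CENTER_FRAC) * (FULL_H * CENTER_FRAC)

# Remaining screen area shaded at reduced density (scale applies per axis,
# so pixel count scales by the square).
periph_shaded = (full_pixels - center_pixels) * PERIPH_SCALE ** 2

total_shaded = center_pixels + periph_shaded
savings = 1 - total_shaded / full_pixels
print(f"shaded {total_shaded:,.0f} of {full_pixels:,} pixels ({savings:.0%} saved)")
# → shaded 3,628,800 of 8,294,400 pixels (56% saved)
```

Even with the center half of the screen untouched, this toy layout shades well under half the pixels of a naive full-res 4K render, which is why the tweakable-center idea seems so appealing.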
Also, if NV claims up to 16 projections, times 2 for stereoscopic, why does their VR demo use only [2,2]*2? Why not 3x3 or 4x4? Thinking about it a bit more, the immediate answer is that the left and right eyes don't actually share the same projections, since the FOV for each eye isn't identical (it's mirrored, with each eye getting more FOV toward the outside). Perhaps I'm just misunderstanding. It makes me wonder why they settled on 16 projections, though; it seems like a silly number when 18 would allow them to bump it up to 3x3 per eye. Makes me wonder whether the VR case for SMP was even an initial goal of the feature.