DEATH TO FIXED FUNCTION!

Ostsol

Veteran
Well, not really. . . ;)

I'm just wondering if anyone has any guesses as to when we'll see things such as stencil tests and depth tests moved into the pixel shader. Framebuffer reads in the pixel shader would be nice, too. :) The reason I ask is that such things (specifically, framebuffer reads and depth testing) would make it much easier to perform effects such as depth-independent transparency, rather than potentially having to render in several passes.
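To make the framebuffer-read idea concrete, here is a rough CPU-side sketch in plain C (the types and names are made up for illustration, not a real API): the fragment routine is handed the destination colour directly and computes the blend itself, so it is not limited to the fixed set of blend factors.

[code]
/* Hypothetical model of a pixel pipeline with a framebuffer read:
 * the fragment routine sees the destination colour and can compute
 * any function of src and dst, not just the fixed blend equations.
 * Illustrative only. */
typedef struct { float r, g, b, a; } Color;

static Color shade_with_fb_read(Color src, Color dst)
{
    /* This particular function happens to be the classic
     * SRC_ALPHA / ONE_MINUS_SRC_ALPHA blend, but anything goes. */
    Color out;
    out.r = src.r * src.a + dst.r * (1.0f - src.a);
    out.g = src.g * src.a + dst.g * (1.0f - src.a);
    out.b = src.b * src.a + dst.b * (1.0f - src.a);
    out.a = src.a + dst.a * (1.0f - src.a);
    return out;
}
[/code]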
 
Ostsol said:
Well, not really. . . ;)

I'm just wondering if anyone has any guesses as to when we'll see things such as stencil tests and depth tests moved into the pixel shader. Framebuffer reads in the pixel shader would be nice, too. :) The reason I ask is that such things (specifically, framebuffer reads and depth testing) would make it much easier to perform effects such as depth-independent transparency, rather than potentially having to render in several passes.
Manipulating depth values in the pixel shader means you can't use early Z or hierarchical Z...

Just things to consider.
 
Ah, true. . . but if the application of such manipulation is for transparency/blending, it could be quite useful for certain instances. In such cases, one would not be utilizing the benefits of hierarchical-Z and early-Z anyway.
 
Ostsol said:
Ah, true. . . but if the application of such manipulation is for transparency/blending, it could be quite useful for certain instances. In such cases, one would not be utilizing the benefits of hierarchical-Z and early-Z anyway.
Sure you would! You wouldn't want to do blending in cases where the pixels are Z rejected, right?
 
While I would like to see a pixel shader that can read from color, Z and stencil buffers and do Z/stencil tests and operations, there are a few problems with it that I can see:
  • You will break optimizations like hierarchical Z/early Z and similar tricks for the stencil buffer. There may be other performance implications as well, due to locking of a large number of in-flight pixels or other issues.
  • The appropriate result of reading the frame/Z/stencil buffer in the pixel shader when doing multisampled AA is non-obvious - while you can probably get away with averaging color values, doing so with Z/stencil values will produce very visible glitches.
And how is this extra flexibility supposed to give order-independent transparency? AFAICS, for that you need either multiple rendering passes or a per-pixel fragment list - and given such a fragment list could become arbitrarily long, it would require more or less a malloc() available to the pixel shader ...
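To make the fragment-list idea concrete, here is a rough CPU-side sketch in C of how a per-pixel fragment list would be resolved (illustrative names only). The awkward part is exactly the one raised above: the list has to grow per pixel, which is why something like malloc() in the shader would be needed.

[code]
#include <stdlib.h>

/* One transparent fragment that landed on a pixel. */
typedef struct { float depth; float r, g, b, a; } Fragment;

/* Sort far-to-near (larger depth = farther, by assumption here). */
static int cmp_far_to_near(const void *pa, const void *pb)
{
    const Fragment *a = pa, *b = pb;
    return (a->depth < b->depth) - (a->depth > b->depth);
}

/* Resolve one pixel: out[] holds the opaque background colour on
 * entry and the composited result on exit. */
static void resolve_pixel(Fragment *frags, int count, float out[3])
{
    qsort(frags, count, sizeof *frags, cmp_far_to_near);
    for (int i = 0; i < count; ++i) {               /* back to front */
        float a = frags[i].a;
        out[0] = frags[i].r * a + out[0] * (1.0f - a);
        out[1] = frags[i].g * a + out[1] * (1.0f - a);
        out[2] = frags[i].b * a + out[2] * (1.0f - a);
    }
}
[/code]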
 
I was thinking that it could be used to dynamically decide on the blending op, depending on whether the z-test passed or failed. Basically, blending is done as normal if the z-test passes (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), but if it fails you blend as though the existing fragment were the incoming one - i.e. with the roles of source and destination swapped.

Thinking further on this, though, I do suppose that this would be problematic if more than two layers of transparent polygons were involved. . .
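For what it's worth, a rough C sketch of that two-layer trick (hypothetical names, nothing real): the fragment routine does its own depth comparison and decides which fragment counts as the front one before blending. With a third transparent layer the stored colour is already a mix of two surfaces, so a single front/back decision no longer has enough information, which is the problem noted above.

[code]
typedef struct { float r, g, b, a; } RGBA;

/* Standard "over" operator with non-premultiplied alpha. */
static RGBA blend_over(RGBA front, RGBA back)
{
    RGBA o = {
        front.r * front.a + back.r * (1.0f - front.a),
        front.g * front.a + back.g * (1.0f - front.a),
        front.b * front.a + back.b * (1.0f - front.a),
        front.a + back.a * (1.0f - front.a)
    };
    return o;
}

/* The fragment routine picks the blend based on its own z-test
 * (smaller z = closer here). */
static RGBA shade(RGBA src, float src_z, RGBA dst, float dst_z)
{
    if (src_z < dst_z)
        return blend_over(src, dst);   /* new fragment is in front    */
    else
        return blend_over(dst, src);   /* stored fragment is in front */
}
[/code]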
 
Ostsol said:
Framebuffer reads in the pixel shader would be nice, too. :)

It would be great! Don't hold your breath... it got voted out of OGL2.0 because the IHVs lined their pants when they thought through the performance implications.
 
Well, the performance hit might be slightly lower with ILDPs, if you've got big caches.
Because that'd mean the Vertex Shaders would have many more of the FPUs at their service (actually, that's an abstract view, because they wouldn't have them nonstop; it's more like using them a lot more during latency periods and such) - and that means that you aren't actually wasting many transistors that way: most of the FPUs are still operational, which is not true for traditional architectures.

So once we've got good ILDP implementations... (NV50/R500 being the first real ones - actually, I'm not sure about the R500; it might not have that feature yet.)
I find it quite possible indeed for the NV60 or the R600.


Uttar
 
Uttar said:
Well, the performance hit might be slightly lower with ILDPs, if you've got big caches.
Once again, what's "ILDP"? Integrated Linear Dot Product? :)
Because that'd mean the Vertex Shaders would have many more of the FPUs at their service (actually, that's an abstract view, because they wouldn't have them nonstop; it's more like using them a lot more during latency periods and such) - and that means that you aren't actually wasting many transistors that way: most of the FPUs are still operational, which is not true for traditional architectures.
Are we talking of having a shared resource that runs both VS and PS programs?

Anyway, getting back to the original topic...
...While I would like to see a pixel shader that can read from color, Z and stencil buffers and do Z/stencil tests and operations...
I could see being able to read these values being supported maybe in the next generation of DX/GL, but writing just opens up too big a can of Annelids.
 
Frankly, you can already implement the Z test in the PS if you want to. It would be very stupid to do so, but you can do it...
 
Dio said:
Frankly, you can already implement the Z test in the PS if you want to. It would be very stupid to do so, but you can do it...
Yep, though that takes multiple passes. Also, my suggestion was for special cases. I agree that it is obviously something one wouldn't want to do for every pixel.
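For illustration, a rough model of what that multipass approach boils down to (hypothetical names only): a first pass writes scene depth into a texture, and the second pass's fragment logic reads it back and rejects the fragment itself rather than relying on the fixed-function test, giving up early/hierarchical Z in the process.

[code]
#define WIDTH  4
#define HEIGHT 4

/* Assumed to have been filled by a first pass that wrote scene depth. */
static float depth_tex[HEIGHT][WIDTH];

/* "Pass 2" per-fragment test: nonzero means the fragment survives;
 * a real shader would kill the fragment with texkill/discard. */
static int manual_z_test(int x, int y, float frag_z)
{
    return frag_z <= depth_tex[y][x];   /* LEQUAL, but no early Z */
}
[/code]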
 