RoOoBo said:Would it make sense to perform the perspective divide and viewport transform in the vertex shader? They are both just per-vertex operations.
And how would that affect frustum clipping?
What I'm asking is whether it would make sense, or be useful, to avoid clipping in clip space (after the projection transform) and go directly to screen space. The output from the vertex shader (the vertex position) would then be in screen-space coordinates rather than in clip coordinates, as is currently defined in OpenGL and D3D. And what kind of problems would you face with that approach?
RoOoBo said:From what I know, what you show is just the 'official' part of the geometry 3D pipeline.
Colourless said:The reason not to do it is simple enough. You do the homogeneous divide after doing near clip plane clipping. You don't want to be dividing by 0.
DeanoC said:Colourless said:The reason not to do it is simple enough. You do the homogeneous divide after doing near clip plane clipping. You don't want to be dividing by 0.
What's wrong with dividing by 0? A perfectly acceptable floating-point number is produced (+ or -Infinity).
Some hardware (at least NVIDIA's) does indeed clip post-perspective, i.e. on exit from the vertex shader it does the homogeneous divide and then clips. On Xbox this is explicitly controllable and you can write a screen-space vertex shader (very handy for distortion-type effects); it makes no difference to the clipping.
Colourless said:Most interesting. I didn't know that... but NVIDIA hardware is (or was; the GFFX might be different) known to not really like clipping planes at all. This could probably be called a 'peculiarity' of their architecture.
Of course, if you do it this way you don't even need to implement a near clipping plane. You can use your per-pixel depth compare to clip pixels that are too close, for triangles that are potentially clipping. Could this be what NVIDIA is doing? Hey, I wouldn't want to speculate, but it really wouldn't surprise me.
RoOoBo said:Colourless said:Most interesting. I didn't know that... but NVIDIA hardware is (or was; the GFFX might be different) known to not really like clipping planes at all. This could probably be called a 'peculiarity' of their architecture.
Of course, if you do it this way you don't even need to implement a near clipping plane. You can use your per-pixel depth compare to clip pixels that are too close, for triangles that are potentially clipping. Could this be what NVIDIA is doing? Hey, I wouldn't want to speculate, but it really wouldn't surprise me.
For user clip planes I think they just use normal interpolation between the triangle vertices (interpolation of the distance to the clip plane), so they check at every pixel whether it is inside or outside the clipping plane. Maybe that is the reason for the slowdown, or it could be because it needs additional interpolated values and there is a limited number of interpolators in the hardware. Of course, the other reason could simply be that with geometric clipping you don't have to rasterize all those pixels.
RoOoBo said:However, as I theoretically shouldn't know about that ...
Colourless said:I'll add this: you don't even need to check per pixel. You could discard per scanline too. If the beginning and the end of a scanline are clipped, then the entire scanline can be clipped too.
Simon F said:There is an interesting discussion on the implementation of "user" clip planes (i.e. clipping in addition to the standard {near, far, top, left, etc.} planes) on the DirectX developer mailing list. It was quite amusing to see the standpoints of the developer relations reps from ATI and NVIDIA.
Gosh, it was a few weeks ago and I don't really have time to dig it up. (You might be able to find it on Google.)
darkblu said:Simon F said:There is an interesting discussion on the implementation of "user" clip planes (i.e. clipping in addition to the standard {near, far, top, left, etc.} planes) on the DirectX developer mailing list. It was quite amusing to see the standpoints of the developer relations reps from ATI and NVIDIA.
would you quote some, at your discretion, for those of us who aren't subscribed?
Simon F said:IIRC, in summary, someone asked whether user clip planes still cost texture slots on NV hardware and how to control that in the API. I think an ATI rep then said that there was no way to control it in the API anyway, and that this way of doing clipping was bad.
An NV employee responded saying that was an over-broad generalisation and that there were some advantages to texkill-based clipping - although he said something that probably wasn't correct.
It was the politeness of the debate that I found amusing.
Huddy(ATI) said:It's a good way (and may very well be the best way) to emulate clip planes if your graphics card does not have direct hardware support for clip planes - but it's a rotten technique to use on more advanced hardware.
Everitt(NVIDIA) said:Before we make broad generalizations about what constitutes a "rotten technique", consider your intended use. If you need depth invariance and texture coordinate interpolation invariance, and your hardware doesn't support invariant user clip planes, then you may well need to use "texkill" clipping.