I've been asking this a couple of times and gotten different answers. Since neither DX9 nor OGL2 seems to settle it, I'd like to know people's opinions.
Tessellation in the 3D pipeline is required for (but not limited to) displacement mapping and curved surfaces. So if those are to be widely used, tessellation needs to be performed somewhere in the 3D pipeline. The question is: ideally, where?
To clarify the poll options a little:
1) In model space, before the vertex shader, as ATI supposedly does with TruForm. This raises the issue of how to generate those data components of tessellated vertices that are not interpolatable(?), skinning matrix indices for instance.
2) After the vertex shader (in clip space). Because of the projection transform, this space is "nonlinear" and straight linear interpolation would give strange results. On the other hand, every vertex component is already well defined and interpolation functions can be specified uniformly. Could be doable if the tessellation unit "knew" the projection matrix and could back-transform vertices before interpolating.
3) "Between" vertex shaders, in camera/eye space (before the projection transform). Again, all vertex components are specified and easily interpolated, but then we'd need to change the existing VS model, which does all vertex transformations in one sequence.
4) No tessellation to triangles at all: evaluate HOS and displacement mapping Per Pixel(TM). This would essentially be the same as option 2 (clip space), with "adaptive tessellation" specified such that "adaptive" means "fragment-sized".
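To make the clip-space problem in option 2 concrete, here's a minimal sketch (my own toy example, using a hypothetical pinhole projection `ndc = x / -z` rather than a full projection matrix): interpolating already-projected vertices lands on a different point than interpolating in eye space and then projecting, which is exactly why the tessellation unit would need the projection matrix to back-transform first.

```python
# Toy demonstration that post-projective (clip/NDC) space is "nonlinear":
# the midpoint of projected endpoints is NOT the projection of the
# eye-space midpoint. Pinhole projection (ndc = x / -z) is an assumption
# standing in for a real projection matrix.

def project(p):
    """Project an eye-space point (looking down -z) to 2D NDC."""
    x, y, z = p
    return (x / -z, y / -z)

def lerp(a, b, t):
    """Componentwise linear interpolation between two tuples."""
    return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))

# Eye-space endpoints of an edge at different depths.
A = (1.0, 0.0, -1.0)
B = (1.0, 0.0, -3.0)

# Correct: interpolate in eye space, then project.
mid_eye = project(lerp(A, B, 0.5))            # (0.5, 0.0)

# What a naive post-VS tessellator would do: interpolate projected points.
mid_ndc = lerp(project(A), project(B), 0.5)   # (~0.667, 0.0) -- off

print(mid_eye, mid_ndc)
```

The gap between the two results grows with the depth range of the edge, so a tessellator working purely in post-projective space without knowledge of the projection would visibly bend surfaces that cross large depth spans.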
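And to illustrate the non-interpolatable-attribute problem option 1 runs into: bone (skinning matrix) indices are labels, not quantities, so you can't average them for a new vertex. A workaround sketch (my own, hypothetical `merge_skinning` helper, assuming midpoint subdivision) is to take the union of the parents' bone sets and halve each parent's weights; note the merged set can exceed a hardware per-vertex bone limit, which is the real sting of the problem.

```python
# Hypothetical workaround for model-space tessellation (option 1):
# a new midpoint vertex inherits the UNION of its parent vertices'
# bone influences, with each parent's weights scaled by 0.5.
# Bone indices themselves are never interpolated -- only weights are.

def merge_skinning(parent_a, parent_b):
    """Each parent is a dict {bone_index: weight}, weights summing to 1.
    Returns the midpoint vertex's bone->weight dict (also summing to 1)."""
    merged = {}
    for parent in (parent_a, parent_b):
        for bone, w in parent.items():
            merged[bone] = merged.get(bone, 0.0) + 0.5 * w
    return merged

va = {0: 0.7, 3: 0.3}   # vertex influenced by bones 0 and 3
vb = {0: 0.2, 5: 0.8}   # vertex influenced by bones 0 and 5

mid = merge_skinning(va, vb)
print(mid)  # bone 0 -> 0.45, bone 3 -> 0.15, bone 5 -> 0.40 (up to rounding)
```

Weights stay normalized, but the midpoint now references three bones where each parent referenced two; with a fixed N-bones-per-vertex limit you'd have to drop or renormalize influences, which is why generating this data after tessellation is awkward.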