Poll: tessellation

Where in the 3D pipeline should tessellation occur with a good(TM) immediate-mode API?

  • "Within" vertex shaders

    Votes: 0 0.0%
  • PerPixel

    Votes: 0 0.0%
  • All of above are bollocks, we should use micropolygons or something even more l33t

    Votes: 0 0.0%

  • Total voters
    170

no_way

I've been asking this a couple of times and gotten different answers. Since neither DX9 nor OGL2 seems to answer it, I'd like to know people's opinions.
Tessellation in the 3D pipeline is required for (but not limited to :p) displacement mapping and curved surfaces. So if those are to be widely used, tessellation needs to be performed somewhere in the pipeline. The question is: ideally, where?
To clarify poll options a little:
1) In model space, before vertex shaders, as ATI is supposedly doing with TruForm. This raises the issue of how to generate, for the tessellated vertices, those data components that are not interpolatable(?), skinning matrix indices for instance.
2) After vertex shaders (in clip space). Because of the projection transform, this space is "nonlinear" and interpolation would give strange results. But every vertex component is already well defined and interpolation functions can be uniformly specified. It could be doable if the tessellation unit "knew" the projection transform matrix for back-transforming vertices (see the sketch after this list).
3) "Between" vertex shaders, in camera/eye space (before the projection transform). Again, all vertex components are specified and easily interpolated, but then we'd need to change the existing VS model that does all vertex transformations in one sequence.
4) No tessellation to triangles at all; evaluate HOS and displacement mapping Per Pixel(TM). This would essentially be the same as 2) (in clip space), with "adaptive tessellation" specified so that "adaptive" means "fragment sized".
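
As a rough illustration of the problem with 2), here's a small C++ sketch (my own toy projection and numbers, not any real API): splitting an edge in eye space and then projecting the new vertex gives a different point than splitting the already-projected edge, which is why a tessellator sitting after the projection would need the projection matrix to back-transform vertices.

```cpp
// Minimal sketch: why interpolating tessellated vertices after the
// projection transform differs from interpolating in eye space.
// Plain perspective divide with focal length 1, for brevity.
#include <cstdio>

struct Vec3 { float x, y, z; };

// Project an eye-space point with a simple perspective divide.
Vec3 project(Vec3 p) { return { p.x / p.z, p.y / p.z, 1.0f / p.z }; }

Vec3 midpoint(Vec3 a, Vec3 b) {
    return { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
}

int main() {
    Vec3 p0 = { -1.0f, 0.0f, 2.0f };   // near-ish vertex
    Vec3 p1 = {  1.0f, 0.0f, 10.0f };  // far vertex on the same edge

    // Option 3: split the edge in eye space, then project the new vertex.
    Vec3 eyeMid  = project(midpoint(p0, p1));
    // Option 2 without back-transforming: split the already-projected edge.
    Vec3 postMid = midpoint(project(p0), project(p1));

    std::printf("eye-space split then project: %f %f\n", eyeMid.x,  eyeMid.z);
    std::printf("post-projection split:        %f %f\n", postMid.x, postMid.z);
    // The two disagree, which is the "nonlinear" problem with option 2.
    return 0;
}
```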
 
I voted for All of above are bollocks, we should use micropolygons or something even more l33t

It is obviously the most sensible solution... the fact that I have no clue with regard to the actual poll did not affect my vote. :?
 
P10 takes care of tessellation in the vertex shader, so I would assume that's how their initial proposals for OGL2.0 would have done it. OGL2.0 does have capabilities for creation and deletion of vertices, but how these are defined/achieved I don't know.
 
One more vote for "within" the vertex shader.

I can't see how tessellation would make much sense without having the ability to create and delete vertices, so why not put that into the context of things, e.g. in the vertex shader where the vertices are manipulated? You would have to do that anyway for truly powerful displacement mapping (as far as I understand it).
 
LeStoffer said:
One more vote for "within" the vertex shader.

I can't see how tessellation would make much sense without having the ability to create and delete vertices, so why not put that into the context of things, e.g. in the vertex shader where the vertices are manipulated?
Correct. The ability to create and delete vertices was supposed to make its appearance in NV30 in the form of the per-primitive processor (and was indeed implemented in the initial design), but it was later removed, due to either 1) transistor savings or 2) no use for it right now, as it's not exposed by DX9...
Rumours say we might see it in NV35, or NV40...
Nobody doubts its usability though, so the sooner we get this kind of flexibility and are able to manipulate vertices by creating and destroying them, the better! :)
 
LeStoffer said:
One more vote for "within" the vertex shader.

I can't see how tessellation would make much sense without having the ability to create and delete vertices, so why not put that into the context of things, e.g. in the vertex shader where the vertices are manipulated? You would have to do that anyway for truly powerful displacement mapping (as far as I understand it).

I concur. Leave tessellation in the VS; eventually 'standard' shader libraries will be provided implementing it.
 
What's the point of having tessellation done in the vertex shader? A vertex shader operates on vertices and thus doesn't have access to any data about triangles (and thus can't delete or create vertices).
This doesn't mean that vertex shader units on the GPU couldn't be used to do this.
 
I think there should be a new kind of shader (primitive shaders?) where new triangles can be generated. These can create triangles for things like tessellation, shadow volume generation and procedural geometry. The generated triangles are then fed into the vertex shaders, where the actual vertex positions can be applied (for displacement mapping, apply the displacement; for curved surfaces, find the actual vertex positions from the U/V coordinates generated by the primitive shaders; for shadow volume generation, extrude the vertices; etc.).
It would be nice to be able to send the final triangles to a new primitive shader + vertex shader. This will be needed, for instance, to generate shadow volumes for displaced, tessellated or procedurally generated meshes. If the hardware does not have enough units to do this, the temporary results can be saved into card memory.
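
Roughly how that split could look, sketched on the CPU (primitiveShader/vertexShader are just illustrative names, and the surface evaluation is a toy function, not anything proposed in an API):

```cpp
// CPU-side sketch of the two-stage idea: a "primitive shader" emits new
// triangles as U/V coordinates only, and the vertex shader turns each U/V
// into an actual position (here a toy curved patch plus a displacement).
#include <cmath>
#include <cstdio>
#include <vector>

struct UV   { float u, v; };
struct Vert { float x, y, z; };

// Primitive shader: split one triangle (given as corner UVs) into four.
std::vector<UV> primitiveShader(UV a, UV b, UV c) {
    UV ab = { (a.u + b.u) / 2, (a.v + b.v) / 2 };
    UV bc = { (b.u + c.u) / 2, (b.v + c.v) / 2 };
    UV ca = { (c.u + a.u) / 2, (c.v + a.v) / 2 };
    return { a, ab, ca,   ab, b, bc,   ca, bc, c,   ab, bc, ca };
}

// Vertex shader: evaluate the actual position from U/V
// (toy curved surface plus a procedural displacement along z).
Vert vertexShader(UV p) {
    float displacement = 0.1f * std::sin(10.0f * p.u);
    return { p.u, p.v, p.u * p.u + p.v * p.v + displacement };
}

int main() {
    std::vector<UV> tris = primitiveShader({0, 0}, {1, 0}, {0, 1});
    for (UV p : tris) {
        Vert v = vertexShader(p);
        std::printf("%f %f %f\n", v.x, v.y, v.z);
    }
}
```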
 
MDolenc said:
What's the point of having tessellation done in the vertex shader? A vertex shader operates on vertices and thus doesn't have access to any data about triangles (and thus can't delete or create vertices).
This doesn't mean that vertex shader units on the GPU couldn't be used to do this.

I agree that some connectivity information, more like 'closure derivatives' information, needs to be provided to the vertex shader, along with op_create and op_kill, for the shader to be able to carry out effectively different tessellation algorithms. The point is, where else would a screen-metric tessellator reside if not in the shader? We know the "classic" approach in this case is: one pass with vertex shaders, tessellate, second pass with shaders on the tessellated data - but doesn't that seem a bit awkward to you?
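
Just to illustrate what "screen-metric" means, here's a made-up CPU sketch of the kind of test such a tessellator would run (the projection and pixel budget are arbitrary): project the edge endpoints, measure the edge length in pixels, and split when it exceeds the budget.

```cpp
// Rough sketch of a screen-metric subdivision test (all numbers made up):
// an edge is split when its projected length exceeds some pixel budget.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Toy perspective projection to pixel coordinates (focal length 1, 512x512 viewport).
void projectToScreen(Vec3 p, float& sx, float& sy) {
    sx = (p.x / p.z) * 256.0f + 256.0f;
    sy = (p.y / p.z) * 256.0f + 256.0f;
}

bool shouldSplitEdge(Vec3 a, Vec3 b, float maxPixels) {
    float ax, ay, bx, by;
    projectToScreen(a, ax, ay);
    projectToScreen(b, bx, by);
    float len = std::sqrt((ax - bx) * (ax - bx) + (ay - by) * (ay - by));
    return len > maxPixels;
}

int main() {
    Vec3 nearA = { -1, 0, 2 },  nearB = { 1, 0, 2 };   // close to the camera
    Vec3 farA  = { -1, 0, 50 }, farB  = { 1, 0, 50 };  // far away
    std::printf("near edge split: %d\n", shouldSplitEdge(nearA, nearB, 16.0f)); // 1
    std::printf("far edge split:  %d\n", shouldSplitEdge(farA,  farB,  16.0f)); // 0
}
```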
 
Well, though I did vote for "within," I do think that the best way to do this would probably be to have a primitive processor before the vertex shader. I voted within the vertex shader because I was trying to leave the impression that tessellation should be programmable.

As a side note, it might be interesting to see if upcoming graphics architectures manage to find ways for programs to span across all of the different processor levels (of which, apparently, there will soon be three), moving data back and forth. I really don't know how this would be implemented, but it does seem like something that you'd want to be able to do.
 
darkblu said:
I agree that some connectivity information, more like 'closure derivatives' information, needs to be provided to the vertex shader, along with op_create and op_kill, for the shader to be able to carry out effectively different tessellation algorithms. The point is, where else would a screen-metric tessellator reside if not in the shader? We know the "classic" approach in this case is: one pass with vertex shaders, tessellate, second pass with shaders on the tessellated data - but doesn't that seem a bit awkward to you?
If you create or kill a vertex you'll also have to modify triangles (add them, remove them), which might not be an easy modification to do within the vertex shader model. Read what _GeLeTo_ has said, since that's what is going to happen.
 
In my opinion, the most obvious solution would be a repeatable two-stage process. The input to the shader would be a triangle and its three verts.

The first stage of the shader would create the new vertices and new triangles. It would probably contain a create_vertex function that could take a number of arguments that are passed to stage 2. The create_triangle func would then take 3 values, which would be the vertices to use for that triangle.

The second stage would then be used to interpolate the data for the new vertices that were created with a create_vertex call.
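
A CPU sketch of what such a two-stage shader might boil down to (create_vertex/create_triangle are illustrative only, not a real instruction set; the "arguments" passed to stage 2 here are barycentric weights):

```cpp
// CPU sketch of the repeatable two-stage idea above. create_vertex stores
// arguments (barycentric weights) that stage 2 later uses to interpolate
// the real attributes. Purely illustrative, not a real API.
#include <cstdio>
#include <vector>

struct Vert    { float x, y, z; };
struct Weights { float wa, wb, wc; };           // arguments passed to stage 2

std::vector<Weights> newVerts;                  // created in stage 1
std::vector<int>     newTris;                   // triangle index list

int create_vertex(float wa, float wb, float wc) {
    newVerts.push_back({ wa, wb, wc });
    return (int)newVerts.size() - 1;
}
void create_triangle(int i0, int i1, int i2) {
    newTris.insert(newTris.end(), { i0, i1, i2 });
}

// Stage 1: split the input triangle into four.
void stage1() {
    int a  = create_vertex(1, 0, 0), b = create_vertex(0, 1, 0), c = create_vertex(0, 0, 1);
    int ab = create_vertex(0.5f, 0.5f, 0), bc = create_vertex(0, 0.5f, 0.5f), ca = create_vertex(0.5f, 0, 0.5f);
    create_triangle(a, ab, ca); create_triangle(ab, b, bc);
    create_triangle(ca, bc, c); create_triangle(ab, bc, ca);
}

// Stage 2: interpolate real attributes from the stored arguments.
Vert stage2(Weights w, Vert A, Vert B, Vert C) {
    return { w.wa * A.x + w.wb * B.x + w.wc * C.x,
             w.wa * A.y + w.wb * B.y + w.wc * C.y,
             w.wa * A.z + w.wb * B.z + w.wc * C.z };
}

int main() {
    Vert A = { 0, 0, 0 }, B = { 1, 0, 0 }, C = { 0, 1, 0 };
    stage1();
    for (int i : newTris) {
        Vert v = stage2(newVerts[i], A, B, C);
        std::printf("%f %f %f\n", v.x, v.y, v.z);
    }
}
```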
 
My main question is: what would the programmable "primitive shader" look like? I guess what I mean is - assuming a programmable unit - what would the instruction set/programming model look like?

This paper implements adaptive subdivision for Catmull-Clark subdivision surfaces on the Imagine stream processor. They have this to say:

Investigating what functional units and operations would allow stream hardware to better perform subdivision would be an interesting topic for future research. Alternatively, our pipelines are implemented in programmable hardware, but due to its large computational costs and regular computation, subdivision may be better suited for special purpose hardware. Hybrid stream-graphics architectures, with high performance programmable stream hardware evaluating programmable elements such as shading and special-purpose hardware performing fixed tasks such as subdivision, may be attractive organizations for future graphics hardware.
 
My main question is: what would the programmable "primitive shader" look like? I guess what I mean is - assuming a programmable unit - what would the instruction set/programming model look like?

Well, I think a vague answer is really obvious: crappy.

That is, the first one to put out a programmable primitive processor will, in all likelihood, put out one that sucks. I'm really hoping that it is good enough for developers to start making use of it, though. Once game developers start using a primitive processor, the flaws in its design will become obvious, and the second iteration should be decent.

But, right away, I have a few ideas for a primitive processor:

1. Optimizations for polynomial calculations for appropriate interpolation.
2. Optimizations for attempts at "evening out" the area of each triangle of a tessellated surface.
3. Optimizations for properly interpolating vertex attributes.

All that's really needed, I feel, is to find math that is common among as many of today's HOS techniques as possible, and make a processor that is very good at that type of math. Currently I haven't been exposed to much in the way of HOS, but I believe most interpolation techniques revolve around some sort of polynomial scheme.
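
As a concrete example of that kind of polynomial math, here is de Casteljau evaluation of a cubic Bézier curve (a curve rather than a patch to keep it short; a surface patch just repeats this in two parameter directions):

```cpp
// Cubic Bezier evaluation via de Casteljau: repeated linear interpolation,
// which is the kind of polynomial math a primitive processor would want
// to be fast at. Illustration only.
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 lerp(Vec3 a, Vec3 b, float t) {
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
}

// Evaluate a cubic Bezier curve defined by control points p0..p3 at parameter t.
Vec3 bezier(Vec3 p0, Vec3 p1, Vec3 p2, Vec3 p3, float t) {
    Vec3 a = lerp(p0, p1, t), b = lerp(p1, p2, t), c = lerp(p2, p3, t);
    Vec3 d = lerp(a, b, t),   e = lerp(b, c, t);
    return lerp(d, e, t);
}

int main() {
    Vec3 c0 = { 0, 0, 0 }, c1 = { 1, 2, 0 }, c2 = { 2, 2, 0 }, c3 = { 3, 0, 0 };
    for (int i = 0; i <= 4; ++i) {
        Vec3 p = bezier(c0, c1, c2, c3, i / 4.0f);
        std::printf("t=%.2f -> %f %f %f\n", i / 4.0f, p.x, p.y, p.z);
    }
}
```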

As for the structure of the shader, here's an obvious way to do it:

1. Start with a primitive form. Triangles or quads will do, as a triangle can be represented as a degenerate quad. Have this "patch" allow for enough attributes to do some of the more complex tessellation techniques, plus some extra (who knows what other uses these attributes might find? Better to give programmers freedom, if possible, than not). These primitives should be passable in strips for optimization purposes.

2. Calculation could possibly focus on two areas: calculation of pre-perturbation positions (very simple, possibly unprogrammable at first), then calculation of the perturbation amount of each vertex. It may need to include a way to transform the patch into world space for rendering, depending on implementation.

3. Output should generally be a triangle or quad strip whose end matches properly with the input of the next patch in the "patch strip." This should allow for optimal rendering.

Anyway, I'm itching for flexible HOS to be available in a widely-usable form as soon as possible (personally I don't feel displacement mapping is all that great...but we'll see).
 
My opinion:

We need to give the vertex shader the ability to read and write large tables, just like pixel shaders. The output of a vertex shader could be put into a buffer and be used by another vertex shader, just as a pixel shader can write its output into a texture that is then used by another pixel shader later.
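
A CPU-level sketch of that chaining idea, with two "vertex shader" passes connected through a buffer (the pass functions and numbers are made up for illustration):

```cpp
// Sketch of chaining vertex work through a buffer ("render to vertex
// buffer"): pass 1 writes its results into a buffer, pass 2 reads that
// buffer as its vertex input.
#include <cstdio>
#include <vector>

struct Vert { float x, y, z; };

// Pass 1: a "vertex shader" that writes its results into a buffer
// instead of feeding primitive assembly directly (toy displacement).
std::vector<Vert> pass1Displace(const std::vector<Vert>& in) {
    std::vector<Vert> out;
    for (Vert v : in) out.push_back({ v.x, v.y, v.z + 0.25f });
    return out;
}

// Pass 2: another "vertex shader" that consumes the buffer produced by
// pass 1 (here: extrudes each vertex away from a light, as for shadow volumes).
std::vector<Vert> pass2Extrude(const std::vector<Vert>& in, Vert light) {
    std::vector<Vert> out;
    for (Vert v : in)
        out.push_back({ v.x + (v.x - light.x), v.y + (v.y - light.y), v.z + (v.z - light.z) });
    return out;
}

int main() {
    std::vector<Vert> mesh   = { {0, 0, 0}, {1, 0, 0}, {0, 1, 0} };
    std::vector<Vert> buffer = pass1Displace(mesh);          // stored in card memory
    std::vector<Vert> volume = pass2Extrude(buffer, { 0, 0, -5 });
    for (Vert v : volume) std::printf("%f %f %f\n", v.x, v.y, v.z);
}
```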
 
We need one shader to rule them all, vertices go in ... fragments come out (and an explicitly parallel shader language of course, compilers aren't that good).
 
I'd go for "minimal reconfiguration" of both major APIs' pipelines.
Currently, the VS outputs clip-space vertices to the primitive assembly stage. Let's assume that the application has the capability to reconfigure the pipeline so that VS output is sent to a tessellator instead, with the assertion that the VS doesn't do the projection transform.
Such a tessellator would then have a batch of eye-space vertices as input, along with their connectivity information. It would do tessellation AND the projection transform, and output clip-space vertices with new connectivity information to the regular primitive assembly stage.
So with respect to the current pipeline configuration, it would "intercept" vertices and their connectivity data before the primitive assembly stage. This could even be incorporated into OGL1.4 as a vendor extension, without changing the structure of the whole API.
This would basically fall into the 3rd category. Benefit: the stream-processing nature of current vertex shaders would be preserved. Also, depth-adaptive tessellation in eye space is trivial. No problems with skinning etc.
Also, computing resources would be used sparingly (we don't run vertex shaders on all the tessellated vertices).
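
A toy sketch of what such an intercepting tessellator stage could do (depth thresholds and the projection are made up; a real unit would of course handle connectivity and cracks properly):

```cpp
// Sketch of the "intercept before primitive assembly" idea: the tessellator
// receives eye-space triangles, picks a subdivision level from depth, and
// only then applies the projection to produce clip-space triangles.
#include <cstdio>
#include <vector>

struct Vec4 { float x, y, z, w; };
struct Tri  { Vec4 a, b, c; };

// Depth-adaptive level: closer triangles get more subdivision.
int levelFromDepth(const Tri& t) {
    float z = (t.a.z + t.b.z + t.c.z) / 3.0f;
    return z < 5.0f ? 3 : (z < 20.0f ? 1 : 0);
}

Vec4 mid(Vec4 p, Vec4 q) { return { (p.x+q.x)/2, (p.y+q.y)/2, (p.z+q.z)/2, 1.0f }; }

// One 1:4 split; a real unit would recurse 'level' times.
void split(const Tri& t, std::vector<Tri>& out) {
    Vec4 ab = mid(t.a, t.b), bc = mid(t.b, t.c), ca = mid(t.c, t.a);
    out.insert(out.end(), { {t.a, ab, ca}, {ab, t.b, bc}, {ca, bc, t.c}, {ab, bc, ca} });
}

// Toy projection to clip space (focal length 1, near/far planes omitted).
Vec4 project(Vec4 p) { return { p.x, p.y, p.z, p.z }; }

int main() {
    Tri eyeTri = { { -1, 0, 2, 1 }, { 1, 0, 2, 1 }, { 0, 1, 3, 1 } };
    std::vector<Tri> tessellated;
    if (levelFromDepth(eyeTri) > 0) split(eyeTri, tessellated);
    else                            tessellated.push_back(eyeTri);
    for (Tri& t : tessellated) {              // hand clip-space triangles on
        t.a = project(t.a); t.b = project(t.b); t.c = project(t.c);
        std::printf("tri w: %f %f %f\n", t.a.w, t.b.w, t.c.w);
    }
}
```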

EDIT:
Very good read on overall modern 3D pipeline design
http://graphics.stanford.edu/papers/jowens_thesis/jowens_thesis.pdf
 
MfA said:
We need one shader to rule them all, vertices go in ... fragments come out (and an explicitly parallel shader language of course, compilers aren't that good).

Hmm... Does that mean everything (including fragment assembly, color/texture coordinate/... interpolation/computation) is programmable? Is that feasible in the relatively near future?
 