Poll: tessellation

Where in the 3D pipeline should tessellation occur with a good(TM) immediate-mode API?

  • "Within" vertex shaders

    Votes: 0 0.0%
  • PerPixel

    Votes: 0 0.0%
  • All of above are bollocks, we should use micropolygons or something even more l33t

    Votes: 0 0.0%

  • Total voters
    170
pcchen said:
MfA said:
We need one shader to rule them all, vertices go in ... fragments come out (and an explicitly parallel shader language of course, compilers aren't that good).

Hmm... Does that mean everything (including fragment assembly, color/texture coordinate/... interpolation/computation) is programmable? Is that feasible in relatively near future?

Isn't this the goal of DX10 (more or less)? I remember a dev slide talking about how, in the future, one should think of vertex and pixel shaders as being much more alike in ins/ops. (It might even have been directed at DX9; I can't remember.)
 
I think there should be:

vertex shader 1 -> tessellation -> vertex shader 2

VS1 could be where most of the transformation happens (including skinning). It should also calculate the per-vertex tessellation factor.
The tessellation stage should be able to handle different tessellation levels on adjacent vertices without cracking. (At least the Parhelia already does that.)

VS2 could do the projective transformation, lighting, and texgen.
This setup already exists on hardware that has adaptive tessellation, except that the VS1 stage is fixed-function, not programmable.


The closest choice was "Within" so that's what I voted for.
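For what it's worth, here is a rough CPU-side sketch of that VS1 -> tessellation -> VS2 split. All the names and the distance heuristic are made up; it is only a sketch of the idea, not any real shader API.

[code]
#include <cmath>
#include <vector>

// Toy CPU-side model of the proposed split: VS1 -> tessellator -> VS2.
struct Vec3   { float x, y, z; };
struct Vertex { Vec3 pos; float tessFactor; };

// VS1: skinning/world transform would go here, plus the per-vertex
// tessellation factor (a simple distance-based heuristic as an example).
Vertex vertexShader1(Vertex v, Vec3 camera) {
    float dx = v.pos.x - camera.x, dy = v.pos.y - camera.y, dz = v.pos.z - camera.z;
    float dist = std::sqrt(dx*dx + dy*dy + dz*dz);
    v.tessFactor = std::fmax(1.0f, 16.0f / (1.0f + dist));   // closer = finer
    return v;
}

Vertex midpoint(const Vertex& a, const Vertex& b) {
    return { { (a.pos.x + b.pos.x) * 0.5f,
               (a.pos.y + b.pos.y) * 0.5f,
               (a.pos.z + b.pos.z) * 0.5f },
             (a.tessFactor + b.tessFactor) * 0.5f };
}

// Tessellator: recursive 1-to-4 subdivision; the driver would derive the
// level from the three factors (e.g. the maximum).  A real tessellator
// would key subdivision off per-edge factors so that an edge shared by
// two patches always produces identical vertices -- that is what avoids
// cracking between different tessellation levels.
void tessellate(Vertex a, Vertex b, Vertex c, int level, std::vector<Vertex>& out) {
    if (level <= 0) { out.push_back(a); out.push_back(b); out.push_back(c); return; }
    Vertex ab = midpoint(a, b), bc = midpoint(b, c), ca = midpoint(c, a);
    tessellate(a, ab, ca, level - 1, out);
    tessellate(ab, b, bc, level - 1, out);
    tessellate(ca, bc, c, level - 1, out);
    tessellate(ab, bc, ca, level - 1, out);
}

// VS2: projection, lighting and texgen on every generated vertex.
Vertex vertexShader2(Vertex v /*, view-projection matrix, lights, ... */) {
    // ... projective transform, lighting, texture-coordinate generation ...
    return v;
}
[/code]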
 
The longer you stretch out the pipeline, the lower the utilization of the functional units within each stage will get ... and this at a point where almost every part of the pipeline is working with single-precision floating-point values, all doing the same type of calculations.
 
I would say that, as far as the conceptual view of the API is concerned, tessellation should be an operation that occurs before vertex shading.

Whether the act of tessellation and the act of vertex shading share the same hardware resources... well... that depends on the performance/area trade-off of that feature.

Seems to me, though, that if you're tessellating a primitive you're creating a huge number of vertices which all need to be "shaded". That will be enough work for the vertex shaders without also loading them up with the job of generating the vertices in the first place. It's also a nice conceptual split.
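To put rough numbers on that amplification: uniformly tessellating one triangle so that each edge is split into n segments produces n^2 small triangles and (n+1)(n+2)/2 unique vertices, all of which still need to be shaded. A quick sanity check (hypothetical levels, just for the arithmetic):

[code]
#include <cstdio>

// Back-of-the-envelope amplification for uniformly tessellating one
// triangle: each edge split into n segments gives n^2 triangles and
// (n+1)(n+2)/2 unique vertices to shade.
int main() {
    for (int n : {1, 4, 8, 16}) {
        int tris  = n * n;
        int verts = (n + 1) * (n + 2) / 2;
        std::printf("level %2d: %4d triangles, %4d vertices to shade\n",
                    n, tris, verts);
    }
    return 0;
}
// e.g. level 16: 256 triangles and 153 vertices, from a single input triangle.
[/code]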
 
Putting tessellation in vertex shaders is a very bad idea.

The whole point of vertex shading is to shade the input vertex however you want and output one vertex. There are several very good reasons why vertices can't be created or destroyed in a vertex shader (the person who said it was only done that way to save transistors was so incredibly off-base it isn't even funny).

Tessellation, high-order surfaces, and displacement mapping (granted, displacement mapping is a type of high-order surface... actually, couldn't pretty much every high-order surface be done as a displacement map?) are all very different subjects, and a lot of people here seem to be thinking of them as one and the same. Tessellation doesn't change the geometry: a tessellated quad is still a quad, just with more triangles.

Vertex shader-based displacement is easy (it can be done on VS 1.1 and up), and the design for vertex shaders is all but completely finalized and standardized (there will likely be a few minor changes to the functionality over the years). Vertex shaders should not have anything to do with tessellation, in the same way that they should not, and do not, have anything to do with fragments.

The design most people (by people I mean graphics companies) are going for now is exactly what they should do (and is really the only sane choice): tessellate before the vertex shaders, then shade the vertices normally. Whenever the proposed implementation of a feature requires changing ANYTHING else, there should be a very loud alarm going off in your head that says "BAD IDEA!". Dropping in a (possibly programmable) tessellator before the vertex shader is the only way to ensure the most painless integration with the greatest flexibility.

Perhaps what was meant by 'do it in the vertex shader' was just that you wanted programmable tessellation? Well, that should obviously be done with a separate program/processor.
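For contrast, the vertex-shader displacement mentioned above really is a one-in/one-out operation: the shader moves the vertices it is handed along their normals and never creates new ones. A minimal sketch (made-up names; on VS 1.1-class hardware the height would arrive as a pre-sampled per-vertex stream, since that vertex shader cannot fetch textures):

[code]
struct Vec3 { float x, y, z; };

struct DisplacedVertex {
    Vec3  pos;      // base mesh position
    Vec3  normal;   // displacement direction
    float height;   // pre-sampled displacement value, fed in per vertex
};

// Vertex-shader-style displacement: one vertex in, one vertex out.
// No vertices are created or destroyed -- the mesh must already be
// tessellated finely enough to show the detail.
DisplacedVertex displace(DisplacedVertex v, float scale) {
    v.pos.x += v.normal.x * v.height * scale;
    v.pos.y += v.normal.y * v.height * scale;
    v.pos.z += v.normal.z * v.height * scale;
    return v;
}
[/code]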
 
They can always put a StraightJacketAPI on top of the hardware, no matter how they implement it ...
 
LeStoffer said:
Isn't this the goal of DX10 (more or less)? I remember a dev slide talking about how, in the future, one should think of vertex and pixel shaders as being much more alike in ins/ops. (It might even have been directed at DX9; I can't remember.)

To my understanding, one of the goals of DX10 is a unified instruction set for both the pixel shader and the vertex shader. However, pixel shaders and vertex shaders are still separate units.

A "one shader rules them all" solution will have much higher flexbility... basically you are not limited to triangles anymore. However, the practical value of programmable fragment assembling is not very high IMHO.
 
A one-shader approach wouldn't be very ideal either, since it would mean huge amounts of duplicated, redundant code.

What if you wanted to use a different vertex shader with the already selected pixel and tessellator shaders (a theoretical case, of course)? That would mean writing a brand new shader to change one small section.

I can think of many, many cases in which a different tessellator might be wanted for different geometry but with the same vertex and pixel shaders. If the one-shader approach were used, that would mean writing an entire 'one shader' when only half a dozen instructions change.

The number of shaders would explode, wasting the most important resource of all - the developer's time.
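The arithmetic behind that explosion is straightforward: monolithic "one shaders" need one program per combination of stages, while separately linkable stages need one program per stage. With purely hypothetical library sizes:

[code]
#include <cstdio>

// Hypothetical library sizes, purely to illustrate the combinatorics.
int main() {
    int tessellators = 10, vertexShaders = 20, pixelShaders = 30;

    // Monolithic "one shader": every combination is its own program.
    int monolithic = tessellators * vertexShaders * pixelShaders;   // 6000

    // Separate, linkable stages: write each piece once.
    int modular = tessellators + vertexShaders + pixelShaders;      // 60

    std::printf("monolithic programs: %d, modular programs: %d\n",
                monolithic, modular);
    return 0;
}
[/code]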
 
This problem (the large number of shaders) can be solved by a good linker, especially considering that the shaders are able to make subroutine calls.
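One way to picture that linker (a CPU-side analogy with invented names, not a real shader toolchain): each stage is authored as an independent routine, and the final "one shader" is just a small driver that subroutine-calls whatever got linked in.

[code]
#include <vector>

struct Vertex { float x, y, z; };

// Each stage is authored once, as a separate routine.
using TessellateFn  = std::vector<Vertex> (*)(const std::vector<Vertex>&);
using VertexShadeFn = Vertex (*)(const Vertex&);

// The "one shader" becomes a tiny driver that subroutine-calls whatever
// the linker plugged in; swapping the tessellator never touches the rest.
struct LinkedProgram {
    TessellateFn  tessellate;
    VertexShadeFn shadeVertex;

    std::vector<Vertex> run(const std::vector<Vertex>& patch) const {
        std::vector<Vertex> shaded;
        for (const Vertex& v : tessellate(patch))   // linked-in tessellator
            shaded.push_back(shadeVertex(v));       // linked-in vertex shader
        return shaded;
    }
};
[/code]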
 
I like the one shader to rule them all approach.

That doesn't mean you wouldn't have dedicated units for things like rasterization, texture fetch/filter, interpolation, Hierarchical Z, FSAA, or what have you. It just means that data could be fed to and received from each of these units from a "central" processing location. Basically, the focus of the chip would be a large, flexible, and fast array of programmable elements, with dedicated hardware for tasks common and/or beneficial to most rendering pipelines.
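A very loose sketch of that arrangement (every type and name below is invented for illustration): the programmable array runs the shader program and simply hands specific jobs, such as texture fetch/filter, to the dedicated units.

[code]
// Loose sketch of the "central processing location" idea.
struct Texel { float r, g, b, a; };

// A dedicated unit exposed as a request/response service.
struct TextureUnit {
    Texel fetchFiltered(float u, float v) const {
        // real hardware would do the addressing and filtering here
        return { u, v, 0.0f, 1.0f };
    }
};

// The programmable array: runs shader code, offloads to dedicated units.
struct ShaderCore {
    const TextureUnit* tex = nullptr;   // wired to the fixed-function blocks

    Texel shade(float u, float v) const {
        Texel t = tex->fetchFiltered(u, v);  // hand the fetch/filter job off
        t.r *= 0.5f;                         // programmable math stays here
        return t;
    }
};
[/code]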
 