*spin-off* Tessellation

There has been some work on compute tessellation:
Fractional Reyes-Style Adaptive Tessellation for Continuous Level of Detail
D3D11 software tessellation (talk, pdf)
It seems hardware tessellation is not up to snuff for most applications, so is it a waste of transistor space? I once asked in some thread whether future GPUs could have hardware specific to re-projecting and re-using information from prior buffers, and the common stance was that spending that transistor space on general computation was more feasible. Will we see hardware tessellators go the way of the dodo (like fixed-function T&L)?
 
Well, it's more used than the geometry shader thingy ^^ ;)

We should see with this generation of games/consoles how much it ends up being used, and whether or not it was wasted silicon...
 

I mean, if one can do it through compute units more efficiently and in more flexible ways (I'm betting GT6-style smooth transitioning tessellation may not be(?) possible on hardware tessellation units, but I'd be glad if someone could inform us here), why would you want to use hardware tessellation?
 
Hull/domain shaders (pre- and post-tessellation shaders) run on exactly the same compute units as pixel/vertex/geometry/compute shaders. These stages of tessellation are fully programmable. The only things that are not programmable are the calculation of the barycentric coordinates of the amplified geometry and its connectivity (both are fixed). Smooth (continuous) tessellation is supported by the fixed-function tessellation block (domain shader invocation barycentric coordinates are interpolated from triangle edges to the center).
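A minimal CPU-side sketch of this split, assuming a uniform triangle domain (the function names `fixed_function_tessellator` and `domain_shader` are illustrative, not the D3D11 API): the fixed-function part only emits barycentric coordinates for a given tessellation factor, while the programmable domain-shader part evaluates the actual surface (here a quadratic Bézier triangle patch) at each coordinate.

```python
# Illustrative CPU analogue of the D3D11 split between the fixed-function
# tessellator and the programmable domain shader. All names are made up
# for this sketch; this is not the real D3D11 API.

def fixed_function_tessellator(factor):
    """Emit barycentric coordinates (u, v, w) for a uniformly
    tessellated triangle domain. This part is fixed in hardware."""
    coords = []
    for i in range(factor + 1):
        for j in range(factor + 1 - i):
            u = i / factor
            v = j / factor
            coords.append((u, v, 1.0 - u - v))
    return coords

def domain_shader(patch, u, v, w):
    """Fully programmable: evaluate a quadratic Bezier triangle patch
    (6 control points) at the tessellator-supplied barycentrics."""
    b200, b020, b002, b110, b101, b011 = patch
    return tuple(
        u * u * b200[k] + v * v * b020[k] + w * w * b002[k]
        + 2 * u * v * b110[k] + 2 * u * w * b101[k] + 2 * v * w * b011[k]
        for k in range(3)
    )

# A flat patch: mid-edge control points at the edge midpoints, so the
# quadratic patch reproduces the plain triangle exactly.
p0, p1, p2 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
mid = lambda a, b: tuple((x + y) / 2 for x, y in zip(a, b))
patch = (p0, p1, p2, mid(p0, p1), mid(p0, p2), mid(p1, p2))

verts = [domain_shader(patch, u, v, w)
         for (u, v, w) in fixed_function_tessellator(8)]
```

Any tessellation scheme that can be expressed as "evaluate a function of the patch data at these barycentrics" fits in the programmable part; only the sampling pattern itself is baked into hardware.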

With the DX11 tessellation pipeline you can program many different kinds of tessellation schemes. Some schemes are not as efficient as others. Hopefully future GPUs will support flexible user-defined pipelines, with custom data flow and fine-grained (programmable) thread dispatch.
 
A real NURBS setup submits patch data to the tessellator, not triangles, and it's pretty much standard.
Is NURBS really that popular these days? Given that (AFAICR) it needs a quad-based topology which, combined with non-uniform detail, makes it a bit of a nuisance to use, I would have thought subdivision surfaces would be a better option.
 

Depends. I have worked with industrial designers, and there are pure parametric-surface CSG trees (Rhino, SolidEdge). I myself, as a hobby, have developed ship hulls with spline patches, which is of course basically how real ship hulls were developed a hundred years ago.
Parametric modelling and sculpting are two very different things. In general I prefer parametric modelling because it is non-destructive and progressive, and it's easy to make interactive models, because you can control parameters programmatically and then just re-evaluate the mesh. LOD is also inherently available: your visible base model is in fact already a level of the "real" model.
You can still sculpt on top of parametric models and also rig either one, but my experience is that most artists prefer going full free-style, instead of doing models like a programmer. :)
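The inherent-LOD point above can be sketched in a few lines of Python (the helpers `bezier3` and `tessellate` are hypothetical names for this sketch, not any real tool's API): the parametric model is the single source of truth, and each level of detail is just a re-evaluation of it at a different sample count.

```python
# Illustrative sketch: a parametric model is evaluated, not stored as a
# fixed mesh, so every LOD comes from the same underlying data.

def bezier3(ctrl, t):
    """De Casteljau evaluation of a cubic Bezier segment at parameter t."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

def tessellate(ctrl, segments):
    """Re-evaluate the same curve at any resolution: inherent LOD."""
    return [bezier3(ctrl, i / segments) for i in range(segments + 1)]

# One "real" model, two levels of detail from the same four control points.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
coarse = tessellate(ctrl, 4)   # low-detail level
fine = tessellate(ctrl, 64)    # high-detail level of the same model
```

Nothing is baked or decimated: the coarse mesh is literally a sparser sampling of the same parametric object, which is what makes the interactive, non-destructive workflow described above possible.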
 
Subdivision surfaces are effectively parametric surfaces but just allow much more flexible topology.
 