Actually, today’s graphics processors can create new triangles and, in fact, must do so whenever line or point sprite primitives are used. Most consumer graphics processors can only rasterize triangles, which means all lines and point sprites must, at some point, be converted to triangles. Since both a line and a point sprite end up turning into two triangles, producing anywhere from two to six times as many vertices (depending on the indexing method), it’s best if this conversion happens as late as possible. This is beneficial because these are, in essence, the exact same operations required for shadow volumes. All that’s required is to make this section of the pipeline programmable, and a whole set of previously blocked scenarios becomes possible without relying on the host processor. Microsoft calls this the “Topology Processor”, and it should allow shadow volume and fur fin extrusions to be done completely on the graphics processor, along with proper line mitering, point sprite expansion and, apparently, single-pass render-to-cubemap.
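The point-sprite-to-triangle conversion described above can be illustrated on the host side. This is a minimal sketch of the idea, not any real API; the function name and data layout are my own:

```python
# Sketch: expanding a point sprite into two triangles -- the same kind of
# primitive amplification the article says a topology processor would make
# programmable. Illustrative only; names and layout are not a real API.

def expand_point_sprite(center, half_size):
    """Turn one point sprite into 4 corner vertices and 2 indexed triangles."""
    cx, cy = center
    h = half_size
    corners = [
        (cx - h, cy - h),  # bottom-left
        (cx + h, cy - h),  # bottom-right
        (cx + h, cy + h),  # top-right
        (cx - h, cy + h),  # top-left
    ]
    # Indexed form: 4 vertices, 6 indices (two triangles sharing a diagonal).
    # Without indexing, the same quad would need 6 full vertices instead.
    indices = [0, 1, 2, 0, 2, 3]
    return corners, indices

corners, indices = expand_point_sprite((0.0, 0.0), 1.0)
```

One input vertex becomes four (indexed) or six (non-indexed) output vertices, which is why doing the expansion late in the pipeline keeps the vertex traffic down.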
Logically, the topology processor is separate from the tessellation unit. It is conceivable, however, that a properly designed programmable primitive processor could handle both sets of operations.
Higher-order surfaces were first introduced to DirectX in version 8, and at first a lot of hardware supported them (nVidia in the form of RT-Patches, ATI in the form of N-Patches), but they were so limited and such a pain to use that very few developers took any interest in them at all. Consequently, all the major hardware vendors dropped support for higher-order surfaces and all was right in the world; until, that is, DirectX 9 came about with adaptive tessellation and displacement mapping. Higher-order surfaces were still a real pain to use, and still very limited, but displacement mapping was cool enough to overlook those problems, and several developers started taking interest. Unfortunately, hardware vendors had already dropped support for higher-order surfaces, so even the developers that took an interest in displacement mapping were forced to abandon it for lack of hardware support. To be fair, the initial implementation of displacement mapping was a bit Matrox-centric, so it is really no great surprise that there isn’t much hardware out there that supports it (even Matrox dropped support). With pixel and vertex shader 3.0 hardware on its way, hopefully things will start to pick back up in the higher-order surface and displacement mapping realm, but there’s still the problem of the limitations in all current DirectX higher-order surface formulations.
It’d be great if hardware simply supported all the common higher-order surface formulations directly: Catmull-Rom, Bezier, and B-splines, subdivision surfaces, all the conics, and the rational versions of everything. It’d be even better if all of these could be adaptively tessellated. If DirectX supported all of these higher-order surfaces, there wouldn’t be much left standing in the way of their use – you could import higher-order surface meshes directly from your favorite digital content creation application without all the problems of the current system. Thankfully, this is exactly what Microsoft is doing for DirectX Next. Combine that with displacement mapping and the new topology processor, and there’s no longer any real reason not to use these features (assuming, of course, that the hardware supports them).
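To make one of those formulations concrete, here is a small sketch of evaluating a cubic Bezier curve with de Casteljau’s algorithm, the kind of evaluation a tessellator performs. Sampling here is uniform, not adaptive, for simplicity; the function names are my own:

```python
# Sketch: evaluating a cubic Bezier curve by de Casteljau's algorithm --
# one of the higher-order surface formulations the text hopes hardware
# will tessellate directly. Uniform sampling only, for clarity.

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = list(control_points)
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

def tessellate(control_points, segments):
    """Uniformly sample the curve into segments + 1 vertices."""
    return [de_casteljau(control_points, i / segments)
            for i in range(segments + 1)]

ctrl = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
verts = tessellate(ctrl, 8)
```

An adaptive tessellator would vary the sample density with curvature or screen size instead of using a fixed segment count.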
DaveBaumann said:
Dual, cascaded vertex shaders
JonPeddie.com said:
Slides from Computer Graphics World webcast, July 2004: "Professional Graphics"
Ailuros said:
Anyone care to speculate, how one could place a GS in a pipeline with unified PS/VS units?

A queue sounds right.
Chalnoth said:
I don't know if it'd be anything special. You'd just make it so that there is an option in the instruction set to allow the output of the combined vertex/pixel shader to enter the geometry shader, which would then generate data to be sent back to the vertex/pixel shader (though there may also be some other hardwired stuff in between, like triangle setup).
Chalnoth said:
Well, I was finally able to download the slide (the download kept stopping right before finishing when I attempted it at home). Anyway, what seemed interesting to me was that the geometry shader, which was apparently not optional, was set between the vertex shader and the rasterizer.

This would seem to indicate a system whereby the vertex shader deals with higher-order surface data directly, and all transformation, texture coordinate generation, and whatever else the vertex shader is tasked with doing is done before the surface is divided into its component triangles, which are then sent to the rasterizer.

But the tessellation of higher-order surfaces is an optional part. Although, judging by the description from the WinHEC slides, it could very well be possible to use the geometry shader for tessellation.
Xmas said:
But the tessellation of higher-order surfaces is an optional part. Although, judging by the description from the WinHEC slides, it could very well be possible to use the geometry shader for tessellation.

From what I can tell, the geometry shader could be used as a tessellator, but such an architecture would be incapable of any per-vertex computation afterwards: the only available per-vertex data would be interpolated in the geometry shader.
Chalnoth said:
From what I can tell, the geometry shader could be used as a tessellator, but such an architecture would be incapable of any per-vertex computation afterwards: the only available per-vertex data would be interpolated in the geometry shader.

Just have the geometry shader do the per-vertex calculations after it's done its geometry shading.
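To make the point about interpolated attributes concrete, here is a sketch of the kind of fixed subdivision a geometry shader could perform: split one triangle into four at its edge midpoints. The new vertices only get linearly interpolated data, which is exactly the limitation being discussed. Function names are illustrative, not any real API:

```python
# Sketch: one level of edge-midpoint subdivision, the sort of tessellation
# a geometry shader could do. New vertices carry only interpolated
# (averaged) positions -- no fresh per-vertex computation.

def midpoint(a, b):
    """Linearly interpolate two vertices at t = 0.5."""
    return tuple((x + y) / 2 for x, y in zip(a, b))

def subdivide(tri):
    """One triangle in, four triangles out (edge-midpoint split)."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

tris = subdivide(((0.0, 0.0), (2.0, 0.0), (0.0, 2.0)))
```

Running any per-vertex calculation after this step, as suggested above, would let the new midpoint vertices be shaded properly instead of just inheriting averaged attributes.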