Scali said:
The way I understood it, the subdivision is done first; then when a triangle is < 0.25 pixels large (a so-called micropolygon), it is rendered as a single pixel with the shader (using the triangle normal, so essentially it performs flat shading). This means that the vertices aren't used in the shading process at all; they are just temporary data during subdivision (which is a form of interpolation).
Let me quote Tony Apodaca from the Advanced RenderMan book:
"Dicing converts the small primitive into a common data format called a grid. A grid is a tessellation of the primitive into a rectangular array of quadrilateral facets known as micropolygons (because of the geometry of the grid, each facet is actually a tiny bilinear patch, but we call it a micropolygon nonetheless). The vertices of these facets are the points that will be shaded later... generally, the facets will be on the order of one pixel in area.
...the ShadingRate of an object refers to the frequency with which the primitive must be shaded (actually measured by sample area in pixels). For example, a shadingrate of 1.0 specifies one shading sample per pixel.
(BTW a shadingrate of 0.5 means not 2, but 4 samples! And it's not adaptive, but it's stochastic sampling. -LY) In the Reyes algorithm, this constraint translates into micropolygon size. During the dicing phase, an estimate of the raster space size of the primitive is made, and this number is divided by the shadingrate to determine the number of micropolygons that must make up a grid."
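To make that last relationship concrete, here is a minimal sketch of the division the quote describes (the helper name is hypothetical; PRMan's actual size estimate is more involved):

```python
def micropolygon_count(raster_area_pixels, shading_rate=1.0):
    """Estimate how many micropolygons a grid needs so that each one
    covers roughly `shading_rate` pixels of screen area.
    Hypothetical helper, not PRMan's actual estimator."""
    return max(1, round(raster_area_pixels / shading_rate))

# A primitive covering ~400 pixels of raster area at ShadingRate 1.0
# dices to ~400 micropolygons; lowering the rate to 0.25 quadruples that.
print(micropolygon_count(400, 1.0))   # 400
print(micropolygon_count(400, 0.25))  # 1600
```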
In short, PRMan first tests the bounding box of each primitive; if it's too large to dice directly, it gets split into smaller parts, usually along parametric edges, and if it's entirely off-screen, it gets culled.
Once the splitting loop is done, each primitive is processed independently. They're diced into grids, and then PRMan shades one grid at a time, using SIMD evaluation, to conserve memory. Grids are first displaced, then the surface shader is evaluated and surface color and opacity are assigned to each vertex. Hidden surface removal follows: the grid is torn apart into individual micropolygons, which are then bounded, visibility-tested and perhaps culled. Each micropolygon is individually tested against the predetermined sampling points of every pixel that might contain it, and the color at the samples may be interpolated or not. The sample points gather visible-point data and combine them into the final pixel using a reconstruction filter.
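The overall structure of that pipeline can be sketched in Python-flavored pseudocode (not runnable; every helper name here is a hypothetical placeholder, and displacement, opacity and filter details are omitted):

```python
# Heavily simplified Reyes-style loop; all names are hypothetical.
def render(primitives, screen):
    work = list(primitives)
    while work:                        # bound/split loop
        prim = work.pop()
        if not screen.overlaps(prim.bound()):
            continue                   # off-screen: cull
        if prim.too_large_to_dice():
            work.extend(prim.split())  # split along parametric edges
        else:
            grid = prim.dice()         # rectangular grid of micropolygons
            grid.displace()            # displacement shader runs first
            grid.shade()               # surface shader: color/opacity per vertex
            for mp in grid.break_into_micropolygons():
                for sample in screen.samples_overlapping(mp.bound()):
                    sample.add_visible_point(mp)   # depth-resolved later
    screen.filter_samples()            # reconstruction filter -> final pixels
```

Shading one grid at a time is what keeps the memory footprint bounded: only the grid currently being shaded needs its full per-vertex data resident.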
But displacement looks ugly if your polygons are larger than a pixel...
I'm not sure if I understand what you mean. Displacing vertices won't change anything drastically, right? A continuous mesh will still be continuous; it would just have a more detailed surface. Or perhaps you mean sampling problems? But that would depend more on the vertex-to-displacement-map-texel ratio?
First, think about how you're displacing: with a texture map. It's obvious that if you have fewer vertices than used texels in the texture, then you lose data. So for the 2K textures commonly used in UE3, you'd require 2048*2048*0.9 (UV-space efficiency) = 3.77 million vertices to display all the data. And this might not be an ideal distribution of the geometry; also, several parts of the model might be using the same UV space, thus requiring even more vertices to display the displacement properly.
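Spelling out that back-of-the-envelope arithmetic (the 0.9 UV-efficiency factor is the figure assumed above, not a universal constant):

```python
def vertices_for_full_displacement(texture_size, uv_efficiency=0.9):
    """Vertices needed to capture every used texel of a square
    displacement map, assuming roughly one vertex per texel.
    Hypothetical helper for a rough estimate only."""
    return int(texture_size * texture_size * uv_efficiency)

print(vertices_for_full_displacement(2048))  # 3774873, i.e. ~3.77 million
```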
But even such a monstrous amount of geometry might not be enough for close-up views when you have high-frequency data in the displacement map. We're talking here about wrinkles, pores, scales etc., which can already be displayed with normal maps. For such detail you have to subdivide down to micropolygons; and for proper antialiasing you need more than 1 sample per pixel.
Of course there are uses for displacement mapping that do not require detailed geometry, like simulating an earthquake on a ground plane or such. But to add detail to models, you're pretty much required to tessellate to at least 1 vertex per texel.
(Another solution might be to combine displacement and normal mapping, by using a low-frequency 256 or 512 displacement map and a high-frequency 2K normal map - but there are surely some complications, at least during the content-creation phase. And even this solution requires at least a few hundred thousand polygons...).