I worked on the Xenomorph demo, so I can probably answer a few questions. And since my boss gave a talk at GDC about it, it's probably even safe for me to do so.
So here goes...
What makes the Xenomorph demo interesting is that the artists creating the creatures did not have to work under the symmetry requirements normally associated with creating morphable characters. The creatures were built with completely different vertex counts, triangle counts, materials and skeletal structures.
Here's a quick review of how morphing is normally accomplished:
1) Two characters are modeled to have exactly the same number of vertices, with the triangles connecting those vertices in exactly the same way. This is usually done by modeling one character, then copying it and pushing the copy's vertices around to form the second character.
2) The two vertex buffers of the two characters are sent to the hardware in parallel. Each vertex is presented to the vertex shader as the union of the data in both buffers. In other words, what the vertex shader sees is a double-size vertex with 2 position values, 2 normal values, etc. Although the hardware can read from 2 vertex buffers simultaneously, it can only read from 1 index buffer at a time. Thus the 2 vertex buffers must be ordered completely symmetrically, because they are going to be indexed identically to form the polygons.
3) The vertex shader takes the 2 position values and linearly interpolates them according to a shader parameter that goes from 0.0 to 1.0 over the timeline of the morph (see the sketch after this list). Rinse and repeat for the other morphing attributes.
4) The interpolated position and normal are animated by the skeleton, hence the requirement that the skeletal structures match.
5) The pixel shader blends the characters' textures before applying lighting.
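To make steps 2-4 concrete, here's a minimal HLSL vertex shader sketch of the classic approach (normals and lighting omitted). The stream layout, constant names, and 4-bone palette skinning are my assumptions for illustration, not anybody's actual shipping code:

float4x4 WorldViewProj;
float4x3 Bones[64];      // one shared skeleton, i.e. the symmetry requirement
float    MorphAmount;    // 0.0 = character A, 1.0 = character B

struct VS_IN
{
    float4 PosA    : POSITION0;      // stream 0: character A's vertex buffer
    float4 PosB    : POSITION1;      // stream 1: character B's vertex buffer
    float4 Weights : BLENDWEIGHT0;   // shared skinning data
    int4   Indices : BLENDINDICES0;
};

float4 main(VS_IN v) : POSITION
{
    // Step 3: interpolate the two base poses...
    float4 pos = lerp(v.PosA, v.PosB, MorphAmount);

    // Step 4: ...then animate the blended vertex with the common skeleton.
    float3 skinned = 0;
    for (int i = 0; i < 4; i++)
        skinned += v.Weights[i] * mul(pos, Bones[v.Indices[i]]);

    return mul(float4(skinned, 1), WorldViewProj);
}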
Instead, for Xenomorph we allowed the artists to build the characters however they wanted. Then we came up with a (somewhat...) automated process for taking a standardized mesh and shrink-wrapping it onto the characters. The vertices of the shrink-wrap mesh copied the values from the character vertices they landed on. That resulted in a set of meshes that fulfill the vertex and triangle symmetry restrictions without restricting the artists. Yeah, it's kinda cheating, but graphics is about appearances, not reality.
When drawing a character that was morphing between two different creatures, we actually animated both skeletons and sent both complete sets of matrices (bones) to the vertex shader constants. Instead of interpolating base poses then animating, we effectively animated both creatures separately, then interpolated the animated vertices. Thus we negated the common-skeleton requirement by simply using twice as many vertex shader constant registers and doing twice as much work in the vertex shaders. This was obviously only a viable option because of the increased capabilities of the new hardware.
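In shader terms, the change from the sketch above is just the order of operations: skin each vertex twice, once per skeleton, then lerp the animated results. Again a hedged sketch with assumed names and palette sizes; the point is the two matrix arrays eating twice the constant registers:

float4x3 BonesA[40];   // creature A's skeleton
float4x3 BonesB[40];   // creature B's skeleton: together, twice the constants
float4x4 WorldViewProj;
float    MorphAmount;

struct VS_IN
{
    float4 PosA     : POSITION0;
    float4 PosB     : POSITION1;
    float4 WeightsA : BLENDWEIGHT0;
    float4 WeightsB : BLENDWEIGHT1;
    int4   IndicesA : BLENDINDICES0;
    int4   IndicesB : BLENDINDICES1;
};

float3 Skin(float4 pos, float4 w, int4 idx, float4x3 bones[40])
{
    float3 r = 0;
    for (int i = 0; i < 4; i++)
        r += w[i] * mul(pos, bones[idx[i]]);
    return r;
}

float4 main(VS_IN v) : POSITION
{
    // Animate both creatures separately (twice the vertex shader work)...
    float3 posA = Skin(v.PosA, v.WeightsA, v.IndicesA, BonesA);
    float3 posB = Skin(v.PosB, v.WeightsB, v.IndicesB, BonesB);

    // ...then interpolate the animated vertices.
    return mul(float4(lerp(posA, posB, MorphAmount), 1), WorldViewProj);
}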
We wrote the pixel shaders in HLSL, where interpolating two shaders is simply a matter of being willing to do the work of both shaders for every pixel and lerp the results:
// a..f stand for the creatures' already-sampled texture values.
float4 ShaderA(float4 a, float4 b, float4 c) { return SomethingComplicated(a, b, c); }
float4 ShaderB(float4 d, float4 e, float4 f) { return SomethingEquallyComplicated(d, e, f); }

float4 ShaderMorphAtoB(float4 a, float4 b, float4 c, float4 d, float4 e, float4 f, float amount)
{
    return lerp(SomethingComplicated(a, b, c), SomethingEquallyComplicated(d, e, f), amount);
}
As bonus items, we wrote a little system to scale and fade the fur so that we could morph between furry and fur-less creatures. We also encoded a per-vertex scale and bias that was applied to the morph factor, so that we could do things like have the legs morph before the body or the head morph last. We did soft shadows on the fur via multi-sampled depth buffer shadowing. The soft shadow on the ground was done by rendering a gray-scale "distance to the ground plane" texture, where black was close and white was far. Gaussian-blurring that texture gave a convincing fake of global-illumination-style shadow focal blur.
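For the curious, that per-vertex morph timing is a one-liner in the vertex shader. A sketch, with the names being mine: each vertex carries a scale/bias pair that remaps the global morph factor, and the remapped value feeds the lerps in place of the global one, so some vertices finish morphing early while others start late:

// Global morph factor, 0..1 over the morph's timeline.
float MorphAmount;

// scaleBias arrives as a per-vertex attribute (e.g. a spare TEXCOORD).
// A large scale saturates early (legs first); a negative bias starts late (head last).
float LocalMorphAmount(float2 scaleBias)
{
    return saturate(MorphAmount * scaleBias.x + scaleBias.y);
}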
It wasn't perfect, but it was good enough for a demo. If you guys want to start "xenomorphing" as a new buzzword, it would certainly make me happy!
The 360's tessellator unit can't directly solve the symmetry restrictions. It can take a triangle and dice it up into lots of triangles in a regular pattern, but it can't arbitrarily mix triangles from different meshes. What would be interesting, though, would be to use displacement mapping to do Hugues Hoppe's "geometry images" (http://research.microsoft.com/~hoppe/), which can be trivially extended to do morphing, then use the tessellator to get continuous LOD. Basically, start with an 8-triangle octahedron that can be mapped with a square texture. Use the tessellator to pump it up to an arbitrary number of triangles, then sample the texture in the vertex shader to displace the positions and form the character. BTW: Hoppe's "Consistent spherical parameterization" is probably a better way to accomplish what we did with the shrink-wrap mesh.
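A sketch of that last idea, with everything here being an assumption rather than anything we shipped: store each creature's positions in a square "geometry image" texture over the same parameterization, let the tessellator amplify the octahedron, and have each generated vertex fetch its position with a vertex texture read:

sampler2D GeometryImageA;   // creature A positions, stored as a texture
sampler2D GeometryImageB;   // creature B, same spherical parameterization
float4x4  WorldViewProj;
float     MorphAmount;

float4 main(float2 uv : TEXCOORD0) : POSITION
{
    // Each tessellated vertex carries only a UV into the square texture;
    // its position comes from a vertex texture fetch (tex2Dlod).
    float3 posA = tex2Dlod(GeometryImageA, float4(uv, 0, 0)).xyz;
    float3 posB = tex2Dlod(GeometryImageB, float4(uv, 0, 0)).xyz;

    // A shared parameterization makes the morph a simple lerp.
    return mul(float4(lerp(posA, posB, MorphAmount), 1), WorldViewProj);
}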