Xenomorphing on the XBOX 360

Laa-Yosh said:
Goodness, don't try to invent new technology when there's none :)

It's called xenomorphing because it's morphing (a traditional, age-old technology) between alien creatures, also called xenomorphs.
Morphing is used for many things, most often for facial animation (the most famous example is Gollum in the LOTR movies).
Transitioning between different creatures is, however, quite hard because both models need to have the exact same number of vertices. This is a very serious limitation; and then there's the topology of the model (the layout of polygons) that can be quite restrictive. Say, where should a 4-legged creature put all the vertices that build up the other 2 legs of a 6-legged one? Or how to hide a second head?
No wonder that most VFX companies prefer image morphing tricks for such visual effects, it's a lot more flexible.

To me, facial animation and morphing one character model into another completely different one aren't the same beast at all. :?

Why would both models need to have the exact same number of polys? If that's true, you're right, it would be an annoying limitation. But I don't see why that has to be the case. I see no reason why different levels of geometry can't be rendered on the fly in realtime, the way an LOD system would.
 
I worked on the Xenomorph demo. So I could probably answer a few questions. And since my boss gave a talk at GDC about it, it is probably even safe for me to do so :) So here goes...

What makes the Xenomorph demo interesting is that the artists creating the creatures did not have to work under the symmetry requirements normally associated with creating morphable characters. The creatures were built with completely different vertex counts, triangle counts, materials and skeletal structures.


Here's a quick review of how morphing is normally accomplished:
1) Two characters are modeled to have exactly the same number of vertices and with the triangles connecting the vertices exactly the same way. This is usually done by modeling one character, then copying it and pushing the copy's vertices around to form the second character.
2) The two vertex buffers of the two characters are sent to the hardware in parallel. Each vertex is presented to the vertex shader as the union of the data in both buffers. In other words, what the vertex shader sees is a double-size vertex with 2 position values, 2 normal values etc... Although the hardware can read from 2 vertex buffers simultaneously, it can only read from 1 index buffer at a time. Thus the 2 vertex buffers must be ordered completely symmetrically because they are going to be indexed identically to form the polygons.
3) The vertex shader takes the 2 position values and linearly interpolates them according to a shader parameter that goes from 0.0 to 1.0 over the timeline of the morph. Rinse and repeat for the other morphing attributes.
4) The interpolated position and normal are then animated by the skeleton - thus the skeletal structure symmetry requirement. (Steps 2-4 are sketched in the vertex shader after this list.)
5) The pixel shader blends the characters' textures before applying lighting.
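To make that a little more concrete, the vertex shader for steps 2-4 looks something like this (simplified; the names, register counts and 4-bone skinning are just made up for the sketch):

float4x4 WorldViewProj;
float4x4 Bones[32];      // the single skeleton shared by both characters (32 is arbitrary here)
float    MorphAmount;    // 0.0 = character A, 1.0 = character B

void MorphVS(
    // stream 0: character A          stream 1: character B
    float4 posA    : POSITION0,       float4 posB  : POSITION1,
    float3 normA   : NORMAL0,         float3 normB : NORMAL1,
    float4 boneIdx : BLENDINDICES0,   // identical for both, thanks to the shared skeleton
    float4 boneWts : BLENDWEIGHT0,
    out float4 oPos  : POSITION,
    out float3 oNorm : TEXCOORD1)
{
    // Step 3: interpolate the two base poses.
    float4 pos  = lerp(posA, posB, MorphAmount);
    float3 norm = lerp(normA, normB, MorphAmount);

    // Step 4: animate the interpolated vertex with the shared skeleton (4-bone blend).
    float4 skinnedPos  = 0;
    float3 skinnedNorm = 0;
    for (int i = 0; i < 4; ++i)
    {
        float4x4 bone = Bones[(int)boneIdx[i]];
        skinnedPos  += boneWts[i] * mul(pos,  bone);
        skinnedNorm += boneWts[i] * mul(norm, (float3x3)bone);
    }

    oPos  = mul(skinnedPos, WorldViewProj);
    oNorm = normalize(skinnedNorm);
}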


Instead, for Xenomorph we allowed the artists to build the characters however they wanted. Then we came up with a (somewhat...) automated process for taking a standardized mesh and shrink-wrapping it onto the characters. The vertices of the shrink-wrap mesh copied the values from the character vertices they landed on. That resulted in a set of meshes that fulfill the vertex and triangle symmetry restrictions without restricting the artists. Yeah, it's kinda cheating, but graphics is about appearances, not reality.

When drawing a character that was morphing between two different creatures, we actually animated both skeletons and sent both complete sets of matrices (bones) to the vertex shader constants. Instead of interpolating base poses then animating, we effectively animated both creatures separately then interpolated the animated vertices. Thus we negated the common-skeleton requirement by simply using twice as many vertex shader constant registers and doing twice as much work in the vertex shaders. This was obviously only a viable option because of the increased capabilities of the new hardware.
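Roughly (again simplified, names made up, and assuming a 4-bone blend), the Xenomorph version looks more like this:

float4x4 WorldViewProj;
float4x4 BonesA[32];     // creature A's skeleton, fully animated this frame
float4x4 BonesB[32];     // creature B's skeleton, fully animated this frame
float    MorphAmount;

void XenoMorphVS(
    float4 posA : POSITION0, float4 idxA : BLENDINDICES0, float4 wtsA : BLENDWEIGHT0,
    float4 posB : POSITION1, float4 idxB : BLENDINDICES1, float4 wtsB : BLENDWEIGHT1,
    out float4 oPos : POSITION)
{
    // Skin creature A's vertex with creature A's skeleton, and creature B's
    // vertex with creature B's skeleton: two full skinning passes per vertex.
    float4 skinnedA = 0;
    float4 skinnedB = 0;
    for (int i = 0; i < 4; ++i)
    {
        skinnedA += wtsA[i] * mul(posA, BonesA[(int)idxA[i]]);
        skinnedB += wtsB[i] * mul(posB, BonesB[(int)idxB[i]]);
    }

    // Only then interpolate the two already-animated positions.
    oPos = mul(lerp(skinnedA, skinnedB, MorphAmount), WorldViewProj);
}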

We wrote the pixel shaders in HLSL. Interpolating pixel shaders in HLSL is simply a matter of being willing to do the work of both shaders in every pixel:
ShaderA(texture a,b,c) return SomethingComplicated(a,b,c)
ShaderB(texture d,e,f) return SomethingEquallyComplicated(d,e,f)
ShaderMorphAtoB(texture a,b,c,d,e,f, number amount)
    return lerp(SomethingComplicated(a,b,c), SomethingEquallyComplicated(d,e,f), amount)


As bonus items we wrote a little system to scale and fade the fur so that we could morph between furry and fur-less creatures. We also encoded a per-vertex scale and bias that was applied to the morph factor so that we could do things like have the legs morph before the body or the head morph last. We did soft shadows on the fur via multi-sampled depth buffer shadowing. The soft shadow on the ground was done by rendering a gray-scale "distance to the ground plane" texture where black was close and white was far. Gaussian blurring that texture gave a convincing fake of global-illumination-style shadow focal blur.
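The per-vertex timing trick is just a scale-and-bias on the morph factor in the vertex shader; something along these lines (the exact encoding here is made up):

// Remap the global 0..1 morph factor through a per-vertex scale and bias,
// so e.g. the legs can hit 1.0 (fully morphed) while the head is still at 0.0.
float PerVertexMorphAmount(float globalAmount, float2 scaleBias)
{
    return saturate(globalAmount * scaleBias.x + scaleBias.y);
}

The result simply replaces the global morph amount in the lerp.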


It wasn't perfect, but it was good enough for a demo. If you guys want to start "xenomorphing" as a new buzzword, it would certainly make me happy! :D


The 360's tessellator unit can't directly solve the symmetry restrictions. It can take a triangle and dice it up into lots of triangles in a regular pattern, but it can't arbitrarily mix triangles from different meshes. What would be interesting, though, would be to use the displacement mapping to do Hugues Hoppe's "geometry images"
http://research.microsoft.com/~hoppe/ -which can be trivially extended to do morphing- then use the tessellator to get continuous LOD. Basically, start with an 8-triangle octahedron that can be mapped with a square texture. Use the tessellator to pump it up to an arbitrary number of triangles, then sample the texture in the vertex shader to displace the positions and form the character. BTW: Hoppe's "Consistent spherical parameterization" is probably a better way to accomplish what we did with the shrink-wrap mesh.
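Just to sketch the idea (completely hypothetical, names made up): if both creatures were baked into geometry images sharing the same parameterization, the vertex shader for the tessellated octahedron would be something like:

float4x4  WorldViewProj;
float     MorphAmount;
sampler2D GeometryImageA;   // xyz surface position of creature A stored per texel
sampler2D GeometryImageB;   // xyz surface position of creature B, same parameterization

void GeometryImageVS(float2 uv : TEXCOORD0, out float4 oPos : POSITION)
{
    // Vertex texture fetch, so the LOD has to be given explicitly.
    float3 posA = tex2Dlod(GeometryImageA, float4(uv, 0, 0)).xyz;
    float3 posB = tex2Dlod(GeometryImageB, float4(uv, 0, 0)).xyz;

    // Both creatures use the same parameterization, so morphing is just a
    // lerp of the fetched positions.
    oPos = mul(float4(lerp(posA, posB, MorphAmount), 1), WorldViewProj);
}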
 
corysama said:
Then we came up with a (somewhat...) automated process for taking a standardized mesh and shrink-wrapping it onto the characters. The vertices of the shrink-wrap mesh copied the values from the character vertices they landed on. That resulted in a set of meshes that fulfill the vertex and triangle symmetry restrictions without restricting the artists. Yeah, it's kinda cheating, but graphics is about appearances, not reality.

Aw, so you haven't actually found a way to morph between different vertex counts... I suppose you also have to do some texture baking?

Then again, it's a pretty good idea to overcome the limitations. For example, in Spawn (what a bad movie :), ILM modelers had to make sure that the Violator's two forms were sculpted from the same geometry...

Anyway, thanks for sharing!
 
Laa-Yosh said:
Aw, so you haven't actually found a way to morph between different vertex counts... I suppose you also have to do some texture baking?
Actually, he has found a way to get to the goal. Like he mentioned, it's about the illusion. There is nothing really special about morphing different vertex counts; what's special is the final image and the reduced work it took the artist to get there by not having to model from the same mesh.

Cool to hear talk about the tessellator. Looks like MS designed their hardware around what their software researchers requested.
 