Shifty Geezer said:
Hey, don't go! I like hearing your take on matters.
Heh, okay, no need to play the prima donna.
How do you think animation will be handled in a simple one on one fighting game like Tekken 6? Do you think Inverse Kinematics can be a major component, or will it still be based on sections of mocap pieced together?
Well, Inverse Kinematics works like this: you calculate the position/rotation of the individual bones in a chain backwards, from knowing the position/rotation of the end of the chain. So you position, for example, the foot in space, and the upper and lower leg's orientations are calculated from that information, maybe with some additional constraints (for example, it's common to use another positional input from the animation controller to define the plane for the rotation of the knee). So this can tell you how the limb will look.
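To make that concrete, here's a minimal two-bone IK sketch in 2D, solved analytically with the law of cosines. All the names and numbers are just mine for illustration; a real rig works in 3D, with joint limits and that knee-plane hint:

```cpp
// Two-bone IK for a leg in 2D: given a foot target, solve hip and knee
// angles analytically. Illustrative only, not from any shipping engine.
#include <algorithm>
#include <cmath>
#include <cstdio>

const double kPi = 3.14159265358979323846;

struct LegAngles { double hip; double kneeBend; };  // radians

LegAngles SolveTwoBoneIK(double upperLen, double lowerLen,
                         double targetX, double targetY)
{
    // Clamp the distance so the target never lies beyond the straight leg.
    double dist = std::sqrt(targetX * targetX + targetY * targetY);
    dist = std::clamp(dist, 1e-6, upperLen + lowerLen - 1e-6);

    // Law of cosines gives the interior angle at the knee...
    double cosKnee = (upperLen * upperLen + lowerLen * lowerLen - dist * dist)
                   / (2.0 * upperLen * lowerLen);
    double kneeInterior = std::acos(std::clamp(cosKnee, -1.0, 1.0));

    // ...and the offset between the hip-to-target line and the upper bone.
    double cosHipOffset = (upperLen * upperLen + dist * dist - lowerLen * lowerLen)
                        / (2.0 * upperLen * dist);
    double hip = std::atan2(targetY, targetX)
               + std::acos(std::clamp(cosHipOffset, -1.0, 1.0));  // sign picks the bend side

    return { hip, kPi - kneeInterior };  // knee bend measured from straight
}

int main()
{
    // Place the foot 0.3 forward and 0.6 down from the hip.
    LegAngles a = SolveTwoBoneIK(0.45, 0.45, 0.3, -0.6);
    std::printf("hip %.2f rad, knee bend %.2f rad\n", a.hip, a.kneeBend);
}
```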
What it can't tell you is how the limb should move from one position to another... For example, you can use IK to define where a kick should hit, but what would the actual kicking movement look like?
In offline CGI, we're using IK to make it easier for the animator to set the pose of the character for the keyframes; but IK in itself cannot provide animation. What it could do is further process the keyframed animation (either manually created or mocap), but I'm not sure how much you could use it. An example of easy IK is to 'aim' an NPC's head towards the player character, so that it will follow the player with its gaze, and use canned animation for the rest of the body. No blending required, just a straight mix, i.e. this bone uses the animation clip, that one the IK.
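Conceptually that's all there is to it; a hypothetical per-bone selection, with made-up types and names, just to show the 'straight mix' idea:

```cpp
// Every bone takes its pose from the canned clip, except the head, which
// is aimed at the player procedurally. Types here are grossly simplified.
#include <cmath>

struct Vec3 { double x, y, z; };
struct Pose { double yaw, pitch; };  // stand-in for a bone orientation

// Yaw/pitch that points the head at the player (y is up, z is forward).
Pose AimAt(const Vec3& headPos, const Vec3& playerPos)
{
    double dx = playerPos.x - headPos.x;
    double dy = playerPos.y - headPos.y;
    double dz = playerPos.z - headPos.z;
    double flat = std::sqrt(dx * dx + dz * dz);
    return { std::atan2(dx, dz), std::atan2(dy, flat) };
}

// The 'straight mix': no blending, each bone comes from exactly one source.
Pose EvaluateBone(int boneId, int headBoneId,
                  const Pose& clipPose,  // pose from the canned animation
                  const Vec3& headPos, const Vec3& playerPos)
{
    if (boneId == headBoneId)
        return AimAt(headPos, playerPos);  // procedural aim for this bone
    return clipPose;                       // everything else stays canned
}
```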
But how could you change a canned kicking animation? Well, one could probably do progressive blending of the IK-driven end position and the canned animation, but I wonder if it could be generalized enough to work well with every playable character's every kicking motion.
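As a sketch of what I mean (my own made-up names and ramp scheme, not anything from a real engine): ramp an IK weight up over the last frames before impact, so the mocap dominates early and the corrected foot position wins at contact, then feed the result into a solver like the one above:

```cpp
// Progressive blend of a canned foot trajectory towards an IK hit target.
#include <algorithm>

struct Vec3 { double x, y, z; };

Vec3 Lerp(const Vec3& a, const Vec3& b, double t)
{
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// clipFoot: foot position from the mocap clip at this frame
// ikFoot:   where the kick actually needs to land (the IK target)
Vec3 BlendedFootTarget(const Vec3& clipFoot, const Vec3& ikFoot,
                       int frame, int impactFrame, int rampFrames)
{
    // Weight rises linearly over the last rampFrames before impact.
    double w = 1.0 - double(impactFrame - frame) / double(rampFrames);
    w = std::clamp(w, 0.0, 1.0);
    return Lerp(clipFoot, ikFoot, w);  // feed this into the two-bone solver
}
```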
I'm thinking as an example of Lee (the Chinese Five Arts Tai Chi Cop) from Tekken...3 or whatever it was. He'd move through several Tai Chi positions with rigid, unnatural transitions between them. What techniques do you think can be used to generate more natural animations in next-gen?
The approach in the crowd simulation for the LOTR movies was that they laid out a graph of the possible movements of a character and identified every transition that could be required. Then they recorded mocap for every one of them, resulting in hundreds of small clips. So in theory, every possible blend was pre-recorded rather than calculated by the software.
Now you can imagine how many clips it would take to cover every blend between every fighting move for 20-30 different characters. It would quite likely be more than what LOTR required, and it probably wouldn't fit into the memory of a realtime application either.
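Just to put rough numbers on it (every figure below is an assumption I'm making up for illustration, not a measurement of any real game):

```cpp
// Back-of-the-envelope: dedicated transition clips grow roughly
// quadratically with the move list, per character.
#include <cstdio>

int main()
{
    const int characters    = 25;  // assumed roster size
    const int movesPerChar  = 60;  // assumed moves per character
    const int framesPerClip = 15;  // assumed average transition length

    // One dedicated clip for each ordered (from, to) pair of moves:
    long long transitionClips =
        (long long)characters * movesPerChar * (movesPerChar - 1);

    // Assume roughly 1 KB of raw pose data per frame (~50 bones with a
    // float quaternion + position per bone comes out near that).
    long long megabytes = transitionClips * framesPerClip / 1024;

    std::printf("%lld transition clips, roughly %lld MB of pose data\n",
                transitionClips, megabytes);
    // Prints: 88500 transition clips, roughly 1296 MB of pose data
}
```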
The other method I know about, used by MPC on Troy, was more R&D-intensive:
The basic concept behind EMILY (MLE, motion library editor) is unique and has never before been used in crowd simulation software. MPC did not invent the idea, but implemented it in a very unique way. In fact, the person who did invent this concept said it would never work in a crowd simulation environment, which we thankfully proved wrong. The foundation idea is to take an arbitrary amount of motion capture data consisting of many different movements and actions. EMILY then breaks this down into tiny clips of motion – each 8-12 frames long. With thousands of tiny clips, EMILY looks at the pose of the skeleton and compares it with every other clip and then decides which it could blend with. Once it finds clips that can blend or merge together, the system automatically creates the blend and writes them out to disk. All of this information is stored in a giant graph indicating compatible clips.
Our AI system ALICE defines the route each character will take, for example, the path from point A to B to C. Using the Motion Library Editor EMILY, it can automatically work out which clips can be used and blended to create the animations of the soldiers. If a character needs to make a right turn, the Library Editor automatically loads up a clip that makes a right turn. The more data in the library, the larger the selection of clips and the more natural the simulations look.
( http://features.cgsociety.org/story_custom.php?story_id=2239&page= )
This might be possible to implement to some extent in realtime. It's mostly a question of the game's budget and the available CPU power. A fighting game would obviously justify the cost, and so would sports games; but I think shooter and action games would only benefit from it in larger studios (like EA), and only to a lesser extent, because their focus is on other things like large environments, special effects, AI etc etc.
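For what it's worth, the core pose-matching step from the quote could look something like this; my own reconstruction with made-up types, certainly not MPC's actual code:

```cpp
// Chop mocap into short clips, then link two clips in a graph when the end
// pose of one is close enough to the start pose of the other to blend.
#include <cmath>
#include <vector>

struct Pose { std::vector<double> jointAngles; };  // simplified skeleton pose

struct MiniClip
{
    Pose startPose, endPose;        // first and last of its 8-12 frames
    std::vector<int> canBlendInto;  // edges of the compatibility graph
};

double PoseDistance(const Pose& a, const Pose& b)
{
    double sum = 0.0;
    for (size_t i = 0; i < a.jointAngles.size(); ++i) {
        double d = a.jointAngles[i] - b.jointAngles[i];
        sum += d * d;  // real systems also weight joints and compare velocities
    }
    return std::sqrt(sum);
}

// The O(n^2) pairwise pass is fine as an offline bake; the realtime cost
// is only in selecting the next clip from the finished graph each frame.
void BuildBlendGraph(std::vector<MiniClip>& clips, double threshold)
{
    for (size_t i = 0; i < clips.size(); ++i)
        for (size_t j = 0; j < clips.size(); ++j)
            if (i != j && PoseDistance(clips[i].endPose,
                                       clips[j].startPose) < threshold)
                clips[i].canBlendInto.push_back((int)j);
}
```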
But as I've said, the problem is more complex than movie VFX because of the interactivity. The engine probably has to react to the controls immediately, which limits the calculation time, and the quality of the results as well - sometimes it may have to do impossible things that just can't be made to look good. For example, what if the player wants the character to turn to the right while it's up in the air between two steps? If you wait until it plants its foot, the player will think the controls are sluggish... Hasn't Thief3 tried to do something like that?