Rumble Roses developer on PS3 & X360

Laa-Yosh said:
You really have to disagree with me, it seems. Fine, another thread that's not worth replying to.

You shouldn't take it so badly. You have great offline CGI knowledge, and your presence on these boards brings a useful POV and insight.

But you just seem to underestimate realtime dev stuff.

Waiting for enough hardware power to do things exactly the way offline CGI does them is not the mindset of realtime development. So they have to be creative. And they are.

It's also good to un-learn some things when switching from offline CGI to realtime graphics.
 
Shifty Geezer said:
Hey, don't go! I like hearing your take on matters.

Heh, okay, no need to play the prima donna :)

How do you think animation will be handled in a simple one on one fighting game like Tekken 6? Do you think Inverse Kinematics can be a major component, or will it still be based on sections of mocap pieced together?

Well, Inverse Kinematics works like this: you calculate the position/rotation of the individual bones in a chain backwards, from knowing the position/rotation of the end of the chain. So you position, for example, the foot in space, and the orientations of the upper and lower leg are calculated from that information, perhaps with some additional constraints (for example, it's common to use another positional input from an animation controller to define the plane for the rotation of the knee). So this can tell you how the limb will look.
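To make that backwards calculation concrete, here's a toy sketch (my own, not from any engine) of the analytic core of a two-bone leg solve: given the bone lengths and the distance from hip to foot target, the law of cosines gives you the knee bend, and the knee-plane constraint mentioned above would then fix which way the knee points.

Code:
// Toy analytic two-bone IK: recover the knee angle from the hip-to-foot
// distance via the law of cosines. Purely illustrative values and names.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float dist(Vec3 a, Vec3 b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// Interior knee angle (radians) for an upper bone of length l1 and a lower
// bone of length l2 reaching a target at distance d from the hip. Clamping
// makes an out-of-reach target straighten the leg instead of producing NaNs.
static float kneeAngle(float l1, float l2, float d)
{
    float c = (l1*l1 + l2*l2 - d*d) / (2.0f * l1 * l2);  // law of cosines
    if (c < -1.0f) c = -1.0f;
    if (c >  1.0f) c =  1.0f;
    return std::acos(c);
}

int main()
{
    Vec3 hip  = {0.0f, 1.0f, 0.0f};
    Vec3 foot = {0.0f, 0.1f, 0.4f};     // where we want the foot planted
    float thigh = 0.5f, shin = 0.5f;
    float a = kneeAngle(thigh, shin, dist(hip, foot));
    std::printf("knee bend: %.1f degrees\n", a * 180.0f / 3.14159265f);
    return 0;
}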
What it can't tell you is how the limb should move from one position to another... For example, you can use IK to define where a kick should hit, but what would the actual kicking movement look like?
In offline CGI, we use IK to make it easier for the animator to set the pose of the character for the keyframes; but IK in itself cannot provide animation. What it can do is further process keyframed animation (either manually created or mocap), but I'm not sure how much use you could get out of that. An example of easy IK is to 'aim' an NPC's head towards the player character, so that it follows the player with its gaze, while canned animation drives the rest of the body. No blending required, just a straight mix, i.e. this bone uses the animation clip, that one the IK.
But how could you change a canned kicking animation? Well, one could probably do progressive blending between the IK-driven end position and the canned animation, but I wonder whether it could be generalized enough to work well with every playable character's every kicking motion.
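As a toy example of what that progressive blend might look like (all names hypothetical, and certainly not the only way to do it): play the authored kick untouched, but over the last few frames ramp in the fixed difference between the clip's contact point and the gameplay-determined one, then let an IK pass re-solve the leg for the corrected foot.

Code:
// Progressive blend toward an IK target, sketched for one foot position.
// animFoot        -- foot position from the canned clip at this frame
// authoredContact -- where the clip's foot ends up on the contact frame
// targetContact   -- where gameplay actually wants the kick to land
struct Vec3 { float x, y, z; };

Vec3 correctedFoot(Vec3 animFoot, Vec3 authoredContact, Vec3 targetContact,
                   int frame, int contactFrame, int blendFrames)
{
    float t = float(frame - (contactFrame - blendFrames)) / float(blendFrames);
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    t = t * t * (3.0f - 2.0f * t);   // smoothstep ramp, avoids a visible pop
    // Blend in the *difference* between authored and desired contact, so the
    // shape of the original motion is preserved until the very end.
    return { animFoot.x + (targetContact.x - authoredContact.x) * t,
             animFoot.y + (targetContact.y - authoredContact.y) * t,
             animFoot.z + (targetContact.z - authoredContact.z) * t };
}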


I'm thinking as an example of Lee (the Chinese Five Arts Tai Chi Cop) from Tekken...3 or whatever it was. He'd move through several Tai Chi positions with a rigid, unnatural transition between them. What techniques do you think can be used to generate more natural animations in next-gen?

The approach in the crowd simulation for the LOTR movies was to lay out a graph of the possible movements of a character and identify every transition that could be required. Then they recorded mocap for every one of them, resulting in hundreds of small clips. So in theory, every possible blend was pre-recorded rather than calculated by the software.
Now you can imagine how many clips it would take to cover every blend for every fighting move of 20-30 different characters. It would quite likely be more than what was required for LOTR, and it would probably not fit into memory for a realtime application either.
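Just to illustrate: the runtime side of such a pre-recorded system can be as dumb as a lookup table, since every legal move-to-move transition has its own authored bridge clip and nothing is blended procedurally. (Clip and state names below are made up.)

Code:
// Pre-recorded transition graph: (from, to) -> the mocap clip that was shot
// specifically for that transition. With N moves this can grow toward N^2
// clips per character, which is exactly the memory problem mentioned above.
#include <cstdio>
#include <map>
#include <string>
#include <utility>

enum class State { Idle, Walk, Run, Attack };

int main()
{
    std::map<std::pair<State, State>, std::string> transitions = {
        {{State::Idle, State::Walk},   "idle_to_walk.anm"},
        {{State::Walk, State::Run},    "walk_to_run.anm"},
        {{State::Walk, State::Attack}, "walk_to_attack.anm"},
        {{State::Run,  State::Attack}, "run_to_attack.anm"},
    };

    auto it = transitions.find({State::Walk, State::Attack});
    if (it != transitions.end())
        std::printf("play bridge clip: %s\n", it->second.c_str());
    else
        std::printf("no authored transition -- route through another state\n");
    return 0;
}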


The other method I know about, used by MPC on Troy, was more R&D intensive:


The basic concept behind EMILY (MLE, motion library editor) is unique and has never before been used in crowd simulation software. MPC did not invent the idea, but implemented it in a very unique way. In fact, the person who did invent this concept said it would never work in a crowd simulation environment, which we thankfully proved wrong. The foundation idea is to take an arbitrary amount of motion capture data consisting of many different movements and actions. EMILY then breaks this down into tiny clips of motion – each 8-12 frames long. With thousands of tiny clips, EMILY looks at the pose of the skeleton and compares it with every other clip and then decides which it could blend with. Once it finds clips that can blend or merge together, the system automatically creates the blend and writes them out to disk. All of this information is stored in a giant graph indicating compatible clips.

Our AI system ALICE defines the route each character will take, for example, the path from point A to B to C. Using the Motion Library Editor EMILY, it can automatically work out which clips can be used and blended to create the animations of the soldiers. If a character needs to make a right turn, the Library Editor automatically loads up a clip that makes a right turn. The more data in the library, the larger the selection of clips and the more natural the simulations look.
(http://features.cgsociety.org/story_custom.php?story_id=2239&page=)
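Stripped down to its core, the automatic part of that idea is a pose-distance metric plus a graph: connect clip A to clip B whenever A's last pose is close enough to B's first pose. The sketch below is my own reduction; a real system like EMILY would also weight joint velocities, root alignment and so on.

Code:
// Clip-compatibility graph in the EMILY spirit: cut mocap into short clips,
// then add an edge wherever one clip's end pose nearly matches another
// clip's start pose. All thresholds and structures are illustrative.
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
using Pose = std::vector<Vec3>;       // one position per joint

struct Clip { Pose first, last; };    // an 8-12 frame fragment, ends only

float poseDistance(const Pose& a, const Pose& b)
{
    float d = 0.0f;
    for (size_t j = 0; j < a.size(); ++j) {
        float dx = a[j].x - b[j].x, dy = a[j].y - b[j].y, dz = a[j].z - b[j].z;
        d += dx*dx + dy*dy + dz*dz;   // summed squared joint-position error
    }
    return d;
}

// edges[i] lists every clip that may directly follow clip i.
std::vector<std::vector<int>> buildGraph(const std::vector<Clip>& clips,
                                         float threshold)
{
    std::vector<std::vector<int>> edges(clips.size());
    for (size_t i = 0; i < clips.size(); ++i)
        for (size_t j = 0; j < clips.size(); ++j)
            if (i != j && poseDistance(clips[i].last, clips[j].first) < threshold)
                edges[i].push_back(int(j));
    return edges;
}

int main()
{
    // Two dummy 2-joint clips; clip 0 ends exactly where clip 1 begins.
    std::vector<Clip> clips = {
        { {{0,0,0},{0,1,0}}, {{0,0,1},{0,1,1}} },
        { {{0,0,1},{0,1,1}}, {{0,0,2},{0,1,2}} },
    };
    auto edges = buildGraph(clips, 0.01f);
    std::printf("clip 0 can be followed by %zu clip(s)\n", edges[0].size());
    return 0;
}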

This might be possible to implement to some extent in realtime. It's mostly a question of the game's budget and the available CPU power. A fighting game would obviously justify the cost, and sports games too, but I think shooter and action games would only benefit from it at larger studios (like EA), and only to a lesser extent, because their focus is on other things like large environments, special effects, AI, etc.

But as I've said, the problem is more complex than movie VFX because of the interactivity. The engine should probably react to the controls immediately, which limits the calculation time, and the results as well: sometimes it will be asked to do impossible things that just can't be made to look good. For example, what if you want the character to turn to the right while it's up in the air between two steps? If you wait until it plants its leg, the player will think the controls are sluggish... Hasn't Thief 3 tried to do something like that?
 
Okay, let's try another round in this discussion...

It seems to me that you're mixing up two different topics here. One is what's possible and feasible in realtime systems; the other is our smaller debate about what counts as a simulation.

_phil_ said:
What you call real is a mathematical approach to a problem. There are a lot of different approaches for hair, for example.
They are not less fake, just heavier on mathematical modeling.
So I don't see why one would be called fake and another real. Neither is real. In fact, the "real" one has a good chance of propelling us right into the uncanny valley.

Now, I consider a simulation to be what it is: a somewhat simplified but relatively accurate model of real-life processes. In the case of a muscle/skin simulation, it takes into account things like gravity, inertia, and the volume and elasticity of the various tissues. The solving obviously involves math, just like anything else in 3D.
See this definition here:
"a simulation should imitate the internal processes and not merely the results of the thing being simulated" (http://dict.die.net/simulation/).
You can have a varying degree of complexity and accuracy in your simulation. For example, skip collision tests between muscles, or simulate only 5% of the hair strands and create the motion for the rest through blending. But you'd still be building a simulation model based on real life.
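To show what "simulate 5% of the strands" means in practice, here's a stripped-down guide-strand step (my own toy code): each guide hair is a chain of particles with gravity, inertia and fixed segment lengths - the internal processes being simulated - while the thousands of render strands would just be interpolated between nearby guides.

Code:
// One simulation step for a guide hair strand: Verlet integration under
// gravity, then a constraint pass that restores the segment lengths.
// The render strands would be blended between guides, not simulated.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct Strand {
    std::vector<Vec3> pos, prev;  // particle positions, previous positions
    float segLen;                 // rest length of each segment
};

void stepStrand(Strand& s, float dt)
{
    const Vec3 g = {0.0f, -9.81f, 0.0f};
    // Verlet: new = pos + (pos - prev) + g*dt^2; the root (i = 0) is pinned.
    for (size_t i = 1; i < s.pos.size(); ++i) {
        Vec3 p = s.pos[i];
        s.pos[i].x += (p.x - s.prev[i].x) + g.x * dt * dt;
        s.pos[i].y += (p.y - s.prev[i].y) + g.y * dt * dt;
        s.pos[i].z += (p.z - s.prev[i].z) + g.z * dt * dt;
        s.prev[i] = p;
    }
    // Constraint pass: pull each particle back to a fixed distance from its
    // parent so the strand keeps its length (a simplified elasticity model).
    for (size_t i = 1; i < s.pos.size(); ++i) {
        Vec3 d = { s.pos[i].x - s.pos[i-1].x,
                   s.pos[i].y - s.pos[i-1].y,
                   s.pos[i].z - s.pos[i-1].z };
        float l = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
        if (l > 0.0f) {
            float k = s.segLen / l;
            s.pos[i] = { s.pos[i-1].x + d.x * k,
                         s.pos[i-1].y + d.y * k,
                         s.pos[i-1].z + d.z * k };
        }
    }
}
// A render strand between two guides is then just a per-particle lerp:
//   renderPos[i] = lerp(guideA.pos[i], guideB.pos[i], weight);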

Other approaches cannot be called simulations, because they use different mechanics from reality. We call them fake because of the way they work; it has nothing to do with how the results look. Emulation is also a good name for them. They might even end up looking better than a simulation. But you cannot call them a simulation, because that's not what they're doing, right?

So please don't go against the whole 3D industry as I know it and call lattices and helper bones a simulation...


Now, what to use and where in realtime applications - or any kind of application - is a different topic. If you've read my posts, then you should know that I'm absolutely against muscle simulation for facial animation, for example. It's an unnecessary complication, it slows down the animation process, and so on. For the rest of the body, you can still get away without simulation in most cases. But creating a realistic horse would be very hard, if not impossible, without fully simulating its muscles and skin.
Then there's cloth. There is no real way to get dynamically moving, folding, wind-blown cloth without simulation, period. I'd like to see anyone rig up even a realistic handkerchief falling to the ground with bones or morphs... all the time in the world would not be enough to get a convincing result.
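To underline why: a simulated handkerchief is just a particle grid held together by a network of springs, and the folds emerge from that network reacting to gravity and wind - there is no fixed set of shapes you could key or blend between. Here's a sketch of just the spring setup (layout illustrative; the per-step integration would be the same Verlet scheme as in the hair sketch above):

Code:
// Spring network for a w x h cloth particle grid with spacing s:
// structural springs hold the weave together, shear springs resist
// diagonal collapse. Folding behaviour falls out of the constraints.
#include <vector>

struct Spring { int a, b; float restLen; };   // particle indices + length

std::vector<Spring> buildClothSprings(int w, int h, float s)
{
    std::vector<Spring> springs;
    auto idx = [w](int x, int y) { return y * w + x; };
    const float diag = s * 1.41421356f;       // sqrt(2) * spacing
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            if (x + 1 < w) springs.push_back({idx(x, y), idx(x + 1, y), s});
            if (y + 1 < h) springs.push_back({idx(x, y), idx(x, y + 1), s});
            if (x + 1 < w && y + 1 < h) {     // the two shear diagonals
                springs.push_back({idx(x, y),     idx(x + 1, y + 1), diag});
                springs.push_back({idx(x + 1, y), idx(x, y + 1),     diag});
            }
        }
    return springs;
}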

I absolutely agree that game developers will be creative and find many different solutions. My point was that a real simulation will probably be too computationally intensive for most cases: muscle systems, complex cloth, hair. Then you came in and started to argue that simulations will be possible - because you think the fake methods can be called simulations as well... and that is what I was arguing against. So can we settle this debate about semantics and get back to the original topic, please?
 