The technology of Alan Wake *spawn

If the mo-cap is as the dev says (no human intervention), then it may simplify/skip the parts that need careful human tweaking. As I recall, the HR animations are not so consistent: some have eye movements, some don't. The best is the storekeeper, who is amazing.

[size=-2]What is HeavyRain doing here? :)[/size]
 
Ha ha, he's trying to sell his service. Have a heart!

You have the game and you can see the outcome. It wouldn't be surprising to find flaws in AW's mocap, since Laa-Yosh firmly believed that human tweaking is needed (and he's in that line of work). And once human treatment is involved, inconsistency will find its way into the game unless they have a lot of time to polish. Even with Heavenly Sword, which has consistent eye and expression mocap: I liked it, but some of us found the action exaggerated. So there are a lot of human factors/value-adds here.

We should be able to discuss AW in the context of its own merits. The devs will need room to experiment and grow instead of being compared out of context, or always getting capped by other titles' visions. 540p vs 720p is another example. I personally don't mind 540p if they can create a unique visual experience, or even 3D vision.
 
They probably didn't want to spend the resources on localizing the lip animation for multiple languages?

I hope eye motion capture becomes standard too.
 
Hasn't this been discussed already in another thread?

To the death...

On the other hand, neither Alan Wake's nor Heavy Rain's face mocap can be called good...
I still think Mass Effect 2's facial animation looks the best so far (for a conversation-heavy game), at least with most of the main characters. But that's not mocap at all...

Facial animation is a tough nut to crack.
 
You can't really mocap eyes... Sony Imageworks went with electrodes on top of the eyelids for Beowulf and even that looked dead.

Eyes will only work if they're in sync with the actual environment as well, i.e. the character's eyelines should follow actual people or objects in the scene, stay in sync with (or more precisely, follow/lead) the head animation and so on. A procedural system using various IK constraints is usually way better for games than mocap.
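
To make that a bit more concrete, a procedural look-at basically just aims the eyes at a target entity and clamps the result to anatomical limits. A rough Python sketch (entirely my own invention, not from any shipping engine, and the limit values are guesses):

[code]
import math

def look_at_angles(eye_pos, target_pos):
    """Yaw/pitch (radians) that aim an eye at a world-space target."""
    dx = target_pos[0] - eye_pos[0]
    dy = target_pos[1] - eye_pos[1]
    dz = target_pos[2] - eye_pos[2]
    yaw = math.atan2(dx, dz)                    # left/right
    pitch = math.atan2(dy, math.hypot(dx, dz))  # up/down
    return yaw, pitch

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def eye_gaze(head_yaw, head_pitch, eye_pos, target_pos,
             yaw_limit=math.radians(35), pitch_limit=math.radians(25)):
    """Gaze angles relative to the head, clamped to (guessed) anatomical
    limits. Anything beyond the limits would be handed off to a
    head-turn controller (not shown)."""
    yaw, pitch = look_at_angles(eye_pos, target_pos)
    return (clamp(yaw - head_yaw, -yaw_limit, yaw_limit),
            clamp(pitch - head_pitch, -pitch_limit, pitch_limit))
[/code]

The advantage over baked mocap is that the target can be any entity in the scene, so the eyelines stay correct no matter where the characters or the camera end up.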
 
Wait a minute, if done well, mo-cap should offer more realism, right? One of the problems is they have too much data to clean up using current technology.

For syncing eye movement, don't they capture multiple characters on the set at the same time? So the actors' eye movements are already directed at the right position/people. I remember seeing multiple actors on the same mocap stage for Heavenly Sword.

If they capture individual actors one at a time, then I agree the eye position may be wrong w.r.t. the entire scene.

EDIT: Ok, in Heavenly Sword, they may have captured the facial expressions and then added the eye animations manually, based on the other concurrent characters' movement.
 
I found the lip syncing in Assassin's Creed 2 quite good! Well, I did play on PC, so the quality is a bit higher than on consoles (the graphics as a whole), but the cutscene (and in-game, for that matter) lip syncing was pretty good for an open-world and multiplatform game.

Yes, the eyes are dead too, and the characters aren't the best we've seen either, but the animations are pretty good.

In Heavy Rain, I didn't notice the eyes, which should mean, to me, that they were done well. Especially in the load screens (higher-res shaders and such, I'd guess). But some characters' movements are TOTALLY off, and since the rest is very well done, they stand out even more. I'd guess even the bad movements in HR could pass as good in other games.
 
Wait a minute, if done well, mo-cap should offer more realism, right?

Well, here are some of my own personal thoughts on this...

Mocap in general can offer two important aspects of realistic animation:

- Realistic movement dynamics. The time between the various poses, the shape of the intensity curve (acceleration/deceleration), the reaction times, etc. are all important in creating the illusion of life, and animating from the ground up can never really match mocap data in these ways.
There are some rules, or rather guidelines, created by decades of experimenting with traditional cel animation and analyzing human movement and poses. Like how your head will always turn in a curve instead of a straight line, and you usually blink when you look at something else after the move. Or how there's anticipation before the motion, or which phonemes are the ones you blink on more while speaking (a couple of these are sketched in code after this list). But these can never replace real people, and animators also tend to exaggerate a bit too much, which is good for creating characters larger than life - but sometimes subtlety is the key (this part is also true for the second point).

- Realistic poses. An actor or even an everyday human being will usually activate far more muscles than are necessary for a certain movement, which gives it an individual character, even if it's near unnoticeable. Sometimes it's completely subconscious, or it's a deliberate result of acting. For facial animation it can be as subtle as a 1mm difference!
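
To illustrate the first point, here's what a couple of those guidelines look like reduced to toy Python (my own sketch; all the constants are guesses, and real animation curves are hand-tuned):

[code]
import math

def ease_in_out(t):
    """Smoothstep: slow in, fast middle, slow out - much closer to how
    real heads and limbs accelerate than linear interpolation."""
    return t * t * (3.0 - 2.0 * t)

def head_turn(yaw_start, yaw_end, t, arc_pitch=math.radians(4)):
    """Interpolate a head turn along a curve rather than a straight
    line: the head dips/rises slightly at mid-turn (arc_pitch is a
    made-up value)."""
    k = ease_in_out(t)
    yaw = yaw_start + (yaw_end - yaw_start) * k
    pitch = arc_pitch * math.sin(math.pi * t)  # zero at both ends
    return yaw, pitch

def should_blink(gaze_shift_deg, threshold_deg=20.0):
    """Trigger a blink on a large gaze shift, per the guideline above
    (the threshold is a guess)."""
    return abs(gaze_shift_deg) >= threshold_deg
[/code]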

Now it's also worth knowing that proper video reference can provide both of the above, but it also takes a lot of manual work to rotoscope it all. Nevertheless, even Avatar used this "old" method (developed for Disney's Snow White, as far as I know) for many, many scenes, despite all the super high-end mocap stuff they had. Sometimes they had up to 10 HD cameras shooting reference video footage while also recording the actual mocap!

Also, mocap is not a 100% reliable tool; you always trade flexibility for precision. The majority of the systems are optical, because you can adjust marker placement and numbers, and use as many actors as you can manage; you're also free from any electromagnetic interference (which even standard cables in an office building tend to create).
But tracking multiple people adds noise and errors; not enough stabilization and separation can affect camera precision (the actor jumps and the cameras tremble a bit) and so on, resulting in bad data. Filtering tends to remove both the noise and the tiny imprecisions that would add life - overfiltered mocap is usually very disturbing.
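
A trivial example of why that happens - real pipelines use proper low-pass filters, but the trade-off is the same (this is just my own illustration):

[code]
def moving_average(samples, window=5):
    """Naive low-pass filter over one mocap channel (say, a marker's x
    position per frame). A small window knocks down the jitter; widen
    it too far and the subtle 1mm-scale detail goes with the noise."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo = max(0, i - half)
        hi = min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out
[/code]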


So, in my opinion, mocap is a very cool thing, but it's never enough on its own and human intervention is always required. An animator will be able to identify and differentiate between noise and worthy details, and can also modify or replace the performance if the shot requires it. But completely realistic human motion is nearly impossible without mocap in most cases... then again, very few would notice the difference between mocap and a very good and talented animator either.


Games are, however, not shot driven, the camera can sometimes roam around freely, and the amount of data to process is several orders of magnitude higher. Which is why semi-procedural approaches like Mass Effect's are usually more consistent and easier to tweak than working with hours of mocap data - per character. They also allow for localized speech and even character customization.
Processing dozens of hours of mocap for both body and face is an incredibly time-consuming task and can dramatically increase budgets IMHO. But if done well, it'll always be a lot better than the procedural approach... so it's a question of money in the end, and the tech is secondary in most cases.
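
To give a rough idea of what "semi-procedural" means here (my own guess at the general shape of such a system, not BioWare's actual implementation - all names are invented):

[code]
# A conversation becomes a script of reusable, hand-tuned gesture
# clips instead of hours of per-character mocap.
GESTURE_LIBRARY = {"nod", "shrug", "point", "idle"}

def build_gesture_track(dialogue_lines):
    """Map tagged dialogue lines to gesture clips; untagged lines fall
    back to idle. Works for any language and any rig that has the
    clips, which is why localization gets cheaper."""
    return [(text, tag if tag in GESTURE_LIBRARY else "idle")
            for text, tag in dialogue_lines]

track = build_gesture_track([
    ("I didn't do it.", "shrug"),
    ("It was him!", "point"),
])
[/code]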

And games are also a lot more forgiving; it's still pretty common to have intersecting limbs, heads and clothing/armor, and no one cares about it. However, facial animation can still ruin the immersion pretty quickly.

One of the problems is they have too much data to clean up using current technology.

I'm not sure what you mean by that :)

For syncing eye movement, don't they capture multiple characters on the set at the same time? So the actors' eye movements are already directed at the right position/people. I remember seeing multiple actors on the same mocap stage for Heavenly Sword.

It's not that simple; characters might have completely different proportions and such compared to the actors. For example, I think Andy Serkis's HS character was built considerably differently, taller and more robust, which instantly made all the mocap inherently wrong. Also, the capture volume might not be big enough, character placements might not be correct, and so on.
And curiously, sometimes even 100% correct eyelines and such won't look correct from the camera's view - even movies cheat a LOT (it's also true for lighting).
So, capturing multiple actors is usually more important for their chemistry and reactions to each other than for correct eyelines and physical interactions.
 
Heh :)

One final note is that in my opinion the trouble with facial mocap in today's games is that they only manage to get the dynamics of the motion itself right; but they fail utterly at the poses, creating distorted and scary faces. This is because facial deformations are extremely complex and simple transformations copied over from mocap markers just don't cut it.
Building the kind of elaborate face rigs and data processing that's becoming common in movie VFX just doesn't seem to be a viable option, so the games that forgo mocap are the ones that usually work out better. Like the Uncharted and Mass Effect games, for example; although the mocap in GTA4 was also quite good.
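
For what it's worth, the contrast in toy Python form (completely invented names and math, just to show markers driving a rig instead of vertices directly):

[code]
# 1) Naive: copy a marker's offset straight onto nearby vertices -
#    roughly the "simple transformations" that produce distorted faces.
def naive_marker_drive(vertex, marker_offset):
    return tuple(v + o for v, o in zip(vertex, marker_offset))

# 2) Rig-based: project marker offsets onto a small set of sculpted
#    blendshapes, so the face only moves through plausible poses.
def solve_blend_weights(marker_offsets, shape_basis):
    """Per-shape projection of marker motion onto blendshapes;
    shape_basis[i] is shape i's expected offset at each marker.
    (A real solver would do a joint least-squares fit instead.)"""
    weights = []
    for basis in shape_basis:
        num = sum(m * b for m, b in zip(marker_offsets, basis))
        den = sum(b * b for b in basis) or 1.0
        weights.append(max(0.0, min(1.0, num / den)))
    return weights
[/code]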
 
Aye, but for certain kinds of games, mocap will eventually have to provide a more realistic performance than hand-crafted expressions and animations. "We" are just in the process of growing up.
 
http://www.videogamer.com/news/remedy_alan_wake_dlc_to_ditch_fugly_faces.html

Apparently, the facial animations are a bit wonky, particularly during cut scenes.

They'll be better in upcoming DLC, though, development director Markus Mäki has promised.

"We have a few different methods of doing facial animation - in-game using FaceFX and motion capture used in cinematics," he said in a post on Remedy's forum.

Hmm, I don't think they'd update the existing cutscene data, so the improvements are more likely constrained to the new content.
From what I've seen so far, the faces are kinda wooden and there's very strange eye movement in the cutscenes.
Edit: just look at her eyes, her gaze is all over the place...

(6MB animated gif)

Also, FaceFX is the tech used in the Mass Effect games: a third-party program that uses small individual gestures (manually fine-tuned, either from mocap or keyframe animation), driven by scripting and with voice-based lipsync.
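
For those unfamiliar, voice-based lipsync generally boils down to something like this - not FaceFX's actual API, just the general phoneme-to-viseme idea with invented names:

[code]
PHONEME_TO_VISEME = {  # tiny invented subset
    "AA": "open", "IY": "wide", "UW": "round",
    "M": "closed", "B": "closed", "P": "closed",
    "F": "teeth_lip", "V": "teeth_lip",
}

def lipsync_track(phonemes):
    """phonemes: list of (phoneme, start_sec, end_sec) tuples from an
    audio analysis step. Returns timed viseme keys for the face rig."""
    return [(PHONEME_TO_VISEME.get(p, "rest"), s, e)
            for p, s, e in phonemes]

keys = lipsync_track([("HH", 0.00, 0.05), ("AA", 0.05, 0.20),
                      ("IY", 0.20, 0.30)])
[/code]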
 
I'll say that eye movements go a long way toward making a realistic facial expression; do them wrong and you have an awful one. I realized this after I saw that gif of Alan's wife.
 