Next-gen character modelling (GDC 2013)

I personally still prefer hand-made assets with a distinct art style, like the characters in The Last of Us...

But I thought that the core of this tech is to compress the data such that you have a handful of "basis mimics" left, which you can then use to reconstruct all sorts of real facial expressions by linear combination of those basis face mimics.
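That linear-combination idea can be sketched in a few lines. This is a toy NumPy example with made-up vertex data (a real face mesh has thousands of vertices, and the basis would come from capture data, not random numbers):

```python
import numpy as np

# Toy sketch: reconstruct a facial expression as a linear combination
# of a few "basis mimics" (blend shape deltas on top of a neutral pose).
# All data here is random stand-in data.

rng = np.random.default_rng(0)
n_vertices = 5                               # stand-in for a real mesh
neutral = rng.normal(size=(n_vertices, 3))   # rest-pose vertex positions
basis = rng.normal(size=(4, n_vertices, 3))  # 4 basis expression deltas

def reconstruct(weights):
    """Blend the neutral face with weighted basis deltas."""
    weights = np.asarray(weights, dtype=float)
    # Contract the weight vector against the first axis of the basis stack
    return neutral + np.tensordot(weights, basis, axes=1)

smile_ish = reconstruct([0.8, 0.0, 0.3, 0.0])
assert smile_ish.shape == (n_vertices, 3)
```

With all weights at zero you get the neutral face back; any expression the system can produce lives in the span of those few deltas, which is exactly why the compressed basis is so compact.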

I saw (funnily enough, while attending a math conference) an invited talk by the tech leads of the Avatar movie: "Mathematics in Avatar" or something like that. They showed exactly this, and also what those basis faces look like... it was super funny :)

So once you have those basis facial expressions... couldn't you use this cheaply to reconstruct facial animations in all sorts of games?
 
You could, but facial expressions are part of what makes people distinct as well. People move their faces in all kinds of ways, and it makes them memorable, sympathetic, or unlikeable. Some move the mouth down on the left, some speak with a kind of split-face mouth, some can't raise their eyebrows in a smooth curve, etc.
It may not seem very important in the context of computer models, and I'm not sure artists go to such lengths to make a unique character, but it's certainly a part of our everyday interaction with other people. You probably remember that more than the exact shading of someone's skin, or the shape of their skull.
Jorge's head has already produced emotional reactions; someone didn't like his distinctive style of expression. How strange it would be if it were used in every second game because it's in some library.

Avatar does have some problems with facial expressions, btw. Sometimes it looks like the structure of the face is made of soft plastic; it just doesn't bulge right, and skin movement appears to be absent, not causing wrinkles.
 
Sony showed a similar tech demo for PS2, and aside from the odd game (Silent Hill 3 comes to mind), not a single game in the machine's life came close to matching it.
They also showed with that demo that it used all the vertex and setup capabilities of the system.
No one expected games to use the same amount of processing power for characters.
 
You could, but facial expressions are part of what makes people distinct as well. People move their faces in all kinds of ways, and it makes them memorable, sympathetic, or unlikeable. Some move the mouth down on the left, some speak with a kind of split-face mouth, some can't raise their eyebrows in a smooth curve, etc.
It may not seem very important in the context of computer models, and I'm not sure artists go to such lengths to make a unique character, but it's certainly a part of our everyday interaction with other people. You probably remember that more than the exact shading of someone's skin, or the shape of their skull.
Jorge's head has already produced emotional reactions; someone didn't like his distinctive style of expression. How strange it would be if it were used in every second game because it's in some library.

Avatar does have some problems with facial expressions, btw. Sometimes it looks like the structure of the face is made of soft plastic; it just doesn't bulge right, and skin movement appears to be absent, not causing wrinkles.

Yeah, that is true. But I wonder if they could use those as a base and tweak from there: e.g. by hand, or by introducing techniques from procedural content creation, maybe augmented with some slight stochastic imperfections?
 
But I thought that the core of this tech is to compress the data such that you have a handful of "basis mimics" left, which you can then use to reconstruct all sorts of real facial expressions by linear combination of those basis face mimics.

So far, yes, but where's the artistry in that?

I saw (funnily enough, while attending a math conference) an invited talk by the tech leads of the Avatar movie: "Mathematics in Avatar" or something like that. They showed exactly this, and also what those basis faces look like... it was super funny :)

Yes, and that's where both the creature heads and all the thousands of elemental expressions (FACS Action Units) were hand made.

So once you have those basis facial expressions... couldn't you use this cheaply to reconstruct facial animations in all sorts of games?

Yeah, that's the idea, but again, where's the artistry in that?

They scan the subject, they process the data, they capture the facial performance and use that data to drive the facial rig... all of these are technical tasks.
 
But I thought that the core of this tech is to compress the data such that you have a handful of "basis mimics" left, which you can then use to reconstruct all sorts of real facial expressions by linear combination of those basis face mimics.

A good knowledge of facial anatomy can give you that already, and it's not even strictly necessary, because experienced animators already know how to re-create or produce this or that expression.
IMO a dev should use facial mo-cap if he wants that very person's/actor's face in the game, if he's looking for the physique du rôle.

@Laa-Yosh

I think there is still much art in facial mo-cap; it's simply that the actors will be the ones contributing that art instead of the animators.
 
Yeah, that is true. But I wonder if they could use those as a base and tweak from there: e.g. by hand, or by introducing techniques from procedural content creation, maybe augmented with some slight stochastic imperfections?

That's what I was thinking too. Then, with enough heads captured, you might have 'enough' to work with to provide variety, with a little help from tweaks and tweening.
 
A good knowledge of facial anatomy can give you that already, and it's not even strictly necessary, because experienced animators already know how to re-create or produce this or that expression.
IMO a dev should use facial mo-cap if he wants that very person's/actor's face in the game, if he's looking for the physique du rôle.

This may be true, I don't know. But as far as I understand it, having everything hand made is really expensive. You probably need expensive experts for many hours... I don't know; maybe Laa-Yosh knows something about the costs and which approach is more expensive.

But as suggested above, I am thinking about generating a huge database with many different faces (split into categories too, at least boys, girls, women, and men). Then use compression techniques to find a basis for, say, the facial animation of a boy (mean values), and use the database to generate imperfections (fluctuations around this mean).

If not used for the important main characters, maybe something like this could help for NPCs?
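The mean-plus-fluctuations idea described above is essentially PCA over the face database. A toy sketch, with random stand-in data for the scans (in practice the rows would be flattened vertex positions of registered face meshes):

```python
import numpy as np

# Hypothetical sketch: from a database of captured faces, compute the
# mean face and the principal modes of variation, then generate NPC
# variants as stochastic fluctuations around the mean.

rng = np.random.default_rng(1)
n_faces, n_coords = 50, 30          # 50 scans, 10 vertices x 3 coords
faces = rng.normal(size=(n_faces, n_coords))  # stand-in scan data

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# SVD of the centered data gives the principal components
_, singular, components = np.linalg.svd(centered, full_matrices=False)
k = 5                                # keep a handful of modes
std = singular[:k] / np.sqrt(n_faces - 1)  # per-mode standard deviation

def random_npc_face(scale=1.0):
    """Sample a new face: mean plus random fluctuations along the modes."""
    coeffs = rng.normal(size=k) * std * scale
    return mean_face + coeffs @ components[:k]

npc = random_npc_face()
assert npc.shape == mean_face.shape
```

With `scale=0.0` you get the mean face of the category; raising it makes the sampled NPCs wander further from the average, which is the "imperfections" knob the post is asking for.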

What tech did they use in Crysis 3, btw? It looks fantastic!
 
This may be true, I don't know. But as far as I understand it, having everything hand made is really expensive. You probably need expensive experts for many hours... I don't know; maybe Laa-Yosh knows something about the costs and which approach is more expensive.

Doing it all hand made is usually more time consuming, but it's not necessarily more expensive than doing facial mo-cap.

But as suggested above, I am thinking about generating a huge database with many different faces (split into categories too, at least boys, girls, women, and men). Then use compression techniques to find a basis for, say, the facial animation of a boy (mean values), and use the database to generate imperfections (fluctuations around this mean).

If not used for the important main characters, maybe something like this could help for NPCs?

I think that would be interesting, but I am pretty sure that devs already have databases of hand-made faces at their disposal, so they don't start from zero every time.

Edit:

For the record, I am not against mo-cap.
I think it's an incredibly useful tech and that games benefit greatly from it, but I prefer Naughty Dog's approach: body mo-cap plus hand animation for the face & hands.
 
A good knowledge of facial anatomy can give you that already, and it's not even strictly necessary, because experienced animators already know how to re-create or produce this or that expression.


Realistic facial animation has always had a divide: the facial modeling/sculpting team studied the anatomy of the face, what the various expressions should look like, and how the skin and the underlying tissues should move, squash, and stretch.
Animators then focused on timing and range and such, but never on the looks. One of the reasons to use blend shapes is that several animators can work on the same face and still get consistent results, as a smile shouldn't have a dozen different versions in a dozen different shots.

IMO a dev should use facial mo-cap if he wants that very person's/actor's face in the game, if he's looking for the physique du rôle.

Exactly. Halo 4 actually set a precedent for that: most of the human cast is based on scan data, and even Cortana and the Librarian have been based on scans of actresses. 343 has clearly been gearing up for the new Xbox with that.

I think there is still much art in facial mo-cap; it's simply that the actors will be the ones contributing that art instead of the animators.

That's a different aspect; it's only the timing and range, and even that is edited and modified many, many times. But the actual expressions won't be hand sculpted; the face itself won't be hand sculpted...

To give you an example, in the Spartan Ops CG series, all the characters were scanned humans. Except for Dr. Halsey - she was hand sculpted by my co-worker Karoly Porkolab. I believe she can easily stand up to the scanned actors; depending on personal taste she might be even better looking than the 'real' humans. I'm obviously biased, having worked with Karoly for like 8 years and always being a fan of his work, but I consider his stuff more impressive and more artistic than 'just' scanning someone (and that's no simple business either, mind you).
 
This may be true, I don't know. But as far as I understand it, having everything hand made is really expensive. You probably need expensive experts for many hours... I don't know; maybe Laa-Yosh knows something about the costs and which approach is more expensive.

It does take a lot of time, but the results are more pleasing, at least to me.

Depending on your artistic direction, it may be very hard to find the right cast. There's a reason Tom Cruise can ask for a small fortune for any movie, and there's a reason even he gets Photoshopped for a magazine cover. Do it all by hand, and you don't have to pay him, and you don't have to use Photoshop on the results either.

Also keep in mind that movies usually don't need CG humans, because they have actors for that. It's either alien/fantasy creatures or non-existent people. But resurrecting dead actors is still a divisive topic: we've had a 2Pac hologram and an Audrey Hepburn commercial in the past year, and it's not yet clear if there's a place for things like 'them'. So in movie VFX you have to use artists, because what you need does not exist.

But as suggested above, I am thinking about generating a huge database with many different faces (split into categories too, at least boys, girls, women, and men). Then use compression techniques to find a basis for, say, the facial animation of a boy (mean values), and use the database to generate imperfections (fluctuations around this mean).

There's research on that, and it looks horrible. If you're scanning, get a different actor for each role, and then you'll have the best results possible.


What tech did they use in Crysis 3, btw? It looks fantastic!

Scans of real people and a bone-based face rig with animated normal maps, driven by facial capture data.
I haven't played the game, but as far as I know there aren't that many people in there, though.
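For anyone wondering what "animated normal maps" means in practice: the rough idea is to blend a neutral normal map toward a wrinkle map as the expression engages. A minimal NumPy sketch with made-up texel data (in a real engine this happens per-texel in a shader, with weights driven by the face rig):

```python
import numpy as np

# Sketch of wrinkle-map blending: lerp between a neutral normal map and
# a wrinkle normal map by an expression weight, then renormalize.
# Both maps here are tiny random stand-ins.

rng = np.random.default_rng(2)
h, w = 4, 4
neutral_normals = np.tile(np.array([0.0, 0.0, 1.0]), (h, w, 1))
wrinkle_normals = neutral_normals + 0.3 * rng.normal(size=(h, w, 3))

def blend_normals(weight):
    """Linearly blend the two maps, then renormalize each texel."""
    n = (1.0 - weight) * neutral_normals + weight * wrinkle_normals
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

raised_brow = blend_normals(0.7)
assert np.allclose(np.linalg.norm(raised_brow, axis=-1), 1.0)
```

At weight 0 you get the flat neutral surface; as the rig drives the weight up, the wrinkle detail fades in without adding any geometry.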
 
Face mo-cap is not an end-all solution either... We had some very interesting experiences on Halo 4 and then on AC4 as well.

Even in the case of using the same actor for the CG likeness and the performance, there may be issues. The director may not like parts of the performance, or may like a part of one take and another part of a different take. Or there may be small gestures or twitches that worked fine on the capture stage but look silly in the edit, from the camera's point of view.

It's even more complex if the CG character's look and the actor's look are different.
Case in point: Jen Taylor performed Dr. Halsey. She's about 38, but her face looks and moves like she's 28, and she's playing a 60+ year old woman. So we needed to do a LOT of keyframe animation to get results that weren't freaking everyone out...
 
The shader isn't really that complex or expensive, and as others have already mentioned, the demo already runs very fast on not exactly top-notch hardware. Which is why Activision instantly hired this Jorge guy, who had absolutely nothing else on his CV - but he did write and probably owns this shader.

This is one of the best examples of production ready tech and it will appear in games in this console generation (even if not necessarily with scanned heads). Count on it.

Well, he also has work on SMAA and MLAA to his name, but yeah, those and the shaders he has been working on for years seem to be his main accomplishments.
 
@Laa-Yosh

Subtlety, exaggeration, makeup/masks, credibility, naturalness, timing.
I dare say that I could use these words to describe, in part, what acting is about, as well as animation.
In fact, I personally think that many animators do possess a deep understanding of acting.

Do you agree?

Edit:
Oh, and Dr. Halsey looks natural, looks alive, looks human.
Your co-worker is a real artist... at least to my eyes.
 
You're talking about performance art; I'm talking about fine art, like sculpture and painting. Different things.
 
A little old, but the full GDC 2013 slides (300 pages!) are available from Jimenez's blog. Tons of pics within.

http://www.iryoku.com/projects/nextgen/downloads/Next-Generation-Character-Rendering-v5.pptx

 