L.A. Noire from Rockstar

The amazing thing is, it doesn't really suffer from the uncanny valley effect. It's really startling how lifelike it looks, but it never seems creepy.

It does feel a little creepy to me. But given the advancement, I'm willing to overlook it.
 
As far as I know from the previews, there is driving and shooting.

http://ps3.ign.com/articles/113/1135448p1.html

"Though L.A. Noire is an open-world game (explore L.A. if you like, see the sights, admire the pedestrians), there aren't mini-games to be played or side quests to complete or pigeons to shoot. L.A. Noire is a far more linear game, that puts an emphasis on the journey, moreso than the destination, and one where the narrative and the characters take a central role, with far fewer distractions. It's different, but that's what makes it look so promising. "

"All this talky talk and detective work is fine and dandy (and looks really cool), but this is a Rockstar game so expect plenty of action. You'll tail unreliable witnesses, chase down suspects, and get into GTA-like shootouts where you kill an inexplicably high number of enemies. And if someone dares shoot off your hat, you can stroll over and pick it up. After putting a slug between their eyes, of course."


Part GTA (or Mafia if you prefer), part Mass Effect and part 90s adventure game.
 
Yet the gameplay it offers is actually a generational progression for once. Or is it? I mean, you could film 2D video and have a branching selection among lots of pre-recorded choices. Since this technique is just pre-recorded acting and not created on the fly, it's in essence a choose-your-own-adventure story, like Dragon's Lair, and not a real open-ended game. Like many story-based games, but still... I'm suddenly questioning whether this is something to get excited about in games!

Bah... it's the game mechanics I'm interested in. A dynamically generated lying face is probably years away. :p
 
That's a lot! 40 hours is almost thirty 90-minute movies (2400 / 90 ≈ 27), and if you assume characters only talk for about 50% of a movie's runtime, that's the dialogue of roughly twice as many films - an awful lot compared to a medium where acting is far more crucial.

The main character has a male and a female version, and in many cases two or three choices of response. Sure, Shepard doesn't talk that much, but that amount has to be multiplied by four or so... maybe even more. And a lot of characters also have different responses depending on your actions, and there are some unique pieces of dialogue for each of the 12 squad members.

Although, thinking about it, 40 hours might really be too much... but there's definitely a LOT of talking in these games. Which is why the lip sync is completely automatic - there's no way to do it all by hand.
 
The real question here is how much of the gameplay these interrogation scenes make up. If there's driving, shooting, and puzzle solving / evidence searching, then it should still be more than Dragon's Lair.
Oh, sure. I wasn't suggesting the whole game was just a Dragon's Lair experience! I was just wondering whether this facial animation tech is really something new in terms of what it brings to a game. If it's dynamic and a character can express any emotion at any moment, then the game could adapt organically. However, if it's all pre-canned choices and dialogue trees, it can't. So although the characters animate beautifully, the gameplay is no different from using other animation methods. The inclusion of subtle detail makes the choice about whether the character is lying more important, but the whole game could have ditched the 3D format for interrogations and filmed actors directly. The end result would be the same gameplay: picking a path through the video tree.

I guess the question is how flexible their data is once captured - whether they can create new content on the fly, or just play it back.
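To make the "video tree" point concrete, here's a minimal sketch of what an interrogation reduces to with pre-recorded performances (the clip names and choices are invented for illustration, not taken from the game):

```python
# Minimal sketch of a pre-recorded "video tree" interrogation.
# Clip names and branches are invented; the point is that every
# outcome is authored in advance - nothing is generated on the fly.

class Node:
    def __init__(self, clip, choices=None):
        self.clip = clip              # pre-recorded performance to play
        self.choices = choices or {}  # player option -> next Node

root = Node("suspect_denies_alibi.clip", {
    "truth": Node("polite_followup.clip", {
        "press": Node("suspect_confesses.clip"),
    }),
    "doubt": Node("aggressive_followup.clip", {
        "back_off": Node("suspect_calls_lawyer.clip"),
    }),
})

def play(node):
    print("playing:", node.clip)
    while node.choices:
        pick = input(f"choose one of {sorted(node.choices)}: ")
        node = node.choices.get(pick, node)  # ignore invalid input
        print("playing:", node.clip)

# play(root)  # walks exactly one authored path per playthrough
```

However good the faces look, the player's agency is bounded by the shape of that tree.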
 
Recorded live-video sequences would look different from the other kinds of gameplay (exploration/action) and could break the immersion. Also, the added cost of period clothing and sets/scenery (there are outdoor sequences too!) would probably still be far higher than all this graphics and digitizing tech and the capture sessions.

And again, in principle this is no different from recording the voice only. Games still can't synthesize new speech; they can't even generate new text on their own. There is some simple level of procedural facial animation, like tracking an object with the eyes, but that shouldn't be a problem with this tech if it works with a face rig and not every frame of animation is unique.
So this is, at least in theory, no different from what any other conversation-intensive game is doing, but the quality of the animation is at least an order of magnitude better. The rendering quality is somewhat inferior IMHO - but the animation is the more important aspect of selling the performance, so they probably made the right choice.
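As a side note, the object-tracking eyes really are simple to do procedurally on top of a rig; a minimal sketch, assuming the rig exposes per-eye yaw/pitch controls (the function name and frame conventions are mine, not from any shipping engine):

```python
import math

def eye_look_at(eye_pos, target_pos):
    """Compute yaw/pitch (radians) to aim an eye at a world-space target.

    A minimal sketch: assumes the head's local frame has +z forward,
    +x right, +y up. A real rig would also clamp the angles to the
    eye's range of motion and add saccades/blinks on top.
    """
    dx = target_pos[0] - eye_pos[0]
    dy = target_pos[1] - eye_pos[1]
    dz = target_pos[2] - eye_pos[2]
    yaw = math.atan2(dx, dz)                    # left/right rotation
    pitch = math.atan2(dy, math.hypot(dx, dz))  # up/down rotation
    return yaw, pitch

# Example: eye at the origin, object ahead and slightly to the right
print(eye_look_at((0.0, 0.0, 0.0), (0.2, 0.05, 1.0)))
```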


I've already linked to tech papers that seem to be similar, so here's just a short summary:
- the system captures both geometry and color, can generate meshes and displacement + normal maps from the mesh, and also color maps
- the workflow is to first capture basic expressions like blink, open mouth, furrowed brows, maybe even phonemes; then to record a complete performance
- basic expressions are a set of base geometry, color map, and normal map (the system can produce specular and displacement maps too, but these are too memory intensive for current consoles); this also means that every expression requires a separate color/normal map for at least the involved area of the face, and could need in-between maps where simple blending isn't sufficient (like blinking)
- the underlying software then decomposes the recorded performance into combinations of the basic expressions (a rough sketch of this step follows the list)
- output is a facial rig capable of the basic expressions, and animation data to drive it in sync with the voice
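To illustrate that decomposition step: if the basic expressions form a linear blendshape basis, each captured frame can be approximated by solving for blend weights, e.g. with a least-squares fit. A toy sketch of the textbook version of the idea (not Rockstar's actual pipeline):

```python
import numpy as np

# Toy blendshape decomposition: express a captured frame as a weighted
# combination of basic expressions. Shapes are flattened vertex arrays.

n_verts = 1000
rng = np.random.default_rng(0)

neutral = rng.standard_normal(n_verts * 3)
# Basis of "basic expressions" stored as deltas from the neutral pose
deltas = rng.standard_normal((5, n_verts * 3))  # blink, open mouth, ...

# A captured frame that happens to be a mix of the basic expressions
true_w = np.array([0.8, 0.0, 0.3, 0.0, 0.1])
frame = neutral + true_w @ deltas

# Solve frame ≈ neutral + w @ deltas for w (least squares)
w, *_ = np.linalg.lstsq(deltas.T, frame - neutral, rcond=None)
print(np.round(w, 3))  # recovers ~[0.8, 0, 0.3, 0, 0.1]
```

A production system would presumably also constrain the weights and pick the matching texture maps per expression, but the rig-driving animation data in the last bullet is essentially this weight curve over time.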

If they are using some implementation of this tech, which is likely, then it should work quite well; and the bonus is that the capture system really is future-proof - they'll be able to increase detail and realism a lot further on next-gen systems with more memory and shading power.

Actors will definitely get more opportunities; however, those who have good voice acting skills but don't look the part will hate this system. It's already been mentioned with regard to Mass Effect on the BioWare forums, but people don't yet realize that this would cause a lot of problems - for a start, there'd be a complete lack of character customization...
 
No, not really - unlike ME3 it's for a completely new thing so I couldn't even hint at it if I wanted to.
 
It's uncanny as hell to me ;) Plastic dolls acting...

Yup... I dig it, but I see a lot of creepies. The faces seem slightly disconnected from the hats, and move more than the clothes or the shadows do, so they look very "wrong". Also, the proportions of certain body parts seem unrealistic given the face shapes. Regardless, the system is exciting, and I'm used to video games looking weird. If you're out there taking chances, I'm probably lining up to buy.
 
Mind you, every game out there has uncanny characters :) LA Noire has the best facial animation so far, but the looks are still not there yet. I have great expectations about what this tech will be able to do with next gen hardware though.
 
those who have good voice acting skills but don't look the part will hate this system.

I'm curious about this; is it not possible to take the captured information and apply it to the same facial rig but different geometry? Or is there too much data tied to the associated texture maps the system produces? If that's the case, maybe down the line it would be possible for artists to provide a target mesh that the system can take into account when generating the geometry and textures? Apologies if these questions verge on stupidity.
 
That's the Benjamin Button approach, I guess - but rather than CG-ing an older Brad Pitt, they could use one person for the look of the character and another to act it.
 
Theoretically you could drive a different model with the capture data; even the tech papers mention creating new animation and facial poses (again, these papers aren't about Rockstar's system, just a very similar one). So you could use any method to create the performance itself.
However, that model would still have to be based on a real, living person. Building a completely realistic human face manually from scratch would be a huge effort; even Digital Domain hasn't managed to do it properly for Tron's Clu character. And once you have to cast someone for the digitizing session, it makes sense to go all the way and hire someone who can also act out the scenes - it'd work better that way. Human facial motion is incredibly subtle and complex, and evolution has turned everyone into an expert at reading it; but then again, a game still doesn't require as much realism as a movie.
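Retargeting the weights themselves would be the easy half; a minimal sketch, assuming both characters expose the same list of basic expressions as blendshape deltas (producing that matching set is the actual work, and none of this is confirmed about Rockstar's system):

```python
import numpy as np

# Toy retargeting: reuse captured blend weights on a different character.
# Assumes both rigs share the same ordered list of basic expressions,
# stored as per-vertex deltas from each character's own neutral pose.

def apply_performance(neutral, deltas, weights_per_frame):
    """Rebuild animated meshes: frame_t = neutral + w_t @ deltas."""
    return neutral + weights_per_frame @ deltas

n_verts, n_shapes, n_frames = 500, 5, 3
rng = np.random.default_rng(1)

# Weights solved from the original actor's capture (see earlier sketch)
weights = rng.uniform(0, 1, (n_frames, n_shapes))

# A *different* character's neutral pose and expression deltas
other_neutral = rng.standard_normal(n_verts * 3)
other_deltas = rng.standard_normal((n_shapes, n_verts * 3))

animated = apply_performance(other_neutral, other_deltas, weights)
print(animated.shape)  # (3, 1500): one retargeted mesh per frame
```

The catch, as the next paragraph suggests, is that the texture maps and in-between shapes are per-actor, so "same rig, different geometry" still means re-authoring most of the data.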

And yes, this system takes a lot of texture work and not just geometry - the eyes aren't separate little spheres sitting in the head; it's one continuous mesh, and the eye motions are done with the textures too. Same with all the folds and wrinkles, and even most of the inside of the mouth and the tongue. There are a few seconds in the behind-the-scenes movie where you can see what the underlying geometry actually looks like - basically a very low-resolution scan. There isn't much room for manual adjustments or corrections; the entire system is built with complete automation in mind.

It's perfect for the kind of application they're using it for: a cast of believable human characters.
Where it'd fail is anything stylized, larger than life, or non-human. I can't really imagine games like Uncharted, Mass Effect, StarCraft or even GTA using tech like this. Something like Killzone or Crysis could work - but then again, those games don't have enough emphasis on facial animation to warrant this kind of investment.
 
This technology would've worked wonders with Heavy Rain.

About the CG Jeff Bridges in the Tron movie... why did they have to make him completely from scratch? Why not just capture Jeff's facial movements?
Though I must say it had its "gotcha" moments, for the most part the reason people despised CG Jeff Bridges is the uncanny valley effect, at least that's how I see it. :p
 
Bridges has an aged facial structure now, more wrinkles, different tissue composition, and so on. The skin itself behaves in a completely different way, the shapes of the eyes, the mouth, the neck are all changed from his 35-year old self. You cannot just track how an eyebrow moves and apply the movement to the digital version because that would look very odd.
So you have to build all these shapes manually, of course referencing video material from the young Bridges, but still it's a long process of sculpting all the possible expressions for the digital model. They literally build about 150 different heads and the animation then combines these individual shapes into the actual performance.
From what I can tell, these shapes are the main reason why the CG character isn't convincing enough, along with some shading/lighting-related issues in some of the shots. But the very first appearance in the first trailer was utterly convincing, partially because there was very little movement. I still haven't managed to develop a theory on why Benjamin Button was better; it was the same team, so they should have been able to get it right.


Also, it's not enough to just transfer the performance. The gestures and manners are mostly the same but there's still a lot of unconscious facial movement that comes with both the physical and mental aspects of aging. What Bridges has performed on the set is used as a starting point and a reference, but it's almost always modified and sometimes completely replaced.


And I really hate this uncanny valley stuff ;) It's not the cause; there is no such thing as an "uncanny valley effect" as a mechanism, it's just an observation about the emotional response curve. The reason for the dismissal is that the face doesn't act in the way the observer expects it to, based on its appearance.
 
Er...

What Bridges has performed on the set is used as a starting point and a reference, but it's almost always modified and sometimes completely replaced.

It's just not going to look convincing if they transfer the performance straight to the younger face.
Then again it didn't really look convincing anyway.
 
This technology would've worked wonders with Heavy Rain.

Would it really? Just wondering, but it seems like a lot of the finer facial details have been scaled back or omitted entirely to reach that level of sophistication in animation.
Basically, all the little details that made Heavy Rain look special are absent here. It would probably end up being more of a trade-off than an improvement - in most people's eyes, at least (I'd take a convincingly animated cartoon face over a stiff but realistic-looking one any day of the week).
 
Should they ever make a sequel to Heavy Rain on the PS4, it'd definitely be as good a fit for this tech as any game could be - realistic humans based on real-life actors... On the PS3, well, I don't know; it'd probably require scaling back some of the engine's other abilities.
 