Okay, so what you're saying is that the Uncanny Valley model is a poor one, and that it's not simply the distance from a faithful approximation of reality that leads to a negative response, as the theoretical plot describes?
Yeah, sort of... The way I'd put it is that, based on our experience of real life, we have an instinctive expectation of how certain things should be recreated. This includes facial animation, but also mass, gravity, and dynamics of movement (for complete characters, vehicles, smoke, water, etc.). It also makes sense to apply it to puzzles in video games, as Naughty Dog seems to think.
And robotics with realistic-looking androids is just one possible manifestation of this sort of phenomenon.
In the case of L.A. Noire, they're targeting the animation limits that affect other titles like, say, Heavy Rain, whose animation can feel disjointed (to me at least!). Rockstar's intention is to capture real-world acting and apply it directly, so the characters can evoke genuine empathy.
My take on this is that if something's supposed to be a human being and it's not as stylized as, say, a Garfield comic strip, then there are several very important characteristics that it simply cannot miss.
Stuff like keeping the volume of the lips intact during both stretching and compression, sliding the skin instead of stretching it wherever possible, maintaining the bony forms of the skull, and so on.
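Just to make the volume point measurable: here's a minimal sketch (mine, nothing to do with any actual game pipeline) of how you could monitor volume preservation on a closed region of a triangle mesh, summing signed tetrahedron volumes; the vertex and triangle arrays are assumed inputs.

```python
import numpy as np

def mesh_volume(vertices: np.ndarray, triangles: np.ndarray) -> float:
    """Signed volume of a closed triangle mesh via the divergence theorem.

    vertices: (N, 3) float array; triangles: (M, 3) int array of indices.
    Each triangle contributes the signed volume of the tetrahedron it forms
    with the origin; over a closed surface these sum to the enclosed volume.
    """
    a = vertices[triangles[:, 0]]
    b = vertices[triangles[:, 1]]
    c = vertices[triangles[:, 2]]
    return float(np.einsum('ij,ij->i', a, np.cross(b, c)).sum() / 6.0)

# Compare the lip region's volume in the neutral pose vs. a deformed frame;
# a big drop would flag exactly the "collapsing lips" artifact I mean.
```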
If L.A. Noire wants to succeed with believable facial animation, they'll have to work really hard, given the balance of realism vs. stylization that the screenshots suggest. The characters are still very far from cartoons, so our brains will still expect them to move realistically.
Some of these issues can be pretty complicated with motion capture, especially when there isn't a full match between the actor's face and that of the CG character. Bones and skinning by themselves are notoriously bad at maintaining volumes, and on top of that many games tend to re-use the same geometry for very different faces (although Heavy Rain isn't like that). Blend shapes can solve some of these problems, but they usually can't be driven by mocap, and the movement of each individual vertex is always linear (although it's possible to use in-between blend shapes and/or nonlinear transformations).
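To make the "linear movement" point concrete, here's a toy sketch (an illustration, not any particular engine's code) of a classic blend shape versus one with a single in-between shape; neutral/mid/target are assumed to be (V, 3) vertex arrays for the same topology.

```python
import numpy as np

def blend_linear(neutral, target, w):
    """Classic blend shape: every vertex travels on a straight line
    from its neutral position to its target position as w goes 0 -> 1."""
    return neutral + w * (target - neutral)

def blend_inbetween(neutral, mid, target, w):
    """A single in-between shape at w = 0.5 bends each vertex's path into
    two linear segments, approximating a curved trajectory (e.g. lips
    rolling over the teeth instead of cutting straight through them)."""
    if w <= 0.5:
        return neutral + (w / 0.5) * (mid - neutral)
    return mid + ((w - 0.5) / 0.5) * (target - mid)
```

More in-betweens give a finer piecewise-linear approximation of the arc, which is why they help with volume-ish problems even though each segment is still linear.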
The Lightstage-based capture system can solve many problems here, but I have no idea how it could be scaled to a realtime environment. It also only really works if it's combined with animated normal maps at the least, which are also captured from the actor's face. So they don't have much freedom here: once they go with this method, they'll have to follow it through all the way. Then again, it's also just a possibility at this point, although it fits the GI article's description.
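Purely speculative on my part, but the usual way animated normal maps get driven in realtime is wrinkle-map style, reusing the same weights that drive the geometry. A toy sketch under that assumption (the array names are mine; nothing here is confirmed about their actual pipeline):

```python
import numpy as np

def blend_normal_maps(neutral, deltas, weights):
    """Blend captured per-expression normal maps with the same weights
    that drive the corresponding blend shapes.

    neutral: (H, W, 3) float tangent-space normals for the neutral face;
    deltas: list of (expression_map - neutral) arrays, one per expression;
    weights: one float per delta, typically the blend shape weights.
    """
    n = neutral.astype(np.float64).copy()
    for w, d in zip(weights, deltas):
        n += w * d
    # Renormalize each texel back to a unit normal after blending.
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8
    return n
```

In a shipping engine this would obviously happen per-pixel in a shader with a handful of map layers, not on the CPU, but the coupling is the point: once the captured maps carry the fine detail, the geometry and the maps have to be animated together, which is why I say they'd have to follow the method through all the way.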