Okay, so a short summary of the talk at FMX by Matt Aldridge.
It was mainly about the content creation workflow for the human faces and how they decided to rely on scanning to create realistic results.
They did a lot of casting in several steps, requiring 360 degree rotation videos and some facial expressions from the candidates. The reason for the latter was botox - one of the actresses they initially looked into was abusing it so much that she was barely able to move her face at all.
They built their own scanning setup based on stereo photogrammetry and DSLR cameras, and referred to Disney Research for the remeshing (in case you're surprised, ever since Disney Animation went all digital they've been doing some very serious work in many fields - worth googling).
They scanned a neutral pose with textures, and lots of facial expressions too. They kept most of the faces asymmetrical, except for Cortana, who's not a living human. There was lots of technical info on combining the separate scans, processing all the various expressions, refitting the low-res game mesh and so on, but I think no-one here is interested in that.
Not a word, though, about their custom solver, which interpreted the facecam data from the performance capture and translated it to drive the face rig. All we know is that Corinne Yu developed it and that she had some Twitter discussion about it with Carmack (is id working on performance capture too?).
The face rig built in Maya used a ~3500 polygon head mesh that was shared across all the characters - even the Librarian. They had about 150 face shapes, with corrective fixes pushing it to ~210, plus lots of bones - jaw, tongue, eyelids where necessary (the rigger assigned a bone to every vertex, calling this somewhat overkill method 'nuking it from space').
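For anyone unfamiliar with how such a rig evaluates, it's the standard neutral-plus-weighted-deltas formulation. Here's a minimal generic sketch in Python/numpy - nothing 343-specific, all names and toy numbers are my own illustration:

```python
import numpy as np

def evaluate_blendshapes(neutral, shape_deltas, weights):
    """Standard blendshape evaluation: neutral pose plus weighted per-shape deltas.

    neutral      -- (V, 3) rest-pose vertex positions
    shape_deltas -- (S, V, 3) one delta mesh per face shape (incl. correctives)
    weights      -- (S,) animator- or solver-driven weights
    """
    return neutral + np.tensordot(weights, shape_deltas, axes=1)

# Toy numbers loosely matching the figures above: a few-thousand-vertex head, ~210 shapes
V, S = 3500, 210
neutral = np.zeros((V, 3))
shape_deltas = np.random.randn(S, V, 3) * 0.01
weights = np.zeros(S)
weights[3] = 0.7          # partially fire one shape
posed = evaluate_blendshapes(neutral, shape_deltas, weights)
```

Bones (jaw, tongue, eyelids) would then be layered on top of this by the rig itself.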
Matt said he had been fighting to get blendshapes into the engine basically from the first week he was hired (couldn't agree with him more on this one).
They could afford such complexity because the Maya rig itself was never used in the game. They used Principal Component Analysis to compress all the animation data: PCA derives a predefined number of blendshapes plus per-frame weights that reproduce the baked deformations, so it's essentially a compression tech. They could set the system to use 8, 16, 32 or any number of shapes, but usually they could get away with 16 shapes for as much as 1000 frames of animation. This meant the PCA data used less memory than a complete bone-based face rig, and looked vastly better.
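For anyone curious how that kind of PCA compression works in practice, here's a minimal sketch - my own illustration of the general technique, not 343's code: bake the Maya rig to per-frame vertex positions, run PCA over the frames, keep the first k components as derived "shapes", and store only k weights per frame.

```python
import numpy as np

def pca_compress(anim, k=16):
    """Compress baked facial animation with PCA.

    anim -- (F, V*3) array: each row is one frame's flattened vertex positions
    k    -- number of derived shapes to keep, e.g. 8 / 16 / 32
    Returns (mean, shapes, weights) so that frame f ~= mean + weights[f] @ shapes.
    """
    mean = anim.mean(axis=0)
    centered = anim - mean
    # SVD of the centered data; rows of Vt are the principal "shapes"
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    shapes = Vt[:k]                   # (k, V*3) derived blendshapes
    weights = centered @ shapes.T     # (F, k) per-frame shape weights
    return mean, shapes, weights

def pca_decompress(mean, shapes, weights):
    return mean + weights @ shapes

# Toy example: 1000 frames of a 3500-vertex head, kept to 16 shapes
F, V = 1000, 3500
anim = np.random.randn(F, V * 3)
mean, shapes, weights = pca_compress(anim, k=16)
reconstructed = pca_decompress(mean, shapes, weights)
```

Storing 16 shapes plus 16 floats per frame instead of full per-frame deformation data is presumably where the memory win over the bone-based rig comes from.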
They also had a "tension" node, pretty standard in offline work, analyzing how much an area of the face has been stretched or compressed in order to drive wrinkle maps. Interestingly, they used not only a secondary normal map but a color map too, and the animators could tweak the strength of the effect.
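The tension measure itself is simple enough to sketch - again my own approximation of the general technique, not their actual node: compare deformed edge lengths to rest-pose edge lengths around each vertex, and use the ratio to blend in stretch or compression wrinkle maps.

```python
import numpy as np

def vertex_tension(rest_pos, cur_pos, edges):
    """Per-vertex stretch/compression measure for driving wrinkle maps.

    rest_pos, cur_pos -- (V, 3) rest-pose and deformed vertex positions
    edges             -- (E, 2) vertex index pairs
    Returns a (V,) array: negative = compressed, positive = stretched.
    """
    rest_len = np.linalg.norm(rest_pos[edges[:, 0]] - rest_pos[edges[:, 1]], axis=1)
    cur_len = np.linalg.norm(cur_pos[edges[:, 0]] - cur_pos[edges[:, 1]], axis=1)
    edge_tension = cur_len / np.maximum(rest_len, 1e-8) - 1.0

    # Average each edge's tension onto the vertices it touches
    tension = np.zeros(len(rest_pos))
    counts = np.zeros(len(rest_pos))
    np.add.at(tension, edges.ravel(), np.repeat(edge_tension, 2))
    np.add.at(counts, edges.ravel(), 1.0)
    return tension / np.maximum(counts, 1.0)

# Tiny example: one edge stretched to twice its rest length
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
cur  = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(vertex_tension(rest, cur, np.array([[0, 1]])))   # -> [1.0, 1.0]
```

The resulting per-vertex value, scaled by an animator-tweakable strength as described above, would blend the secondary wrinkle normal map and color map in and out.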
They also used PCA for a certain, very graphic, character death - Matt had been a proponent of including this scene from the beginning. He was surprised that the ESRB had no problem with it and was more concerned about Cortana looking too naked. Oh, and they sneaked in some cloth sim via PCA, too.
By the end they were able to get a new head into the game in about two days.
The shared mesh/UV data also allowed for a very special internal build, where they took the scan data from the 343 employee who volunteered for the scanning - and put that face on EVERY character in the game. Screenshots were hilarious.
Also, this approach meant they could drive any character's head with performance capture data from anyone: a character with the likeness of actor A could be driven by a capture of actor B. This was actually the case for almost every character - in some cases it was decided from the start, in others they changed voice/performance actors during production.
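That portability falls straight out of the shared topology and shared shape list: a weight curve solved from actor B's facecam is just a vector per frame, and it can drive any character's version of the same shape set. A rough sketch of the idea (my assumption of how such a pipeline would be wired together, not their solver):

```python
import numpy as np

def retarget(solved_weights, char_neutral, char_shapes):
    """Drive one character's head with weights solved from another performer.

    solved_weights -- (F, S) per-frame shape weights from the facecam solver
    char_neutral   -- (V, 3) the target character's neutral head
    char_shapes    -- (S, V, 3) that character's deltas for the shared shape list
    Portable only because every head shares topology and the same shape list.
    """
    return char_neutral + np.tensordot(solved_weights, char_shapes, axes=1)
```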
Matt was also asked in the Q&A about LA Noire and pointed this out as the main difference between the two approaches. He was very diplomatic about it - but that type of capture/replay tech is a technological dead end IMHO, and I believe he thinks the same.
Pretty interesting info altogether. The scanning workflow they've developed is totally next gen ready - so expect Halo 5 to have even higher fidelity characters, reproducing more of the huge amounts of data their capture system can acquire.
The PCA stuff is very cool too; with the amount of memory on next-gen systems the possibilities are nearly endless. Cloth sims, muscle deformation systems, whatever - it's all easy to display, even with higher resolution meshes.
Also interesting to see how game developers have to get closer and closer to high end VFX approaches, tools and yeah, results.
I had a short chat with Matt after the talk too, thanked him for the info, and it turned out he was a big fan of our work. Actually, the very first slide of the talk was not in-game content but the final frames from the Legendary ending, which I had the privilege to work on.