I was kinda asking Milk though
Anyway, my interpretation is that this tech is meant for real-time pre-visualization. This kind of technology has been in development for a long time; for example, Peter Jackson and Weta did some testing on FOTR for the cave troll battle, though there it was just body movement, of course. The other major milestone was Avatar, where James Cameron also had some sort of real-time facial solver, all plugged into MotionBuilder.
You see, the results are still somewhat rough, but it's already really good for an instant preview in emotional scenes with fully performance-captured characters. There are a lot of small nuances in timing, editing, and so on that come from the actors' facial performances, so it'd make production even faster and more responsive; and we all know the Avatar sequels are in full production by now.