Okay, now that I've seen the video I am at the very least impressed by the tech.
Interestingly, for me the most impressive thing was the performance capture and voice acting.
The lighting, texturing... didn't matter that much in conveying the emotion, which was the main element of that demo for me. It could have been just as powerful if it were cel shaded, as long as the performance capture was kept.
Good stuff either way.
Funnily enough, I think it's the tech you don't see here that's the most impressive - the mocap pipeline.
Anyway, a very impressive result, but it's mostly down to the actress and the highly polished mocap. Interested to see where this is going, now that our studio has a PS3.
Yesterday, Quantic Dream Founder and Heavy Rain developer David Cage debuted the 'Kara' short film. No, it wasn't a trailer for a new game but rather a trailer for the engine the development house is using on its next project. If you haven't watched the video, it's stunning -- using mo-cap, vocal and facial performances that were all captured at the same time. We grabbed Cage after screening the trailer, and got him to spill on a number of hot topics.
...
Gamersyde has a gallery with direct-feed images:
http://www.gamersyde.com/news_quantic_dream_introduces_kara-12572_en.html
Actually, the eye movement is probably why I jumped to a face cam + image analysis. Our tests with the Image Metrics software showed that it picked up small eye darts very, very well, whereas it's impossible to get that kind of data with standard 3D mocap like Vicon's system.
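(For anyone curious how that works in principle: the analysis essentially watches the dark pupil blob in the face-cam footage and flags when its centroid jumps between frames. Below is a minimal Python sketch of that idea with made-up crop coordinates and thresholds; it is not the Image Metrics pipeline, just an illustration.)

```python
# Minimal sketch of image-based eye-dart detection: find the dark pupil blob
# in a fixed eye-region crop and watch its centroid jump between frames.
# The crop coordinates, threshold and video file are hypothetical; a real
# face-cam rig would locate the eye region properly first.
import cv2
import numpy as np

EYE_ROI = (120, 200, 60, 90)   # y, x, height, width -- placeholder crop
PUPIL_THRESHOLD = 40           # "dark enough to be pupil" -- tune per setup

def pupil_centre(frame_bgr):
    y, x, h, w = EYE_ROI
    eye = cv2.cvtColor(frame_bgr[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(eye, PUPIL_THRESHOLD, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                      # eye closed / pupil not found
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

cap = cv2.VideoCapture("face_cam.mp4")   # hypothetical face-camera footage
prev = None
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    centre = pupil_centre(frame)
    if centre is not None and prev is not None:
        dart = np.linalg.norm(centre - prev)   # pixels moved since last frame
        if dart > 2.0:                         # small-saccade threshold (pixels)
            print(f"frame {frame_idx}: eye dart of {dart:.1f} px")
    prev = centre
    frame_idx += 1
cap.release()
```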
And yeah, we have one PS3 (and one Xbox with Kinect) - we don't build actual games, so it's not a necessity; it's more for stress relief and such. We're also getting a pool table and darts soon, or so I've been told.
The body movement capture is pretty standard stuff and has been for over a decade. I helped set up a Vicon system back in 2001, and while cameras have more megapixels and the software has probably advanced a lot too, the principles are the same.
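The principle, for reference: each camera sees a retroreflective marker as a 2D dot, and with calibrated projection matrices the marker's 3D position falls out of a linear triangulation. Here's a minimal Python sketch with two toy cameras and made-up numbers, nothing Vicon-specific:

```python
# Sketch of the principle behind optical marker capture: each camera sees the
# same marker as a 2D dot, and with known camera projection matrices the 3D
# position comes from a linear triangulation (DLT). All numbers are made up
# for illustration.
import numpy as np

def triangulate_marker(projections, pixels):
    """projections: list of 3x4 camera matrices; pixels: matching (u, v) dots."""
    rows = []
    for P, (u, v) in zip(projections, pixels):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]
    return X[:3] / X[3]          # back to Euclidean coordinates

# Two toy cameras looking at the capture volume from different angles.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera at origin
R = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
t = np.array([[-4.0], [0.0], [4.0]])
P2 = np.hstack([R, t])                                         # camera off to the side

marker = np.array([1.0, 0.5, 4.0, 1.0])                        # ground-truth marker
uv1 = (P1 @ marker)[:2] / (P1 @ marker)[2]                     # dot seen by camera 1
uv2 = (P2 @ marker)[:2] / (P2 @ marker)[2]                     # dot seen by camera 2

print(triangulate_marker([P1, P2], [uv1, uv2]))                # ~[1.0, 0.5, 4.0]
```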
For the facial stuff, you want a different system with a dedicated face camera that can be synced with the body mocap; the video is then analyzed to extract facial pose information. You can then use that data to drive a deformation rig, so the capture process uses an indirect approach.
Translating straight 3D marker movement data onto the face doesn't really work; realistic facial animation needs something more sophisticated, which is why more advanced software is required.
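To make the indirect approach concrete, here's a rough Python sketch: the video analysis produces per-frame expression weights, and those weights drive blendshape deltas on the rig instead of moving vertices by raw marker translations. The rig, shape names and weights below are all hypothetical, not anything from Quantic Dream or Image Metrics.

```python
# Rough sketch of the "indirect" step: analysed per-frame expression weights
# (names are hypothetical) drive a blendshape deformation rig rather than
# being applied as raw 3D marker translations.
import numpy as np

# Neutral face mesh and per-blendshape vertex deltas (tiny toy rig: 4 vertices).
NEUTRAL = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [1.0, 1.0, 0.0]])
BLENDSHAPES = {
    "jaw_open":   np.array([[0, -0.3, 0], [0, -0.3, 0], [0, 0, 0], [0, 0, 0]], float),
    "brow_raise": np.array([[0, 0, 0], [0, 0, 0], [0, 0.1, 0.05], [0, 0.1, 0.05]], float),
    "eye_dart_l": np.array([[0.02, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], float),
}

def deform(weights):
    """Apply analysed expression weights (0..1) to the neutral mesh."""
    mesh = NEUTRAL.copy()
    for name, w in weights.items():
        mesh += np.clip(w, 0.0, 1.0) * BLENDSHAPES[name]
    return mesh

# One frame of (hypothetical) solver output from the face-cam analysis.
frame_weights = {"jaw_open": 0.4, "brow_raise": 0.7, "eye_dart_l": 1.0}
print(deform(frame_weights))
```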
...
If I remember correctly, the engine was (is?) estimated to be only 50% complete, and the demo is a year old, so the rendering won't be representative of the final result. I'm most impressed by the natural expressions (including the eye movement) and the generally more organic feel of the demo.
...