Quantic Dream's Next Game

Wow, what an extremely impressive tech demo. It was one of those really rare moments in gaming where my attention was 100% grabbed and I felt true emotion. The new performance capture tech was really something, and good acting and dialogue go a long way too!
In my honest opinion I was more blown away by this than by the Samaritan demo; not by the brute tech prowess, but purely by the excellent presentation, the strong emotion, the design aesthetics and the theme behind it. If QD can deliver a PS3 game with consistent visual and presentation quality of that caliber, then honestly I would put it right at the VERY top of current-gen games. Go Mr. Cage!
 
Interestingly, for me the most impressive thing was the performance capture and the voice acting.

The lighting and texturing... didn't matter that much in conveying the emotion, which was the main element of that demo to me. It could have been just as powerful if it were cel shaded, as long as the performance capture was kept.

Good stuff either way.
 
Interestingly, for me the most impressive thing was the performance capture and the voice acting.

The lighting and texturing... didn't matter that much in conveying the emotion, which was the main element of that demo to me. It could have been just as powerful if it were cel shaded, as long as the performance capture was kept.

Good stuff either way.

This is very true...

That was phenomenal, and I really hope this demo gives an indication of the direction of their next game.

I love sci-fi, and this demo just ticks all the right boxes in terms of emotional provocation, direction, graphics, premise, etc.

Just awesome! :D
 
Funnily enough I think it's the tech you don't see in here that's the most impressive - the mocap pipeline.

The face has low-res photo-based textures, and the detail is nowhere near what you can see in the Uncharted games. I didn't see any detail in the normal map either, and the facial expressions don't produce any wrinkling or such. However, this could easily be corrected, isn't too memory intensive, and I expect them to implement it if it's still not in the engine.
The color texture would need a different art pipeline though; they seem to prefer to just photograph the actors and use the images without much processing.

The facial animation seems to be a very highly polished bone-based rig, and it's probably not straight transformations from the mocap markers. In fact, I think it's not 3D mocap but rather based on a facecam and 2D markers on the actor's face - the same approach Avatar and Tintin used, though probably a bit less sophisticated. This seems to be becoming the industry standard; I'm aware of a few other games that appear to be going this way. I'm not sure if they have their own processing software or if they're contracting Image Metrics, though.

The body deformations/poses are generally good too, but whenever she lowers her arms the shoulder/chest area looks bad; this seems to be a very hard thing to get right. At least they're brave enough to still do naked bodies - a clothed person would work much better. But then there'd be the lack of proper cloth simulation... and did anyone notice the short hair? :)

Anyway, very impressive result, but it's mostly because of the actress and the highly polished mocap. Interested to see where this is going, now that our studio has a PS3 ;)
 
Nice, and I agree it's the performance and concept that's the most interesting here.

I still believe, as I did with Heavy Rain, that it would be a great idea to set a proper film studio to work with these tools to create an interactive movie using the same tech. They should try a few different genres, or a TV-series-style setup with a few different pilots. The turnover on the games David Cage is developing is way too slow for good iterative progress to happen. These are some big steps they are taking here, but there are a lot of years in between, and with the work the movie industry is already doing in CGI, the time is ripe for a more integrated approach.

This is where we need a new Pixar-style initiative: some people defecting from Pixar/Disney and setting up their own studio to create, who knows, even just interactive commercials using this type of technology - for Kinect and Move to start with, though touch-screen devices could work quite well with this type of stuff too (definite Vita potential as well).

And I agree with Cage that the adult gamer is grossly underserved, and that proper acting can bring a lot to a game. Uncharted is a good example even if the tech there isn't perfect yet, and something like Ninja Theory's work in that area also really elevated the connection I had with the game. Good stories and performances can make a game world much more interesting and dramatic, and that can really help.

And only games can immerse you in a world to the level that you can actively interact and play a role in it.
 
Tech demo was great, looked excellent and the voice acting was powerful even if the writing was less than stellar. I can't wait to see their next game now.
 
Funnily enough I think it's the tech you don't see in here that's the most impressive - the mocap pipeline.

Yep, sometimes I wish the gaming press would talk more about the *real* behind-the-scenes work on game technologies.

If I remember correctly, the engine was (is?) estimated to be only 50% complete, and the demo is one year old. The rendering won't be representative of the final result. I am most impressed by the natural expressions (including the eye movement) and the generally more organic feel of the demo.

Anyway, very impressive result, but it's mostly because of the actress and the highly polished mocap. Interested to see where this is going, now that our studio has a PS3 ;)

Cool, but just one? ;-)


IGN has another interview with Cage:
http://ps3.ign.com/articles/122/1220242p1.html

Yesterday, Quantic Dream Founder and Heavy Rain developer David Cage debuted the 'Kara' short film. No, it wasn't a trailer for a new game but rather a trailer for the engine the development house is using on its next project. If you haven't watched the video, it's stunning -- using mo-cap, vocal and facial performances that were all captured at the same time. We grabbed Cage after screening the trailer, and got him to spill on a number of hot topics.

...
 
Actually, the eye movement is probably why I jumped to facecam + image analysis. Our tests with the Image Metrics software showed that it picked up small eye darts very, very well, whereas it's impossible to get such data with standard 3D mocap like Vicon's system.

And yeah, we have one PS3 (and one Xbox with Kinect) - we don't build actual games, so it's not a necessity, but it's good for stress relief and such. We're also getting a pool table and darts soon, as I've been told ;)
 
The body movement capture is pretty standard stuff, and has been for over a decade. I helped set up a Vicon system back in 2001, and while cameras have more megapixels and the software has probably advanced a lot too, the principles are the same.

For facial stuff, you want a different system with a dedicated face camera that can be synced with the body mocap and the video is then analyzed to get facial pose information. You can then use that data to drive a deformation rig, so the capture process uses an indirect approach.

Translating straight 3D marker movement data doesn't really work. Realistic facial animation needs a more sophisticated deformation setup, which is why more advanced software is required.
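For what it's worth, that indirect solve step can be sketched roughly like this. Pure illustration: the basis matrix, the control names and the least-squares approach are my assumptions, not anything QD or Image Metrics has confirmed - the idea is just that tracked 2D marker offsets get fitted to weights for a small set of rig controls rather than copied onto the mesh directly.

```python
import numpy as np

# Each column: how the 2D markers move when one rig control (e.g. a
# hypothetical "jaw_open", "smile", "brow_up") is at full strength,
# measured from calibration poses of the actor.
basis = np.array([
    [0.0, 0.9, 0.0],   # marker 1, x offset
    [1.0, 0.1, 0.0],   # marker 1, y offset
    [0.0, 0.8, 0.2],   # marker 2, x offset
    [0.2, 0.0, 1.0],   # marker 2, y offset
])

def solve_rig_weights(marker_offsets):
    """Least-squares fit of rig-control weights to observed 2D marker offsets."""
    w, *_ = np.linalg.lstsq(basis, marker_offsets, rcond=None)
    return np.clip(w, 0.0, 1.0)  # rig controls typically live in [0, 1]

frame = np.array([0.1, 0.95, 0.15, 0.2])  # tracked offsets for one video frame
weights = solve_rig_weights(frame)        # one weight per rig control
```

The weights then drive the polished bone/blendshape rig, which is what makes the result look hand-animated rather than like raw marker data.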
 

Well, from these direct-feed shots I can see the textures aren't blurry at all; in fact I'd wager her face texture is even a bit more detailed than Elena's. But yeah, obviously this is the difference between a highly compressed video capture and a direct-feed capture. In any case this is only 50%; I can't wait to see what the engine looks like now.
 
Actually, the eye movement is probably why I jumped to facecam + image analysis. Our tests with the Image Metrics software showed that it picked up small eye darts very, very well, whereas it's impossible to get such data with standard 3D mocap like Vicon's system.

And yeah, we have one PS3 (and one Xbox with Kinect) - we don't build actual games, so it's not a necessity, but it's good for stress relief and such. We're also getting a pool table and darts soon, as I've been told ;)

Damn, keep up the good work and they might fix the water fountain and take the coin-ops off the john stalls. :LOL:
 
The body movement capture is pretty standard stuff, and has been for over a decade. I helped set up a Vicon system back in 2001, and while cameras have more megapixels and the software has probably advanced a lot too, the principles are the same.

For facial stuff, you want a different system with a dedicated face camera that can be synced with the body mocap and the video is then analyzed to get facial pose information. You can then use that data to drive a deformation rig, so the capture process uses an indirect approach.

Translating straight 3D marker movement data doesn't really work. Realistic facial animation needs a more sophisticated deformation setup, which is why more advanced software is required.

Ok, so after the initial capture session for both body and face (together), the datasets are fed separately into two different systems. The 3D marker movement data will be cleaned up by humans. The 2D facial animation will also be touched up with human help via different software (Image Metrics). The final processed data for both systems will sit in two different file sets since they work differently. At run-time, the game then pulls this data into memory and presents it to the users. Is that it?
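As a toy illustration of that layout (the names and structures here are entirely made up, just to make the idea concrete): the two processed file sets can be tagged with the same clip name and frame rate, so the runtime loads them separately and pairs them per frame.

```python
from dataclasses import dataclass

@dataclass
class Track:
    clip: str     # shared clip name ties the two file sets together
    fps: float
    frames: list  # per-frame data: joint transforms or face rig weights

def bind(body: Track, face: Track):
    """Pair body and face data for playback; both must describe the same clip."""
    assert body.clip == face.clip and body.fps == face.fps
    return list(zip(body.frames, face.frames))

# Cleaned body mocap and solved facial animation, as if loaded from
# their separate (hypothetical) file sets:
body = Track("kara_intro", 30.0, [{"root": (0, 0, 0)}, {"root": (0, 0, 1)}])
face = Track("kara_intro", 30.0, [{"jaw_open": 0.1}, {"jaw_open": 0.4}])
paired = bind(body, face)  # one (body_pose, face_pose) tuple per frame
```

The point of the shared tag is that the two streams can be cleaned up and versioned independently without ever drifting apart at playback.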
 
You're getting it right, patsu. We haven't fully developed such a pipeline, because our clients provide the voice cast and we prefer to record our own mocap in our studio on site; but we've done work where our actors synced to the voice file, and it seems to work well enough. We're not yet doing longer sequences, but there's some pretty challenging stuff coming up, so I wonder how well we're going to do.
 
Wait, in Quantic Dream's case, is the voice also recorded together with the motion? It should be possible to handle the voice acting separately.

How do you keep all these different sets of data in sync? In video files, you have tracks and a timescale to keep the data in lockstep. What happens in games? What structures do you use to denote time progression?
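One common answer (my assumption about engines in general, not necessarily what QD does): every animation track stores keys against a shared clip time in seconds, and each frame the game samples all tracks at the current playback time, interpolating between the surrounding keys. The audio is started on the same clock, so voice, body and face stay in lockstep without needing video-style tracks. A minimal sketch:

```python
def sample(keys, t):
    """Linearly interpolate a list of (time_sec, value) keys at time t."""
    if t <= keys[0][0]:
        return keys[0][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return v0 + a * (v1 - v0)
    return keys[-1][1]  # hold the last pose past the end of the clip

# Hypothetical face rig-control track; body joint tracks would look the same.
jaw_open = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.2)]
assert sample(jaw_open, 0.25) == 0.5  # halfway between the first two keys
```

At run-time the "current playback time" is just the accumulated game time since the clip started, so everything driven from it stays synchronized by construction.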
 
...

If I remember correctly, the engine was (is?) estimated to be only 50% complete, and the demo is one year old. The rendering won't be representative of the final result. I am most impressed by the natural expressions (including the eye movement) and the generally more organic feel of the demo.

...

Nah Patsu, they said that the demo is one year old (correct), so the demo only contains 50% of what they've currently implemented in their engine. Meaning that their current engine is vastly superior to the results seen in the "Kara" demo (superior by roughly "50%" ;-)).
 