Next-gen character modelling (GDC 2013)

Bagel seed

A preview of Activision's upcoming tech presentation at GDC.

Our talk at GDC 2013, Next-Generation Character Rendering, is a few hours away. In it, we will present what is for us the culmination of many years of work on photorealistic characters.

We believe this technology will bring current-generation characters into next-generation life, at 180 fps on a GeForce GTX 680.

We will show it running live on our two-year-old laptop.

Director of R&D Jorge Jimenez's blog

Bald guy Nvidia head demo returns

[Attached images: lauren-02-thumb.jpg, lauren-06-on-thumb.jpg]
More pics at his blog and teaser slides.

Looks pretty damned good. Eye shader on/off is a stark difference.
 
holy crap!!!


...The bald guy looked very good in close-ups, but I feel there was a loss of detail when zoomed out. Lighting also plays a part in whether it reads as lifelike or CGI; some instances were lifelike. Still, truly amazing stuff. The woman looks better than what Quantic Dream presented for PS4.
 
I looked at those PowerPoint slides and was blown away. What changes between the on/off slides is small but stark. Amazing.
 
Relies on high-end scanning with a Lightstage to get all that detail into the normal maps... We'll see how many games can use it in practice.

Similar to the Nvidia tech demo, actually, only the shaders are better.
 
I find this better than Nvidia's Ira.
Here there is a much more fluid transition between expressions.
 
But it does not seem to be driven by arbitrary data. Nvidia's face was not just playing back pre-recorded data; it was able to form expressions on the fly.

This looks like it's just straight playback, which is why it's more fluid.
 
I re-watched Ira and noticed that they go through the same "loop of expressions" over and over, the same loop we have here; even the phrase he says is the same.
Acti says: "Yes, this is the same source data from ICT [captured at the University of Southern California] used by Nvidia at GTC, but rendered using a completely different tech", so maybe I am just imagining things.
Also, the lighting here certainly doesn't make "reading" the facial expressions hard, which might be why it looks more fluid overall to me.
 
They seem to have a better SSS + specular/reflection implementation, and also Jorge's eye shader.
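For anyone curious what a screen-space separable SSS pass roughly amounts to: a sketch of the general idea only (a 2D diffusion blur approximated by two cheap 1D passes), not Activision's actual implementation. The function names, the Gaussian profile, and the single-channel simplification are all mine; the real technique uses a per-RGB-channel diffusion profile, with red scattering widest.

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1D Gaussian weights for a given sigma, in pixels."""
    w = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(w)
    return [x / s for x in w]

def blur_1d(img, kernel, horizontal):
    """One separable pass: convolve each row (or column) with the 1D
    kernel, clamping sample coordinates at the borders."""
    h, w = len(img), len(img[0])
    r = len(kernel) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for k, wt in enumerate(kernel):
                o = k - r
                sx = min(max(x + o, 0), w - 1) if horizontal else x
                sy = y if horizontal else min(max(y + o, 0), h - 1)
                acc += wt * img[sy][sx]
            out[y][x] = acc
    return out

def separable_sss(channel, sigma):
    """Horizontal then vertical 1D blur approximates a full 2D diffusion
    blur of the irradiance at a fraction of the cost."""
    k = gaussian_kernel(sigma, 3)
    return blur_1d(blur_1d(channel, k, True), k, False)
```

The point of the separable trick is cost: two 1D passes touch O(2r) samples per pixel instead of O(r²) for a true 2D convolution, which is what makes this affordable in real time.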
 
By the way, where do I know this guy's face from? My first guess was that he's an ex-Bungie dev...
 
Still, truly amazing stuff. The woman looks better than what Quantic Dream presented for PS4.

I'd say they all looked quite obviously better. The only one that counts, though, is the animated one, which again looked much better IMO. The teeth and inside of the mouth were off, but aside from that I'd say it was more or less out of the uncanny valley.
 
Is it possible this kind of graphics could be used in PS4 games? If so, my dream of photorealism in games may be realized.
 
Is it possible this kind of graphics could be used in PS4 games? If so, my dream of photorealism in games may be realized.

Looks like it could, as it can run at 180 fps on a 680. Seems it's all in the shaders and captured mesh data. IIRC in that Nvidia demo it was gigabytes of performance data compressed down to 400 MB.

Jump to 1:20 in the video. When he's lit that way, with the shadows, it looks eerily real. And the detail of the wrinkles around the eyes when he smiles is quite good too.

Kojima Productions showed a similar eye shader technique at their GDC conference, and how it gives the eyeball some depth, so the pupil looks like it sits correctly inside instead of on the surface. It really makes all the difference IMO. That, and the skin tone plus the oily specularity.
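The usual way to get that "pupil sits inside the eye" effect is to refract the view ray at the cornea with Snell's law and intersect it with an iris plane a few millimeters below the surface, then use the lateral hit offset to shift the iris texture lookup. Here is a CPU-side sketch of that geometry only; it is not Kojima's or Jimenez's actual shader, and the function names, the iris-plane model, and the 1.376 cornea index are my assumptions.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scale(v, s):
    return tuple(x * s for x in v)

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def refract(v, n, eta):
    """Snell's law refraction. v: unit incident direction (toward the
    surface), n: unit normal (toward the viewer), eta: n_air / n_cornea."""
    cos_i = -dot(v, n)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection; cannot happen entering a denser medium
    return add(scale(v, eta), scale(n, eta * cos_i - math.sqrt(k)))

def iris_uv_offset(view_dir, normal, iris_depth, eta=1.0 / 1.376):
    """Trace the refracted ray down to the iris plane iris_depth below
    the cornea surface and return the lateral (tangent-plane) offset
    used to shift the iris texture lookup."""
    t = refract(view_dir, normal, eta)
    dist = iris_depth / -dot(t, normal)   # ray length to reach the iris plane
    hit = scale(t, dist)
    # strip the component along the normal, keeping the parallax offset
    return add(hit, scale(normal, iris_depth))
```

At a straight-on view the offset is zero, and at grazing angles it grows with the view angle (but less than unrefracted parallax would, since refraction bends the ray toward the normal), which is exactly the depth cue that makes the pupil appear to sit inside the eyeball.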
 
Does it take the same computational power to make a character look that good, let's say, ten feet away instead of a foot or two? I can imagine a zombie game where most of the time you are encountering one zombie close up and several other zombies in the distance. When you are not encountering a zombie, you might be talking to one or two other characters close up. With character models that good and surroundings as good as Crysis 1's, I think we would be close to a movie.
 
A small update from Jimenez.

I’d like to clarify some topics related to my last post:

First of all, I’d like to credit the Institute for Creative Technologies for the amazing performance capture provided for the animation (http://ict.usc.edu/prototypes/digital-ira/). Their new capture technology enables photoreal facial animation performances together with extremely detailed skin features. The full team behind the capture, led by Paul Debevec, is the following: Oleg Alexander, Graham Fyffe, Jay Busch, Ryosuke Ichikari, Abhijeet Ghosh, Andrew Jones, Paul Graham, Svetlana Akim, Xueming Yu, Koki Nagano, Borom Tunwattanapong, Valerie Dauphin, Ari Shapiro and Kathleen Haase.

Second, numerous sources have asked whether this is related to the Nvidia FaceWorks demo presented at GTC. I’d like to clarify that we both use the same performance capture from the ICT, but the animation and render engine are completely different. In other words: same source data, different engine.

Finally, I’d like to clarify that the technology we presented runs, in its highest quality preset, at 74/93 fps at 720/1080p [probably a typo, figures switched] respectively, on a GeForce GTX 560 Ti (a two-year-old mid-range GPU).

Thanks to all the people who showed interest in our research; the slides will be available online pretty soon!

74 fps on a 560 Ti at 1080p. We are going to have really nice quality talking heads next gen.
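To put those numbers in frame-budget terms (a back-of-the-envelope conversion, not figures from the talk):

```python
def frame_time_ms(fps):
    """Convert frames per second to milliseconds per frame."""
    return 1000.0 / fps

# Quoted figure: ~74 fps at 1080p on a GTX 560 Ti
print(round(frame_time_ms(74), 1))  # 13.5 ms per frame
print(round(frame_time_ms(60), 1))  # 16.7 ms total budget for a 60 fps game
```

In other words, at these settings the whole demo fits inside a 60 fps frame budget on a mid-range 2011 GPU, which is why talking heads of this quality look plausible for next-gen hardware.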
 