Accurate human rendering in game [2014-2016]

The-Last-of-Us-Characters-Sculpt.jpg


This is from The Last of Us as well.
 
Why don't you guys give the new COD more credit for its human rendering? I just re-watched the X1 stuff...and the scene where the soldier plants the bomb in the copter at the end looks astoundingly real life and facial animations+voice acting are top notch!
 

I haven't gotten a chance to look at any direct-feed screenshots or HQ video, but from what I saw on Youtube the mocap is absolutely top-notch.
 
The new Tomb Raider will use the Mova Contour facial capture system, which, instead of tracking ~100 dots on the face, tracks about 7,000 points of phosphorescent makeup [invisible under normal lighting]. The technology was first introduced in a big way by Steve Perlman [creator of QuickTime and OnLive], and it enabled Digital Domain to win an Oscar for creating the face of the aged Brad Pitt in The Curious Case of Benjamin Button. Since then it has been used in many Hollywood movies [Willem Dafoe's alien face in John Carter, the mermaids in Pirates of the Caribbean: On Stranger Tides, Mark Ruffalo's Hulk, Jeff Bridges' Tron: Legacy avatar...].

IMG_20140807_131809.jpg

https://twitter.com/camilluddington/status/497429012176572418
https://twitter.com/camilluddington/status/497430078288953344
https://twitter.com/camilluddington/status/497430535782678528
https://twitter.com/camilluddington/status/497430864108597248
https://twitter.com/camilluddington/status/497434313374171137


Laa Yosh, are you using this tech [or something similar] at Digic? I know that Blur is using it for their CG trailers.
 
As far as I know, Blur has only used it once, in the Batman Arkham Asylum 2 trailer. A recent BTS video suggests that they're also using it now on the Halo 2 remake. But usually they use different methods.

We are not using the tech, haven't used it and I'm not sure if we will. You get a very accurate surface, no question about it, but that also means that it's most suited for a 1:1 recreation of the talent. It's also a lot harder to modify the performance unless you write some complex tools to manipulate the data. For example it's common to change character heights, timings, eye directions etc. and that's a bit harder if you are not working with elemental expressions.

Still, this data can also be processed in many different ways and can get more accurate results than just using a face cam. Anyway, I kinda have to go now, but I can come back to this later...
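To make the "elemental expressions" idea concrete, here is a minimal blendshape sketch in Python/numpy. Every name, count and number in it is invented for illustration; this is not Digic's or anyone else's actual pipeline, just the general shape of the technique:

```python
import numpy as np

# Toy blendshape ("elemental expression") setup: a neutral mesh plus a set
# of per-expression vertex deltas. All sizes and data here are invented.
V, S = 5000, 10                          # vertices, expression shapes
neutral = np.random.rand(V, 3)           # neutral face vertices
deltas = np.random.rand(S, V, 3) * 0.01  # deltas like "browRaise", "jawOpen"...

def solve_weights(captured, neutral, deltas):
    """Least-squares fit of expression weights to one frame of dense capture.
    This is the kind of processing that turns Contour-style surface data
    back into editable per-expression weights."""
    A = deltas.reshape(S, -1).T          # (3V, S) basis matrix
    b = (captured - neutral).ravel()     # observed offset from neutral
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(w, 0.0, 1.0)          # clamp to a sane weight range

def pose(neutral, deltas, weights):
    """Rebuild a face from weights -- the form animators can actually edit."""
    return neutral + np.tensordot(weights, deltas, axes=1)

# Round-trip check on a fake "captured" frame built from known weights:
true_w = np.random.rand(S)
captured = pose(neutral, deltas, true_w)
print(np.allclose(solve_weights(captured, neutral, deltas), true_w, atol=1e-3))
```

Once the performance lives in weights like these, the retiming and eye-direction tweaks mentioned above become ordinary curve edits rather than raw-surface surgery.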
 
You get a very accurate surface, no question about it, but that also means that it's most suited for a 1:1 recreation of the talent.

I wonder whether they chose to change Lara Croft's face for the Definitive Edition so they'd be better set up for facial capture in the sequel. The model shown in the latest Visual Works CGI trailer now looks older and visually much closer to Camilla's face.

ppgbjj.png

iiakht.png


But let's wait for a proper gameplay reveal. Visual Works and Eidos have a history of making her face look different every time.
 
I have great expectations for how realistic the new Lara will look, especially coming off UC4's trailer. They'll probably simulate muscles on the face, realtime sweat, blood/saliva spray when she's punched or kicked, better TressFX, cloth physics, a fur shader on the coat, accurate boob/butt physics, and of course much better animation. I want a full PBR pipeline with realtime reflections so the realistic character is properly grounded in an equally realistic environment. All locked at at least 1080p/30, please.
 
Her face looks so alive. Even though there are no facial expressions, the eyes convey life. I don't know what attributes of the model design make a face look alive and "emotional" even when no clear emotion is expressed. Most games fail at this when there's no clear emotion being shown; the characters look like mannequins. Hell, they fail even when there is emotion expressed.
 
I thought this was really impressive. I'm guessing this is the original Sinclair model (millions of polygons) before being transferred to in-game?
ijUiNvWpTVwDq.png

ibmNJYy94RPwcB.jpg
 
And that's the truth, isn't it? We see these great examples of what the software is capable of, then we get the in-game experience with lots of sacrifices being made because, well, they have to get the game to run well too.
 
Yes, these are offline renders of the facial expression sculpts created in ZBrush, used to generate the wrinkle maps. This level of quality in realtime rendering is still at least a generation away IMHO; getting actual geometry changes like that would require triangles that are way too small for efficient rendering. So most games still add facial wrinkles in the normal maps only, which can't change the silhouette or cast proper shadows.
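For reference, the usual wrinkle-map trick boils down to something like the following toy numpy sketch. This is the general idea, not any particular engine's shader; the map names and weights are made up:

```python
import numpy as np

def blend_wrinkle_normals(base, wrinkle_maps, weights):
    """Layer expression-driven wrinkle normal maps over a base normal map.
    base: (H, W, 3) tangent-space normals in [-1, 1]
    wrinkle_maps: per-region wrinkle normal maps, same shape as base
    weights: blend weights in [0, 1], driven by the facial rig each frame
    """
    n = base.astype(np.float32).copy()
    for wmap, w in zip(wrinkle_maps, weights):
        # Add the wrinkle detail's X/Y perturbation on top of the base normal.
        n[..., :2] += w * wmap[..., :2]
    # Renormalize per texel; shading then picks up the wrinkles, but the
    # silhouette and shadows still come from the unchanged low-poly geometry.
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

# Fake data: a flat base normal map plus one "brow" wrinkle map at half strength.
flat = np.zeros((4, 4, 3), np.float32); flat[..., 2] = 1.0
brow = flat.copy(); brow[..., 0] = 0.3
print(blend_wrinkle_normals(flat, [brow], [0.5])[0, 0])
```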

In the case of KZ4, the ingame facial animation / deformation system is also quite lacking compared to even some of the most advanced last-gen games. So even though they had this really nice source artwork, they still weren't able to make the most of it in the game...
 
We are not using the tech, haven't used it and I'm not sure if we will. You get a very accurate surface, no question about it, but that also means that it's most suited for a 1:1 recreation of the talent. It's also a lot harder to modify the performance unless you write some complex tools to manipulate the data. For example it's common to change character heights, timings, eye directions etc. and that's a bit harder if you are not working with elemental expressions.

Still, this data can also be processed in many different ways and can get more accurate results than just using a face cam. Anyway, I kinda have to go now, but I can come back to this later...

I figured; you could still collapse all that data into the typical few key points around the face for tweaking and changing stuff, with the advantage that you'd still have a full 3D surface representing the exact topology of the actor for reference when needed...
 
Interesting that they didn't model the jaw muscle (at all). And I guess they didn't have an enamel shader in ZBrush.
Otherwise, impressive ofc.
 
Sorry Laa Yosh, I made myself very unclear. I'm a layman, after all.
What I was trying to say: I had previously thought those "many point" techniques (I'm inventing that name right now) could feed the animators the usual key points for traditional facial rigging, or elemental expressions as I believe you said...
I imagine that out of all the 7k points they get, they could take the average position of a small group of them over the tip of the nose, then another for the chin, eyebrows, etc., until you're back to what most other facial mocap systems output: a discrete number of carefully placed key points, allowing all the ease of adapting and changing the end result with the same tools as usual. Basically, you'd almost be re-mocapping the raw mocap data... This might make the whole 7k points feel frivolous, but it could still be useful, since it provides the animator with a true 3-dimensional reference instead of just the few key points + video while they're working. Roughly what I mean is sketched below.
Is all of this less trivial than I'm thinking?
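Something like this toy numpy version of the collapse; the region groupings here are random stand-ins for ones you'd actually paint onto the scanned topology by hand:

```python
import numpy as np

# One frame of dense Contour-style capture: ~7k tracked 3D points.
num_points, num_regions = 7000, 40
frame = np.random.rand(num_points, 3)
# Which key point each dense point contributes to (nose tip, chin, brows...).
# Random here; a real setup would assign these by hand on the scan topology.
region_of = np.random.randint(0, num_regions, size=num_points)

def collapse_to_keypoints(frame, region_of, num_regions):
    """Average each region's dense points into one key point per frame,
    i.e. resample the 7k-point data down to the marker-style output that
    the usual facial rigging tools already know how to consume."""
    keypoints = np.zeros((num_regions, 3))
    for r in range(num_regions):
        members = frame[region_of == r]
        if len(members):                  # guard against an empty region
            keypoints[r] = members.mean(axis=0)
    return keypoints

print(collapse_to_keypoints(frame, region_of, num_regions).shape)  # (40, 3)
```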
 