Accurate human rendering in game [2014-2016]

Then again, that's exactly what Naughty Dog had in The Last of Us... and it looks quite a bit better too, IMHO. But then again, they manually animated everything, and they had a lot less work compared to Beyond.

Not to mention ND only used the blendshapes on Ellie's face during the pre-recorded cut-scenes. In Beyond it's all real-time all the time.
 
All well and good, but despite the clear advances in animation, the woman in the QB screens has this awfully rubbery-looking CG skin (Second Son suffers from that as well), whereas Jodie's looks a lot more convincing and life-like to me. QD pulled off some pretty impressive stuff on Sony's aging warhorse there (heck, I even prefer Beyond's SSS approximation to Second Son's).

It's lighting and post-processing, not shading.

---
That second screenshot you posted looks worse than the other one; perhaps you should have chosen one without the unacceptable compression?

better geometry = yes, better shaders = no
you can see instantly she's CGI
Disagree; in the full-res view the shading looks much better on the QB characters.

Beyond is artistically better looking, but the shaders look less precise.
Color grading and dithering on the shadows really hurt the look of the QB scene, I agree with that. Beyond is much cleaner and sharper.
 
Beyond is also a completed game; QB doesn't even have a release date yet, and those images are almost 6 months old or so.

But the actual look or visual style of the games is a question of subjective personal preference; there's no use in arguing about which one person X likes more. The original issue was that QB is indeed very realistic, even if heavily stylized, and its facial animation seems to be more advanced.
 
Photo mode from Second Son:

blfashgimaeu3cmepoph.jpg

bleuerqcuaabyfchtjxi.jpg

http://i2.minus.com/iHM9Eg2ttwNIM.png
https://farm3.staticflickr.com/2932/13917070504_42c2e81980_o.png
https://farm4.staticflickr.com/3691/13914688035_f7740899a1_o.png
https://farm4.staticflickr.com/3827/13895248576_df17d6bc71_o.png
http://i.imgur.com/1PuQhmH.jpg
 
The facial animation and deformation system is more advanced as well.

Both Beyond and The Sorcerer demo use only bones to deform the face, driving them by a straight translation of facial marker 3D mocap data. This makes some of the deformations notoriously hard to reproduce, especially around the mouth and the eyelids, even with a large number of bones.
Basically, they don't capture the performance itself, the intent of the actor, but just the surface of the face instead, and at a very rough level of detail, as the number of markers that can be placed on a face is very limited.
The result is that the facial deformations have good dynamics but look very weird at times, and the more extreme the facial deformation has to be, the more obvious this issue becomes.
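To make "straight translation of marker data" concrete, here's a minimal Python sketch of the idea. All the names and numbers are made up for illustration; this is the general technique, not Quantic's actual pipeline:

```python
import numpy as np

# Hypothetical illustration of marker-driven facial bones:
# each mocap marker maps 1:1 to a face bone, and the bone simply
# copies the marker's offset from its rest (neutral) position.

# rest positions of N facial markers on the actor's face (N is small, ~50)
rest_markers = np.array([[0.0, 1.6, 0.1],     # e.g. brow marker
                         [0.02, 1.55, 0.12],  # e.g. eyelid marker
                         [0.0, 1.45, 0.11]])  # e.g. lip marker

# the same markers captured on one frame of the performance
frame_markers = np.array([[0.0, 1.61, 0.1],
                          [0.02, 1.548, 0.12],
                          [0.0, 1.44, 0.115]])

# rest positions of the corresponding bones on the game character's face
rest_bones = np.array([[0.0, 1.7, 0.1],
                       [0.02, 1.65, 0.12],
                       [0.0, 1.55, 0.11]])

# "straight translation": bone position = bone rest + raw marker offset
offsets = frame_markers - rest_markers
bone_positions = rest_bones + offsets

# Anything that happens *between* the markers (skin sliding, lip curl,
# eyelid folds) is never captured, which is why the deformation breaks
# down on extreme expressions.
print(bone_positions)
```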

Quantum Break, however, uses a more advanced method that combines two different capture techniques. This is also the approach used in games like Ryse or Infamous Second Son.

First, each actor's base facial expressions are captured: things like raising an eyebrow, puckering the lips, or blinking an eye. The breakdown of these expressions is based on the Facial Action Coding System (FACS) developed by clinical psychologists in the '60s, first adapted for 3D facial animation on the LOTR movies, most notably for Gollum. Nowadays it's pretty much an industry standard in VFX and spreading very quickly in game development.
These expressions are created basically as full 3D scans at a very high level of detail, up to millions of polygons, and capture not only the general shape of the face but also the tiniest folds and wrinkles, skin sliding and all. One can also capture the skin color changes from compression, where blood is retained in or pushed out of certain areas of the face. The scanning itself is usually done with stereo photogrammetry, basically using 15-50 digital cameras to shoot pictures from many different angles; the software then generates a 3D point cloud by matching millions of points across the images based on color changes. Human skin is fortunately detailed enough for that :)
The entire expression library usually has 40-80 individual scans.
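For a feel of what the photogrammetry software is doing at its core, here's a toy Python sketch of triangulating a single matched point from two calibrated cameras. Everything here is a made-up illustration; real packages add feature matching, bundle adjustment, and dense reconstruction on top:

```python
import numpy as np

# Toy sketch of the core of stereo photogrammetry: triangulating one
# 3D point from the same skin feature matched in two photos.
# P1, P2 are 3x4 camera projection matrices (known from calibration).

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views."""
    A = np.stack([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    # the homogeneous 3D point spans the null space of A
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# two toy cameras looking at the subject from different angles
P1 = np.hstack([np.eye(3), np.array([[0], [0], [5]])])
R = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
P2 = np.hstack([R, np.array([[0], [0], [5]])])

X_true = np.array([0.1, 0.2, 0.3])
print(triangulate(P1, P2, project(P1, X_true), project(P2, X_true)))
# -> approximately [0.1, 0.2, 0.3]; repeat for millions of matched
#    points and you get the scan's point cloud
```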

These scans are used to build a face rig where each basic expression can be dialed in at any desired intensity, and the mix of various FACS Action Units can be corrected if they look wrong on their own, too. Games with this approach usually also use a bone-based rig, fine-tuned by a set of corrective blendshapes or morphs (which can be turned off as the character moves away from the camera and accuracy becomes less important); and they also add various wrinkle maps on top of the normal map to create the creases and folds for stuff like raising the eyebrows. These wrinkle maps are also generated from the facial scans.
Also, correctives are only used when necessary, because blendshapes are computationally expensive on real-time systems (no real GPU support) and they also take up a lot of memory. The main facial movement is covered by up to 150+ bones (in Ryse), and the riggers mostly try to match the scans by manually adjusting the bones for each expression.
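As a rough sketch of how the blendshape side of such a rig evaluates (hypothetical names, simplified to pure shape blending; the real rigs layer this on top of the bone-based skinning described above):

```python
import numpy as np

# Hypothetical sketch of FACS-style blendshape evaluation.
# neutral:   (V, 3) vertex positions of the neutral face
# au_deltas: dict mapping Action Unit name -> (V, 3) offsets from
#            neutral, baked down from the high-res facial scans
# weights:   per-frame AU intensities in [0, 1], from the solver

def evaluate_face(neutral, au_deltas, weights, correctives=None):
    """Linearly blend AU deltas; optionally add corrective shapes
    that fix bad-looking AU combinations (e.g. smile + jaw open)."""
    mesh = neutral.copy()
    for au, w in weights.items():
        mesh += w * au_deltas[au]
    if correctives:
        # corrective weight driven by the product of the AUs it fixes
        for (au_a, au_b), delta in correctives.items():
            mesh += weights.get(au_a, 0.0) * weights.get(au_b, 0.0) * delta
    return mesh

# toy data: a 4-vertex "face"
neutral = np.zeros((4, 3))
au_deltas = {"AU12_lip_corner_puller": np.array([[0.01, 0.005, 0.0]] * 4),
             "AU26_jaw_drop":          np.array([[0.0, -0.02, 0.0]] * 4)}
weights = {"AU12_lip_corner_puller": 0.8, "AU26_jaw_drop": 0.3}
print(evaluate_face(neutral, au_deltas, weights))
```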

The second capture is then the actual performance, which usually takes an image-based approach, using a face camera mounted on a helmet worn by the actor during the mocap (or in this case, P-cap) session.
Software called a solver analyzes the facial movements (usually with the help of painted dots on the actor's face) and translates them into animation data driving the various basic expressions (AUs). There's no need to capture the actual facial deformation itself during the performance, as it's already stored in the face rig, generated from the facial scans.
The result is more accurate and better-looking deformation, and it's also much easier to manually animate on top to correct, mix, or replace, as the animators only need to work with the expressions.
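Conceptually, the solver ends up doing something like the following (a heavily simplified, hypothetical sketch; production solvers are far more sophisticated): given where the tracked dots moved on a frame, find the non-negative mix of AU weights that best reproduces those offsets.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical, heavily simplified "solver" sketch.
# Each column of basis holds the stacked 3D offsets the tracked dots
# undergo when one Action Unit is fully engaged (known from the rig).
# observed holds the stacked dot offsets seen on the current frame.

num_dots, num_aus = 30, 5
rng = np.random.default_rng(0)
basis = rng.normal(size=(num_dots * 3, num_aus))    # (dots*3, AUs)

true_weights = np.array([0.7, 0.0, 0.3, 0.0, 0.1])  # ground truth
observed = basis @ true_weights                     # simulated frame

# Solve min ||basis @ w - observed|| subject to w >= 0,
# since AU intensities can't be negative.
weights, residual = nnls(basis, observed)
print(np.round(weights, 3))  # recovers ~[0.7, 0, 0.3, 0, 0.1]
```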

I hope all the above is clear and easy to understand. Also note that I'm in no way trying to bash Quantic's team, but there's no other way to say it, their methodology is outdated and inferior, and it shows. It was easy to please the hardcore audience with their demo mostly because it was an early move, but even today we already have games using more advanced techniques and producing much better animation. Quantic's texture and shader work is still good, so it looks nice in stills, but once those faces start to move the illusion breaks very quickly. If I were them, I'd look into the new approaches ASAP.

So does a game like Halo 4 use the same approach to facial animation as QB, Ryse, and Infamous?

halomac1.jpg
 
Halo 4 is different in that its face rigs use only blendshapes, with no bone-based base layer at all. They also don't actually load the rigs into the game, but use a form of compression on the resulting animation (called Principal Component Analysis). I wrote a more comprehensive post about this roughly a year ago.
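For the curious, here's a minimal sketch of the idea behind PCA compression of baked facial animation (a generic illustration, not 343's actual pipeline): bake every frame's vertex positions into a matrix, keep only the strongest principal components, and ship small per-frame coefficients plus a handful of "basis meshes" instead of full frames.

```python
import numpy as np

# Generic sketch of PCA compression of baked facial animation.
# frames: (F, V*3) matrix, one row per frame of baked vertex positions.
F, V = 600, 2000
rng = np.random.default_rng(1)
# simulate animation that really lives in a low-dimensional space
secret_basis = rng.normal(size=(8, V * 3))
coeffs = rng.normal(size=(F, 8))
frames = coeffs @ secret_basis

mean = frames.mean(axis=0)
centered = frames - mean

# SVD gives the principal components; keep the k strongest
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 8
components = Vt[:k]                  # k "basis meshes", (k, V*3)
per_frame = centered @ components.T  # tiny per-frame coeffs, (F, k)

# playback: mean + coefficients * components, per frame
reconstructed = mean + per_frame @ components
print("max error:", np.abs(reconstructed - frames).max())
# storage drops from F*V*3 floats to k*V*3 + F*k (+ the mean mesh)
```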

But the general approach is the same: capture the base facial expressions in a more detailed first pass, then build a rig and drive it with face-cam capture in a second pass.
 
I had a Google; Quantum Break looks perhaps the best so far, though here the woman looks a lot better than the man.
Quantum-Break-E3-2013-Xbox-One-Trailer_8.jpg

Also, this screenshot is quite old (E3 2013). I think we'll have to wait for some 'real' in-game screenshots before truly judging.

better geometry = yes, better shaders = no
you can see instantly she's CGI
No, this was in-game.

Leliana from Dragon Age 3. HQ face, LQ everything else.
Typical Frostbite; it was the same in Battlefield 4 too.
 
NVIDIA did a great job with their latest demo on Tegra K1, actually. I'm wondering what they could bring to the console industry with their tech. I also think new game engines like Unreal Engine 4 will give developers more room to just focus on creating realistic human physics instead of objects or special effects.
 
Tegra K1 is a joke; its TDP is way too high for what is allegedly a mobile part. Nvidia were in the console game last gen with the PS3, but the RSX is fairly universally declared a flop, with most of the IQ gains coming from leveraging the SPUs to preprocess or render parts of the scene. AMD/ATI took the lead in consoles because they have both a CPU and a GPU to offer; Nvidia only have an ARM licence and are trailing most other ARM vendors in implementing ARMv8. Anyway, I'd be highly sceptical of any ARM design matching the last-gen POWER CPU designs in raw perf, let alone the x86 cores in the current gen.
 

I have to agree with you. I think NVIDIA loves to amaze people with great-looking presentations and demos, although I always love seeing their demos. They should focus more on implementation as well. That's why their Tegra SoCs haven't had great success in the mobile/tablet market, beaten by Qualcomm by a lot.
 
The one with the two girls fighting is actually quite convincing, especially as a thumbnail. Skin tones, lighting, contrast - at a quick glance it could be mistaken for a photograph.
 
On topic though - these shots are horrible. There's little that's accurate about them.
Yes, that's why I thought the above was a joke.
But the poster was serious, right? :oops: They'd sorta be OK for PS360: not great, but semi-adequate.
 