The Dark Sorcerer [PS4]

Er, Halo 4 was a good demonstration IMHO of how this kind of tech can and should be used:
- body and voice actors were different from the scanned talent
- characters weren't 100% realistic, with costumes/suits and sets that would have been far too expensive to build for real (especially the Didact, an alien who's 4 meters tall)
- the intended emotional reactions required scene-specific movement and actions, not just for the conversations
 
The facial animation issues come down to Quantic's tech: it's a bone-based system and apparently there are no wrinkle maps at all. So the face mesh and its folds/wrinkles are static, which is why it sometimes breaks the illusion.
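For context, a wrinkle-map setup typically layers region-specific "wrinkled" normal maps over the neutral one, driven by the same controls as the rig. A minimal sketch of the blend (all names are hypothetical, not Quantic's or anyone's actual shader code):

```python
import numpy as np

def blend_wrinkle_normals(neutral, wrinkle, mask, activation):
    """Blend a wrinkle normal map over the neutral one.

    neutral, wrinkle: HxWx3 tangent-space normal maps in [-1, 1]
    mask:             HxW region mask confining the effect (e.g. brow)
    activation:       0..1 scalar driven by the facial rig pose
    """
    w = (mask * activation)[..., None]      # per-pixel blend weight
    n = (1.0 - w) * neutral + w * wrinkle   # linear blend of the two maps
    return n / np.linalg.norm(n, axis=-1, keepdims=True)  # renormalize
```

With activation at 0 the face shows only the static neutral detail, which is exactly the limitation being described.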

Halo 4 had more advanced tech using blendshapes and such, and so does The Last of Us, where a bone-based system is augmented with blendshapes. ND is also using keyframe animation instead of mocap.
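The bones-plus-blendshapes combination described above can be pictured as linear blend skinning with corrective shape deltas added on top. A toy sketch, not either studio's actual pipeline:

```python
import numpy as np

def skin_with_correctives(rest_verts, bone_mats, skin_weights, shapes, shape_weights):
    """Linear blend skinning plus additive blendshape correctives.

    rest_verts:    Nx3 rest-pose vertices
    bone_mats:     Bx3x4 per-bone affine transforms (3x3 rotation + translation)
    skin_weights:  NxB skinning weights (each row sums to 1)
    shapes:        SxNx3 per-shape vertex deltas from the rest pose
    shape_weights: S blendshape activations in 0..1
    """
    homo = np.concatenate([rest_verts, np.ones((len(rest_verts), 1))], axis=1)  # Nx4
    per_bone = np.einsum('bij,nj->nbi', bone_mats, homo)      # each vertex under each bone
    skinned = np.einsum('nb,nbi->ni', skin_weights, per_bone)  # weighted bone result
    corrective = np.einsum('s,sni->ni', shape_weights, shapes)  # sculpted fixes on top
    return skinned + corrective
```

The correctives are where the folds and bulges that pure bone translation can't produce get sculpted back in.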
 

Why does QD's rendition come across as more fluid?

Looked at a couple of H4 vids and it looks good, but nowhere close to the tech demo.

http://www.youtube.com/watch?v=hBz66y-HFGM#t=18m52s
 
H4 faces are around 3,500 polygons, and the mesh animation data is heavily compressed. It's running on a vastly underpowered CPU/GPU with only 512MB of memory, which it has to share with a LOT of textures and shadow maps too.

And yet, the H4 faces don't break down the way the sorcerer's face does in some poses... it really falls apart at times. They will of course keep polishing the bone weights for a lot longer, but 343's tech is inherently superior. When they scale it up to the One for Halo 5, it'll probably look better than the sorcerer.
 
It looks great, don't get me wrong, but the stuff we do just can't work with this engine.

Our scenes are still more detailed in terms of geometry, texture sizes, and number of assets. May I point out the hundreds of characters in the WatchDogs city shots? The individually modeled bricks in the walls, or the pieces of the ships in AC4, and so on.
We can use a divide-and-conquer approach here because we don't have to render everything at once; we can composite layers of stuff together in 2D.
So most of our assets are made to very different specs; we can actually model a lot of stuff that a game would still replace with normal maps.
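The 2D layer compositing mentioned above is essentially the Porter-Duff "over" operator applied back-to-front on premultiplied layers. A minimal sketch:

```python
import numpy as np

def over(fg_rgb, fg_a, bg_rgb, bg_a):
    """Porter-Duff 'over': composite a premultiplied-alpha foreground
    layer onto a premultiplied-alpha background layer."""
    out_rgb = fg_rgb + (1.0 - fg_a)[..., None] * bg_rgb
    out_a = fg_a + (1.0 - fg_a) * bg_a
    return out_rgb, out_a

def composite(layers):
    """Composite a list of (rgb, alpha) layers back-to-front:
    each later layer goes over the accumulated result."""
    rgb, a = layers[0]
    for fg_rgb, fg_a in layers[1:]:
        rgb, a = over(fg_rgb, fg_a, rgb, a)
    return rgb, a
```

Because each layer is rendered separately, no single render pass ever has to hold the whole scene, which is the divide-and-conquer part.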

The lighting and shadows are all raytraced using the same methods; there are no separate elements like pre-calculated GI and shadows mixed with realtime lights and shadow maps. This gives a huge quality jump that's very hard to match, and you can still see the difference.

Also, we don't need to compromise on image quality: we can calculate a lot of samples for each pixel, unlike games, which have very little supersampling AA, if any at all.
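Supersampling in this sense just means averaging many jittered sub-pixel samples; offline renderers can afford dozens or hundreds per pixel where a game manages one or a handful. A toy sketch:

```python
import random

def supersample_pixel(shade, px, py, samples=64, rng=None):
    """Average `samples` jittered sub-pixel evaluations of `shade(x, y)`.

    shade: any function mapping continuous image coords to a value.
    Offline rendering can crank `samples` up; a realtime engine has to
    fit its (much smaller) sample budget into the frame time.
    """
    rng = rng or random.Random(0)  # seeded for reproducibility
    total = 0.0
    for _ in range(samples):
        # jitter the sample position within the pixel's footprint
        total += shade(px + rng.random(), py + rng.random())
    return total / samples
```

Edges that a single centered sample would render as hard stair-steps come out as smooth fractional coverage.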

We can also do offline processing for physics simulations like cloth, hair, and any kind of FX. We're not limited by how much processing power fits into 1/30th of a second, and we can take shortcuts because things only need to work for the camera.


So, again, this is a very good looking demo; the tech is impressive and the results are of very high quality. It will also make us work even harder to differentiate our work from ingame graphics and keep a competitive edge.

But the engine still takes a lot of shortcuts, uses approximations or replaces systems with 'fake' solutions, and compromises image quality so that it can run in real time. Some of these trade-offs impose limits on asset production workflows and standards too. It still cannot be used to create the trailers we're asked to produce, at the complexity and quality levels expected from us (and Blur and Axis and the others).

I think you've mistaken me. What I'm saying is that the know-how required to do this well is something studios increasingly shouldn't be expected to have or be able to develop on their own; they should instead consult studios like yours to help with, or even create, assets, animations, scenes, effects, etc.

I was complaining in another discussion that I actually don't like CGI trailers that stand out too much against the gameplay they precede - it makes the game look more gamey. Same goes for cinematic trailers ... You've still managed that handily for Watchdogs, so take that as a compliment! But when I saw this, I was just wondering whether a studio like yours doesn't know a lot more about how to do this well than game studios that are just getting used to next-gen capabilities, and are new to technologies that are pretty much old hat to your studio?

Shifty said:
Thing is, for a real game you want the models to be procedurally animated. If the in-game graphics are recorded live action, and you're aiming for photorealism, you may as well just record real video footage; it'll look better and play exactly the same. Photorealism in models used for acting needs to be adaptable, so you can skip recording 100 hours of video and instead set up virtual actors to play virtual roles. So fundamentally, I'm not sure what this tech demo is really showcasing as regards games, other than that in-game cut-scenes can be great quality - but you'll then have typical in-game motion and control looking like a computer game.

But that was the whole point of the previous tech demo QD released just before working on Beyond: Two Souls, wasn't it? It contained performances from three different actors and blended them together. I saw a 'making of' for that trailer that explained this and showed the various actors at work.

I look forward to a similar 'making of' for this one, though of course we're still waiting for this to come out for PS3 first.

 
It would be interesting to know how much time it took to make this trailer. I mean, from the perspective of a game, this was like 10 minutes of content, and a game should be 10+ hours! What are the related costs? How long does it take to reach such fidelity? What about consistency? Can you maintain it across every asset in your game?

Dunno about the time (I think at least 9 months), but there are around 70 developers listed at the end of the tech demo.

For me, it's a waste of money. They could have started making a real game and the technology for it, and for E3 just taken 2-3 characters from the game, put them into some finished environment/room and shown 2-3 minutes of animation, different lighting conditions, some particles and camera angles. That would have done the same job as this demo, but without wasting resources on something that will never be anything more than a 12-minute funny real-time movie.

The whole of Crysis 3 was made by 100 devs in 23 months. This was done by 70 devs in 7-9+ months, so if it were pre-production for a real game, it would be 20-25% complete.
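The percentage is simple dev-month arithmetic; a quick sanity check (taking the listed headcounts at face value) lands in the same ballpark:

```python
# Back-of-envelope dev-month comparison from the numbers in the post.
crysis3 = 100 * 23                     # 2300 dev-months for the whole game
demo_low, demo_high = 70 * 7, 70 * 9   # 490 to 630 dev-months for the demo

share_low = demo_low / crysis3
share_high = demo_high / crysis3
print(f"{share_low:.0%} to {share_high:.0%} of the Crysis 3 effort")  # 21% to 27%
```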
 

You don't really know how many hours those 70 people put in - whether they worked on it full time for 9 months (unlikely), or whether they listed 70 people who worked on it for an unspecified amount of time while doing something else. So really, you don't know anything.

Personally I'd be surprised if this took 70 people 9 months of their full attention. For a 12-minute video, that's crazy. Even a Hollywood studio wouldn't pay 70 people for 9 months for a 12-minute video they will never make money from.
 
LB! LB! LB! LB! :D

Nothing to add, just haven't seen you in a bit......BIG HUGS!!!! :love:
 
There are moments with the stylised photography where it approaches photorealism, but it doesn't render as well as many older game CGI trailers. The lip-sync issues really break it, though. Dunno if that's the YouTube version or if the face models just aren't accurate enough.


The different systems probably have different strengths and weaknesses. The animation here looks very organic and subtle. Maybe it stays truer to the original performance?

As with all tech projects, they will move on to the next set of issues, such as the ones Laa Yosh highlighted. Looking forward to their next iteration.

But I do really want to see TLoU and ND's systems.
 
I think you've mistaken me.

Yes I did, sorry :)

What I'm saying is that the know-how required to do this well is something studios increasingly shouldn't be expected to have or be able to develop on their own; they should instead consult studios like yours to help with, or even create, assets, animations, scenes, effects, etc.

Well, it's not that clear IMHO.
Some of our clients asked us for our assets years ago, for their next-gen R&D teams, so that they could study them. Funnily enough, this is why a new game's lead character has my hands - as in, it's geometry that I modeled, and it's also based on my own hands :)

On the other hand, 343 built a workflow and tools for their facial animation in Halo 4 that are far more complex than ours. They've programmed their own performance capture solver for the face, they've scanned real actors for every character, and the pipeline can be scaled up for the Xbox One easily; Halo 4 is like its low-res version.

As for Quantic, this demo is clearly an evolution of their existing pipeline. They've invested a lot of R&D into the software side and a lot of money into their own Vicon mocap system, and now they're trying to scale it up for PS4. Unfortunately I believe it's a dead end: using straight translation data to drive bones is not enough to capture the subtleties of how the face deforms.

You've still managed that handily for Watchdogs, so take that as a compliment!

Actually I can't take that as a compliment, as I would not be content with stuff that looks like it could be done in realtime :)

But I think you're wrong here, because our main characters look vastly superior to what the engine can do even on the PS4. Aiden is very closely based on his ingame version, though, so the differences are somewhat subtle. Have you noticed the tiny hairs on his sweater, for example? Or that the knitted material is actually displaced, instead of just using a normal map? Or that he has actual hair and facial hair and eyelashes instead of textured polygons? These are small but important differences that can still go a long way. Not to mention the subtleties in the raytraced bounce lighting and area shadows. Or that all the clothing is properly simulated, at a much higher fidelity than what the engine can do. Not to mention the polygon counts, but I'd rather check the actual data at the office tomorrow ;)
 
To me, Halo 4 was a big step up in facial animation.

The acting in Halo 4 is much more subtle, both the movement and the voice.

Try watching the sorcerer video without sound, for example. It won't work as well, because so much is riding on the actors' performances.
 
Thanks for the answers! Quickly, before I go to bed: you've mistaken me again. I mean that after watching the CGI intro for Watchdogs, the game itself looks meh, very gamey! And by 'reality' I mean a movie - a non-CGI movie that you'd see in the theatre ... :D
 
I do hate you if you really think that Watchdogs looks game-like ;)
Most of the comments we got from Ubisoft were like, "everyone thinks parts of it are live action!" :p
 

Parts of the CGI, yes! Parts of the gameplay, no. :)
 