The Dark Sorcerer [PS4]

Performances don't work that well IMHO. You tend to forgive a lot of unconvincing expressions and lip sync if the voice is good. Just think about games from 10 years ago, when Half-Life 1 characters that could only move their jaws were considered impressive. They still had lots of character and emotion because of the quality of the voice work.
 
Interesting comments, Laa-Yosh. I'm blown away by this demo (Remedy's work too, but I don't like that Instagram colour grading), but looking more closely, something is indeed off (still good), especially in the later parts.

Lame question: is it possible or practical for them and other studios to develop this blend shape tech on top of their existing pipeline, or does this kind of system require ground-up work?
 
My problem with nearly every game or tech demo or cut scene I've seen is the lack of movement when breathing. Even a character standing still should have some movement in both their chest and abdomen when breathing, but it's almost non-existent. Even in this demo, after the guy catches fire you hear him take a deep breath... but no movement like you would expect to see. I expected to see his chest expand outwards, the medallion on his necklace react to the chest expansion and his shoulders move back... but nothing. In that Beyond trailer it was the same thing: you never see the characters breathe, and it's unsettling for me.

Honestly, how hard would it be to have a simple routine that runs and controls the breathing? After a character runs and stops, the breathing would look faster; when they are scared, the breathing would stop, etc.
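Just to illustrate what I mean, here's a minimal sketch of such a routine; the rates, amplitudes and the sine-based cycle are all made up for the example, not taken from any actual engine:

```python
import math

# Hypothetical procedural breathing layer, blended on top of the regular animation.
# All rates and amplitudes below are invented for illustration only.
BREATH_RATES = {"idle": 0.25, "winded": 1.2, "scared": 0.0}  # breaths per second

def breathing_offset(state: str, time_s: float, amplitude_cm: float = 1.5) -> float:
    """Return how far (in cm) to push the chest/abdomen outward at this moment."""
    rate = BREATH_RATES.get(state, 0.25)
    if rate == 0.0:            # e.g. holding breath while scared
        return 0.0
    # A sine wave is the crudest possible breathing cycle; a real system would
    # also ease between states and add noise so it never looks metronomic.
    return amplitude_cm * 0.5 * (1.0 - math.cos(2.0 * math.pi * rate * time_s))

# Each frame the animation system would add this offset to the chest bones,
# so the necklace, shoulders etc. would inherit the motion for free.
for t in (0.0, 1.0, 2.0):
    print(round(breathing_offset("winded", t), 2))
```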

It's these little things that add up to create a more realistic visual experience. Our minds have been conditioned to look for small cues to let us know how a person is feeling or when danger is about. They talk about subtle and micro movements in the face but body language can tell us so much and they just don't realize it I guess.

Was still a pretty impressive demo. I thought the facial features of the goblin were much more expressive than the sorcerer's.
 
Laa-Yosh, how do you think their pipeline works? Does it require more or less manpower compared to ND's and Halo's tech? What about inFamous, AC and others?
 
Never forget
:p
zombiejeffgordon.gif
 
My problem with nearly every game or tech demo or cut scene I've seen is the lack of movement when breathing.

Yes! I noticed this when David Gant was angry, standing still, looking at camera, and there was no noticeable breathing movement on his chest.
 
Lame question: is it possible or practical for them and other studios to develop this blend shape tech on top of their existing pipeline, or does this kind of system require ground-up work?

I think they should be able to do it - but this would require throwing away their existing face mocap pipeline completely.

Laa-Yosh, how do you think their pipeline works?

There are two basic approaches to face mocap.

The first method is to just place small optical markers on the face and have the system treat them like the larger markers on the body, tracking their movement in 3D space. This translation data is then simply used to drive the bones in the face rig.
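To make that concrete, here's a toy sketch of the idea; the marker names, neutral pose and data layout are all invented, just to show markers driving bones one-to-one:

```python
import numpy as np

# First approach, roughly: each tracked marker drives one face bone directly.
# Marker/bone names and the neutral pose here are invented for illustration.
neutral = {"jaw": np.array([0.0, -4.0, 1.0]), "brow_L": np.array([-2.0, 6.0, 2.0])}

def drive_face_bones(tracked_markers: dict) -> dict:
    """Turn tracked 3D marker positions into bone translation offsets."""
    offsets = {}
    for name, position in tracked_markers.items():
        # The bone simply inherits the marker's displacement from the neutral pose;
        # nothing in between understands what expression is actually being made.
        offsets[name] = position - neutral[name]
    return offsets

frame = {"jaw": np.array([0.0, -4.8, 1.1]), "brow_L": np.array([-2.0, 6.4, 2.0])}
print(drive_face_bones(frame))  # raw positional deltas, applied straight onto the rig
```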

The problem is that the human face is much more complex and we need this complexity to interpret facial expressions. You can understand body language well enough even without the proper muscle deformations, tendons etc. - but on the face you need all the folds and wrinkles and the soft tissue pushing and pulling above the bones and such.
However you cannot place enough marker dots to track that, and the optical mocap probably couldn't handle it anyway. It also means you need a gazillion cameras on your mocap stage to always have every marker seen by at least two cameras.

There is an interesting development on this called Mova Contour, where fluorescent makeup is applied and it provides thousands of tracking points, so more fidelity can be achieved. However this requires a static sitting actor and cannot be used in performance capture where you record body and face simultaneously.

Obviously you can add some extra stuff on top of the bone-based face rig, like helper bones to properly control the eyelids or the inside of the lips, and you can probably derive their movement from the tracked markers. But ultimately you only capture a small subset of the skin surface, so the deformations will suffer. You also cannot use this method to have significant differences between the actor's face and the CG character's face. Just look at the creepy human-goblin.

It's also very hard to manually animate on top of this, as all the bones have to be manipulated individually and you cannot re-use any work because it's all relative on top of the mocap data.



The other approach is to track the face, then try to understand what the actor is doing and generate some kind of metadata, that can then be used to drive a facial rig based on facial expressions. So instead of tracking how many millimeters a jaw or eyelid is moving, you're instead trying to get a percentage value on "jaw opener" or "lower eyelid raiser".
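A toy version of what that solve produces (the calibration numbers below are invented; a real solver would fit them per actor from a range-of-motion session): instead of raw millimetre deltas, you get normalized expression weights.

```python
def solve_expression_weights(jaw_gap_mm: float, eyelid_gap_mm: float) -> dict:
    """Map raw tracked measurements to 0..1 expression weights.

    The neutral/maximum values below are made-up calibration numbers,
    purely to show the normalization step.
    """
    def normalize(value, neutral, maximum):
        return max(0.0, min(1.0, (value - neutral) / (maximum - neutral)))

    return {
        "jaw_open":        normalize(jaw_gap_mm,    neutral=2.0, maximum=40.0),
        "eyelid_raise_lo": normalize(eyelid_gap_mm, neutral=8.0, maximum=14.0),
    }

# These percentages then drive whatever rig you built (bones or blendshapes),
# rather than reproducing the actor's literal skin motion.
print(solve_expression_weights(jaw_gap_mm=21.0, eyelid_gap_mm=11.0))
```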

The upside is that you don't care about tracking the deformations on the actor's face, you build them into the rig yourself, so the tracking doesn't need such a high granularity. You can also track pupil dilation, and the markers only need to be painted on, which is less obtrusive to the actor. Also, you can use a single head-mounted camera, so the mocap stage can be a little simpler.

However face cameras can get in the way in some situations and you also require all sorts of electronics equipment to sync them to the body mocap and voice recording, and you have to worry about batteries and such. Still, it's less expensive than buying another 20-50 cameras.

Also, you do have to put the deformations into the face rig yourself and that can take some work. You either need some talented artists to sculpt blendshapes, or multi-talented riggers to build expressions with a bone rig; or you can use scanning to get expressions from the actors but that requires significant investment. But you only have to build these deformations once, and re-use the same blink or smile or whatever. This also helps to make a character's facial performance more consistent.

The capture however requires special software called a solver that can recognize facial expressions, which is why it took so long to start seeing solutions: it had to be built on a lot of research and it requires fast hardware. There are also very few of these on the market, which is probably why 343 wrote their own.

Or you can use a human to do the "capture" like ND does ;) Their system is, by the way, a bone-based rig augmented with blendshapes that has pre-defined facial expression poses for the bones, so technically it's the second approach. The only difference is that the PS3 doesn't have enough memory for a full blendshape rig, so they are replicating its workings with bones instead.
http://www.youtube.com/watch?v=myZcUvU8YWc&feature=player_embedded
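For reference, the "full blendshape rig" that the PS3 reportedly can't afford boils down to the standard linear combination below; the tiny three-vertex mesh is purely illustrative, not ND's actual data:

```python
import numpy as np

# Standard blendshape evaluation: final = base + sum(weight_i * (target_i - base)).
base = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 1.0, 0.0]])
targets = {
    "smile":    base + np.array([[0.1, 0.1, 0.0], [-0.1, 0.1, 0.0], [0.0, 0.0, 0.0]]),
    "jaw_open": base + np.array([[0.0, -0.3, 0.0], [0.0, -0.3, 0.0], [0.0, 0.0, 0.0]]),
}

def evaluate_blendshapes(weights: dict) -> np.ndarray:
    result = base.copy()
    for name, w in weights.items():
        result += w * (targets[name] - base)   # memory cost: one full delta per shape
    return result

print(evaluate_blendshapes({"smile": 0.6, "jaw_open": 0.3}))
# A bone-based rig approximates the same poses with far less memory,
# which is the trade-off described above.
```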

Does it require more or less manpower compared to ND's and Halo's tech? What about inFamous, AC and others?

Well, it's hard to say about the manpower as there are many ways to automate work and re-use existing data. All characters in an ND title or Halo 4 share the same face geometry, so bone weights or blendshapes can be re-used. Thus it may take a lot of time to set up a single face, but eventually you can get to the other extreme: 343 said that by the end they could add a new character in only 2 days, including processing the scans. Of course if you want high quality for a hero character, you'll want to spend as much time as possible.

I have little experience with Quantic's method as I've always thought bone-based rigs to be inferior, since we had access to blendshapes. I imagine they try to re-use as much data as possible, too. But I think they need to do a lot more polishing and tweaking - this demo clearly wasn't advanced enough at the PS4 reveal and it still needs work. Beyond, on the other hand, has some much more natural looking animation on the Ellen Page character (and it helps that her face is smoother than Dafoe's ;) )

AC is similar to ND's rig in that it's bone-based, and it's driven by video captured with a face cam. Don't know about inFamous.
 
Does CGI like Pixar's have to deal with facial rigging of the same complexity, or can they get away with simplified facial rigs because the characters are simpler?
 
If anything, movie rigs are even more complex.

Pixar's rigs are based on many little manipulators to give lots of freedom to the animator to create cartoonish expressions. Some of the non-humanoid characters are even more of a mess, just think about what Scrat has to go through in any Ice Age movie.
These types of rigs have to deal with extreme squash and stretch, they need to be able to literally bend the entire head all around, with the eyeballs and teeth, and do it in a way that's easy for the animators to control. Scrat is of course an extreme example, but almost all of these movies need to implement these features to some level. Or think about Elastigirl in the Incredibles.

Movie VFX is complex as well, but that's because of the level of realism required, every minute detail has to be covered. Dry lips need to stick together, eyeballs need to push the eyelids around, all the little wrinkles have to be there. Gollum even had controls for the various veins on his head.
This level of stuff requires thousands of blendshapes on faces built from 20-50 thousand polygons. There are usually 80-200 controls for the animator, and most of the blendshapes are used to fix combinations of expressions, like being able to talk while smiling.
In comparison, Halo 4 heads are 3,500 polygons and the most complex rig had ~220 blendshapes, so there's quite a large gap between the two.
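To illustrate the "fixing combinations" point above: a corrective shape is typically driven by the product of the weights of the expressions it fixes. A made-up example:

```python
def corrective_weight(smile: float, jaw_open: float) -> float:
    """Weight of a hypothetical 'smile while talking' corrective blendshape.

    It only kicks in when both underlying expressions are active, which is why
    so many of those thousands of movie blendshapes exist: one per problematic
    combination, sculpted by hand.
    """
    return smile * jaw_open

print(corrective_weight(smile=1.0, jaw_open=0.0))   # 0.0  - no fix needed
print(corrective_weight(smile=0.8, jaw_open=0.7))   # 0.56 - corrective fades in
```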
 
E3 showfloor impressions from gaffer:
So I ended Day 2 watching the extended demo of this in Sony's booth. The buzz was non-existent, the line was small, and I was curious. Before getting things going, the Quantic Dream dev went on and on about how there are no tricks, that this is running in-engine in real-time 3D with no video overlays. I was still quite skeptical. They showed everything from the press conference and then the demo went on for another five minutes. Another actor shows up in a devil costume, many more goblins appear. I kept thinking to myself, this just can't be. The character in the devil outfit looks absolutely real. I admit I was #TeamCG, so I have been burned before. And this is supposedly real time, no less. When it was over, he started to describe how many polys there were per character. And how many shaders. Blah blah. But then he started to change the lighting with the controller. To move the camera around in full 3D. To zoom in and out to see amazing details like the characters' eyes. He started turning shaders on and off. Then he went into wireframe mode to show off the models. He had them do different animations like push-ups to show off the cloth simulations for the clothes and hair. Because we were the last show of the day, he had plenty of time to go through it in detail.

I could not compute. It was real. This is the single most impressive realtime 3D graphics demo I have ever seen in my life. I have been to many E3s and even SIGGRAPH. Never has my jaw hit the floor so hard. I put my face right up to the huge hi-def TV screen to look more closely at the end. It still looked real. There can be no debate here, people. #TEAMCG is happening, and in real time on PS4. And it is not Kojima that is delivering, it is Quantic Dream. This really deserves a lot more attention than it is getting, especially after what happened with the TeamCG vs TeamReal debate. If you are at E3, you must go to the Sony booth and wait in line for the Quantic Dream demo to see it for yourself. I asked them to release the demo on the PSN store when the PS4 comes out. The last argument is that this is just a demo, so what will the game look like? They were very confident that the game will look like this or better. The Quantic Dream guy listed how they made this in 6 months using tools from the Beyond pipeline, and that it was not even using the full memory of the PS4. He also pointed to their pedigree, and I have to admit, Beyond looks better than many next-gen games at E3.

#TEAMCG has been vindicated! Discuss.
http://www.neogaf.com/forum/showpost.php?p=63271331&postcount=319
 
So it was 6 months. I hope the video of their presentation surfaces on the internet, and that there was someone tech-competent there to ask some questions. I hope Richard Leadbetter was there.

For comparison, from Epic's E3 presentation, the Infiltrator tech demo was made in 3 months.
 
For comparison, from Epic's E3 presentation, the Infiltrator tech demo was made in 3 months.

Epic used their old engine [they removed SVOGI from the Elemental demo and then added a metric ton of piss filters], and QD built their engine from scratch; plus the demo lasted 4x longer, plus it has a totally different style of storytelling.

Plus, QD's engine was created for the newly created PS4 [new tools, hardware, etc.], and Infiltrator used proven PC tech.
 
Epic used their old engine [they removed SVOGI from the Elemental demo and then added a metric ton of piss filters], and QD built their engine from scratch; plus the demo lasted 4x longer, plus it has a totally different style of storytelling.

Plus, QD's engine was created for the newly created PS4 [new tools, hardware, etc.], and Infiltrator used proven PC tech.

QD based it on Beyond technology and improved it - "The Quantic Dream guy listed how they made this in 6 months using tools from the Beyond pipeline".
The demo lasted 4x longer - seriously? Compare the amount of assets: a big city and a whole factory vs. one room and 3 characters.
There is a ton of new technology in this demo.
Yes, it is a different style.

PS4 is also proven PC tech.
 
Using your Windows PC engine to create a demo on a Windows PC is easier than porting your PS3 engine to a brand-new development environment and radically different underlying hardware, even if it is PC-derived.
 
I haven't really seen that much wrong with the cloth simulations in that movie... However, it's true that sometimes a shot has to be approved even if we're not completely happy with it. E3 is an unmovable deadline, after all.

1 million polygons are for scenes where only a few characters are present, don't expect it in an open world.
And don't expect it for a cast of 300 characters either... Unless someone's willing to pay all the artists, of course.

In order for characters to get closer to CG quality, do they require a base count near a million polygons? Or could existing technology fake your way up there with a much lower count? I recall that you claimed recently that increasing the amount of polygons is much lower on the list of priorities for a game like The Last of Us. So I am wondering where the diminishing returns are, because it seems like the most impressive showings increased their count exponentially. A lot of games are only using thousands of polygons at the moment for key areas, based on the artist thread.
 
Little summary and some new details about the tech demo

PS4 Demo The Dark Sorcerer: Even More Impressive Technical Specs Unveiled, and a Little Mystery

A few days ago I wrote an article that unveiled a few of the impressive technical specs of The Dark Sorcerer tech demo by Quantic Dream, and today we learn more of the elements that contributed to turning what was basically treated as an "old man face" joke right after the PlayStation meeting in which the PS4 was unveiled into an extremely impressive demonstration of power that dropped many jaws at E3.

For convenient reading, I'll add the new elements to what we already know in a single list, to give a proper overall vision of the specs, which were gathered from notes by our writers present at the demo's showing at E3, this video, an article on the Japanese website 4Gamer, the official PS blog, and a couple of tidbits from Quantic Dream's official website.

The demo ran in 1080p native resolution. Texture resolution was 1080p as well.
The framerate was not optimized at E3, and ran between 30 and 90 frames per second.
The demo used only 4 GB of the PS4′s 8 GB of RAM.
The DualShock 4 can be used to dynamically move the camera position and switch lighting (between studio mode and film mode) within a single frame.
The set uses about one million polygons.
Each character takes a little less than one million polygons and 150 MB of textures (there's a reporting discrepancy here; see the note at the bottom of the list).
The textures for the skin and face models were obtained by scanning the faces of the actors actually cast for the project.
The sorcerer is played by David Gant, the Goblin by Carl Anthony Payne II, the Demon by Christian Ericksen and the Director by David Gasman.
The vertex density of the 3D models is comparable to the CG used for film making.
Each character uses 40 different shaders.
The scene uses Volumetric Lighting, allowing individual beams of light to be displayed when the light shines through the environment.
Color Grading and Full HDR ensure that colors are truly vivid and realistic.
All particle effects are simulated in real time and emit light/create shadows.
Limb Darkening (essentially vignetting) is used to naturally darken the edges of the screen (see the short sketch right after this list).
Effects that would normally be applied in post production like Lens Flare, True 3D Depth of Field and Motion Blur are implemented in real time based on an accurate optical simulation.
Camera lens distortion and imperfections are also simulated.
Physics-based real time rendering is used for reflections and done by the rendering engine. When the lighting changes, the shaders don’t, but the reflection effect still changes dynamically based on lighting. This is a technique that was previously possible only for pre-rendered CG at this level of detail.
An advanced technique named Subsurface Scattering (SSS) is used to simulate the shading of the skin. It involves letting the light penetrate translucent materials, scatter and refract a number of times at irregular angles, and then exit the surface again at different points. It's another technique that was previously only used in CG.
The same Subsurface Scattering effect is used for the green skin of the goblin, for the wax of the candles and the crystal of the wand as well, despite the fact that the final result is entirely different.
To simulate the effect of wetness of the eye surface, the engine applies to it a mirrored image of the surrounding scene.
The cornea and pupil are actually modeled in 3D inside each eye.
Each hair is drawn separately instead of being a texture applied to a polygonal model.
All clothes, accessories and hair (including the feathers in the sorcerer's collar) are physically simulated in their motion and interaction with the environment, as if they were worn by a real actor.
Each human model has 380 different bones: 180 in the face, 150 for the body and 50 for the exoskeleton. This is three times the number of bones used in Heavy Rain and twice the number used in Beyond: Two Souls.
The performance capture studio used for motion and facial capture has been built internally at Quantic Dream and uses 64 cameras tracking 80 markers attached to the face and 60 to the body.
The demo has been created by using the PS3 development pipeline used for Beyond: Two Souls, as PS4 development tools still weren’t available.
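Since the "Limb Darkening" item above is essentially vignetting, here's a minimal sketch of the idea; the falloff constants are invented, and a real implementation would live in a post-process shader rather than Python:

```python
def vignette(u: float, v: float, strength: float = 0.35, start: float = 0.4) -> float:
    """Return a brightness multiplier for a pixel at normalized coords (u, v) in [0, 1].

    Pixels near the centre keep full brightness; edges and corners are darkened
    smoothly, mimicking the light falloff of a real camera lens.
    """
    dx, dy = u - 0.5, v - 0.5
    distance = (dx * dx + dy * dy) ** 0.5                      # 0 at centre, ~0.707 in a corner
    t = min(1.0, max(0.0, (distance - start) / (0.707 - start)))  # 0 inside 'start', 1 at corner
    return 1.0 - strength * t * t                              # quadratic falloff toward the edges

print(round(vignette(0.5, 0.5), 3))  # centre: 1.0
print(round(vignette(0.0, 0.0), 3))  # corner: noticeably darker
```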

There’s also a bit of a mystery that we are still unable to solve. The article on the Official PlayStation blog talks about a little less than a million polygons per character, but our team reports that during the presentations at E3 “only” 60-70,000 polygons were mentioned. The other sources agree with that notion. I included the official PlayStation blog number in the list above, as it’s the most “official” source we have, but the discrepancy seems strange. We reached out to Quantic Dream for a clarification on the issue, and we’ll keep you updated if we hear anything relevant.

That's quite a wall of text… I know, but that's what you get when you try to describe a rather large leap in technological innovation. Now we just have to wait and see what Quantic Dream will be able to do with this kind of technology and proper PS4 development tools. One thing is for sure: after seeing the demo in action, it's hard not to be excited for the next generation.
 
The PS4 GPU can push 1.6 billion triangles - does this mean it can push 1.6 billion polygons? If so, then 60-70,000 polygons seems tiny?
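For a rough sense of scale, assuming that 1.6 billion figure is triangles per second rather than per frame (and ignoring that real scenes never hit the theoretical peak):

```python
# Back-of-the-envelope only, assuming the quoted 1.6 billion is triangles per SECOND.
triangles_per_second = 1.6e9
for fps in (30, 60):
    per_frame = triangles_per_second / fps
    characters = per_frame / 70_000          # using the higher 70k-per-character figure
    print(f"{fps} fps: ~{per_frame / 1e6:.0f}M triangles/frame, "
          f"room for ~{characters:.0f} such characters before anything else")
```

So a 60-70k character is a small slice of one frame's theoretical budget; in practice shading, animation and everything else competing for the GPU bring the usable number down long before the raw triangle rate becomes the limit.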
 