The Order: 1886

Really impressive stuff. I was underwhelmed at its first showing, but slightly intrigued when told all footage was in-engine. Yet my interest in this has progressively grown with every new bit of info about it.

Didn't expect to see something this polished out of Ready at Dawn. No offense to their studio; I do think they're very good, but this seems to be a game of tremendously bigger budget and production value compared to what they've made in the past. Great to see a competent studio evolving with time. Such a pleasant surprise so far. Hope it stays that way.
I'm growing more and more appreciative of Sony for their efforts in nurturing great talent over the years.
 
More pics

1382545345-character-detail-pipeline.jpg


character_costume_pipjqozc.jpg


1382544667-tsr-poster-joe.jpg
 
Seems everyone's creating their own massive library of materials for engine pipelines that may well be very similar. Maybe there'll be some sharing of this between the Sony WWS studios.
 
Visual effects studios, even those using the same rendering engines, usually develop their own shaders and such too. It makes sense to fit and optimize everything to your own workflow and pipeline, as any cost gains from standardization will usually have to be paid elsewhere.

This game, for example, seems to use a certain kind of lighting: lots of very diffuse and gray stuff, underlit interiors and so on, so they probably have a very differently constructed renderer. It makes sense to fit materials to this too; also, the period dictates very unique materials, for cloth in particular, compared to a game with an SF, fantasy or contemporary setting.
 
The new pics look really good to me, I fully dig the style! I'm also super excited about the dev comments stating that every body will be physically modeled and that this adds to the action mayhem :)

Can't wait to see a gameplay vid...but I also hope that they don't spoil the story or too much of the universe before release.

Quite excited about this game...good job MJP and co :)
 

Ubisoft and Sony are two big publishers that would benefit a lot from standardized game engines. But who knows how that would impact the vision of their experimental studios [like Naughty Dog].

At least Sony needs to standardize facial capture; Second Son has MUCH more detailed capture tech than KZSF.

RAD hasn't mentioned anything about actors and facial capture so far...
 

I found it interesting that EA Sports has their own next-gen engine called Ignite, rather than using Frostbite. Maybe it's secretly a derivative, but it may just be that the jack-of-all-trades, one-size-fits-all engine is still not a great solution in a lot of cases. I do know that Frostbite has borrowed from EA Sports before (ANT), so they still do some sharing.
 
Facial capture is pretty much standard: use an HD facecam and, when necessary, add some markers too. The fun part is the solver that converts your captured data into animation, and there are many possible ways to do that.

What KZ seems to be doing, and what Quantic Dream is doing, is to use the same talent for the performance and for the likeness. This way they can replicate the performance 100%, right down to the skin deformation.
This approach is a relatively good fit for face rigs that use only bones: basically, replicate the markers used on the actor in the face rig as bones and drive them using the 3D translation data. I'd consider this a "dumb" rig, in that the system has no idea what the actor is doing - is it a smile, a blink? - and thus it does not require a solver.
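In pseudo-Python, such a "dumb" rig is basically just this (a minimal sketch with hypothetical marker/bone names, not any studio's actual code):

```python
# Each facial bone is offset directly by its matching marker's displacement
# from the rest pose. No solver, no notion of what the expression "means".

def drive_bones(markers, rest_markers, rest_bones):
    """markers/rest_markers: name -> (x, y, z) captured and rest positions;
    rest_bones: name -> (x, y, z) bone rest position. Returns posed bones."""
    posed = {}
    for name, (mx, my, mz) in markers.items():
        rx, ry, rz = rest_markers[name]
        bx, by, bz = rest_bones[name]
        # The bone moves exactly as far as its marker moved - nothing more.
        posed[name] = (bx + (mx - rx), by + (my - ry), bz + (mz - rz))
    return posed

rest_markers = {"jaw_tip": (0.0, 0.0, 0.0)}
rest_bones = {"jaw_tip": (0.0, -1.0, 0.5)}
# The actor opens the mouth: the jaw marker drops by 0.5.
frame = drive_bones({"jaw_tip": (0.0, -0.5, 0.0)}, rest_markers, rest_bones)
print(frame["jaw_tip"])  # (0.0, -1.5, 0.5)
```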
The problem is that realistic deformations would require a practically unlimited number of bones/markers to reproduce things like skin (up to 0.7-1 cm of soft tissue) sliding over the facial bones, thin skin wrinkling and folding, and also volume preservation (all human tissues are basically water, thus not compressible). Also, you cannot capture the inside of the mouth or the tongue, both very visible and necessary for a convincing result.
It is possible to add a secondary set of bones to the rig that are not directly driven by the capture but can be either "programmed" or manually animated to compensate for the lack of subtlety of a bones-based rig. It is, however, a limited solution.

If you don't use the same talent for the capture and the likeness, then things get even worse, as you have to figure out proper offsets to apply to the marker data. This creates more and more freakish-looking results, which is why almost no one uses the "dumb" approach for such cases.

More complex face rigs require a solver: a math-based software tool that analyzes the captured data, either the video footage or the extracted marker movements, and attempts to figure out what the performance means, breaking it down into elemental facial movements. Almost every solver is based on psychological research called the Facial Action Coding System, with roots going back to the '60s. This is basically a set of about 40 elemental expressions called Action Units that can be combined at various intensities to create practically every possible facial expression. So the solver's task is to break down the 2D or 3D data into values for these AUs.
Then the face rig replicates these expressions using just bones, blendshapes, or a combination of both, and the solved AU values are used to drive them. So instead of a direct transformation it's based on metadata, and thus the facial deformations can be independent of the performance.
This also means that no match is required between the actor and the likeness (though it certainly helps), that it's easy to manually tweak the animation, and that it's easy to create procedural animation (though the results can still look silly...).
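As a minimal sketch of that last step (hypothetical AU names and a toy two-vertex "mesh", not any shipping rig):

```python
# The solver outputs per-frame intensities for FACS-style Action Units, and
# the rig rebuilds the face as neutral + sum(intensity * blendshape delta).

def pose_face(neutral, au_deltas, au_weights):
    """neutral: list of [x, y, z] vertices; au_deltas: AU name -> per-vertex
    deltas from neutral; au_weights: AU name -> solved intensity in [0, 1]."""
    pose = [list(v) for v in neutral]
    for au, weight in au_weights.items():
        for i, delta in enumerate(au_deltas[au]):
            for axis in range(3):
                pose[i][axis] += weight * delta[axis]
    return pose

neutral = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
au_deltas = {
    "AU12_lip_corner_puller": [[0.0, 0.2, 0.0], [0.0, 0.2, 0.0]],  # "smile"
    "AU26_jaw_drop": [[0.0, -0.4, 0.0], [0.0, 0.0, 0.0]],
}
# The solver reports a half-intensity smile plus a full jaw drop.
posed = pose_face(neutral, au_deltas,
                  {"AU12_lip_corner_puller": 0.5, "AU26_jaw_drop": 1.0})
print(posed)  # vertex 0 ends up near y = -0.3, vertex 1 near y = 0.1
```

Because the weights are per-expression metadata rather than raw marker positions, the same weight stream can drive any face that implements the same AU set.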

The Order is using face capture, by the way, as explicitly mentioned on the image linked in this thread. They also seem to be using a mixed bones- and blendshapes-based rig (and blended normal maps to create facial wrinkles), which is very similar to Naughty Dog's and Crytek's tech.
I'm not sure what GTA V or Infamous does, but I do know that Halo 4 is almost exclusively blendshapes (they have a single bone for the jaw), so it's different (and they're using a somewhat different set of expressions compared to FACS).
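The blended-wrinkle-maps part can be sketched roughly like this: lerp a wrinkle normal toward the base normal by the expression intensity and renormalize (a generic illustration, not RAD's actual shader; a single normal stands in for a whole texture):

```python
import math

# Blend a tangent-space wrinkle normal over the base normal. The weight would
# come from the same AU/expression values that drive the rig, so wrinkles
# appear only when the relevant expression fires.

def blend_normal(base, wrinkle, weight):
    """base, wrinkle: [x, y, z] unit normals; weight in [0, 1]."""
    n = [(1.0 - weight) * b + weight * w for b, w in zip(base, wrinkle)]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]

flat = [0.0, 0.0, 1.0]    # unperturbed surface normal
furrow = [0.6, 0.0, 0.8]  # strong brow-furrow wrinkle normal
print(blend_normal(flat, furrow, 0.0))  # [0.0, 0.0, 1.0] - no expression
```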

I can also go into a little more depth on the pro/con of bones and blendshapes if there's interest, but I think the above is enough for one post ;)
 
Oh, and the same image also shows that RAD's artists scan talent for the faces but then modify the model considerably, probably to better fit their concept for the character. Not sure if the talent for the scans and the capture is the same, as there's no picture from their performance capture sessions.

It is actually very hard to find actors with a good combination of looks, voice and acting talent. Most people equally good at all of these also have the ability to become Hollywood movie stars and can ask for a lot of money. So game devs can either go for average-looking characters (GTA V), or cast actors with good talent and find some good-looking people for the likeness (or sculpt the heads from scratch). An interesting case is Palmer from Halo 4, where you can see in the video docs that they used the capture talent for the looks as well, but re-dubbed her voice in post.
 
^^^

From what I read and saw, David Cage seems to use more markers (90, I think) during performance capture in Beyond: Two Souls or Kara than GG in KZ or Imaginarium/Crytek in Ryse.
Does this mean they can get better results, or is the number of markers not the decisive factor?

It's probably OT, but this is a very interesting topic... not my question so much as what you're saying.
 
I am quite impressed with this pic and wonder if it is really in-game:

http://media1.gameinformer.com/imagefeed/screenshots/TheOrder1886/game_informer_screenshots_010.jpg

Not because of the graphics, but because of the physics of her jacket (or whatever it is). It looks really grounded and quite correct to me, how the cloth interacts with the ground (no clipping or anything, quite smooth). But that is also why I wonder if this is a cutscene with artists helping a bit by hand, or really a real-time gameplay screenshot?

Time for RAD to show their 'everything with soft body physics' feature in action :)
 
The foot and the lower leg are going through the coat, that much is clear. If it were canned (pre-calculated) animation streamed from disc, they would have fixed that part, so logic suggests it is indeed done in real time.
Also, for a quick example, even current-gen AC games were able to do some level of dynamic cloth on such hanging pieces, and Alan Wake used some level of actual sims on the protagonist's jacket. Considering the low number of characters in the scenes shown so far, and that it's running on a much faster CPU, it's reasonable to assume they indeed have some cloth sim solution. Not entirely watertight ;) and there may be some case-specific voodoo going on, but it's still there.
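For the curious, this kind of real-time cloth is often built on Verlet particles with distance constraints plus simple collision clamps. Here's a toy 2D strand (entirely generic, with made-up constants, not The Order's solver):

```python
GRAVITY = -9.8    # m/s^2
DT = 1.0 / 60.0   # one 60 fps frame
REST_LEN = 0.2    # distance between neighbouring particles

def step(pos, prev, pinned):
    n = len(pos)
    # 1. Verlet integration: velocity is implicit in (pos - prev).
    for i in range(n):
        if i in pinned:
            continue
        x, y = pos[i]
        px, py = prev[i]
        prev[i] = (x, y)
        pos[i] = (2 * x - px, 2 * y - py + GRAVITY * DT * DT)
    # 2. Relax distance constraints so neighbours stay ~REST_LEN apart.
    for _ in range(8):
        for i in range(n - 1):
            (ax, ay), (bx, by) = pos[i], pos[i + 1]
            dx, dy = bx - ax, by - ay
            dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
            corr = 0.5 * (dist - REST_LEN) / dist
            if i not in pinned:
                pos[i] = (ax + dx * corr, ay + dy * corr)
            if i + 1 not in pinned:
                pos[i + 1] = (bx - dx * corr, by - dy * corr)
    # 3. Collision: clamp particles to the ground plane y = 0. Skipping or
    # mistiming this step is exactly how a boot ends up "inside" a coat.
    for i in range(n):
        x, y = pos[i]
        if y < 0.0:
            pos[i] = (x, 0.0)

# A strand of 6 particles pinned at the top, simulated for 1 second.
pos = [(0.0, 1.0 - i * REST_LEN) for i in range(6)]
prev = [p for p in pos]
for _ in range(60):
    step(pos, prev, {0})
print(min(y for _, y in pos))  # never below 0.0 thanks to the clamp
```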
 

True... just one comment: in another pic, I had the impression that the woman's coat has a 'cut' down to the legs and is basically a two-piece coat... that's why it looks like the shoes are clipping 'through' the coat, as the two pieces fall around the feet... but I could be wrong?

Edit: here is a pic showing the coat a bit better:

http://www.gamersyde.com/pop_images_the_order_1886-23532-1.html
 
All such coats are split up the middle, especially if they're tight, in order to allow the wearer to walk and sit and such*. However, the way the boots sit above the part of the cloth that lies on the ground strongly suggests an undetected collision.

*We're actually consulting with a costume designer / historian on our current project, learning about patterns and fabrics and such, really really fascinating stuff ;)
 
To you and your team, very nice work MJP. Easily one of the most impressive looking next-gen titles.

Dying to see some videos though. :)
 
All the behind-the-scenes development stuff is amazing. Can't wait to see some gameplay footage. If it lives up to what I expect, this will be my first PS4 game. :devilish:
 