Uncharted 4: A Thief's End [PS4]

I think it doesn't look as good because they're trying for 1080p/60fps for the final game.

The E3 trailer was running in real time, according to a conversation I had with one of their devs (not to mention DF found aliasing, indicating it wasn't supersampled CGI or a high-fidelity in-engine offline-rendered cutscene such as those used in TLOU and the previous UCs).

The downgrade is probably symptomatic of the extremely limited scope of the E3 trailer (which let them bump up the fidelity) and its night-time setting.
 
That's probably more about your tastes and such.

Unity's characters, except for the two leads, are near 1:1 implementations of scans from real living humans. They're full of subtle touches of life in their faces, and the extremely realistic lighting makes them look very real. Arno is very likely based on an original character sculpt (as he looks very similar), and I believe Elise is a heavily modified scan of a real person. My only other suspect from the supporting cast is Napoleon, who's probably also an original sculpt or a modified scan, as in his case Ubi obviously wanted to go for a likeness of the old paintings. Still, all of the characters are very well done, the facial animation is a big step up from AC4, and the cutscenes are very movie-like, also because of the lighting.

UC4's characters, however, are probably all hand-crafted, and as amazingly talented as ND's artists are, you can still easily tell that they're not real people. The lighting is a bit simplistic as well, probably because they're still aiming for 60fps and have to take some shortcuts for efficiency. But once again, they're absolutely top quality; I don't know if there's any other game studio with such a level of artistry.
What they certainly have on ACU is that the game renders at 1080p with very nice antialiasing, and there's a lot of high-frequency skin pore detail in the textures, accentuated by the high specularity; the renderer brings out those details very sharply. I don't want to sound like I'm trying to degrade it, but it's a visual 'trick' to push the results. People are right to notice that it's reminiscent of FFTSW (Final Fantasy: The Spirits Within), where the same technique was used to 'sell' the characters. It's a very conscious artistic choice which you may or may not like, but it doesn't make the faces more real or lifelike or convincing IMHO. That's a good thing, though: the jump from previous games is already quite large and very near the limit of making Drake unrecognizable, maybe even a bit creepy. Had they gone with scanned faces, it'd feel very weird and disconnected from UC3.
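To illustrate what that kind of 'trick' amounts to in rendering terms, here's a minimal sketch of my own in Python using a generic Blinn-Phong specular term (not Naughty Dog's actual shader, and the numbers are made up): a tight, strong highlight turns tiny pore-scale normal perturbations into large relative brightness changes, which is what makes the skin detail pop.

```python
import numpy as np

def blinn_phong_specular(normal, light_dir, view_dir, spec_strength, shininess):
    """Generic Blinn-Phong specular term (illustration only, not ND's shader).

    A per-pixel normal perturbed by a high-frequency pore normal map shifts the
    half-vector alignment from pixel to pixel; a strong, tight specular lobe
    (high shininess) turns those tiny shifts into visible pore detail.
    """
    n = normal / np.linalg.norm(normal)
    h = light_dir + view_dir
    h = h / np.linalg.norm(h)
    return spec_strength * max(np.dot(n, h), 0.0) ** shininess

# Two pixels whose normals differ only by a tiny pore-scale perturbation:
base = np.array([0.0, 0.0, 1.0])
pore = np.array([0.08, 0.0, 1.0])           # slight tilt from a pore normal map
L = np.array([0.3, 0.3, 0.9]); L = L / np.linalg.norm(L)
V = np.array([0.0, 0.0, 1.0])

for shininess in (16, 128):                  # soft vs. tight highlight
    a = blinn_phong_specular(base, L, V, 1.0, shininess)
    b = blinn_phong_specular(pore, L, V, 1.0, shininess)
    print(f"shininess={shininess:4d}  flat={a:.3f}  pore={b:.3f}  ratio={b/a:.2f}x")
```

With the tight highlight, the same pore perturbation produces roughly a 3x brightness ratio instead of a barely visible one, which is the whole point of the technique.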

All in all, I wouldn't say that there's a great gap between the two games' assets; the difference has a lot more to do with art direction.
Just out of curiosity, have you guys ever been used for scanning to make your own characters?
 
I've watched the clip quite a few more times; the gameplay is just looking great.

I now reckon they are aiming for 60 fps and will happily eat humble pie if it's 30 fps on release.
 
Just out of curiosity, have you guys ever been used for scanning to make your own characters?

Yes, we are using lots of scanning now.

Advanced Warfare's main cast of all 12 speaking characters was based on head scans of the real-life actors who played them; the data acquisition was performed by Activision's team. We've also got 40-70 facial expression scans and such, but all the assets obviously needed processing. We've also had to modify the characters in some cases, which I expect to become fairly standard procedure.

However, we've also built our own photogrammetry rig and keep upgrading it constantly. On AW, yours truly was the stand-in for the dead body of Will Irons and for the arms of all the main soldier characters; although more or less modified (like biceps size ;) ), it's still fun to see my vein patterns on the hands and forearms... I've also been the test subject for the full-body scan pipeline, and man, it feels pretty weird to have several people work on your digital self, or to see yourself in shorts on the movie screen in dailies ;)
It wasn't vanity though - I wouldn't ask anyone to do anything I wouldn't do myself, including shaving... And I also wanted to know how exhausting the scanning process is; the answer is that it's surprisingly taxing, for example having to stand as still as possible for 10-20 minutes while the camera array is calibrated.

On a current, unannounced project we're now using in-house scanning very extensively; the hero character is a heavily modified version of the mocap actor / stunt guy, and the other main character is based on a body double and a different actress's facial likeness and performance, though we're adding artistic changes to that character as well. Another unannounced project is going to use a lot of facial scans for both main and background characters, but once again we'll modify the data and also mix in hand-sculpted character faces, bodies and so on.

Understand that scanning is not a turnkey solution, at least not in our workflow (it is possible to go with 4D scanning, where you shoot at 30-60fps and capture full facial movement, like LA Noire did). The assets are noisy; hair, body hair, eyebrows and eyelashes can't be properly captured; and facial expression scans can only cover a small percentage of the full range and usually need to be modified as well to fit into the rigging pipeline. You need to do a LOT of manual work, and it takes an accomplished artist to apply it at a proper quality level.
Also, there's still a lot of subtle high-frequency detail that you just cannot capture, so you'll need to manually paint the fine texture details. Then you also want the character to move, which opens up an entirely new world... :)
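As a very rough illustration of the simplest way expression scans end up in a rig (a generic linear blendshape sketch of my own, not our actual pipeline; the mesh and expression names are made up): each cleaned-up expression scan becomes a delta from the neutral head, and the rig mixes those deltas with per-expression weights.

```python
import numpy as np

def build_blendshape_deltas(neutral, expression_scans):
    """Turn cleaned-up expression scans into deltas from the neutral mesh.

    neutral:          (V, 3) array of vertex positions for the neutral scan
    expression_scans: dict name -> (V, 3) array, same topology as the neutral
    """
    return {name: scan - neutral for name, scan in expression_scans.items()}

def evaluate(neutral, deltas, weights):
    """Linear blendshape evaluation: neutral + sum(w_i * delta_i)."""
    out = neutral.copy()
    for name, w in weights.items():
        out += w * deltas[name]
    return out

# Toy example with a 4-vertex "mesh" (positions in metres)
neutral = np.zeros((4, 3))
scans = {
    "smile":      neutral + np.array([0.0, 0.002, 0.0]),    # 2 mm lift
    "brow_raise": neutral + np.array([0.0, 0.004, 0.001]),
}
deltas = build_blendshape_deltas(neutral, scans)
posed = evaluate(neutral, deltas, {"smile": 0.7, "brow_raise": 0.3})
print(posed)
```

In practice each scan needs cleanup and re-topologising onto the shared mesh before this step even makes sense, which is exactly the manual work mentioned above.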

However, the real-life subtleties that you can capture are simply invaluable. Even highly trained and experienced traditional artists are usually somewhat limited in their artistic interpretation, and they also prefer to idealize both facial features and body shapes. Another probably underappreciated element of living people is the inherent asymmetry, which is usually way beyond what you'd expect. For example, it's very common for faces to have a "fat" and a "slim" side, the eyes and ears are usually offset in height by several millimeters, the muscles on one side may be far more developed, and so on. It's also interesting how far real-life anatomy usually is from the classical idealized look; for example, someone you'd consider a pretty strongly built person might look surprisingly underdeveloped in your content creation app. This is another important lesson we've learned: no matter how hard you try, or what complex shaders you use, the interactive display in Maya or ZBrush will never really show a good enough representation of your character. Funnily enough though, when you load scan data, it still manages to look more convincing, even with a simple Phong shader.
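To put a number on that asymmetry point, here's a toy sketch (mine, with made-up landmark positions) that mirrors right-side facial landmarks across the sagittal plane and measures how far they are from their left-side counterparts; on real scan data these offsets routinely come out at several millimetres.

```python
import numpy as np

def asymmetry_mm(landmarks_left, landmarks_right):
    """Mirror the right-side landmarks across the x=0 (sagittal) plane and
    report the distance to their left-side counterparts, in millimetres.
    Assumes the head is already aligned so that x=0 splits the face."""
    mirrored = landmarks_right * np.array([-1.0, 1.0, 1.0])
    return np.linalg.norm(landmarks_left - mirrored, axis=1)

# Hypothetical landmark positions in millimetres (eye corner, ear, mouth corner)
left  = np.array([[-32.0, 45.0, 60.0], [-70.0, 10.0, -5.0], [-24.0, -20.0, 70.0]])
right = np.array([[ 33.5, 43.0, 60.5], [ 69.0, 13.0, -4.0], [ 25.0, -21.5, 69.0]])

for name, d in zip(["eye corner", "ear", "mouth corner"], asymmetry_mm(left, right)):
    print(f"{name:12s}: {d:4.1f} mm off a perfect mirror")
```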

There's also a side effect of working with scans: your artists learn just how far they can push their craft, and they benefit immensely from the real-life data.

And I haven't even mentioned the use of scanning for props and environments. Since they're static, you can shoot more images to get a more accurate solve from the photogrammetry software, and even do stuff like put a camera on a quadcopter drone... So the possibilities are endless.
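For anyone wondering what the 'solve' actually chews on: the software detects features in every photo, matches them between overlapping views, and triangulates camera poses and 3D points from those matches, so more photos of a static prop simply means more tie points. A minimal sketch of the pairwise matching stage, assuming a recent OpenCV build where SIFT is available (generic structure-from-motion front-end code, not any particular package's pipeline; the image paths are hypothetical):

```python
import cv2

def count_tie_points(path_a, path_b, ratio=0.75):
    """Detect SIFT features in two overlapping photos and count good matches.
    More overlapping photos of a static prop -> more tie points -> a more
    stable and accurate reconstruction from the solver."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test to keep only distinctive matches
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return len(good)

# e.g. count_tie_points("prop_001.jpg", "prop_002.jpg")
```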

But in the end it's still just another tool in the box, a very useful one. I don't expect it to completely replace talented artists; there's always going to be a need for a human eye and touch.
 
That was brilliant information. Thanks :)
 
Thanks for the information.
Too bad I couldn't shake your hand in game. ;)
 
Thanks! Sorry I hit the report button thinking it was "Likes". ;)
 
Well, I've watched the Uncharted 4 gameplay trailer and was very impressed. Perhaps some here are underwhelmed because the initial teaser trailer was showcasing a night-time scene where the focus was predominantly on Nate fully up close (highlighting the detail on his face and body as well as the lighting and facial expression), which then pans out to a very dense forest (though again at night, lit only by the moon, so there's little detail on how good things actually look). Fast forward to the gameplay trailer, and here we have a fully playable 15-minute extract where Nate is reduced to a very small size in the third-person perspective, so obviously all those details that made us go 'wow' in the teaser become insignificant. Instead we have gameplay in broad daylight with very dense foliage and what seems to be rather sophisticated AI. What also impressed me is the scale of everything. That, together with the foliage - IMO, how could that not be impressive?

This is a huge step up in my book - even compared to the TLoU remaster, which, however you put it, is a last-generation PS3 game on steroids. And it shows: while there might be great textures everywhere at higher resolution (compared to the PS3 version), the geometry is still relatively simple. In U4, the dense foliage and the scale of everything are on another level. Comparing it directly to the teaser trailer - impressing with a night scene is always a lot easier than with a broad-daylight gameplay segment.

As for the 30/60fps - I'm not quite sure where people here are getting the idea that Naughty Dog are indeed targeting 60fps? Yes, the initial teaser was running at that, and TLoU gives people like me lots of hope that 60fps might be an option - but in regards to U4, I don't think I've seen any info on them promising any such thing. In fact, I'm pretty much expecting them to go 30fps in the end, but with a very solid framerate and perhaps more polish if what they have now is closer to 60 (but not quite) than 30. I won't be disappointed either way, even if the game ends up being exactly what was shown in this gameplay footage. Sure, I'd prefer 60 (at the expense of visuals), but I'm happy either way.
 
The gameplay they've shown would be really impressive running at 60fps. I'll be very disappointed if they don't target 60fps (I won't mind a few drops), as hinted in previous communication.
 
As for the 30/60fps - I'm not quite sure where people here are getting the idea that Naughty Dog are indeed targeting 60fps? Yes, the initial teaser was running at that, and TLoU gives people like me lots of hope that 60fps might be an option - but in regards to U4, I don't think I've seen any info on them promising any such thing. In fact, I'm pretty much expecting them to go 30fps in the end, but with a very solid framerate and perhaps more polish if what they have now is closer to 60 (but not quite) than 30. I won't be disappointed either way, even if the game ends up being exactly what was shown in this gameplay footage. Sure, I'd prefer 60 (at the expense of visuals), but I'm happy either way.
I believe we got that idea because that's what ND said.

Regardless, here is an "interesting" analysis of the gameplay video (bah, not so interesting after watching it).
 
As for the 30/60fps - I'm not quite sure where people here are getting the idea that Naughty Dog are indeed targeting 60fps? Yes, the initial teaser was running at that, and TLoU gives people like me lots of hope that 60fps might be an option - but in regards to U4, I don't think I've seen any info on them promising any such thing.

They did in fact promise that:

"We’re targeting 60fps for Uncharted 4: A Thief’s End and as you can see the visual fidelity for our character models will reach new heights.
...
You should pre-order now."
http://www.naughtydog.com/site/post/uncharted_4_a_thiefs_end_2014_e3_trailer/

But I don't fault them for having a 30fps presentation a year before the game is out. It's a wonder we got that so soon [they always targeted E3 for the first playable demo with a big setpiece moment].
 
Maybe they're doing the 30/60fps thing like The Last Of Us on PS4, and they're showing the 30fps option because it looks better for trailers?
 
Maybe they're doing the 30/60fps thing like The Last Of Us on PS4, and they're showing the 30fps option because it looks better for trailers?
Perhaps for the final release, but certainly not for marketing - people love frame rate as much as they love resolution. It's early in development, but I don't think it'll be easy getting it back up to 60. The game is already competitive graphically; asking them to double the frame rate essentially means extracting twice the performance of all their competitors. Taking The Order: 1886 as a benchmark for next-gen techniques, and given how heavily it likely utilizes the hardware, I have my doubts. Even if it were possible, I very much doubt it would be trivial, and I don't expect that level of performance until a few years from now.
 
Perhaps for the final release, but certainly not for marketing - people love frame rate as much as they love resolution. It's early in development, but I don't think it'll be easy getting it back up to 60. The game is already competitive graphically; asking them to double the frame rate essentially means extracting twice the performance of all their competitors. Taking The Order: 1886 as a benchmark for next-gen techniques, and given how heavily it likely utilizes the hardware, I have my doubts. Even if it were possible, I very much doubt it would be trivial, and I don't expect that level of performance until a few years from now.

Depends how it's done. If it's like TLOU, the game is capped at 30fps with some visual enhancements, but if it were running uncapped it would probably run much faster. I'm just throwing the idea out there; someone may have already mentioned it. Most game trailers aren't shared at 60fps, even if the game runs that way.
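To put rough numbers on the 30 vs 60 fps question (plain arithmetic, not anything the developers have said): the whole frame has to fit in about 33.3 ms at 30 fps but only 16.7 ms at 60 fps, so a frame that currently takes, say, 30 ms needs nearly a 2x speedup rather than a bit of trimming.

```python
def frame_budget_ms(fps):
    """Time available per frame at a given frame rate."""
    return 1000.0 / fps

def required_speedup(current_frame_ms, target_fps):
    """How much faster the frame must get to hit the target rate."""
    return current_frame_ms / frame_budget_ms(target_fps)

print(f"30 fps budget: {frame_budget_ms(30):.1f} ms")   # ~33.3 ms
print(f"60 fps budget: {frame_budget_ms(60):.1f} ms")   # ~16.7 ms
# A hypothetical frame that currently takes 30 ms:
print(f"speedup needed for 60 fps: {required_speedup(30.0, 60):.2f}x")  # ~1.8x
```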
 