UC4: Best looking gameplay? *SPOILS*

Status
Not open for further replies.
And we all know that what they do with SSS for hair or skin is a hack. Skin shading is not PBS-based, for example; it's too expensive, and it looks good mostly because of the artists' skill...
 
What I'm saying is: how do you know they are dynamic SSR and not a pre-baked reflection pass?
How do you prebake reflections based on different viewing angles and positions?! A simple cubemap will be low res and break in different positions, needing many cubemaps.

Why would you ever want to re-render the entire scene every frame if the geometry never moves and the lighting is static?
Because reflections are dependent on viewing position. Walk closer to the wall and the reflection of the wall will occupy a larger part of the screen, needing higher quality, and a different viewing angle.
 
I, personally, would consider the term SSR to be defined as reflections dynamically computed in screen space every frame (or even every few frames), since that's what most game devs mean when they talk about implementing them.

In short, you speak as if it's just as costly as some games' dynamic SSR implementation for moving objects (DOOM or QB, for example).
Most if not all games that use SSR use it for every opaque object, not just moving objects.
It's actually easier to do it that way.

Usually it goes like this.
SSR traces the screen; if the ray doesn't hit anything, a perspective-corrected local cubemap is used as backup, and if there is nothing at that point either, a 'far' cubemap acts as the final fallback.
Thus there are some obvious problem cases, like a character blocking the background, which give quite clear visual artifacts etc. (There has been nice research on multi-layer techniques which make this less visible.)
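That fallback chain can be sketched in miniature. This is a toy 1-D illustration, not any shipping engine's code; every name and number here is made up for the example, and real implementations march a ray per pixel on the GPU against the depth buffer:

```python
# Toy 1-D sketch of the SSR -> local cubemap -> far cubemap fallback chain.

def trace_screen_space(depth_buffer, start_x, ray_depth, step=1):
    """March across the 'screen'; a hit is any column whose stored scene
    depth is nearer than (or equal to) the ray's depth at that column."""
    x, d = start_x, ray_depth
    while 0 <= x < len(depth_buffer):
        if depth_buffer[x] <= d:
            return x              # hit: reuse the already-shaded color here
        x += step
        d += 0.5                  # toy slope: ray recedes as it travels
    return None                   # ray left the screen without a hit

def reflection_color(color_buffer, depth_buffer, start_x, ray_depth,
                     local_probe_color=None, far_cubemap_color="sky"):
    hit = trace_screen_space(depth_buffer, start_x, ray_depth)
    if hit is not None:
        return color_buffer[hit]      # 1) screen-space hit
    if local_probe_color is not None:
        return local_probe_color      # 2) perspective-corrected local cubemap
    return far_cubemap_color          # 3) 'far' cubemap as last resort

depth = [9, 9, 3, 9]                  # column 2 holds near geometry
color = ["sky", "sky", "wall", "sky"]
print(reflection_color(color, depth, 0, 2.0))                               # wall
print(reflection_color(color, [9] * 4, 0, 2.0, local_probe_color="probe"))  # probe
print(reflection_color(color, [9] * 4, 0, 2.0))                             # sky
```

The three prints walk the priority order: a screen-space hit wins, a covering local probe is next, and the far cubemap catches everything else.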
How do you prebake reflections based on different viewing angles and positions?! A simple cubemap will be low res and break in different positions, needing many cubemaps.
An easy way is to do a lightfield / array of cubemaps for the surface... ;)
I actually did such a test with 3ds Max a long time ago; let's say it used a lot of memory and had the same problems as those little children's lenticular toys/postcards with 3D images.
I've been wondering about combining a 3D array of cubemaps for lighting information with distance fields to trace against... (Not really sure if it could be properly mixed with PBR/convolution; it might be a nice backup for the SSR pass.)

There has been proper research into precomputed radiance transfer for years, and some approaches have achieved really good results, even for specular.
Many have limitations which make them less than ideal for games.
 
Your opinion means nothing, imo ;)... You've shipped no games.

I don't have to -- the underlying 3D principles are the same.

I prefer to hear what my friend, who has worked in the game industry as an artist for 15 years, thinks. In his and his coworkers' opinion, Naughty Dog and Guerrilla Games are the most technically accomplished AAA first-party studios and among the best overall, better than his own French team working on a PS4 exclusive. And the best PBS rendering pipeline in his opinion? It's not any Sony first-party studio, or Epic, or Crytek; it's DICE...

OK.

I respect Laa-Yosh, for example: not a fanboy at all, unlike you, and having visited many game studios, he probably knows much more than you about realtime rendering. If Sebbbi, Graham, phil_, MJP, Andrew Lauritzen, or other devs say something is bad, I listen; they have worked on realtime rendering for years...

It boggles my mind how you separate realtime rendering from conventional 3D concepts. I've actually started writing my own realtime renderer in OpenGL to see how feasible it would be for our studio to view models from Maya with a programmable vertex/pixel shader pipeline. So far things are quite easy. The concepts are pretty much the same, except I'm dealing with significantly limited RAM and a somewhat different pipeline when shading triangles. I'm already testing throwing a significant number of 4K, 8-bit texture maps at the GPU to learn its limitations. In short, realtime rendering dev and offline rendering dev share several similarities. We've interviewed several software engineers with realtime experience and with offline rendering experience. We don't create a disparity between them. If they are knowledgeable enough, they can do both jobs.

I understand why you were banned from GAF

If any of the mods were like you 3, I understand too.
 
"like you 3"

Get off your high horse and maybe people will take you seriously.
 
How do you prebake reflections based on different viewing angles and positions?!

Isn't that exactly what a cubemap reflection does? You can pre-render the scene and project it onto all sides of a cube, then do a lookup based on the reflection vector from the camera to index a texel. It's totally dependent on view angle and position.
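That lookup is a few lines of vector math. This is an illustrative sketch only (the helper names are mine, not any engine's); on a GPU the face selection and texel fetch are done in hardware by the cubemap sampler:

```python
# Compute the reflection vector, then pick the cube face by dominant axis.

def reflect(incident, normal):
    """R = I - 2(N.I)N, with N assumed to be unit length."""
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2 * d * n for i, n in zip(incident, normal))

def cubemap_face(direction):
    """Which of the six faces (+X, -X, +Y, ...) the vector points at."""
    axis = max(range(3), key=lambda k: abs(direction[k]))
    return ("+" if direction[axis] >= 0 else "-") + "XYZ"[axis]

# Camera ray hits a floor whose normal faces +Y:
view = (0.0, -0.7071, -0.7071)         # incident ray, heading down and away
r = reflect(view, (0.0, 1.0, 0.0))
print(r)                               # bounces up: (0.0, 0.7071, -0.7071)
print(cubemap_face((0.2, 0.9, -0.1)))  # +Y (the 'ceiling' face)
print(cubemap_face((-0.9, 0.1, 0.0)))  # -X
```

Note the lookup depends only on the reflection *direction*, not on where the shaded point is, which is exactly why a single static cubemap breaks as the camera moves.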

A simple cubemap will be low res and break in different positions, needing many cubemaps.

In every SSR that I've seen, the reflection is always low-res, indicating that even if they re-rendered the scene, it was 1) a low-res pass (not the same as the actual render) and 2) parts of the scene were omitted to save on cost.
 
Yet another claim by yours truly. Now are you claiming that only me, ultragpu, and chris1515 are annoyed by your shenanigans on this forum? This thread is here because of you.

This thread continues to be bumped because of you though. So how are we different?
 
Most if not all games that use SSR use it for every opaque object, not just moving objects.
It's actually easier to do it that way.


Usually it goes like this.
SSR traces the screen; if the ray doesn't hit anything, a perspective-corrected local cubemap is used as backup, and if there is nothing at that point either, a 'far' cubemap acts as the final fallback.
Thus there are some obvious problem cases, like a character blocking the background, which give quite clear visual artifacts etc. (There has been nice research on multi-layer techniques which make this less visible.)

An easy way is to do a lightfield / array of cubemaps for the surface... ;)
I actually did such a test with 3ds Max a long time ago; let's say it used a lot of memory and had the same problems as those little children's lenticular toys/postcards with 3D images.
I've been wondering about combining a 3D array of cubemaps for lighting information with distance fields to trace against... (Not really sure if it could be properly mixed with PBR/convolution; it might be a nice backup for the SSR pass.)

There has been proper research into precomputed radiance transfer for years, and some approaches have achieved really good results, even for specular.
Many have limitations which make them less than ideal for games.

So basically SSR is defined by the gaming industry as casting a ray in screen space at fragment time. It doesn't matter whether it's using a static cubemap (as backup) or a dynamic scene that hits static/moving objects. Understood.
 
This thread continues to be bumped because of you though. So how are we different?

The origin of this thread doesn't have anything to do with the content. It was opened because you couldn't stand people talking about this topic in the Uncharted 4 thread and you always had to drag it off topic. Thus, two threads were created to discuss two different topics. I didn't choose the title; it was chosen by an (annoyed at that moment) Shifty, who tried to at least keep one Uncharted topic about the game. This topic is being bumped because there's more to show about this very, and I mean very, detailed game. I've seen anims nobody even mentioned, and while playing it again I always see new things. As long as there's more to post about Uncharted 4 and how beautiful it looks, this thread will be relevant. And expect it to be bumped with Neo and the eventual patch as well. I know this is not to your liking, but guess what, nobody cares!
 
Isn't that exactly what a cubemap reflection does? You can pre-render the scene and project it onto all sides of a cube, then do a lookup based on the reflection vector from the camera to index a texel. It's totally dependent on view angle and position.
Yes, in its simple form a cubemap is just a capture of the scenery from a point.
In every SSR that I've seen, the reflection is always low-res, indicating that even if they re-rendered the scene, it was 1) a low-res pass (not the same as the actual render) and 2) parts of the scene were omitted to save on cost.
SSR resolution is reduced to save performance, and some reflections are blurred to mimic rough surfaces and their blurry reflections.
SSR is limited to what is on screen and in the Z-buffer; if something is not visible on screen or not written to the Z-buffer, it's not visible in the reflection. (Transparencies are not reflected because they are not written to the Z-buffer.)

Here is one of the better implementations of SSR.
http://www.frostbite.com/2015/08/stochastic-screen-space-reflections/
So basically SSR is defined by the gaming industry as casting a ray in screen space at fragment time. It doesn't matter whether it's using a static cubemap (as backup) or a dynamic scene that hits static/moving objects. Understood.
Almost.

Usually both the local perspective-corrected cubemap and the far cubemap used for backup are pre-rendered and thus miss any moving objects.
That's one of the reasons why SSR is so important: it gives some specular occlusion for moving objects.
It's also great for giving specular occlusion to the cubemap, since, as you said, cubemaps cannot hold a decent amount of information about the area. (They're quite sparsely placed and only one layer.)
 
The origin of this thread doesn't have anything to do with the content. It was opened because you couldn't stand people talking about this topic in the Uncharted thread and you always had to drag it off topic. Thus, two threads were created to discuss two different topics. I didn't choose the title; it was chosen by an (annoyed at that moment) Shifty, who tried to at least keep one Uncharted topic about the game. This topic is being bumped because there's more to show about this very, and I mean very, detailed game. I've seen anims nobody even mentioned, and while playing it again I always see new things. As long as there's more to post about Uncharted 4 and how beautiful it looks, this thread will be relevant. And expect it to be bumped with Neo and the eventual patch as well. I know this is not to your liking, but guess what, nobody cares!

Screenshot away bro!
 
Yes, in its simple form a cubemap is just a capture of the scenery from a point.

SSR resolution is reduced to save performance, and some reflections are blurred to mimic rough surfaces and their blurry reflections.
SSR is limited to what is on screen and in the Z-buffer; if something is not visible on screen or not written to the Z-buffer, it's not visible in the reflection. (Transparencies are not reflected because they are not written to the Z-buffer.)

Here is one of the better implementations of SSR.
http://www.frostbite.com/2015/08/stochastic-screen-space-reflections/

Thanks for this Jlippo!

I was thinking more about whether you actually "knew" your shader was going to do a ray-casting lookup and intentionally removed objects before the Z-pass so that they aren't recorded in the buffer (even if the camera can see them), making the rays intentionally miss. I've seen some games where I'm looking right at an SSR in a pool or on a surface and it's not representing ALL the objects in that scene even though they are clearly in my view. For example, in UC4, Nate is not reflected at all in the reflection-based materials even though SSR is being used.
 
I was thinking more about whether you actually "knew" your shader was going to do a ray-casting lookup and intentionally removed objects before the Z-pass so that they aren't recorded in the buffer (even if the camera can see them), making the rays intentionally miss. I've seen some games where I'm looking right at an SSR in a pool or on a surface and it's not representing ALL the objects in that scene even though they are clearly in my view. For example, in UC4, Nate is not reflected at all in the reflection-based materials even though SSR is being used.

Using SSR for everything in a scene can lead to some very obvious artifacts. Most games use a combination of SSR + cubemaps as a fallback; Uncharted 4 and Doom (afaik) do both at the same time, with SSR reflections on some objects and cubemap reflections for scenery (in most cases, with some exceptions):

Example from Doom:

1) Reflection of the health pack and light source mixed with a cubemap reflection that stays no matter the position of the camera: https://abload.de/img/379720_20160703192945nku4c.png
2) SSR from the light source disappears when it's not on screen: https://abload.de/img/379720_20160703192950y9u77.png
3) SSR from the health pack does the same: https://abload.de/img/379720_20160703192954iuujt.png
4) Object not reflected when moved: https://abload.de/img/379720_2016070319301246u9s.png

Edit: Both U4 and Doom share a similar dither pattern in their SSR implementations (some gore, so NSFW I guess): https://abload.de/img/379720_20160703194307uzu7c.png

Another example of ssr + cubemaps (gore here too):
https://abload.de/img/379720_20160703194621hvukj.png
https://abload.de/img/379720_20160703194651xlutl.png
 
Isn't that exactly what a cubemap reflection does? You can pre-render the scene and project it onto all sides of a cube, then do a lookup based on the reflection vector from the camera to index a texel. It's totally dependent on view angle and position.
Right, it's location-dependent, which breaks when you deviate widely from that point. E.g. a shiny car with a scenery cubemap has a reflected mountain in the distance. That's fine as long as you don't drive close to the mountain, at which point the cubemap fails because the reflections are from a different POV. A cubemap for the interiors in that scene linked above would break constantly with different viewpoints. Ergo it's far smarter and easier to use SSR.

You should be able to test this readily. Grab an interior scene from one of your offline projects, render a cubemap, and project it onto a reflective floor with the camera at the capture viewpoint - it'll look fine. Then move the camera and the reflections will go to pieces. Or, if projected from the same point, they'll just be horrifically low-res.
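For what it's worth, the "perspective corrected" local cubemaps mentioned earlier address exactly this breakage via box projection: intersect the reflection ray with the probe's bounding box and re-derive the sample direction from the probe center. A toy 2-D sketch (the room, names, and numbers are all made up for illustration):

```python
# Parallax-corrected (box-projected) cubemap direction, 2-D toy version.
# A plain cubemap lookup uses only the reflection direction, so it returns
# the same texel no matter where the shaded point is. Box projection fixes
# the reflection so it shifts correctly as the viewpoint moves.

def box_project(pos, refl_dir, box_min, box_max, probe_center):
    """Find where the reflection ray exits the probe's axis-aligned box,
    then sample the cubemap with the direction from the probe center."""
    ts = []
    for k in range(len(pos)):
        if refl_dir[k] != 0:
            wall = box_max[k] if refl_dir[k] > 0 else box_min[k]
            ts.append((wall - pos[k]) / refl_dir[k])
    t = min(ts)                                    # first wall the ray reaches
    hit = tuple(p + t * d for p, d in zip(pos, refl_dir))
    return tuple(h - c for h, c in zip(hit, probe_center))

# A 10x10 room with the probe captured at its center:
box_min, box_max, center = (0.0, 0.0), (10.0, 10.0), (5.0, 5.0)
up = (0.0, 1.0)  # same reflection direction from two floor points
print(box_project((2.0, 0.0), up, box_min, box_max, center))  # (-3.0, 5.0)
print(box_project((8.0, 0.0), up, box_min, box_max, center))  # (3.0, 5.0)
# A naive lookup would sample (0, 1) for both points, so the reflection
# would never shift as you move along the floor.
```

Two points with identical reflection directions now fetch different parts of the probe, which is the whole point of the correction; it still can't conjure up detail the single capture never recorded, hence the low-res look.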
 
Yea, we'll see comments like Sony games rivaling/beating modern animated movies for sure this time. ;)
I think it's about time you stop responding to people and claiming they are saying anything like this, because they are not. Not once did anyone state this.

Every time I come into this thread I just want to look at U4 screens and see descriptions of what is going on in the scene visually.

But instead, what do I get? I see you being a drama queen... newsflash: nobody cares. Newsflash 2: people are not blind to your attitude in this thread. If there is a pocket of members here that wants to just keep implying Naughty Dog is clueless and "fakes" all their graphics, it really doesn't matter. We're talking about real-time graphics in videogames; we're not talking about CGI movies in this thread.

If you can't be productive and contribute in the thread without being condescending, then take a break.
 
I think it's about time you stop responding to people and claiming they are saying anything like this, because they are not. Not once did anyone state this.

Every time I come into this thread I just want to look at U4 screens and see descriptions of what is going on in the scene visually.

But instead, what do I get? I see you being a drama queen... newsflash: nobody cares. Newsflash 2: people are not blind to your attitude in this thread. If there is a pocket of members here that wants to just keep implying Naughty Dog is clueless and "fakes" all their graphics, it really doesn't matter. We're talking about real-time graphics in videogames; we're not talking about CGI movies in this thread.

If you can't be productive and contribute in the thread without being condescending, then take a break.

Pointing out inaccurate "wording" in descriptions of graphics features in screenshots isn't being condescending (even Laa-Yosh liked my comment). I basically don't care anymore what people think about me on these boards (or others). For the record, I did take a break for over a month, and no matter how long I'm gone, when I come back I still see "hype-talk" about exclusive PS4 games and innuendos about how magical the games are. That too is tiring. UC4 is a beautiful game (I've said that many, many times), to be sure, and a great one at that, but we'll see this kind of talk for Horizon and God of War, I'm sure -- simply because they are PS4 exclusives and satisfy the "great formula".

Finally, I'm delving into real-time as well (most big CGI companies have already started making RT engines for their layout/rigging/animation departments), and while I'm learning new methods for working with graphics hardware through an API, computing a reflection vector, making a projection matrix, filtering a texture, setting up vertices and normals, computing bump-mapped derivatives, etc. is all the same regardless. So yes, my comments do apply.
 