Quantum Break: Graphics Tech

The problem isn't necessarily the screen-space techniques themselves (though I do have a problem with them), but that Quantum Break uses so much screen-space information that it looks completely out of place most of the time; one example being the screen-space specular I posted in a video previously.


That just looks wrong.

So does most games' lack of ambient occlusion for dynamic objects in the scene like buckets, cans, etc. Give me the artifacted SSR/AO any day compared to NO reflections/AO at all.
 

But QB is lacking AO in some cases; LSAO isn't perfect.
 
Distant ambient occlusion is completely lacking, judging from the screens posted above
quantumbreak5_2_20161z4jz5.png


Another angle, same place
quantumbreak5_2_20161kekf0.png


Another case for small geometry
quantumbreak5_2_201619pk7x.png
 
Here the resolution of the screen-space effects works against the game's favor
quantumbreak5_2_20161vwja3.png


And that's the PC version maxed out (and native, with upscaling off; only volumetric lighting is on medium)...
 
The screens above are in direct sunlight. The objects are too small to be shadow casters. Where the AO stands out is when objects are *IN* shadow already (as in the above pic) and are close to a surface. I haven't seen them missing.
 

In the third picture you can see an object that is partially in shadow and doesn't have LSAO applied to it.
 
Are we nit-picking now? :p

Technically, everything is nitpicking in this thread. It's just a consequence of hyperbolic statements (which you are quite fond of). Still, the case remains: if LSAO were "perfect", the third picture would look different, and it doesn't. But it's still quite good, if a bit too heavy-handed on some objects. The only clear issue I have with it is, much like with the game overall, that the LOD (even in the PC version) is very aggressive. Although, you can make an argument that the game doesn't need better LOD since it spends most of its time indoors.
 
Of course, but not on every object that has specular.
It's extremely uncommon for games with SSR to selectively mask color sampling from objects. Actually, I've never seen that in any of the games I've played that use SSR.

I used to think that the InFamous stuff on PS4 might, which would make some sense given the dullness of a lot of stuff to be reflected, but it turns out it is sampling color; it's just hard to spot.
For example, this is a wet concrete surface, and like many other wet concrete surfaces the SSR looks like it might just be grey from most angles. But if you line it up so that the wooden box is about as vibrant as it'll be, the color sampling is obvious.

gLxPKN1.jpg


Everything is mostly done in screenspace.
Lots of effects are done in screen-space, many of which are natural for it and can be done with minimal screen-occlusion-related artifacts, such as chromatic aberration and tone mapping. Most lighting and shadowing in most games isn't screen-space, though; if anything, I'd say that AO is about the only aspect of lighting that is solidly most frequently done in screen-space, and even then there are devs that think this has too many problems.
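
For what it's worth, here's a tiny sketch (my own illustrative C++, not any shipping engine's code) of why something like tone mapping is "naturally" screen-space: it only ever needs the pixel it is shading, so there is nothing off-screen or occluded for it to miss, unlike SSR or SSAO.

[CODE]
#include <vector>

// Simple Reinhard-style tone map: a purely per-pixel operation over the
// HDR buffer. No depth, no normals, no neighbours, no occlusion issues.
void tonemap_reinhard(std::vector<float>& hdr)
{
    for (float& v : hdr)
        v = v / (1.0f + v);   // classic Reinhard operator, pixel by pixel
}
[/CODE]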

Even QB's lighting has a huge amount of world-space aspects.

I still don't see why downplaying it strictly for QB isn't being biased.
Of course I'm biased. You're biased, too. Our biases color our preferences for the effects of various techniques.

I don't think I'm singling out QB here though, this just happens to be a QB thread and QB happens to use screen-space lighting data for a lot of stuff. I've whined about plenty of uses of SSR.

Yeap. I still consider not having any reflection as "more" wrong.
To me, it's kind of like worrying that your decals don't have quite the right albedo when they're all constantly experiencing horrible z-fighting. Or trying to improve the accuracy of your rigid body collision results when said rigid objects are randomly teleporting all over the map. Getting a sampling of the subtleties right when there's a huge fundamental issue.
 
I've tried explaining how screen-space shaders work, but some people still seem to use these completely unrelated distinctions like "dynamic objects", "distant geometry", "every surface"... The shader doesn't care, and doesn't even know if an object is dynamic, a character, scenery, lit, shadowed, whatever. It just runs the same code for every pixel on-screen.
There are some extra complexities, sure, but the way some of you guys word certain sentences gives me the impression you are not talking about those complexities, but rather that you don't understand the very fundamental way such effects work. There's no point discussing tech without informing oneself first. There is plenty of literature available online.
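
To make that concrete, here's a rough sketch (plain CPU-side C++ of my own, standing in for a real shader) of an SSAO-style pass: it's just a loop over every pixel of the depth buffer, and nowhere in it does the notion of an "object" even appear.

[CODE]
#include <vector>
#include <algorithm>

// Hypothetical buffers: linear depth in, ambient-occlusion factor out.
// The pass only ever sees the depths of neighbouring pixels; it has no idea
// whether those pixels belong to a bucket, a character, or distant scenery.
void ssao_pass(const std::vector<float>& depth, std::vector<float>& ao,
               int width, int height, int radiusPx = 4)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const float centerZ = depth[y * width + x];
            float occlusion = 0.0f;
            int samples = 0;
            // Compare against a small neighbourhood of screen pixels.
            for (int dy = -radiusPx; dy <= radiusPx; ++dy) {
                for (int dx = -radiusPx; dx <= radiusPx; ++dx) {
                    int sx = std::clamp(x + dx, 0, width - 1);
                    int sy = std::clamp(y + dy, 0, height - 1);
                    float sampleZ = depth[sy * width + sx];
                    // A neighbour noticeably closer to the camera occludes us a bit.
                    if (centerZ - sampleZ > 0.01f)
                        occlusion += 1.0f;
                    ++samples;
                }
            }
            ao[y * width + x] = 1.0f - occlusion / float(samples);
        }
    }
}
[/CODE]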
 
I think that VFX_Veteran is arguing that some games run different screen-space shaders for different masked screen regions corresponding to different reflective objects, which *would* be doable (but is unusual or nonexistent for what he's claiming, as far as I know).

Screen-space shaders definitely can be made to care about the things that you're saying they can't. They are, in certain cases; for instance, in some TAA implementations, masking some dynamic objects out of the TAA blending is a viable hack to minimize ghosting.
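
As a hedged sketch of that specific trick (the names and buffers are made up for illustration, not taken from any particular engine): a per-pixel TAA resolve where pixels flagged as belonging to dynamic objects simply skip the history blend, trading a bit of aliasing for the absence of ghost trails.

[CODE]
#include <vector>
#include <cstdint>
#include <cstddef>

void taa_resolve(const std::vector<float>& current,       // this frame's color
                 const std::vector<float>& history,       // accumulated history
                 const std::vector<uint8_t>& dynamicMask, // 1 = dynamic object here
                 std::vector<float>& output)
{
    const float historyWeight = 0.9f; // typical heavy history blend
    for (std::size_t i = 0; i < current.size(); ++i) {
        // Masked (dynamic) pixels fall back to the current frame only,
        // which avoids ghosting at the cost of more visible aliasing/noise.
        float w = dynamicMask[i] ? 0.0f : historyWeight;
        output[i] = w * history[i] + (1.0f - w) * current[i];
    }
}
[/CODE]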
 
But you aren't giving me an example of a game that doesn't use screen-space techniques. How is Remedy getting points taken away for something that everyone in the industry uses? The only world-space AO is in RoTR on PC, and that drains the graphics card. I've never seen a world-space reflection technique in a game either. Volume ray-marching for volumetric lights? Only in Lords of the Fallen. Volumetric smoke with ray-marched world-space computations? Batman: AK with the NvidiaFX patch, again on PC. Name a console game that uses world space for AO, reflections, volumetric lighting, etc.?
I do not have it, but what about The Division?
It seems in some ways, with all options on, it looks beautiful; not always, but in some aspects of the open-world environment rather than, say, the characters.
Cheers
 
I think that VFX_Veteran is arguing that some games run different screen-space shaders for different masked screen regions corresponding to different reflective objects, which *would* be doable (but is unusual or nonexistent for what he's claiming, as far as I know).

Screen-space shaders definitely can be made to care about the things that you're saying they can't. They are, in certain cases; for instance, in some TAA implementations, masking some dynamic objects out of the TAA blending is a viable hack to minimize ghosting.
It can be done, but it is costlier; the easiest and fastest AO is the global one, and to some extent SSR. I see people talk about every dynamic object casting AO as if that were advanced tech.

Of course, as I said, there are some gotchas here and there. For example, some deferred lighting engines allow artists to flag objects to not receive AO, but that's very specific, and it sure isn't a performance-saving feature. The SSAO still runs for every pixel on screen*, considering every object in it; it's just that flagged objects choose not to apply that AO to themselves in the second rendering pass.
EDIT: Other ways of masking objects out of either receiving or casting AO could be devised as well, but they would make SSAO slower, not faster.
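
Roughly what that flag amounts to (illustrative naming of my own, not any specific engine's g-buffer layout): the SSAO buffer has already been computed for every pixel, and the flag only decides whether the lighting pass applies it.

[CODE]
#include <vector>
#include <cstdint>
#include <cstddef>

struct GBufferTexel {
    float   albedo;      // simplified to one channel for brevity
    uint8_t receivesAO;  // artist-authored "receive AO" flag
};

void apply_ao(const std::vector<GBufferTexel>& gbuffer,
              const std::vector<float>& ssao,   // already computed for ALL pixels
              std::vector<float>& litColor)
{
    for (std::size_t i = 0; i < gbuffer.size(); ++i) {
        // The SSAO cost has already been paid; opting out only changes this multiply.
        float ao = gbuffer[i].receivesAO ? ssao[i] : 1.0f;
        litColor[i] = gbuffer[i].albedo * ao;
    }
}
[/CODE]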

SSR is a bit trickier, because there are single-surface-orientation implementations out there, which simplify the ray calculation and save you from reading the normal buffer.
But for the robust implementations, which support multiple surfaces with varying orientations and positions simultaneously, there is literally nothing impressive about "applying it liberally to many objects on-screen". Just like SSAO, those types of SSR algos run for every pixel on screen, regardless of whether they'll use it or not. For a consistent framerate, the algo has to be optimised for the worst case, and any free-camera game can have the screen end up completely covered by a reflective material, even if the artists only put a single small puddle in their level. At this point, masking things in or out just adds extra complexity and makes the shader run slower. With a shader like that in place, the only thing keeping artists from making every single surface a damn mirror is their artistic vision. Framerate is unaffected. This applies to QB, and it also applies to the PS4 launch titles KKSF and Infamous SS.
What QB does do better than most in the SSR department, though, is how it handles reflections on rough surfaces. KKSF calculates reflections for every surface as if it were mirror-like, and blurs them after the fact, again in screen space, with varying blur radii depending on how rough the surfaces are. There are several reasons why this doesn't look very good or realistic, and it's far from PBR-esque. QB seems to do some sort of stochastic distribution of rays across rough surfaces, which gets them those nice, sharp contact reflections that grow blurrier with distance from the reflected object. They also seem not to do even the slightest blurring on top of that, which keeps the reflections sharp at the cost of making them very noisy. You can even spot the jitter pattern on large, flat rough surfaces.
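
Since the marching itself keeps coming up, here's a very loose sketch of the general idea (CPU-side C++ of my own, with made-up parameters; not Remedy's actual implementation): march a reflected ray in view space, reproject each step to the screen and test it against the depth buffer, and for rough surfaces jitter the ray per pixel as a crude stand-in for stochastic sampling, which is what produces sharp contact reflections that get noisier further from the reflector.

[CODE]
#include <cstdint>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Cheap per-pixel hash, used only to jitter the ray on rough surfaces.
static float hash01(uint32_t x, uint32_t y) {
    uint32_t h = x * 374761393u + y * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return float(h & 0xFFFFFFu) / float(0x1000000u);
}

// Marches 'reflDir' from 'origin' (both in view space, +z into the screen)
// against a linear depth buffer. Returns true and the hit pixel on success.
bool ssr_march(Vec3 origin, Vec3 reflDir, float roughness,
               uint32_t px, uint32_t py,
               const float* depth, int width, int height,
               float focalLen, int maxSteps, float stepLen,
               int* hitX, int* hitY)
{
    // Perturb the ray for rough surfaces: one stochastic ray per pixel.
    float j = roughness * (hash01(px, py) - 0.5f);
    Vec3 dir = {reflDir.x + j, reflDir.y + j, reflDir.z};

    Vec3 p = origin;
    for (int i = 0; i < maxSteps; ++i) {
        p = add(p, scale(dir, stepLen));
        if (p.z <= 0.0f) return false;                    // went behind the camera
        // Reproject the marched point back to pixel coordinates.
        int sx = int(p.x / p.z * focalLen + width  * 0.5f);
        int sy = int(p.y / p.z * focalLen + height * 0.5f);
        if (sx < 0 || sy < 0 || sx >= width || sy >= height)
            return false;                                 // left the screen: no data
        // Hit if the marched point ends up behind the surface in the depth buffer.
        if (p.z > depth[sy * width + sx]) { *hitX = sx; *hitY = sy; return true; }
    }
    return false;
}
[/CODE]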

*Usually at quarter res or lower. QB does it at 720p, which makes their SSAO and SSR higher res than even those of 1080p games, and also has them match the native g-buffer resolution, which is unusual, but also made easier by the fact that their native resolution is simply that low...

EDIT: Mirror's Edge Catalyst will ship with EA Frostbite's new SSR implementation, which is one of the most robust and highest-quality ones right now. I recommend you guys check out their SIGGRAPH presentation.
 
Technically, everything is nitpicking in this thread. It's just a consequence of hyperbolic statements (which you are quite fond of). Still, the case remains: if LSAO were "perfect", the third picture would look different, and it doesn't. But it's still quite good, if a bit too heavy-handed on some objects. The only clear issue I have with it is, much like with the game overall, that the LOD (even in the PC version) is very aggressive. Although, you can make an argument that the game doesn't need better LOD since it spends most of its time indoors.

I actually don't make hyperbolic statements. I think you need to go to the PS4 forums for that. I only defend other companies.

I never said any technique was perfect in gaming, btw (after all, I do come from offline rendering). I only stated things I appreciate in various games, whether they are filled with "errors" or "clean".
 
It's extremely uncommon for games with SSR to selectively mask color sampling from objects. Actually, I've never seen that in any of the games I've played that use SSR.

I'm talking about SSR linking to objects (assuming color and not capsule). Is this not a thing? Every material on an object can/cannot be "bound" to an SSR pass depending on the "reflectivity" of the material, correct?

Lots of effects are done in screen-space, many of which are natural for it and can be done with minimal screen-occlusion-related artifacts, such as chromatic aberration and tone mapping. Most lighting and shadowing in most games isn't screen-space, though; if anything, I'd say that AO is about the only aspect of lighting that is solidly most frequently done in screen-space, and even then there are devs that think this has too many problems.

I would argue that creating a normal pass (which has world-space vectors) and then walking over materials to compute the diffuse lighting *is* still screen-space, because the normals have been baked into a projected 2D buffer instead of computing the normal on-the-fly. This would make it very difficult to get normals from outside the camera's frustum (i.e. non-existent in that buffer), for example for a bumped diffuse computation for GI.

I guess I consider "baked" to be "screen-space" in a way. These days, the offline rendering community hates baked passes. It overloads the workflow and doesn't allow the artist complete freedom to get "natural" results.

Of course I'm biased. You're biased, too. Our biases color our preferences for the effects of various techniques.

Fair enough..

I don't think I'm singling out QB here though, this just happens to be a QB thread and QB happens to use screen-space lighting data for a lot of stuff. I've whined about plenty of uses of SSR.

So what's your limitation from a game developer's perspective? Is the hardware there (including PC hardware) to make a Halo game with full world-space computed AO/reflections and still maintain 1080p/60FPS?

To me, it's kind of like worrying that your decals don't have quite the right albedo when they're all constantly experiencing horrible z-fighting. Or trying to improve the accuracy of your rigid body collision results when said rigid objects are randomly teleporting all over the map. Getting a sampling of the subtleties right when there's a huge fundamental issue.

Yea, we both are going to have to disagree here. I think the big flaw in QB's SSR solution is trying to switch between real reflections and an environment-mapped cube based on camera angle. It absolutely is too aggressive and is discontinuous. No denying that. But it's much better IMO than walking around on a pre-baked water surface that is just a static reflection of the environment from a pre-render pass, with no dynamic reflection at all and missing certain objects that are seen through the camera but not in the SSR buffer pass.
 
SSR is a bit trickier, because there are single-surface-orientation implementations out there, which simplify the ray calculation and save you from reading the normal buffer.
But for the robust implementations, which support multiple surfaces with varying orientations and positions simultaneously, there is literally nothing impressive about "applying it liberally to many objects on-screen". Just like SSAO, those types of SSR algos run for every pixel on screen, regardless of whether they'll use it or not. For a consistent framerate, the algo has to be optimised for the worst case, and any free-camera game can have the screen end up completely covered by a reflective material, even if the artists only put a single small puddle in their level. At this point, masking things in or out just adds extra complexity and makes the shader run slower. With a shader like that in place, the only thing keeping artists from making every single surface a damn mirror is their artistic vision. Framerate is unaffected. This applies to QB, and it also applies to the PS4 launch titles KKSF and Infamous SS.
What QB does do better than most in the SSR department, though, is how it handles reflections on rough surfaces. KKSF calculates reflections for every surface as if it were mirror-like, and blurs them after the fact, again in screen space, with varying blur radii depending on how rough the surfaces are. There are several reasons why this doesn't look very good or realistic, and it's far from PBR-esque. QB seems to do some sort of stochastic distribution of rays across rough surfaces, which gets them those nice, sharp contact reflections that grow blurrier with distance from the reflected object. They also seem not to do even the slightest blurring on top of that, which keeps the reflections sharp at the cost of making them very noisy. You can even spot the jitter pattern on large, flat rough surfaces.

*Usually at quarter res or lower. QB does it at 720p, which makes their SSAO and SSR higher res than even those of 1080p games, and also has them match the native g-buffer resolution, which is unusual, but also made easier by the fact that their native resolution is simply that low...

EDIT: Mirror's Edge Catalyst will ship with EA Frostbite's new SSR implementation, which is one of the most robust and highest-quality ones right now. I recommend you guys check out their SIGGRAPH presentation.

This is what my eyes are appreciating. Thanks for this.

Will check out Mirror's Edge for sure.
 
I do not have it, but what about The Division?
It seems in some ways, with all options on, it looks beautiful; not always, but in some aspects of the open-world environment rather than, say, the characters.
Cheers

I tried the beta and hated it... never bought the game, actually.
 
EDIT: Mirror's Edge Catalyst will ship with EA Frostbite's new SSR implementation, which is one of the most robust and highest-quality ones right now. I recommend you guys check out their SIGGRAPH presentation.

Now that you brought this up, I noticed some very interesting things in the beta. Some screens I got from it:

SSR
http://abload.de/img/04.24.2016-07.44.17.0w5k6o.png
http://abload.de/img/04.24.2016-08.35.04.2cqkor.png
http://abload.de/img/04.24.2016-10.23.27.0u5j8e.png

Jittery TAA (looks good in motion)
http://abload.de/img/04.24.2016-07.48.01.05ljcg.png
http://abload.de/img/04.24.2016-07.46.18.0e8k8j.png

And I generally found the beta to be relatively stable in motion.
 
I'm talking about SSR linking to objects (assuming color and not capsule). Is this not a thing? Every material on an object can/cannot be "bound" to an SSR pass depending on the "reflectivity" of the material, correct?
If reflectivity varies, that's usually just different parameters at different pixels, not different passes.
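
A tiny sketch of what "different parameters at different pixels" means here (illustrative naming of my own, not any particular engine): a single SSR composite pass reads a per-pixel reflectivity value out of the g-buffer and weights the reflection accordingly, rather than binding certain objects to separate reflection passes.

[CODE]
#include <vector>
#include <cstddef>

struct SurfaceParams { float reflectivity; };  // stored per pixel in the g-buffer

void composite_ssr(const std::vector<SurfaceParams>& params,
                   const std::vector<float>& sceneColor,
                   const std::vector<float>& ssrColor,
                   std::vector<float>& output)
{
    for (std::size_t i = 0; i < sceneColor.size(); ++i) {
        // One pass, one shader; only the blend weight varies per pixel.
        float r = params[i].reflectivity;
        output[i] = (1.0f - r) * sceneColor[i] + r * ssrColor[i];
    }
}
[/CODE]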

I would argue that creating a normal pass (which has world-space vectors) and then walking over materials to compute the diffuse lighting *is* still screen-space, because the normals have been baked into a projected 2D buffer instead of computing the normal on-the-fly. This would make it very difficult to get normals from outside the camera's frustum (i.e. non-existent in that buffer), for example for a bumped diffuse computation for GI.
I guess that's fair, although that usually only has much significance for sharp specularity on-screen. The effect of off-screen normals tends to be pretty subtle.

I guess I consider "baked" to be "screen-space" in a way.
Care to elaborate? Baked lighting data is camera-independent.

So what's your limitation from a game developer's perspective?
I'm not a Bungie employee, just a fan and graphics enthusiast. I've given Nate Hawbaker crap for Destiny's water SSR, though. :smile2:

Is the hardware there (including PC hardware) to make a Halo game with full world-space computed AO/reflections and still maintain 1080p/60FPS?
Sure, depending on the scene and the desired precision of said effects. :D

I think the big flaw in QB's SSR solution is trying to switch between real-reflections and an environment mapped cube based on camera angle. It absolutely is too aggressive and is discontinuous.
Unfortunately there doesn't seem to be a good way around that, besides "make a racing game so that there's minimal change in camera inclination." ISS does a "good job" of making it less distracting, but that's largely accomplished by using a very large, soft cutoff. They basically made the SSR more stable-looking by having it accomplish less. A lot of people even thought that the player character was excluded from the reflection system!
 