Image Quality and Framebuffer Speculations for WIP/alpha/beta/E3 games *Read the first post*

I quickly looked and I don't think they say that at all.

I don't think they even mention T2x TAA, do they?

They mention SMAA T2x though, perhaps you're getting confused.

The general rule: with the same method, the more samples the better the quality, but at lower performance.

In this case it may be a bit different.

From what I gathered, SMAA 1tx as used in Ryse blends several previous frames into the same single (1tx) temporal AA buffer. So the temporal component is more robust (no more temporal artefacts like the ghosting seen occasionally in Infamous), but at the price of some constant blurring (and indeed Ryse is a lot blurrier than the typical blur of a 900p->1080p upscale).

SMAA 2tx, on the other hand, uses the two previous frames stored in two temporal buffers (2tx), and the algorithm chooses the best previous samples (more or less) depending on the scene.

But I suspect the temporal AA component might/will have many implementations and/or combinations.
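
Very roughly, the difference looks something like this in pseudo-C++ (the names, the blend weight and the selection rule are just mine for illustration, not anything from Crytek's actual implementation):

```cpp
#include <cstdio>

// Hypothetical per-pixel sketch of the two temporal AA flavours described above.
// Names (historyAccum, historyA/B, blendWeight) and constants are made up.

struct Color { float r, g, b; };

Color mix(Color a, Color b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// "1tx" style: a single accumulation buffer. Every past frame remains faintly
// present in historyAccum (exponential falloff), which is why it is very stable
// temporally but also adds a constant blur.
Color resolve1tx(Color current, Color& historyAccum, float blendWeight) {
    historyAccum = mix(historyAccum, current, blendWeight);  // e.g. blendWeight ~0.2
    return historyAccum;
}

// "2tx" style: the two previous frames are kept as separate buffers, and the
// resolve picks whichever previous sample looks most plausible for this pixel,
// e.g. the one closest in colour to the current frame.
Color resolve2tx(Color current, Color historyA, Color historyB) {
    auto dist2 = [](Color a, Color b) {
        float dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b;
        return dr * dr + dg * dg + db * db;
    };
    Color best = (dist2(historyA, current) < dist2(historyB, current)) ? historyA : historyB;
    return mix(current, best, 0.5f);  // arbitrary 50/50 blend, purely illustrative
}

int main() {
    Color history = { 1.0f, 0.0f, 0.0f };                            // stale red sample
    Color out1 = resolve1tx({ 0, 1, 0 }, history, 0.2f);             // drifts slowly towards green
    Color out2 = resolve2tx({ 0, 1, 0 }, { 0, 0.9f, 0 }, { 1, 0, 0 });  // picks the green history
    printf("1tx: %.2f %.2f %.2f  2tx: %.2f %.2f %.2f\n",
           out1.r, out1.g, out1.b, out2.r, out2.g, out2.b);
    return 0;
}
```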
 
So, we're currently looking at 840-900p for cutscenes and mostly 840p gameplay? Maybe when everything is said and done in 5-6 months, it'll be 900p across the board.

Resolution generally doesn't increase this far along in development. Especially not when a consistent 60fps is a priority.

Look at Titanfall as an example. PR made it seem like a possibility that they could increase the resolution to 900p at launch, but that never happened. They're still busy trying to stabilize the framerate and alleviate the frame tearing.
 
I was being more optimistic about the situation than realistic.

Anyhow, does anyone know if the engine is forward or deferred based (maybe a hybrid of the two)?
 
Seeing as how Activision PR has not issued a single word about this being a new engine, I would assume it's still using the same core engine as all the older Call of Duty games, which would most likely mean it's forward rendered.

The new post-processing effects (depth of field, motion blur) are a definite plus in my book though. I do love myself some good, cinematic movement-based motion blur; in fact, I wish every game had this effect. But I wonder if it impacts input lag in any significant fashion.
 
Resolution generally doesn't increase this far along in development. Especially not when a consistent 60fps is a priority.

Look at Titanfall as an example. PR made it seem like a possibility that they could increase the resolution to 900p at launch, but that never happened. They're still busy trying to stabilize the framerate and alleviate the frame tearing.

Not saying it will increase, but Titanfall devs said they may increase resolution in a patch after release.

The same thing kind of happened with Trials Fusion (day-one X1 patch, 800p -> 900p), and several PS4 games received day-one resolution-increasing patches.

So if resolution can be increased even after release, it's hardly too late 6 months from release.
 
In this case it may be a bit different.
...

SMAA 2tx, on the other hand, uses the two previous frames stored in two temporal buffers (2tx), and the algorithm chooses the best previous samples (more or less) depending on the scene.
I'd call that using more samples. Rejecting some as part of the algorithm is still using them in the process. So more samples == better quality and more overhead. Different technique == different results with the same resources.
 
From what I gathered, SMAA 1tx as used in Ryse blends several previous frames into the same single (1tx) temporal AA buffer. So the temporal component is more robust (no more temporal artefacts like the ghosting seen occasionally in Infamous).
Why would using an accumulation buffer prevent ghosting?
 

Not sure how they arrived at that number. 837p seems to be consistently what Global and I are getting when looking at the "gameplay" sections in the trailer.

Edit: I'm getting 864p or 880p when measuring the long edge in this Kevin Spacey cutscene (12/15 or 22/27 scaling ratio):

[attached screenshot: 1.png]


But I do notice that the action scenes are much blurrier, suggesting different (dynamic) resolution for gameplay and cutscenes.

Edit 2: I could swear I'm only getting 600p (!) from the few edges I can find in this screenshot:

[attached screenshot: kljozf.png]


But it's extremely blurry, so it's difficult to measure.
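
For reference, the arithmetic behind those numbers (the 12/15 and 22/27 ratios are the ones above; 31/40 is just 837/1080 reduced, not necessarily the ratio anyone actually measured):

```cpp
#include <cstdio>

// Illustrative only: given the ratio of native steps to output pixels counted
// along an edge in a 1080p grab, estimate the native vertical resolution.
int estimateNativeHeight(int nativeSteps, int outputPixels, int outputHeight = 1080) {
    return outputHeight * nativeSteps / outputPixels;
}

int main() {
    printf("12/15 -> %dp\n", estimateNativeHeight(12, 15));  // 864p
    printf("22/27 -> %dp\n", estimateNativeHeight(22, 27));  // 880p
    printf("31/40 -> %dp\n", estimateNativeHeight(31, 40));  // 837p
    return 0;
}
```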
 
837 is also quite unlikely, it should probably be a multiple of 8...

Doesn't have to be a multiple of eight; it just has to produce a whole number when multiplied by 16/9 (i.e. be a multiple of 9). 837p, for example, works out to an equivalent horizontal resolution of 1488, making it an even 1488x837.
 
Yeah, it doesn't have to be, but it's a lot more practical. For example, any kind of tile-based operations, hierarchical stuff, half-sized buffers, and then there's the 2x2 quad efficiency issue with the rasterizer. It just makes a lot more sense - and let's face it, your pixel counting is based on heavily compressed video grabs, isn't it? Not to mention that the final resolution may easily change before release.
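
To put numbers on that (the heights are from this thread; the check itself is just trivial arithmetic, not a claim about what the developers actually do):

```cpp
#include <cstdio>

// At 16:9, the height only needs to be a multiple of 9 for the width to come
// out whole, but multiples of 8 are friendlier for tiles, 2x2 quads and
// half-res buffers.
void check(int h) {
    printf("%dp: 16:9 width = %.2f, h/8 = %.3f, half-res height = %.1f\n",
           h, h * 16.0 / 9.0, h / 8.0, h / 2.0);
}

int main() {
    check(837);  // 1488.00 wide (whole), but 104.625 and 418.5 -> awkward alignment
    check(864);  // 1536.00 wide, 108.000 and 432.0 -> clean on every count
    return 0;
}
```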
 
Why would using an accumulation buffer prevent ghosting?

It doesn't. "Ghosting" is an artifact that results from using a temporal filtering algorithm that incorporates samples that are unrelated to the current pixel value that you want to present. Therefore the magic is all in how you determine which temporal samples should be used and which ones shouldn't. The SMAA paper/presentation originally suggested weighting samples based on the velocity written to a per-pixel velocity target. This generally works for surfaces where you output velocity, but for transparents and other things where you don't output velocity it falls apart. This is why Crytek proposed weighting purely based on color values from a local neighborhood, which is actually based on research presented a few years ago for the game Dust 514.
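
Roughly, the two rejection strategies look like this (a sketch of the general idea only, not the actual SMAA or Crytek code; the constants are made up):

```cpp
#include <algorithm>

struct Color { float r, g, b; };

// Strategy 1 (the original SMAA T2x suggestion): trust the history sample less
// as the per-pixel velocity grows. Only works where a velocity was written out.
// maxVelocity and the 0.5 cap are made-up constants.
float historyWeightFromVelocity(float velocityPixels, float maxVelocity = 8.0f) {
    float w = 1.0f - std::min(velocityPixels / maxVelocity, 1.0f);
    return 0.5f * w;  // never let history fully override the current frame
}

// Strategy 2 (the colour/neighbourhood-based idea): clamp the history colour to
// the min/max of the current frame's local neighbourhood, so a stale sample that
// no longer resembles anything nearby gets pulled back, killing ghost trails.
Color clampHistoryToNeighbourhood(Color history, Color nMin, Color nMax) {
    return { std::clamp(history.r, nMin.r, nMax.r),
             std::clamp(history.g, nMin.g, nMax.g),
             std::clamp(history.b, nMin.b, nMax.b) };
}

int main() { return 0; }  // the two helpers above are the point; nothing to run
```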
 
Doesn't have to be a multiple of eight; it just has to produce a whole number when multiplied by 16/9 (i.e. be a multiple of 9). 837p, for example, works out to an equivalent horizontal resolution of 1488, making it an even 1488x837.

Why should the aspect ratio of the native res be 16:9?
We've had games with 8:9 buffers or dynamic aspect ratios.
 
Edit 2: I could swear I'm only getting 600p (!) from the few edges I can find in this screenshot:
[attached screenshot: kljozf.png]
But it's extremely blurry, so it's difficult to measure.

That scene was a big reason why I thought there were varying resolutions depending on the scene in the trailer.
 
[comparison screenshots: ProjectCARS_PC-1-wm.png vs ProjectCARS_PS4-1-wm.png]

[comparison screenshots: ProjectCARS_PC-4-wm.png vs ProjectCARS_PS4-4-wm.png]

I think the native resolution is the same on both, but the PC version has better AA and tessellation, and maybe slightly better shader quality.
 
The "main differences" I see right now, is the PS4 shots are actually racing (so motion blur, DOF, and reflections are going to be quite different than static shots, compared to the posted PC shots). But don't get me wrong, a high-end PC would/should of course offer better overall IQ.
 