Image Quality and Framebuffer Speculations for WIP/alpha/beta/E3 games *Read the first post*

The 29% difference in pixel count between 792p and 900p would be the lowest between PS4 and XB1 in all Ubisoft games released so far on next gen, at least among these games:

44% difference with AC4, but the XB1 game had worse performance in some levels.
44% difference with Trials Fusion patched to 900p (remember that the game even shipped at 800p on XB1 vs 1080p on PS4), and like AC4, the XB1 game had worse performance in many levels.

Also, this 792p resolution can't be a coincidence. It's almost certainly an ideal ESRAM framebuffer size configuration, like 600p was for many X360 games. Now if only framebuffer experts could enlighten us as to its actual meaning, considering that Titanfall uses 2xMSAA and Watch Dogs will most probably use temporally supersampled SMAA...
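For anyone who wants to sanity-check those percentages, here's a quick back-of-the-envelope sketch in Python (just raw pixel counts, nothing console-specific):

```python
# Pixel-count ratios between the resolutions discussed above.
def pixels(w, h):
    return w * h

p792 = pixels(1408, 792)    # "792p"
p900 = pixels(1600, 900)    # "900p"
p1080 = pixels(1920, 1080)  # "1080p"

print(p900 / p792)    # ~1.29 -> the 29% gap
print(p1080 / p900)   # ~1.44 -> the 44% gap (AC4, patched Trials Fusion)
```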
 
1408x792 itself is pretty arbitrary. It's just +10% in each dimension versus 720p. At the very least, it's divisible by 8, which should come in handy for down-sampled intermediate buffers. It's not great for scaling to either 1080p or 720p.

Beyond that, it's hard to say what other buffers they keep around simultaneously. Post-process buffers can reuse some memory spaces, depending on the setup. A 2k x 2k shadowmap is already 16MB, or 8MB if they choose a 16-bit format. Then add more for cascades or other shadow filtering methods. Keep the shadows that need to be updated every frame in ESRAM, the slower-updating shadowmaps outside...
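To put rough numbers on those shadowmap sizes (plain width x height x bytes-per-texel, ignoring any padding or alignment the hardware might add):

```python
MiB = 1024 * 1024

def shadowmap_mib(size, bytes_per_texel):
    # Square shadowmap, no mips, no padding.
    return size * size * bytes_per_texel / MiB

print(shadowmap_mib(2048, 4))  # 16.0 -> 2k x 2k at 32-bit
print(shadowmap_mib(2048, 2))  #  8.0 -> 2k x 2k at 16-bit
```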

Every 32-bpp target would simply be ~4.25MiB, which doesn't really neatly "fit" the 32MB ESRAM in any multiple combination (including 64-bit formats). Of course, it remains to be seen how spilling over into DDR3 (partial rendertargets) affects things.

If they were going to go for a framebuffer that neatly filled 16MB or 32MB, 1536*888 (anamorphic 16:9 scaling) would be very close for 12/24-byte per pixel. Obviously, whether they should fill the entire ESRAM for just the framebuffer at a given time depends on their needs.
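The same fit argument in code; this is pure byte math under the assumption of simple linear render targets, so real ESRAM layouts could differ:

```python
MiB = 1024 * 1024
ESRAM_MIB = 32.0

def target_mib(w, h, bytes_per_pixel):
    return w * h * bytes_per_pixel / MiB

# 1408x792 at 32 bpp is ~4.25 MiB, which doesn't divide 32 MiB evenly.
per_target = target_mib(1408, 792, 4)
print(per_target)               # ~4.25
print(ESRAM_MIB / per_target)   # ~7.5 targets, so there's always a remainder

# 1536x888 nearly fills 16 MiB at 12 B/px and 32 MiB at 24 B/px.
print(target_mib(1536, 888, 12))  # ~15.6
print(target_mib(1536, 888, 24))  # ~31.2
```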
 
To me it feels like this problem runs deeper than the size of the various buffers; it's more likely the result of renderers not built around the next-gen systems and techniques. Neither WD nor (in particular) TF looks like a "proper" next-gen engine with linear lighting and physically based shading; they're more likely some forward+ based approaches. So maybe the problem is submitting geometry twice (forward+ typically needs a depth pre-pass before the shading pass), or some other inherent inefficiency that can't be helped without some major architectural reworking, and possibly asset modifications too.
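To illustrate the "submitting geometry twice" point, here's a toy outline of a forward+ frame (all names made up, definitely not either game's actual renderer):

```python
# Toy forward+ frame: the scene is submitted once for depth,
# then again for shading after per-tile light culling.
def draw_scene(pass_name):
    print(f"submitting all scene geometry for: {pass_name}")

def frame():
    draw_scene("depth pre-pass")   # geometry submission #1
    # ...per-tile light culling would run here on the depth buffer...
    draw_scene("shading pass")     # geometry submission #2

frame()
```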

Yes, The Order on the other hand works pretty nicely without going deferred, but its scope is much more limited and the framerate isn't 60fps either.
 
It may be a dumb question, but wouldn't a res like 960x1080 be a better choice than 1408x792 due to better upscaling to 1080p?
 
I suppose it'd be worth examining with 2xMSAA, but a 2x horizontal upscale might simply be too horrendous (if you're ever bothered by low-res transparents, then you'd be bothered here too).

Now, 1600x900 compared to 1320x1080 would be neat. 1320x1080 is the same aspect ratio as 880x720 (BLOPSII).

On the flip side, if you have a 24 byte per pixel framebuffer/G-buffer, 1288x1080 is right under 32MB.
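Checking those two candidates with the same byte math as before (again assuming simple linear targets):

```python
MiB = 1024 * 1024

def target_mib(w, h, bytes_per_pixel):
    return w * h * bytes_per_pixel / MiB

# 1320x1080 has the same aspect ratio as BLOPS II's 880x720.
print(1320 / 1080, 880 / 720)      # 1.222... for both

# A 24-byte-per-pixel G-buffer at 1288x1080 squeezes just under 32 MiB.
print(target_mib(1288, 1080, 24))  # ~31.8
```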
 
I can't say from any experience because I haven't played Trials Fusion, but from all I have read the game runs very smoothly on both systems; it's the closest thing to a locked 60fps we have had aside from MGS5. I've heard there is a touch of texture pop-in on both consoles.

As for 792p being the ideal framebuffer size for the Xbox One: that would make sense if you ignore all the other titles on the Xbox One that hit 900p and 1080p. We have two games at 792p, and to be honest I think they are hitting that res to avoid the rumored 720p that was attached to them, at least early on. Both Watch Dogs and Titanfall look worse graphically than games that hit 900p and 1080p on the X1. I honestly think the X1 is best suited for 900p. Like Sebbi has stated, neither console is truly being used to its potential by launch-window and cross-gen titles.
 
So 1320x1080 would be the better choice, right? I remember BLOPS II at 880x720 looking pretty good upscaled to 1080p.

Also, about 1288x1080 filling the ESRAM: it will be interesting to see how 1280x1080 looks. I know Gran Turismo 5 runs at that res when the PS3 is set to 1080p, but I haven't seen it in person to judge.

Seeing how the amount of ESRAM is the main problem causing the sub-1080p resolutions with deferred rendering techniques (along with cross-gen development, to an extent), would tiling work to achieve 1920x1080, or is it too taxing? I remember reading a few years back that tiling could be a solution to the sub-HD resolutions caused by the eDRAM on the 360, but from what I know, apart from theories, tiling wasn't actually implemented in any game to help raise the res.
 
Tiling is used on the 360; games like Forza Horizon (720p with 4xMSAA, so ~14MB for a single 32-bit target) wouldn't be possible without it.

Some developers have been opposed to it, of course. Higher resolutions are a performance tradeoff in their own right, and predication and redundancies can make it even worse than usual.
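To put numbers on the Forza Horizon example (assuming a plain 32-bit color target and ignoring depth, which only makes the overflow worse):

```python
MiB = 1024 * 1024
EDRAM_MIB = 10.0  # X360 eDRAM capacity

# 1280x720 with 4xMSAA stores 4 color samples per pixel.
color_mib = 1280 * 720 * 4 * 4 / MiB
print(color_mib)               # ~14.06 -> already over 10 MiB on its own
print(color_mib > EDRAM_MIB)   # True -> the frame has to be split into tiles
```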

On the 960x1080 question: I wonder if texture filtering has anything to do with it? Ignoring any biases based on scene composition, a game with a non-anamorphic framebuffer ought to tend toward sharper MIP levels than a game with a highly anamorphic buffer using the same number of pixels.
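A loose way to model that: hardware picks the MIP level from the larger screen-space texel derivative, so stretching one axis of an anamorphic buffer inflates that axis's footprint. A toy version (heavily simplified; real GPUs work on pixel quads, and anisotropic filtering changes the outcome):

```python
import math

def mip_level(texels_per_pixel_x, texels_per_pixel_y):
    # Isotropic mip selection: log2 of the worst-case footprint axis.
    return math.log2(max(texels_per_pixel_x, texels_per_pixel_y))

print(mip_level(1.0, 1.0))  # 0.0 -> square-pixel buffer, sharpest mips
print(mip_level(2.0, 1.0))  # 1.0 -> 2x horizontal stretch costs a full mip
```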

Racing games are always said to benefit from horizontally anamorphic rendering. Part of that might be geometry aliasing, but part of that might also be that the texturing on the track surface is typically being smushed vertically in the distance, and so benefits from more sampling on the vertical axis.

Maybe. I dunno.
 
Didn't know that Horizon used tiling, thanks for pointing that out. Knowing this makes Horizon even more impressive, if anything.

I guess as the dev tools evolve we'll see more devs experimenting with different resolutions and tiling. Can't wait to see the second wave of XB1 software, especially the exclusive games.
 
There is a big difference between the two situations, though. On the X360, memory tiling was used just for one quick operation on the framebuffer each frame.

They never used tiling for what would now be comparatively hundreds of megabytes of textures/G-buffer data on the X360, because its main memory was already fast enough to deal with those assets efficiently.
 
New build:
[screenshot]

Old build:
[screenshot]


IMO the polycount and native resolution are lower than in the old build, but the PP effects and AA look better.
 
Shaders seem better, but that's pretty much all you can tell from those shots. The lighting and grading are too different to make any direct comparison.
 
If by lower poly you're referring to the simpler clothing, RAD tweeted that these are more appropriate infiltration uniforms instead of full ceremonial dress uniforms.
 
Lighting has dramatically improved: the girl on the right now casts proper shadows on herself and on the wall behind her as well.

It's still not without problems, though; the guy on her left, looking down, has spots here and there where more of his face should be in shadow. It's probably an issue with shadow map precision.
 
Hmm, the collar is not perfectly round or smooth anymore. I know they are different clothing, but the newer one has more noticeable polygon edges. I wonder if they will fix that, considering Andrea Pessino mentioned on Twitter that nothing has been downgraded but most likely upgraded.
 
The wounded guy is in the same suit, yet the sleeve is certainly lower poly. His belt and buttons are also worse, and he doesn't have a bag over the belt.
 