Digital Foundry Article Technical Discussion [2017]

The difference is massive. Now I think it's clear to everyone that there is a strong bandwidth bottleneck on the PS4 Pro.

The X version proves that the developers did their best on each piece of hardware, including the XB1.
 
I'm guessing that they ran out of memory on the 4Pro when it came to raising the maximum render resolution (not enough to go up to 1620p, 1800p etc., for instance), even if the GPU could otherwise have handled it.

There's a shadow bug on the woman's arms @5:10 on the OneX.
The texture comparison in the preceding footage (BJ's forearm) could just be due to memory streaming priority. Doom can exhibit similar issues on PC with low GPU memory.
Yeah, Doom had this as well; a symptom of the virtual texturing/MegaTexture system, perhaps?
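For anyone unfamiliar with how that failure mode arises, here's a purely hypothetical sketch (not id's actual streaming code) of a feedback-driven virtual texture system: the pages requested by the feedback pass get ranked each frame, only the top of the list fits the resident budget, and everything else stays at whatever coarser mip is already loaded.

```python
# Hypothetical illustration (not id's actual streaming code): rank requested
# virtual texture pages and keep only what fits the resident page budget.
from dataclasses import dataclass

@dataclass
class PageRequest:
    page_id: int
    mip: int                 # 0 = finest detail
    screen_coverage: float   # rough fraction of the frame touching this page

def select_resident_pages(requests, budget_pages):
    # Favour pages that cover more of the screen; ties go to finer mips.
    ranked = sorted(requests, key=lambda r: (-r.screen_coverage, r.mip))
    return ranked[:budget_pages]

requests = [
    PageRequest(0, 0, 0.20),   # close-up surface, e.g. a character's forearm
    PageRequest(1, 2, 0.05),   # distant wall
    PageRequest(2, 0, 0.02),   # small but detailed prop
]
print(select_resident_pages(requests, budget_pages=2))
```

Shrink the budget (less GPU memory, or a lower streaming priority for that material) and a close-up surface can get stuck on a coarse mip, which is exactly the blurry-forearm kind of result being described.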
 
There's a chance the game uses double-rate FP16, or the difference would have been even bigger :mrgreen:
Yeah ;) I totally forgot about the RPM implementation here.
I would like to identify the points where we think RPM FP16 helped the scenes in Wolfenstein 2. It would be awesome to see it in action.
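For context, RPM (Rapid Packed Math) is double-rate FP16 in the Pro's GPU, so it can't really be shown from host code; but as a rough, hypothetical sketch of why it gets applied selectively, here's where half precision tends to hold up and where it doesn't:

```python
import numpy as np

# FP16 carries roughly 3 significant decimal digits: usually fine for
# normalized lighting/colour terms, not for large world-space values.
albedo  = np.float16(0.73)
n_dot_l = np.float16(0.41)
print(float(albedo * n_dot_l), 0.73 * 0.41)   # ~0.2993 vs 0.2993 -- good enough

world_x = np.float16(4096.3)
print(float(world_x))                          # 4096.0 -- the fraction is gone
```

That's presumably why the FP16 paths in shipping engines tend to be lighting accumulation and post-processing rather than geometry or position math.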
 
I would accept bigger drops in resolution for a rock solid 60fps that doesn't noticeably dip, even in the spots where the drop is only 2fps.

Another thing that's becoming clear to me is that with the diminishing returns of resolution increases, I'd rather have a higher ratio of CPU to GPU than we got with the PS4/Xbox One era. There should be enough CPU that 60fps in open-world games like GTA or Assassin's Creed is not an issue. Here they're talking about potential CPU-related frame drops in a small-scale linear game.
 
Like I predicted... Pro is going to get wedged into a weird value proposition between the base consoles and the 1X. And out of all the spec and feature differences, the one where I think they dropped the ball the most is not having 1080p supersampling as a system-level feature. You could make a case that if you are holding out on a 4K HDR display but still want the console that will play games best on a 1080p screen for the next few years until next gen, then the PS4 Pro is the best value proposition simply because it's $100 less. But it's annoying as hell to know that the supersampling implementation is left up to the developer.
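For what it's worth, the developer-side work being asked for there is conceptually small. A minimal, purely illustrative sketch (not any console's actual scaler): render high, then box-filter down to 1080p.

```python
import numpy as np

# Illustrative box-filter supersample: average each 2x2 block of a 3840x2160
# render target down to a 1920x1080 output.
hi = np.random.rand(2160, 3840, 3).astype(np.float32)   # stand-in for a 4K frame
lo = hi.reshape(1080, 2, 1920, 2, 3).mean(axis=(1, 3))
print(lo.shape)   # (1080, 1920, 3)
```

Real games render at all sorts of intermediate resolutions and would filter more carefully than this, but the point stands that it's a resolve step, not a redesign.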
 
I would accept bigger drops in resolution for a rock solid 60fps that doesn't noticeably dip, even in the spots where the drop is only 2fps.

Dynamic res should only work when it's a GPU issue, and I can't imagine them not dropping to hit 60fps if that were the case, so it must be something on the CPU side.
 
They run at different resolutions. Maybe the difference is enough to push deferred over the edge.

Obviously, different resolutions will have a different pixel to vertex ratio. But that might not be what the issue is here.

WRT the X1, with a deferred renderer it may be possible to have fewer render targets, or a smaller total pool of memory handling high-BW activity at once (they talk about DMAing data out of ESRAM, presumably after they've finished with something like generating a G-buffer). This would help the X1, as we know from id's comments that mitigating the BW hit from spilling over from fast ESRAM into slow DDR3 was a priority.

Losing ALU time by moving to a theoretically less ALU efficient renderer - with additional DMA transfers - could be a win if your bottleneck was actually caused by most of your buffers / targets existing in DDR3 and choking on 1/3 of the BW of the target console's main memory.

Again, it is crazy to insist that what would mitigate the X1's most significant bottleneck would be an automatic win for a system without that bottleneck.

Interestingly, low end PC GPUs would normally suffer from a shortage of BW - meaning you'd want to work within GPU caches as much as possible. A deferred renderer - particularly a tiled one - would be ideal for this (ala PowerVR). Even without tiling, localising reads and writes by working from fewer or smaller buffers at once could be a performance win.
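To put rough numbers on that 32 MB constraint (illustrative G-buffer layout and resolution, not Wolfenstein 2's actual setup):

```python
# Illustrative only: a typical-ish deferred G-buffer vs. 32 MB of ESRAM.
width, height = 1920, 1080
target = width * height * 4          # one 32-bit render target, in bytes
gbuffer = 4 * target + target        # e.g. albedo/normal/material/motion + depth
print(gbuffer / 2**20)               # ~39.6 MiB -- already over the 32 MB of ESRAM
```

Hence fewer or smaller targets, and DMAing finished buffers out to DDR3, to keep whatever is bandwidth-hot resident in ESRAM; none of which buys you anything on a console with a single fast memory pool.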
 
Dynamic res should only work when it's a GPU issue, and I can't imagine them not dropping to hit 60fps if that were the case, so it must be something on the CPU side.

Resolution scaling is always either reactive or predictive, and so maintaining the highest possible resolution will likely always risk dropping the frame rate. Halo "Locke!?" 5 seems to have been ultra conservative with maintaining framerate and I suspect the resolution varies more wildly because of it.
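As a hypothetical sketch of the reactive flavour (not any shipping engine's controller): scale the next frame's render resolution from the last measured GPU frame time, clamped to a floor and a ceiling.

```python
# Hypothetical reactive dynamic-resolution controller (illustrative only).
TARGET_MS = 16.6                      # 60fps budget
MIN_SCALE, MAX_SCALE = 0.6, 1.0       # clamp on the per-axis resolution scale

def next_scale(current_scale, gpu_ms):
    # GPU cost is roughly proportional to pixel count, i.e. scale squared.
    desired = current_scale * (TARGET_MS / gpu_ms) ** 0.5
    return max(MIN_SCALE, min(MAX_SCALE, desired))

scale = 1.0
for gpu_ms in (18.0, 17.2, 16.0, 14.5):       # a GPU-heavy stretch easing off
    scale = next_scale(scale, gpu_ms)
    print(f"{gpu_ms:4.1f} ms -> {int(3840 * scale)}x{int(2160 * scale)}")
```

A controller like this only has leverage over GPU time, which is why a dip that persists even at the resolution floor points at the CPU.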
 
Again, it is crazy to insist that what would mitigate the X1's most significant bottleneck would be an automatic win for a system without that bottleneck.

There are bottlenecks on all hardware, just not to the same degree. A particular optimization is only non-reproducible when the hardware has a specific feature others lack. That's not the case on XB1; there's nothing special about ESRAM.

If you can save some resources on XB1 with a particular rendering choice, then it's good for all systems. Checkerboard rendering is good when there is limited GPU power, but it will improve performance on any system. Even on a GTX 1080 Ti...

Juggling data between XB1's DDR3 and ESRAM is not a rendering choice.

All XB1 exclusives work particularly well on XBX, and prove that what works on XB1 works really well on XBX. If I follow your reasoning, then games designed around ESRAM from the ground up should not run well on XBX.

Let's see what happens : "In terms of pure compute power, Xbox One X has a 4.6x advantage over the launch version of the older system and you get all of that scalability transferred into raw pixel count in Halo 5 - and more. In fact, in like-for-like scenarios we've seen anything up to five or even six times the resolution on Xbox One X."

http://www.eurogamer.net/articles/d...-xbox-one-x-is-the-way-its-meant-to-be-played
 
There are bottlenecks on all hardware, just not to the same degree. A particular optimization is only non-reproducible when the hardware has a specific feature others lack. That's not the case on XB1; there's nothing special about ESRAM.

If you can save some resources on XB1 with a particular rendering choice, then it's good for all systems. Checkerboard rendering is good when there is limited GPU power, but it will improve performance on any system. Even on a GTX 1080 Ti...

All XB1 exclusives work particularly well on XBX, and prove that what works on XB1 works really well on XBX. If I follow your reasoning, then games designed around ESRAM from the ground up should not run well on XBX.

Let's see what happens : "In terms of pure compute power, Xbox One X has a 4.6x advantage over the launch version of the older system and you get all of that scalability transferred into raw pixel count in Halo 5 - and more. In fact, in like-for-like scenarios we've seen anything up to five or even six times the resolution on Xbox One X."

http://www.eurogamer.net/articles/d...-xbox-one-x-is-the-way-its-meant-to-be-played

You're stating as absolutes things you have absolutely no ability to know to be true.

You cannot infer from X1X running X1 games well that all optimisations for X1 are ideal for X1X. Why the hell would you DMA data around or intentionally organise it into 32 MB-friendly clusters of work if you didn't have to?

If every optimisation worked the same on all console hardware, W2 wouldn't have opted for deferred rendering on X1 and forward (as on all non-critically-BW-constrained systems) on X1X.

And as for that final quote, I'm facepalming my way into a black eye. Of course X1X can scale up with resolution better than X1: it's not running into ESRAM-shortage-related BW issues that limit its ability to scale buffers upwards.

Aaaaaaaaargh!
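To put rough numbers on that last point (illustrative resolutions, not measured Halo 5 figures): if the base console's dynamic scaler is dipping while the X1X holds its ceiling, the pixel ratio naturally exceeds the raw 4.6x compute ratio.

```python
# Illustrative only -- not measured Halo 5 resolutions.
base_pixels = 1280 * 810            # base console dipping below its 1080p ceiling
x1x_pixels  = 3456 * 1944           # X1X holding near its 4K ceiling
print(x1x_pixels / base_pixels)     # ~6.5x the pixels from a 4.6x compute advantage
```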
 
If every optimisation worked the same on all console hardware, W2 wouldn't have opted for deferred rendering on X1 and forward (as on all non-critically-BW-constrained systems) on X1X.

Because it's a compromise between performance and your artistic goals. You could compare that to the choice of an anti-aliasing method. The cheaper option doesn't always meet your visual standards.

Also, it's possible that deferred rendering only becomes faster at a lower resolution. But then it would be faster on all systems at that lower resolution.

Everything that is cheaper on XB1 would be cheaper on the other consoles. I mean, the main parts of the hardware are basically the same. The only real difference is the split memory on XB1.
 
for all systems. Checkerboard rendering is good when there is limited GPU power, but it will improve performance on any system. Even on a GTX 1080 Ti...
I don't think this would help if you are memory-bound. Checkerboarding is very much an ALU saver. But it still needs to be fed, and the rest of the pipeline still needs to work with a 4K frame buffer.

Luckily they pair the right amount of memory with the amount of ALU, so that actually isn't a problem. The GeForce 1080 Ti is designed for 4K. If you asked it to checkerboard up to a 16K frame buffer, I think you would hit a hard wall on bandwidth.
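Rough arithmetic on that, assuming (purely for illustration) that checkerboarding shades half the target's pixels but still fills and resolves a full-resolution buffer:

```python
# Illustrative: CBR halves the shaded pixel count (ALU), but fill/bandwidth
# still scale with the full output buffer.
target_4k  = 3840 * 2160
target_16k = 15360 * 8640

shaded_16k = target_16k // 2            # the "saved" half still leaves a lot
print(target_16k / target_4k)           # 16.0 -- 16x the pixels to fill and move
print(shaded_16k / target_4k)           # 8.0  -- shading load alone is 8x a full 4K frame
```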
 
I don't think this would help if you are memory-bound. Checkerboarding is very much an ALU saver. But it still needs to be fed, and the rest of the pipeline still needs to work with a 4K frame buffer.

Luckily they pair the right amount of memory with the amount of ALU, so that actually isn't a problem. The GeForce 1080 Ti is designed for 4K. If you asked it to checkerboard up to a 16K frame buffer, I think you would hit a hard wall on bandwidth.
And you'd hit a wall on ROPs too. Bandwidth + ROPs, making it a reinforced-concrete, unbreakable wall. :yep2:
 
I've played Doom on Switch and I can tell you this game looks absolutely great! Before the Switch I only had a PS2 and an Xbox 360; Uncharted 4 is the only game I've completed on PS4, played on a friend's console. So for people like me who don't have a PS4 or Xbox One, the level of graphics the Switch can offer in a game like Doom is very impressive. All the talk about the Switch being no more than a portable Xbox 360 just isn't true. There's no game on Xbox 360 or PS3 that comes even close to Doom on Switch. All those effects, the motion blur quality, bokeh, particles, the level of geometry, it's absolutely amazing, especially for a portable console.
Yes, the frame rate and resolution aren't the best, but nobody should forget that this is just the Switch's first year; after 2-3 years we'll have even better graphics on Switch, with a higher or more stable frame rate and higher resolution. Just look at all the previous consoles. I even think Wolfenstein 2 (which also uses id Tech 6) will be more impressive on Switch next year.
 
DF: Call of Duty: World War 2 first look between One X and 4Pro -- http://www.eurogamer.net/articles/d...uty-world-war-2-ps4-pro-xbox-one-x-first-look

...
There's the sense that Sledgehammer may have pulled back slightly from the super-dense post-process approach seen in Infinite Warfare - film grain is pared back a touch, for starters - but there's still the feeling that the developer is aiming for a cinematic look to the title. Similar to many titles we've seen recently, the game employs a heavy temporal component in its anti-aliasing, meaning that the traditional super-rich detail level associated with native rendering in video games is absent. You can call the presentation soft, but by that chalk, the same can be said for any movie or TV show. Like it or not, techniques such as these represent the future of the video game aesthetic.

And by extension, that makes pixel-counting - which relies heavily on flat geometric edges - very difficult to deploy on this title. Similar to other games that rely heavily on temporal anti-aliasing, the difference between Xbox One X and PlayStation 4 Pro primarily comes down to clarity. The Microsoft platform does render more pixels more of the time, and this primarily manifests by presenting additional detail in the image. It's no game-changer though - things just look a little clearer for most of the time.
 