Do consoles punch above their weight? Analyzing DF data *spawn

All true, but the PS4 is much faster than a 750 Ti on paper. The real surprise is that it doesn't always vastly outperform that GPU... especially if this 60% advantage is real, in which case the PS4 should be outperforming the GTX 960. How often have we seen that? In Battlefront the PS4 is likely underperforming (I haven't run the specific numbers), in Hitman my previous couple of posts have shown that it's performing roughly in line with its theoretical specs, and ROTTR is a great showing for the XBO... until you consider the advantage its eSRAM brings, in which case its performance seems pretty predictable, and nothing like the 60% advantage it "should" be gaining.
No, they always use an overclocked 750 Ti (+200MHz on the GPU and +400MHz on the VRAM), so it's not so much faster even on paper, and Nvidia usually beats AMD handily if we compare cards with the same amount of FLOPS.

And if you take Rise of the Tomb Raider, a ~1550 GFLOPS 'FLOPS-efficient' Nvidia card is beaten easily by a ~1300 GFLOPS 'not FLOPS-efficient' AMD card. And you are mistaken about eSRAM. It is only there to alleviate the low main memory bandwidth; it's not miracle hardware. Tens of comparisons with PS4 multiplats have shown this.
 
Nvidia drivers are also very CPU-efficient; see their recent comparison in RotTR (note that the 390 has a ~1.5 TFLOP advantage over a 970 on paper):

[image: RotTR benchmark chart]


Now imagine if they were testing everything on an AMD CPU/GPU combination (a 7850 with an 8350 underclocked to 1.8GHz [which is still ahead in power on paper compared to the X1/PS4 CPU]). I'm pretty certain that combination would almost always lose in face-offs, which is why they opted for Intel/Nvidia in the first place.

The question shouldn't be whether consoles are able to pull better performance than the same PC spec, but how much better they are able to do. That of course changes from game to game and engine to engine, and also depends on how much time devs devote to low-level optimisation (the PS4, for example, has two graphics APIs: GNM [low-level] and GNMX [high-level]). Quoting directly from this article:
"Most people start with the GNMX API which wraps around GNM and manages the more esoteric GPU details in a way that's a lot more familiar if you're used to platforms like D3D11. We started with the high-level one but eventually we moved to the low-level API because it suits our uses a little better," says O'Connor, explaining that while GNMX is a lot simpler to work with, it removes much of the custom access to the PS4 GPU, and also incurs a significant CPU hit.

A lot of work was put into the move to the lower-level GNM, and in the process the tech team found out just how much work DirectX does in the background in terms of memory allocation and resource management. Moving to GNM meant that the developers had to take on the burden there themselves, as O'Connor explains:

"The Crew uses a subset of the D3D11 feature-set, so that subset is for the most part easily portable to the PS4 API. But the PS4 is a console not a PC, so a lot of things that are done for you by D3D on PC - you have to do that yourself. It means there's more DIY to do but it gives you a hell of a lot more control over what you can do with the system."
 
That's kinda my point. Claims of 60% more performance from consoles compared to equivalent PC GPUs are totally unsubstantiated. More than that, they're directly countered by the best evidence that we have, i.e. the DF face-offs/performance analyses.



It is a fairly equivalent GPU to be fair... in every way except memory bandwidth. The 7770 comes with 2GB at 72GB/s. With the eSRAM, the XBO comes with at least 3GB at something in the 200GB/s+ range... is it really a big surprise that it outperforms a 7770?



That doesn't really have anything to do with the performance comparison. Min spec is a PC-specific measure.



Why? They compare at exactly equal quality settings (or as close as possible), which is supported by multiple screenshots and videos - which you can even zoom in on, courtesy of DF. Plus they provide HD videos showing real-time frame rate and individual frame-time metrics. What else could you possibly require? It's an absolutely brilliant comparison tool - exactly as it's designed to be. What specific problems do you have with it?

The problem I have with trying to use DF faceoffs to determine if consoles punch above their weight is that they don't use a PC with similar components to the consoles a majority of the time.

Sure, they may use a GPU with similar performance and architecture from time to time, but they hardly ever use a similar CPU. Hardware with similar architecture exists, but nothing on PC has true unified memory at console specs. Not to mention eSRAM.
 
No, they always use an overclocked 750 Ti (+200MHz on the GPU and +400MHz on the VRAM), so it's not so much faster even on paper, and Nvidia usually beats AMD handily if we compare cards with the same amount of FLOPS.

Even with that overclock, and assuming the card is capable of running 200MHz over its rated boost clock at all times, the PS4 still holds many advantages over the 750 Ti on paper, as follows:

Pixel Fill Rate: 125%
Texel Fill Rate: 112%
Geometry Rate: 62%
Memory Bandwidth: 166%
Shader FLOPS: 112%

Note the massive memory bandwidth advantage. Yes, Nvidia is often more efficient than AMD in terms of both FLOP utilisation and memory bandwidth, but with advantages like those above we'd still expect the PS4 to handily beat the 750 Ti in pretty much every situation even if it wasn't punching above its weight at all. What we see in reality is the 750 Ti quite often being able to keep up with the PS4 and only occasionally getting outperformed by a fairly narrow margin. So where is the 60%?
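
For anyone wanting to check those ratios, here's roughly how they fall out of the raw specs. The 750 Ti clocks are my assumption (reference boost plus the +200MHz/+400MHz overclock mentioned above), so treat it as a sketch rather than an exact reproduction of the table:

```python
# Rough on-paper PS4 GPU vs overclocked GTX 750 Ti comparison.
# Assumed clocks: 750 Ti at ~1285MHz (reference boost + 200MHz OC) with
# 5.8Gbps effective memory (5.4 + 0.4); PS4 at the usual 800MHz / 176GB/s,
# minus an assumed 20GB/s CPU allocation.

ps4 = {
    "pixel fill":  32 * 0.8,           # 32 ROPs * 0.8GHz = 25.6 Gpix/s
    "texel fill":  72 * 0.8,           # 72 TMUs * 0.8GHz = 57.6 Gtex/s
    "geometry":    2 * 0.8,            # 2 triangles/clock * 0.8GHz
    "bandwidth":   176 - 20,           # GB/s after the assumed CPU share
    "gflops":      18 * 64 * 2 * 0.8,  # 18 CUs * 64 ALUs * 2 ops = ~1843
}

ti_oc = {
    "pixel fill":  16 * 1.285,         # 16 ROPs at the assumed ~1285MHz
    "texel fill":  40 * 1.285,         # 40 TMUs
    "geometry":    2 * 1.285,          # 2 triangles/clock (assumed)
    "bandwidth":   5.8 * 128 / 8,      # 128-bit bus at 5.8Gbps = 92.8GB/s
    "gflops":      640 * 2 * 1.285,    # 640 CUDA cores * 2 ops = ~1645
}

for metric in ps4:
    print(f"{metric:10s}: PS4 = {ps4[metric] / ti_oc[metric]:.0%} of 750 Ti OC")
```

Depending on the exact boost clock you assume, the bandwidth figure lands a point or two off the 166% above, but the overall picture is the same.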

And if you take Rise of the Tomb Raider, a ~1550 GFLOPS 'FLOPS-efficient' Nvidia card is beaten easily by a ~1300 GFLOPS 'not FLOPS-efficient' AMD card.

That 'not FLOPS-efficient' AMD card is also connected to a theoretical 272GB/s of memory bandwidth compared with 92.8GB/s on the 750 Ti OC. If it couldn't beat the 750 Ti with that advantage, plus the fact that the game would have been architected specifically around GCN rather than Maxwell, then it'd be a very poor situation indeed. Problem is, ROTTR is a one-off; in most games the XBO actually performs worse than the 750 Ti. So where does that leave the 'console punching above its weight by 60%' theory?

And you are mistaken about eSRAM. It is only there to alleviate the low main memory bandwidth; it's not miracle hardware. Tens of comparisons with PS4 multiplats have shown this.

No one claimed anything about it being 'miracle hardware', but like it or not, it does give the XBO a massive bandwidth advantage over the 750 Ti (and R7 360), and ROTTR, being originally an XBO exclusive, would certainly have been optimised around that throughput. Comparisons to the PS4 don't change that fact, since even if the eSRAM only gives the XBO a memory throughput similar to the PS4's in the real world, it's STILL massively more than the 750 Ti or R7 360 have.
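
To put rough numbers on that claim (the eSRAM figure is the theoretical combined read+write peak, and the R7 360 number assumes reference memory clocks, so both are approximations on my part):

```python
# Peak memory bandwidth figures in GB/s. The eSRAM number is the theoretical
# combined read+write peak; sustained figures are lower. R7 360 assumes
# reference 6.5Gbps memory on a 128-bit bus.
xbo_ddr3  = 68.0
xbo_esram = 204.0
ti_oc     = 5.8 * 128 / 8   # 92.8 GB/s with the +400MHz memory overclock
r7_360    = 6.5 * 128 / 8   # ~104 GB/s

print(f"XBO DDR3 + eSRAM peak: {xbo_ddr3 + xbo_esram:.0f} GB/s")
print(f"GTX 750 Ti OC        : {ti_oc:.1f} GB/s")
print(f"R7 360               : {r7_360:.0f} GB/s")
```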

Comparing completely mismatched parts to prove a console advantage doesn't really prove anything. I don't doubt that games will be built for the strengths of the console architectures, so in the case of a game that started life as an XBO exclusive, that means it's going to be built around very high memory bandwidth. Unless you're comparing to a PC GPU that has similar capabilities (which the 750 Ti/R7 360 do not, even though they may be faster in other respects), then you're not proving the console punches above its weight, you're just proving that a GPU which is less capable on paper (even if just in one key respect) is less able to keep up in the real world in a game that depends on that capability.

When you can demonstrate that the PS4 is performing consistently in line with an R9 280 (and the XBO with something around an R7 270), only then will you be proving that consoles are generally punching above their weight by about 60%. And I haven't seen that demonstrated in even one game, never mind most.
 
PS4 memory bandwidth has to be shared between the Jaguar cores and the GPU, and there is some heavy contention. You should know this.

Don't take this 60% too seriously. It's PR and maybe valid only in one very specific benchmark. As long as we don't know which benchmark they (maybe) ran, it doesn't mean much, except that consoles punch above their weight.
 
Now imagine if they were testing everything on an AMD CPU/GPU combination (a 7850 with an 8350 underclocked to 1.8GHz [which is still ahead in power on paper compared to the X1/PS4 CPU]). I'm pretty certain that combination would almost always lose in face-offs, which is why they opted for Intel/Nvidia in the first place.

This comes back to whether you're asking if consoles punch above their weight as an entire system, including the CPU/memory (which I think they do), or whether they punch above their weight in pure GPU terms.

The answer to the second is that in theory they should, but in practice they usually don't by a significant amount, and pretty much never by 60% outside of very extreme corner cases (of which opposite examples also exist).
 
The problem I have with trying to use DF faceoffs to determine if consoles punch above their weight is that they don't use a PC with similar components to the consoles a majority of the time.

Sure, they may use a GPU with similar performance and architecture from time to time, but they hardly ever use a similar CPU. Hardware with similar architecture exists, but nothing on PC has true unified memory at console specs. Not to mention eSRAM.

Yeah, I'd personally like to see them do more direct comparisons using the likes of an R7 265 and an underclocked FX 8xxx. But then that becomes more of an academic comparison rather than one that's useful from a purchasing-decision point of view - which is what the face-offs are supposed to be about. In either case, though, they could do with being a bit more consistent in what hardware they test, and how they test it.

PS4 memory bandwidth has to be shared between the Jaguar cores and the GPU, and there is some heavy contention. You should know this.

I did know this. My numbers already take that into account, i.e. I'm comparing to a PS4 GPU bandwidth of 156GB/s, which assumes the PS4's CPU is using its maximum allocation of 20GB/s.
 
This comes back to whether you're asking if consoles punch above their weight as an entire system, including the CPU/memory (which I think they do), or whether they punch above their weight in pure GPU terms.
The 60% number between PC & PS4 should be the performance of the entire system, not GPU alone. Did SCE claim PS4 GPU alone is stronger by 60%?
 
I did know this. My numbers already take that into account, i.e. I'm comparing to a PS4 GPU bandwidth of 156GB/s, which assumes the PS4's CPU is using its maximum allocation of 20GB/s.

GPU bandwidth is hit disproportionately hard when the CPU accesses memory. CPU access is prioritised, so if the CPU were accessing 10GB/s of data, the total available to the GPU would be reduced by something far higher than that. Based on a graph Sony shared with developers, it looks like it might be as high as around double.
 
The 60% number between PC & PS4 should be the performance of the entire system, not GPU alone. Did SCE claim PS4 GPU alone is stronger by 60%?

No specific mention was made of it being a GPU comparison, so they may well have meant the entire system, but how do we break that down? Should we assume the PS4 can perform in line with a PC that has both CPU and GPU components that are 60% faster? Or the same GPU but a 60% faster CPU? Or 30% faster on each? And that's before we add memory complications in.

It seems a bit redundant to talk about the PS4 matching a PC with a 60% more powerful CPU, since pretty much all modern PC CPUs already comfortably exceed that. And since GPUs tend to be the focus of PC gaming performance, I suspect that was more what they had in mind when coming up with the 60% number.
 
The 60% number between PC & PS4 should be the performance of the entire system, not GPU alone. Did SCE claim PS4 GPU alone is stronger by 60%?

It's unlikely that all areas of the system would see exactly the same kind of performance advantage. Sony's claim comes from unspecified middleware, under unspecified circumstances.
 
GPU bandwidth is hit disproportionately hard when the CPU accesses memory. CPU access is prioritised, so if the CPU were accessing 10GB/s of data, the total available to the GPU would be reduced by something far higher than that. Based on a graph Sony shared with developers, it looks like it might be as high as around double.

On that basis, assuming a permanent loss of 20GB/s for the PS4 seems pretty fair. We could look at the worst-case scenario and assume the PS4's GPU loses 40GB/s thanks to its UMA, but even then it'd still have a greater-than-40% memory bandwidth advantage over the 750 Ti (and the XBO's advantage would be even higher).
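
A quick sanity check on those two scenarios, using the 176GB/s total and the 92.8GB/s overclocked 750 Ti figure from earlier (the 20GB/s and 40GB/s contention costs are the assumptions being discussed, not measured numbers):

```python
# PS4 GPU bandwidth under two assumed CPU-contention costs, compared with the
# overclocked 750 Ti figure used earlier.
total_bw = 176.0  # GB/s, PS4 GDDR5
ti_oc_bw = 92.8   # GB/s, 750 Ti with the +400MHz memory overclock

for cpu_cost in (20.0, 40.0):  # GB/s assumed lost to CPU traffic/contention
    gpu_bw = total_bw - cpu_cost
    print(f"CPU cost {cpu_cost:.0f} GB/s -> GPU gets {gpu_bw:.0f} GB/s, "
          f"{gpu_bw / ti_oc_bw - 1:+.0%} vs 750 Ti OC")
```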
 