Do consoles punch above their weight? Analyzing DF data *spawn

Yep. But also never forget that console versions of games that also run on PC are very often just straight ports, with fewer game-specific optimisations than have been done, say, in a 'game ready' driver by Nvidia or AMD, and then you could argue that frequently consoles may well be punching below their weight.

A great example is the recent Firewatch release, which runs on Unity. In the current version of the engine (5.4 is supposed to improve this) the main code runs basically single-threaded, so simple disc IO can cause in-game stuttering. A small development studio developing a game in Unity will have developed it on PC, and may well just be content that they can ship the game to console; beyond getting it to work with OK graphics settings at 30fps, they are not likely to spend any extra effort unless the game does really well. Or they may not even be able to fix the issue, due to limitations in the engine (as with that version of Unity).
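To make that concrete, here's a tiny sketch (plain Python, nothing to do with Unity's actual API, and the 40ms "read" is just simulated with a sleep) of why blocking disc IO on the thread that paces the frames produces a visible hitch, while handing the same read to a worker thread leaves frame pacing alone:

```python
# Toy illustration (not Unity code): a blocking read on the thread that drives
# the frame loop blows one frame out to 40ms+, while offloading it does not.
import threading
import time

def slow_read():
    time.sleep(0.040)          # simulated 40 ms disc access
    return b"asset data"

def run_frames(n, offload):
    worst = 0.0
    for _ in range(n):
        start = time.perf_counter()
        if offload:
            threading.Thread(target=slow_read).start()   # frame thread keeps going
        else:
            slow_read()                                   # frame thread stalls here
        # ...the rest of the frame's game/render work would happen here...
        worst = max(worst, time.perf_counter() - start)
    return worst

print(f"blocking IO   : worst frame {run_frames(5, offload=False) * 1000:.1f} ms")
print(f"worker thread : worst frame {run_frames(5, offload=True) * 1000:.1f} ms")
```

A real engine would of course use a persistent worker/job system rather than spawning a thread per request, but the point is the same: the stutter comes from where the wait happens, not from the IO itself.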
 
Why doesn't anyone build a PC with an HD 7850 (maybe overclocked a little), a Jaguar-equivalent CPU and 8GB of main RAM?

Then you would have the most accurate comparison results between PC and PS4.
 
Why doesn't anyone build a PC with an HD 7850 (maybe overclocked a little), a Jaguar-equivalent CPU and 8GB of main RAM?

Then you would have the most accurate comparison results between PC and PS4.

There's no such thing as an 8-core Jaguar CPU on the PC. The closest you can get is a 4-core Jaguar at 1.6GHz or an 8-core Athlon FX at 2.8GHz.

Also, it depends on whether you are trying to compare the equivalence of the whole system (in which case I don't think anyone would dispute that the console handily wins) or just the GPU component.

The absolute closest match in GPU terms to the PS4 GPU is the R7 265. Pairing that up with 6GB of system RAM and an underclocked Athlon FX 8100 would be about the closest you could get. That would certainly make for an interesting comparison.
 
Some games use a multiplatform engine like Unity or another third-party engine, and sometimes they aren't fully optimised for console. Other AAA games, like The Crew, may use GNMX (the equivalent of DirectX on PS4), or the high-level API on Xbox One, rather than the low-level GNM API or its Xbox One equivalent... It is very difficult to compare multiplatform games because economically it is better to make some compromises...

I think console optimisation will be better in 2016 and 2017 and so on. At least the consoles share the same CPU and GPU architecture...

And VR optimisation will be good on PS4 side... There is no choice...
 
There are actually two aspects to this debate - the hypothetical and the actual. Hypothetically, is it possible to extract more performance from console silicon (yes, but how much?), and do devs actually extract more bang per mm^2 given the constraints of running a game development business? In most cases I expect the console is somewhat capped in cross-platform titles for economic reasons. However, the launch point for this particular thread was a Sony statement that middleware is performing 60% faster on PS4. I suppose we should be looking at middleware and its performance then.
 
Consoles do punch above their weight, but a single large figure like 60% - with no specifics given - is always going to be controversial. In GPU limited scenarios that kind of figure doesn't appear to be representative of frame rates at the same resolutions.

I could easily believe that console APIs are 60% faster than DX11 on the CPU end though. Fingers crossed for DX 12 on the PC delivering better results on the CPU and higher utilisation of the GPU.
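To put a rough shape on the CPU-side argument, here's a toy back-of-envelope model (every number in it is invented for illustration, not measured from any driver or API): with a few thousand draw calls per frame, shaving the per-call submission cost makes a large difference to the CPU-bound frame rate even though the GPU workload is unchanged.

```python
# Back-of-envelope model of CPU-side submission cost. All numbers are made up
# for illustration; real per-draw costs vary wildly by driver and API.
def cpu_frame_ms(draw_calls, per_draw_us, other_ms=8.0):
    return other_ms + draw_calls * per_draw_us / 1000.0

draws = 3000
for label, per_draw_us in [("thick driver / DX11-style", 5.0),
                           ("thin console-style API", 2.0)]:
    ms = cpu_frame_ms(draws, per_draw_us)
    print(f"{label:26s}: {ms:5.1f} ms CPU frame -> {1000 / ms:5.1f} fps if CPU-bound")
```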
 
Consoles do punch above their weight, but a single large figure like 60% - with no specifics given - is always going to be controversial. In GPU limited scenarios that kind of figure doesn't appear to be representative of frame rates at the same resolutions.

I could easily believe that console APIs are 60% faster than DX11 on the CPU end though. Fingers crossed for DX 12 on the PC delivering better results on the CPU and higher utilisation of the GPU.
Hmm, I feel that it's on both the GPU end and the CPU end. From my understanding, not only does the larger overhead require the CPU to issue more commands, but the GPU is also receiving a lot of inefficient commands to work with. Reducing both is likely what leads to the large performance gains.
 
Some GPU commands are not exposed in DirectX 11 or DirectX 12 but are available in the console APIs.

See slide 57 of sebbbi's presentation...

http://advances.realtimerendering.c...siggraph2015_combined_final_footer_220dpi.pdf

If I'm reading that correctly (and I could easily not be), isn't it saying that those features can be exposed on PC via CUDA and OpenCL? Is it possible for a game to use both APIs to expose the additional features? I'm pretty sure some DX-based games use CUDA for physics, for example.
 
Yeah, that's been done for years. Just Cause 2 had CUDA depth of field, for example. And we now have Gameworks proprietary effects packages.
 
My previous post included the relative theoretical performance advantage the PS4 holds over the R7 360 in all the key areas. To summarise again:

Pixel Fill Rate: 152%
Texel Fill Rate: 114%
Geometry Rate: 76%
Memory Bandwidth: 148%
Shader Flops: 114%

So clearly just taking the shader flops advantage is an inappropriate measure. But even if we did, we'd see the PS4 should be performing 14% faster than the R7 360. In the Hitman performance analysis video it's performing roughly 16-30% faster, generally at the lower end of that scale. So where's the 60%? It's not even close. And that's if we ignore completely the PS4's much larger memory bandwidth and pixel fill rate performance.
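For anyone who wants to sanity-check those percentages, most of them fall straight out of the paper specs. The figures in this sketch are my own assumptions from public spec sheets, not anything from DF, and I've left memory bandwidth out because it depends on the exact card variant and on how much of the PS4's 176GB/s the CPU consumes:

```python
# Ratios of paper specs, using assumed figures: PS4 GPU = 1152 SPs @ 800 MHz,
# 32 ROPs, 72 TMUs; R7 360 = 768 SPs @ 1050 MHz, 16 ROPs, 48 TMUs; both parts
# assumed to rasterise 2 primitives per clock.
specs = {
    "shader flops":    (1152 * 2 * 800, 768 * 2 * 1050),   # SPs * 2 ops * MHz
    "pixel fill rate": (32 * 800,       16 * 1050),        # ROPs * MHz
    "texel fill rate": (72 * 800,       48 * 1050),        # TMUs * MHz
    "geometry rate":   (2 * 800,        2 * 1050),         # prims/clk * MHz
}
for name, (ps4, r7_360) in specs.items():
    print(f"{name:16s}: PS4 = {100 * ps4 / r7_360:.0f}% of R7 360")
```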



It's not really that hard. Yes a locked frame rate doesn't tell us much. But generally, "locked" games do dip below that lock regularly enough for us to draw a comparison to the PC GPU's performance at similar "low performance points". Effectively you are directly comparing minimum frame rates rather than average frame rates.

Additionally, you can compare on the basis of graphics settings. Even if the console maintains a rock solid 30fps, if a PC GPU can maintain the same perfect lock, but also run at higher graphical settings, we can infer it is performing better. It's not as if the console developers would have left those free graphical upgrades on the table if the GPU could handle them.
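To make the "compare the low points, not the averages" idea concrete, here's a minimal sketch of the kind of calculation that boils down to - the frame-time lists are entirely invented, purely to show the mechanics:

```python
# Given two frame-time logs (ms per frame), compare the worst few percent of
# frames rather than the mean. Values below are invented for illustration.
def lows(frame_times_ms, worst_fraction=0.01):
    worst = sorted(frame_times_ms, reverse=True)
    n = max(1, int(len(worst) * worst_fraction))
    return 1000.0 / (sum(worst[:n]) / n)      # "1% low" expressed as fps

console = [33.3] * 95 + [45.0, 47.0, 50.0, 52.0, 55.0]   # "locked" 30 with dips
pc      = [33.3] * 98 + [36.0, 38.0]                     # "locked" 30, milder dips
print(f"console 1% low: {lows(console):.1f} fps")
print(f"pc      1% low: {lows(pc):.1f} fps")
```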

I think it's fair to say that consoles punch well above their weight in CPU and overall memory terms (although DX12 may re-balance things there) however in GPU terms I see no evidence at all in support of the 60% figure or anything close to it.
My problem with this whole thread is trying to compare two AMD consoles with an Nvidia desktop GPU, as well as low-power AMD CPUs with Intel desktop CPUs.

The consoles will always punch above their weight if the software is being optimized. The only true way to find out is to run a multiplat game on both consoles and two PCs with specs that match the consoles as closely as possible.

With the consoles having shared memory and 8-core AMD Jaguars I don't believe an exact comparison will ever be possible.

IMO it is obvious that these consoles' GPUs outperform their closest PC counterparts in most cases if you factor in things like quality settings and not just resolution and frame rates.
 
My problem with this whole thread is trying to compare two AMD consoles with an Nvidia desktop GPU, as well as low-power AMD CPUs with Intel desktop CPUs.

I'm not sure if I've missed something here, but the post to which you just replied firstly compared both AMD consoles to an AMD GPU, and secondly specifically pointed out that consoles perform better in terms of theoretical CPU capability. So were you trying to agree or disagree with the post?

The consoles will always punch above their weight if the software is being optimized.

So what you're saying is that if one platform receives optimisation, and the other doesn't, it will perform better.... I'd have to agree with that. Of course, the actual test will be real world results.

With the consoles having shared memory and 8-core AMD Jaguars I don't believe an exact comparison will ever be possible.

I think we're all agreed on this one.

IMO it is obvious that these consoles' GPUs outperform their closest PC counterparts in most cases if you factor in things like quality settings and not just resolution and frame rates.

Maybe you should go ahead and post the evidence that makes this so obvious. Because as far as I can see, the vast majority of evidence (in the form of Digital Foundry face-offs / performance analyses, which specifically "factor in things like quality settings and not just resolution and frame rates") shows the exact opposite of what you're stating above.
 
OK, DF has mostly been using a rig with an Nvidia GPU to compare directly against console ports.

As my post outlined, there is no absolute proof that the consoles' GPUs punch above their weight. That's why I wrote IMO.

The reason I believe they punch above their weight has more to do with their APIs and every system having the same components.

In a majority of DF comparisons the consoles are usually running settings that are comparable to the PC's high settings.

I have no evidence, but I doubt you could achieve the same settings on an underclocked AMD Athlon with a 7770 GPU.
It might be possible in some multiplat titles, but I seriously doubt you could run Rise of the Tomb Raider at Xbox One quality settings on a rig like that.
 
The reason I believe they punch above their weight has more to do with their APIs and every system having the same components.

I agree that the theory says that should be the case. However when looking at the evidence, it doesn't seem to match up. I'm as surprised by that as anyone, but ultimately we need to go where the evidence points.

In a majority of DF comparisons the consoles are usually running settings that are comparable to the PC's high settings.

While generally true, that doesn't really say anything unless we know what "high" equates to. More accurately, the DF comparisons *always* attempt to run at equivalent settings across PC and console, and those settings generally correspond to "high" on the PC, which generally means one setting below maximum. Although lately, it seems the consoles have been falling more in line with the "medium-high" setting, which is more like 1.5 settings below maximum. That's exactly what we'd expect as the generation moves on.

I have no evidence, but I doubt you could achieve the same settings on an underclocked AMD Athlon with a 7770 GPU.

No evidence is needed for such a claim. A 7770 is a weaker GPU than either console at stock speeds. An underclocked version shouldn't, and wouldn't, stand a chance. The question is, how does an R7 265 or R7 360 compare? They are pretty much on par with the PS4 in theory. Have we seen them performing significantly worse, on a regular basis? I'd imagine there is plenty of source material on DF to check that out.

It might be possible in some multiplat titles, but I seriously doubt you could run Rise of the Tomb Raider at Xbox One quality settings on a rig like that.

ROTTR is a corner case, more than likely because it plays to the XBO's strength: its eSRAM, which even more powerful PC GPUs like the R7 350 can't match if the game has been designed around it. That's exactly where optimisation comes into play and the big performance advantages show themselves. But I'd argue that's not "console vs PC optimisation" and more "optimising for one architecture (that features high-speed eSRAM) vs another architecture (that doesn't)". However, the same can and does apply in reverse (most multiplat games aren't optimised to make the most of the eSRAM).

Take a game that looks (arguably) better than ROTTR, such as SW: Battlefront, which (maybe) plays to the strengths of the PC architecture. There you see GPUs that are theoretically in line with the console GPUs performing in line with them, or even better.
 
I understand your points, sir. I still don't think there is any absolute proof of either of our views.

Also, I was referring to an underclocked 4-core Athlon CPU and a stock 7770, which I believe runs at a higher clock rate than the Xbox One's GPU. One reason I mentioned the 7770 is that it's the minimum-spec recommended GPU for ROTTR.

I don't think DF can truly provide us with any conclusive answer to the question.
 
I understand your points, sir. I still don't think there is any absolute proof of either of our views.

That's kinda my point. Claims of 60% more performance from consoles compared to equivalent PC GPUs are totally unsubstantiated. More than that, they're directly countered by the best evidence that we have, i.e. the DF face-offs/performance analyses.

Also, I was referring to an underclocked 4-core Athlon CPU and a stock 7770, which I believe runs at a higher clock rate than the Xbox One's GPU.

It is a fairly equivalent GPU, to be fair... in every way except memory bandwidth. The 7770 comes with 2GB at 72GB/s. With the eSRAM, the XBO comes with at least 3GB at something in the 200GB/s+ range... is it really a big surprise that it outperforms a 7770?

One reason I mentioned the 7770 is that it's the minimum-spec recommended GPU for ROTTR.

That doesn't really have anything to do with the performance comparison. Min spec is a PC specific measure.

I don't think DF can truly provide us with any conclusive answer to the question.

Why? They compare at exactly equal quality settings (or as close as possible), which is supported by both multiple screenshots and videos - which you can even zoom in on, courtesy of DF. Plus they provide HD videos showing real-time frame rate and individual frame-time metrics. What else could you possibly require? It's an absolutely brilliant comparison tool - exactly as it's designed to be. What specific problems do you have with it?
 
According to DF, some recent games have been performing better on console compared to their usual overclocked i3 + 750 Ti (GPU + VRAM).

Battlefront (slightly better perf on PS4):
Tom wheels out the Digital Foundry budget PC in an attempt to match Star Wars Battlefront at PS4-level quality settings - and performance. Even with an overclock in place, our trusty rig didn't quite make the grade, requiring an impromptu upgrade.

Hitman (much better perf on PS4):
Hitman...both cards are significantly outperformed by the PS4 here...my take...advantage due to the closed box...etc.

Rise of the Tomb Raider on XB1 (much better perf on XB1 compared to the 750 Ti OC):

First and foremost, the venerable DF budget PC with an i3 processor and GTX 750 Ti finally met its match with Rise of the Tomb Raider. Running with settings similar to Xbox One, we found that testing areas saw the game turn in frame-rates between 13 and 25fps.

They also said that The Division runs better on consoles than on a similar PC, but I don't remember in which video I heard it.
 
All true, but the PS4 is much faster than a 750 Ti on paper. The real surprise is that it doesn't always vastly outperform that GPU... especially if this 60% advantage is real, in which case the PS4 should be outperforming the GTX 960. How often have we seen that? In Battlefront the PS4 is likely underperforming (I haven't run the specific numbers), in Hitman my previous couple of posts have shown that it's performing roughly in line with its theoretical specs, and ROTTR is a great showing for the XBO... until you consider the advantage its eSRAM brings, in which case its performance seems pretty predictable, and nothing like the 60% advantage it "should" be gaining.
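Rough numbers behind "much faster on paper", using my own assumed base-clock specs (with the usual caveat that comparing raw FP32 flops across GCN and Maxwell architectures is a very blunt instrument):

```python
# Assumed paper specs: PS4 = 1152 SPs @ 800 MHz, GTX 750 Ti = 640 cores @ 1020 MHz,
# GTX 960 = 1024 cores @ 1127 MHz (base clocks, FP32, 2 ops per core per clock).
tflops = {
    "PS4":        1152 * 2 * 800  / 1e6,   # ~1.84 TFLOPS
    "GTX 750 Ti":  640 * 2 * 1020 / 1e6,   # ~1.31 TFLOPS
    "GTX 960":    1024 * 2 * 1127 / 1e6,   # ~2.31 TFLOPS
}
print(f"PS4 vs 750 Ti on paper : {tflops['PS4'] / tflops['GTX 750 Ti']:.0%}")
hypothetical = tflops["PS4"] * 1.6          # if the claimed 60% uplift were real
print(f"PS4 * 1.6 = {hypothetical:.2f} TFLOPS vs GTX 960's {tflops['GTX 960']:.2f}")
```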
 