Digital Foundry Article Technical Discussion Archive [2013]

They specifically said they weren't trying to recreate the consoles, only test a theory that 50% more GPU power doesn't mean 50% better performance as measured by frame rates.

Very nice of them to say that. But it doesn't mean much when the headline reads:
In Theory: Can Xbox One multi-platform games compete with PlayStation 4?
 
Well, I don't like that article:
1) Performance doesn't scale linearly with CU count, and they used parts that have more CUs than both Durango and Orbis respectively.
2) The two GPUs they chose have 32 ROPs and a 256-bit bus.
3) That is imho a bad premise: the consoles' CPU may prove a bottleneck, which such a high-end desktop chip hides. Jaguar may not suck, but a Core i7 with its 8 threads clocked that high is not in the same ballpark.

They are pretty clear on why they chose what they did, etc., but it doesn't cut it. They try too hard to match the specs and in the end fail to do it.
I think Bonaire vs the HD 7870, the whole thing powered by an old Athlon X4 at low speed, would paint a better picture, along with testing at various settings to try to outline when bandwidth becomes a concern, the impact of AA, of resolution, etc., and the impact of the CPU bottleneck.


I had a completely different takeaway, especially after you compensate for the well-known Sony bias of DF.

My takeaway is that there is a reasonable probability that the XBone will outperform the PS4. That probability depends on the efficiency gains of the move engines, esRAM, PRT, and DX11.2. If those features each provide meaningful performance gains, say 10 to 15%, then the XBone could have a meaningful advantage over the PS4. Couple that with playing with the resolution of the display planes and you might have some very interesting outcomes.
 
They specifically said they weren't trying to recreate the consoles, only test a theory that 50% more GPU power doesn't mean 50% better performance as measured by frame rates.
Well, it is a bad premise imo, and their results do show that; it's also a pretty limited picture by the way: ramp the resolution and you should see more differences.
And by the way, they compare two cards with 200% the ROP throughput of Durango; the difference is not only in extra ALU power.
Overall the whole article still pretends to rate Orbis vs Durango, and fails imo.

I think that Bonaire vs the HD 7870, both at 800MHz, would have been a better match (+ memory speed tweaks as much as doable), as we'd then speak of a ~5% difference in CUs.
Then we'd have 16 vs 32 ROPs. I would have wanted to see the impact of AA, frame-rate data, the impact of resolution as well as of CPU limitation, but also how those cards behave with soft V-sync.
I think that data like this could prove interesting too:
http://www.hardware.fr/articles/890-4/performances-theoriques-pixels.html
http://www.hardware.fr/articles/855-7/performances-theoriques-pixels.html
But those data are more or less available elsewhere, as those links show.

They also could have run some compute tasks (on the GPU) and measured the impact on performance, etc.

Anyway, not really their best article imo.
 
Richard says that the Xbox One is a more balanced system than the PS4:

According to inside sources at Microsoft, the focus with Xbox One was to extract as much performance as possible from the graphics chip's ALUs. It may well be the case that 12 compute units was chosen as the most balanced set-up to match the Jaguar CPU architecture. Our source says that the make-up of the Xbox One's bespoke audio and "data move engine" tech is derived from profiling the most advanced Xbox 360 games, with their designs implemented in order to address the most common bottlenecks. In contrast, despite its undoubted advantages - especially in terms of raw power, PlayStation 4 looks a little unbalanced by comparison. And perhaps that's why the Sony team led by Mark Cerny set about redesigning and beefing up the GPU compute pipeline - they would have seen the unused ALU resources and realised that there was an opportunity here to transform that into an opportunity for developers to do something different with the graphics hardware.

Why would the PS4 be an unbalanced system without the GPGPU customizations?
How is the Xbox One more efficient than the PS4, as Richard states, with the data move engines and SHAPE audio?
 
I had a completely different takeaway, especially after you compensate for the well-known Sony bias of DF.

My takeaway is that there is a reasonable probability that the XBone will outperform the PS4. That probability depends on the efficiency gains of the move engines, esRAM, PRT, and DX11.2. If those features each provide meaningful performance gains, say 10 to 15%, then the XBone could have a meaningful advantage over the PS4. Couple that with playing with the resolution of the display planes and you might have some very interesting outcomes.
As far as the GPU is concerned I fail to see how the PS4 could not pull ahead, whatever the relevance of that difference in the mass market.
 
Well, it is a bad premise imo, and their results do show that; it's also a pretty limited picture by the way: ramp the resolution and you should see more differences.
And by the way, they compare two cards with 200% the ROP throughput of Durango; the difference is not only in extra ALU power.
Overall the whole article still pretends to rate Orbis vs Durango, and fails imo.

I think that Bonaire vs the HD 7870, both at 800MHz, would have been a better match (+ memory speed tweaks as much as doable), as we'd then speak of a ~5% difference in CUs.
Then we'd have 16 vs 32 ROPs. I would have wanted to see the impact of AA, frame-rate data, the impact of resolution as well as of CPU limitation, but also how those cards behave with soft V-sync.
I think that data like this could prove interesting too:
http://www.hardware.fr/articles/890-4/performances-theoriques-pixels.html
http://www.hardware.fr/articles/855-7/performances-theoriques-pixels.html
But those data are more or less available elsewhere, as those links show.

They also could have run some compute tasks (on the GPU) and measured the impact on performance, etc.

Anyway, not really their best article imo.

I agree; it is a bit difficult for me to judge the value of this comparison, although I do think it is interesting in its own right. I am surprised by the outcome, as a theoretical performance increase of 50% sometimes resulted in only a little over a 25% real-life performance increase... that gap seems quite large at first sight imo.

But what I do not understand is the statement that we enter diminishing returns with increasing numbers of CUs. What is the reason for this? Is it hardware-related (e.g. amount of bandwidth) or software-related, i.e. graphics engines are not that highly parallelizable?
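One way to see where the diminishing returns come from is Amdahl's law: only the share of frame time that is actually CU-limited shrinks when you add CUs; the CPU-, ROP- and bandwidth-bound portions stay put. A minimal sketch, where the 60% GPU-limited fraction is an assumed illustrative number, not a measured one:

```python
def overall_speedup(gpu_limited_fraction: float, gpu_speedup: float) -> float:
    """Amdahl's law: only the GPU-limited share of frame time gets faster."""
    other = 1.0 - gpu_limited_fraction  # CPU, ROP, bandwidth, etc.
    return 1.0 / (other + gpu_limited_fraction / gpu_speedup)

# Assume 60% of frame time is ALU/CU-limited (hypothetical figure).
# A 1.5x CU boost then yields roughly 1.25x overall: ~25%, not 50%.
print(overall_speedup(0.6, 1.5))
```

Under that assumption, a 50% theoretical uplift collapsing to roughly 25% in practice, as the benchmarks showed, is exactly what you'd expect.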
 
If indeed increasing the CU count, even with extra bandwidth, TMUs and ROPs, doesn't scale performance linearly, there'll be a lot of idle transistors, and hopefully first parties will make use of that through GPU compute. I'm nearly certain multi-plats will not really dig much into that territory.

Well, whether or not multi-platform games use it might depend on who is making the different ports and the tools that they use. If the port is not in-house, will a dev merely try to make the PS4 version look as close as possible to the XBO port, or will they look to make the best port possible, based on the budget of course ;-) Compute may be ignored, or maybe not, depending on the incentives of the development house.

In the beginning, gaming engines will obviously work on a common set of resources, but over time compute will become more of a resource to be exploited on both systems, if only because it gives you some extra flexibility when it comes to juggling CPU/GPU time for each of those frames as they mercilessly make their way through the system 1/30th to 1/60th of a second at a time. Once compute becomes a "thing", you could see the engine leveraging the PS4's resources. Of course the same might be true of the ESRAM as well.

All of this is speculative of course, and in any case compute will lag behind the first-party implementations.
 
I had a completely different takeaway, especially after you compensate for the well-known Sony bias of DF.

Really, I have a completely different take, because they are specifically calling out the ESRAM as having 192GB/s rather than 102GB/s. The former is a rumour that comes about because of some unknown or downright unlikely phenomenon; the latter is a much more solid number.

They even gave the XB1 stand-in machine the benefit of the doubt when it came to overall bandwidth, which is arguably a finger on the scale for MS.

ADDED:

:D just noticed this:

An interpretation of Cerny's comment - and one that has been presented to us by Microsoft insiders - is that based on the way that AMD graphics tech is being utilised right now in gaming, a law of diminishing returns kicks in.

So the article is asking MS insiders to interpret what the Sony guy says. Sounds really Sony Biased LOL
 
I had a completely different takeaway, especially after you compensate for the well-known Sony bias of DF.

My takeaway is that there is a reasonable probability that the XBone will outperform the PS4. That probability depends on the efficiency gains of the move engines, esRAM, PRT, and DX11.2. If those features each provide meaningful performance gains, say 10 to 15%, then the XBone could have a meaningful advantage over the PS4. Couple that with playing with the resolution of the display planes and you might have some very interesting outcomes.

DMA backed by compression hardware (which the PS4 has as well) is not exclusive to the XBONE. Neither is PRT, nor are any gains the hardware has made (the DX11.2 API does not make the XBONE magically more efficient).

Only one of the four things you cite as giving so much efficiency is even exclusive to the XBONE. And I'm sorry to say it, but it is quite clear that DF has had a little less Sony bias lately and a little more MS bias; this is quite obvious with the unsubstantiated bandwidth-increase figures that are used in the article.
 
Richard says that the Xbox One is a more balanced system than the PS4:

According to inside sources at Microsoft, the focus with Xbox One was to extract as much performance as possible from the graphics chip's ALUs. It may well be the case that 12 compute units was chosen as the most balanced set-up to match the Jaguar CPU architecture. Our source says that the make-up of the Xbox One's bespoke audio and "data move engine" tech is derived from profiling the most advanced Xbox 360 games, with their designs implemented in order to address the most common bottlenecks. In contrast, despite its undoubted advantages - especially in terms of raw power, PlayStation 4 looks a little unbalanced by comparison. And perhaps that's why the Sony team led by Mark Cerny set about redesigning and beefing up the GPU compute pipeline - they would have seen the unused ALU resources and realised that there was an opportunity here to transform that into an opportunity for developers to do something different with the graphics hardware.

Geez lots of MS insiders in this article :smile:

While I do believe that there was lots of profiling of advanced Xbox 360 games and that the data was used to guide the designs of some/most/all of the modules, we have no idea one way or the other whether the PS4 is wildly unbalanced as a system.

One problem with designing your system around a specific set of solutions generated by profiling a specific set of Xbox 360 games is that you may be cutting yourself off from a range of flexibility when it comes to solving the problems and performance issues that will inevitably arise when dealing with a bunch of next-gen games rather than last-gen games.

This doesn't mean that Microsoft's idea is dumb, quite the contrary, since you gain a lot from the data produced by such things, but it may make the XB1 unbalanced in its own way.

Just to be clear: whatever bottlenecks MS avoids by profiling certain Xbox 360 games will still help quite a lot for next-gen games.
 
Geez lots of MS insiders in this article :smile:

While I do believe that there was lots of profiling of advanced Xbox 360 games and that the data was used to guide the designs of some/most/all of the modules, we have no idea one way or the other whether the PS4 is wildly unbalanced as a system.

"Wildly unbalanced" is certainly too strong, but the comments made by Cerny himself seem to clearly indicate that the PS4 is ALU-heavy and that those ALUs are likely to be poorly utilized if they aren't being fed GPU compute tasks on top of the traditional rendering workload.

The question then becomes: how are multiplatform developers going to be incentivized to do extra development to implement these extra compute tasks if they can expect roughly equivalent performance across platforms without it?
 
If the audio block in the Xbox effectively frees up the CPU, then GPGPU could come in there at least on PS4 for equivalency in multiplatform games. Or be used for audio.
 

This article reads like a textbook example of confirmation bias. If that's not Leadbetter's natural state I can only assume his inside sources at Microsoft are so far up his ass at this point he has become a human puppet.





Richard says that xbox one is more balanced system than the ps4 -

Why would the PS4 be an unbalanced system without the GPGPU customizations?
How is the Xbox One more efficient than the PS4, as Richard states, with the data move engines and SHAPE audio?

It's not. He's just wrong and grasping at straws to downplay any PS4 advantage. Ooh, Microsoft profiled games looking for bottlenecks. I'm sure that never occurred to Sony. Weird how it is so much harder to find a bottleneck in the ps4 design than the Xbox One's though. Wonder how that happened? Oh well, here are some random and meaningless benchmarks designed specifically to nullify the PS4's hardware advantages!
 
That we might see something less than 50% more pixels at the same update and quality in multiplatform titles. Seems obvious anyway, but that's what I took from the article.

I don't think the eSRAM will be as awkward as is suggested, at the very least it's a way better solution than the eDRAM implementation in Xbox 360.
 
I think the talk about the PS4 being unbalanced is a little foolish in light of it being a console. Console software can expand to take full advantage of the available hardware; I don't think there's really such a thing as an unbalanced system (ignoring extremes). It should be fairly straightforward for developers to add additional shader work to a game to take advantage of the additional power.

As for a system having too many CUs to be useful for graphics, this too is a bit of a ridiculous concept, even on a PC where the software isn't tailored to the hardware. Double the CU power will always result in double the performance in fully CU-limited scenarios. Or, put another way, if you double everything in a system, you're going to get double the performance; the only reason this won't be the case is if some part of the system hasn't been doubled and that's causing a bottleneck for the parts that have, or, in the case of dual GPUs, there is an associated overhead to the doubling. The PS4 won't achieve 50% more performance than the X1 because it's held back by a CPU and memory system that isn't 50% faster, just like the TR comparison.
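The bottleneck argument above can be modelled very simply: with the CPU and GPU pipelined, the slower stage sets the frame time, so scaling only the GPU helps until the CPU becomes the limiter. A toy sketch, where all the per-frame timings are made-up illustrative numbers:

```python
def frame_rate(cpu_ms: float, gpu_ms: float) -> float:
    """Pipelined CPU and GPU: the slower stage dictates the frame time."""
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 20.0                        # fixed CPU cost per frame (hypothetical)
base_gpu_ms = 25.0                   # GPU-bound to start with (hypothetical)
boosted_gpu_ms = base_gpu_ms / 1.5   # 50% more GPU throughput

print(frame_rate(cpu_ms, base_gpu_ms))     # 40 fps: GPU-limited
print(frame_rate(cpu_ms, boosted_gpu_ms))  # 50 fps: only +25%, CPU now limits
```

With these numbers, a 50% GPU uplift buys only 25% more frames, because past 20ms of GPU work per frame the un-doubled CPU takes over as the bottleneck.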
 
The next person to mention bias by GAF or DF or IGN or anyone else, in this or any other thread, is going to get a two weeks vacation on me. Every other discussion seems to be about the messengers rather than the messages. Furthermore, the PS4 vs XB1 debate has already been hit on the head (twice?) and this thread has to be very wary of getting axed if it's going to undertake such a comparison. I won't call it just yet, but given various unknowns, I think it's best to wait until at least launch until we start into that debate.
 
I think the talk about the PS4 being unbalanced is a little foolish in light of it being a console.
I agree. In simple terms, you can solve a problem with either more power (calculate it) or more storage (precompute it). You can't have an excess of compute power, because that compute power can always do other jobs. It's like saying the PS2 was imbalanced for its crazy BW and fill; it was solving the issue of 3D graphics in a different way. CUs are never going to sit idle in a console where devs can target them specifically, unlike on PC where they have to compromise for more generic setups. IMO 'balance' in a console is mostly about making life easier for the devs. Limitless processing power and no RAM, for example, may allow the same game to be rendered by computing everything on the fly, but that system would be a pig to develop for! Both consoles are looking okay at this point (except the blood-sucking OSes leeching the vibrancy and richness from our glorious game VRs!). It's actually surprising how much variety there still is, considering we were facing two PC clones early on.
 
Really, I have a completely different take, because they are specifically calling out the ESRAM as having 192GB/s rather than 102GB/s. The former is a rumour that comes about because of some unknown or downright unlikely phenomenon; the latter is a much more solid number.

They even gave the XB1 stand-in machine the benefit of the doubt when it came to overall bandwidth, which is arguably a finger on the scale for MS.

ADDED:

:D just noticed this:



So the article is asking MS insiders to interpret what the Sony guy says. Sounds really Sony Biased LOL

You read that completely wrong, fwiw. The MS insiders' interpretation of GCN capabilities matched DF's interpretation of Cerny's GPGPU claim wrt GCN.

192 is not a rumour, it's a claim. Roughly the same kind of claim as when MS said the eDRAM could get up to 256GB/s of bandwidth, but only when doing 4xMSAA.
 