Technical Comparison: Sony PS4 and Microsoft Xbox One

That bodes well for the possibility of utilising the IGPs for compute work alongside discrete GPUs for graphics in the future.

That was my guess when I was thinking about APUs. APU + GPU = supercharged CPU + APU, but I guess that is not the real purpose of APUs.
 
Makes everything look bad, actually.

http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/6

Looking at 768p (as it's actually playable), the i7-4950HQ is at 39 fps. It beats everything but the GT 650M, and it does much better than even the desktop Trinity.

Would love to see a next-gen Radeon or GeForce come with 128 MB of eDRAM/SRAM built in. The die size is 264 mm2 + 84 mm2. It's really nice.
After reading the article I can safely assume that the eSRAM of the Xbox One compares favourably with it.

But comparisons are meaningless in this case. It's like comparing apples to oranges.

Gigaflops are also kinda meaningless in this case, not to mention one of them is classic EDRAM while the other is eSRAM.

Well, as Anand points out, Iris Pro has just slightly less than double the ALU resources of the HD 4600, and the compute performance seems to scale pretty much perfectly with that. If the eDRAM were providing an additional benefit (other than providing sufficiently increased bandwidth to allow the ALUs to scale) then we should have seen an even greater performance increase.
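For what it's worth, here's a rough peak-throughput sketch of that scaling argument. The EU counts are the published ones, but the clocks and the 16 FLOPs/EU/clock figure for Gen7.5 are my own assumptions, so treat the exact ratio as approximate:

[code]
# Back-of-envelope peak FP32 throughput for the two Intel IGPs.
# Assumes a Gen7.5 EU issues 2 x SIMD4 FMA per clock = 16 FLOPs/EU/clock.
def peak_gflops(eus, clock_ghz, flops_per_eu_clock=16):
    return eus * flops_per_eu_clock * clock_ghz

hd4600    = peak_gflops(eus=20, clock_ghz=1.25)   # ~400 GFLOPS (assumed turbo clock)
iris_5200 = peak_gflops(eus=40, clock_ghz=1.30)   # ~832 GFLOPS (assumed turbo clock)
print(hd4600, iris_5200, iris_5200 / hd4600)      # roughly a 2x ratio
[/code]

If the measured compute numbers land close to that ratio, the eDRAM isn't adding much beyond feeding the extra ALUs, which is the point above.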

That said, I have no idea how the latency of Crystalwell compares to the eSRAM in Xbone.

What we do seem to be able to take from this, though, is that Intel's compute performance seems to be pretty spectacular - maybe even better than GCN! That bodes well for the possibility of utilising the IGPs for compute work alongside discrete GPUs for graphics in the future.
As blakjedi pointed out already, one of the main differences is that the eSRAM in Xbox One is completely GPU-centric, :eek: while the EDRAM in Haswell is fully shared between all the cores of the CPU, the GPU, and additional media.

Not to mention Iris features classic EDRAM compared to the 6T-SRAM in Xbox One.

I think a much, much better and more interesting comparison would be comparing the Haswell GT3 (Iris 5100) with the Haswell GT3e (Iris Pro 5200) side by side, because one of them doesn't include the eDRAM while the other does. :smile:

Besides that, the eDRAM is a separate die from the main microprocessor.
 
Iris Pro does really well in compute. I wonder if that is anything related to the eDRAM? After all, we have heard that compute shaders should benefit the most from the eSRAM.

http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/17

[Image: AnandTech compute benchmark chart]


Anand does say this

"Compute" in this graph is a ray tracer. All ray tracers are RAM latency benchmarks. It's no surprise halved latency (DDR3 vs. eDRAM) for the first 128MB in your scene gives you a huge boost. Checking http://www.luxrender.net/wiki/LuxMark it says:
"2 new benchmark scenes, for a total of 3 benchmarks with a raising complexity (~200,000, ~500,000, ~2,000,000 triangles);"
and "Room is a 2,000,000+ triangles benchmark. This scene has been designed by Mourelas Konstantinos "Moure" (http://mourelask.weebly.com/).
Note: Room scene is extremely complex and is available only on 64bit executables. Old and/or low-end GPUs may be unable to render this scene."

I wonder how big the working set for that "big" scene is. Two million triangles are, naively, at 3 vertices/triangle and 4 bytes/vertex, approximately 24 MB; add some acceleration structures on top of that, and the frame buffer. I'm not downloading the benchmark, but maybe someone knows how much RAM it really touches per frame.
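A minimal sketch of that back-of-envelope estimate (the vertex size, the acceleration-structure multiplier and the framebuffer format are my assumptions, not measurements of LuxMark):

[code]
# Naive working-set estimate for the ~2M triangle "Room" scene.
triangles      = 2_000_000
verts_per_tri  = 3     # no vertex sharing assumed
bytes_per_vert = 4     # the deliberately low figure from above; xyz floats would be 12

geometry = triangles * verts_per_tri * bytes_per_vert   # ~24 MB
accel    = 2 * geometry                                  # assumed BVH / acceleration-structure overhead
fb       = 1920 * 1080 * 16                              # assumed float RGBA accumulation buffer, ~32 MB

print((geometry + accel + fb) / 2**20, "MB")             # ~100 MB, still within a 128 MB cache
[/code]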

It would be nice to have LuxMark numbers for GT3e CPU and GPU and for GT2 CPU and GPU. The CPU should be getting a boost from the eDRAM, too.

A cursory check of the results DB http://www.luxrender.net/luxmark/ is confusing; it seems I can't match Anand's result. Maybe he used some custom benchmark?
 
As blakjedi pointed out already, one of the main differences is that the eSRAM in Xbox One is completely GPU-centric, :eek: while the EDRAM in Haswell is fully shared between all the cores of the CPU, the GPU, and additional media.

I doubt CPU access to the edram will have any appreciable impact on the graphics performance.

Not to mention Iris features classic EDRAM compared to the 6T-SRAM in Xbox One.

True, we don't know how the latency compares to the eSRAM in Xbone but I'm assuming pretty poorly.

I think a much, much better and more interesting comparison would be comparing the Haswell GT3 (Iris 5100) with the Haswell GT3e (Iris Pro 5200) side by side, because one of them doesn't include the eDRAM while the other does. :smile:

That will tell us how useful the extra bandwidth is, which is interesting in itself, but I'm not sure how that tells us anything useful about the eSRAM in the Xbone (other than the obvious: extra bandwidth is good).
 
:cool: The only way to know would be if the Xbone GPU were based upon Haswell and had 128 MB of eDRAM instead of 32 MB of eSRAM.

According to the article, 32 MB of eDRAM in Haswell was more than enough, but they wanted this new approach to be future-proof.

So they added 128MB and future CPUs will include that amount as well. Sounds convincing to me, because of backwards compatibility and so on.

I am more interested in seeing how the Xbox One actually performs as a console. Another Epic UE4 demonstration, this time running on the Xbone, would certainly be epic, and a Digital Foundry article on the subject afterwards would be a resounding win-win.

PS4 is just more powerful. Period. :smile2: But it is going to be a very interesting generation. Developers say so.

We asked Avalanche’s chief technical officer Linus Blomberg how the two consoles compare. “It’s difficult to say, as it’s still early days when it comes to drivers,” he told us.

“With each new driver release, performance increases dramatically in some areas. The PlayStation 4 environment is definitely more mature currently, so Microsoft has some catching up to do. But I’m not too concerned about that as they traditionally have been very good in that area.

"The specs on paper would favour the PS4 over the Xbox One in terms of raw power, but there are many other factors involved, so we'll just have to wait and see a bit longer before making that judgment."
 
You need to look at the prices Intel is charging for Iris Pro chips before making that statement. Intel is selling them for $487.
If people think that the difference between an APU made for the PC and the specialized hardware of any modern console - PS4, Xbone - is going to be small judging by the gigaflops, then I think they're in for a surprise.
 
They could use the GPU on the PS4 to do what is done by SHAPE on the X1, so the graphics will be almost on par, if the X1 doesn't use cloud computing, of course. And if the clock is 800 MHz.

Too many 'ifs'.


Scenario 3:
Microsoft pays third-party software houses to use cloud computing; how will they change the PS4 versions?
Lower framerate with everything running locally, or the same framerate but everything similar to a disconnected X1?
 
Let's see if more than 5 people actually use this, since there aren't too many Vitas out in the wild.

I have one, so we just need 4 more.

Thing is, getting value out of this doesn't require Sony to sell 30 million Vitas, since it's going to be required for EVERY PS4 game that doesn't use the PSEye.
 
Let's see if more than 5 people actually use this, since there aren't too many Vitas out in the wild.
And that's the point. It's a feature that might actually make some people buy a PSVita to go along with their PS4. Can't hurt. Synergy is no one-way street.

See ShadowRunner's earlier post.
 
Android can already pair with the PS3 controller, and maps everything to the native game-controller inputs for games. I don't see why it wouldn't work for the Xbone controller if it's Bluetooth. Even more so for Windows phones, as I suppose they have a games API for such inputs. Still cumbersome; ideally they'd need to make a slim clip-on.

That might be a good alternative, support pairing of the DS4 to every Android and iOS device.

Then do the streaming to the most popular devices out there, instead of limiting to the Vita, which people aren't going to buy solely for gaming.

Rather than try to encourage people to buy Vita, supporting the way more popular phones and tablets out there will encourage sales of the PS4.
 
Far more important question is which console is going to serve as the lead platform? Last time around it was the 360 because it came out a year earlier and was easier to develop for, which doesn't apply this time. Will it be the Xbox One and everything will "scale up" to the PS4 and PC or will it be the PS4 and everything "scales down" to the Xbox One? Or is it just dynamic resolution such that the versions are largely the same?

I do not think there is gonna be a "lead platform" next gen. Most competent devs have given up that strategy.
 
This is speculation of yours. I think that such a powerful audio block is a surprise for everyone; probably Sony was happy with the X-Fi audio chip in the PS3, and the chip in the PS4 is the same, why not?

And this is speculation of yours. You have no idea how capable the PS4 audio chip is, nor do you comprehend the technical information you keep repeating. Your assumption that the PS4 will be at some kind of audio disadvantage is literally baseless. It doesn't matter how advanced you imagine SHAPE to be, you have no way to make a comparison. And even if we grant your premise, your conclusions don't follow from your argument.
 
Rather than try to encourage people to buy Vita, supporting the way more popular phones and tablets out there will encourage sales of the PS4.
Just "supporting" the "way more popular phones and tablets" for the sake of it doesn't seem like Sony's vision, though. They want to enable you to play (and enjoy) almost any PS4 game on their (proprietary) handheld gaming device.

Good luck playing a game programmed to work with two analogue sticks on your iPhone. It doesn't make any sense, wouldn't play well, and would be a pain in the butt for the devs to support (i.e. input-scheme-wise; I reckon Gaikai would be technically capable of the streaming task in general). The fact that it wouldn't sell any PSVitas is just the last nail in the coffin of that idea.

The decision to limit that kind of streaming compatibility to the PSVita is both economically viable for Sony and gameplay-wise reasonable for their customers - and I personally think it's WAY more interesting, straightforward, and relevant to be actually able to PLAY almost any PS4 game on my PSVita than to have, say, SmartGlass displaying my Halo stats on my tablet while playing Halo while watching Star Trek while ordering tickets for Into Darkness while Skyping with a friend.

Playing games just isn't like listening to the radio while doing some other stuff. That's why Sony's overall vision with the PS4 (including the way they implement the PSVita) makes a lot of sense - and the technical decisions they made based upon that vision are just very consistent with (and true to) their approach. I like their focus.
 
And this is speculation of yours. You have no idea how capable the PS4 audio chip is, nor do you comprehend the technical information you keep repeating. Your assumption that the PS4 will be at some kind of audio disadvantage is literally baseless. It doesn't matter how advanced you imagine SHAPE to be, you have no way to make a comparison. And even if we grant your premise, your conclusions don't follow from your argument.

This is the point, Brad. I'm not imagining anything, I just listen to those who contributed to building SHAPE, and I hope bkillian can reply in this discussion.
He and others have already talked about this topic: what the differences are between old audio hardware and what the new SHAPE can do, and what the PS4 can do using its CPU and GPU to give similar results.
I just have not given any conclusion.

From what I know from the confirmed specs:

X1
768 ops/cycle
Clock GPU: unknown

8-core CPU
type of CPU: almost surely Jaguar / customized Jaguar
Clock: unknown, maybe 1.6-2 GHz

SHAPE with 100+ fully effected 3D-positioned voices (a different stream for each headset)

memory: 8 GB RAM

+ cloud computing (+3x the local console for a total of 4x, 300,000 servers around the world)

PS4
1.8 TF machine
Clock GPU: 800 MHz (18 CU)

8-core CPU
type of CPU: Jaguar
Clock: unknown, maybe 1.6-2 GHz

Audio: unknown

memory: 8 GB GDDR5

This is what we know for sure, right?
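As a quick sanity check on where those TF figures come from (a sketch, assuming GCN-style CUs with 64 ALUs each and counting an FMA as 2 FLOPs; 768 ops/cycle works out to 12 CUs x 64 lanes):

[code]
# Peak FP32 throughput from CU count and clock for a GCN-style GPU.
def tflops(cus, clock_ghz, alus_per_cu=64, flops_per_alu=2):
    return cus * alus_per_cu * flops_per_alu * clock_ghz / 1000

print(tflops(18, 0.80))   # PS4: 18 CUs @ 800 MHz  -> ~1.84 TF (the "1.8 TF machine")
print(tflops(12, 0.80))   # X1:  12 CUs @ 800 MHz  -> ~1.23 TF
print(tflops(12, 1.05))   # X1:  12 CUs @ 1.05 GHz -> ~1.61 TF (the scenario-b figure)
[/code]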
This opens up a lot of different scenarios:


a) Clock of the X1 GPU = 800 MHz, cloud computing for some reason disappears, PS4 has a monster audio block similar to SHAPE

Then we should see an advantage in graphics on PS4 in the long run (the lead platform is the lowest common denominator, as always in history).

b) Clock of the X1 GPU = 1050 MHz, cloud computing for some reason disappears, PS4 has a monster audio block similar to SHAPE

Graphics almost on par, 1.8 vs ~1.6 TF on the GPU, same CPU, etc.

c) Clock of the X1 GPU = 800 MHz, cloud computing gives 4x the processing capabilities of a single local console, PS4 has a monster audio block similar to SHAPE

I see developers doing the same game for both but adding some features on X1 (AI, physics, some kind of lighting),
but first-party games shining.

d) Clock of the X1 GPU = 1050 MHz, cloud computing gives 4x the processing capabilities of a single local console, PS4 has a monster audio block similar to SHAPE

This will probably kill the PS4 version of multiplatform games, though not in the first generation, because the lead platform is the lowest common denominator.

e) Clock of the X1 GPU = 1050 MHz, cloud computing gives 4x the processing capabilities of a single local console, PS4 does NOT have a monster audio block similar to SHAPE

Advanced audio routines rely on one or two CUs within the PS4, taking its GPU to ~1.6-1.7 TF for graphics;
as in (d), but there could already be an advantage for X1 in the first generation.

So leaving aside any clock argument, it will be complex to speculate on framerate, maybe impossible, as there are too many factors that we don't know about;
we don't know about the main parts, let alone what we can know about bottlenecks in the systems and other important factors.

This reminds me of the X360/PS3 question: the PS3 was talked about as being 2x the X360, remember?
Reality said quite the contrary; most of the multiplatform games looked better on X360.

This is because a lot of factors are involved. Are those facts or my conclusions, Brad?


And how in the world does a 50% faster GPU mean 50%+ FPS?
Does the rest of the system not matter?

Let's take a GPU-intensive game like the first PC version of Crysis (I think the game that relies most heavily on the GPU in our recent history).

The 6990 is not 50% but 100%+ faster than the 6970.
And how does that play out in the real world?

[Images: Crysis benchmark charts at 1024x768 and 1280x1024]


So using a GPU that is 100%+ faster, the gain is 4 FPS at 1024x768 (5%) and 24 FPS at 1280x1024 with AA on (20%).

So in the worst scenario, (a), do you really think you'll see 30 FPS vs 45 FPS, or 30 FPS vs 32-36 FPS?
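The underlying point can be put as a simple frame-time split (a sketch; the CPU/GPU millisecond split is made up for illustration, not taken from Crysis):

[code]
# If part of each frame is CPU/engine-bound, a faster GPU only shrinks the GPU part.
def fps(cpu_ms, gpu_ms, gpu_speedup=1.0):
    return 1000.0 / (cpu_ms + gpu_ms / gpu_speedup)

cpu_ms, gpu_ms = 20.0, 13.3           # assumed split at a ~30 fps baseline (33.3 ms/frame)
print(fps(cpu_ms, gpu_ms))            # ~30 fps
print(fps(cpu_ms, gpu_ms, 1.5))       # 50% faster GPU -> ~35 fps, not 45
print(fps(cpu_ms, gpu_ms, 2.0))       # 2x faster GPU  -> ~37.5 fps
[/code]

Which is why the 6990 only gains a few FPS at the CPU-bound 1024x768 setting above.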
 
Or maybe it is proof that Intel is not good at making GPUs?
That's quite an inaccurate statement; I don't get how you got there.
Intel just put AMD's APUs to shame; there is no way AMD can catch up without relying on expensive (and more power-hungry) GDDR5 solutions. Actually, at iso-power Intel leaves everybody behind... by a country mile; when it comes to compute, the level of performance they provide is stellar.

How is that relevant to Durango? I'm not sure that it is relevant at all; those are significantly different architectures.
 

I don't think MS sees themselves as competing with Sony at this point as much as they are trying to move in a direction where they can have the XB1 in hundreds of millions of homes, at which point they can justify the expense of IPTV exclusivity with ESPN and HBO, for example.

Looking at this from the standpoint of specs misses the point; if MS were competing with Sony on specs, they never would have designed the XB1 this way.

Gaming will be one leg of the stool, therefore the specs are good enough, but again, that is missing the whole point. There are other legs holding up the stool, and it makes more sense to try to understand what they are.
 