Technical Comparison: Sony PS4 and Microsoft Xbox One

For compatibility reasons, yes. A lot of them have 64-bit versions, though. We'll have to wait and see what the consoles have in store for us, being able to program fully for 64-bit from the word go. Wouldn't surprise me to see some of the PC ports of those games require a 64-bit OS. Doesn't bother me in the slightest, I've been running 64-bit for a few years now.
 
Uhm, how much RAM do you think games will use when they still mostly have to support PC gaming, which runs primarily on 32-bit OSes with a 2GB limit on application address space?
This is the most astonishing paradox I've heard of in a long while. The paradox of chaos.

It sounds as if now the computers are holding back consoles, and not the other way around.

I can see it already. Once the PlayStation 4 and Xbox One come out, there will be new graphics settings on PC:

Low

Medium

High

Ultra

AND

Consoles

The Consoles setting matches all the features and effects of the console versions, and only super high-end PCs can achieve it. :???:
 
Sony spent a lot of time trying to reduce the OS footprint and give developers more RAM to play with...

The OS was 120MB at launch, and that was eventually dropped to 50MB...

There's no way that a new OS offers enough new features over what PS3 can do to warrant such a massive increase in RAM use.
That is still bigger than 360's 32MB, and look at what MS managed to squeeze out of that. Sony may need the extra space to work in initially.
 
Your display planes have yet to be proven to be anything more than QoS; you throw something out as a myth with nothing but your interpretation (which has been incorrect in the past) to back yourself up.

They explicitly explain what they mean by QoS in the context of the display planes in the patent. It's not my "interpretation", it's precisely what they mean by that term and the entire motivation of the invention in the first place. I suggest you go back and read the patent sometime.

And my interpretation of the display planes has been incorrect how? Just because you assume that the use they describe is somehow limiting doesn't mean it is. Given how they work there is zero reason to imagine that.

Uh, what are you talking about? The PS4 has at least 156GB/s to the RAM; the XBONE has a max of 102GB/s. And you might not even be fill limited, it all depends on where the limitation lies.

Don't think so. X1's GPU sees the main RAM and eSRAM bandwidths in parallel, last I checked. Wouldn't that mean that so long as the DMEs keep the eSRAM packed, X1's GPU would effectively be able to see up to 170GB/s? Offer an explanation with your reply if ya don't mind. I'm not interested in yes/no answers.



AMD seemed to think that 153.6 GB/sec was enough bandwidth to feed a 7850 (also 32 ROPS). That configuration manages to attain 1080p60 in plenty of games, without the benefits of being a custom architecture, so I'm thinking the PS4 with more bandwidth will be fine.
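
As a rough sanity check of that comparison (my own back-of-the-envelope numbers, assuming 60fps, the peak figures quoted above, and ignoring CPU traffic and real-world efficiency losses):

Code:
# Per-frame bandwidth at 60fps for the quoted peak figures (assumed numbers,
# ignoring CPU traffic and real-world efficiency losses).
configs = {
    "Radeon 7850 (GDDR5)": 153.6,  # GB/s
    "PS4 (GDDR5)": 176.0,          # GB/s
}

FPS = 60
for name, bw in configs.items():
    per_frame_mb = bw * 1024 / FPS  # GB/s -> MB available per frame
    print(f"{name}: {per_frame_mb:.0f} MB of traffic per frame")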

Interesting. Thanks for that info.



Having 3GB reserved seems quite unnecessary - if it is indeed true.

Today. These machines will want to have modern, up to date, space age OS feature sets a decade into the future. Their choices were likely down to either 4GB or 8GB for whatever reason. If so, they either handicap themselves in both games AND their OS ambitions with 4GB cutting things really, really close (too close for comfort I bet)...or they go all in with 8GB and give both aspects some breathing room. Makes sense to me.
 
Yeah, but since when was a 2GB OS reservation considered low for a games console?

Look how much the iPhone 5 can do with only 1GB, surely double that would be enough for whatever Sony wanted to do.

Having 3GB reserved seems quite unnecessary - if it is indeed true.

Eh... the good thing about reserved memory is that you can always give it back if you don't need it.
 
Don't think so. X1's GPU sees the main RAM and eSRAM bandwidths in parallel, last I checked. Wouldn't that mean that so long as the DMEs keep the eSRAM packed, X1's GPU would effectively be able to see up to 170GB/s? Offer an explanation with your reply if ya don't mind. I'm not interested in yes/no answers.

The X1 has roughly the same read bandwidth (176GB/s vs 170GB/s) but drastically lower write bandwidth (102GB/s vs ~156GB/s). If you look at the vgleaks diagram you will see that the X1 cannot fill faster than 102GB/s due to the ROP limitation.
 
It's a silly comparison to make because a not-insignificant amount of bandwidth on the Xbox One will be consumed simply shuffling data back and forth between the ESRAM and DDR3. Those are operations that don't have to happen on PS4, so that system will always command a real-world advantage in bandwidth.
 
It is relatively insignificant though. ~1%?

The more you want to use the eSRAM, the more DDR3 bandwidth will be taken up by copying. Overall you'll get a net benefit, but it shows why it's not really fair to add up the bandwidths when you have to copy from one to the other.

For example, for a 30FPS game:

The DDR3 bandwidth per frame is 68/30 = 2.27GB/f.

Assuming you want to use some amount of data in the eSRAM, each full write to the eSRAM (all 32MB) will take 32MB/frame away from the DDR3.

So if you fill the eSRAM completely (all 32MB) 10 times a frame, your DDR3 bandwidth for other stuff is:

68/30 = 2.27GB/f
2.27GB/f - (32MB * 10)
2324.48MB/f - 320MB/f
2004.48MB/f
Or back to GB/f:
1.95GB/f

As Brad Grenz pointed out below, it also uses up eSRAM bandwidth too.

To write:

102/30 = 3.4GB/f
3.4GB/f - (32MB * 10)
3481.6MB/f - 320MB/f
3161.6MB/f
Or back to GB/f:
3.1GB/f

To then read (as Brad Grenz pointed out again; I feel stupid today):

3161.6MB/f - 320MB/f
2841.6MB/f
Or back to GB/f:
2.8GB/f

Total bandwidth left over after just copying 320MB, with no other reading/writing by the GPU or CPU:

2.8GB/f + 1.95GB/f
4.75GB/f

And that's with doing zero operations on the actual data.

In comparison, assuming the same thing on the PS4:

176/30 = 5.86GB/f
5.86GB/f - 320MB/f
6000MB/f - 320MB/f
5680MB/f
Or back to GB/f:
5.55GB/f
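
Putting the same arithmetic into a quick script, with the same assumptions as above (30FPS, ten full 32MB eSRAM fills per frame, peak figures, no other GPU/CPU traffic; the tiny differences from the hand-worked numbers are just rounding):

Code:
# Per-frame bandwidth left after routing 320MB/frame through the eSRAM.
# Assumptions as in the post above: 30FPS, 10 full 32MB eSRAM fills per
# frame, peak bandwidth figures, no other GPU/CPU traffic.

FPS = 30
COPY_MB = 32 * 10            # 10 full eSRAM fills per frame, in MB

def per_frame_mb(bw_gbs):
    """Convert a GB/s figure into MB available per frame."""
    return bw_gbs * 1024 / FPS

# Xbox One: the copy costs a DDR3 read, an eSRAM write and an eSRAM read back.
ddr3_left = per_frame_mb(68) - COPY_MB               # DDR3 read side
esram_left = per_frame_mb(102) - COPY_MB - COPY_MB   # eSRAM write + read back
x1_total = ddr3_left + esram_left

# PS4: the same 320MB is simply read once from GDDR5.
ps4_left = per_frame_mb(176) - COPY_MB

print(f"X1 DDR3 left:  {ddr3_left:7.1f} MB/frame")
print(f"X1 eSRAM left: {esram_left:7.1f} MB/frame")
print(f"X1 total left: {x1_total:7.1f} MB/frame (~{x1_total / 1024:.2f} GB/frame)")
print(f"PS4 left:      {ps4_left:7.1f} MB/frame (~{ps4_left / 1024:.2f} GB/frame)")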
 
You aren't just reading from DDR3, you're also writing to ESRAM. It consumes bandwidth in both cases. So if you take that 2GB per frame, that's 60GB/s out of 68GB/s of the DDR3 bandwidth to read AND 60GB/s of the 102GB/s ESRAM bandwidth just to write the copy. Which is to say you couldn't even mathematically use data at that rate.
 
You aren't just reading from DDR3, you're also writing to ESRAM. It consumes bandwidth in both cases. So if you take that 2GB per frame, that's 60GB/s out of 68GB/s of the DDR3 bandwidth to read AND 60GB/s of the 102GB/s ESRAM bandwidth just to write the copy. Which is to say you couldn't even mathematically use data at that rate.

This is a good point as well. It's actually impossible to reach the peak bandwidth.

Does anyone have any numbers on post-processing and how much data it takes? My guess of 320MB was pulled straight from my arse to use as an example, but it would be nice to know what peak bandwidth it could have in a scenario based on something that's a little more realistic.
 
For a starting point you have to first subtract the bandwidth consumed writing and reading buffers. The fact is the overhead of copying data basically triples your bandwidth consumption so it is almost always going to be better to simply read directly from DDR3. It only makes sense to copy data to ESRAM if it is going to be read many, many times.
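
To put a rough number on "many, many times", here is a minimal bookkeeping sketch of my own (hypothetical 32MB buffer, counting only the bytes moved; "via eSRAM" means one DDR3 read plus one eSRAM write up front, then one eSRAM read per pass):

Code:
# DDR3 vs eSRAM traffic for a buffer the GPU reads N times per frame,
# either straight from DDR3 or after copying it into eSRAM first.
# Hypothetical 32MB buffer; only the bytes moved are counted.

def traffic(size_mb, reads):
    direct = {"ddr3": size_mb * reads, "esram": 0}
    via_esram = {"ddr3": size_mb,                   # read once to copy it over
                 "esram": size_mb * (1 + reads)}    # write once, read N times
    return direct, via_esram

for reads in (1, 2, 4, 8):
    direct, via = traffic(32, reads)
    print(f"{reads} read(s) of a 32MB buffer: "
          f"direct DDR3={direct['ddr3']}MB | "
          f"via eSRAM DDR3={via['ddr3']}MB + eSRAM={via['esram']}MB")

The DDR3 saving only starts to outweigh the extra copy traffic once the buffer is re-read several times, which is exactly the point above.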
 
The more you want to use the eSRAM, the more DDR3 bandwidth will be taken up by copying. Overall you'll get a net benefit, but it shows why it's not really fair to add up the bandwidths when you have to copy from one to the other.


So what exactly is the advantage of only GDDR5 RAM in the PS4 compared to eSRAM+DDR3 in the XB1? (Is the apparent bandwidth of the PS4 higher?)
 
So what exactly is the advantage of only GDDR5 RAM in the PS4 compared to eSRAM+DDR3 in the XB1?

It will be much easier to get higher bandwidth if you want it, without having to hopscotch around between the eSRAM and DDR3. Also the peak should be higher, for both read (at least a bit theoretically; in reality it'll be more than a bit) and for write (50+% more).
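
Roughly quantifying that with the peak figures quoted earlier in the thread (assumed/leaked numbers, not official specs):

Code:
# Peak bandwidth ratios from the figures quoted earlier in the thread
# (assumed/leaked numbers, not official specs).
ps4_read, ps4_write = 176.0, 156.0   # GB/s
x1_read, x1_write = 170.0, 102.0     # GB/s (DDR3+eSRAM combined read; eSRAM-limited write)

print(f"Read:  {ps4_read}/{x1_read} = {ps4_read / x1_read:.2f}x")
print(f"Write: {ps4_write}/{x1_write} = {ps4_write / x1_write:.2f}x "
      f"(~{(ps4_write / x1_write - 1) * 100:.0f}% more)")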
 
So what exactly is the advantage of only GDDR5 RAM in the PS4 compared to eSRAM+DDR3 in the XB1? (Is the apparent bandwidth of the PS4 higher?)

Your math doesn't make sense. Nothing has to be copied on PS4, so to get to where the Xbox One ends up after all the copying, the PS4's bandwidth is still what it was to begin with.

Look, assuming this 320MB dataset.

PS4 reads 320MB in a single frame.

vs

Xbox One reads 32MB from DDR3
Xbox One Writes 32MB to ESRAM
Xbox One reads 32MB from ESRAM
Repeat 9 more times for a single frame.

So reading the same 320MB of data takes 960MB of your per-frame bandwidth budget when you do it this way. On PS4, for the same amount of bandwidth, you could access three times more unique data in that frame. Oh, and there are no stalls from not getting the right data into ESRAM when you needed it.
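
The same point as a tiny script (same hypothetical 320MB working set as above, counting only the bytes moved):

Code:
# Bytes moved to consume a hypothetical 320MB working set once per frame.
dataset_mb = 320

# PS4: one read from GDDR5.
ps4_traffic = dataset_mb

# Xbox One, routed through the 32MB ESRAM in ten chunks: each chunk is
# read from DDR3, written to ESRAM, then read back by the GPU.
x1_traffic = dataset_mb * 3

print(f"PS4 traffic: {ps4_traffic} MB/frame")
print(f"X1 traffic:  {x1_traffic} MB/frame")
print(f"For the same {x1_traffic} MB of traffic the PS4 could touch "
      f"{x1_traffic // ps4_traffic}x as much unique data.")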
 
Your math doesn't make sense. Nothing has to be copied on PS4, so to get to where the Xbox One ends up after all the copying, the PS4's bandwidth is still what it was to begin with.

Look, assuming this 320MB dataset.

PS4 reads 320MB in a single frame.

vs

Xbox One reads 32MB from DDR3
Xbox One Writes 32MB to ESRAM
Xbox One reads 32MB from ESRAM
Repeat 9 more times for a single frame.

So reading the same 320MB of data takes 960MB of your per-frame bandwidth budget when you do it this way. On PS4, for the same amount of bandwidth, you could access three times more unique data in that frame. Oh, and there are no stalls from not getting the right data into ESRAM when you needed it.

You quoted the wrong person but I get what you mean. I forgot to account for the bandwidth the XB1 needs to actually read the data back from the eSRAM into the GPU, so you are correct that it uses 3x the bandwidth for any eSRAM read :/ so I guess it's only worthwhile if you're reading it >3 times.
 
How much of a difference does using PRTs make to the picture, when you take into account the fact that the texture caches aren't of infinite size and the 32MB of ESRAM is accessible to the XB1 GPU with lower latency?

Would the 32MB hold enough texture data to make a difference, or would it be fetching from DDR3 so often in a 'modern game' that the latency advantage would be a moot point?

There must be SOME advantage to make you want to take a 1.6B transistor hit, over and above "it should work out in the long term..."?
 
The advantage is that it allowed them to shoot for a higher amount of RAM than the PS4 from the start, and it should be cheaper to manufacture in the long run. Sure, there is a latency advantage vs off-chip RAM, but that doesn't mean that was the reason behind using it.
 