Technical Comparison: Sony PS4 and Microsoft Xbox One

I don't think they're really going to reserve 3 GB for the OS; 2 GB max should be fine. They're not running Windows simultaneously with the game OS or anything like that, nor do they need memory for Kinect voice and skeletal-tracking databases.

And MS does everything they're currently doing in 2 GB; the remaining 1 GB is only reserved for future use and isn't currently being used for anything.

So I really don't see why Sony would match them on the memory reservation when their OS seems more lightweight and their vision for the device is more focused than MS's all-encompassing entertainment box.
I don't think it makes sense to go in low and leave no option to expand if it becomes necessary to match features of the new Xbox. They can always go smaller later if they don't need it.
 
OK, but how would this exact same thing not be done with the X1's display planes, though? Going by the leak and the actual patent for them, that's the entire reason they were designed.

It could happen for the Xbone too, but they'd need to sacrifice more resolution. I'm just speculating that, in the case where devs target 1080p30 on the Xbone, whatever power is left over on the PS4 could be used to increase the framerate, whether or not that requires much sacrifice in visuals.

Display planes are another matter; they're more for UI/video overlays and such. IIRC the PS4 has its own display planes as well, just not as many: 3 vs. 2, I think.
 
I think they're going by the ROP count (16 vs. 32), assuming double the fillrate will double the framerate if all else remains equal.

Which is just silly. Strangely enough, in the article in which that dev was quoted, he states exactly what I think will happen: the PS4 might have a more stable framerate or slightly higher res, while the One might have a dynamic resolution. Some effects might be pared back, some might be missing, but they will largely look the same. No third-party dev will run one at 60 fps and the other at 30 fps. The only place where that difference could show up is in exclusives, and that depends on other factors and can't really be used as a point of reference.
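
For reference, here is where the "double the fillrate" figure comes from, as a rough back-of-the-envelope sketch in Python (peak numbers only, assuming the ~800 MHz GPU clocks from the leaked specs):

x1_fillrate  = 16 * 0.8    # 16 ROPs * 0.8 GHz = 12.8 Gpixels/s
ps4_fillrate = 32 * 0.8    # 32 ROPs * 0.8 GHz = 25.6 Gpixels/s
print(ps4_fillrate / x1_fillrate)   # 2.0 -> "double the fillrate"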
 
Sony pushing for 60 fps :oops: Might be a return to the good ol' PS2 days with more high-framerate titles.



http://www.eurogamer.net/articles/2...their-say-on-specs-self-publishing-and-tvtvtv
As long as we are guaranteed fluid movement and frame rates, everything is fine.

If a game is going to run at 1080p and 60 fps on the PS4, then dropping the resolution to 1680x1050 on the Xbox One (if 1080p isn't achievable at 60 fps there) might be a good idea.
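
As a quick sanity check on how much work that resolution drop would actually save, here is the pixel-count arithmetic in Python (a sketch; 1680x1050 is just the figure suggested above):

full_1080p = 1920 * 1080     # 2,073,600 pixels per frame
dropped    = 1680 * 1050     # 1,764,000 pixels per frame
print(dropped / full_1080p)  # ~0.85 -> roughly 15% fewer pixels to shade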

Next-gen comparison galleries and the like are going to be full of interesting things and details.

The two machines' architectures are more different from each other than I thought, and those comparisons are going to highlight their strengths and weaknesses in unique ways.
 

Actually, several EA and 2K sports games ran at 60 fps on the 360 and 30 on the PS3. This was back in 2008.

Anyway, the situation you describe is the ideal one, where devs target 1080p30 on the PS4 and dynamic or lower res on the Xbox. But sadly that probably won't be the case, for whatever reason, and the Xbox will be the target instead. The leftover power in the PS4 is then most easily realized as a higher framerate.
 

The PS4 has two planes, one for the game and one for the OS. The X1 has one for OS overlays, but also two for games. You can read the patent to see the pretty explicit intentions behind them in the X1: the aim is to use hardware to help devs manage things like resolution, color depth and framerate more effectively, such that the most prominent display plane has as high a fidelity as possible.

This notion that it was about OS overlays is a myth. It's quite literally about giving devs the ability to lock down framerates and resolutions. It's spelled out pretty bluntly in the patent.
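
To make that concrete, here is a minimal sketch of the idea as I read it (illustration only, in Python with made-up plane contents, not the actual X1 API): the game plane can drop to a lower resolution under load while the UI plane stays at native 1080p, and the hardware scaler/compositor blends them at scan-out.

import numpy as np

OUT_W, OUT_H = 1920, 1080   # native output resolution

def composite(game_plane, ui_plane, ui_alpha):
    # Nearest-neighbour upscale of the game plane to native res, then
    # alpha-blend the native-res UI plane on top (roughly what a
    # hardware compositor would do at scan-out).
    gh, gw, _ = game_plane.shape
    rows = np.arange(OUT_H) * gh // OUT_H
    cols = np.arange(OUT_W) * gw // OUT_W
    scaled = game_plane[rows][:, cols]
    a = ui_alpha[..., None]
    return (ui_plane * a + scaled * (1.0 - a)).astype(np.uint8)

# Hypothetical frame: the game plane rendered at 1600x900 (dynamic res this
# frame), the UI plane always rendered at the full 1920x1080.
game  = np.random.randint(0, 256, (900, 1600, 3), dtype=np.uint8)
ui    = np.zeros((OUT_H, OUT_W, 3), dtype=np.uint8)
alpha = np.zeros((OUT_H, OUT_W), dtype=np.float32)  # 1.0 where the UI covers the game
frame = composite(game, ui, alpha)
print(frame.shape)   # (1080, 1920, 3)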

One thing I'm curious about: the PS4 doesn't seem to have any more real bandwidth to the GPU than the X1 (if the DMEs can keep the eSRAM full). So how does the PS4 feed the ROPs with so much more data each second that they'd output twice the framerate? It would seem to me that there are other limiting factors that would get in the way.

I'm asking here, so feel free to correct me on stuff. :)
 

Your display planes have yet to be proven to be anything more than QoS. You throw something out as a myth while having nothing but your interpretation (which has been incorrect in the past) to back yourself up.

Uh, what are you talking about? The PS4 has at least 156 GB/s to its RAM; the XBONE has a max of 102 GB/s. And you might not even be fill-limited; it all depends on where the limitation lies.
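
For what it's worth, the peak figures from the public specs work out roughly like this (my own back-of-the-envelope numbers, in Python):

def peak_bw_gbps(bus_width_bits, transfer_rate_gtps):
    # bytes per transfer * transfers per second = GB/s
    return bus_width_bits / 8 * transfer_rate_gtps

ps4_gddr5 = peak_bw_gbps(256, 5.5)     # ~176 GB/s (256-bit GDDR5 at 5.5 GT/s)
x1_ddr3   = peak_bw_gbps(256, 2.133)   # ~68 GB/s  (256-bit DDR3-2133)
x1_esram  = 102.4                      # GB/s, quoted peak for the 32 MB eSRAM
print(ps4_gddr5, x1_ddr3, x1_esram)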
 
Has there been a case before where two competing consoles, releasing in the same timeframe, at the same price, on the same manufacturing process, had one of them clearly weaker in performance? I think MS dropped the ball on this one. It feels like the Saturn failure or the HD 2900 misstep, probably a lot worse this time. I believed the engineers at Redmond were better than this; they designed the Xbox, the 360 and DX.

I'm also extremely skeptical about the cloud PR fluff, btw.
 
AMD seemed to think that 153.6 GB/s was enough bandwidth to feed a 7850 (also 32 ROPs). That configuration manages to attain 1080p60 in plenty of games, without the benefit of being a custom architecture, so I'm thinking the PS4, with more bandwidth, will be fine.
 

These new systems are in the same ballpark and will run the same games. The difference will be more consistent this time thanks to the similar hardware, but we likely saw bigger relative differences in some 360/PS3 releases. The Saturn missed out on games because of its awkwardness and low sales.
 

Correct me if I'm wrong, but could this have to do with wanting to render into the cache? The cache is much faster than the actual memory, and I understand a lot won't fit in it, but could you do that to achieve the full fill rate?
 
Double frame rate requires much more than just extra ROPs.

A 60 fps game (compared to a 30 fps one) is processing, each second:
- Twice as many draw calls (2x CPU cost, 2x GPU front end cost)
- Twice as many triangles (2x triangle/primitive setup cost)
- Twice as many vertices (2x vertex shader ALU, 2x vertex fetch, 2x vertex bandwidth)
- Twice as many pixels (2x ROP, 2x pixel shader ALU, 2x texel fetch + filtering, 2x ROP bandwidth, 2x TMU bandwidth)

Now let's discuss 720p -> 1080p.

1080p is 2.25x pixel count compared to 720p. ROPs alone are not enough to output these extra pixels, unless you are fine with single colored pixels with no lighting, no materials and no textures. For each pixel you need to run the pixel shader in order to get a meaningful output. As you are running 2.25x pixel shader instances, your TMU, ALU and bandwidth (both texture read and backbuffer write) costs for pixel shaders also increase by 2.25x. You don't need extra draw calls (CPU cost), extra triangle processing or extra vertex processing, so that's good. Also your texture bandwidth usage (from pixel shader) increases slightly less than 2.25x, because the texture reads will become more coherent (as resolution increases) and thus hit the caches better. Still, it would be a pipe dream to scale the whole rendering pipeline (without any quality reductions) from 720p to 1080p with less than 2x ALU+TMU+ROP+BW.
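
A quick worked version of that arithmetic in Python (pixels that have to be shaded and written out per second in each scenario):

print(1920 * 1080 / (1280 * 720))   # 2.25x pixels per frame going from 720p to 1080p
for w, h, fps in [(1280, 720, 30), (1280, 720, 60), (1920, 1080, 30), (1920, 1080, 60)]:
    print(f"{w}x{h} @ {fps} fps: {w * h * fps / 1e6:.0f} Mpixels/s")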
 
You're likely correct. I think every little bit helps, as even a 7850 is bandwidth-limited in most games I've seen benchmarked (fps always increases when bumping the memory clock, by varying amounts).

Very excited about both consoles. I think both have decent enough hardware to create some memorable experiences, and as many have pointed out, small IQ differences are hard to spot in motion.
 

It's not the engineers' fault; you can blame MS management for changing their strategy to focus on courting the casuals and $$$.
 
[attached benchmark comparison chart]

http://www.anandtech.com/bench/Product/536?vs=549
 

That's not really a fair comparison in relation to the X1/PS4, simply because the 7770 can only issue 1 triangle per clock, whereas the X1 GPU can issue 2 (same as the PS4, no difference there).
Moreover, the X1 GPU has an additional 102.4 GB/s to the eSRAM, which should help the ROPs (12.8 Gpixels/s x 8 bytes per pixel = 102.4 GB/s ;) ). Btw, the PS4 can't really make full use of its 32 ROPs: 25.6 Gpixels/s x 8 bytes per pixel would be 204.8 GB/s.

I think, if we take sebbi's explanation, the difference in multi-platform games will be 1920x1080 for the PS4 versions and 1600x900 for the X1.
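
Spelling out the fill-rate-to-bandwidth arithmetic from the post above in Python (the 8 bytes per pixel is the post's assumption for what each output pixel costs in reads plus writes):

def rop_bandwidth_gbps(rops, clock_ghz, bytes_per_pixel=8):
    return rops * clock_ghz * bytes_per_pixel   # Gpixels/s * bytes/pixel = GB/s

x1  = rop_bandwidth_gbps(16, 0.8)   # 12.8 Gpixels/s -> 102.4 GB/s
ps4 = rop_bandwidth_gbps(32, 0.8)   # 25.6 Gpixels/s -> 204.8 GB/s
print(x1, ps4)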
 

Once you get into the higher levels of AA, the output rate of the ROPs starts to get cut to a half or a quarter. They probably have a good reason for going with 32.
 

And the 7770 uses GDDR5, runs at 1 GHz instead of 800 MHz, and let's also not forget the ~10% of GPU performance the QoS reservation will grant the OS, which probably makes the 2 extra CUs a non-factor. So the table above will very likely look closer than some here might wish.
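
As a rough sketch of the compute side of that comparison in Python (GCN has 64 ALUs per CU doing 2 FLOPs per clock; the ~10% OS slice is just the figure quoted above):

def gcn_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000   # ALUs per CU * FMA * clock

hd7770 = gcn_tflops(10, 1.0)    # ~1.28 TFLOPS
x1     = gcn_tflops(12, 0.8)    # ~1.23 TFLOPS
ps4    = gcn_tflops(18, 0.8)    # ~1.84 TFLOPS
print(hd7770, x1, x1 * 0.9, ps4)   # x1 * 0.9 = with the quoted ~10% OS reservation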
 