Technical Comparison: Sony PS4 and Microsoft Xbox One

From the grapevine, Xbox One is said to use TrustZone; PS4 may not. The pure VM model may not sit well with direct GPU access either, so the PS4 OS structure should be quite different.

We’ll have to wait for more info/leaks. Xbox One’s runtime model most likely won’t fly here.
 
People on forums turning speculation into fact..... never happens.
I don't believe there has ever been an official statement by Sony about the OS footprint, and I haven't seen any confirmed developers state it either.

There is a huge amount that hasn't been revealed about both consoles, and that's leading to a lot of speculation. The positive sentiment built around one console is trending the speculation in a more positive direction; the opposite is true for the other.
 
Here are the slides from Guerrilla Games about the Killzone demo:

[Memory-breakdown slides from the Guerrilla Games Killzone: Shadow Fall demo postmortem]


What they refer to as "System" doesn't seem to mean "Operating System" at all. All we know is that they used a bit below 5GB for the game, but this doesn't mean the OS will take all the rest.
 
I'm pretty sure those were just hypothetical percentages, and manufacturers don't like to give out those numbers. I do remember the scuttlebutt a bunch of years ago for desktop MPUs that would have considered those percentages to be bad.
For the price range of a console component, Microsoft might be hoping for something that gets 10-20% higher than that range, at least.

Nope, yields are actually readily available.

The following is from June 2012:

http://www.soiconsortium.org/pdf/Economic_Impact_of_the_Technology_Choices_at_28nm_20nm.pdf

[Yield-vs-die-size chart from the linked document]

A doubling in die size leads to around a 25% drop in yields.
It's probably safe to say that the ~300mm^2 Xbox One and PS4 dies have ~40% yields?

Considering the technology maturing, it's probably 45-50% by now or something.
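
To make the die-size-vs-yield scaling concrete, here's a minimal sketch using the classic Poisson yield model, Y = exp(-A * D0). The defect density below is an assumed number picked so a ~300mm^2 die lands near 40%; it is not taken from the linked document.

```python
# Toy Poisson yield model: Y = exp(-area * defect_density).
# D0 is an assumed value (not from the SOI Consortium slides), chosen so
# that a ~300 mm^2 die lands near the ~40% figure discussed above.
import math

def poisson_yield(area_mm2: float, d0_per_mm2: float) -> float:
    """Fraction of dies expected to have zero random defects."""
    return math.exp(-area_mm2 * d0_per_mm2)

D0 = 0.003  # assumed defects per mm^2 (about 0.3 per cm^2), illustration only

for area in (150, 300, 600):
    print(f"{area:>3} mm^2 -> {poisson_yield(area, D0):.0%}")
# 150 mm^2 -> 64%
# 300 mm^2 -> 41%
# 600 mm^2 -> 17%
```

Note that doubling the area squares the yield in this model (exp(-2*A*D0) = exp(-A*D0)^2), which lines up roughly with the ~25-point drop per doubling described above.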
 
Imagine a PC game on medium settings and on high settings. That's basically the difference we can expect, and what I think we will have. The outliers will be the 1st-party and odd 3rd-party titles that go all out on the ~7GB and 176GB/s.

The high-end settings usually burn through a lot of cycles but don't often show a significant difference, though. In the past the difference between low/high was like night/day, but today it seems the developers spend the resources on exponentially more intensive things like shadow resolution for only linear gains to user experience.
 
It's probably safe to say that the ~300mm^2 Xbox One and PS4 dies have ~40% yields?

The document doesn't say it but those numbers are probably for Global Foundries, as no one else would be able to give anything for FD SOI vs bulk. At the very least their numbers will have contributed.

There was no product using GF's 28nm in 2012. Even now I'm only aware of one thing using it, and it's probably a pretty small SoC. TSMC, on the other hand, moved plenty of 28nm volume from the very start of 2012. Some of it absolutely gigantic too, like nVidia's Titan.

Rumors have been abundant that GF's 28nm has been a total trainwreck. Which it'd kind of have to be if AMD canceled a product using it (Wichita, the 28nm GF Bobcat APU) despite having no suitable replacement for another year and pretty much throwing away money due to contractual obligations to use GF.
 
The high-end settings usually burn through a lot of cycles but don't often show a significant difference, though. In the past the difference between low/high was like night/day, but today it seems the developers spend the resources on exponentially more intensive things like shadow resolution for only linear gains to user experience.

True, today's medium might as well be the high of games a few years back in terms of relative quality to the max settings.

I can only guess the difference would be negligible to slight if devs decide to port up rather than down. But seeing the outpouring of praise for PS4 in ease of development and reports of better tool quality, I don't think the latter choice seems too far fetched. Could be a bit of a snowball effect if MS' tool quality continues to lag. What that would mean for final visual quality on Xbox we'd have to just wait and see.
 
http://kotaku.com/the-five-possible-states-of-xbox-one-games-are-strangel-509597078

Caveat: this Xbox One development info was circulated by Microsoft to its partners at the beginning of this year. It may have changed, but based on what we saw this week, probably not in any major way.

1) Running: The game is loaded in memory and is fully running. The game has full access to the reserved system resources, which are six CPU cores, 90 percent of GPU processing power, and 5 GB of memory. The game is rendering full-screen and the user can interact with it.

2) Constrained: The game is loaded in memory and is still running, but it has limited access to the system resources. The game is not rendering full screen in this state; it either is rendering to a reduced area of the screen or is not visible at all. The user cannot interact with the game in this state. System resource limits in this state are four CPUs, 5 GB of memory, and 45 percent of GPU power if the game is rendering to a reduced area of the screen, or 10 percent of GPU power if the game is not visible.

3) Suspended: The game is loaded in memory but is not running, meaning that the system has stopped scheduling all threads in the game process. The game has no access to CPUs or to the GPU processing power, but it still has the same 5 GB of memory reserved.

4) NotRunning: The game is not loaded in memory and is not running, and the system has no game-history information about the previous execution of the game. A game would be in NotRunning state in any of these three scenarios:

-The game has not been run since the console started.
-The game crashed during the last execution.
-The game did not properly handle the suspend process during the last execution and was forced to exit by the system.

5) Terminated: The game is not loaded in memory and is not running, which is identical to the NotRunning state in terms of system resource usage. Terminated state, however, indicates that during the last execution of the game, the game process was successfully suspended and then terminated by the system. This means that the game had a chance to save its state as it was suspended; the next time the game is activated, it can load this previous state data and continue the user experience from the same position. A game, for example, can start from the same level and position in the player’s last session without showing any front-end menu.
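
Restated compactly as a data structure, purely as a rough sketch: the class and field names below are made up here and don't reflect any actual Xbox One SDK API; the numbers are simply the resource budgets from the quoted article.

```python
# Hypothetical restatement of the five app-model states described above.
# Class/field names are illustrative only; the numbers are the resource
# budgets quoted from the article.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class StateBudget:
    cpu_cores: int
    gpu_share: Optional[float]  # fraction of GPU time; None = not applicable
    memory_gb: int
    loaded_in_memory: bool

GAME_STATES = {
    "Running":     StateBudget(cpu_cores=6, gpu_share=0.90, memory_gb=5, loaded_in_memory=True),
    # Constrained: 45% GPU when rendering to a reduced screen area,
    # 10% when the game is not visible at all.
    "Constrained": StateBudget(cpu_cores=4, gpu_share=0.45, memory_gb=5, loaded_in_memory=True),
    "Suspended":   StateBudget(cpu_cores=0, gpu_share=0.0,  memory_gb=5, loaded_in_memory=True),
    "NotRunning":  StateBudget(cpu_cores=0, gpu_share=None, memory_gb=0, loaded_in_memory=False),
    "Terminated":  StateBudget(cpu_cores=0, gpu_share=None, memory_gb=0, loaded_in_memory=False),
}
```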
 
Here are the slides from Guerrilla Games about the Killzone demo:
*snip pics*
What they refer to as "System" doesn't seem to mean "Operating System" at all. All we know is that they used a bit below 5GB for the game, but this doesn't mean the OS will take all the rest.
I was under the impression (perhaps incorrect) that, at the time they made the demo, they were running under the assumption that the system would only have 4GB of RAM, and that they said they still needed to trim the memory usage down from what the demo was actually running. Given that, I definitely wouldn't trust those slides as a direct indication of how much of 8GB they were using, if they didn't even know (at the time) that they had that much to work with.

In regards to "system", I imagine that's the game's own executable and whatever subroutines are running CPU-side, and probably does not include the underlying operating system.
 
Which is pretty sad, as Sony can most likely just add that "feature" in if they desire; they already own Gaikai.

Gaikai is completely different; it's a compressed-frames-being-sent solution. It doesn't augment anything locally, it's a "dumb" solution so to speak. Besides, they don't seem to have any clue what to do with Gaikai, and all rumors are it was just for backwards compatibility (but that must not be going well, since they've gone dark).

Gaikai probably doesn't even have very many servers, and it's certainly doubtful Sony can afford more.

With MS now saying the cloud can help with actual graphics, making the One 40x as powerful as the 360, this is getting to be a possibly big deal...
 
Gaikai is completely different; it's a compressed-frames-being-sent solution. It doesn't augment anything locally, it's a "dumb" solution so to speak. Besides, they don't seem to have any clue what to do with Gaikai, and all rumors are it was just for backwards compatibility (but that must not be going well, since they've gone dark).

Gaikai probably doesn't even have very many servers, and it's certainly doubtful Sony can afford more.

With MS now saying the cloud can help with actual graphics, making the One 40x as powerful as the 360, this is getting to be a possibly big deal...

I don't see any reason why Gaikai servers cannot run game related code.
 
The GG KZ:SF demo postmortem does provide a pretty official "snapshot" look at the state of things during the time of the demo, however. Based on that, I'd say the safe money is on parity with the XBO on the memory and CPU split between games and OS (5 GB to 3 GB / 6 cores to 2 cores for both consoles). Not sure about GPU (looks like a 10% reservation for the One's Apps OS). Things can of course change, but that goes for both.

Yep.


I imagine Sony is reserving 10% or more of the GPU as well, likely more since they have typically been less efficient (they reserved a lot more memory for the OS on PS3). I'd guess 15-20% of the PS4 GPU reserved for OS/apps.
 
I don't see any reason why Gaikai servers cannot run game related code.

Sony has said nothing of the cloud and it's not in their planning. It likely takes years of planning, coding, etc. The reason Gaikai servers can't run game-related code is that that's not what they do.

I doubt there are many Gaikai servers anyway. The company was small.
 
I was under the impression (perhaps incorrect) that, at the time they made the demo, they were running under the assumption that the system would only have 4GB of RAM, and that they said they still needed to trim the memory usage down from what the demo was actually running. Given that, I definitely wouldn't trust those slides as a direct indication of how much of 8GB they were using, if they didn't even know (at the time) that they had that much to work with.

In regards to "system", I imagine that's the game's own executable and whatever subroutines are running CPU-side, and probably does not include the underlying operating system.

They were running 4.5GB of RAM on a system with planned 4GB (plus OS overhead?)? Makes sense...

Anyways, the bigger clue for me by far is the Chinese bulletin board postings. The Shadow Fall stuff was just on an old dev kit.
 
Yep.


I imagine Sony is reserving 10% or more of the GPU as well, likely more since they have typically been less efficient (they reserved a lot more memory for the OS on PS3). I'd guess 15-20% of the PS4 GPU reserved for OS/apps.

I'd guess they're reserving less as a percentage and in absolute terms as the focus seems to be less on multitasking.
 
Nope, yields are actually readily available.

The following is from June 2012:

http://www.soiconsortium.org/pdf/Economic_Impact_of_the_Technology_Choices_at_28nm_20nm.pdf

A doubling in die size leads to around a 25% drop in yields.
It's probably safe to say that the ~300mm^2 Xbox One and PS4 dies have ~40% yields?

Considering the technology maturing, it's probably 45-50% by now or something.

I've read the SOI Consortium's evaluation of its own superiority to bulk. I didn't see where they described the device or devices they are using to make their numbers, or which foundry this applies to.
There's a certain slant to their conclusions, obviously.
I do share a good amount of their pessimism going forward for the foundries, although I'm not so sure I can share their optimism for their own product.

Suffice it to say, there are bad yields, and then there are the yields put forward in that document. I'm curious if these are the numbers for a top bin or non-salvage bin, because these seem very pessimistic to me.
There are larger 28nm chips out there and there's no discussion of them yielding quite that low currently.

The document doesn't say it but those numbers are probably for Global Foundries, as no one else would be able to give anything for FD SOI vs bulk. At the very least their numbers will have contributed.
That was my first reaction to the numbers, and the timing.

There are certain things that look a little convenient. The choice of restricting the 28nm design point to 0.8V Vdd is interesting, since one of the criticisms that seems to have stuck for gate-first HKMG is that having the metal gate in place during later thermal steps can cause increased variation in threshold voltage. Variation can be mitigated in part by increasing voltage, but having 0.8V might be taking away one of the more common methods of increasing yields.

A chip designed with fault tolerance and measures like per IC voltage customization and tuning circuits should be able to do better than that, especially since there are much bigger chips being made at TSMC, as you've noted.
 
It's probably safe to say that the ~300mm^2 Xbox One and PS4 dies have ~40% yields?

Considering the technology maturing, it's probably 45-50% by now or something.

I don't think that number accounts for redundancy. The GPU CUs, the L2 cache, and the Xbox SRAM are all pretty great targets for improving yields by introducing redundancy. I wouldn't be surprised if the final yield, after taking redundancy into account, is 80+%.
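
As a rough illustration of how much a couple of spare CUs can buy, here's a toy extension of the Poisson model from earlier in the thread. All the areas, the defect density, and the "2 spares out of 20 CUs" figure are assumptions for illustration, not actual PS4 or Xbox One numbers.

```python
# Toy model of yield with redundancy: the die is split into a CU array
# (where up to 2 of 20 blocks may be defective) and a non-redundant
# remainder. Defect density, areas, and CU counts are assumed values.
import math
from math import comb

D0 = 0.003          # assumed defects per mm^2, same toy value as before
DIE_AREA = 300.0    # mm^2, rough console-APU-sized die
CU_AREA = 5.0       # assumed mm^2 per GPU compute unit
N_CUS = 20          # assumed CUs physically on the die
NEEDED = 18         # assumed CUs that must be good (2 spares)

def block_yield(area_mm2: float) -> float:
    """Probability a block of the given area has zero random defects."""
    return math.exp(-D0 * area_mm2)

# Probability that at least NEEDED of the N_CUS blocks are defect-free.
p = block_yield(CU_AREA)
cu_array_ok = sum(comb(N_CUS, k) * p**k * (1 - p)**(N_CUS - k)
                  for k in range(NEEDED, N_CUS + 1))

rest_ok = block_yield(DIE_AREA - N_CUS * CU_AREA)  # no redundancy elsewhere

print(f"Raw die yield (no redundancy):      {block_yield(DIE_AREA):.0%}")  # ~41%
print(f"With 2 spare CUs in this toy model: {rest_ok * cu_array_ok:.0%}")  # ~55%
```

Covering more of the die with repairable structures (row/column redundancy in the L2 and the SRAM, as suggested above) would push the effective number higher still.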
 
Oh, don't mind him. Everyone knows Rangers is B3D's resident Xbox fanboy (his counterpart is ultragpu) ;)

It will be interesting if PS4 has the exact same system reservation as the Xbone, though (3GB/2 cores).

I really hope Sony gives the system RAM enough breathing space so that 4 years down the line they don't wish they hadn't given up so much. 7/1 sounds a bit extreme. I'd settle for 6/2, or 5/3 if they really want to be conservative. Same with cores: 2 is ideal.

And if they don't use it, give it back to developers later.
 