Technical Comparison Sony PS4 and Microsoft Xbox

Slightly, sure. Bonaire and Cape Verde clock at 1 GHz in retail stock form, and upclocked versions on Newegg easily hit 1100 MHz.

Cape Verde/Bonaire TDP is 80-85 watts too, which is not unreasonable.

1040 MHz seems possible, yes.

Most likely it's still just 800 MHz.
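
For a rough feel of what a clock bump would cost in power, here's a back-of-the-envelope sketch using the usual dynamic-power relation (power scales roughly with voltage squared times frequency). The 85 W baseline and the voltage numbers are illustrative assumptions, not known figures for either chip:

Code:
def scaled_power(p_base_w, f_base_mhz, f_new_mhz, v_base=1.00, v_new=1.00):
    """Rough dynamic-power estimate after a frequency (and optional voltage) change."""
    return p_base_w * (f_new_mhz / f_base_mhz) * (v_new / v_base) ** 2

base_w = 85.0  # assumed GPU-portion power at 800 MHz (Cape Verde-class TDP)

print(scaled_power(base_w, 800, 1040))              # ~110 W at the same voltage
print(scaled_power(base_w, 800, 1040, 1.00, 1.05))  # ~122 W if the bump also needs +5% voltage

So even ignoring leakage, a ~30% clock bump buys at least a ~30% power increase, and more if extra voltage is needed to keep the whole SoC stable.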

I concur they'll most likely stay at 800 MHz - and I really don't see them going for a ~25% GPU clock increase.

  1. Remember, as some people have mentioned before, that this isn't "just" a GPU. It's a complex CPU + GPU + a lot of custom stuff + embedded DRAM chip. Raising the GPU frequencies will affect the temperature and leakage/stability of the entire chip, bringing about tons of follow-up problems. Validating all chips for higher GPU frequencies would be the smallest of their problems.
  2. Furthermore, we're talking about a chip that's probably more than three times the size of Cape Verde (considering that some estimates on this forum tend towards a die size of ~400mm²). I guess striking a binning sweet spot that leaves them with reasonable yields is hard enough to pull off at moderate clocks - let alone pushing the chip to more aggressive limits.
  3. Last but not least, they are openly shooting for a "nearly silent" operation of the entire system. Given the size of the box, they seem to have invested heavily in the cooling solution to pull this off. I don't see them sacrificing near-silent operation (which is a MUST for the kind of all-in-one set-top-system they're going for) on the altar of better performance.
All that being said, I think the Xbox One will do very well at the clocks and performance figures originally leaked. Apart from having disappointed a lot of core gamers with questionable PR choices during the reveal event, Microsoft has some very cool stuff in their box and their basic ideas are solid. A lot of the early disappointment will certainly clear up after E3.

I personally still like Sony's approach better - but Microsoft's basic vision is far from being the huge failure a lot of people are prematurely making it out to be.
 
And it may not be in the direction you think. Everyone seems to be ignoring that the XB1 has a GPU with effectively 32MB of cache, compared to the PS4's in the range of 512KB. So yes, as long as what they are processing in a frame is purely streaming, contiguous data, then the PS4 will easily surpass the XB1. Make the data a bunch of different textures in different memory locations, or GPGPU physics calculations, or complex shaders that aren't just streaming data, and the result may surprise you. In those cases, the PS4 may take up to 10x longer to retrieve a piece of data than the XB1, stalling a non-trivial amount of the GPU.
Shouldn't the PS4 being able to queue up 64 different asynchronous compute jobs make that problem moot?
 
Throwing more work at the GPU assumes there's enough unreserved context to support the necessary number of jobs in flight, enough non-dependent work to fill them, throughput/bandwidth that can be burned in order to hide latency, and that none of the work items becomes time-critical.

For a number of things, like bandwidth and context, the PS4 does not have multiple times the capacity. Determining which scheme is better is going to be highly dependent on the situation.
The capacity to receive more commands doesn't mean you're able to run all of them.
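
To put the "enough context to hide latency" point in concrete terms, here is a minimal occupancy sketch in the style of GCN's per-SIMD limits (up to 10 resident wavefronts, bounded by register use). The latency figures - an assumed ~300-cycle DRAM round trip versus an assumed ~50-cycle hit to a small on-chip pool - and the per-thread register counts are illustrative guesses, not measured numbers for either console:

Code:
def waves_in_flight(vgprs_per_thread, max_waves=10, vgpr_budget=256):
    """Wavefronts a GCN-style SIMD can keep resident, limited by vector-register use."""
    return min(max_waves, vgpr_budget // max(vgprs_per_thread, 1))

def can_hide_latency(mem_latency_cycles, alu_cycles_between_loads, vgprs_per_thread):
    """True if enough resident waves exist to cover the memory latency with ALU work."""
    waves_needed = -(-mem_latency_cycles // alu_cycles_between_loads)  # ceiling division
    return waves_in_flight(vgprs_per_thread) >= waves_needed

# Example: a shader using 48 VGPRs with ~40 ALU cycles of work per memory access.
print(waves_in_flight(48))            # 5 resident waves per SIMD
print(can_hide_latency(300, 40, 48))  # False - 5 waves can't cover the 8 needed for DRAM
print(can_hide_latency(50, 40, 48))   # True  - 2 waves are enough for the on-chip case

The point being: once the resident-wave count is capped by registers (or the spare bandwidth is already spoken for), piling more jobs into the queues doesn't buy any additional latency hiding.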
 
At this point, if MS is considering doing anything with the hardware to make up some real or perceived PS4 advantage, they are best served by keeping it completely secret for as long as possible. Announcing anything before both consoles are well into manufacturing only gives Sony an opportunity to counter it (mostly in clock speeds).
 
Shouldn't the PS4 being able to queue up 64 different asynchronous compute jobs make that problem moot?

Doesn't work that way.
You have a limited number of wavefronts in flight; that number is limited by the available registers, and they run to completion.
The 64 queues are more to allow prioritization of work between the jobs.
i.e. I can start a long-running compute job and interrupt it if I have something that suddenly requires lower latency. Even in that case, nothing in the higher-priority job will run until the running wavefronts complete; it just won't dispatch more of the lower-priority wavefronts until all the high-priority stuff is done.
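
As a toy illustration of that behaviour, here is a minimal Python model of prioritized dispatch, where resident wavefronts always run to completion and lower-priority queues are starved while higher-priority work is pending or resident. The wave limits, priorities and cycle counts are made-up values, and real GCN scheduling is considerably more involved:

Code:
from collections import deque

class Dispatcher:
    """Toy prioritized dispatcher: resident waves run to completion, and lower-priority
    waves are only launched when no higher-priority work is pending or still running."""

    def __init__(self, max_resident_waves=4):
        self.queues = {}    # priority -> deque of (name, cycles) pending wavefronts
        self.running = []   # (priority, name, cycles_left) for resident wavefronts
        self.max_resident = max_resident_waves

    def submit(self, priority, name, cycles):
        self.queues.setdefault(priority, deque()).append((name, cycles))

    def tick(self):
        # Nothing preempts a resident wavefront; it simply runs until it is done.
        self.running = [(p, n, c - 1) for p, n, c in self.running if c > 1]
        for prio in sorted(self.queues, reverse=True):
            q = self.queues[prio]
            while q and len(self.running) < self.max_resident:
                name, cycles = q.popleft()
                self.running.append((prio, name, cycles))
            # Higher-priority work still pending or resident? Then starve lower queues.
            if q or any(p > prio for p, _, _ in self.running):
                break

d = Dispatcher(max_resident_waves=4)
for i in range(8):
    d.submit(0, f"background_{i}", cycles=5)  # long-running low-priority compute job
d.tick()                                      # four background waves become resident
d.submit(9, "urgent", cycles=2)               # a latency-sensitive job arrives
d.tick()                                      # the resident background waves keep running;
                                              # "urgent" waits for a free slot, and no new
                                              # background waves launch until it is done running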
 
At this point, if MS is considering doing anything with the hardware to make up some real or perceived PS4 advantage, they are best served by keeping it completely secret for as long as possible. Announcing anything before both consoles are well into manufacturing only gives Sony an opportunity to counter it (mostly in clock speeds).

I don't think they are considering any hardware upgrade. The Xbox One will have third-party support and nice services; that can be enough to sell the console.
 
And that's nothing more than a personal assertion of yours in itself.

The power difference is not minimal; it will be noticeable.

How can it be a personal assertion? You are SURE that the difference will be noticeable, much less that there will be a difference at all. Amazing... When is YOUR game coming out?
 
How powerful is the Wii compared to the GameCube? Isn't it like 1.5~2.0x? Isn't that similar to the PS4-XBO power difference?

I think the difference (graphically) is not like an ocean (IMO).
 
How can it be a personal assertion? You are SURE that the difference will be noticeable, much less that there will be a difference at all. Amazing... When is YOUR game coming out?

Yes, I am sure of it; whether it's down to Sony's first party being wizards or the actual hardware remains to be seen.
 
And it may not be in the direction you think. Everyone seems to be ignoring that the XB1 has a GPU with effectively 32MB of cache, compared to the PS4's in the range of 512KB. So yes, as long as what they are processing in a frame is purely streaming, contiguous data, then the PS4 will easily surpass the XB1. Make the data a bunch of different textures in different memory locations, or GPGPU physics calculations, or complex shaders that aren't just streaming data, and the result may surprise you. In those cases, the PS4 may take up to 10x longer to retrieve a piece of data than the XB1, stalling a non-trivial amount of the GPU.

I think this is always important to remember - there are very few solutions that are better than the alternative in everything. The proof will be in the pudding. If anything, right now we could make a rough guess which of the two is easier to develop for and/or is more similar to PC and will benefit from that synergy (which we saw being important last gen). But who knows where future PC components and game engines go; that may reverse the situation.

That doesn't make it any less interesting to speculate though, on the contrary. But speculating with arguments or just plain asserting are two different things altogether. Which is another, very important thing to remember for people taking part in discussions on B3D. ;)
 
If anything, right now we could make a rough guess which of the two is easier to develop for and/or is more similar to PC and will benefit from that synergy (which we saw being important last gen). But who knows where future PC components and game engines go; that may reverse the situation.

Well Cerny says that having eDRAM means "ease of manufacturability" but also "added complexity for developers".
 
I think this is always important to remember - there are very few solutions that are better than the alternative in everything. The proof will be in the pudding. If anything, right now we could make a rough guess which of the two is easier to develop for and/or is more similar to PC and will benefit from that synergy (which we saw being important last gen). But who knows where future PC components and game engines go; that may reverse the situation.

That doesn't make it any less interesting to speculate though, on the contrary. But speculating with arguments or just plain asserting are two different things altogether. Which is another, very important thing to remember for people taking part in discussions on B3D. ;)

To your point, sebbi said in another thread that Haswell benchmarks will be a good indication of the gains to be had by going with a fast, low-latency, on-die cache. So if Intel feels this is the architecture to use for a high-performance multimedia APU, I'm not clear on why some are treating the XB1 design as a consolation prize?
 
Do we know how many resources are reserved for the system in the PS4?

This is what I'm really interested in. Given that Sony, not so long ago, were targeting a 4GB system, I'd be surprised if the OS wasn't designed to fit within a 1GB overhead, or even less. But I wouldn't be surprised, given the leap to 8GB, if Sony didn't err on the side of caution and reserve 2GB for the OS for future features not currently on the horizon.

Similarly, I'm very curious about the dual-OS setup on the Xbox One. For example, I think most people are assuming that PS4's OS (using the assistant ARM chip) will support background uploading, downloading and streaming for games. I wonder if Xbox One's 3GB 'Application OS' will support Xbox One's 'Game OS' in this way, or if the two operating systems are completely segregated, meaning that the 5GB 'Game OS' will need to spend its own resources on things PS4's OS offers "for free". I'm also curious how much of the assumed 5GB of 'Game OS' memory is actually usable. I presume DirectX/the OS itself will have some kind of overhead.
 
I concur they'll most likely stay at 800 MHz - and I really don't see them going for a ~25% GPU clock increase.

  1. Remember, as some people have mentioned before, that this isn't "just" a GPU. It's a complex CPU + GPU + a lot of custom stuff + embedded DRAM chip. Raising the GPU frequencies will affect the temperature and leakage/stability of the entire chip, bringing about tons of follow-up problems. Validating all chips for higher GPU frequencies would be the smallest of their problems.
  2. Furthermore, we're talking about a chip that's probably more than three times the size of Cape Verde (considering that some estimates on this forum tend towards a die size of ~400mm²). I guess striking a binning sweet spot that leaves them with reasonable yields is hard enough to pull off at moderate clocks - let alone pushing the chip to more aggressive limits.
  3. Last but not least, they are openly shooting for a "nearly silent" operation of the entire system. Given the size of the box, they seem to have invested heavily in the cooling solution to pull this off. I don't see them sacrificing near-silent operation (which is a MUST for the kind of all-in-one set-top-system they're going for) on the altar of better performance.
All that being said, I think the Xbox One will do very well at the clocks and performance figures originally leaked. Apart from having disappointed a lot of core gamers with questionable PR choices during the reveal event, Microsoft has some very cool stuff in their box and their basic ideas are solid. A lot of the early disappointment will certainly clear up after E3.

I personally still like Sony's approach better - but Microsoft's basic vision is far from being the huge failure a lot of people are prematurely making it out to be.
Yes, I think that's the case. I am pretty happy with the "vanilla" specs of both consoles; I have been ever since we knew them and they were confirmed by people who actually worked on the hardware.

And it may not be in the direction you think. Everyone seems to be ignoring that the XB1 has a GPU with effectively 32MB of cache, compared to the PS4's in the range of 512KB. So yes, as long as what they are processing in a frame is purely streaming, contiguous data, then the PS4 will easily surpass the XB1. Make the data a bunch of different textures in different memory locations, or GPGPU physics calculations, or complex shaders that aren't just streaming data, and the result may surprise you. In those cases, the PS4 may take up to 10x longer to retrieve a piece of data than the XB1, stalling a non-trivial amount of the GPU.
If the machine could use its fortes and completely avoid its weaknesses, I wonder if Microsoft could transform the Xbox One into a Tile-Based Deferred Rendering (TBDR) machine with their development tools.
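
As a rough feel for why tiling comes up here at all, a small arithmetic sketch of which 1080p render-target setups actually fit into a 32MB on-chip pool, and how many screen tiles a binned approach would need when they don't. The target layouts below are generic examples chosen for illustration, not anything confirmed for actual Xbox One titles:

Code:
import math

WIDTH, HEIGHT = 1920, 1080
ESRAM_BYTES = 32 * 1024 * 1024

def footprint(bytes_per_pixel_total, samples=1):
    """Total render-target footprint in bytes for a full frame."""
    return WIDTH * HEIGHT * bytes_per_pixel_total * samples

def tiles_needed(total_bytes):
    """Minimum number of equal screen tiles so each tile's targets fit on-chip."""
    return math.ceil(total_bytes / ESRAM_BYTES)

cases = [
    ("forward, 1x",     footprint(4 + 4)),            # RGBA8 colour + 32-bit depth/stencil
    ("forward, 4xMSAA", footprint(4 + 4, samples=4)),
    ("fat G-buffer",    footprint(4 * 4 + 4)),         # four RGBA8 targets + depth
]
for name, size in cases:
    print(f"{name}: {size / 2**20:.1f} MB -> {tiles_needed(size)} tile(s)")

A plain 1080p forward target fits with room to spare (~16MB), while 4xMSAA or a fat deferred G-buffer overflows the 32MB and would have to be split into two or more tiles (or spilled to DRAM), which is where the "added complexity for developers" comes in.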
 
We've discussed OS size for PS4 earlier, and back then I think we arrived at 512MB max for a 4GB system; that could easily be bumped to 1GB for an 8GB system. More seems excessive for PS4, considering that's what the iPad has in total, and assuming the HDD is fast enough for some video stream caching and such ... We'll know more come E3, I suspect. But in current conditions, that's probably what I would go for. Those 2GB of extra RAM vs the XBOne could really make a difference for Sony on the gaming side.
 
We've discussed OS size for PS4 earlier, and back then I think we arrived at 512MB max for a 4GB system; that could easily be bumped to 1GB for an 8GB system. More seems excessive for PS4, considering that's what the iPad has in total, and assuming the HDD is fast enough for some video stream caching and such ... We'll know more come E3, I suspect. But in current conditions, that's probably what I would go for. Those 2GB of extra RAM vs the XBOne could really make a difference for Sony on the gaming side.
Well, there is no way in hell the PS4 is going to be on par with the Xbox One when it comes to RAM use for the OS, I'm sure of that, but my expectations range from a worst case in the 2.5GB vs 1.5GB kind of territory to just a little bit worse.

I don't see the PS4 using 3GB, but 1GB seems a bit too low if they want to add video streaming and the PS Eye into the mix. I suspect the reality is somewhere in between...
 
Is GDDR5 the only memory type in the PS4? Is it shared between the ARM chip and the system CPU?

How much memory is needed for the multi-tasking in PS4 (including the ability to pause and resume a game)?

Cerny talked software next: the PS4 can pause and resume mid-game on a system level, allowing players to multitask. Say you're playing Killzone: Shadow Fall and you desperately need to open another application on the PS4 -- that is apparently something doable on the PS4. He said a second chip dedicated to managing uploads and downloads will also help with the system's usability, meaning you can download games in the background or when the system's off.
 
Thanks, I missed that during the presentation. I'm not interested in searching the web, checking Twitter or making Skype calls, or anything else I would rather do on my iPad, but I can see the need to pause a game and watch something for a while, then return to the game otherwise uninterrupted.

Similarly, I'd be thrilled if my missus could be using the PlayStation 4 for watching Netflix or a Blu-ray movie while I continued to game using my Vita. I'd more than happily sacrifice a couple of gig of RAM for that flexibility!
 
Well Cerny says that having eDRAM means "ease of manufacturability" but also "added complexity for developers".

How much different would it be compared to the 360's dev environment, though? I'd argue that the end goal shouldn't be to make the device identical to PC development, but rather to make it close to what devs have the most experience with. If devs have loads of experience making similar embedded-RAM considerations on the 360, then it'd be smart to exploit that experience.
 