Technical Comparison Sony PS4 and Microsoft Xbox

Status
Not open for further replies.
At this point, the good ones may have multiple sources in the inner circle to help confirm the specs. We should have a more solid picture as we approach launch.

I would expect 800/1.6. I'm just hanging on to a maybe 20% chance for more.
 
Any sign of a secondary processor for the Xbox One?


If not, the PS4's secondary processor taking work off of the main processor is another advantage over the Xbox One specs.

They're aiming for the exact same lower power states with WoL (+ WoV) support along with background/standby downloads and updates. It's a pretty safe bet they both have a similar solution (a low-power, possibly ARM-based chip) for that. Even if MS didn't, those resources would in all likelihood simply come out of the reserve set aside for the Apps OS. I doubt it's an advantage that would manifest itself in any meaningful way. Besides, does the Jaguar have the necessary power gating to do what they aim to do while in a standby mode? Can the APU even do that (I honestly don't know)? If not, that's a pretty good indicator of a secondary low-power chip.
 
Any sign of a secondary processor for the Xbox One?


If not, the PS4's secondary processor taking work off of the main processor is another advantage over the Xbox One specs.

Nope, AFAIK besides the ARM security core what's in vgleaks is all they have.
Though SHAPE can take care of audio processing entirely, so that effectively frees up a core or two which PS4 has to use for audio.

However, sadly it truly seems like MS isn't caring too much about performance. If the chip is as big as it may be, and given the concerns about heat, I'd be kind of surprised if they bothered with any overclocking. Pathetically, they seem to be in play-it-safe, don't-care-about-performance mode.

Yup, Richard thinks so too:
Microsoft says it is comprised of five billion transistors (for comparison Nvidia's GTX Titan has just over seven billion dedicated to GPU power alone) - an astonishingly high amount which explains the rather conservative clocks - believed to be 1.6GHz for the cluster of AMD Jaguar CPU cores and 800MHz for the GPU. Typically, the larger the processor, the more difficult it is to keep cool, necessitating lower running speeds. In theory, Microsoft could run the chip faster and claw back some of the performance deficit against PS4 but in practice, this would impact the amount of useable chips Microsoft is able to fabricate, sending production costs (not to mention potential failure rate) spiralling. Clock-speed is pretty much the only major variable in the spec that remains wholly unconfirmed, but we're pretty confident that the speed of the chip remains unchanged.
http://www.eurogamer.net/articles/digitalfoundry-spec-analysis-xbox-one

It's probably the biggest SoC ever made
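The clock-speed trade-off Richard describes can be put in rough numbers. A minimal sketch, assuming the leaked CU counts (12 for Durango, 18 for Orbis) and the standard GCN peak-FLOPS formula; none of these figures are confirmed specs:

```python
# Rough sketch of the compute gap discussed above. CU counts and clocks
# are the leaked/rumoured figures, not confirmed specs.

def gflops(cus, mhz, alus_per_cu=64, ops_per_alu=2):
    """Peak single-precision GFLOPS for a GCN-style GPU."""
    return cus * alus_per_cu * ops_per_alu * mhz / 1000.0

xbox_800  = gflops(12, 800)   # leaked Durango figure: 1228.8 GFLOPS
ps4       = gflops(18, 800)   # leaked Orbis figure:   1843.2 GFLOPS
xbox_1000 = gflops(12, 1000)  # hypothetical bump:     1536.0 GFLOPS

print(xbox_800, ps4, xbox_1000)
```

A 1GHz bump would close maybe half the raw-compute gap, which is what "claw back some of the performance deficit" amounts to.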
 
Nope, AFAIK besides the ARM security core what's in vgleaks is all they have.
Though SHAPE can take care of audio processing entirely, so that effectively frees up a core or two which PS4 has to use for audio.

I didn't realise we had details on the PS4's audio block, because theres something there and to dedicate silicon to a fixed function decode which takes a trivial amount of CPU power compared to the actual processing wouldn't be the greatest move in history.
 
I don't know why 800MHz is considered nearly a done deal by a lot of people, including news sites. HD 7790 with Bonaire clocks at 1GHz. I consider this significant because it's a unique die, but there's no < 1GHz (let alone 800MHz) part using it. That should mean there are too few parts that can't hit the clock speed to be worth salvaging for weaker models. Compared to Xbox One's leaked specs it has more CUs (14 vs 12), but still a pretty manageable 85W TDP. If current-generation discrete GPUs can do it, I can't see why this SoC can't.

It's entirely possible that it was planned at 800MHz but MS bumped it up to 1GHz to be more competitive with Sony. It could have meant augmenting the cooling, but that's a fairly realistic late game change. This is a more honest fit with the current bandwidth number if eSRAM bandwidth scaled up, and it makes the bandwidth situation all around less bleak.
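To put the bandwidth point in numbers: assuming the 1024-bit eSRAM interface from the vgleaks figures (an assumption, not a confirmed spec), eSRAM bandwidth scales linearly with the GPU clock:

```python
# Hedged sketch of how the leaked eSRAM bandwidth scales with GPU clock.
# The 1024-bit bus width is taken from the vgleaks numbers.

def esram_bw(clock_mhz, bus_bits=1024):
    """eSRAM bandwidth in GB/s: bytes per cycle times clock in GHz."""
    return bus_bits / 8 * clock_mhz / 1000.0

print(esram_bw(800))   # 102.4 GB/s at the leaked 800MHz
print(esram_bw(1000))  # 128.0 GB/s with a 1GHz bump
```

That 128 GB/s, plus the DDR3 pool, is a far more honest route to a headline bandwidth number than adding internal CPU-GPU links.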
 
I don't know why 800MHz is considered nearly a done deal by a lot of people, including news sites. HD 7790 with Bonaire clocks at 1GHz. I consider this significant because it's a unique die, but there's no < 1GHz (let alone 800MHz) part using it. That should mean there are too few parts that can't hit the clock speed to be worth salvaging for weaker models. Compared to Xbox One's leaked specs it has more CUs (14 vs 12), but still a pretty manageable 85W TDP. If current-generation discrete GPUs can do it, I can't see why this SoC can't.

It's entirely possible that it was planned at 800MHz but MS bumped it up to 1GHz to be more competitive with Sony. It could have meant augmenting the cooling, but that's a fairly realistic late game change. This is a more honest fit with the current bandwidth number if eSRAM bandwidth scaled up, and it makes the bandwidth situation all around less bleak.

Discrete video cards and APUs are completely different beasts.

I just had a look through the AMD APU wiki page (anecdotal, I know) and not a single GPU in their APUs is clocked at or near 800MHz.

I think it's safe to say that next-gen console APU GPUs won't really be hitting 1GHz.
 
Discrete video cards and APUs are completely different beasts.

The SoCs/APUs in the upcoming consoles are also nothing like the ones you'd find in notebooks or laptops, so you may as well consider them a third category. But the design constraints of the constituent parts should be the same or similar.

I just had a look through the AMD APU wiki page (anecdotal, I know) and not a single GPU in their APUs is clocked at or near 800MHz.

Actually desktop Trinity clocks at up to 800MHz. Desktop Richland, almost out (and certainly long before Xbox One), will clock up to 844MHz. But you can also look and find none that have anywhere close to 12, much less 18, GCN CUs. You can see they have totally different market targets. The Bobcat/Jaguar-based ones are aiming for much lower TDP than what you can put in a console. The highest-end desktop parts have a TDP closer to, but still potentially lower than, the highest you can put in a console part. But they have to spend a lot more of it (and die space in general) on CPUs with much stronger single-threaded performance.

I think it's safe to say that APU GPUs won't really be hitting 1GHz.

Can you give any kind of technical reason why this would be the case? The max GPU clock doesn't go down just because there's other stuff on the die.
 
The SoCs/APUs in the upcoming consoles are also nothing like the ones you'd find in notebooks or laptops, so you may as well consider them a third category. But the design constraints of the constituent parts should be the same or similar.



Actually desktop Trinity clocks at up to 800MHz. But you can also look and find none that have anywhere close to 12 much less 18 GCN CUs. You can see they have totally different market targets. The Bobcat/Jaguar based ones are aiming for much lower TDP than what you can put in a console. The highest end desktop parts have a TDP closer to but still potentially lower than the highest you can put in a console part. But they have to spend a lot more of it (and die space in general) on CPUs with much stronger single threaded performance.



Can you give any kind of technical reason why this would be the case?

If the entire thing was designed with an 800MHz GPU and 1.6GHz CPU in mind, doesn't that mean that the thermals and also the power supply (a higher clock is probably going to need a higher voltage) are going to need to be redone? This isn't something they can change overnight; they will really need to test it thoroughly. Do they have time to do so?
 
If the entire thing was designed with an 800MHz GPU and 1.6GHz CPU in mind, doesn't that mean that the thermals and also the power supply (a higher clock is probably going to need a higher voltage) are going to need to be redone? This isn't something they can change overnight; they will really need to test it thoroughly. Do they have time to do so?

There's no reason to automatically assume that their original design (VRMs, PSU, etc.) couldn't handle a higher voltage, if there even was one. Same for the increased power load. The cooling may have needed to be upgraded, but who's to say that it wasn't? An overnight change it may not be, but it could easily have been done by the time the last leaks were dated. It could have been a bigger change than Sony's memory bump, but it's not totally unrealistic, unlike any kind of change to the SoC design.

Even if the SoC is really about 100W, that's not outside the realm of possibility here. The 85W Bonaire figure includes 1GB of 128-bit GDDR5; chances are good that the on-chip eSRAM would consume a lot less (by virtue of its basic technology, and also keeping the interfaces on chip rather than going through relatively heavy controllers). Add to that the reduction of two CUs, and 70W or so isn't out of the question, which would make the entire thing using 100W reasonably realistic.
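The arithmetic above can be sketched out. Every number here is an assumption pulled from this discussion (the GDDR5 and two-CU savings especially are guesses), not a spec:

```python
# Back-of-the-envelope version of the estimate above. All figures are
# assumptions from the surrounding discussion, not confirmed specs.

bonaire_tdp   = 85  # W, HD 7790 board power incl. 1GB 128-bit GDDR5
gddr5_saving  = 10  # W, assumed saving from on-chip eSRAM vs GDDR5
two_cu_saving = 5   # W, assumed saving from dropping 14 CUs to 12

gpu_estimate = bonaire_tdp - gddr5_saving - two_cu_saving
cpu_estimate = 30   # W, Anand's 8-core Jaguar estimate cited below

print(gpu_estimate)                 # ~70W for the GPU part at 1GHz
print(gpu_estimate + cpu_estimate)  # ~100W total SoC
```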

Of course there's also the possibility of a number in between the two like 900MHz.

I'm not arguing that this definitely happened, I just don't know why people are denying the possibility so strongly. 200GB/s derived by adding some internal bandwidth from the CPU to the GPU is such a load of BS that MS would have some awful nerve to think they could get away with it.
 
Can you give any kind of technical reason why this would be the case? The max GPU clock doesn't go down just because there's other stuff on the die.

If it were Llano, I'd probably say yes to this happening, but Llano was a particularly bad case with SOI and very disparate design points for the CPU and GPU silicon.
The circuit design and preferred transistor properties between the two halves of that APU did not play well together, and the resulting chip was wildly variable in terms of power dissipation and clocks.

Jaguar is highly synthesizable and should be using methodologies much closer to the GPU's, but maybe it's still a problem. If Microsoft has a TDP cap of ~100W, Jaguar does look like it has too many bins that draw too much per core to fit 8 in the gap between Bonaire and 100W.
 
We don't really know why Llano's clocks sucked. It could have just as well been that GF's 32nm process was grossly immature. GF's 45nm process was grossly immature when the first products showed up on it too; or to put it another way, their performance got a lot better over time. It was just not as obvious because even the first products still had better performance than the 65nm ones. With scaling improvements getting worse and worse by 32nm you could no longer expect an immature process to automatically beat its mature predecessor.

I explained why in the previous post, but I don't think it's fair to estimate 85W for a hypothetical 1GHz GPU. Kabini is 25W at 4x2GHz + a decent amount of GPU. Going by how much the GPU is shown to be capable of using in Temash that could easily be 15W for just the CPU part, or 30W for 8 cores. Scale down to 1.6GHz and it'll be lower. 20-25W for the CPUs is not outside the realm of possibility, but even 30W like Anand estimated wouldn't necessarily break the bank.
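A rough version of that scaling, with the 15W CPU-only share of Kabini's 25W TDP as the key assumption, and simple linear-in-clock power scaling:

```python
# Rough scaling of the Kabini-based estimate in the post above.
# The 15W CPU-only share is an assumption; the rest is simple
# linear-in-clock scaling (ignores any voltage reduction).

kabini_cpu_share = 15.0                  # W, assumed CPU part of 25W 4-core Kabini
eight_core_2ghz  = kabini_cpu_share * 2  # 30W for 8 cores at 2GHz

# Scale linearly down from 2GHz to the rumoured 1.6GHz
eight_core_16 = eight_core_2ghz * 1.6 / 2.0

print(eight_core_2ghz)  # 30.0
print(eight_core_16)    # 24.0
```

That 24W figure is why 20-25W for the CPU cluster looks plausible, and even Anand's 30W estimate wouldn't break a ~100W budget.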

Besides, that 100W number is pretty unsubstantiated right now (didn't it come from the same people who said it was 40nm?)
 
If it were Llano, I'd probably say yes to this happening, but Llano was a particularly bad case with SOI and very disparate design points for the CPU and GPU silicon.
The circuit design and preferred transistor properties between the two halves of that APU did not play well together, and the resulting chip was wildly variable in terms of power dissipation and clocks.

Jaguar is highly synthesizable and should be using methodologies much closer to the GPU's, but maybe it's still a problem. If Microsoft has a TDP cap of ~100W, Jaguar does look like it has too many bins that draw too much per core to fit 8 in the gap between Bonaire and 100W.

Kabini at the high end is 25W: 4 cores at 2GHz with 128 Radeon cores at 600MHz.


If the APU is 100W there could be more headroom. I wouldn't imagine the cores use much power; weren't Bobcats about 1W per core?

I'm wondering if we will see MS go with a bump on the CPU but keep the GPU at 800MHz.
 
If Microsoft has a TDP cap of ~100W, Jaguar does look like it has too many bins that draw too much per core to fit 8 in the gap between Bonaire and 100W.

Anand estimated the 8-core cat at 30 watts max.

As Exo went through, 70W doesn't seem totally unreasonable for the GPU part with 12 CUs at 1GHz (because you remove the 1GB of GDDR5).
 
We don't really know why Llano's clocks sucked. It could have just as well been that GF's 32nm process was grossly immature.

The process was immature, but Llano's TDP bands were pretty twitchy with regards to a few hundred MHz for the CPU, even for equivalent GPUs.
Trinity was more graceful in its scaling.
 
If MS really did hit 1GHz, then I'm happy. Durango is fine; pack up, go home, get ready for next gen.

Plus then we could brag about 200GB/s. Damn that's lightning fast and >PS4! Sounds so impressive! (lol)
 
Kabini at the high end is 25W: 4 cores at 2GHz with 128 Radeon cores at 600MHz.
That's one bin, which is the problem if you're trying to design for a platform instead of one SKU of many.

If the apu is 100w there could be more head room. I wouldn't imagine the cores to be using much power weren't bobcats about 1w each core ?
Of the various SKUs, I wouldn't count on any but the stripped-down Hondo chips getting near that. A system designed to use a 1W Bobcat core would be tossing the vast majority of chips manufactured.
 
PS4 also has an Audio processor.

Doubt it can do as much as SHAPE, probably just an audio decoder.
Vgleaks has it as something that can decode 200 MP3 streams simultaneously:
http://www.vgleaks.com/world-exclusive-ps4-in-deep-first-specs/

If MS really did hit 1GHz, then I'm happy. Durango is fine; pack up, go home, get ready for next gen.

Plus then we could brag about 200GB/s. Damn that's lightning fast and >PS4! Sounds so impressive! (lol)

Boy, you're really having trouble dealing with Xbox being weaker, aren't you? Don't worry, you'll get over it (I did). Otherwise, I suppose there's always next, next gen.
 
Doubt it can do as much as SHAPE, probably just an audio decoder.
Vgleaks has it as something that can decode 200 MP3 streams simultaneously:
http://www.vgleaks.com/world-exclusive-ps4-in-deep-first-specs/

huh?

it says
audio processing unit, ~200 concurrent MP3 streams

That says nothing about decoding.

“For example, by having the hardware dedicated unit for audio, that means we can support audio chat without the games needing to dedicate any significant resources to them. The same thing for compression and decompression of video. The audio unit also handles decompression of a very large number of MP3 streams for in-game audio,” he continued.
 
It is one bin, but it's also binned not just for speed but for TDP, so we really don't know how power consumption affects clock speed.

Remember Kabini is a laptop part and won't have a cooling solution anywhere close to what we saw yesterday in the Xbox One.

We also don't know what yields are acceptable for MS. If 70% of chips come back at 1.6/800 but 60% come back at 2GHz/1GHz, they might choose to take that hit for now. We don't know at what point MS is planning to move to the next process node. Maybe it will be a mix of speeds, or maybe 1.6/800 is all we get.

Hopefully we hear about clock speeds at E3.
 