Technical Comparison: Sony PS4 and Microsoft Xbox One

It is one bin, but it's also binned not just for speed but for TDP, so we really don't know how power consumption affects clock speed.
If the chip has no salvage bins, TDP is the primary factor. It becomes a matter of how many chips you can get that don't have too many faults and that draw under that limit, with the constraint that they reach the design clock.

We also don't know what yields are acceptable for MS. If 70% of chips come back at 1.6GHz/800MHz but 60% come back at 2GHz/1GHz, they might choose to take that hit for now.
I'm pretty sure those were just hypothetical percentages, and manufacturers don't like to give out those numbers. I do remember the scuttlebutt a bunch of years ago for desktop MPUs that would have considered those percentages to be bad.
For the price range of a console component, Microsoft might be hoping for something that can get 10-20% higher than that range, at least.
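
Just to put made-up percentages like that in perspective, here's a quick cost-per-good-chip sketch (every number below is hypothetical, purely for illustration):

```python
# Back-of-the-envelope: how a yield hit at a higher clock bin affects
# effective cost per shippable chip. All numbers are hypothetical.
wafer_cost = 5000.0        # dollars per processed 28nm wafer (made up)
dice_per_wafer = 160       # candidate APU dice per wafer (made up)

def cost_per_good_die(yield_fraction):
    """Cost of one shippable chip at a given yield."""
    return wafer_cost / (dice_per_wafer * yield_fraction)

for clocks, y in [("1.6 GHz / 800 MHz", 0.70), ("2.0 GHz / 1.0 GHz", 0.60)]:
    print(f"{clocks}: yield {y:.0%} -> ${cost_per_good_die(y):.2f} per good die")

# With these made-up inputs the faster bin costs ~17% more per chip
# (0.70 / 0.60 = 1.17), which is the kind of hit MS would be weighing.
```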

We don't know at what point MS is planning to move to the next process node. Maybe it will be a mix of speeds, or maybe 1.6GHz/800MHz is all we get.
I don't like the idea of consoles with varying factory clock speeds.
 
MS could perhaps accept inferior yields temporarily if they're looking to shrink ASAP. A 20nm version could realistically hit as little as one year after release, possibly even sooner.

I think (hope?) eastmen was saying maybe it'll be one of > 1.6GHz or > 800MHz but not both. Varying clock speeds is a pretty crazy suggestion.
 
Boy, you're really having trouble dealing with Xbox being weaker, aren't you?

Nope, I accepted it being weaker long ago. Now I'm just cheerleading any help it can get :p

If there's no upclock, Durango isn't any worse off. I've made peace with the worst-case scenario.

But it's no different than so many people really hoping Durango didn't get any upgrades pre-May 21st... and from those threads there were a lot who felt really threatened by it!
 
If the chip has no salvage bins, TDP is the primary factor. It becomes a matter of how many chips you can get that don't have too many faults and that draw under that limit, with the constraint that they reach the design clock.
Yes, so it really comes down to what the cooling system can handle. A 100W TDP may have been what they were aiming for, but if the cooling system can cool a 125W TDP, they may have room to go higher.


I'm pretty sure those were just hypothetical percentages, and manufacturers don't like to give out those numbers. I do remember the scuttlebutt a bunch of years ago for desktop MPUs that would have considered those percentages to be bad.
For the price range of a console component, Microsoft might be hoping for something that can get 10-20% higher than that range, at least.
Yes, just an example. I would think MS has a goal they need to hit in regards to units shipped this year, and the more the better, but we really don't know what the difference in yields would be at higher speeds. In fact, MS may actually have to lower clock speeds (or even Sony, for that matter).
I don't like the idea of consoles with varying factory clock speeds.

I meant that they didn't have to increase both CPU and GPU speeds. They could leave the CPU at 1.6GHz and bump the GPU to 1GHz, or they could choose 1.8GHz/900MHz respectively. They have a bunch of choices, and I guess it depends on what gets them the best yield.
 
MS could perhaps accept inferior yields temporarily if they're looking to shrink ASAP. A 20nm version could realistically hit as little as one year after release, possibly even sooner.

I think (hope?) eastmen was saying maybe it'll be one of > 1.6GHz or > 800MHz but not both. Varying clock speeds is a pretty crazy suggestion.

Question, are the CPU and GPU clocks necessarily linked?

I hate to give the naysayers ammo, but looking at Anand's new Jaguar article, he noted Jaguar power usage scales pretty badly: at 2GHz a 4-core uses 25 watts, whereas at 1.5GHz it uses only 15.

[Chart from AnandTech's Kabini/Jaguar article: power consumption vs. clock speed]


On the one hand, Durango's Jaguar doesn't have the power consumption of two CUs to deal with like Kabini does. Yet it could still be an issue.
 
MS could perhaps accept inferior yields temporarily if they're looking to shrink ASAP. A 20nm version could realistically hit as little as one year after release, possibly even sooner.
I wouldn't bet on a new node in its first year beating out a mature 28nm in terms of cost or volume.
28nm hasn't been considered worthy of a console attempt until this year, and 20nm isn't compelling unless competitive pressures really force your hand.

20nm at the foundries, particularly the planar variant, is not looking all that great unless you absolutely need it for reasons other than cost efficiency. It's even less compelling with its poor scaling until the FinFET hybrid node comes out.

The crossover point for AMD and Nvidia is further out than 1 year.

edit:
Question, are the CPU and GPU clocks necessarily linked?
There's likely a number of ratios they can use relative to each other or a base clock.
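
For what it's worth, the rumored clock pairs all fit neatly as multiples of a common reference clock. A quick sketch, assuming a 100MHz base clock (that base is my assumption, not a disclosed spec):

```python
# Illustrative only: express some candidate CPU/GPU clock pairs as integer
# multiples of an assumed 100 MHz reference clock. The 100 MHz base is an
# assumption, not a disclosed spec.
base_mhz = 100

candidates = [(1600, 800), (1600, 1000), (1800, 900), (2000, 1000)]  # MHz

for cpu, gpu in candidates:
    print(f"CPU {cpu / 1000:.1f} GHz = {cpu // base_mhz}x base, "
          f"GPU {gpu} MHz = {gpu // base_mhz}x base "
          f"(CPU:GPU ratio {cpu / gpu:.2f})")
```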

I hate to give the naysayers ammo, but looking at Anand's new Jaguar article, he noted Jaguar power usage scales pretty badly: at 2GHz a 4-core uses 25 watts, whereas at 1.5GHz it uses only 15.
Every design has thresholds where diminishing returns kick in. If a pipeline targets a certain speed range, pushing the clock to the upper end starts eating into timing margins and pushes the limits of what the design can do to counteract variation. To a point, this can be compensated for by punching up the voltage, which then leads to power usage spiking.
 
Question, are the CPU and GPU clocks necessarily linked?

Not at all.

I hate to give the naysayers ammo, but looking at Anand's new Jaguar article, he noted Jaguar power usage scales pretty badly: at 2GHz a 4-core uses 25 watts, whereas at 1.5GHz it uses only 15.

Yes, a 33% increase in frequency is a big deal for power consumption. Power scales (roughly) linearly with frequency but quadratically with voltage, and voltage scales very roughly linearly with frequency. In this case they're probably staying somewhat ahead of that voltage scaling curve with binning to optimize voltage levels for the 2GHz parts. But the power scaling of only the CPU may be significantly worse than that for the whole SoC (including the GPU, which only increases in frequency by 20%).
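
To make that concrete, here's the usual first-order model (dynamic power roughly C·V²·f, with the required voltage creeping up roughly linearly with frequency near the top of the range). The voltages below are made up to illustrate the point, but the result lands close to Anand's Kabini numbers:

```python
# First-order dynamic power model: P ~ C * V^2 * f, with the required
# voltage rising very roughly linearly with frequency near the top of the
# design range. The voltages below are assumptions for illustration, not
# measured Jaguar numbers.
def relative_power(f_ghz, v_volts, f_ref=1.5, v_ref=1.0):
    """Power relative to the reference clock/voltage operating point."""
    return (f_ghz / f_ref) * (v_volts / v_ref) ** 2

# Pretend 1.5 GHz needs 1.00 V and 2.0 GHz needs ~1.12 V (made-up values).
scale = relative_power(2.0, 1.12)
print(f"Predicted power scaling: {scale:.2f}x")   # ~1.67x
print(f"Anand's Kabini figures:  {25 / 15:.2f}x") # 25 W / 15 W = 1.67x
```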

I wouldn't bet on 2GHz for the CPU cores, and I think the TDP would be better spent on GPU if they had such a choice. Although it could be useful to have 2GHz available if some of the cores are powered down.
 
I wouldn't bet on a new node in its first year beating out a mature 28nm in terms of cost or volume.
28nm hasn't been considered worthy of a console attempt until this year, and 20nm isn't compelling unless competitive pressures really force your hand.

There are any number of factors that could have dictated console release schedules. I wouldn't assume that node availability/maturity alone was driving it. You may as well say 65nm, 55nm, etc. were never considered worthy of a new console attempt either...

20nm at the foundries, particularly the planar variant, is not looking all that great unless you absolutely need it for reasons other than cost efficiency. It's even less compelling with its poor scaling until the FinFET hybrid node comes out.

I've heard a lot of complaints about 20nm but TSMC claims up to 1.9x better density..

Or did you mean performance scaling? I've heard rumors that there are almost no performance benefits (TSMC is saying something very different) but I'm going to wait for something tangible before I really believe that. The FinFET node is apparently going to have the same dimensions.

The crossover point for AMD and Nvidia is further out than 1 year.

I said 1 year from the release of Xbox One. When is that exactly? I figured it was pretty late this year.

I'm expecting to see some products using TSMC 20nm at some point in H1 2014. nVidia and AMD have traditionally been some of the first customers, including with 28nm...
 
Question, are the CPU and GPU clocks necessarily linked?

I hate to give the naysayers ammo, but looking at Anand's new Jaguar article, he noted Jaguar power usage scales pretty badly: at 2GHz a 4-core uses 25 watts, whereas at 1.5GHz it uses only 15.

[Chart from AnandTech's Kabini/Jaguar article: power consumption vs. clock speed]


On the one hand, Durango's Jaguar doesn't have the power consumption of two CUs to deal with like Kabini does. Yet it could still be an issue.

The clock speed of the GPU also went up.
 
MS could perhaps accept inferior yields temporarily if they're looking to shrink ASAP. A 20nm version could realistically hit as little as one year after release, possibly even sooner.

How inferior? I think a lot of people are expecting both Sony and MS to be fairly supply limited early on. It would be pretty stupid to upclock the Xbox One APU if it means PS4s are out-shipping them 2:1 for a whole year, especially if it's just to achieve a fractional difference in how much weaker the GPU is. How many of the 25 million paying Gold members are going to "wait their turn" for an Xbox One to become available in a year or 18 months instead of saying, "fuck it" and just buying a PS4?
 
Not at all.

That's all I needed then. If going to 1GHz on the GPU required going to 2GHz on the CPU, I just assumed it could have been problematic, based on the Anand thing.

I mean, of course upclocking the GPU will eat power anyway; now throw the CPU on top of that...
 
How inferior? I think a lot of people are expecting both Sony and MS to be fairly supply limited early on. It would be pretty stupid to upclock the Xbox One APU if it means PS4s are out-shipping them 2:1 for a whole year, especially if it's just to achieve a fractional difference in how much weaker the GPU is. How many of the 25 million paying Gold members are going to "wait their turn" for an Xbox One to become available in a year or 18 months instead of saying, "fuck it" and just buying a PS4?

At the same time, the PS4 could be very supply-limited due to the GDDR5 chips it's using. Even the difference between eSRAM and more ROPs/CUs could cause yields to be greatly different, and who knows in which company's favor.
 
Nope, I accepted it being weaker long ago. Now I'm just cheerleading any help it can get :p

If there's no upclock, Durango isn't any worse off. I've made peace with the worst-case scenario.

But it's no different than so many people really hoping Durango didn't get any upgrades pre-May 21st... and from those threads there were a lot who felt really threatened by it!

The only people who had hope were the fanboys. Trusting that VGLeaks was correct was being a realist.
 
How inferior? I think a lot of people are expecting both Sony and MS to be fairly supply limited early on. It would be pretty stupid to upclock the Xbox One APU if it means PS4s are out-shipping them 2:1 for a whole year, especially if it's just to achieve a fractional difference in how much weaker the GPU is. How many of the 25 million paying Gold members are going to "wait their turn" for an Xbox One to become available in a year or 18 months instead of saying, "fuck it" and just buying a PS4?

I'm not making a solid argument here, just saying what could be possible scenarios. There's no real point even trying to make an actual argument when we have no numbers on anything. If the consoles are supply limited due to the SoCs then no, I wouldn't expect MS to sacrifice very many (if any) of them to hit higher targets than originally planned. If the sacrifice isn't availability but just average cost per unit then it may instead be something MS is willing to swallow to some extent.
 
There are any number of factors that could have dictated console release schedules. I wouldn't assume that node availability/maturity alone was driving it. You may as well say 65nm, 55nm, etc. were never considered worthy of a new console attempt either...
Console chips were shrunk to 65nm and 32nm, many years into each node's life span.

I've heard a lot of complaints about 20nm but TSMC claims up to 1.9x better density..
Wafer costs are going to be higher, and yields are going to be considered poor, with how poor they look being inversely related to how long the node has had to mature and to the margins of the product being built on it.
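
Rough sketch of why the 1.9x density claim doesn't automatically mean cheaper chips; every wafer cost and yield below is a placeholder, not real foundry pricing:

```python
# Rough cost-per-transistor comparison between a mature node and a new one.
# Wafer costs and yields here are placeholders, not real foundry numbers.
def cost_per_transistor(wafer_cost, relative_density, yield_fraction):
    # relative_density: transistor density normalized to 28nm = 1.0
    return wafer_cost / (relative_density * yield_fraction)

mature_28nm = cost_per_transistor(5000.0, 1.0, 0.85)
early_20nm  = cost_per_transistor(7500.0, 1.9, 0.55)  # TSMC's 1.9x density claim

print(f"28nm (mature): {mature_28nm:.0f} (arbitrary units)")
print(f"20nm (early):  {early_20nm:.0f} (arbitrary units)")
# With these placeholders the new node is ~22% *more* expensive per
# transistor until wafer cost drops and yields mature.
```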

Or did you mean performance scaling? I've heard rumors that there are almost no performance benefits (TSMC is saying something very different) but I'm going to wait for something tangible before I really believe that. The FinFET node is apparently going to have the same dimensions.
Performance scaling is going to be comparatively weak. Leakage is going to be a problem. It gets worse unless you make more significant changes to the shrinking transistors, which everyone but Intel usually does a node too late.

The hybrid node may have some density improvement. On top of general process improvement, there are some structures, like SRAM, that can tolerate less than ideal metal layer scaling.
Intel stepped off the gas for a few nodes for its metal scaling, but highly customized and regular cells like Intel's caches kept shrinking pretty well.

I said 1 year from the release of Xbox One. When is that exactly? I figured it was pretty late this year.
Full product range crossover will probably take longer than that for AMD and Nvidia.
AMD is finally releasing a 28nm APU, a year after the paper launch of Tahiti.

If we're going by foundry roadmaps, 28nm is a late 2010 process node.
Going by revenue contribution for TSMC, 28nm is a late 2011 node. This means the first 28nm console chip will be out 2 years later.

I'd be more comfortable with 2+ years for a 20nm console chip, but I'm a pessimist.
I'm not sure if they'd want to wait for the hybrid node.

I'm expecting to see some products using TSMC 20nm at some point in H1 2014. nVidia and AMD have traditionally been some of the first customers, including with 28nm...
An FPGA (Altera?) will probably be the first, by six or more months. A limited subset of GPU production may follow, with actual production crossover being later.
 
What's the basis for DF and Anandtech's assumption that the X1's clock speeds are 1.6GHz and 800MHz for the CPU/GPU? Their entire set of conclusions regarding performance seems premised on that assumption... but MS never actually disclosed that. It's a bit strange they didn't disclose it too, imho.

After all, they made a point to highlight every other area of their specs that was at parity with PS4 in an effort to draw parallels there instead of contrasts. So...why wouldn't they want to likewise include the (assumed) fact that their processors ran at the same clock?

Scott mentioned it briefly, but if MS actually did improve the bandwidth to the GPU ("almost 200GB/s" as per MS) then surely that, in conjunction with their avoidance of clock speed info, might indicate the extra data flow would be going into a GPU with slightly higher clocks than the leaks suggest?

Wtf? Did you bother to read?

Anand:
There’s no word on clock speed, but Jaguar at 28nm is good for up to 2GHz depending on thermal headroom. Current rumors point to both the PS4 and Xbox One running their Jaguar cores at 1.6GHz, which sounds about right...
On the graphics side... Microsoft can’t make up the difference in clock speed alone (AMD’s GCN seems to top out around 1GHz on 28nm), and based on current leaks it looks like both MS and Sony are running their GPUs at the same 800MHz clock. The result is a 33% reduction in compute power, from 1.84 TFLOPs in the PS4 to 1.23 TFLOPs in the Xbox One. We’re still talking about over 5x the peak theoretical shader performance of the Xbox 360, likely even more given increases in efficiency thanks to AMD’s scalar GCN architecture (MS quotes up to 8x better GPU performance).
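
For reference, those TFLOPs figures fall straight out of the leaked CU counts and clocks (18 CUs for PS4, 12 for Xbox One, both at 800MHz; the CU counts come from the leaks, not from the quote above):

```python
# Where the TFLOPs figures come from, using the leaked specs: GCN has
# 64 lanes per CU and does 2 FLOPs per lane per clock via fused multiply-add.
def gcn_peak_tflops(compute_units, clock_ghz, lanes_per_cu=64, flops_per_lane=2):
    return compute_units * lanes_per_cu * flops_per_lane * clock_ghz / 1000.0

ps4 = gcn_peak_tflops(18, 0.8)   # -> 1.8432 TFLOPs
xb1 = gcn_peak_tflops(12, 0.8)   # -> 1.2288 TFLOPs
print(f"PS4: {ps4:.2f} TFLOPs, Xbox One: {xb1:.2f} TFLOPs, "
      f"deficit: {1 - xb1 / ps4:.0%}")   # 33%
```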
 
I'm not making a solid argument here, just saying what could be possible scenarios. There's no real point even trying to make an actual argument when we have no numbers on anything. If the consoles are supply limited due to the SoCs then no, I wouldn't expect MS to sacrifice very many (if any) of them to hit higher targets than originally planned. If the sacrifice isn't availability but just average cost per unit then it may instead be something MS is willing to swallow to some extent.

Sure, none of us can quote real figures, but my point is you have to look at all the ramifications of a decision like that. In terms of manufacturing this console, the APU production is clearly going to be the bottleneck. Everything else is a commodity. They can't make a choice to throw away millions of consoles lightly. A 50MHz overclock is a meaningless gesture. A 20% overclock could dramatically decrease their yields, and they're nowhere near matching PS4 performance.
 
I'm expecting to see some products using TSMC 20nm at some point in H1 2014. nVidia and AMD have traditionally been some of the first customers, including with 28nm...

If it's good enough for Apple's A7 in Q1 2014, why shouldn't it be good enough for a console with far less demand in its early stage? Wouldn't surprise me if the whole release window is forced more by the $400M NFL deal than by any technical reasons.
 
I did some research and looked at ATI/NVIDIA's presentations, and they pretty much state that modern GPU designs hide latency by having lots of threads, and that modern GPUs with hundreds of processors are, by design, inherently good at hiding latency.

From these I get the impression that eSRAM is there purely to solve bandwidth issues, much like how it was done on the 360. The low latency is a bonus, but in the big picture the low latency of the eSRAM doesn't matter much, if at all.

Is this correct? Or would somebody please take my argument apart? :smile:
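
Here's the back-of-the-envelope version of the argument, if it helps; the latency and per-wavefront work numbers are rough ballpark figures, not Durango/Orbis specs:

```python
# Rough illustration of GPU latency hiding: if a wavefront stalls on a
# memory access for ~300 cycles and only has ~10 cycles of ALU work queued
# between accesses, the scheduler needs roughly 30 wavefronts' worth of
# independent work in flight to keep the ALUs busy. Ballpark numbers only.
memory_latency_cycles = 300     # illustrative DRAM round-trip latency
alu_cycles_between_loads = 10   # illustrative work available per wavefront

wavefronts_needed = memory_latency_cycles / alu_cycles_between_loads
print(f"~{wavefronts_needed:.0f} wavefronts of independent work needed "
      f"to cover the stall")

# The takeaway: with enough threads in flight, raw latency (GDDR5 vs eSRAM)
# mostly washes out for graphics-style workloads; bandwidth is the thing
# you can't hide your way around.
```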

This isn't the versus thread, so I'll make this brief: do you (guys) think that the PS4 has gone about this problem in a better or worse way with its 8 ACEs and 64 compute threads? Would spreading the load across those threads hide the latency of GDDR5 and negate the supposed latency advantages Durango has with eSRAM?

If so, which is going to be the better balance?
- Liverpool: more compute resources, higher bandwidth from GDDR5, and 4 times the compute threads?
- Or Durango: spending considerable transistors on eSRAM, move engines, and lower-performance/lower-power DDR3, with fewer transistors on the GPU itself?
- Which is the best SoC value for outright performance?
- Which will prove more cost-effective in the short/long term?

Complicated questions, I know, that can't really be answered right here and now, but the rather clever people here might be able to throw some light on this quandary and, in doing so, offer up an interesting discussion.
 