The new PS3 sales pitch: Better gaming, better technology, better value

overclocked said:
Cell is cheaper *now* compared to XeCPU *then* in per-unit cost, AFAIK.
It's a fact that it will always have a higher raw silicon cost, of course.
Sorry, what is the basis for this? Any actual figures? I'm not sure what the point about "now" and "then" is - we aren't comparing now and then when talking about price drops.

The primary area where Sony has cost savings relative to Xbox is the margin that the third-party fab takes from MS.

overclocked said:
In later iterations of PS3's Cell they can likely rearrange the layout and cut the eighth SPE for further cost savings.
I would see that as a reasonable scenario as well (assuming it doesn't cause significant layout issues). But that's still probably only around 10% of the overall die.
 
nonamer said:
Just because a chip is bigger doesn't mean it will cost more. GPUs are much bigger than CPUs, but they cost less because they can get away with cheaper manufacturing methods, are not pushed as aggressively, and can have larger amounts of redundancy.
Errr, you appear to be mixing things up here, based on what appears to be the PC model. On the PC, CPUs will often end up a process node ahead of graphics processors, but that's because the CPU manufacturers own their own fabs and push to the latest lithographies as soon as they can; generally speaking, when they are on the same process they are using the same technologies.

The cost differences are there purely because CPU vendors have massive margins on their products, while the graphics vendors operate on incredibly slim margins by comparison.

Finally, redundancy has only turned up in graphics processors to any degree since the introduction of DX9. This is a model that follows from the CPU manufacturers, who have done it for years, and the current graphics implementations are akin to a sledgehammer method (hacking off a quarter of the chip if there is a defect).

Comparing CPUs and graphics processors in this way is meaningless when we are looking at consoles, because the monetary model changes (console makers aren't looking for anything like the margins CPU vendors are), the lithography model is different (Sony will probably push RSX and Cell towards the same nodes at similar times), and specifically with PS3 the redundancy model looks different (it looks to me like there is a slim element of redundancy in RSX, but not so with Cell).
 
I've understood that heat is a big factor in CPU cost, and the Cell SPEs' low energy consumption is definitely a factor there, as it is for GPUs, which typically don't run at nearly as high frequencies as CPUs do. Cell was specifically designed to cope with the limitations of power consumption and heat production.
 
Arwin said:
I've understood that heat is a big factor in CPU cost, and the Cell SPEs' low energy consumption is definitely a factor there, as it is for GPUs, which typically don't run at nearly as high frequencies as CPUs do. Cell was specifically designed to cope with the limitations of power consumption and heat production.

And yet it only runs at 3.2 GHz in PS3, while we've seen shmoo plots indicating that it could go a lot higher if the power envelope permitted.

Cheers
 
Gubbi said:
And yet it only runs at 3.2 GHz in PS3, while we've seen shmoo plots indicating that it could go a lot higher if the power envelope permitted.

Cheers

You might want to read a little more on this chip. Where the PPE uses 80 W, the SPEs use a mere 5 W each. The idea was to decrease power consumption and increase processor function by means other than merely increasing the clock.
 
Arwin said:
You might want to read a little more on this chip. Where the PPE uses 80 W, the SPEs use a mere 5 W each. The idea was to decrease power consumption and increase processor function by means other than merely increasing the clock.

You probably want to count the power used by the EIB + DMA controller and the I/O pads as well.

Cheers
 
Arwin said:
You might want to read a little more on this chip. Where the PPE uses 80 W, the SPEs use a mere 5 W each. The idea was to decrease power consumption and increase processor function by means other than merely increasing the clock.

Rather incorrect: http://realworldtech.com/page.cfm?ArticleID=RWT072405191325&p=5

Previously, power consumption of the CELL processor was estimated to be in the range of 50 to 80 W at 4 GHz and 1.1V. In subjecting the estimates to the 46% scaling factor, we further estimate that the power consumption of the CELL processor should be in the range of 27 to 43 W at 3.2 GHz and 0.9V, and 113 to 181W at 5.6 GHz and 1.4V.

The Cell processor is a surprisingly cool processor at 3.2 GHz.
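As a sanity check, the quoted estimates are consistent with the first-order dynamic-power rule P ∝ f·V². A minimal sketch in Python, using only the clocks and voltages quoted above (leakage is ignored, so this only verifies the article's arithmetic, not real silicon behavior):

```python
def scaled_power(p_watts, f_old_ghz, v_old, f_new_ghz, v_new):
    """First-order dynamic power scaling: P is proportional to f * V^2.
    Ignores leakage (already significant at 90 nm), so this is only a
    rough consistency check on the article's numbers."""
    return p_watts * (f_new_ghz / f_old_ghz) * (v_new / v_old) ** 2

# Baseline estimate from the article: 50-80 W at 4 GHz and 1.1 V.
for p in (50, 80):
    lo = scaled_power(p, 4.0, 1.1, 3.2, 0.9)   # 3.2 GHz / 0.9 V
    hi = scaled_power(p, 4.0, 1.1, 5.6, 1.4)   # 5.6 GHz / 1.4 V
    print(round(lo), round(hi))   # prints 27 113, then 43 181
```

The 3.2 GHz / 0.9 V point works out to roughly 54% of the baseline power, i.e. the article's "46% scaling factor", and reproduces both the 27-43 W and 113-181 W ranges.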
 
nonamer said:
Rather incorrect: http://realworldtech.com/page.cfm?ArticleID=RWT072405191325&p=5

The Cell processor is a surprisingly cool processor at 3.2 GHz.
Did you actually read that? It's from internal tests on one active SPE (results subtracted from a two-SPE run). What do you think is going to happen when the other six are active, not to mention the other key parts of the chip? More heat! That chart is quite literally meaningless. Let alone the author's feeble attempt to calculate the power consumption assuming Sony ran Cell at the very bottom of its voltage range for 3.2 GHz, which they would never do for fear of stability issues.
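For reference, the estimation method being criticized here (per-SPE power taken as the difference between a two-SPE and a one-SPE run, then extrapolated linearly) can be sketched as follows; the wattages below are illustrative placeholders, not the article's measurements:

```python
def marginal_spe_power(power_two_spe, power_one_spe):
    """Per-SPE power estimated as the difference between a run with two
    SPEs active and a run with one SPE active."""
    return power_two_spe - power_one_spe

def extrapolated_chip_power(one_spe_run_power, per_spe, extra_spes):
    """Naive linear extrapolation: the one-SPE baseline run (which already
    includes PPE, EIB, I/O, and one SPE) plus a fixed increment per extra
    SPE. This is exactly the step being objected to: it assumes the
    increment stays linear with all SPEs active on a real workload."""
    return one_spe_run_power + per_spe * extra_spes

two_spe, one_spe = 33.0, 30.0                     # illustrative watts
per_spe = marginal_spe_power(two_spe, one_spe)    # 3.0 W per extra SPE
print(extrapolated_chip_power(one_spe, per_spe, 6))   # 48.0 W with 7 SPEs active
```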
 
Acert93 said:
They are both currently on the 90nm process and CELL has yet to begin large scale commercial production. Xenon is 168mm^2 whereas Cell is 40% larger at 235mm^2.

At the same relative start line, the costs are pretty equal.


Dave Baumann said:
Sorry, what is the basis for this? Any actual figures? I'm not sure what the point about "now" and "then" are about - we aren't comparing things now and then when talking about price drops.

No need to be sorry; exact cost figures would require having the actual numbers.
I can get back to you on that if it's still viable.

The reply above addresses your last sentence, but from a different angle than my reply to Acert93. The difference, IMO, is that I think this is of interest to many here. It's also of interest from a fab price-reduction or process-progress point of view, because from a technical POV these two parts are similar.

As for the price reductions or drops, I can only agree, Dave, as long as we are strictly talking about silicon.
 
SugarCoat said:
Did you actually read that? It's from internal tests on one active SPE (results subtracted from a two-SPE run). What do you think is going to happen when the other six are active, not to mention the other key parts of the chip? More heat! That chart is quite literally meaningless. Let alone the author's feeble attempt to calculate the power consumption assuming Sony ran Cell at the very bottom of its voltage range for 3.2 GHz, which they would never do for fear of stability issues.

The one-active-SPE test uses only 1 W at the least (around 2 GHz) and 11 W at the most (5 GHz). The 27-43 W figure is for the whole chip at 3.2 GHz. I cannot see how you can read it any other way.
 
nonamer said:
The one-active-SPE test uses only 1 W at the least (around 2 GHz) and 11 W at the most (5 GHz). The 27-43 W figure is for the whole chip at 3.2 GHz. I cannot see how you can read it any other way.

The low (43 W) number is only attainable at 0.9 V, which would definitely be the top bin split. Since Sony is mass-producing Cell for the PS3 with no other products in which to sink lower-yielding parts, they probably want to run it at 1.1 V to keep cost down (throwing away 3/4 of the produced chips is expensive). This would put power usage in the 70-80 W region.

On top of that, this whole analysis hinges on the shmoo plot being for an average bin-split Cell chip and not a golden sample (i.e. top bin).

If not, why wouldn't Sony run the damn thing at 4 GHz?
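The cost side of this argument can be made concrete with a toy calculation; the per-die cost and bin fractions below are illustrative assumptions, not real figures:

```python
def cost_per_usable_chip(cost_per_die, usable_fraction):
    """Effective cost per shippable chip when only a fraction of the
    produced dies meets the chosen voltage/frequency target."""
    return cost_per_die / usable_fraction

die_cost = 50.0  # illustrative per-die manufacturing cost

# Every die shipping at a relaxed 1.1 V target, vs. only the top quarter
# of dies qualifying at an aggressive 0.9 V target:
print(cost_per_usable_chip(die_cost, 1.00))  # 50.0
print(cost_per_usable_chip(die_cost, 0.25))  # 200.0 - "throwing away 3/4 is expensive"
```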

Cheers
 
Gubbi said:
The low (43 W) number is only attainable at 0.9 V, which would definitely be the top bin split. Since Sony is mass-producing Cell for the PS3 with no other products in which to sink lower-yielding parts, they probably want to run it at 1.1 V to keep cost down (throwing away 3/4 of the produced chips is expensive). This would put power usage in the 70-80 W region.

On top of that, this whole analysis hinges on the shmoo plot being for an average bin-split Cell chip and not a golden sample (i.e. top bin).

If not, why wouldn't Sony run the damn thing at 4 GHz?

Cheers

This article is from last year, so it's reasonable to assume that yields/bin splits are better by now. The power measurements are clearly real-world testing with two SPEs, estimated back down to one SPE. There's very little reason to believe that this is some sort of dubious benchmark à la Apple's Photoshop benchmarks.

Also, the PS3 will use an internal PSU and is reported to be very quiet. A 4 GHz Cell would probably make it too hot or too loud, at least relative to Sony's design goals.
 
nonamer said:
This article is from last year, so it's reasonable to assume that yields/bin splits are better by now. The power measurements are clearly real-world testing with two SPEs, estimated back down to one SPE. There's very little reason to believe that this is some sort of dubious benchmark à la Apple's Photoshop benchmarks.

Also, the PS3 will use an internal PSU and is reported to be very quiet. A 4 GHz Cell would probably make it too hot or too loud, at least relative to Sony's design goals.


Neither he nor I is questioning the legitimacy of the figures; it's the fact that they're not realistic compared with what Sony will use. As I said, running it at the lowest possible voltage of 0.9 V would cause not only a higher yield-failure rate but possible stability problems in the system itself. Sony won't chance that, especially when they are already losing money on the process. Those figures aren't for the whole chip, by the way; they're a guess at consumption based on two SPEs running a synthetic benchmark. They're bunk and totally useless because they have no realistic bearing, so I suggest you stop using them.
 
nonamer said:
This article from last year, so it's reasonable that yields/binspitting are better by now.
My insider sources are telling me that IBM have found by mixing the blue and yellow fluids, they create a green fluid that aids with binspitting by as much as 25%. They're hoping to win the international award for advancement in this field, but they could do with the wind being on their side, and maybe somehow make the target a little easier to reach...
 
nonamer said:
Just because a chip is bigger doesn't mean it will cost more. GPUs are much bigger than CPUs, but they cost less because they can get away with cheaper manufacturing methods, are not pushed as aggressively, and can have larger amounts of redundancy.

There was a report last year that an Intel chip costs ~$40 to manufacture. From reading around, GPU processors cost $20-$50 to manufacture depending on size, so I am not sure that agrees with your suggestion.

Besides the comments Dave made, there is another reason their business models are different: binning. A P4 2.4GHz is the same chip as a P4 3.2GHz. The better chips were binned for the higher frequency, and the worse chips at the lower frequency. In the case of the P4 2.4C it was a process change, and most of these easily overclocked to 3.2GHz.

Anyhow, CPU makers have the advantage of selling the same chip at a number of frequency levels, which goes back to their massive scale of production and their ability to segment across many markets. That said, a $500 CPU is frequently no different from a $100 one, except that the more expensive CPU is guaranteed to work at the binned frequency. Sometimes slower-binned chips can too, and sometimes they cannot.

As discussed before, owning your own fabs does not guarantee cheaper production (though it could help). Yet the fact remains that Cell is 40% larger, meaning 1) a higher chance of defects per die and 2) fewer dies per wafer.

I am not sure how many chips can fit on a 300mm wafer, but let's assume:

100 Xenon per wafer
71 Cell per wafer

Since Cell is bigger, let's assume a slightly higher defect rate:

20% defect rate for Xenon
25% defect rate for Cell

That means you would get 80 working Xenon processors per wafer compared to 53 working Cell processors. Roughly 8:5 is a pretty significant gap to close, even if you have your own fabs.
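A quick sketch of the arithmetic above, plus a classic Poisson yield model as one way to justify assuming a higher defect rate for the larger die (the defect density here is back-solved from the assumed 80% Xenon yield, not a measured figure):

```python
import math

def working_dies(gross_dies, defect_rate):
    """Working dies per wafer under a flat defect rate."""
    return int(gross_dies * (1 - defect_rate))

def poisson_yield(die_area_mm2, defect_density_per_mm2):
    """Classic Poisson yield model: probability of zero defects on the die."""
    return math.exp(-defect_density_per_mm2 * die_area_mm2)

# The figures assumed in the post: 100 Xenon and 71 Cell per wafer,
# 20% / 25% defect rates respectively.
xenon = working_dies(100, 0.20)               # 80
cell = working_dies(71, 0.25)                 # 53
print(xenon, cell, round(xenon / cell, 2))    # prints: 80 53 1.51  (~8:5)

# Pick a defect density that reproduces the assumed 80% yield on the
# 168 mm^2 Xenon die...
d0 = -math.log(0.80) / 168
# ...then the same density predicts the yield for the 235 mm^2 Cell die.
print(round(poisson_yield(235, d0), 2))       # ~0.73, close to the assumed 75%
```

So the 75% Cell yield assumed above is roughly what the Poisson model predicts if Xenon really yields 80% on the same line.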
 
overclocked said:
At the same relative start line, the costs are pretty equal.
But they don't have the same relative start line, in terms of either manufacturing or die size. Are you talking about something different?

No need to be sorry; exact cost figures would require having the actual numbers. I can get back to you on that if it's still viable.
We'd have to see the figures and exactly how they relate to see if they are viable...
 
Acert93 said:
From reading around, GPU processors cost $20-$50 to manufacture depending on size, so I am not sure that agrees with your suggestion.
For a large die such as R580, you can probably quadruple that range.
 
Shifty Geezer said:
My insider sources are telling me that IBM have found by mixing the blue and yellow fluids, they create a green fluid that aids with binspitting by as much as 25%. They're hoping to win the international award for advancement in this field, but they could do with the wind being on their side, and maybe somehow make the target a little easier to reach...

Look, I'm seriously peeved by these joke responses to serious claims that are based on reality. So unless you are going to argue that the Opteron came out at 3 GHz from the start and the P4 came out at 3.8 GHz from the very start, improving bin splits is a fact of chip manufacturing.

EDIT: on the 90nm process only, for the Opteron and P4.
 
Acert93 said:
There was a report last year that an Intel chip costs ~$40 to manufacture. From reading around, GPU processors cost $20-$50 to manufacture depending on size, so I am not sure that agrees with your suggestion.

The Intel CPU is much closer to 100-200mm^2 in size, while the GPU is probably closer to 300mm^2. A CPU of equivalent size would not cost $40. Plus it's not clear how much of a margin the foundry is taking. Also, the CPU is going to use the latest process technology its maker can afford, such as SOI or strained silicon, whereas the GPU will use whatever process is available at TSMC or UMC. So while there is a relationship between chip size and cost, it is not necessarily the case that a bigger chip is more expensive than a smaller one.

Besides the comments Dave made, there is another reason their business models are different: binning. A P4 2.4GHz is the same chip as a P4 3.2GHz. The better chips were binned for the higher frequency, and the worse chips at the lower frequency. In the case of the P4 2.4C it was a process change, and most of these easily overclocked to 3.2GHz.

Anyhow, CPU makers have the advantage of selling the same chip at a number of frequency levels, which goes back to their massive scale of production and their ability to segment across many markets. That said, a $500 CPU is frequently no different from a $100 one, except that the more expensive CPU is guaranteed to work at the binned frequency. Sometimes slower-binned chips can too, and sometimes they cannot.

As discussed before, owning your own fabs does not guarantee cheaper production (though it could help). Yet the fact remains that Cell is 40% larger, meaning 1) a higher chance of defects per die and 2) fewer dies per wafer.

I am not sure how many chips can fit on a 300mm wafer, but let's assume:

100 Xenon per wafer
71 Cell per wafer

Since Cell is bigger, let's assume a slightly higher defect rate:

20% defect rate for Xenon
25% defect rate for Cell

That means you would get 80 working Xenon processors per wafer compared to 53 working Cell processors. Roughly 8:5 is a pretty significant gap to close, even if you have your own fabs.


Don't get me wrong, Cell will likely be more expensive, but this is not necessarily the case. Other factors, such as redundancy in the chip and the bin splits, can affect costs even on identical processes. Since Cell has much higher redundancy and probably better bin splits (at least down the road, since it can realistically hit 4-5 GHz), as well as its own fabs, it's definitely possible that Cell could be less expensive.
 
SugarCoat said:
Did you actually read that? It's from internal tests on one active SPE (results subtracted from a two-SPE run). What do you think is going to happen when the other six are active, not to mention the other key parts of the chip? More heat! That chart is quite literally meaningless. Let alone the author's feeble attempt to calculate the power consumption assuming Sony ran Cell at the very bottom of its voltage range for 3.2 GHz, which they would never do for fear of stability issues.

But the point is that the SPEs don't actually add much to the overall power draw.

Re the 0.9 V, consider this (from Anandtech):

"The next technology that FlexIO enables is DRSL with LVDS (Low Voltage Differential Signaling), which is a technology similar to what Intel uses in the Pentium 4 to reduce power consumption of their high-speed ALUs. We will actually explain the technology in greater detail later on this week in unrelated coverage, but the basic idea is as follows: normally the lower the voltage you run your interfaces at, the more difficult it becomes to detect an electrical "high" from an electrical "low." The reason being that it is quite easy to tell a 5V signal from a 0V signal, but telling a 0.9V signal from a 0V signal becomes much more difficult. DRSL instead takes the difference between two voltage lines with a very low voltage difference and uses that difference for signaling. By using low signal voltages, you can ensure that even though you may have a high speed bus, power consumption is kept to a minimum. "
 