PS3 internals

I'm sorry, but the difference in "active" pixel & vertex shaders and the clock difference between the two chips (it's basically the same processor!) will never make up the cost difference of a whole board: a 256-bit memory interface, 256MB of GDDR3, all the different video outputs, and the PCIe bus.
These chips are fallout chips: they often fall out of your yield analysis, so they are priced more or less at cost because they would otherwise have been wasted.
 
Anyone else pondering whether the CXD2973GB controller chip could possibly be a rebadged Toshiba Super Companion Chip? And if so, couldn't it have the capability to do video scaling? Probably not in conjunction with RSX, though; if so, that seems like another chunk of wasted silicon in the PS3.

Picture of the SCC on the Toshiba Reference Set motherboard (anyone got a better one?)
http://www.xbitlabs.com/images/news/2005-09/toshiba_kit.jpg

Details on SCC functionality
http://www.hotchips.org/archives/hc17/2_Mon/HC17.S1/HC17.S1T3.pdf

edit: This crossed my mind as soon as I saw the chip, but I didn't mention it until I saw the iSuppli article, which lists the controller chip as Toshiba-sourced.
 
Anyone else pondering whether the CXD2973GB controller chip could possibly be a rebadged Toshiba Super Companion Chip? And if so, couldn't it have the capability to do video scaling? Probably not in conjunction with RSX, though; if so, that seems like another chunk of wasted silicon in the PS3.

Am I the only one who remembers that Sony specifically stated that the Cell processor would handle all the video scaling?! Remember the conference where they showed many video thumbnails being scaled to different sizes and different resolutions, using Cell alone.
 
Surely video stream scaling is an entirely separate problem from game output scaling.

With a video stream, you can easily imagine content being decompressed on one SPE and then passed to another for scaling, all within Cell. With game scaling, the SPEs will be busy running the game, and the output would have to be generated on RSX and then post-processed by Cell before being displayed. That seems like a much more difficult job.
 
Surely video stream scaling is an entirely separate problem from game output scaling.

With a video stream, you can easily imagine content being decompressed on one SPE and then passed to another for scaling, all within Cell. With game scaling, the SPEs will be busy running the game, and the output would have to be generated on RSX and then post-processed by Cell before being displayed. That seems like a much more difficult job.

You're not giving Cell enough credit. One SPE was able to render, scale, and manipulate multiple video streams (2-3 streams per SPE, IIRC). So game output scaling shouldn't be a problem for Cell, since video data (especially HD content) will always be more CPU-intensive than game output.
 
If it were that easy, I don't think we would be seeing the scaling problems people are reporting, but this subject is probably better left to the PS3 Scaling thread. I am still curious, though, whether there actually is an SCC on the M/B and, if so, whether that's something that might open up additional functionality in future revisions or whether it could be utilised in some way from Linux.
 
Surely video stream scaling is an entirely separate problem from game output scaling.

With a video stream, you can easily imagine content being decompressed on one SPE and then passed to another for scaling, all within Cell. With game scaling, the SPEs will be busy running the game, and the output would have to be generated on RSX and then post-processed by Cell before being displayed. That seems like a much more difficult job.

The problem is not the SPEs, since one is "reserved" for OS operations anyway. A much harder hit would be taken by the bandwidth required to resample the output from RSX. This is what bothers me most.
 
Isn't scaling usually performed by a little teeny tiny TV encoder chip? Why do it on Cell when it would be so much less efficient?
 
Isn't scaling usually performed by a little teeny tiny TV encoder chip? Why do it on Cell when it would be so much less efficient?

Well, from what I've read about stream processors (assuming Cell actually is a stream processor), they have good potential to be even more efficient than dedicated DSPs at many tasks. I know the Imagine architecture (a stream architecture developed at Stanford) was supposed to get better performance per watt than DSPs, and orders of magnitude better than a Pentium M.
 
Internals look really nice on the PS3. I especially love the clean design of the circuit board, as well as the care taken in making the whole package. Coming from an electrical engineering background, you can really appreciate the mastery Sony has shown. Certainly makes the Xbox360 look like a design from somebody's 2nd-year ECE project.

Hmm, I don't know. I kinda thought the xb360 board looked like a master's-level design. But then again, I think the important optimization factor in these things is cost. It's easy to spend a lot of money and get something to work; a lot harder to do it on a budget.

Aaron Spink
speaking for myself inc.
 
I agree that the whole estimate is a bit on the dear side... $129 for RSX when I, a consumer, can get a whole 7900 board (with memory and all) for $199, and these are sold at a profit! There is no way a chip like RSX costs $129 to produce; that's a ludicrous claim, even before taking into account the huge discounts Sony would get from ordering millions of the bloody things. Who's making them anyway, Nvidia or Sony?

Isn't that 7900 a cut-down/defective part? As for the costs....

MCM packaging adds significant test and manufacturing costs. I could easily see the cost of the MCM RSX being $129, possibly more. Factor in the production yield hit for a fixed frequency target, etc., and you have significantly more costs than a PCB graphics card.

And Sony doesn't get any discounts on them. Nvidia did the design as contract work.

Aaron Spink
speaking for myself inc.
 
I'm sorry, but the difference in "active" pixel & vertex shaders and the clock difference between the two chips (it's basically the same processor!) will never make up the cost difference of a whole board: a 256-bit memory interface, 256MB of GDDR3, all the different video outputs, and the PCIe bus.

Um, in a word: NO.

Consider this. For the PC part, out of 1000 devices made, 930 can be sold in at least some form that covers manufacturing costs.

For the RSX part, out of 1000 devices made, 400 can be sold as PS3 parts that meet spec. Another 40-50 or so are lost due to MCM failures, knocking it down further to ~360 parts.
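For what it's worth, those yield figures can be turned into a rough cost-per-sellable-part comparison. The batch cost below is a made-up placeholder (only the ratio matters); the part counts are the ones quoted above:

```python
# Cost per sellable part scales inversely with the sellable fraction,
# since the manufacturing cost of the whole batch is roughly fixed.
batch_cost = 100_000.0   # hypothetical cost to fab 1000 devices
pc_sellable = 930        # PC part: nearly everything ships in some SKU
rsx_sellable = 400 - 40  # PS3 part: spec-meeting dies minus MCM losses

pc_cost = batch_cost / pc_sellable
rsx_cost = batch_cost / rsx_sellable
print(f"per sellable PC part:  ${pc_cost:.2f}")           # ~$107.53
print(f"per sellable RSX part: ${rsx_cost:.2f}")          # ~$277.78
print(f"RSX carries {rsx_cost / pc_cost:.2f}x the cost")  # ~2.58x
```

On these numbers the same silicon costs roughly two and a half times as much per shipped part, before any of the MCM test costs mentioned above.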


And if Sony are making the chip themselves, it's likely to be even cheaper for them to manufacture.

Depends. Sony isn't exactly known for low-cost Si manufacturing.

Aaron Spink
speaking for myself inc.
 
Considering RSX is a 240mm2 chip, that puts it at ~260 chips per wafer. Let's say 200 of those are usable. With an exaggerated wafer cost of $10K, that's $50/chip. Add $10 for packaging and $40 for the memory chips, both of which are probably too high, and then finally about $5 for NVIDIA. Heck, let's add another $5 for other minor costs, such as shipping the chips around, amortizing the NVIDIA NRE/license fees, etc... And you're still only at around $110.
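Summing the listed figures as a quick sanity check (all of them are the assumptions above, not measured data):

```python
# Back-of-envelope RSX cost estimate from the figures above.
wafer_cost = 10_000          # "exaggerated" wafer cost
usable_chips = 200           # assumed usable dies per wafer

silicon = wafer_cost / usable_chips  # $50 per chip
packaging = 10
memory = 40                  # GDDR3 chips
nvidia_fee = 5
misc = 5                     # shipping, amortized NRE/license fees, etc.

total = silicon + packaging + memory + nvidia_fee + misc
print(f"estimated cost per RSX: ${total:.0f}")  # $110
```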

With that die size, the frequency requirements, and the fault issues, they'd be lucky to get 80% yield, especially right now. More realistic is the 150-per-wafer range, though possibly the 100-per-wafer range. You all underestimate what yield-recovery programs, such as the GS's, mean for the overall yields.

Then you have packaging yield effects, on which MCM packaging has a not-insignificant impact.
Also remember that a lot of these parts have a significant number of pins on the package, which also drives up costs.

Aaron Spink
speaking for myself inc.
 
For the PC part, out of 1000 devices made, 930 can be sold in at least some form that covers manufacturing costs. For the RSX part, out of 1000 devices made, 400 can be sold as PS3 parts that meet spec. Another 40-50 or so are lost due to MCM failures, knocking it down further to ~360 parts.
Did you take the speculated redundant quad into account when estimating these numbers? Because that seems, to me, very similar in effect to the staggered product lines on the PC side of things.
 
I'm still trying to figure out what that $148 for "other components and manufacturing" is. I thought it could have been the PSU, but the PSU is accounted for in its own line... So it's really strange; there are no more components on the PS3 (surely not $148 worth of them) apart from the ones listed there. And manufacturing is already accounted for too. Twice.
The rest seems OK, a bit inflated here and there (see RSX), but OK.

All those fancy caps everyone was ogling? Yeah, NOT CHEAP. The number of PCBs they are using is also high. I think you are underestimating the cost of that board design with all high-end components. It looks closer to a design used for high-end analysis than to production consumer hardware.

The rumors were that Sony was running into significant design issues with the PS3 and started pulling people from all over the company to fix things with a cost-is-no-object philosophy, and that is certainly reflected in the design and component selection.

FWIW, I think they are underestimating the cost of the heatsink/fan design as well.

Aaron Spink
speaking for myself inc.
 
Did you take the speculated redundant quad into account when estimating these numbers? Because that seems, to me, very similar in effect to the staggered product lines on the PC side of things.

Yep. One of the fundamental issues with yield in console designs is frequency yield. In most normal semiconductor designs, you can usually design to a fairly de-rated frequency or get yield recovery via various frequency SKUs. For example, take something like the Opteron or Core 2 Duo: they have a frequency range across SKUs of almost 2x. This has a significant impact on yield.

For example, say the top bin yields at 10%, functional yield is 95%, and base-market yield (i.e., the lowest SKU that covers cost) is 90%. Then, assuming the market is robust, I can make enough wafers to satisfy the market for the highest-SKU part, and still make money on the parts that don't hit that SKU by selling them at lower SKUs, knowing that my effective yields will be quite high.
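That effective-yield arithmetic can be sketched as follows, using the hypothetical bin yields above; the point is that the sellable fraction tracks the base bin, not the top bin:

```python
# Multi-SKU yield recovery: parts that miss the top frequency bin are
# sold in lower bins, so almost every functional die generates revenue.
dies = 1000
top_bin_yield = 0.10     # fraction hitting the highest frequency SKU
functional_yield = 0.95  # fraction of dies that work at all
base_bin_yield = 0.90    # fraction meeting the lowest cost-covering SKU

top_bin_parts = round(dies * top_bin_yield)
sellable_parts = round(dies * base_bin_yield)  # every base-bin part sells
dead_dies = dies - round(dies * functional_yield)

print(f"top-bin parts:  {top_bin_parts}")   # 100
print(f"sellable parts: {sellable_parts}")  # 900
print(f"dead dies:      {dead_dies}")       # 50
```

A console with a single fixed frequency target has no lower bins to fall back on, which is exactly the problem described below.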

Early in a console's design period, you want to set the clock speed high, because the device will be around for a significant time span and you can't upgrade the frequency at a later date. This generally results in the target frequency SKU being set where only 10-20% of functional parts reach it. The theory is that over time, with process manufacturing improvements and process changes (i.e., shrinks), that frequency yield will rise from the 10-20% range to the 80-90% range.

Right now, I would be hard pressed to believe that either Sony or Microsoft is getting frequency yield for either the processor or the graphics chip much above the 30% range, and likely lower than that for Sony.

So, in summary, the redundant quad would affect functional yield, but the primary issue currently is likely frequency yield.

Aaron Spink
speaking for myself inc.
 
I see, thanks for the explanation. That would also explain why it would make sense for Sony to drop the RSX clock by 50MHz.
 
Sony doesn't use cheap components.
They even use proprietary/exclusive expensive power transistors for their TVs' PSUs.
 