Sony to cut semiconductor capital expenditure; 45nm Cell in FY08/09

Capeta, I am being honest when I say what you just wrote makes no sense. Again, what do demand and 'new bigger chips' have to do with anything? Nothing - they don't. Why are you so fixated on that?

It has to do with everything Intel does. They are always coming up with bigger (more transistors), higher-performing chips, which benefit greatly from die shrinks as soon as possible. They sell millions of them too.

Explain to me again the economic reasoning against utilizing a newer process for older designs - PCB redesign, is that what you're telling us?

I don't have specific economic reasons; I just know that a lot of companies don't simply move to a smaller process to save costs. A good example is Nintendo and its GC: it's still being manufactured using ancient 180nm process tech. That's just a single high-profile example, but there are many others out there. Another example is cheap MPEG2 decoders for DVD players. A lot of them are still using ancient process technology - why?

If *you* were Sony, and you have now moved Cell production to 45nm - yet you have a state-of-the-art 65nm fab line and several existing products being built on 90nm, 130nm, and above... what would *you* do with that 65nm fab capacity?

I would use it as a backup for CELL manufacturing and for newly designed ICs targeted at 65nm. Transitioning to a 65nm process isn't as easy as some may think; there's a maturation process which takes time and money.
 
It has to do with everything Intel does. They are always coming up with bigger (more transistors), higher-performing chips, which benefit greatly from die shrinks as soon as possible. They sell millions of them too.

Every processor benefits from a die shrink, and the main benefit is cost. Your understanding of the semiconductor business is fundamentally flawed if you think that higher transistor counts are *the* primary driver for smaller process nodes. It is, and has always been, about cost-reduction. The higher complexity chips come about from a need to differentiate and compete in a very aggressive marketplace; to this end, smaller nodes are an enabler. But always within the context of cost.
 
Every processor benefits from a die shrink, and the main benefit is cost. Your understanding of the semiconductor business is fundamentally flawed if you think that higher transistor counts are *the* primary driver for smaller process nodes. It is, and has always been, about cost-reduction. The higher complexity chips come about from a need to differentiate and compete in a very aggressive marketplace; to this end, smaller nodes are an enabler. But always within the context of cost.

Higher transistor counts mean larger chips, which benefit MORE from die shrinks. There's little benefit if your chip is already small to average sized, hence less need for a die shrink, ESPECIALLY when the demand isn't there. If you sell 10 chips a die shrink saves you maybe $10 total, sell 100 chips you save $100, sell 10,000 chips you save $10K. Is it worth a die shrink if you could only sell 10 chips?
 
I don't have specific economic reasons; I just know that a lot of companies don't simply move to a smaller process to save costs. A good example is Nintendo and its GC: it's still being manufactured using ancient 180nm process tech. That's just a single high-profile example, but there are many others out there. Another example is cheap MPEG2 decoders for DVD players. A lot of them are still using ancient process technology - why?

Ok, two things - first of all if Nintendo's GC is/was still having its CPU and GPU fabbed at 180nm, I would find that shocking. Do you have a link to that effect? I confess to not knowing myself what process it is on these days, but 180nm seems beyond the pale in my view.

As for the second example, those ancient process MPEG2 decoders are on that because they are such a commodity device, fabbed by so many different manufacturers, that no one entity has the volume requirements to battle it out with other higher-volume ICs and companies to secure the smaller node fab capacity. That is simply a tragedy of their market position and lack of internal fab capacity; but indeed, the larger the player in the decoder market, the smaller the process node I would expect them to secure.

This is fundamentally different though than Sony's position, because however high or low the volume requirements of a given product, the smaller process node is there available to be used.
 
Higher transistor counts mean larger chips, which benefit MORE from die shrinks. There's little benefit if your chip is already small to average sized, hence less need for a die shrink, ESPECIALLY when the demand isn't there. If you sell 10 chips a die shrink saves you maybe $10 total, sell 100 chips you save $100, sell 10,000 chips you save $10K. Is it worth a die shrink if you could only sell 10 chips?

I agree that larger/more complicated chips see a compounded advantage from a smaller node, mainly from a smaller die area, thus a greater ratio of dodged defect bullets relative to what an already small chip would enjoy.
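The "dodged defect bullets" point can be made concrete with a toy Poisson yield model. All the numbers below (defect density, die sizes) are assumed purely for illustration, not Sony's actual figures:

```python
import math

def die_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Poisson yield model: probability that a die catches zero defects."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

D0 = 0.5  # assumed defects per cm^2 on a mature process

# A full-node shrink roughly halves die area; the big die gains far more.
for label, area in (("large (~230 mm^2)", 2.3), ("small (~50 mm^2)", 0.5)):
    before, after = die_yield(D0, area), die_yield(D0, area / 2)
    print(f"{label}: yield {before:.2f} -> {after:.2f} ({after/before:.2f}x)")
```

The exponential form is exactly why a large die sees a compounded benefit: halving the area of a big chip recovers far more yield, proportionally, than halving an already-small one.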

But that math makes no sense; where is this $10 per chip coming from, and why are we unhappy with it? Except for the above-stated reasons why larger chips enjoy a compounded benefit from a shrink, on a per-wafer basis the cost savings are the same regardless of chip size; you end up with roughly twice whatever it was you were making before, at roughly the same cost. Again, all of this in an ideal world, since newer processes always have their associated growing pains (and they seem to be getting more and more painful).
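A back-of-envelope sketch of the per-wafer point, with assumed round numbers for wafer cost and area, and yield deliberately ignored:

```python
WAFER_COST = 5000.0   # assumed cost to process one 300 mm wafer, in dollars
WAFER_AREA = 70000.0  # assumed usable wafer area in mm^2, ignoring edge loss

def cost_per_die(die_area_mm2: float) -> float:
    """Ideal cost per die: wafer cost spread across the dies it yields."""
    return WAFER_COST / (WAFER_AREA // die_area_mm2)

for area in (50.0, 235.0):  # a small commodity die vs. a large Cell-class die
    before, after = cost_per_die(area), cost_per_die(area / 2)
    print(f"{area:.0f} mm^2: ${before:.2f} -> ${after:.2f} per die")
```

Either way the shrink roughly halves the cost per die: the proportional savings don't depend on die size, which is why a flat "$10 per chip" framing doesn't hold up.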

I do agree with you that if you are selling only low volumes, you have no need to bother... and probably can't afford to. That is the situation a lot of these smaller MPEG-2 decoder vendors find themselves in, but it's not the situation Sony is in. Sony has a 1.9% share of the semi market; AMD 2.9%. That's perspective right there.
 
As for the second example, those ancient process MPEG2 decoders are on that because they are such a commodity device, fabbed by so many different manufacturers, that no one entity has the volume requirements to battle it out with other higher-volume ICs and companies to secure the smaller node fab capacity. That is simply a tragedy of their market position and lack of internal fab capacity; but indeed, the larger the player in the decoder market, the smaller the process node I would expect them to secure.

Yes, IMO the real reason is that at a certain point they become a commodity device and it just isn't worth it to move to a smaller process, even if they could lower the cost of the chips themselves.

This is fundamentally different though than Sony's position, because however high or low the volume requirements of a given product, the smaller process node is there available to be used.

But that still means they'll have to retire the old fabs if they move to the new process, because the demand won't be there to keep the old fabs in operation; otherwise they'll just sit idle, like I mentioned previously. Again, I don't see any compelling evidence to suggest they want to rush all of their in-house ICs to a new, immature 65nm process. I don't see evidence that they want to cascade them. Sure, eventually they'll move to a smaller process, but I just don't see anything they design in-house needing 65nm in the next year or two.
 
But that still means they'll have to retire the old fabs if they move to the new process, because the demand won't be there to keep the old fabs in operation; otherwise they'll just sit idle, like I mentioned previously. Again, I don't see any compelling evidence to suggest they want to rush all of their in-house ICs to a new, immature 65nm process. I don't see evidence that they want to cascade them. Sure, eventually they'll move to a smaller process, but I just don't see anything they design in-house needing 65nm in the next year or two.


Yes, they won't rush to an immature process... but two years from now, it won't be immature. Making a move to 65nm on camera sensors and enjoying better performance characteristics and cheaper manufacturing, even if it means closing down some ancient fab line, IMO beats by a wide margin leaving your recently created $2 billion fab line idle.

But again, you say that they may continue to make Cells on the 65nm node even after 45nm production begins. Well, they might indeed. If they do, well that's what they're using it for and all is good in the world. I'm not arguing that the 65nm is going to be used for cameras and that's the end of discussion, I'm just saying that it's going to be used for something, continuously, for the next ten years or whatever.

And Capeta, ok, I provided you a link on the last page; did you read it? I want it understood before I post these images that everything debated up until now should have been self-evident from simply a well-thought out use of capital equipment. But if you want explicit mention of Sony's 65nm intent, I will provide such for you:

[Five Sony presentation slides - images not recovered - covering fab utilization policy and 90nm/65nm CMOS sensor R&D]

What the above slides show you is that not only is it explicit policy to keep fabs utilized as the trailblazing gaming chips move to smaller processes, but even now Sony IC demand outstrips its ability to fab in-house. You'll notice also that R&D into CMOS sensors at 90nm and 65nm was already underway as of December 2005.
 
"We could accept no compromises on the performance of the Cell microprocessor, the RSX graphics chip or other components. At the same time, we had to achieve volume production yields appropriate for consumer electronics, at a million or two million units a month. We had no choice but to revamp the packages," explained a spokesperson for Sony Computer Entertainment Corp (SCE) of Japan, as the company revealed details about the integrated circuit (IC) technology developed for its PlayStation 3 (PS3). The company developed a
multichip module (MCM) for the RSX and an internal build-up board for higher speeds in the Cell microprocessor.

Main Board Size
The RSX packs the graphics draw chip and four graphic memory chips (512-Mbit GDDR3 synchronous dynamic random access memory, or SDRAM) into an MCM (see Fig). Data is swapped between the draw chip and SDRAM at a transfer rate thought to be as high as about 1.4Gbps/pin. The reason for using an MCM was to minimize the path length between the two, maintaining the high transfer rate while reducing main board size. With the draw chip and memory interconnected on the main board, according to a source at SCE, "It would have been necessary to connect them with a 128-bit data bus of equivalent length, which probably would have resulted in a bigger main board."
This is why the line width and spacing is larger on the PS3 main board than in graphics cards with GDDR SDRAM used in personal computers, for example. SCE explained: "We are purchasing PS3 main boards from a number of suppliers to achieve high-volume production. Thinner leads and tighter spacing could cut board manufacturer yields and increase costs."
Implementing a 128-line data bus width on the PS3 with those fatter leads and wider spaces, according to SCE, would have increased the lead length to about 10cm, and would likely have caused main board growth. If main board line spacing were reduced to prevent this, crosstalk between leads would have become a problem.

Designed for Outsourcing
The use of the MCM kept the lead length between the draw chip and memory to within a few cm. The footprint of the MCM used in the RSX is 42.5mm square, significantly smaller than the main board real estate that would be used if the chips were mounted in a plane.
To keep costs down, standard parts were used as much as possible in the MCM structure, memory selection and other key decisions. The "most standard design" was selected for the MCM, according to SCE, with the draw chip and four memory chips mounted flat on the
interposer. One major reason for the decision was that this type of MCM has been manufactured in volume by SCE and other firms, and equipment was already in place. RSX assembly was implemented on a manufacturing line owned jointly by Sony Corp and Toshiba Corp, both of Japan. The firm had to adopt a standard MCM structure in order to outsource assembly.
Instead of bare chips or wafer-level packages, for example, SCE chose standard packages for the graphics memory chips, explaining they offered high reliability and the highest yield in MCM assembly.
SCE did develop a new package for the Cell microprocessor, however, featuring a core layer about 400um thick and using a thin build-up board as the interposer. The thinnest core layers in build-up boards used in microprocessors for personal computers with operating frequencies of several GHz are about 800um, meaning that SCE slashed the thickness of the core layer in half.

http://techon.nikkeibp.co.jp/article/HONSHI/20070126/126945/
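As a sanity check on the article's figures, the quoted per-pin rate and bus width pencil out to the oft-cited RSX GDDR3 bandwidth (simple arithmetic, assuming the full 128-bit bus runs at the quoted 1.4Gbps/pin):

```python
rate_gbps_per_pin = 1.4  # per-pin transfer rate quoted in the article
bus_width_bits = 128     # GDDR3 data bus width quoted in the article

peak_gb_per_s = rate_gbps_per_pin * bus_width_bits / 8  # bits -> bytes
print(f"peak memory bandwidth: {peak_gb_per_s:.1f} GB/s")  # 22.4 GB/s
```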
 
Carl, what are your thoughts on this: Is this an "Intel" casualty? Intel has been absolutely crushing the competition in regards to process node jumps. Back in 2004 there was a lot of talk from certain informed posters that Sony was right there, if not ahead, with 90nm mass production and was on the way to beating them to 65nm. Retrospectively, none of that happened, and I get the suspicion that Sony would have preferred Cell on 65nm for launch for many reasons. My initial reaction is that as TSMC et al. approach parity in process node technology, having chips fabbed elsewhere may be a bigger savings than pushing the process nodes as quickly as necessary to keep pace - in turn freeing up that fab space for chips that could benefit from the advanced manufacturing processes but are not quite as sensitive to process delays (as a chip like Cell could be).

If this is the approach they are taking I wonder how they will keep pace in cutting edge fab technology. If they are outsourcing maybe they won't need to, and being slightly behind the curve could result in significant savings. But once you back off the pace it doesn't seem like it would be very easy to jump back in the game, at least to compete with Intel. On the other hand hadn't Sony announced a number of alliances with NEC, Toshiba, IBM, and AMD as of late in regards to process technologies?

Obviously they can develop Cell2 or whatever they plan to put in the PS4, as well as RSX2 in unison with NV, in conjunction with a company like TSMC and stay relatively up to date and on pace in regards to process technology and then use their own fabs to make a major play to capture huge segments of other markets (like sensors), and in the end coming out on top financially.
 
If this is the approach they are taking I wonder how they will keep pace in cutting edge fab technology. If they are outsourcing maybe they won't need to, and being slightly behind the curve could result in significant savings. But once you back off the pace it doesn't seem like it would be very easy to jump back in the game, at least to compete with Intel.

That is what I was wondering too. If it took them $1 billion+ over the past 3 years to get to where they are now, cutting back on future capex tells me they want to leave the process node race.

On the other hand hadn't Sony announced a number of alliances with NEC, Toshiba, IBM, and AMD as of late in regards to process technologies?

Those were probably the final negotiations before Ken was "ousted" (heard he was won over by Sony's change management). I can feel the new Sony/SCEI is moving away from being a group aggressively chasing the latest and craziest (dev assets) toward a soft-selling business (marketing assets) entity. Which isn't a bad thing, as history shows the masses don't concern themselves with the difference between a nanometre and a teraflop.
 
Carl, what are your thoughts on this: Is this an "Intel" casualty? Intel has been absolutely crushing the competition in regards to process node jumps. Back in 2004 there was a lot of talk from certain informed posters that Sony was right there, if not ahead, with 90nm mass production and was on the way to beating them to 65nm. Retrospectively, none of that happened, and I get the suspicion that Sony would have preferred Cell on 65nm for launch for many reasons. My initial reaction is that as TSMC et al. approach parity in process node technology, having chips fabbed elsewhere may be a bigger savings than pushing the process nodes as quickly as necessary to keep pace - in turn freeing up that fab space for chips that could benefit from the advanced manufacturing processes but are not quite as sensitive to process delays (as a chip like Cell could be).

I do remember those 65nm thoughts of yore. They stemmed from Toshiba/Sony advancements and claims on the eDRAM front and their joint CMOS process, made public through press releases in late 2002 and 2003. Back then, their (optimistic) roadmaps indicated a move to 90nm in 2004 and 65nm in 2005. Obviously that didn't happen, but I don't fault them for their naivety at that point in time. It was pre-90nm rollout for everyone, and with 90nm came the advent of leakage current as a primary concern, plus numerous difficulties with getting the process lines up to speed relative to past process transitions - sober realities that have since set in and that color processor design quite heavily in the present.

Sony succeeded in attaining 90nm in 2004 on their and Toshiba's CMOS4 process, but I think the difficulties of 65nm began to make themselves apparent shortly thereafter. Remember also that Sony's own Nagasaki fab was/is an SOI line, and has a lot more in common with IBM fab tech than their traditional Sony/Toshiba bulk CMOS process. In that context, I think Sony's ramp at Nagasaki - though well behind schedule - is still as good as that of their development partner IBM... and what can you do? Kutaragi knew the score in May 2005 when he indicated that they would stick to 90nm for launch; indeed, the prior plan had been for 65nm. But IBM, AMD, Toshiba... everyone has found 65nm harder to achieve than anticipated.

Intel.

Ultimately, I don't view Intel's clear advantage in fabrication expertise as competition to Sony's own efforts; truthfully, I feel the closest they ever came to that was Intel's abortive LCoS effort in the HDTV space. So I think Sony is sitting pretty in terms of process tech relative to their own direct competitors - granted, that's not to say the outsource model doesn't have points of merit. It's just that when you're on the level with IBM, there's no reason at all for them to be considered 'behind' - Intel is just so clearly far out ahead.

If this is the approach they are taking I wonder how they will keep pace in cutting edge fab technology. If they are outsourcing maybe they won't need to, and being slightly behind the curve could result in significant savings. But once you back off the pace it doesn't seem like it would be very easy to jump back in the game, at least to compete with Intel. On the other hand hadn't Sony announced a number of alliances with NEC, Toshiba, IBM, and AMD as of late in regards to process technologies?

Obviously they can develop Cell2 or whatever they plan to put in the PS4, as well as RSX2 in unison with NV, in conjunction with a company like TSMC and stay relatively up to date and on pace in regards to process technology and then use their own fabs to make a major play to capture huge segments of other markets (like sensors), and in the end coming out on top financially.

Their alliance with Toshiba is ancient and longstanding; NEC is a new addition to their alliance over on the bulk process side. Sony and Toshiba then also have an alliance separately with IBM for both the Cell architecture and related process technologies; IBM has its own separate alliance with AMD and Chartered as well, so lots of threes it seems.

Sony in my opinion can stay as far out ahead as they want on in-house process technology; they run with the big boys and as long as they're willing to invest, they can stay there. Again, I'm treating Intel as above-the-fray in these terms - they simply are in another class. I think it makes sense to invest in fabs, as I think you never really have a sense of where your R&D is leading without having physical production tied to it. But then again, fab houses like TSMC are attaining such a scale that even if it costs more per chip ultimately, the hedge on risk and the ability to free billions in cap-ex for other purposes definitely in my mind warrants attention on the part of Sony.

It'll be interesting to see what they ultimately decide to do in this regard. I hope, as a hardware company and innovator, that Sony chooses to expand on their fab efforts, but Stringer has a duty to the shareholders in the immediate term as well; it's ultimately up to him and his circle to determine whether new fabs, new acquisitions, research elsewhere, investments, or whatever else will lead to the highest return on investment. In favor of in-sourcing, I would highlight that the fact that creating their own 45nm line is implicitly still on the table hints at the clear benefits that having their own fab capacity brings; the question they're asking themselves today is simply the classic opportunity-cost question any rational being or business must ask when limited resources need to be committed to a few goals from a range of many options.
 
I hope they are thinking this through well and not just looking at the near future with a myopic view.

Yoshihisa Toyosaki, president of J-Star Global Inc., a Tokyo-based research and consultation company, is skeptical about Sony's new policy. "In the short term, it'll work to ease cash flow," he said. "But there is no clear vision in the future."

http://www.eetimes.com/news/semi/showArticle.jhtml?articleID=197006176

It is also possible that for the next 2-3 years they decide to focus on developing the process technologies below 45 nm. (65 nm SOI for CELL and other products is already in operation, and the transition of Oita #2 and Nagasaki Fab 2 to 45 nm should not be incredibly hard [once they have finished all the details of their 45 nm technology], as they planned for that upgrade when they were setting up the 65 nm manufacturing lines.)

Then, once they have worked out their path toward 32 nm and sub-32 nm technologies, they will know more about the cost of each technology implementation, they will know what manufacturing processes foundries like UMC and TSMC will be able to provide by the time they plan their next big technology jump (PLAYSTATION 4 chips, etc.), and they will see whether they can extend their alliance with IBM and Toshiba to maybe NEC and others to build sub-45 nm fabs - and see what their best option for the future is.

I think that PLAYSTATION 4 will be a mix of new fabs built with allies (I do not think that even Intel will keep going it alone with massive new fabs built for internal production below the 45 nm node; it is getting exponentially more expensive) and outsourcing.

Outsourcing is good for short-term financial gains, but for a company like Sony to hypothetically become a fabless company and buy ALL of their components from what are basically their competitors would make it very hard to compete in the market. It also makes it considerably more difficult to plan ahead with in-depth early info about future process transitions.

We are, though, moving in a direction where the current reliance on a series of "quick" and "painless" process shrinks to bring chip production closer to the black will probably be semi-impossible (no pun intended)... we might enter an era where manufacturers like Sony, Toshiba, Samsung, IBM, AMD, UMC, TSMC, and NEC will have to share production at new advanced fabs to achieve volume production without risking running the fabs at less than full capacity.
 
On a related note:

http://www.eetasia.com/ART_8800450875_499489_763bec78200701.HTM
TI exits 45nm process race
Posted : 26 Jan 2007

Texas Instruments Inc. (TI) has decided to drop the costly business of digital logic process development and rely on foundry partners for its processes.

According to reports, TI has decided to stop internal development at the 45nm node and use foundry-supplied processes at 32nm, 22nm and thereafter.
 
Sleeping with the enemy?

"'Sony may outsource production of Cell chips to Toshiba Corp.,' Yasuo Nakane, an analyst at Deutsche Securities Inc. who also attended yesterday's meeting, wrote in a report today. Nakane also advises investors to buy the stock." (link)

Interesting that Sony would deal with Toshiba on something like this, considering their relationship has soured over the whole BD vs HD-DVD thing. It would be very ironic if what ends up killing HD-DVD is the PS3, with Toshiba themselves fabbing its main component. It's like building guns for the enemy to shoot you with.

And then, on the other hand, it puts Toshiba in a position where they could sabotage PS3 production. It seems like an unnecessary risk for Sony.

But just maybe this is the result of some secret-handshake meetings that went on to try to repair relations, clear up this whole sticky mess, and get back on a track that is mutually beneficial to both companies.
 
"I do remember those 65nm thoughts of yore. They stemmed from Toshiba/Sony advancements and claims on the eDRAM front and their joint CMOS process made public through press releases in late late 2002 and 2003. Back then, their (optimistic) roadmaps indicated a move to 90nm in 2004, and 65nm in 2005. Obviously that didn't happen"

Sony's problems with the 65 nm node are no doubt why we did not see the more ambitious PS3 some of us were talking about back in 03-04. The current PS3 is the result of a planned 65 nm launch, followed by a relatively quick 45 nm refresh, gone wrong. Sony was heavily banking on 65 nm tech for the PS3, and by mid 2004, when it was becoming apparent that things were not going to go as smoothly as hoped and they would have to target 90 nm, nVIDIA was called in to throw together the RSX.

Had everything gone according to plan with Sony's process nodes, there is no doubt in my mind we would have seen a more powerful BE, possibly hovering around 512 GFLOPS, along with a largely Toshiba-developed Cell-based GPU.
 
Sony's problems with the 65 nm node are no doubt why we did not see the more ambitious PS3 some of us were talking about back in 03-04. The current PS3 is the result of a planned 65 nm launch, followed by a relatively quick 45 nm refresh, gone wrong. Sony was heavily banking on 65 nm tech for the PS3, and by mid 2004, when it was becoming apparent that things were not going to go as smoothly as hoped and they would have to target 90 nm, nVIDIA was called in to throw together the RSX.

Had everything gone according to plan with Sony's process nodes, there is no doubt in my mind we would have seen a more powerful BE, possibly hovering around 512 GFLOPS, along with a largely Toshiba-developed Cell-based GPU.

The GPU was still not going to be CELL based though.
 
Was there ever any proof to the effect that the PS3 GPU - the one Sony was planning before RSX - was not Cell-based? I will admit I fell behind on things, but from everything I saw a Visualizer GPU made the most sense, and from patent diagrams dating from circa 01-02 it seems that's what Sony had in mind.
 