Could be more RSX info...

aaronspink said:
Um, possibly but not a lot. Sony is no better than IBM and Nvidia didn't increase heat going from IBM to TSMC.



It isn't an issue with heat sinks; the X360 has fairly large heatsinks. It has to do with volume, airflow, and noise. PS2 was on a whole different thermal level than PS3.

And PS2 wasn't originally supposed to be 180 nm; it was supposed to be 250 nm. Or do you think you can make a process change in less than 12 months for a chip design? The PS2 cooling and enclosure were designed around the thermal loads they would get at 250 nm.

Aaron Spink
speaking for myself inc.

Their fabs had trouble with 180nm, so the gargantuan chips had to be made at 250nm with horrible yields and massive losses, forcing them to airship thousands upon thousands of units just to make the 500k US launch number, and shortages were still rampant. A few details may be missing as my memory's a bit rusty, but I'm pretty sure I heard about this kind of thing; not sure if it was both the GS and EE, or just one of them, though. Someone will surely know and provide us all with the info.

Dunno, IIRC, but an image I saw a long while ago, I think, showed almost half (the bottom half) of the old PS2 units being just one massive heatsink.

Not sure if this is what I saw originally, but here's a pic I found on How Stuff Works:
[Image: ps2-heatsink.jpg]

with the quote:
The incredible amount of heat generated by the processors requires this huge heat sink.
 
Bobbler said:
I'm curious about this too -- I don't know if there's anything that would patently make it impossible.

I know there are often quirky things, but a GPU should be relatively similar to a CPU in its density and contained structures (if not a bit more forgiving)?

SOI is a better material to construct a semiconductor out of. The downside is that SOI wafers cost more money.
 
zidane1strife said:
Their fabs had trouble with 180nm, so the gargantuan chips had to be made at 250nm with horrible yields and massive losses, forcing them to airship thousands upon thousands of units just to make the 500k US launch number, and shortages were still rampant. A few details may be missing as my memory's a bit rusty, but I'm pretty sure I heard about this kind of thing; not sure if it was both the GS and EE, or just one of them, though. Someone will surely know and provide us all with the info.

Their fabs didn't have trouble; they always planned on releasing at 250 nm, or they wouldn't have designed the parts for 250 nm. They did, however, plan on transitioning to 180 nm at the earliest possible point for monetary reasons.

Dunno, IIRC, but an image I saw a long while ago, I think, showed almost half (the bottom half) of the old PS2 units being just one massive heatsink.

Most of what your picture shows is pretty much just an EMI shield; the actual heatsink portion is fairly small in comparison to what would be required for 200 watts between GPU and CPU. Suffice it to say that the heatsink needed would be equivalent to what is found in mid- to high-end PCs for CPU and graphics.

Aaron Spink
speaking for myself inc.
 
Brimstone said:
The possibility does exist that Sony could fab the RSX on its SOI process to help the GPU run faster and cooler. It's hard to believe that all the heavy investment in SOI would just be for CELL CPUs.

SOI helps somewhat, but its effects do diminish at smaller process nodes, 90 nm being where it starts to lose a lot of its benefit. Even at its best, SOI only gives either a 15% frequency increase or a 15% power decrease, not both.

Aaron Spink
speaking for myself inc.
 
aaronspink said:
Sony would have a pretty big challenge cooling a 100+ watt GPU plus a ~100 watt CPU plus the BRD, plus the power supply, all within a box with supposedly a smaller volume than the X360.

I am sure that if you take the volume measurement properly, the PS3 would have the larger volume.

To give a counterpoint: supposedly the XCPU is around ~80 watts and the XGPU is around 40-50 watts. And they still have cooling issues, and that is with an external power supply.

Is it possible for the XCPU and XGPU to have power usage greater than that? Perhaps 80W and 50W were their targets, but the final chips that ended up going into the production X360 use more power than they expected.
 
aaronspink said:
SOI helps somewhat, but its effects do diminish at smaller process nodes, 90 nm being where it starts to lose a lot of its benefit. Even at its best, SOI only gives either a 15% frequency increase or a 15% power decrease, not both.

Aaron Spink
speaking for myself inc.

I thought SOI works well with strained silicon? IBM and AMD are going to use the "dual stress liner" technology this year, and this technique is presumed to be used in CELL.
 
Brimstone said:
I thought SOI works well with strained silicon? IBM and AMD are going to use the "dual stress liner" technology this year, and this technique is presumed to be used in CELL.

I don't know if they plan to use FDSOI at 65nm as well, along with the dual SL. They continue to work on it, though, as far as I know, down to 45nm (I haven't read that much on semiconductor progress lately).
The initial statement (at E3) from nVidia was that they would use Sony's 8ML CuL 90nm process.
And Cell will be made on the latest process they have at the time, of course.
 
aaronspink said:
Their fabs didn't have trouble; they always planned on releasing at 250 nm, or they wouldn't have designed the parts for 250 nm. They did, however, plan on transitioning to 180 nm at the earliest possible point for monetary reasons.
I've researched the info, and it appears you were correct. What I recalled was that it was to be produced at 180nm in Nagasaki, but production had to be moved back to Kokubu at 250nm, and it was in relation to the launch. But alas, it was the US launch, and not the Japanese launch.


aaronspink said:
Most of what your picture shows is pretty much just an EMI shield; the actual heatsink portion is fairly small in comparison to what would be required for 200 watts between GPU and CPU. Suffice it to say that the heatsink needed would be equivalent to what is found in mid- to high-end PCs for CPU and graphics.

Aaron Spink
speaking for myself inc.
Again, I'm not sure if that was the heatsink of the original models.

In any case, while it's highly unlikely, there's also the remote possibility that, just as the spring-launch and otherworldly-price info is doubtful, the process info could be in doubt too. All an elaborate ploy to force MS to launch first and at a high price: many walk into the store, see the non-premium SKU, and go away to wait for the premium. By saying they'll launch in spring, Sony makes MS think twice about launching in 2k6, as they'd be launching at the same time or later should that be the case. By quoting 90nm G70 specs with the same CPU frequency, the same amount of RAM and all, MS feels safe that there won't be any major disparity in performance. It also pleases nVidia, who wouldn't want their customers to know through the rest of 2k5 that their $500 or $1000 SLI investment in GPUs is for a product substantially inferior to a ~$300-400 console, which would deter many a purchase, reducing sales and hurting their main product's PR image.
 
SOI is a better material to construct a semiconductor out of. The downside is that SOI wafers cost more money.
Wafer cost alone really doesn't contribute that much to the cost of the chip. Even at a 10:1 difference in wafer price over bulk, even with a chip as huge as CELL and lousy yields below 25%, the cost difference would still probably be less than $4.00, assuming all else is equal.
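Just to put some rough numbers on that, here's a quick back-of-the-envelope sketch of the per-good-die wafer-cost delta. All the inputs (wafer prices, die size) are hypothetical illustrative figures, not quoted numbers, so take the output only as a feel for the order of magnitude:

```python
# Rough per-good-die cost delta between SOI and bulk wafers.
# All inputs are hypothetical illustrative figures, not quoted numbers.
import math

wafer_diameter_mm = 300.0
die_area_mm2      = 221.0    # assumed CELL-class die size
bulk_wafer_cost   = 30.0     # hypothetical bulk wafer price ($)
soi_wafer_cost    = 300.0    # hypothetical SOI price, ~10:1 over bulk ($)
yield_fraction    = 0.25     # "lousy yields below 25%"

# Standard gross-die-per-wafer approximation (accounts for edge loss).
r = wafer_diameter_mm / 2.0
gross_dies = (math.pi * r**2 / die_area_mm2
              - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

good_dies = gross_dies * yield_fraction
extra_cost_per_good_die = (soi_wafer_cost - bulk_wafer_cost) / good_dies

print(f"gross dies/wafer: {gross_dies:.0f}, good dies: {good_dies:.0f}")
print(f"extra wafer cost per good die: ${extra_cost_per_good_die:.2f}")
```

With those particular made-up inputs the delta lands around $4 per good die; obviously it swings with whatever wafer prices you assume.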

I don't know if they plan to use FDSOI at 65nm as well, along with the dual SL. They continue to work on it, though, as far as I know, down to 45nm (I haven't read that much on semiconductor progress lately).
Wouldn't be that hard to transition to FDSOI from a design standpoint. If you can design for PDSOI, you can design for FDSOI (though the reverse is not true), and the overall benefits do increase.

I thought SOI works well with strained silicon?
More like SOI makes the creation of strained wafers easier. There's no real inherent performance advantage, AFAIK, that makes SSDOI have more effect than SS on bulk. Besides, trying to get power consumption down with straining is more of a chance game than a definite outcome. SOI, at least, can guarantee you a power drop over bulk as long as you design for power consumption (as opposed to clock).
 
zidane1strife said:
It also pleases nVidia, who wouldn't want their customers to know through the rest of 2k5 that their $500 or $1000 SLI investment in GPUs is for a product substantially inferior to a ~$300-400 console, which would deter many a purchase, reducing sales and hurting their main product's PR image.
I don't think the sorts of PC gamers who buy SLI'd top-end GPUs care much about high-end console performance. They're either hardcore gamers who'll lap up any gaming hardware, or PC enthusiasts who only care about PC tech. The fact that a console has better performance than a PC is neither here nor there when the top-end PC space has a 6 month timeframe. Those who hang out up there are used to seeing their current gear become inferior in the blink of an eye, and are surely either happy with that or willing to shell out on the next latest and greatest GPU configuration.
 
Cell chip also gets around process problems by design

Easy to overlook in all the fuss about how the IBM-Sony-Toshiba Cell chip will challenge Intel is just how much performance improvement the radical new architecture also gets from some pretty mainstream process technology.

To ramp the blazing 4GHz, 256Gflop, low-power (48W) processor quickly to volume production, the companies opted for a 90nm process, not 65nm. The exotic raised transistor structures and more extreme spun-on low-k dielectrics originally considered got dropped along the way in favor of processes already up and working. Both the 90nm and the 65nm version will use mainstream CVD SiOC, with a k value of about 3, according to Sony semiconductor technology executives interviewed by WaferNews’ Japanese partner Nikkei Microdevices. The only process change planned for the 65nm version is reportedly to change the silicide from CoSi2 to NiSi.

Kenshi Manabe, Sony’s CTO for semiconductors, told Nikkei Microdevices that Sony will use the same production processes for the Cell chip as for its next-generation conventional embedded DRAM graphics chip for the PlayStation 2, at both transistor and interconnect levels. They will be made in the same Nagasaki fab, though the Cell chip is on silicon-on-insulator (SOI) and the other on bulk silicon.

Much emphasis was put on design-for-manufacturability from the beginning, by designing for the most effective optical proximity correction and doubling the vias to make sure one of them works. The other major changes to get the most performance improvement for the fewest manufacturing problems are SOI and strained silicon. “If we were going to increase operating frequency, not using SOI was not an option,” explained Manabe. He said that after testing both bonded and implanted SOI wafers, they found essentially no difference in either cost or yield. They used a partially depleted SOI, because keeping the Si layer completely uniform across the 300mm wafer in the fully depleted version, as required to maintain a stable threshold voltage, turned out to be too costly and complicated.

The Cell chip uses what developers argue is a relatively simple approach to create strained silicon as well, eliminating the epitaxial SiGe some others have used to create localized compressive strain on the pMOS. Instead, they use the dual stress liner system, depositing a highly tensile SiN cap layer over the whole wafer, then patterning and etching it away in pMOS regions, leaving tensile strain in the nMOS areas. Then they repeat the process with a highly compressive SiN cap layer over the wafer, and leave it only on the pMOS regions for compressive strain. With an off-current of 100nA/µm, this dual stress liner strained silicon improved the nMOS drive current by 11% and the pMOS by 20%. A microprocessor made with the technology and IBM’s Power Architecture design showed a 7% increase in operating frequency compared to other strained silicon technologies, and similar results are expected from the Cell chip. - P.D.

http://sst.pennnet.com/Articles/Art...ID=227804&KEYWORD="matrix semiconductor"&p=67

It seems dual stress liner is already used in CELL. I didn't know CELL was a 48 watt chip.
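As a side note, the article's 256 Gflop number falls straight out of the SPE count and issue width. A quick sanity check, assuming the commonly cited 8 single-precision flops per SPE per cycle (4 SIMD lanes x a fused multiply-add counted as 2 ops):

```python
# Back-of-the-envelope check of the 256 Gflop figure quoted above.
# Assumes 8 SPEs, each doing a 4-wide single-precision FMA per cycle
# (4 lanes x 2 flops = 8 flops/cycle), the commonly cited peak rate.
spes            = 8
flops_per_cycle = 8      # 4-wide SIMD fused multiply-add
clock_ghz       = 4.0    # clock used in the quoted article

peak_gflops = spes * flops_per_cycle * clock_ghz
print(peak_gflops)       # 256.0
```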
 
Good find Brimstone. And I'll tell you that if it's running at 4GHz@48 watts with the 1.1 or 1.2 core voltage required, it should run a fair bit cooler at the 0.9 or 1.0 required for 3.2 GHz.
 
Well, the XeCPU is estimated (pegged in an IBM interview off-the-cuff) at ~85 watts, so at 3.2 GHz, if the respective numbers hold true, the Cell in PS3 would have half or less the power draw of the XeCPU.
 
I didn't know CELL was a 48 watt chip.
Doesn't entirely surprise me given the Schmoo plot for the SPEs. Assuming that such figures are accurate and there are 8 working ones, that still leaves a pretty good amount of power to be taken up by the PPE and the external busses all put together. Also, I'm kind of assuming that the 48W (as well as the Schmoo plot wattages) is a TDP figure as opposed to necessarily a true max power consumption.

Still, given the drop down to 0.9 V for 3.2 GHz, even with the figure being TDP, the true max power consumption would still probably be around half the XeCPU's draw. Though we don't really know if XeCPU's 85W is a max or a TDP (or both, which is possible).
 
Good point on the TDP, ShootMyMonkey - that's probably what they're quoting, as really only AMD makes a habit of quoting max burn numbers.

Looking at those TDP estimates though, even if we take the most conservative route of assuming a 1.1vcore for the 4GHz operation, and a drop to only 1.0vcore to maintain 3.2GHz operation, that would still take the estimated TDP figure from ~45 watts to ~30 watts - and that's a pretty good number.

If the TDP is coming down from 4GHz@1.2vcore and/or going down to 3.2GHz@0.9vcore, the power savings would be even more dramatic, so for now I'll just assume the previous conservative scenario.

But still, a TDP of ~30 watts would be a pretty good achievement on this chip.
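For what it's worth, those ballparks just follow from the usual dynamic-power rule of thumb (power scaling roughly with frequency x voltage squared). A quick sketch, treating the ~45W starting TDP and the vcore values as assumptions rather than measured data:

```python
# Dynamic power scales roughly with frequency * voltage^2.
# The 45 W starting TDP and the vcore values mirror the scenarios above;
# they are assumptions, not measured figures.
def scale_power(p0, f0, v0, f1, v1):
    """Scale a baseline power figure to a new frequency/voltage point."""
    return p0 * (f1 / f0) * (v1 / v0) ** 2

tdp_4ghz = 45.0  # assumed ~45 W TDP at 4 GHz
print(round(scale_power(tdp_4ghz, 4.0, 1.1, 3.2, 1.0), 1))  # ~29.8 W -> the "~30 watts" case
print(round(scale_power(tdp_4ghz, 4.0, 1.2, 3.2, 0.9), 1))  # ~20 W in the more aggressive case
```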
 
Power consumption (or rather the lack thereof) sounds very good. I hope they can keep RSX down too, and make for a quieter system. I hate noisy computing devices!
 
xbdestroya said:
Good find Brimstone. And I'll tell you that if it's running at 4GHz@48 watts with the 1.1 or 1.2 core voltage required, it should run a fair bit cooler at the 0.9 or 1.0 required for 3.2 GHz.

Supposedly the SPEs only take a few watts each... I wouldn't be surprised to see half the wattage coming from the PPE. I don't think it would be all that surprising if the XCPU ended up eating more wattage than the Cell.

85 watts for XCPU seems a bit high though...?
 
Bobbler said:
85 watts for XCPU seems a bit high though...?

Well, as I said, it came from an interview with one of the IBM researchers familiar with the chip. I'm looking for it now, but it's been quoted a number of times on this forum as well, so you can be assured that I'm not trying to purposefully spread FUD here or anything.

PS - Whatever I've read, I know you've read too, because while searching I found you quoting the same 85 watts figure :cool: Link

PPS - Well, the primary source comes back into the open. It's quite the hassle to hunt these things down you know! ;)

I have read that the Cell processor was designed in part to run an RTOS -- I guess that's obvious, given its gaming focus. What other embedded processors are interesting right now?

Paul:
ARM is heavily used for embedded and Linux runs on both 64-bit and 32-bit ARM. There are even SMP ARM parts out there.

It really blew my mind when I first saw that -- a four-core, single-chip ARM, running at 350 and 550MHz, providing 1,440 Dhrystone MIPS, all on 600 milliwatts of power.

Of course, compare that to our PowerPC processor for Xbox which does 700 times as many floating-point operations per second as the four-core ARM does integer operations per second. It has only three cores instead of four, but it does use quite a bit more power. Still, 85 watts is well within range for a consumer device and not that long ago you couldn't buy a supercomputer that could do what PowerPC can now do, regardless of how much power you had available.

But again, the important question is "what does your application need?" If you're running off a battery, the ARM processor we just talked about is high power. You get only a few minutes of that kind of power from a D cell. On the other hand, if you have a wall outlet, 85 watts is trivial -- less than an amp.

Interview Link
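The figures in that interview are easy to sanity-check with a little arithmetic. A quick sketch below; the 120 V mains figure is my own assumption for a US outlet, and the ~1 Tflops number just comes from multiplying the quoted ratio through, nothing more:

```python
# Quick sanity checks on the figures quoted in the interview above.
arm_dmips   = 1440.0   # four-core ARM, Dhrystone MIPS
arm_watts   = 0.6      # 600 milliwatts
xcpu_watts  = 85.0     # quoted Xbox PowerPC power
mains_volts = 120.0    # assumed US wall-outlet voltage (not from the interview)

# "700 times as many floating-point ops/s as the four-core ARM does integer ops/s"
implied_gflops = 700 * arm_dmips / 1000.0
print(f"implied XCPU peak: ~{implied_gflops:.0f} Gflops")        # ~1008 Gflops, i.e. ~1 Tflops

print(f"ARM efficiency: {arm_dmips / arm_watts:.0f} DMIPS/W")    # 2400 DMIPS/W
print(f"current at the wall: {xcpu_watts / mains_volts:.2f} A")  # ~0.71 A -> "less than an amp"
```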
 