CELL Patents (J Kahle): APU, PU, DMAC, Cache interactions?

Gubbi said:
Brimstone said:
Speculation Mode


Super-Scalar, But Not as You Know It

<snip>

IMO, Sun are doomed. They have taken their eye off the single-thread performance ball and they are going to lose.

They make it sound like wide super scalars are a bad idea. They argue that most of your execution units will idle since many programs have limited instruction level parallelism (ILP). This is where SMT comes in. SMT takes advantage of the fact that you're likely to have an excess of execution units, with all that goes with it (lots of rename registers etc.). The reason why SMT is comparatively cheap (10-15% extra logic) is that all that is added is the capability of the OOO scheduling engine to track more than one context (and probably registers for the extra context).

For the many programs with limited ILP, Sun's approach and a wide super scalar with SMT will do approximately the same, both keeping the majority of their execution units busy by executing multiple threads. But a significant fraction of programs will have lots of ILP, and these will fly like shit off a silver shovel on our wide super scalar compared to Sun's Niagara.
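Gubbi's utilization argument is easy to sketch numerically. The toy Python model below is purely illustrative — the issue width, thread counts and per-thread ILP are invented numbers, not any real core:

```python
import random

def run(num_units, threads, cycles=10_000, ilp=2):
    """Toy model: each cycle every thread context offers up to `ilp`
    ready instructions; a `num_units`-wide core issues as many as it
    can.  Returns average execution-unit utilization."""
    rng = random.Random(0)  # fixed seed so the sketch is repeatable
    issued = 0
    for _ in range(cycles):
        ready = sum(rng.randint(0, ilp) for _ in range(threads))
        issued += min(ready, num_units)
    return issued / (cycles * num_units)

# One low-ILP thread leaves a 6-wide core mostly idle; adding SMT
# contexts soaks up the slack with only modest extra hardware.
for t in (1, 2, 4):
    print(f"{t} thread(s): {run(6, t):.0%} average utilization")
```

With one thread the 6-wide core sits mostly idle (~17% here); four contexts push it past 60%, which is the whole SMT pitch in miniature.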

Cheers
Gubbi


Microsoft with a Sun-like "Throughput Computing" CPU design would face a similar problem to the one Sony is going to face with CELL: getting developers to rethink how they write their code. The challenge of getting great performance in the next console cycle will probably end up being a 50/50 split between hardware and software design and engineering.
 
Jaws said:
Won't argue that figure...still <<< 200 mm2

So?

Jaws said:
I'm using the 200 mm2 as a yardstick at 90 nm. I'm kinda on the optimistic side!

I'll say... 200mm² (no matter what the process) is a pretty expensive die to sell... Especially for a console.

Jaws said:
...Earlier in this thread I was trying to show if the PS3 BE was feasible for Sony at 65 nm and the die area was around ~ 300 mm2.

Yeah, sure in a devkit... But that's too big to sell in any numbers for a console, unless Sony gets absurdly excellent yields...

Jaws said:
The PS2 EE was 240 mm2 at 250 nm and the GS was 279 mm2 at 250nm at launch.

The PS2 never shipped with an EE or GS on those scales... Those were only available on the earliest DTLs. The PS2 shipped with a 224mm² EE and a 188mm² GS. The EE was quickly refined down to a 110mm² die by the US launch and when FAB1 finally came online (about 6 months after the US launch) the GS was down to 108mm². And you remember the shortages because of that, don't you?

Jaws said:
IMHO, I'm pretty sure Sony will not release a PS3 CPU under 200 mm2 at 65nm. If MS is on an older process for Xenon, and for them to compete with the PS3 CPU, then they will need something in the range of 200mm2 at 90nm at least.

The last thing I can imagine MS doing is deciding their processor needs strictly off of die size, and certainly not going for anything that big unless IBM was cranking them out like popcorn...

Jaws said:
I've seen everything from the PPC 603 to the Power5+ bandied around!

Who would suggest a 603? Some old die-hard 3DO fan?

IIRC, the top of their family trees would still trace back to the Power1, no?

Depends... You could say all of IBM's RISC designs go all the way back to the 801. But that's just silly company heritage. But the Power1 wasn't even a PowerPC (and not even a single chip, but three).

The bastard child was the 601, and the 32-bit PPCs followed from there to the 750.

No, they are not bastard children (well maybe except the 601, but that's only because the 601 included a lot of logic to remain compatible with the old POWER ISA). The 60x, 75x, and 74xx (Motorola) were not bastard children, I don't know where you picked up that idea...

The 64-bit 620 was ultimately derived from the 601 also, and was meant to give birth to the 970, but it wasn't successful or something, and IBM used the Power4 core instead to make the 64-bit PPC 970.

The 620 wasn't even an IBM processor... It was largely a Motorola effort (as was the 615 and 630)... And no it was not derived from the 601, nor meant to "give birth" to the 970... And you're ignoring the P2SC, Power3 and RS64 designs as well...
 
The PS2 never shipped with an EE or GS on those scales... Those were only available on the earliest DTLs. The PS2 shipped with a 224mm² EE and a 188mm² GS. The EE was quickly refined down to a 110mm² die by the US launch and when FAB1 finally came online (about 6 months after the US launch) the GS was down to 108mm². And you remember the shortages because of that, don't you?

Are you sure? I remember that several Japanese PlayStation 2 launch units shipped with the 250 nm GS, which was 279 mm^2, since they had big problems at Nagasaki with 180 nm GS chips.
 
Are you sure? I remember that several Japanese PlayStation 2 launch units shipped with the 250 nm GS, which was 279 mm^2, since they had big problems at Nagasaki with 180 nm GS chips.

Not all 250nm GS were 279mm²... And even at the 188mm² die size it was still too big for the Kagoshima fab to crank out in reasonable numbers (also the smaller GS was a hybrid like the EE+GS), until FAB1 got its start-up issues ironed out...
 
I am sorry, but SCE seems to agree with me unless you can tell me that this PDF is wrong:

http://www.sony.net/SonyInfo/IR/info/presen/eve_03/handout.pdf

(page 2 of 5, second slide in the page)




Edit: I can see that the 188 mm^2 chip is indeed a hybrid from the slides, but I do remember that some early PlayStation 2 consoles in Japan had a considerably bigger GS than later ones.

I am guessing that 250 nm GS chips that measured 279 mm^2 made their way into the PlayStation 2.

In regards to PlayStation 3, we are talking about 300 mm wafers versus the 200 mm wafers which have been used for PlayStation 2 chips so far.
 
I am sorry, but SCE seems to agree with me unless you can tell me that this PDF is wrong:

The PDF is right, but you're not reading it right...

Edit: I can see that the 188 mm^2 chip is indeed a hybrid from the slides, but I do remember that some early PlayStation 2 consoles in Japan had a considerably bigger GS than later ones.

I've got a Japanese launch PS2 and its GS is smaller (and doesn't have the big fan either) than the one in our first DTLs. The bigger GS you saw was probably the hybrid GS and not the straight 180nm FAB1 GS...

I am guessing that 250 nm GS chips that measured 279 mm^2 made their way into the PlayStation 2.

EE#2 and GS#2 that you see in the slides are the components that shipped in PS2s for the Japanese launch...
 
archie4oz said:
Jaws said:
Won't argue that figure...still <<< 200 mm2

So?

Huh? The whole point of me eliminating the PPC 440 core for Xe CPU is based on this assumption of a 200mm2 die as a reference...

archie4oz said:
Jaws said:
I'm using the 200 mm2 as a yardstick at 90 nm. I'm kinda on the optimistic side!

I'll say... 200mm² (no matter what the process) is a pretty expensive die to sell... Especially for a console.

Maybe at launch...but a console has to sustain a closed architecture for 5yrs+ ...

archie4oz said:
Jaws said:
...Earlier in this thread I was trying to show if the PS3 BE was feasible for Sony at 65 nm and the die area was around ~ 300 mm2.

Yeah, sure in a devkit... But that's too big to sell in any numbers for a console, unless Sony gets absurdly excellent yields...

Like I mentioned earlier, I'm an optimist ;) ...It's still close enough to a 279 mm2 GS. And Sony will likely drop to 45nm asap...

Also, earlier in this thread it was mentioned the NV40 die was 270-305 mm2 at 130 nm. Sony has economies of scale on its side compared to the NV40, and the fact that the PS3 chipsets in a closed system will have to be competitive for 5yrs+... Using prior precedents, Sony are likely to launch a 'beast' of a die, then drop process asap, IMO.

STI have made massive fab investments, IIRC larger than for the PS2 launch, not only for GPUs and CPUs but also with fab partners for XDR memory production. It looks to me they'll have more capacity at the PS3 launch than at the PS2's, for larger economies of scale and to shift more units than the PS2 launch...

And as Panajev has mentioned below, they are using 300mm wafers instead of 200mm wafers for better yields per wafer. Because the BE and GPU are comprised of highly repetitive units, adding extra units for redundancy and increasing the die area is also likely to increase yields. This time around, they are wary of MS's presence, unlike when the PS2 had the market locked down. I'm sure they won't be cutting corners if they can help it...
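The redundancy point can be made concrete with a standard Poisson yield model. The sketch below uses invented numbers (defect density, unit and shared-logic areas — nothing here is real BE data), so treat it as an illustration of the mechanism only:

```python
import math

def yield_with_redundancy(n_needed, n_spare, unit_area, shared_area, d0=0.5):
    """Poisson defect model (P(clean) = exp(-D0 * area), D0 in
    defects/cm^2): the die works if the shared logic is clean AND at
    least n_needed of the n_needed + n_spare identical units are clean."""
    p_unit = math.exp(-d0 * unit_area)
    total = n_needed + n_spare
    p_enough = sum(
        math.comb(total, k) * p_unit**k * (1 - p_unit)**(total - k)
        for k in range(n_needed, total + 1)
    )
    return math.exp(-d0 * shared_area) * p_enough

# 8 required units of 0.25 cm^2 each plus 0.5 cm^2 of shared logic:
# one spare makes the die bigger, yet roughly doubles the yield.
print(f"no spare:  {yield_with_redundancy(8, 0, 0.25, 0.5):.1%}")
print(f"one spare: {yield_with_redundancy(8, 1, 0.25, 0.5):.1%}")
```

Under these made-up numbers the spare unit takes yield from roughly 29% to roughly 56% despite the larger die, which is the counter-intuitive effect the paragraph above describes.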

archie4oz said:
Jaws said:
The PS2 EE was 240 mm2 at 250 nm and the GS was 279 mm2 at 250nm at launch.

The PS2 never shipped with an EE or GS on those scales... Those were only available on the earliest DTLs. The PS2 shipped with a 224mm² EE and a 188mm² GS. The EE was quickly refined down to a 110mm² die by the US launch and when FAB1 finally came online (about 6 months after the US launch) the GS was down to 108mm². And you remember the shortages because of that, don't you?

Panajev has posted the PPT below... it quite clearly shows the 250 nm process extending into Fiscal Year 2000. IIRC, that's after the end of March 2000. It extends approx. a quarter into FY2000, ~ Jun/Jul 2000. The PS2 was launched March 2000 in Japan. Unless they were binning them for fun, the EE was 240 mm2 and the GS was 279 mm2 at launch there.


archie4oz said:
Jaws said:
IMHO, I'm pretty sure Sony will not release a PS3 CPU under 200 mm2 at 65nm. If MS is on an older process for Xenon, and for them to compete with the PS3 CPU, then they will need something in the range of 200mm2 at 90nm at least.

The last thing I can imagine MS doing is deciding their processor needs strictly off of die size, and certainly not going for anything that big unless IBM was cranking them out like popcorn...

Well, if MS can't match what I've assumed for the die sizes and process of Sony's chipsets, then they're not gonna match them for performance...

archie4oz said:
Jaws said:
I've seen everything from the PPC 603 to the Power5+ bandied around!

Who would suggest a 603? Some old die-hard 3DO fan?

:LOL: Can't remember where I saw it... I'll link it if I find it...


archie4oz said:
The bastard child was the 601, and the 32-bit PPCs followed from there to the 750.

No, they are not bastard children (well maybe except the 601, but that's only because the 601 included a lot of logic to remain compatible with the old POWER ISA). The 60x, 75x, and 74xx (Motorola) were not bastard children, I don't know where you picked up that idea...

I wrote bastard child not children , meaning just the 601 ;)

archie4oz said:
The 64-bit 620 was ultimately derived from the 601 also, and was meant to give birth to the 970, but it wasn't successful or something, and IBM used the Power4 core instead to make the 64-bit PPC 970.

The 620 wasn't even an IBM processor... It was largely a Motorola effort (as was the 615 and 630)... And no it was not derived from the 601, nor meant to "give birth" to the 970... And you're ignoring the P2SC, Power3 and RS64 designs as well...

If the 620 wasn't derived from the 601, then was it a 'clean sheet' design without using any techs from the 601? And was it at this juncture that the IBM / Motorola alliance became fragile?
 
Jaws said:
If the 620 wasn't derived from the 601, then was it a 'clean sheet' design without using any techs from the 601? And was it at this juncture that the IBM / Motorola alliance became fragile?

The 64-bit 620 was derived from the 4-way superscalar 604, but with a shorter pipeline and fewer (!!!) rename registers.

The front end of the pipeline was changed: the fetch/decode and issue stages were collapsed into a fetch/decode/issue stage by adding predecode bits to the instruction cache (making decode/issue faster). The number of rename registers was lowered too, the reasoning being that with a shorter pipeline you could make do with fewer rename registers. The reasoning was wrong: since the basic execution pipeline didn't change, the rename resources would live for as long as they did in the 604. The 604's rename resources/OOO capabilities only just covered the latency associated with instruction execution and level 1 cache access; the 620's didn't even do that.

cheers
Gubbi
 
Like I mentioned earlier, I'm an optimist

I'll say, that's quite an understatement... And who's to say that the industry won't have the same hiccups @65nm or 45nm that it's had @90nm?

It looks to me they'll have more capacity at the PS3 launch than the PS2 for larger economies of scale and to shift more units than the PS2 launch...

Of course, because now the fabs are built (unlike during the pre-PS2 days where SCEI had to actually build them)...

Also, earlier in this thread it was mentioned the NV40 die was 270-305 mm2 at 130 nm. Sony has economies of scale on its side compared to the NV40, and the fact that the PS3 chipsets in a closed system will have to be competitive for 5yrs+... Using prior precedents, Sony are likely to launch a 'beast' of a die, then drop process asap, IMO.

If you look at prior precedents (and not just Sony's) you'll see that launching with beastly dies isn't very common for a consumer device. The PS2 is more the exception than the rule (and Sony will probably wish to avoid the same events)...

Sony's "economies" of scale have little to do with the NV40... Things would probably be a lot rosier for the NV40 if the die were smaller and yields much better... You could say the same about the P4EE (another big-die, expensive piece of silicon)...

Panajev has posted the PPT below... it quite clearly shows the 250 nm process extending into Fiscal Year 2000. IIRC, that's after the end of March 2000. It extends approx. a quarter into FY2000, ~ Jun/Jul 2000. The PS2 was launched March 2000 in Japan. Unless they were binning them for fun, the EE was 240 mm2 and the GS was 279 mm2 at launch there.

Pana is posting IR material... It's fluff, and not exactly precise... The SCPH-10000 launched with the CXD9542GB (EE, with a die area of approx. 224mm²; note that a later revised EE, the CXD9615GB, replaced it) and the CXD2934GB (GS, with a die area of approx. 188mm²). That's about as specific as I can get with you... The micrographs you see of the EE and GS on the far left are from the original models that didn't even run at the released clock speed...

ASC7PL (.18µm) became available Q4 FY99, and in Q1 FY2000 ASC7DL (DRAM) was available for full blown ASC7 GS parts (although IIRC volume didn't really pick up until 2H FY2000)...

If the 620 wasn't derived from the 601, then was it a 'clean sheet' design without using any techs from the 601?

The 620 has about as much in common with the 601 as the K8 does with the P5...

And the 750 was the last *official* collaborative AIM design...
 
Nice and juicy info Archie :).

How much would you say the Broadband Engine CPU could cost Sony to make ?

Is $140 per chip too much?

Because for that money Intel was making (not even in ultra high volumes, as this figure was estimated around the Itanium 2's launch) 400+ mm^2 Itanium 2's (3 MB of cache) on their 130 nm process with 200 mm wafers.

That figure actually is more like $130 according to people closer to Intel sources (the $140 figure was estimated by the Microprocessor Report (MPR) guys).
 
Gubbi said:
Jaws said:
If the 620 wasn't derived from the 601, then was it a 'clean sheet' design without using any techs from the 601? And was it at this juncture that the IBM / Motorola alliance became fragile?

The 64-bit 620 was derived from the 4-way superscalar 604, but with a shorter pipeline and fewer (!!!) rename registers.

The front end of the pipeline was changed: the fetch/decode and issue stages were collapsed into a fetch/decode/issue stage by adding predecode bits to the instruction cache (making decode/issue faster). The number of rename registers was lowered too, the reasoning being that with a shorter pipeline you could make do with fewer rename registers. The reasoning was wrong: since the basic execution pipeline didn't change, the rename resources would live for as long as they did in the 604. The 604's rename resources/OOO capabilities only just covered the latency associated with instruction execution and level 1 cache access; the 620's didn't even do that.

cheers
Gubbi

From what you describe, it seems that the 620 was not a balanced design by Motorola (I understand that IBM had little input), and is that why the 620 disappeared off the PPC radar? ...I didn't follow what happened to the 620. Shame really, as Motorola made some cracking CPUs in their 'golden' period with the 68xxx range :)
 
archie4oz said:
Like I mentioned earlier, I'm an optimist

I'll say, that's quite an understatement... And who's to say that the industry won't have the same hiccups @65nm or 45nm that it's had @90nm?

Nothing wrong with optimism as long as it's substantiated to a degree and balanced with a healthy dose of pessimism ;)

Well, spreading the fab investments between Sony, Toshiba and IBM also spreads and lowers the risks between them...


archie4oz said:
Also, earlier in this thread it was mentioned the NV40 die was 270-305 mm2 at 130 nm. Sony has economies of scale on its side compared to the NV40, and the fact that the PS3 chipsets in a closed system will have to be competitive for 5yrs+... Using prior precedents, Sony are likely to launch a 'beast' of a die, then drop process asap, IMO.

If you look at prior precedents (and not just Sony's) you'll see that launching with beastly dies isn't very common for a consumer device. The PS2 is more the exception than the rule (and Sony will probably wish to avoid the same events)...

Sony's "economies" of scale have little to do with the NV40... Things would probably be a lot rosier for the NV40 if the die were smaller and yields much better... You could say the same about the P4EE (another big-die, expensive piece of silicon)...

Excluding precedents outside what Sony set for itself, their 'beastly' die strategy has been both a commercial success (a runaway success this gen with the PS2) and a technical success (it can still hold its own against consoles released 18+ months after its release). So why "will Sony probably wish to avoid the same events"?

I was just stressing that, compared to NV40 production, Sony can generate a much higher volume demand for their chipsets. If they are to use similar die sizes to the NV40, then their access to state-of-the-art fabs and higher economies of scale should make those sizes feasible for a PS3.

archie4oz said:
Panajev has posted the PPT below... it quite clearly shows the 250 nm process extending into Fiscal Year 2000. IIRC, that's after the end of March 2000. It extends approx. a quarter into FY2000, ~ Jun/Jul 2000. The PS2 was launched March 2000 in Japan. Unless they were binning them for fun, the EE was 240 mm2 and the GS was 279 mm2 at launch there.

Pana is posting IR material... It's fluff, and not exactly precise... The SCPH-10000 launched with the CXD9542GB (EE, with a die area of approx. 224mm²; note that a later revised EE, the CXD9615GB, replaced it) and the CXD2934GB (GS, with a die area of approx. 188mm²). That's about as specific as I can get with you... The micrographs you see of the EE and GS on the far left are from the original models that didn't even run at the released clock speed...

ASC7PL (.18µm) became available Q4 FY99, and in Q1 FY2000 ASC7DL (DRAM) was available for full blown ASC7 GS parts (although IIRC volume didn't really pick up until 2H FY2000)...

Thanks for this info... this is interesting. So you're confirming that all PS2s sold to the general public had the EE and GS at 180nm or less? A link to a source would be nice :) If this is true, then I presume the graph showing 250nm EE and GS being produced until Jun/Jul 2000 was for the low-volume PS2 TOOL devkits?

This info would shift some of my assumptions. Firstly, I'm not changing my opinion on beastly die sizes for the BE and GPU, ~300mm2, and then a quick drop to 45 nm. The assumption I'm changing now, IMO, is that the PS3 chipsets released to the general public would be based on a 45 nm process, and hence the launch date of the PS3 will be tied to mass production of that process being ready. A release date of Q1 2006 would then be more likely than Q4 2005 if they can get 45 nm off the ground. Chipsets produced on 65 nm would likely find their way into Cell workstations, PS3 TOOL devkits and possibly other devices.

Launching at 45 nm would have many advantages in realising the BE and GPU. IIRC, the capacitor-less eDRAM from Toshiba was designed for the 45nm process and could be employed for large amounts of eDRAM on the BE and GPU (64 + 64 MB) at reasonable die sizes. It would also mean less power consumption, higher clocks would be feasible, and a BE > 2 GHz may not be such a pipe dream! :D

archie4oz said:
If the 620 wasn't derived from the 601, then was it a 'clean sheet' design without using any techs from the 601?

The 620 has about as much in common with the 601 as the K8 does with the P5...

And the 750 was the last *official* collaborative AIM design...

Wasn't saying how 'common' they were, just that they can be traced to each other. Gubbi mentioned the 620 was derived from the 604. If the 604 was derived from the 601, then I suppose the 620 can be thought of as ultimately being derived/traced from the 601. Also, if the 970 was derived from the Power4, and the Power4 can be traced back to the Power1, which gave birth to the bastard 601, then in some roundabout way, to answer your first question, the 750 and 970 can be traced back to the Power1, which was my first answer! :)
 
Is $140 per chip too much?

Did any Xbox or PS2 chips cost that much during their launch periods?

For a console, IMO that's too much; if I were the one making the decision, a PS3 with two $140 chips would be a no-go.
 
Jaws said:
Gubbi said:
Jaws said:
If the 620 wasn't derived from the 601, then was it a 'clean sheet' design without using any techs from the 601? And was it at this juncture that the IBM / Motorola alliance became fragile?

The 64-bit 620 was derived from the 4-way superscalar 604, but with a shorter pipeline and fewer (!!!) rename registers.

The front end of the pipeline was changed: the fetch/decode and issue stages were collapsed into a fetch/decode/issue stage by adding predecode bits to the instruction cache (making decode/issue faster). The number of rename registers was lowered too, the reasoning being that with a shorter pipeline you could make do with fewer rename registers. The reasoning was wrong: since the basic execution pipeline didn't change, the rename resources would live for as long as they did in the 604. The 604's rename resources/OOO capabilities only just covered the latency associated with instruction execution and level 1 cache access; the 620's didn't even do that.

cheers
Gubbi

From what you describe, it seems that the 620 was not a balanced design by Motorola (I understand that IBM had little input), and is that why the 620 disappeared off the PPC radar? ...I didn't follow what happened to the 620. Shame really, as Motorola made some cracking CPUs in their 'golden' period with the 68xxx range :)

The 620 was an IBM effort; Motorola concentrated on MPUs for the embedded/personal computing market. The 620 did show up in a few Bull servers.

As for the 68K family: that family was doomed once CPUs entered the superscalar era, because the (long) variable-length instructions would have to be decoded sequentially (i.e. an opcode could have an extension, which again could have an extension), unlike x86, where you know the length of the instruction by looking at the first few bytes.

So while it was "nicer" for an assembly programmer to code for, it was harder to make fast implementations.

Cheers
Gubbi
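The decode bottleneck Gubbi describes can be sketched in a few lines. The encodings below are invented for illustration (real 68K and x86 encodings are far more involved); the point is only the dependency structure:

```python
def m68k_like_length(words):
    """Toy 68K-style decode: each word's top bit says whether another
    extension word follows, so the total length can only be found one
    word at a time -- a serial dependency chain."""
    length = 1
    while words[length - 1] & 0x8000:  # must read word i to know about word i+1
        length += 1
    return length

def x86_like_length(first_byte):
    """Toy x86-style decode: a table indexed by the leading byte gives
    the full length up front, so several decoders can work in parallel."""
    LENGTH_TABLE = {0x01: 1, 0x02: 2, 0x03: 3}  # hypothetical opcodes
    return LENGTH_TABLE[first_byte]

# Two chained extension words: length is only known after scanning all three.
print(m68k_like_length([0x8000, 0x8001, 0x0002]))  # -> 3
print(x86_like_length(0x02))                       # -> 2
```

In the toy 68K case a superscalar front end can't even start decoding instruction N+1 until it has walked the whole extension chain of instruction N, which is the implementation headache the post refers to.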
 
Gubbi said:
.....
So while it was "nicer" for an assembly programmer to code for, it was harder to make fast implementations.

Cheers
Gubbi

Ahhh... rose-tinted glasses and the 'golden' Amiga/Atari ST era! What devs used to squeeze out of them 68Ks! 8)
 
Jaws said:
archie4oz said:
If the 620 wasn't derived from the 601, then was it a 'clean sheet' design without using any techs from the 601?

The 620 has about as much in common with the 601 as the K8 does with the P5...

And the 750 was the last *official* collaborative AIM design...

Wasn't saying how 'common' they were, just that they can be traced to each other. Gubbi mentioned the 620 was derived from the 604. If the 604 was derived from the 601, then I suppose the 620 can be thought of as ultimately being derived/traced from the 601. Also, if the 970 was derived from the Power4, and the Power4 can be traced back to the Power1, which gave birth to the bastard 601, then in some roundabout way, to answer your first question, the 750 and 970 can be traced back to the Power1, which was my first answer! :)

Jaws, you can trust Archie's PPC knowledge. The 601 was very quickly put together by AIM. It was supposed to be a quick and dirty implementation of the PPC instruction architecture so that IBM could use it to continue their Power workstation series with it. It had hardware support for all Power idiosyncrasies (which later implementations did not). Among other things this mandated a unified level 1 cache (I can't remember why, just that it did). The core was primarily an IBM effort, while the bus interface was taken almost directly from Motorola's 88K family (the 88110 to be precise). There is a writeup here

Later AIM built the 602, 603 (and 603e) and the 604. But all of these were fresh designs.

The 603s were targeted at typical Motorola markets whereas the 604 was heralded as a workstation-class CPU (and indeed primarily built by IBM engineers). The 620 was based on the 604 design.

Cheers
Gubbi
 
V3 said:
Is $140 per chip too much?

Did any Xbox or PS2 chips cost that much during their launch periods?

For a console, IMO that's too much; if I were the one making the decision, a PS3 with two $140 chips would be a no-go.

I don't have costings down to chip level, but I do have console-level manufacturing cost estimates at launch:

Xbox ~ $ 325 Source...

PS2 ~ $ 475 (Don't have a link but, IIRC, analysts were around that figure)

That $140 quote from Pana for the Itanium 2 was for a 400+mm2 die on a 200mm wafer at low volumes. If, to get ballpark figures, we assume the cost depends on die area and wafer size with simple linear scaling, then for a 300 mm2 BE:

400 mm2 ~ $140 for 200 mm wafer,

300 mm2 ~ $ 105 for 200 mm wafer,

300 mm2 BE on 300 mm wafers should give increased yields ~ 200/300 * 105 ~ $ 70 per BE

And if Sony decides to go 45nm as the launch process for all PS3s, then the dies will be smaller than 300 mm2 and volumes will be larger.

If we say ~ $ 150 for both the BE and GPU, then that leaves ~ $ 325 from $ 475 (if we assume Sony to be around that figure for the PS3).

The next biggest component costs would be the XDR memory chips, the Blu-ray drive and the hard disk. Who knows, XDR memory at 100 GB/s could cost more than the BE! ...And how much can we squeeze in? A minimal-size hard disk should suffice, and a basic BD-ROM drive. Say another $ 250 for these; that brings us to ~ $ 400, with $ 75 remaining.

Then all the minor components, motherboard, case etc. to spread around. There does seem to be plenty of capacity at $ 475 to absorb all these costs. 8)
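The arithmetic above can be laid out explicitly. Every input is the thread's own assumption (the $140 Itanium 2 reference point, linear area scaling, the crude 200/300 wafer-diameter factor, the $475 analyst figure), so this is a yardstick, not a cost model:

```python
# Back-of-the-envelope BE costing, reproducing the post's numbers.
# Naive model: cost assumed linear in die area and inversely
# proportional to wafer diameter -- every figure here is a guess.
itanium_cost = 140.0   # $ per 400 mm2 Itanium 2 die on a 200 mm wafer
itanium_area = 400.0   # mm2
be_area = 300.0        # assumed BE die size, mm2

cost_200mm = itanium_cost * be_area / itanium_area  # scaled by die area
cost_300mm = cost_200mm * 200 / 300                 # crude 300 mm wafer credit

budget = 475.0              # analyst estimate of PS2 launch build cost
chips = 150.0               # BE + GPU, rounded up from 2 x ~$70
drives_and_memory = 250.0   # XDR + Blu-ray drive + HDD, the post's guess
remaining = budget - chips - drives_and_memory

print(f"per-BE cost on 300 mm wafers: ${cost_300mm:.0f}")
print(f"left for board, case, etc.:   ${remaining:.0f}")
```

Running it reproduces the post's ~$105 (200 mm), ~$70 (300 mm) and $75-remaining figures exactly, which mostly shows how sensitive the conclusion is to the linear-scaling assumption.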
 
Are you going to be really upset when it turns out that the PS3 isn't the "preferred embodiment" from the patent?

Just how much heat do you think a huge die like that at 4GHz would put out, and what would its power requirements be? These are just as important (if not more so) as the feasibility of making the things.
 
ERP said:
Are you going to be really upset when it turns out that the PS3 isn't the "preferred embodiment" from the patent?

Just how much heat do you think a huge die like that at 4GHz would put out, and what would its power requirements be? These are just as important (if not more so) as the feasibility of making the things.

I won't be disappointed, 'cause I'm happy with my current PS2, GCN, Xbox, GBA SP, PC, Mac and PDA and plenty of unfinished games! :p

As long as they push the envelope for what consumers get in a console, I'd be more than happy. I'm expecting the preferred embodiment, but not 4GHz. I know heat will be the biggest showstopper in a small console casing. But hey, why not innovate in the art of thermodynamics and cooling as well? Some power amps have extreme heating issues, and some designs have incorporated fins into the case to conduct heat away. I'd like to see some innovations with next-gen consoles on that front. But I'd be happy with a 1GHz BE. :)

When the official PS2 specs were released in early 1999, I was in 'awe' (whether they were realised or not didn't matter; it seemed consumers were going to get a bargain, and they did). If I'm not 'awed' in the same way when the PS3 is released, then from my POV they haven't kept the momentum for next gen. As long as they have valid reasons, I wouldn't have an issue. :D
 