PlayStation III Architecture

Having 65 nm chips would be nice for Sony, but look at what ATI did by using a proven process with the R300. By 2005, TSMC's 90 nm should be a very mature process with low-k, so Microsoft should have plenty of chips. If Sony/IBM do decide to utilize 65 nm chips, they could be opening themselves up to some problems.

The GS was at first manufactured on .25 micron technology ( if I'm not mistaken ); it was intended for a .18 micron process, and it later moved to that process... it is true that most of it is eDRAM... but still...

The XGPU, which was downgraded many times due to obvious yield problems... has about 50% more transistors... and a huge FAN... yet it was manufactured at .15 microns ( if I'm not mistaken... ).

That is more impressive than the .15 to .13 difference between the ATI and NV chips...

Sony pushes silicon to the limits knowing that better manufacturing tech in their own plants will later allow them to gain better yields...

IF 90 nm is delayed to the year 2005... and the industry switches to a 3-year cycle... 65 nm might not be available to the cheese factories until 2008...

Recall that the current GS has more bandwidth than chips that came three years later on superior manufacturing tech... recall that Sony is going for another bandwidth-heavy design...

IF the GPU portions, be it a custom Cell or a full-fledged GPU, are well featured, the bandwidth WHICH is becoming EVER more necessary will be available to feed a multi-GHz performance monster.

Let's all hope TSMC slows down the cheese-making process, and their milk rots... ;)

EDITED
 
Vince said:
Brimstone said:
Having 65 nm chips would be nice for Sony, but look at what ATI did by using a proven process with the R300.

Amazing, one foundry slip-up and all of a sudden ATI becomes the king of lithography and back-end design. :rolleyes:

Anyone else want to jump in here and tell us how ATI is also saving the rainforest?

By 2005, TSMC's 90 nm should be a very mature process with low-k, so Microsoft should have plenty of chips.

I'll be surprised if nVidia isn't beyond 90 nm in 2005. This ties into your major logical fallacy, which I'll address next ->

If Sony/IBM do decide to utilize 65 nm chips, they could be opening themselves up to some problems. What if the yields aren't good enough? I tend to think Sony will be conservative and go with a proven process rather than risk a platform that generates enormous profits for them.

<hits forehead> Ok, a console has an anticipated lifespan of 5-6 years. Within 1.5 years, the processing elements within PS3 will have already been moved to a smaller process - just like with PS2. This comment is so... wrong.

If you go with a <Dr. Evil quotation gestures> "Proven" process, then it will be designed to fit within the thermal and size budgets of that process - thus for the next 5 years your console is at a distinct disadvantage in performance, AND it will scale down linearly with the "Unproven" technology, which will more likely than not reach similar yields after the first few months. Thus, you've killed -2 birds with a stone.

It would probably be wise for them to crank out as many PS3s as possible rather than going for cutting-edge manufacturing that could cause a slowdown in production.

The first year? Sony has already won the next generation - the hype will be even bigger than last time, and even with a worst-case shortage, it will have minimal effect. This is a non-issue from any standpoint.

Sony has already won the next generation? Yeah ok... :?

The problem Sony has with Microsoft is that if Microsoft ever overtakes them, it will be hard to win back market share cheaply. Why even risk any possible snags with an advanced process when Sony still has the initiative? Does Sony even need to have a more powerful machine than the X-Box 2? In my opinion, all Sony needs is plenty of volume to capitalize on the "Sony hype". While Sony had a head start this generation, making it hard to really compare the consoles, it's still interesting to look at how Sony is dominating the sales charts versus technically better rivals.

If a smaller process works out well, the decision is seen as wise. On the other hand, if complications arise, the platform that is Sony's cash cow becomes more vulnerable to a Microsoft or Nintendo console.


<hits forehead> Ok, a console has an anticipated lifespan of 5-6 years. Within 1.5 years, the processing elements within PS3 will have already been moved to a smaller process - just like with PS2. This comment is so... wrong.

WTF are you talking about? Of course as time goes on chips get smaller to save money. Sony starts at 90 nm to assure good yields, then moves to 65 nm to save money later in the console's lifespan. This, to me, is the more plausible path for Sony to take.


When Microsoft is your competition there is no such thing as a "non-issue".
 
You are making the mistake I made, Brimstone...

The thing is that the 90 nm and 65 nm manufacturing processes have quite different characteristics as far as transistor and e-DRAM density, heat dissipation, materials ( which have an influence on clock-speed as well ), etc...

Take the Pentium 4 for example... if they had designed it with 0.13 um in mind ( not possible at that time, however ) they would have been able to deliver quite a different chip, as they could have approached the design of the chip's architecture differently... instead, things had to be left on the table and could not make it in...


To go back to our situation, there is fear that at 90 nm it would be practically impossible to deliver the CPU and GPU of PS3 ( Cell based ) as the patent described them, to choose a point of reference... cuts would have to be made: maybe less e-DRAM, fewer APUs, slower clock-speed and more... money could be taken away from R&D for other PS3 components ( backward compatibility, Blu-Ray, etc... )...

There is the fear that the only way to make PS3 mass-manufacturing feasible would be to go below 90 nm... else Cell would come out seriously crippled and unbalanced... and that would be terrible...

No die shrink or future process could help... PS3 is designed to last 5 years, not 1-2 years...

Toshiba and Sony's 65 nm process has been completed ( libraries and logic-synthesis tools ), and both companies have been learning quite a few ideas from their collaboration with IBM; they have also learned a few things on their own over the years... even from the manufacturing of PS2...

In 2 years they should be able to implement 65 nm technology in their old and new fabs...

We do not have TSMC, an external manufacturer, dropping the ball or overestimating their resources to bargain a better price... Sony and Toshiba will manufacture their chips in their own hi-tech fabs over which they have direct control...
 
What happens if I run a program written for 3.1 GHz APUs on 3.0 GHz APUs?

If your program can meet the time budget, then it will have no problem. Problems will only occur if you fail to meet the time budget.

Maybe the time budget stays the same if we think about a program written for a slower APU running on a faster APU, and widens ( if needed ) when we think about the inverse...

No, no the inverse won't work. You can't get something for nothing.
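
To put some ( completely made-up ) numbers on the 3.1 GHz vs. 3.0 GHz question, here is a minimal C sketch of the time-budget check; the slice length and cycle count are invented for illustration, not anything from the patent:

```c
#include <stdio.h>

/* All numbers are hypothetical, just to illustrate the time-budget check. */
#define SLICE_NS    1000.0   /* absolute-timer slice: 1 microsecond */
#define TASK_CYCLES 3050.0   /* cycles the apulet needs on one APU  */

/* Does a task budgeted in cycles fit the fixed slice at a given clock?
   clock_ghz is cycles per nanosecond, so needed time = cycles / clock. */
static int fits_budget(double clock_ghz)
{
    return TASK_CYCLES / clock_ghz <= SLICE_NS;
}

int main(void)
{
    printf("3.1 GHz APU: %s\n", fits_budget(3.1) ? "meets the budget" : "misses the budget");
    printf("3.0 GHz APU: %s\n", fits_budget(3.0) ? "meets the budget" : "misses the budget");
    return 0;
}
```

With these numbers the task squeaks in at 3.1 GHz ( ~984 ns ) but overruns the slice at 3.0 GHz ( ~1017 ns )... which is exactly the failure case being asked about.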
 
I agree, we cannot get everything... there is a problem then... there is no way to properly have your PS3 talk to a PDA and exchange data...

PS3 == 4 GHz

PDA == 1-2 GHz at the most...

it would not fit the time budget set for PS3's APUs... what I think is that software cells that are supposed to migrate across several networked Cell devices of possibly greatly varying clock-speeds and raw performance won't be compiled with the highest-speed Cell in mind, but more with a kind of "minimum spec"... something that would fit just fine ( not too loose, not too tight ) what the PDA/similar device's APUs could do...
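
A tiny C sketch of this "minimum spec" idea; the device list and all numbers are hypothetical:

```c
#include <stdio.h>

#define SLICE_NS 1000.0   /* fixed absolute-timer slice ( made-up value ) */

/* Clocks ( GHz ) of the Cell devices a software cell may migrate across;
   the device list is purely hypothetical. */
static const double device_clocks[] = { 4.0, 2.0, 1.0 };   /* PS3, TV, PDA */

int main(void)
{
    int n = sizeof device_clocks / sizeof device_clocks[0];

    /* "Minimum spec": budget every task against the slowest device. */
    double min_clock = device_clocks[0];
    for (int i = 1; i < n; i++)
        if (device_clocks[i] < min_clock)
            min_clock = device_clocks[i];

    /* Cycles that fit one slice on every device in the network. */
    double max_cycles = min_clock * SLICE_NS;
    printf("compile each task to at most %.0f cycles per slice\n", max_cycles);
    return 0;
}
```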
 
I agree, we cannot get everything... there is a problem then... there is no way to properly have your PS3 talk to a PDA and exchange data...

We have the real-time budget to determine if the program can run on the PDA or not.

but more with a kind of "minimum spec"... something that would fit just fine ( not too loose, not too tight ) what the PDA/similar device's APUs could do...

Since we are concerned with real-time performance here, you would want to optimise for performance, not compatibility; there is nothing to be gained by this.
 
Panajev2001a said:
You are making the mistake I made, Brimstone...

The thing is that the 90 nm and 65 nm manufacturing processes have quite different characteristics as far as transistor and e-DRAM density, heat dissipation, materials ( which have an influence on clock-speed as well ), etc...

Take the Pentium 4 for example... if they had designed it with 0.13 um in mind ( not possible at that time, however ) they would have been able to deliver quite a different chip, as they could have approached the design of the chip's architecture differently... instead, things had to be left on the table and could not make it in...


To go back to our situation, there is fear that at 90 nm it would be practically impossible to deliver the CPU and GPU of PS3 ( Cell based ) as the patent described them, to choose a point of reference... cuts would have to be made: maybe less e-DRAM, fewer APUs, slower clock-speed and more... money could be taken away from R&D for other PS3 components ( backward compatibility, Blu-Ray, etc... )...

There is the fear that the only way to make PS3 mass-manufacturing feasible would be to go below 90 nm... else Cell would come out seriously crippled and unbalanced... and that would be terrible...

No die shrink or future process could help... PS3 is designed to last 5 years, not 1-2 years...

Toshiba and Sony's 65 nm process has been completed ( libraries and logic-synthesis tools ), and both companies have been learning quite a few ideas from their collaboration with IBM; they have also learned a few things on their own over the years... even from the manufacturing of PS2...

In 2 years they should be able to implement 65 nm technology in their old and new fabs...

We do not have TSMC, an external manufacturer, dropping the ball or overestimating their resources to bargain a better price... Sony and Toshiba will manufacture their chips in their own hi-tech fabs over which they have direct control...

I do think it's very possible for them to go with 65 nm. A 500-million-transistor chip is hefty and may require that process. If they have problems, though, they'll be in a tough position.
 
Panajev2001a said:
Constant ISA and instruction set is one step; another is backward compatibility thanks to the "absolute timer" or NOOP insertion ( I prefer the absolute timer thing, as the sleep time would be a low-power "sleeping" state )...

I think the whole concept of using timers is wrong to begin with, but then... it is an old patent; I have good hope the people from IBM will have saved them from using this.
 
Brimstone said:
WTF are you talking about? Of course as time goes on chips get smaller to save money. Sony starts at 90 nm to assure good yields, then moves to 65 nm to save money later in the console's lifespan. This, to me, is the more plausible path for Sony to take.

Wow, you completely missed my point - the relevant question is: WTF are you talking about?

Yields generally stabilize quickly if your cooperation with the foundry is good - as I can only assume it would be in this case. Thus, any short-term yield problems are irrelevant, as within 6 months: (a) yields stabilize and (b) are a non-issue for the next 5 years.

Thus, in retarded speak: "You can trade short-term yields for long-term performance. If you go for short-term security, then you lose to the foe who's willing to take the short-term hit."

When Microsoft is your competition there is no such thing as a "non-issue".

Maybe in the PC market - unless they're going to buy out an entire segment of the electronics industry, which we can all talk about since it'll never happen.
 
JF_Aidan_Pryde said:
By then, according to the note, Intel will be able to deliver 10.20GHz desktop CPUs codenamed "Nehalem" and produced using 65 nanometer technology.

Impressive. With a 1200 MHz FSB - I like. What's interesting is Intel will go from 5.5 GHz at the end of 2004 to 9.6 GHz at the end of 2005. That's a big jump for one year, me = like.
 
Vince said:
Brimstone said:
WTF are you talking about? Of course as time goes on chips get smaller to save money. Sony starts at 90 nm to assure good yields, then moves to 65 nm to save money later in the console's lifespan. This, to me, is the more plausible path for Sony to take.

Wow, you completely missed my point - the relevant question is: WTF are you talking about?

Yields generally stabilize quickly if your cooperation with the foundry is good - as I can only assume it would be in this case. Thus, any short-term yield problems are irrelevant, as within 6 months: (a) yields stabilize and (b) are a non-issue for the next 5 years.

Thus, in retarded speak: "You can trade short-term yields for long-term performance. If you go for short-term security, then you lose to the foe who's willing to take the short-term hit."

When Microsoft is your competition there is no such thing as a "non-issue".

Maybe in the PC market - unless they're going to buy out an entire segment of the electronics industry, which we can all talk about since it'll never happen.

The technology advantage for Microsoft isn't hurting Sony at all right now. So why go after high-end technology that may have lower yields? Sony can launch head-to-head with Microsoft with a lesser machine and probably still outsell them. The difference between the consoles during the next generation will probably be less noticeable anyway.

It's hard to really compare the PS2 and X-Box because they were launched so far apart. Since it looks like Sony and Microsoft will be launching close to each other next console cycle, things will be different. Maybe Sony will need the technology edge to keep an X-Box 2 at bay.

My points are moot if they pull off 65 nm and have lots of PS3s on store shelves. If they end up having trouble with it and Microsoft has plenty of consoles on store shelves, they've just handed a nice gift to Microsoft.
 
Since we are concerned with real-time performance here, you would want to optimise for performance, not compatibility; there is nothing to be gained by this.

No, we are concerned with compatibility... If you read my post again you will see that I was worried about several Cell devices inter-communicating and sharing data as easily as possible... different devices, like a PS3 and a PDA...

The code designed to run on the PDA - the OS and 99% of the applications running on it - will be optimized for that processor, for that speed...

Same thing will hold true for the PS3...


The problem is the code, the apulets/software cells that travel back and forth between the PS3 and the PDA.


I have a good idea though: the two devices talking to each other could set the "communication speed" to the level both devices can talk at... basically we would have what is also known as auto-negotiation... of course we are not talking about the issue in networking terms, but a similar idea applies...

Each device would have in the OS or in the BIOS certain protocols to communicate and share data with other Cell devices...

If a device ( called A ) wants to communicate with another device ( called B ), it would send a special "probing" software cell ( feasible ) to the other device and if the APUs of the other device cannot respect the same time slice because of clock-speed or processing speed then this might happen:

The Cell device that sent the "probing" apulet would expect a reply saying that everything is OK to continue with processing, and if the probing apulet could not be processed, an error message would be sent to the original Cell device...

Device B would send a message to probe device A... something like "what's up? what did you want?"... when device A sent its probing apulet, there were one or more APUs dedicated to receiving and answering possible error messages and these "what was it you wanted" kinds of apulets...

The probe message sent by device B will be read and understood by device A, as we know now that device A was the faster of the two ( at understanding and processing those specific apulets... )... and now device A can re-formulate the requests... and communication can proceed...
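
Here is a toy C simulation of this probe / error / re-formulate exchange; device names, cycle counts and the fixed slice value are all invented for illustration:

```c
#include <stdio.h>

#define SLICE_NS 1000.0   /* fixed absolute-timer slice ( invented value ) */

struct cell_device { const char *name; double clock_ghz; };

/* Can this device process an apulet budgeted at 'cycles' inside the slice? */
static int can_process(const struct cell_device *d, double cycles)
{
    return cycles / d->clock_ghz <= SLICE_NS;
}

int main(void)
{
    struct cell_device a = { "device A ( PS3 )", 4.0 };
    struct cell_device b = { "device B ( PDA )", 1.0 };

    /* A's probing apulet is compiled for A's own speed at first. */
    double probe_cycles = 3800.0;
    if (!can_process(&b, probe_cycles)) {
        printf("%s: error, probe does not fit my slice\n", b.name);

        /* B probes back ( "what did you want?" ); A, being faster, can
           always process apulets budgeted for B's slower clock. */
        double reply_cycles = 900.0;
        if (can_process(&a, reply_cycles)) {
            /* A re-formulates its request at B's level and retries. */
            printf("%s: retry %s\n", a.name,
                   can_process(&b, reply_cycles) ? "accepted" : "refused");
        }
    }
    return 0;
}
```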
 
My points are moot if they pull off 65 nm and have lots of PS3s on store shelves. If they end up having trouble with it and Microsoft has plenty of consoles on store shelves, they've just handed a nice gift to Microsoft.

Well, considering that MS will prolly not launch the Xbox 2 first in Japan but will prolly launch it first in the U.S. when PS3 ships in the U.S., the Japanese territory will buy the PS3 at launch and wait for more; and even if they try to launch it in Japan at approximately the same time, not too much will be different, as Japan is still Sony's country... PS3 hype will be strong... very strong...

For the U.S. launch Sony should have solved any manufacturing issues, if they have any...


Still, I do not expect them to have such a bad shortage AGAIN... Sony too can learn from their mistakes... and judging from the amount of money they are putting into new fabs together with Toshiba, it seems they have been...

Plus you always have IBM to help out if you really need it...

The point is that it is possible, if they go with a process like 100 nm or 90 nm, that PS3 will be seriously crippled, and thinking "aaah, who cares, they're Sony, they have a good brand name, they can still win next generation" is really wrong, because it would be deadly to Sony and the PlayStation brand; it would be very shortsighted for Sony to do it...

Yeah, save even $100-200 million or more at PS3 launch... and start putting in people's heads the idea that the PlayStation brand has dropped the ball, that it is crippled... yeah, you might still win that generation... but you would be done the generation after that, as consumers will have less and less confidence in you... your marketing power will fall...

Spending some dollars to make sure that there is no shortage even with the 65 nm process is a winning choice, not a losing one...

If that is the only way PS3 won't be crippled... GO FOR IT...

Take the initial big loss, it will pay off later... when the 65 nm process has very high yields, the manufacturing cost of PS3 will be dramatically decreased, and you will be left with a forward-looking architecture: powerful, versatile, with lots of features that make the consumer happy to have purchased PS3 ( like I felt with PS2 )...

go with less, and maybe you will have to cut on CPU and GPU, on main RAM, on backward compatibility, on the new optical medium, on other extra features, etc...


To clarify a point I made... going with 90 nm means thinking about that process and all the features it offers when you are designing the chip, laying out the execution units and local memories, etc...

You take it into account even when you consider manufacturing costs: you think about the price when you will have high yields ( or good-enough yields ;) )... then you decide what to pack into the console and how to price the HW, what loss you can take ( maybe you will still lose some money on the HW even with better yields... then you take into account when you can have the next die shrink, the next manufacturing process, available ), etc...
 
yes there is... the apulets/software cells HAVE to be compiled for some version of Cell... with a certain minimum speed at least in mind... if a packet compiled for the 4 GHz Cell runs on a 2 GHz Cell and we keep the time slice provided by the absolute timer fixed, we could end up with the 2 GHz Cell refusing to process the data, as it would not respect the synchronization properly...

and this refused apulet could be the one the 4 GHz device was using to start communicating with the other device...

A reminder from the patent...

All apulets are written for processing on the basis of this time budget regardless of the APUs' actual processing time or speed.

If Cell device A running at 4 GHz wants to talk to and retrieve data from Cell device B at 2 GHz, it will have to send messages to it ( apulets )... those apulets, at least at the beginning ( assumption: you're going to be talking with a chip at least as fast as you ), will be code designed for device A's processing: you are going to be sending an apulet with instructions on what to do, how to reply to this apulet, etc...

If this apulet cannot be processed because it would not fit the smaller time slices the timer on device B would set, can you tell me how they exchange data?

It could be a problem because I am thinking about scenarios in which device B would have to process a multi-APU apulet and there would be need of synchronization... maybe, when we are not sharing power like in a renderfarm or something ( you'd need the right software module in the OS for that, let's say ), what will happen is that all this kind of communication code will require only 1 APU...

Still, the quote says "processing on the basis of this absolute timer", which could mean something like "when the timer fires --> expect the result from that APU(s) to be ready", and this would still create problems, as device B's single APU might not have finished the single-APU task before the timer expired...



On one side they say "apulets are not written with the APU's effective speed and power in mind"; on the other side they talk about "code that was designed for a slower generation of APUs" compared to "new code designed for a faster generation of APUs".


So is it possible, while compiling, to specify the version of the APU?

Won't this be a problem in markets outside PS3 ( we also exclude VCRs and TVs ), which is supposed to be replaced only some five years later...

Think about how many devices could be out there, each with its own clock-speed? Are you going to require the software developers to keep so many builds of the same software, re-compiling and re-distributing it whenever any new Cell chip gets shipped... just to optimize it for the right clock-speed?

Or you can follow the path I was tracing with the last post about this issue..."auto-negotiation regarding absolute timer requirements"

Each device would have in the OS or in the BIOS certain protocols to communicate and share data with other Cell devices...

If a device ( called A ) wants to communicate with another device ( called B ), it would send a special "probing" software cell ( feasible ) to the other device and if the APUs of the other device cannot respect the same time slice because of clock-speed or processing speed then this might happen:

The Cell device that sent the "probing" apulet would expect a reply saying that everything is OK to continue with processing, and if the probing apulet could not be processed, an error message would be sent to the original Cell device...

Device B would send a message to probe device A... something like "what's up? what did you want?"... when device A sent its probing apulet, there were one or more APUs dedicated to receiving and answering possible error messages and these "what was it you wanted" kinds of apulets...

The probe message sent by device B will be read and understood by device A, as we know now that device A was the faster of the two ( at understanding and processing those specific apulets... )... and now device A can re-formulate the requests... and communication can proceed...


Still, this part of the patent risks being a bit confusing...


BTW, it is still unclear how you effectively code the program based on a certain APU speed ( and remember, they said that this setting remains constant even when executing the apulet on a faster APU, so it must be embedded in the apulet itself )... especially after they state that ( again )
All apulets are written for processing on the basis of this time budget regardless of the APUs' actual processing time or speed.

Still cryptic...
 
yes there is... the apulets/software cells HAVE to be compiled for some version of Cell... with a certain minimum speed at least in mind...

They actually worried more about the number of APUs available within the PE than about the actual speed. The absolute timer will deal with the speed.

Anyway I thought you were talking about something else before.

On one side they say "apulets are not written with the APU's effective speed and power in mind"; on the other side they talk about "code that was designed for a slower generation of APUs" compared to "new code designed for a faster generation of APUs".

So is it possible, while compiling, to specify the version of the APU?

If they can, I just don't see how: they don't change the absolute timer as devices get faster. In the example they gave, it seems that as devices get faster, while the absolute timer remains the same, those faster APUs sleep to conserve power while waiting for their time budget to expire; thus the processing speed as a whole is limited by the absolute timer.
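
A minimal C sketch of that reading, with made-up numbers: the slice stays fixed, so the faster APU just sleeps out more of it:

```c
#include <stdio.h>

#define SLICE_NS    1000.0   /* slice fixed across Cell generations ( made up ) */
#define TASK_CYCLES 1800.0   /* the apulet's cycle budget ( made up ) */

/* Busy time shrinks on a faster APU; the rest of the slice is spent
   in the low-power "sleeping" state, so total time per task is constant. */
static void run_slice(const char *label, double clock_ghz)
{
    double busy_ns  = TASK_CYCLES / clock_ghz;
    double sleep_ns = SLICE_NS - busy_ns;
    printf("%s: busy %4.0f ns, sleep %4.0f ns of the %.0f ns slice\n",
           label, busy_ns, sleep_ns, SLICE_NS);
}

int main(void)
{
    run_slice("older 2 GHz APU", 2.0);   /* busy 900 ns, sleep 100 ns */
    run_slice("newer 4 GHz APU", 4.0);   /* busy 450 ns, sleep 550 ns */
    return 0;
}
```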

Think about how many devices could be out there, each with its own clock-speed?

Hmm, they could all have the same clock speed for a generation or so.

Or you can follow the path I was tracing with the last post about this issue... "auto-negotiation regarding absolute timer requirements"

You don't need to do handshakes for devices to communicate: since the pipeline is established first, before any processing, this kind of problem will be dealt with at that stage.

BTW, it is still unclear how you effectively code the program based on a certain APU speed ( and remember, they said that this setting remains constant even when executing the apulet on a faster APU, so it must be embedded in the apulet itself )... especially after they state that ( again )

My first thought, when reading about the absolute timer, was that it is only for sequential synchronisation purposes and is contained within the software cell, to be used when they establish the pipeline. But reading about it more, it seems to be fixed.

Anyway, it seems the number of APUs required is the only available info in the header, nothing regarding speed. So I guess that's the only info they need, with regard to the APUs, when transmitting software cells around.
 
The technology advantage for Microsoft isn't hurting Sony at all right now. So why go after high-end technology that may have lower yields?

If MS surpasses Sony it's meaningless, but if Sony surpasses MS' console it will be anything but meaningless...

The effects could be anything from lowering the sales of Xbox 2, to MS pulling a Saturn-esque stunt ( only this time losing billions just to do a brute-force quick upgrade ), to causing a FULL one- to two-year Xbox 2 delay... which would cause MS to take a loss for nearly a decade ( between Xbox 1 and 2 ), which might be enough to force them into pulling out of the console business entirely.
 
If they can, I just don't see how: they don't change the absolute timer as devices get faster. In the example they gave, it seems that as devices get faster, while the absolute timer remains the same, those faster APUs sleep to conserve power while waiting for their time budget to expire; thus the processing speed as a whole is limited by the absolute timer.

I agree; Figure 28 clearly shows that, and you can even give a thought to what the compiler does. If you specify that you're compiling for PS3 the code you were previously compiling for the PDA ( you would have profiles for the FAMILY of Cell device and its generation... PS4 might be known as PS3 family, generation number 2... Cell-PS3 architecture ;) ), what the compiler will do is schedule the tasks differently... let's say that in an apulet we have several tasks to be processed... if we can specify which Cell architecture we are compiling for ( no, it won't introduce backwards-compatibility issues, thanks to the absolute timer and the fact that future devices will be faster instead of slower... ), we can change the amount of operations to be performed in each task ( pre-calculating data for the next task if there are no dependency problems between the two tasks with those operations ).

In theory they would change the absolute timer, as in fact they define the absolute timer as based on a clock independent of and faster than the APUs' clock... so if next-generation Cell device C runs at 6 GHz, the absolute timer would increase if we go by their word ( I know your point and I agree, just stay with me... ;) )...

They also say, though, that as devices get faster the absolute timer stays the same...


See the strange issue here?

Well, the way I think it could be is related to the wording of the patent...

More specifically, they say that the processor presented in the document had the absolute timer set to a certain frequency ( higher than the APUs' frequency ) and that it won't change in the next iterations of Cell processors...

So my guess would be that they could have the processor presented in the patent run at a very low speed: think about a minimal Cell device with a low clock speed, and then set the absolute timer with a clock signal that is just a bit higher than the low clock speed this basic Cell device is set at.

So all Cell devices actually manufactured ( that prototype Cell device, which we derived the absolute timer from, would never be manufactured ) would have an APU processing speed that is faster, and the absolute timer would be set according to the prototype Cell device with the minimal spec configuration...
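
A tiny numeric C sketch of this guess; the prototype clock and the margin are invented:

```c
#include <stdio.h>

/* A never-manufactured "prototype" Cell with a deliberately low clock;
   the absolute timer is set just a bit above it, so every real device
   ships with APUs faster than the timer's reference. Numbers invented. */
#define PROTOTYPE_GHZ 0.5
#define TIMER_MARGIN  1.05

int main(void)
{
    double timer_ghz = PROTOTYPE_GHZ * TIMER_MARGIN;
    double shipped[] = { 1.0, 2.0, 4.0 };   /* real devices, all faster */

    printf("absolute timer reference: %.3f GHz ( fixed for good )\n", timer_ghz);
    for (int i = 0; i < 3; i++)
        printf("%.1f GHz device: %s the timer reference\n", shipped[i],
               shipped[i] > timer_ghz ? "faster than" : "slower than");
    return 0;
}
```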

Hmm, they could all have the same clock speed for a generation or so.

This is not a stupid idea... a PDA could be running at 4 GHz if realized at 65 nm and with a much lower transistor budget than a CPU like the Broadband Engine presented in the patent... but a TV would not need such a fast chip... it would be very tiny though, even smaller than the PDA chip, so who knows...

Then again, the operations the TV Cell and the PDA Cell would do, compared to PS3, would be a bit less computationally intensive, and during the longer sleep periods the Cell chip would consume very little power...




Ok, let's shift gears a bit ( always wanted to say that :D:D:D ) and go back to the first part of my reply to your message ( in this post, after the first quote )...

I agree with you that in the header of the software cells we have no information regarding the clock-speed or the absolute timer setting; we have no such fields to simply state it...

We have the number of APUs required to do the job... so if your Cell device doesn't have the needed number of APUs, the apulet will migrate... on that we agree...

What about the speed and the absolute timer? Well, we have no information related to these items in the apulet header, but we do have that data embedded in the code itself...


It is true that when we write code ( write != compile ) we do not think about the processing speed of the machine; what we worry about is the synchronization of the different APUs and of the different parts of the task we are executing, etc...

We basically synchronize on the absolute timer... "timer fires => APU is woken up => result is expected"... it sounds like multi-tasking on a modern OS, but with some of its mechanisms ( of the process/task/thread scheduler ) built in HW, or so it seems...

As I wrote at the beginning of this message, I do believe that in the code we have a way of specifying the speed of the CPU: how else could we talk about code written "for an older/slower APU" running on a "newer/faster APU"?... this also implies we can write code that gets compiled in such a way that we can define it as being "tailored for a newer/faster APU"...

What changes is the amount of operations you can do for each task...

If you know you are writing code for an 8 GHz Cell device, and we assume the time slice given by the absolute timer has remained the same since the first Cell ( "the time slice given by the absolute timer will not change" ), then we will know ( and the compiler will too ) that we can fit more work into each time slice offered by the absolute timer, so that we can reach a higher level of utilization of our HW ( we are not interested in power consumption, in this example, but in performance ).
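
A small C sketch of this, with hypothetical compile profiles ( family/generation -> clock ); the slice value is made up:

```c
#include <stdio.h>

#define SLICE_NS 1000.0   /* "the time slice ... will not change" */

/* Hypothetical compile profiles: Cell family/generation -> clock in GHz. */
struct profile { const char *target; double clock_ghz; };

static const struct profile profiles[] = {
    { "PDA-Cell  gen 1", 1.0 },
    { "PS3-Cell  gen 1", 4.0 },
    { "PS3-Cell  gen 2", 8.0 },   /* the "PS4" naming joked about above */
};

int main(void)
{
    int n = sizeof profiles / sizeof profiles[0];
    for (int i = 0; i < n; i++) {
        /* A faster compile target lets the compiler pack more work
           ( more cycles' worth of operations ) into the same fixed slice. */
        double budget = profiles[i].clock_ghz * SLICE_NS;
        printf("%s -> %5.0f cycles of work per slice\n",
               profiles[i].target, budget);
    }
    return 0;
}
```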

How about the code that would be taking care of communication and data sharing between our beloved 2 GHz device A and 1 GHz device B?

Well, that code IMHO ( and all the software cells that are sent to device B ) would use apulets written with a certain basic spec in mind that fits both device A's and device B's ( internal ) time slices...

All, unless device B is recognized to be of the same speed as device A... in that case ( think about a renderfarm... ) we want both devices to run the same code ( tailored for 2 GHz )...

How could this be done? Well, the Cell OS could be modular, supporting several modules with each module having a different purpose ( let's talk about this in the realm of communicating and sharing data with other Cell devices ):

One module would be used to communicate with the PDA-Cell family, generation 1 ( the issue of implementing newer modules and phasing out the ones that are obsolete can be dealt with at another time )... this will also fit generations 2, 3, etc...

( //disclaimer: no, I am not assuming the first generation of Cell devices
//will share the same clock-speed for the purpose of the point
//I am arguing now )


Another module would be used to talk with the HDTV-Cell family, generation 1 ( different speed compared to the PDA-Cell family, generation 1 )

...

and so on...

Naturally you would have the nice and lean module that would allow the device to communicate with other devices in the same Cell family ( the OS would have it always compiled into the kernel... the other modules could be called upon request and loaded into memory )...
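
A toy C sketch of such a modular lookup; every family name and function here is hypothetical:

```c
#include <stdio.h>
#include <string.h>

/* One OS "module" per Cell family it knows how to talk to; the names
   follow the PDA/HDTV examples above and are purely illustrative. */
struct comm_module {
    const char *family;
    void (*talk)(const char *payload);
};

static void talk_pda(const char *p)    { printf("PDA-Cell module: %s\n", p); }
static void talk_hdtv(const char *p)   { printf("HDTV-Cell module: %s\n", p); }
static void talk_native(const char *p) { printf("same-family module: %s\n", p); }

static const struct comm_module modules[] = {
    { "PDA-Cell",  talk_pda    },
    { "HDTV-Cell", talk_hdtv   },
    { "PS3-Cell",  talk_native },   /* always compiled into the kernel */
};

/* Look up the module for the peer's family ( loaded on request in the
   real thing; here they are all just statically present ). */
static void communicate(const char *family, const char *payload)
{
    int n = sizeof modules / sizeof modules[0];
    for (int i = 0; i < n; i++)
        if (strcmp(modules[i].family, family) == 0) {
            modules[i].talk(payload);
            return;
        }
    printf("no module for %s, cannot talk\n", family);
}

int main(void)
{
    communicate("PDA-Cell", "sync the contact list");
    communicate("PS3-Cell", "share a render job");
    return 0;
}
```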

I apologize for the "stream of consciousness" type of post... I will try to be more organized next time :)
 
More specifically, they say that the processor presented in the document had the absolute timer set to a certain frequency ( higher than the APUs' frequency ) and that it won't change in the next iterations of Cell processors...

So my guess would be that they could have the processor presented in the patent run at a very low speed: think about a minimal Cell device with a low clock speed, and then set the absolute timer with a clock signal that is just a bit higher than the low clock speed this basic Cell device is set at.

So all Cell devices actually manufactured ( that prototype Cell device, which we derived the absolute timer from, would never be manufactured ) would have an APU processing speed that is faster, and the absolute timer would be set according to the prototype Cell device with the minimal spec configuration...

How do you utilise the performance then?

but a TV would not need such a fast chip... it would be very tiny though, even smaller than the PDA chip, so who knows...

I read an article in EETimes magazine yonks ago; it was a Toshiba engineer talking about the TV of the future. One of the points I remember was that in sports like football, they wanted to let the viewer replay a goal from different angles, and one way to do this is to recreate the scene in 3D and let the viewer manipulate it in real time. He estimated that it requires something around 8 billion polygons/s to get something somewhat realistic.

So why you think a TV would not need such a fast chip is beyond me.

Remember, high-end TVs have a lot of margin; they can include state-of-the-art chips in there.


How could this be done? Well, the Cell OS could be modular, supporting several modules with each module having a different purpose ( let's talk about this in the realm of communicating and sharing data with other Cell devices ):

Is Linux good enough for that?
 