PlayStation III Architecture

I've been enjoying this long thread since the beginning, and as some news has surfaced, I'd like to point it out:

http://forums.xengamers.com/showthread.php?s=b0e6ca8fae2dbb4b61b3f7df40ed3278&threadid=68825

Following Nintendo president Satoru Iwata's revelation last week regarding plans to develop a successor to the Nintendo GameCube, Sony has begun to divulge new details regarding the PlayStation 3. During a recent semiconductor conference held in Tokyo, Sony Computer Entertainment chief technology officer Kenshi Manabe addressed the topic of Sony's next console. Therein, Manabe-san revealed that Sony might deliver the PlayStation 3 slightly ahead of the company's original schedule. Believing Microsoft plans to launch the successor to the Xbox in late 2005, Manabe-san said Sony will be forced to respond with the release of its next console.


Did he just talk about delivering the PS3 before Xbox2? At a semiconductor conference?
 
http://www.eetimes.com/semi/news/OEG20030129S0049

Yes, it appears others agree with my predictions... the PC GPU market shall sloweth...

If this indeed takes place, it will be interesting to see how it turns out, since the PS3 will be roughly a ten-fold jump in technology over what the PS2 was, and this time the others will be slower... If this happens, it will be nigh impossible to eclipse the PS3 if you come within one year ( before or after... ) of its release.
 
Yeah, it would help to get some ideas from archie or Faf on this whole thing... there was an article on gamasutra.com about the possibility of character meshes of around 800,000 polygons, and the whole issue of memory speed not coping with the sheer T&L speed we are going to have in the future...

With 800,000 polygons per character, wouldn't that be enough for FF:TSW-level detail? At least character-wise; I am aware that the movie renders the same frame multiple times to achieve some effects, but what the hell, that's a whole lot of polys for just one character...
 
"chip makers wil be forced to shift to new technology ever 3 years rather than 2 years as many have done since 1990s. Technologists at TSMC-aka-the cheese factory- are among those that have called for slow transitions- 90nm."-ps3 will be 45-65nm tech.


This is the part that makes me think... sure, nVIDIA and ATI are experienced and all in 3D, and their push to programmable architectures while still accelerating many functions in HW might buy them speed, but being behind in manufacturing technology is a big disadvantage...

If by 2005, nVIDIA reaches 90 nm ( or lower... maybe they will sub-contract AMD or Intel to manufacture their chips, that might happen, but it will cost them... ) and PS3 ships with 65 nm... PS3 has an advantage... smaller chips, better power consumption, higher transistor density...

nVIDIA's partners have barely implemented 0.13u ( TSMC ), while Sony's partners are already working at 90 nm and have the libraries for the 65 nm process ready...
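
Just to put rough numbers on that density argument, here is a quick back-of-the-envelope sketch ( my own toy figures, assuming ideal full-node scaling where everything shrinks with the drawn feature size, which real processes never quite manage ):

[code]
/* Back-of-the-envelope: with ideal scaling, die area goes with the square of the
 * node ratio. Real processes shrink less than this, so treat it as an upper bound. */
#include <stdio.h>

int main(void)
{
    const double ref_nm     = 130.0;                  /* 0.13u as the baseline          */
    const double nodes_nm[] = { 90.0, 65.0, 45.0 };   /* nodes discussed in the thread  */

    for (int i = 0; i < 3; i++) {
        double linear  = nodes_nm[i] / ref_nm;        /* linear shrink factor           */
        double area    = linear * linear;             /* relative die area              */
        double density = 1.0 / area;                  /* relative transistor density    */
        printf("%2.0f nm vs 130 nm: ~%.1fx density, ~%.0f%% of the die area (ideal)\n",
               nodes_nm[i], density, area * 100.0);
    }
    return 0;
}
[/code]

On paper a 65 nm part packs roughly double the transistors of a 90 nm part in the same area ( or the same logic in about half the area ), which is where the cost and power-consumption advantage would come from... real shrinks give less than that, of course.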
 
Just in case...

The data in the sig is abbreviated (aka a small summary of a portion of the article) with added info (not everyone knows that TMSC stands for "The most supped-up CHEESE"). So take that into consideration.
 
Panajev2001a said:
This is the part that makes me think... sure, nVIDIA and ATI are experienced and all in 3D, and their push to programmable architectures while still accelerating many functions in HW might buy them speed, but being behind in manufacturing technology is a big disadvantage...

Don't people ever look at the main B3D page? 90nm for R500 http://www.beyond3d.com/index.php#news3945


I anticipate NV35 in 2H 2003, NV40 in mid-2004, and NV45 in late 2004 or early 2005, with an NV4x in Xbox Next. I feel this is optimistic.
 
Vince... sorry I missed that news story :p Chillllllllllllll you hot-tempered Italian ;) ( look who's talking :LOL: )


ATI announced today that they have reached a "Broad Reaching Partnership" with Cadence for their silicon design tools. Predictably, the deal looks towards the next design challenges on the 90nm process.

"We have tremendous respect for ATI and look forward to helping them solve their toughest design challenges at 90 nanometers and below," said Penny Herscher, executive vice president and chief marketing officer at Cadence. "Complex graphics and digital media chip designs demand the best technology solutions, and it's rewarding that ATI chose Cadence as its partner, for our technology leadership across the board and, in particular, the industry-leading Encounter platform. The goal of this expanded agreement is a Cadence-centric flow that meets the majority of ATI's design needs today, and that we collaborate to meet their needs in the future."

While ATI are known to be working on the 130nm process for the upcoming RV350 chip, they will not utilise the 130nm process in a high-end part until R400, which is scheduled for the latter half of 2003. This being the case, it's likely that R500, which is probably being developed by the same team that produced R300 (Radeon 9500/9700), will be targeted at the 90nm process being discussed here and is likely due for release within 18 to 24 months.

24 Months == 2 years == 2005... in that year they would ship a 90 nm part which is not bad...


I am not dissing ATI, I am just thinking that in the year 2005 Sony could come up with 65 nm parts, having a technology advantage over ATI and nVIDIA, which could mean lower losses for PS3 production ( at the beginning they will bleed a bit, like any big console manufacturer does, selling a bit below manufacturing cost )...

And after that, when the others are marching towards sub-90 nm parts, Sony could march towards 45 nm parts...
 
I hope none of these companies rush incomplete products out the door just to be first to market and grab the initial customers.

I think Microsoft, Sony and Nintendo are all looking to put a console on the market in 2005.

The first company to market will get a good number of customers (being the new in-thing), but if the product is mediocre, the other companies with better products will take over the lead within 6 months.

I don't think we'll see a year-long gap between next-gen console releases like the PS2 had.

Hopefully they all have solid libraries and complete systems. By complete I mean very, very little need for any add-ons to fill those roles: broadband adapter, DVD player, wireless controllers and/or remotes, and built-in storage (hard drive) all included.

Speng.
 
ATI has shown they're very good at cramming a lot of logic transistors into a chip and pushing the boundaries of a process.
 
While this is an interesting concept, what I am wondering is: what happens if I run a program written for an APU with speed X on an APU of speed Y...?

X > Y

Which is the inverse of the situation the patent illustrates...

I don't think they expect APUs to slow down in the future. And they expect you to replace the PS3 with a PS4 and so on.
 
V3... what I am thinking about is how code designed to run on a device like PS3 ( 4 GHz ) is supposed to run on a slower device ( in terms of clock speed )...

The problem would be that a single software Cell might lose synchronization, since it is designed to coordinate, say, three 4 GHz APUs and it is now running on a Cell device with 3 APUs free, but running at 2 GHz...

How could a software Cell migrate across the network if it can only be processed by machines with AT LEAST the same processing speed as the machine it was compiled for... not 1 cycle less...

It was clear, from the way they presented it, that they were thinking about backward compatibility, and I knew that forward compatibility would seem a strange idea... but since inter-operation between devices of different configurations and clock speeds ( but the same ISA and instruction set :) ) is quite fundamental to a healthy spread of the Cell technology, and it was hinted at quite a bit in the patent, I decided to give it some thought...


And I concluded that the "absolute timer" would still make things work...

The total result would come later since the clock speed is lower, but it would come out correct, and synchronization between the APUs working on that same software cell would still hold, as they are all downclocked by the same factor ( and the time slice given by the absolute timer might be different... maybe wider )...

However, I am not completely sure about this... the patent is a bit mysterious here...


Let's go over it ONE more time shall we ?

[0140] As shown in this figure, the absolute timer establishes a time budget for the performance of tasks by the APUs. This time budget provides a time for completing these tasks which is longer than that necessary for the APUs' processing of the tasks. As a result, for each task, there is, within the time budget, a busy period and a standby period. All apulets are written for processing on the basis of this time budget regardless of the APUs' actual processing time or speed.

All software Cells are written without specific attention to the APU's actual processing time or speed...

and it also says that the time budget is longer than the time needed by the APUs ( which are processing a certain software cell/object ) to complete the given task.

But then it says the following:

[0143] In the future, the speed of processing by the APUs will become faster. The time budget established by the absolute timer, however, will remain the same. For example, as shown in FIG. 28, an APU in the future will execute a task in a shorter period and, therefore, will have a longer standby period. Busy period 2808, therefore, is shorter than busy period 2802, and standby period 2810 is longer than standby period 2806. However, since programs are written for processing on the basis of the same time budget established by the absolute timer, coordination of the results of processing among the APUs is maintained. As a result, faster APUs can process programs written for slower APUs without causing conflicts in the times at which the results of this processing are expected.

What happens if I run a program written for 3.1 GHz APUs on 3.0 GHz APUs ?

Maybe the time budget stays the same when we think about a program written for a slower APU running on a faster APU, and widens ( if needed ) when we think about the inverse...

Can you see the dilemma here ?
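
To make the dilemma concrete, here is a toy sketch of how I read the time-budget mechanism ( the clock figures, the budget and the helper names are all mine, not from the patent ): the budget has some headroom over the busy period, so a slightly slower APU can still squeeze inside it, but a much slower device overruns it and the coordination the patent talks about falls apart...

[code]
/* Toy model of the absolute-timer time budget ( my own numbers, not the patent's ).
 * The busy period scales inversely with the APU clock; the budget stays fixed. */
#include <stdio.h>

#define TIME_BUDGET_US  100.0   /* fixed time budget set by the absolute timer  */
#define REF_CLOCK_GHZ     3.1   /* APU clock the apulet was written/sized for   */
#define REF_BUSY_US      80.0   /* busy period on that reference APU            */

static void run_task(double apu_clock_ghz)
{
    double busy    = REF_BUSY_US * (REF_CLOCK_GHZ / apu_clock_ghz);
    double standby = TIME_BUDGET_US - busy;

    if (standby >= 0.0)
        printf("%.1f GHz APU: busy %6.1f us, standby %5.1f us -> stays in sync\n",
               apu_clock_ghz, busy, standby);
    else
        printf("%.1f GHz APU: busy %6.1f us blows the %.0f us budget -> results late, sync lost\n",
               apu_clock_ghz, busy, TIME_BUDGET_US);
}

int main(void)
{
    run_task(4.0);  /* faster APU: shorter busy period, longer standby ( the patent's case ) */
    run_task(3.1);  /* the reference APU                                                     */
    run_task(3.0);  /* slightly slower: the headroom in the budget still absorbs it          */
    run_task(2.0);  /* much slower device: the budget is overrun                             */
    return 0;
}
[/code]

So a 3.1 GHz apulet might survive on 3.0 GHz APUs purely thanks to the slack in the budget, but the same apulet on a 2 GHz device misses its slot... unless the budget is widened, which is exactly the "maybe it widens in the inverse case" idea above.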
 
Saem said:
ATI has shown they're very good at cramming a lot of logic transistors into a chip and pushing the boundaries of a process.

Whoa? I think that's a design choice; I hardly think it's a case of ATI being especially good, or superior to anyone else, at getting a high transistor count/density out of a given process.

Just as ATI did it, so could nVidia. In the same way, ATI could have gone 130nm, but conservatively [and wisely] chose not to. ATI is getting way too much credit for that... If the transition to TSMC's Nexsys process goes smoothly, the tables will turn drastically again - especially if the NV40 is as much of a departure as I hope and believe, due to MS's forward-looking wants.
 
Some problems have no satisfactory answer; the only truly architecture-independent representation of programs is in our heads... until the AIs are smart enough to take over programming from us.
 
Mfa, I will still try to get some answers about this...

I am trying to think about how they will manage smooth inter-operation of different Cell-based devices, like a PDA ( 1 GHz ? ) with a PS3 ( 4 GHz ? ), etc...

A constant ISA and instruction set is one step; another is backward compatibility thanks to the "absolute timer" or NOP insertion ( I prefer the absolute timer approach, as the standby time would be a low-power "sleeping" state )...

What I expect could be a cheap solution would be special "inter-device communication" messages ( software Cells designed ONLY to transfer data between two devices that might be very different from each other [you could still code an application for a render farm of Cell processors that all share the same processing speed, where all the software cells used would share the same time slice that each APU in each Cell processor gets] ) which assume a very long time slice, long enough to satisfy even a 200 MHz Cell chip whose APUs have only one FP unit and one integer unit ( the slowest APUs allowed to be manufactured and labelled as Cell compliant )... something like the sketch below.
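
A minimal sketch of what I mean by sizing that time slice for the slowest allowed device ( the function name and the 25% slack factor are mine; the 200 MHz floor is just the hypothetical minimum configuration mentioned above ):

[code]
/* Size an inter-device time slice for the slowest Cell-compliant device, so any
 * machine on the network can finish the work inside the budget. */
#include <stdio.h>

#define SLOWEST_COMPLIANT_GHZ  0.2   /* 200 MHz, one FP unit + one integer unit per APU */
#define REF_CLOCK_GHZ          4.0   /* clock the work estimate is expressed in         */

/* Given the busy time a task needs on a 4 GHz APU, return a time slice wide enough
 * for even the slowest compliant device, plus some slack. */
static double interdevice_time_slice_us(double busy_at_ref_us)
{
    double worst_case = busy_at_ref_us * (REF_CLOCK_GHZ / SLOWEST_COMPLIANT_GHZ);
    return worst_case * 1.25;
}

int main(void)
{
    double busy_us = 50.0;  /* say the transfer/unpack work needs 50 us at 4 GHz */
    printf("time slice for an inter-device software cell: %.0f us\n",
           interdevice_time_slice_us(busy_us));
    return 0;
}
[/code]

The obvious cost is that a fast device holding one of these cells spends most of the slice in standby, but for pure inter-device messages that seems acceptable to me.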
 
Panajev2001a said:
Vince... sorry I missed that news story :p Chillllllllllllll you hot-tempered Italian ;) ( look who's talking :LOL: )


ATI announced today that they have reached a "Broad Reaching Partnership" with Cadence for their silicon design tools. Predictably, the deal looks towards the next design challenges on the 90nm process.

"We have tremendous respect for ATI and look forward to helping them solve their toughest design challenges at 90 nanometers and below," said Penny Herscher, executive vice president and chief marketing officer at Cadence. "Complex graphics and digital media chip designs demand the best technology solutions, and it's rewarding that ATI chose Cadence as its partner, for our technology leadership across the board and, in particular, the industry-leading Encounter platform. The goal of this expanded agreement is a Cadence-centric flow that meets the majority of ATI's design needs today, and that we collaborate to meet their needs in the future."

While ATI are known to be working on the 130nm process for the upcoming RV350 chip, they will not utilise the 130nm process in a high-end part until R400, which is scheduled for the latter half of 2003. This being the case, it's likely that R500, which is probably being developed by the same team that produced R300 (Radeon 9500/9700), will be targeted at the 90nm process being discussed here and is likely due for release within 18 to 24 months.

24 Months == 2 years == 2005... in that year they would ship a 90 nm part which is not bad...


I am not dissing ATI, I am just thinking that in the year 2005 Sony could come up with 65 nm parts, having a technology advantage over ATI and nVIDIA, which could mean lower losses for PS3 production ( at the beginning they will bleed a bit, like any big console manufacturer does, selling a bit below manufacturing cost )...

And after that, when the others are marching towards sub-90 nm parts, Sony could march towards 45 nm parts...

Having 65 nm chips would be nice for Sony, but look at what ATI did using a proven process with R300. In 2005 90 nm for TSMC should be a very mature process with Low-K, so Microsoft should have plenty of chips. If Sony/IBM do decide to utilize 65 nm chips, they could be opening themselves up to some problems. What if the yields aren't good enough? I tend to think Sony will be conservative and go with a proven process rather than risk a platform that generates enormous profits for them. It would probably be wise for them to crank out as many PS3s as possible rather than going for cutting-edge manufacturing that could cause a slowdown in production.

The PS2 is beating the Xbox today despite the more advanced technology in the Xbox. Sony will sell as many PS3s as they can make for a long time. If Microsoft makes a more powerful console, it won't really matter. The hype Sony will have initially will be great; they just need consoles to fill the demand. They can always go for a die shrink later to reduce costs, but I think the number of consoles produced in the first year matters more.
 
Vince-
I don't mean to sound like a smartass, but it's plenty obvious that neither nVidia nor anyone else suspected that ATI could pack 107 million transistors on a .15u process and clock it at 325 MHz with passive cooling. nVidia was obviously not prepared for such a scenario because they didn't think it could be done.
 
Steve Dave Part Deux said:
I don't mean to sound like a smartass, but it's plenty obvious that neither nVidia nor anyone else suspected that ATI could pack 107 million transistors on a .15u process and clock it at 325 MHz with passive cooling. nVidia was obviously not prepared for such a scenario because they didn't think it could be done.

I don't mean to be a smart ass, but biology hinders me.

nVidia was obviously screwed over by TSMC and the immaturity of their 130nm process. This isn't conjecture, it's fact. IIRC, they got screwed not only on TSMC's ramp time, but on the Low-K dielectrics as well.

nVidia was doing what they always do, which is push bleeding-edge lithography and use it to their advantage. It's worked flawlessly to this point, and if TSMC hadn't dropped the ball, you wouldn't even dream of posting this.

ATI is more conservative, like 3dfx notably was, and it finally paid off. Although, historically - and I'm sure it will remain so going forward - it's a hindrance.

It's not that nVidia can't, or didn't believe it was possible, or that ATI is some Allah reincarnate that can make miracles of silicon(e) happen.

nVidia was designing with 130nm in mind when the design team sat down 2 years ago and it was the most attractive option. By the time it was evident that TSMC would become a problem, it was too late.

I find your argument quite lacking.

PS. Making miracles of silicone would be the DOA team... my fault
 
Vince said:
Steve Dave Part Deux said:
I don't mean to sound like a smartass, but it's plenty obvious that neither nVidia nor anyone else suspected that ATI could pack 107 million transistors on a .15u process and clock it at 325 MHz with passive cooling. nVidia was obviously not prepared for such a scenario because they didn't think it could be done.

I don't mean to be a smart ass, but biology hinders me.

nVidia was obviously screwed over by TSMC and the immaturity of their 130nm process. This isn't conjecture, it's fact. IIRC, they got screwed not only on TSMC's ramp time, but on the Low-K dielectrics as well.

nVidia was doing what they always do, which is push bleeding-edge lithography and use it to their advantage. It's worked flawlessly to this point, and if TSMC hadn't dropped the ball, you wouldn't even dream of posting this.

ATI is more conservative, like 3dfx notably was, and it finally paid off. Although, historically - and I'm sure it will remain so going forward - it's a hindrance.

It's not that nVidia can't, or didn't believe it was possible, or that ATI is some Allah reincarnate that can make miracles of silicon(e) happen.

nVidia was designing with 130nm in mind when the design team sat down 2 years ago and it was the most attractive option. By the time it was evident that TSMC would become a problem, it was too late.

I find your argument quite lacking.

PS. Making miracles of silicone would be the DOA team... my fault


Exactly... up there in the Olympus of chip engineers and whoever else, they all know what they can and cannot do in any given timeframe, given the technology of the time.

It's not that the engineers at ATI (or nVidia, or Toshiba-IBM-Sony) are more intelligent or just better than the competition at "cramming transistors into a piece of silicon"...

They just have to work around the limitations of the time, and that is the technology available and MONEY MONEY MONEY...
 
Brimstone said:
Having 65 nm chips would be nice for Sony, but look at what ATI did using a proven process with R300.

Amazing, one foundry slip-up and all of a sudden ATI becomes the king of lithography and back-end design. :rolleyes:

Anyone else want to jump in here and tell us how ATI is also saving the rainforest?

In 2005 90 nm for TSMC should be a very mature process with Low-K, so Microsoft should have plenty of chips.

I'll be surprised if nVidia isn't beyond 90nm in 2005. This ties into your major logical fallacy, which I'll address next ->

If Sony/IBM do decide to utilize 65 nm chips, they could be opening themselves up to some problems. What if the yields aren't good enough? I tend to think Sony will be conservative and go with a proven process rather than risk a platform that generates enormous profits for them.

<hits forehead> Ok, a console has an anticipated lifespan of 5-6 years. Within 1.5 years, the processing elements within PS3 will have already been moved to a smaller process - just like with PS2. This comment is so... wrong.

If you go with a <Dr. Evil quotation gestures> "Proven" process, then it will be designed to fit within the thermal and size budgets of that process - thus for the next 5 years your console is at a distinct disadvantage in performance, AND it will scale down linearly with the "Unproven" technology and more likely than not get similar yields after the first few months. Thus, you've killed -2 birds with one stone.

It would probably be wise for them to crank out as many PS3s as possible rather than going for cutting-edge manufacturing that could cause a slowdown in production.

The first year? Sony has already won the next generation - the hype will be even bigger than last time, and even with a worst-case shortage, it will have minimal effect. This is a non-issue from any standpoint.
 
I have to admit that when I first thought about not using 65 nm, I was thinking purely of the old argument that yields are generally higher with a "proven" process than with a brand-new process that has barely been implemented in the fabs...

I was not thinking as a chip designer, only as a manufacturer ( and not a very forward-looking one )...

I was not thinking about the fact that going with something coarser than 90 nm or 65 nm ( I was thinking of 100 nm ) would mean designing the chip around those stricter constraints and having to cut certain features...

Designing the processor around 65 nm instead would help, since we could afford certain design choices that 90 nm or 100 nm would not allow...
 