Look at this Google-cached (pulled down) PlayStation 3 page

One thing at a time ( Vince is correct in saying that SCE didn't just take Blue Gene's Cellular architecture and paste SIMD engines onto it, although we know that IBM did take more than some inspiration from their own R&D labs and what they have been working on for the past 10 years or so... clearly, at least some of the ideas that arose during the development of the Cellular Architecture used in the various Blue Gene projects also made their way into what Sony, IBM and Toshiba defined as the Cell architecture )...

PlayStation 2 and Saturn...

Let's see... the second SH-2 with 2 KB of cache and 2 KB of Work RAM, while VU1 has 16 KB of Instruction Memory and 16 KB of Data Memory plus a direct path to the GIF and the VU0 without using the FSB and messing with memory transfers...

The custom DSP, for matrix operations, was "intelligently" put inside the DMA controller... this DSP had local registers, but not much in the way of local memory to buffer data before having to send it to RAM, where the VDPs or one of the SH-2s could pick it up...

Do we need the DSP to work on some data from memory? OK, take a nap, SH-2s, because you won't have work to do for a while ( doubling the clock frequency of that DSP == harder than adding a second SH-2... ah, let's add another SH-2... )... oh, does the master SH-2 want to update memory so that the other SH-2 can read the data ( basic data sharing between the SH-2s )? Perfect, the DSP and the slave SH-2 have to sit on their butt cheeks now...

Oh yeah... I forgot with that 2 KB of Work RAM and that unified 2 KB L1 cache the slave SH-2 must have "some" mileage...

After all, CELL is an IBM baby and SCEI is a paying customer, much like EE was a Toshiba baby and SCEI was the paying customer.

Wrong... you are assuming that SCE just paid and Toshiba delivered the processor... bullshit, the VUs' implementation did come from Toshiba, but the rest involved heavy collaboration between SCE and Toshiba engineers.

You treat SCE as if they had no real engineers... well, you are wrong, but your dissing SCE is nothing new.

IBM had the lead in designing Cell, but Toshiba ( Japan's #1 semiconductor manufacturer ) and Sony ( who are climbing up the ladder of Japan's semiconductor producers ) did help define what the architecture was going to be...

See, you are inconsistent... first Kutaragi made Cell the sucky thing Deadmeat thinks it is, then SCE is only a paying customer...

Both are designed on the same architectural principle, a distributed processing architecture built around a powerful DMA engine shifting data around.

And a Yugo and a BMW are both built around an engine, 4 tires and a steering wheel...

To call the SCU a powerful DMA engine efficiently shifting data around is a bit of an overstatement...

It's like if I took an NV2A-class GPU ( no local VRAM ) and a Pentium III and used a UMA approach with PC133 SDRAM... but... but the Xbox was built around the same scheme, so it must suck too...

Do you realize that to pass data to the SCU's DSP or to the slave SH-2 you have to stall the VDPs ( in the case that they are waiting for data coming from main RAM ) ?

On second thought, I agree with your statement and I will correct it a bit...

Both are designed on the same architectural principle, a distributed processing architecture built around a powerful DMA engine shifting data around: one was designed intelligently ( PlayStation 2 ) and the other was put together at basically the last minute ( Saturn ).
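( As a side note, the "DMA engine shifting data around" idea boils down to double-buffering: while the DMA unit streams the next chunk of data into local memory, the processor crunches the current one. Below is a minimal Python sketch of that pattern; the fetch/compute callables are made-up stand-ins for the DMA transfer and the VU/DSP work, not anything from a real SDK. )

from concurrent.futures import ThreadPoolExecutor

def process_stream(n_chunks, fetch, compute):
    # Double-buffered pipeline: the executor plays the role of the DMA engine,
    # fetching chunk i+1 while the "processor" works on chunk i.
    results = []
    with ThreadPoolExecutor(max_workers=1) as dma:
        pending = dma.submit(fetch, 0)                 # prime the first transfer
        for i in range(n_chunks):
            data = pending.result()                    # wait for the current chunk
            if i + 1 < n_chunks:
                pending = dma.submit(fetch, i + 1)     # kick off the next transfer
            results.append(compute(data))              # compute overlaps the transfer
    return results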
 
Will IBM dump its X86 and POWER servers in favor of CELL servers???

No, but they can add another line for render-farms, server-farms and all handy-dandy GRID jobs they need to do... if that can generate money in services they will do it... if to make more money in services they have to design a new CPU and market it, so be it...
 
I REALLY don't know why some people get so worked up over something we know very little about and that won't come out for at least another 2 years....

all we know is that Cell will be a HUGE thing for Sony and MAYBE other manufacturers (after all, there's nothing to stop Sony licensing the thing to other people, given the right amount of money).

Deadmeat is just the usual fanboy making comments on what he obviously knows very little about. I mean, the Saturn-PS2 comparison was all nice and funny at PS2 launch, but comparing Saturn to PS3 is just abnormally psychotic.
Saturn was just a rush job by a COMPLETELY different company that ended up broke and not in the hardware market at all.
PS2 and PS3 are from a company that seems to do pretty much everything right whatever they do, be it because of a magic chip inside the PlayStation hardware telling us "playstation is cool" or whatever you wanna call it. The situation is VERY different. Like comparing apples to monkeys.
 
Evil_Cloud said:
Paul said:
This guy has many of your same views.

However, his polygon performance bit seems a bit odd. 1G polys? That's a bit low, incredibly low for a 1 TFLOPS machine.

"But PS3 isn't going to do 75 million triangles per second. Oh no. It's going to do a lot more than that. I'm going to stick my neck out and say that PS3 will, at peak, be capable of 1 billion triangles per second"

No?

Well, this guy better stick his neck out a lot more, maybe 10-20 times more, for the raw polygon count.
 
Re: ...

DeadmeatGA said:
Ouch, developers' worst nightmare confirmed. Sony provides no OS/compiler-level parallelism abstraction and IBM has no magic technology that will make the multithreading (or should we say micro-process piping) headache go away.

And this was confirmed where again?

Typical Japanese hardware, the pursuit of absolute theoretical performance with zero regard for developer convenience. The legacy of Saturn lives on with PSX3... I feel sorry for Fafalada and all the developers who will be working on this beast.

Which console was it that had developers complaining about not being allowed to "touch" the metal?
PSone had libraries from the very beginning, to make it especially easy to develop for.

Cramming more processors into a single die is not the solution to the performance problem. DirectX works because it makes parallel shaders largely invisible. Maybe MS will be getting its big break with Xbox2 after all.

It's the only way to use the die area to its full potential.
Anyway, the basic idea behind this kind of architecture is "very" old. One of the first examples is the Illiac IV, from the mid-sixties, for which the first sketches were made in the fifties (when tubes were still used!).

So parallelism works if MS is working on it? Is it unfeasible to write DX-style APIs for other parallel architectures?
It is feasible, but Sony doesn't have the experience to do it.

And Mediocresoft has lots of experience?
Sony has been working with realtime 3d since (IIRC) 1984.
 
...

To Phil

Seriously, how can you argue CELL if you don't even see what the architecture/concept could do beyond just being a pretty CPU for the next PlayStation?
You tell me what it will do besides playing games, I just don't see it. Will CELL-powered DTVs automatically perform 16X AA to enhance TV image to almost film level?

why Sony is willing to invest that much money into a single concept/architecture?
Because SCEI is Sony's profit generator and it is in Sony Group's interest to maintain their current market dominance. Not that CELL is going to be useful for anything else besides gaming.

Then, think about Toshiba and IBM who are investing similar amounts...
IBM is merely a contractor delivering a chip ordered by its customer (SCEI). CELL's success or failure doesn't really affect IBM that much; CELL has no place in IBM's product portfolio. (CELL workstation? CELL mainframe? Forget it.) As for Toshiba, they had to get involved or lose the business they already had with SCEI; they had no choice.

To Vince

You really have no clue what you're talking about, it's sad. I'm going to recommend that you read up on Ando and his vision for Sony Group.
I am able to distinguish fantasy from reality, what is possible in the real world and what is just some executive's dream that will go unfulfilled.

The Japanese media giant is keen to develop the potential of the TV as a gadget to deliver its vast library of music, movies and TV shows to home viewers over the net.
This issue was already settled 6 years ago. Consumers already dollar-voted that they didn't want PCTVs, Gateway and DELL PCTVs flopped badly on the market, nor have consumers gone for TV sets with built-in WebTV functionality. TVs and the Internet do not mix. TV is for one-way entertainment. You turn on your PC for two-way entertainment.

Some visions will just remain visions.

I posted 4-5 sources pinning the Emotion Engine as going public in early 1999.
The DC hardware was completed in early 1998 and its rough spec was already well-publicized in the media due to the NEC vs. 3Dfx fallout by 1997. I doubt Toshiba offered a twin-VU design to SCEI (it doesn't make any sense), because the Toshiba proposal and the LSI proposal for the PSX2 CPU were roughly similar, the Toshiba one being more programmable and the LSI one being more fixed-function like PSX1. VU1 was a later add-on, driven by Kutaragi-san's desire to beat the published DC specs by a substantial margin.

With the volume of publications in early 1997 describing the vector processors (eg. VU1 & VU0)
1997??? Sources please....

it's impossible for your scenario to pan out. The papers I posted serve as fact...
Please provide me with EE papers dating back to 1997.

To Panajev

I forgot with that 2 KB of Work RAM and that unified 2 KB L1 cache the slave SH-2 must have "some" mileage...
You don't understand how Saturn functioned. Saturn was not an SMP machine; the second SH-2 is configured differently from the master and serves a function similar to a VU, so you would treat it like a programmable device running its own local program rather than as a CPU. The second SH-2 would have generated minimal bandwidth traffic if used properly.

you are assuming that SCE just paid and Toshiba delivered the processor...
Something like that.

To call the SCU a powerful DMA engine efficiently shifting data around is a bit of an overstatement...
Saturn was designed in 1993, not 2003. But the architectural principles were very similar. PSX2 was Sony's Saturn, except for the difference of Sony's marketing muscle and budget.

Do you realize that to pass data to the SCU's DSP or to the slave SH-2 you have to stall the VDPs ( in the case that they are waiting for data coming from main RAM ) ?
Why would the VDPs stall? They are output devices, not input devices. The CPU uses the DMA engine to move a block of data into the VDPs; the VDPs don't read system RAM on their own.

one was designed intelligently ( PlayStation 2 )
Well, I still remember all the developer outrage against the "insane" PSX2 architecture back in 2000, and it will be heard even louder when Kutaragi demos PSX3 sometime in the future.

No, but they can add another line for render-farms, server-farms and all handy-dandy GRID jobs they need to do...
X86 Linux boxes cannot be beaten for those applications. Anything non-standard costs money.

To London Boy

I REALLY don't know why some people get so worked up over something we know very little about and that won't come out for at least another 2 years....
Some people read more from cryptic patent applications than other people do. I already learned all I needed to judge the system.

all we know is that Cell will be a HUGE thing for Sony and MAYBE other manufacturers (after all, there's nothing to stop Sony licensing the thing to other people, given the right amount of money).
Who is going to license? Why? Let's get real here, CELL is Betamax2 that no other manufacturers care about.

but comparing Saturn to PS3 is just abnormally psychotic.
I never compared CELL to Saturn (I compared the Sega Saturn to the Sony Saturn, aka PSX2)..
 
Re: ...

DeadmeatGA said:
Will CELL-powered DTVs automatically perform 16X AA to enhance TV image to almost film level?

...and this little quote illustrates exactly the sheer lack of understanding of even remotely technical subjects that goes into suggesting such a thing. The rest of his postings should be equally subject to uncertainty for validity. The whole basis behind 16x AA is that you can indeed render the native image at 4x by 4x the resolution of the end product. How would a DTV fed by a digital videostream be able to do this? The videostream is the end product. The best you could do from there is 1x AA, which is essentially just blurring the image a bit. If it were possible to receive and process the "16x resolution scheme", wouldn't it make sense to just present that resolution (or whatever the highest compatible resolution is)? Maybe the exception could be if you happen to be receiving an uber, uber HDTV signal and wanted to show it at 640x480 for some insane reason... but then why would you even be concerned with video quality in the first place when doing such a thing? The point is, achieving 16x AA from a DTV signal won't have much to do with what hardware is present, Cell or otherwise. Perhaps it could happen with some crazy new compression format that appears in the future, but we view all hypothetical, crazy new things with great pessimism, don't we?
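( To put that in concrete terms: here is a minimal, purely illustrative numpy sketch of what "16x AA" actually requires, namely a frame rendered at 4x by 4x the output resolution and then filtered down. The function name and the simple box filter are assumptions for illustration only; the point is that the renderer has to supply the oversized frame in the first place, which a broadcast videostream never does. )

import numpy as np
def downsample_16x(rendered, factor=4):
    # 'rendered' is an (H*factor, W*factor, 3) array straight from a renderer.
    # Each output pixel averages a factor x factor block of real samples,
    # so factor=4 gives the 4x4 = 16 samples behind "16x AA".
    h, w, c = rendered.shape
    return rendered.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))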

All of this is beside the point, as the original statement was worded to mean exactly what it meant: "enhance TV image to almost film level". So somehow he expects this TV to magically be able to upscan a signal resolution to "film level" (using a severely data-reduced digital signal as a starting point, no doubt)?

What could happen is that a low-powered Cell functions as the general-purpose processor in a DTV. Any moderately complex modern TV needs one for basic operation (a general processor, that is). This should be of no surprise (except to DM). Being a Cell processor with network compatibility, it is certainly possible that spare cycles could be used by a separate game console (on some distributed-computing project, of course, most likely not realtime gaming), for example. Naturally, the more Cell appliances you have lying around, the better the effect on whatever ends up being the cycle-hog (assuming this admittedly crazy architecture does work in the end). If a simple Cell unit could become as ubiquitous in use as a general processor in various appliances (as say an ARM or some Motorola job), then I'd say such a networked scenario is certainly possible.
 
I can see the rationalisation in your posts, but...

Dreamcast was completed by early 1998 and Kutaragi had its full spec by then. Of course he wanted to beat Sega numbers by a large margin, and this is why he ordered Toshiba to throw in VU1.


creeps the hell outta me, by striking me as disturbingly paranoid. While you offer some welcome analysis, can you support the above claims with more than circumstantial evidence?
 
...

...and this little quote illustrates exactly the sheer lack of understanding of even remotely technical subjects that goes into suggesting such a thing.
It was meant to be a joke for hardware junkies.....

"enhance TV image to almost film level". So some how he expects this TV to magically be able to upscan a signal resolution to "film level" (using a severly data-reduced digital signal as a starting point, no doubt)?
1. Take the standard HDTV frame decoded to some resolution, say 800x600.
2. Blow up the frame to 1600x1200.
3. Do a 4x4 interpolation.
4. The "processed" frame looks sharper than the original, presuming the TV screen supports higher resolution.
5. Works with any video source.
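( Purely for illustration, here is roughly what steps 2-3 amount to as a minimal numpy sketch, assuming the decoded frame is already a numpy array; the helper name, the nearest-neighbour blow-up and the box filter standing in for the "4x4 interpolation" are assumptions, not a description of any real product. Note that every output pixel is still a weighted mix of the original samples. )

import numpy as np
def upscale_and_interpolate(frame, scale=2, tap=4):
    # Step 2: nearest-neighbour blow-up (e.g. 800x600 -> 1600x1200 when scale=2).
    big = frame.repeat(scale, axis=0).repeat(scale, axis=1).astype(np.float64)
    # Step 3: the "4x4 interpolation", modelled here as a separable tap-wide box filter.
    kernel = np.ones(tap) / tap
    for axis in (0, 1):
        big = np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), axis, big)
    # Every output value is still derived from the original pixels only.
    return big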

What could happen is a low-powered Cell functions as the general purpose processor in a DTV.
Why would any manufacturer want to bother with CELL when single-chip DTV decoding solutions are dirt cheap???? Your mentality is like that of the NUON guys, that "Consumers will pay more to play games on other DVD players". Well, no one did, and NUON went bankrupt.

Any moderately complex modern TV needs one for basic operation (a general processor, that is)
ARM has the embedded market covered. ARM cannot be beaten.

Being a Cell processor with network compatibility, it certainly is a possibility spare cycles could be used in the function of a separate game console
Yap, and my TV running on some ARM chip should be playing Tetris on its idle cycle. Why haven't TV set makers figured that out???

Naturally, the more Cell appliances you have lying around, the better the effect on whatever ends up being the cycle-hog
Doesn't work like this. The merit of console gaming is consistency; it is not acceptable to have a game run at 30 FPS on one TV set and 60 FPS on a Sony TV set.

assuming this admittedly crazy architecture does work in the end
A big assumption, that is.

can you support the above claims with more than circumstantial evidence?
Evidences are gone. You know them if you were around 97~98. All I can definitely tell you is that Toshiba and LSI competed for the PSX2 CPU project with their respective proposals, both designs being fairly similar in capability with the Toshiba one being more programmable, Kutaragi picked the Toshiba design, the final outcome of the Emotion Engine is not what Toshiba initially proposed in the spring of 1997, etc....

http://www.informationweek.com/649/iufin.htm
 
Evidences are gone. You know them if you were around 97~98.

then your point on this matter is moot.

All I can definitely tell you is that Toshiba and LSI competed for the PSX2 CPU project with their respective proposals, both designs being fairly similar in capability with the Toshiba one being more programmable, Kutaragi picked the Toshiba design, the final outcome of the Emotion Engine is not what Toshiba initially proposed in the spring of 1997, etc....

The link you posted tells me that's true. Do you have any specific details on Toshiba's original design vs. LSI's?

I'll see if I can find any links.
 
Re: ...

DeadmeatGA said:
"enhance TV image to almost film level". So some how he expects this TV to magically be able to upscan a signal resolution to "film level" (using a severly data-reduced digital signal as a starting point, no doubt)?
1. Take the standard HDTV frame decoded to some resolution, say 800x600.
2. Blow up the frame to 1600x1200.
3. Do a 4x4 interpolation.
4. The "processed" frame looks sharper than the original, presuming the TV screen supports higher resolution.
5. Works with any video source.

??? You end up with the worst looking, blurry 1600x1200 image evah! What's the point? Just show it in the native 800x600. ...and you want to try this from heavily compressed digital video as the source??? Talk about magnifying your artifacts!

Why would any manufacturer want to bother with CELL when single-chip DTV decoding solutions are dirt cheap???? Your mentality is like that of the NUON guys, that "Consumers will pay more to play games on other DVD players". Well, no one did, and NUON went bankrupt.

Naturally, you would ASSUME that a Cell would only be useful as a DTV decoding chip (not that it couldn't be). What I said was a general processor. Something has to drive all those sophisticated electronic functions and draw all those pretty onscreen menus. We're long past the age where a TV is just a collection of knobs and a power switch that leads through a set of fixed logic pathways.

ARM has the embedded market covered. ARM cannot be beaten.

Perhaps, but competition and introduction of new paradigms can be a good thing. Mind you, the point was not that ARM has to be beaten. All I said is that a Cell chip could conceivably go into appliances where ARM has gone.

Yap, and my TV running on some ARM chip should be playing Tetris on its idle cycle. Why haven't TV set makers figured that out???

Was the ARM developed for distributed/networked scenarios? Your comment is nonsensical.

Doesn't work like this. The merit of console gaming is consistency; it is not acceptable to have a game run at 30 FPS on one TV set and 60 FPS on a Sony TV set.

Naturally, you make the faulty assertion that this could only benefit fps, but a benefit is a benefit...
 
...

You end up with the worst looking, blurry 1600x1200 image evah
1600x1200 is always better than 800x600. You see more detail than at 800x600, even if 3/4ths of the pixels were artificially created.

What I said was a general processor.
ARM is also a general purpose processor.

Something has to drive all those sophisticated electronic functions and draw all those pretty onscreen menus.
ARM has been doing that just fine all these years.

All I said is that a Cell chip could conceivably go into appliances where ARM has gone.
Why use CELL when ARM costs $1 a piece and is proven to work? Do you need 1 teraflop to change channels? Why pay $200, plus even more expensive software suites??? It is not logical.

Perhaps, but competition and introduction of new paradigms can be a good thing.
I welcome competition and new paradigms. But the new paradigm must be auto-parallelism, not the "We throw a bunch of processors into a die for massive marketing hype, now you code slaves figure out how to work this thing" kind of paradigm.

Was the ARM developed for distributed/networked scenarios?
Is CELL then??? CELL does not do networking automatically, developers have to code in the networking. You do networking more cheaply and more reliably on a proven ARM platform than on an unproven and beastly complex CELL.
 
Re: ...

DeadmeatGA to Vince said:
The DC hardware was completed in early 1998 and its rough spec was already well-publicized in the media due to the NEC vs. 3Dfx fallout by 1997. I doubt Toshiba offered a twin-VU design to SCEI (it doesn't make any sense), because the Toshiba proposal and the LSI proposal for the PSX2 CPU were roughly similar, the Toshiba one being more programmable and the LSI one being more fixed-function like PSX1. VU1 was a later add-on, driven by Kutaragi-san's desire to beat the published DC specs by a substantial margin.

HA! I usually don't like giving out individuals' information, but to show just how ignorant and insanely biased you are... it's understandable.

First, let's start with the basics. This is what Howard did at Toshiba:

From http://poly.polyamory.org/~howard/resume.html :
11/96 to 1/99: Manager, Synthesis & Physical Design, Processor Development
Toshiba America Electronic Components, San Jose, CA
Led the logic synthesis and physical design of the processor for the Sony Playstation 2 "Emotion Engine" (first commercial 128-bit microprocessor). 7 people in my team. Developed and managed back-end schedule with only 14% slip. Much travel to Japan and coordination with Japanese managers & engineers. Wrote Logic Design Rules spec and co-wrote RTL Design Guidelines. Spearheaded use of Ambit (synthesis) and Chrysalis (formal verification) tools. First silicon was 100% functional; second silicon (speedup) shipped in development systems; third silicon (shrink & cost reduction) has shipped over 23 million units.

OK, now let's look at this:

1996: November: Left HAL after 4 years and started working at TAEC on a high volume embedded processor for a consumer application

1998: August: And at the end of the month, we finally taped out the chip I've been working on since I joined Toshiba

1999: January: Finished my job at Toshiba.
1999: March: Can finally reveal that the chip I was working on is the Emotion Engine in the Sony Playstation

Amazing the people you run into online in the Go! community.


DeadmeatGA said:
Evidences are gone. You know them if you were around 97~98. All I can definitely tell you is that Toshiba and LSI competed for the PSX2 CPU project with their respective proposals, both designs being fairly similar in capability with the Toshiba one being more programmable, Kutaragi picked the Toshiba design, the final outcome of the Emotion Engine is not what Toshiba initially proposed in the spring of 1997, etc....

Um, alrighty buddy. More like, "Evidences never existed." Revisionist history is funny as shit to people with a clue:

From http://www.time.com/time/asia/magazine/2000/0320/japan.sony.html :
He called together a few dozen engineers from all over the world, including Toshiba's team, to a secret meeting in the city of Ito in 1996. There, Kutaragi divulged his dream of turning PlayStation into a platform for connecting an increasingly wired world

So, we know the program started in 1996 when Kutaragi laid out the specs. You stated that by 1998, Kutaragi would have known the DreamCast's CPU specs, once the DreamCast's design was completed. That means that in 1998, SCE/Toshiba must have changed the front-end specifications of the EE.

The chip taped out in August 1998. That leaves under one year for the front-end work, resynthesis, netlisting, subsequent development and 3 tape-outs (2 revisions) of a CPU. IMPOSSIBLE.

You do realize that the psychological test wasn't a joke; I was seriously trying to help you with the diagnosis.



Onto Cell:



Why would any manufacturer want to bother with CELL when single-chip DTV decoding solutions are dirt cheap???? Your mentality is like that of the NUON guys, that "Consumers will pay more to play games on other DVD players". Well, no one did, and NUON went bankrupt.

Um, no. The incentive is easy interoperability with other devices utilizing Cell, initially those from the Sony Group and Toshiba. The incentive is a scalable architecture that can be used across all of a company's home electronics and has the ability to interoperate and to access digital media over the broadband Internet.

That's why. Because Sony's intention is to make the Sony Group into a broadband-connected commodity that is self-feeding and self-sustaining. Sony Movies is the hottest film producer in Hollywood and holds enormous resources of digitized TV shows, movies, etc. Sony Music is huge in itself....

They, unlike any other company in existence, not only produce the digital media at the front end but also sell the back-end products. What Cell and broadband will do is interconnect those back ends and allow broadband to connect digital creation with digital presentation.

This is unprecedented if it's pulled off. No other company is in a position to control the entire spectrum of media transmission, nor has one been since the invention of the printing press. Not even AOL-TimeWarner had this potential in its heyday.

People buy commodities. Sony is a commodity and it's about to increase this many-fold.

Yap, and my TV running on some ARM chip should be playing Tetris on its idle cycle. Why haven't TV set makers figured that out???

Because: (a) The present-day architectures aren't designed the way Cell is, both in terms of hardware and software. (b) Who said it's going to share resources like that for RT apps? (c) What other company could?

Doesn't work like this. The merit of console gaming is consistency; it is not acceptable to have a game run at 30 FPS on one TV set and 60 FPS on a Sony TV set.

No shit, Sherlock; neither Sony nor I in this discussion ever stated that developers would be programming to this paradigm.

Who is going to license? Why? Let's get real here, CELL is Betamax2 that no other manufacturers care about

Well, I'll tell you that you're quick to judge and ignorant of the industry's desire for cross-platform standards for electronics. Sony is already doing great with their Linux derivative being designed by Sony, Samsung, Philips and LG Electronics, etc. And the extension to Sony's OpenMGX designed with Philips/Intertrust.

The industry wants to sell its products. People want their products to be simpler to use together (e.g. anti-Microsoft) and they want pervasive computing that is always connected. This is the direction Cell is moving in.
 
Re: ...

DeadmeatGA said:
Why use CELL when ARM costs $1 a piece and is proven to work? Do you need 1 teraflop to change channels? Why pay $200, plus even more expensive software suites??? It is not logical.

It is scalable. But then, if you had actually read the patent or overcome your psychological problems, you'd have realized this by now.

In fact, the patent shows examples of chips based on the Cell architecture (which is what it is: an architecture, not a specific chip) that are under 1/5th the size of the BE. And they can scale lower if necessary, although for the 2005 window this is great.


I welcome competition and new paradigms. But the new paradigm must be auto-parallelism, not the "We throw a bunch of processors into a die for massive marketing hype, now you code slaves figure out how to work this thing" kind of paradigm.

You don't know this. You don't know anything about the architecture, and especially not about the software programming of it. I don't know the details, so I'm sure as hell you don't.

I feel that the XBox Next and Nintendo 5 must be so much harder to program because their performance is so much greater and the effort to extract this performance must scale linearly.


< Tries to slit wrists with plastic spoon >

Is CELL then??? CELL does not do networking automatically, developers have to code in the networking. You do networking more cheaply and more reliably on a proven ARM platform than on an unproven and beastly complex CELL.

Bullshit again. Prove it; I want empirical comparisons based on numbers, not words. Isn't anyone else sick of your useless ranting based on your ideals and not any shred of evidence?

Software Cells anyone?
 
Deadmeat said:
Hi, you should be the one to bitch most about the PSX3 architecture here. After all, it is you who will be struggling with this beast, not us.
I need to see something of the actual architecture before I have something to bitch about though. I'll leave speculative bitching over old patents to you.
Though, mind if I ask: this patent has been floating around for months, so why start bitching about it now? It hasn't changed in any way since we first saw it.

I can clearly tell one of PSX2's VUs was a last-minute add-on in an attempt to boost geometric processing numbers against DC. I suspect it is VU1.
But VU1 IS the geometric/T&L processor in PS2.
It's both easier to use for that purpose and faster at it than VU0 (which should be obvious, since VU0 wasn't designed to function as a T&L unit to begin with).
To anyone with ANY real-world usage experience it's painfully clear VU0 was the worse-thought-out part of the two; however, architecturally VU0 is even less likely to be an afterthought, since it's actually coupled as a coprocessor to the R59k and extends the CPU instruction set, as well as working as a standalone unit.
 
My take DeadMeat

I think PS2 was based more on the response to PS1 than on any other architecture ( otherwise they might have stuck colour blending on!!! )
The design acts on the breakdown of scene graph data into static ( background and simple geometry ) and complex ( animated or physically modelled ), with VU1 probably having more in common with the 3D transform engines on SGI Reality Engines and the HW OpenGL pipe... and VU0 being an extension of all of the requests/complaints about the original GTE on the PS1

I do agree that the 'cell' and 'ps3' need to be considered as a console though - that's how most people will use it, and if the architecture appears elsewhere
it's a bonus ( like the ATI chips in the new high end SGI machine... )

If you wanted to compare architectures, PS2 probably shares more with the N64 than with the DC..
 
Re: ...

Doesn't work like this. The merit of console gaming is consistency; it is not acceptable to have a game run at 30 FPS on one TV set and 60 FPS on a Sony TV set.


It would be funny though!!! I mean, imagine Sony marketing their new Cell-powered range of TVs... NEW CELL-WEGA WIDESCREEN TV LETS U PLAY YOUR FAVORITE PS3 GAMES AT DOUBLE THE FRAMERATE!!!

AND COUPLED WITH THE NEW SONY CELL-POWERED MP3 PLAYER, U WILL GET 30% INCREASE OF PERFORMANCE!!!

:LOL: :LOL: :LOL:

Come on... in the next generation, the only PC-like options we will have will be the choice of video output (interlaced or HDTV), the sound output (DD or DTS) and THAT'S IT.
and depending on the BR functionality, all recording options.

everything else must still be easy to use for Mr Joe Dunnowottodo.....

Cell-powered TVs will probably only be able to "communicate" with the PS3 so that the PS3 will automatically output the best signal for that specific TV without us even knowing it. Cell-powered surround sound systems will communicate with it so that the best sound output will be fed through, without us even knowing it....

Unless I'm wrong, the video out of any console is what it says, OUT; there is nothing coming in. The signal goes from the PS3 to the TV and that's it.
 
small point

Actually, Deadmeat, the Saturn SH-2s did operate in a crude SMP manner.
Master/Slave was more bus arbitration than control...

However, the most efficient use tended to be with a locked D$ so the second SH-2 could run geometry and animation calculations while the first worked on game logic.
 