CONFIRMED: PS3 to use "Nvidia-based Graphics processor"

rabidrabbit said:
I seriously think people are overestimating the effect that next-gen consoles will have on content creation, and how much more time and staff next-gen games will need.
Of course they'll require more art, more detail, larger worlds, longer and more complex code...

But it's not as if the whole world around the next-gen consoles stops, as if only the consoles get more powerful and capable while the content creation tools stay the same... after all, they all more or less follow the same general technology curve.
The tools for modelling, animation, motion capture, programming... they all evolve too.
You're one of the few who does. Content creation is expanding on an exponential curve, and there is not a developer in the world who is not chewing his fingernails at least a bit over the next gen.

There is definitely no sign that the tools are evolving on an exponential curve, and some have claimed they may not really be improving at all.

Greg Costikyan's take on it resonates with most people in the industry.
 
Li Mu Bai wrote:
There is no longer a fully cell-based EE2

huh? :?


Did you mean a fully Cell-based GS3 or Visualizer? The EE2 was meant to be a creative workstation CPU for 2002-2003.


You confused me, now let me confuse you :devilish: ;)


EE3 was supposed to be another creative workstation CPU for 2005-2006, as well as the CPU for PS3, as was the GS3 for graphics. BE might take the place of EE3, if BE isn't the same thing as EE3. Likewise, the Visualizer might have taken the place of the GS3, if the Visualizer wasn't the GS3. And now an Nvidia or Nvidia-Sony GPU might take the place of the Visualizer, that is, if the Nvidia or Nvidia-Sony GPU isn't an implementation of the Visualizer.

Now, collectively, we've probably confused the crap out of a lot of people. And very possibly I don't know what the hell I'm talking about :p
 
Things will change for the PC platform when enough tools and programs can be migrated over to a better architecture. There are too many disparate fields to pull that off all at once, but a new architecture that can power some new killer app, one that increases an industry's productivity to an undeniable degree, is what could cause such a switch. That will happen most directly in performance-critical industries, like CG studios and network servers, and will spread to broader use with time.
 
Dio said:
rabidrabbit said:
[...]
You're one of the few who does. Content creation is expanding on an exponential curve, and there is not a developer in the world who is not chewing his fingernails at least a bit over the next gen.

There is definitely no sign that the tools are evolving on an exponential curve, and some have claimed they may not really be improving at all.

Greg Costikyan's take on it resonates with most people in the industry.
OK, I'll believe it then. That was just my uneducated guess; I've never done any work on any game, and have only occasionally done some modelling with 3D software like Pro/E, Max...
Even if the development software and hardware were evolving at the same rate as the consoles, developers would still need to invest money in the new stuff, plus training etc...

...still, if next-gen games are mostly beat-'em-ups, racing games, FPSs...
is there really that much content that they need?

-A Tekken needs some ten characters and some ten 'levels'. The characters are already very detailed, with clothes, jewels, accessories... it already takes considerable artistic talent to model such human faces; a bigger poly budget would (I think) only make the artist's work easier, as he/she would not need to find shortcuts to make the model look as good as possible with more limited resources. I don't think they're modelling them vertex by vertex, building them by stitching poly to poly; they use more advanced modelling tools.
Isn't modelling today more like drawing or painting than sculpting? Isn't a realistic-looking round wheel easier, and even faster, to model with 5000 polygons than to find a way to fake it with 24 polygons and a low-res 32x32 texture? (See the little sketch at the end of this post.)
-A racing game needs hundreds of cars (already a huge task in this gen, GT4) and tracks (OK, trackside detail has a lot of room to improve).
-An FPS needs some enemies and about five or so distinctive "worlds", where much of the modelling and texture work will be recycled within each world anyway.

Is there really that much content in a game that follows much the same basic designs as today's genres? A GTA type of game, or an MMORPG, would of course need as much content as possible, but those types of games will probably not be seen early in a console's lifecycle.

I think the devs should be able to make visually very impressive next-gen games with about the same resources as today, if they concentrate on the initial (graphical) impact. That would of course mean they'd have to sacrifice the size of a game, but that would not necessarily be a bad thing, if the gameplay is innovative and enjoyable with high replayability.

Take for example a next-gen Madden game. If (or when) one is released at launch, very likely (IMO) most of the resources will have been allocated to the visual side of the game, as there is little need to reinvent the gameplay, so such a game would not really need that much more in resources than this gen. Wouldn't much of the modelling already have been done anyway; couldn't they recycle the models that were used this gen for pre-rendered intros etc...?
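Here's the little sketch I mean for the wheel. It's purely illustrative, not from any real modelling package, and every number in it is made up by me; I'm just assuming the rim is a simple triangulated tube, so the triangle count is nothing more than a parameter you turn up:

Code:
#include <cstdio>

// Toy illustration only: approximate a wheel rim as a triangulated tube.
// Each radial segment of the tube contributes two triangles, so the
// polygon budget is literally just one number the artist dials up.
int trianglesForWheel(int radialSegments)
{
    return radialSegments * 2; // two triangles per quad around the rim
}

int main()
{
    // Old-style budget: 12 segments -> 24 triangles, and you still need a
    // hand-tuned low-res texture to hide the facets.
    std::printf("12 segments   -> %d triangles\n", trianglesForWheel(12));

    // Bigger budget: 2500 segments -> 5000 triangles, no trickery needed,
    // the modelling tool does all the extra work.
    std::printf("2500 segments -> %d triangles\n", trianglesForWheel(2500));
    return 0;
}

Same artist, same five minutes of work; only the slider moves.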
 
Lazy8s said:
Things will change for the PC platform when enough tools and programs can be migrated over to a better architecture. There are too many disparate fields to pull that off all at once, but a new architecture that can power some new killer app, one that increases an industry's productivity to an undeniable degree, is what could cause such a switch. That will happen most directly in performance-critical industries, like CG studios and network servers, and will spread to broader use with time.

Well obviously, it will be a slow jump, just like it's been sloooooowly jumping in the last few years.

Now that you mention CG studios, any word on Sony buying Alias|Wavefront, and what that would mean for the workstation/CG market? Especially now that they seem to be marketing their own workstations based on Cell technology.
 
Devourer said:
Qroach said:
Failure? Nobody said that. Why do people in here jump from one extreme to the next??

Well, you wrote that Sony missed their ambitions. So basically they overhyped a machine they won't be able to deliver. To me that means failing on one's intentions and lying to their customers. Sorry if I read that wrong.

Oh, grow up! If you don't know how the pre-system buildup game works, then don't come here and play. EVERYTHING being talked about here is speculation. Sony has promised *nothing* to its consumers. Everything is liquid until Sony announces the PS3. Heck, everything is liquid until it's on the store shelves.
 
Yep, to us the internal "race" for the GPU production should have been a mystery. Obviously it's more convenient for many people if there is a story out there to make people talk. A lot.

There were some companies running for the position. Toshiba was trying, Nvidia was trying, and for all we know there could have been more companies involved.
One of them won because it proved to be the most knowledgeable and capable. It's not a disappointment for Sony; it's commendable that they went as far as admitting what was obvious to everyone, which is that ATI and Nvidia are simply better at their own game than Sony, who have a lot less experience in graphics technology. So, what better move than to embrace one of them?

Sony had ideas, they never promised anything, and Nvidia obviously complemented those ideas, so there's the reason for the partnership.

It's not a failure, because now they ARE Sony. They work with Sony on the GPU, and the GPU will have Sony's name written on it, together with Nvidia's.

Does that explain things better?
 
Sony has never done a console design all by themselves.
Why, suddenly, now that nVidia is on board, do some people count that as a failure of Sony's technological design abilities???
Did Sony ever even consider using "Cell" as a graphics processor purely as it is in their CPU? Are they using "Cell" in some form in the collaborative nVidia design? ... we don't know yet. But isn't it safe to assume "Cell" will have something to do with the Sony/nVidia graphics solution ;)

PS they had.... Toshiba? MIPS?
PS2 they had... also Toshiba, MIPS, LSI....
PS3 they have Toshiba, IBM, nVidia, (MIPS?, LSI?)...

Edit: hehe :) I seem to repeat much of what l-b says... can't help it, I guess I just like to 'suck up to him'..... and lick his boots... :oops: :LOL: .....
 
rabidrabbit:

> ...still, if next-gen games are mostly beat-'em-ups, racing games, FPSs...
> is there really that much content that they need?

Yes.

> Is there really that much content in a game that follows much the same
> basic designs as today's genres?

It's not just the amount of assets but also the complexity of the assets that will increase. Stuff that could be done in days last gen takes weeks this gen and will take months next gen.
 
rabidrabbit said:
Edit: hehe :) I seem to repeat much of what l-b says... can't help it, I guess I just like to 'suck up to him'..... and lick his boots... :oops: :LOL: .....

If only you were the only one... I'm physically and mentally drained. :devilish:
 
PS: QRoach and Johnny Awesome, are you guys the same poster, or do you just love me enough to both quote me in exactly the same way, in the same format, in your sigs?

It's just funny to see you flip-flop from one generation to the next. Anyway, I didn't even realize he had the same quote.
 
rabidrabbit said:
Sony has never done a console design all by themselves.
Why, suddenly, now that nVidia is on board, do some people count that as a failure of Sony's technological design abilities???
Did Sony ever even consider using "Cell" as a graphics processor purely as it is in their CPU? Are they using "Cell" in some form in the collaborative nVidia design? ... we don't know yet. But isn't it safe to assume "Cell" will have something to do with the Sony/nVidia graphics solution ;)


I don't see "CELL" as a failure. In context of what has been debated countless times in the past though, people have advocated "CELL" as a rulebreaker in a price-to-performance ratio. The rules don't seem to have be re-written yet.


Here is an old quote where Vince responded to something I wrote.

Brimstone wrote:

There is no way Sony can best ATI or Nvidia at what they do, transistor for transistor. It looks to me like the CPU for the Xbox 2 will be a PowerPC 350 variant. Considering the heat restrictions for a console CPU, it should be a good fit. I don't see how IBM will have a more efficient CPU for Sony's PS3. Overall the CPUs and GPUs will be close enough, but Sony is tied to Rambus XDR at the moment. The economies of scale favor Microsoft with GDDR-4 in 2006. All Nvidia and ATI higher-end cards will be using those modules. Micron and Samsung will be fabbing GDDR-4 at full bore.


Vince responds:


Yeah, right. IMHO, STI will beat ATI transistor-for-transistor, and STI's CPU will be at least an order of magnitude better than what Microsoft gets. And GDDR-4 won't have economies of scale over XDR. nVidia and ATI high-end cards sell in arbitrarily small absolute numbers, and GDDR has no large use outside of 3D cards; even combined with Xbox 2, it's unlikely to come close to the mean installed-base size that the PlayStation platform has achieved historically. Hell, the amount of XDR adoption in Cell-based, non-PS3 applications is likely to exceed the use of GDDR-4 by ATI and nVidia alone.


http://www.beyond3d.com/forum/viewt...der=asc&highlight=transistor&start=20


It was assumed by many that the GPU for the PS3 would be "CELL"-based. So in the context of Beyond3D and what was debated, Sony using an nVidia-created GPU is a failure to live up to expectations.
 
Brimstone said:
So in the context of Beyond3D and what was debated, Sony using an nVidia-created GPU is a failure to live up to expectations.

:rolleyes:

Whose expectations? Beyond3D isn't a single collective with one opinion.

We also don't know enough about the GPU to say whether some people might be disappointed...
 
^^ The point is, who actually expected Sony to use a Cell GPU, or at least a completely independent architecture? Sony never promised anything, so why should this be deemed a failure on their part?
Just because Vince (grossly ;) ) miscalculated something doesn't mean that Sony has failed to live up to what they promised, 'cause they never promised anything: they never released specs or performance expectations, apart from that very old 1 TFLOPS comment from crazy Ken Kutaragi, which doesn't concern the GPU anyway.

When the CPU only gets up to 999.99 GFLOPS instead of 1000, then you can say that Sony failed to live up to expectations.

On the GPU side, no one knew anything, and now that we do know something, the usual suspects (the ones complaining about the poor IQ in, apparently, every device Sony has ever created since the Betamax) take the opportunity to get on Big Bad Sony's case once again, this time for their "failure", instead of being happy to see that at last their next console will have proper IQ guaranteed..... :|
There's just no way to please some people!!!
 
V3 said:
Sony has been working on CELL for more than double that amount of time ... if they had intended to work with NVIDIA from the start, they most likely would have begun working with them sooner.....

From that patent you can see that they had two types of Cell chip in mind: one for data-plane processing, the other for graphics processing.

On this board, people always questioned who was going to design that part, specifically the one with the pixel engines. At first it was assumed it was just going to be another Sony graphics chip; some more patent searches gave Toshiba as a possibility too. Also, rumours from various and questionable sources pointed to NVIDIA. These rumours have been around for quite some time too.

On this board, this was once the Achilles' heel for PS3, because Sony was assumed to be doing it alone. Now it's official that NVIDIA got the deal. So I wonder where they'll poke next :)

Now the next interesting question is: how do the Power core and the Synergistic cores from STI help NVIDIA in designing the graphics processing Cell, if at all? That's the interesting bit.

Say that the GPU that was racing nVIDIA's solution as the GPU for PlayStation 3 was a Pixel Shading-only part, and that the plan was still to do all the VS work on the CELL-based CPU and all the PS work on the GPU.

I am saying... not that I know anything at all :devilish:.

Wouldn't you at least wonder why, if the GPU were CELL-based like the Visualizer, that GPU would be doing only PS work and could not share VS work at all with the CPU?

I mean... Apulets can migrate and should be processable by any CELL-based chip with as many APUs as the Apulet specifies (the PEs in the Visualizer were supposed to have 4 APUs each).
 
Panajev2001a said:
[...]

Say that the GPU that was racing nVIDIA's solution as the GPU for PlayStation 3 was a Pixel Shading-only part, and that the plan was still to do all the VS work on the CELL-based CPU and all the PS work on the GPU.

[...]

I mean... Apulets can migrate and should be processable by any CELL-based chip with as many APUs as the Apulet specifies (the PEs in the Visualizer were supposed to have 4 APUs each).

I know you lurking PS2 devs are under NDAs! :devilish:

CPU = 32-bit vertex engines
GPU = 32-bit pixel engines

And IIRC, S|APUs don't have to be 4-way SIMD-based for CELL to work with Apulets (Software Cells); the patents only say "preferable"...

Am I hot or am I cold! :p
 
london-boy said:
[...]
Just because Vince (grossly ;) ) miscalculated something doesn't mean that Sony has failed to live up to what they promised, 'cause they never promised anything: they never released specs or performance expectations, apart from that very old 1 TFLOPS comment from crazy Ken Kutaragi, which doesn't concern the GPU anyway.
[...]

I only quoted Vince because he carried the "CELL" flag in many debates on this forum. It's not about being right or wrong (I know for sure I'm incorrect on issues at least ten times as often as Vince), but about the goal of the research. The research will continue well after the PS3 is launched, I'm sure.
 
I've had this itch for quite a while, and now I want to make it public and ridicule myself...

What if ATI was also "running" for the position? I'm sure they would love to have a piece of Sony's cake, so if the position was open from the beginning, I'm pretty sure ATI got or made a few phone calls down the line... If Nvidia got into the party, I'm not sure why ATI would be left out...

That would have meant that IBM and ATI were going to be in all three consoles in the next generation... :oops:

Only speculating...
 
Jaws said:
[...]

CPU = 32-bit vertex engines
GPU = 32-bit pixel engines

And IIRC, S|APUs don't have to be 4-way SIMD-based for CELL to work with Apulets (Software Cells); the patents only say "preferable"...

Am I hot or am I cold! :p

APUs were supposed to have the same ISA all across, and if you want to call the GPU CELL-based, if you want your GPU to be CELL-based, it is because you want to share the workload dynamically between CPU and GPU.

Why would a CELL-based GPU only do Pixel Shading work and not be able to share the Vertex Shading workload?
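
Just to make concrete what I mean by sharing the workload, here is a toy sketch of how I read the patents. Only the Apulet/APU/PE names come from there; every structure, number and function below is made up by me and has nothing to do with any real CELL SDK:

Code:
#include <cstdio>
#include <vector>

// Toy model of the patents' idea: an Apulet (software cell) carries its own
// program and data and says how many APUs it needs. Fields are made up.
struct Apulet {
    const char* name;          // e.g. a batch of vertex shading work
    int         apusRequired;  // the apulet itself specifies this
};

// A Processor Element: a handful of APUs that all speak the same ISA,
// whether the PE sits in the "CPU" or in a Visualizer-style "GPU".
// (The Visualizer PEs were supposedly 4 APUs each.)
struct ProcessorElement {
    const char* location;  // "CPU" or "GPU", same ISA either way
    int         freeApus;
};

// Dispatch an apulet to the first PE with enough free APUs. Because the ISA
// is the same everywhere, a vertex-shading apulet can land on either side.
ProcessorElement* dispatch(std::vector<ProcessorElement>& pes, const Apulet& job)
{
    for (ProcessorElement& pe : pes) {
        if (pe.freeApus >= job.apusRequired) {
            pe.freeApus -= job.apusRequired;
            return &pe;
        }
    }
    return nullptr; // nobody has room right now
}

int main()
{
    std::vector<ProcessorElement> pes = {
        {"CPU", 4}, {"CPU", 4}, {"GPU", 4}, {"GPU", 4}
    };

    // Pretend the CPU-side PEs are busy with game code and physics...
    pes[0].freeApus = 0;
    pes[1].freeApus = 0;

    // ...then the very same vertex apulet simply migrates to a GPU-side PE.
    Apulet vertexJob = {"vertex shading batch", 4};
    ProcessorElement* target = dispatch(pes, vertexJob);

    std::printf("'%s' ran on a %s PE\n", vertexJob.name,
                target ? target->location : "no free");
    return 0;
}

Obviously the real thing would need DMA, scheduling priorities and so on; the point is only that with one ISA across all APUs, nothing in the model forbids a GPU-side PE from picking up vertex work.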
 