PS3: The GPU that will be vs the GPU that could have been

xbdestroya said:
wco81 said:
What is the fillrate of the RSX?

Someone mentioned the Xenos fillrate is 34 gigapixels?

No word that I can find - only '100 billion shader ops' and '51 billion dot products.'

Is there word on how many dot products Xenos is capable of? We know how much the XeCPU can do, but not the GPU.
 
Alpha_Spartan said:
I simply think it's another example of Sony PR writing a check that its ass couldn't cash. I'm actually surprised that they went with Nvidia. These Japanese companies tend to have a lot of pride and want to do their own thing. They learned a lesson from PS2's woefully under-powered hardware.

Woefully under-powered hardware? Uhm...
 
Alpha_Spartan said:
I simply think it's another example of Sony PR writing a check that its ass couldn't cash. I'm actually surprised that they went with Nvidia. These Japanese companies tend to have a lot of pride and want to do their own thing. They learned a lesson from PS2's woefully under-powered hardware.
Pride? According to you, it's as if Nintendo has been without pride for years :rolleyes:

Microsoft's investment in the Xbox planted and nourished new fruits: PC-devs-turned-console-developers and NVIDIA experience in the console field. Then Sony harvests them once they're mature :LOL: Makes sense, doesn't it?
 
Great post, Acert93!

I would like to add that the developers might not have to use the vector units from Cell as individual threads, but just as independent stream processors that are smart enough to keep their own memory and command queues filled, as long as the CPU makes sure it's all set up in advance. That way, you could just use a single thread to run the program loop and off-load all the calculations, without having to go fully multi-threaded.

I have no idea what the load balancing would look like if you did that, though. We'll just have to see how it works out.
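
Something like this minimal sketch, with plain POSIX threads standing in for the SPEs - it's just an illustration I made up, not the real Cell SDK API: one "program loop" thread keeps per-worker command queues filled, and each worker drains its own queue independently.

Code:
/* One main thread dispatches work; each worker acts as an independent
 * "stream processor" with its own command queue. Illustration only. */
#include <pthread.h>
#include <stdio.h>

#define QUEUE_SIZE  64
#define NUM_WORKERS 4

typedef struct {
    float jobs[QUEUE_SIZE];      /* stand-in for a command/data packet */
    int head, tail, done;        /* assumes fewer than QUEUE_SIZE pending jobs */
    pthread_mutex_t lock;
    pthread_cond_t  not_empty;
} queue_t;

static queue_t   queues[NUM_WORKERS];
static pthread_t workers[NUM_WORKERS];

static void *worker(void *arg)           /* one independent worker */
{
    queue_t *q = arg;
    for (;;) {
        pthread_mutex_lock(&q->lock);
        while (q->head == q->tail && !q->done)
            pthread_cond_wait(&q->not_empty, &q->lock);
        if (q->head == q->tail && q->done) {
            pthread_mutex_unlock(&q->lock);
            return NULL;
        }
        float job = q->jobs[q->head++ % QUEUE_SIZE];
        pthread_mutex_unlock(&q->lock);
        printf("worker result: %f\n", job * job);   /* dummy calculation */
    }
}

static void submit(int w, float job)     /* called only from the main thread */
{
    queue_t *q = &queues[w];
    pthread_mutex_lock(&q->lock);
    q->jobs[q->tail++ % QUEUE_SIZE] = job;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

int main(void)
{
    for (int i = 0; i < NUM_WORKERS; i++) {
        pthread_mutex_init(&queues[i].lock, NULL);
        pthread_cond_init(&queues[i].not_empty, NULL);
        pthread_create(&workers[i], NULL, worker, &queues[i]);
    }
    /* the single-threaded program loop: set the work up, round-robin it out */
    for (int j = 0; j < 16; j++)
        submit(j % NUM_WORKERS, (float)j);
    for (int i = 0; i < NUM_WORKERS; i++) {          /* shut everything down */
        pthread_mutex_lock(&queues[i].lock);
        queues[i].done = 1;
        pthread_cond_signal(&queues[i].not_empty);
        pthread_mutex_unlock(&queues[i].lock);
        pthread_join(workers[i], NULL);
    }
    return 0;
}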
 
Re: PS3: The GPU that will be vs the GPU that could have been

xbdestroya said:
Now, obviously it just didn't work out - I'm looking for people's thoughts on what they think the reasons for that were.
I can't see anything obvious here.
All we know is some patent, that's all.
You can't say something didn't work out as expected just because they didn't implement what they patented.

BTW, I think that Sony went to NVIDIA for a last-minute deal because they simply overestimated themselves.
 
Man, some people are still really angry about those PS2 jaggies, aren't they. Let the ghost go, buddy, let it go. :p

And wasn't the Nvidia deal a contingency of some sort? People keep phrasing the switch to Nvidia as "last minute" but I've never heard anything concrete on the issue. If Sony is as "prideful" as some think, would they even bother with a plan B?
 
Re: PS3: The GPU that will be vs the GPU that could have been

nAo said:
xbdestroya said:
Now, obviously it just didn't work out - I'm looking for people's thoughts on what they think the reasons for that were.
I can't see anything obvious here.
All we know is some patent, that's all.
You can't say something didn't work out as expected just because they didn't implement what they patented.

BTW, I think that Sony went to NVIDIA for a last-minute deal because they simply overestimated themselves.

Well, you're right, but I think it's safe to say that Sony and Toshiba were probably working on the graphics solution on one level or another before Sony went with NVidia, and in that context, I was just saying 'It's obvious it didn't work out.' No need to be so snappy! :p

Anyway nAo, knowing that you were pretty familiar with said patents back in the day, and just in general, what did/do you think about their plausibility on the whole? Was Sony naive all along to think that they could outflank the traditional rasterizers performance-wise, or was/is it an idea whose time simply has not come due to limitations?

(Basically asking the question I asked Gubbi: is there a future for software rendering?)
 
Re: PS3: The GPU that will be vs the GPU that could have been

xbdestroya said:
Software rendering. Ray-casting based; possibly raytracing as well.
But what I want to know is: what happened?
Advantage seen in traditional rasterizer/hardware?

It's probably related to Cell not meeting initial performance expectations. Sony was talking about 1000 times PS2 performance at first, 200-300x a year later, and ended up with ~35-40x.
If they'd ended up with something as fast as they originally wanted, then two Cell CPUs could have done some very interesting stuff. Raytracing or REYES, hardware accelerated, on a machine that sports some real 1-2 TFLOPS of performance, would've been a good contender against the more traditional Xbox360.

Also, beyond the probability of falling short of their goals, they most likely also realized that such a radically different architecture would've put them at a real disadvantage in terms of games. Developers would need a lot of learning to get into this new rendering architecture, they'd probably need to rebuild their content creation pipelines, and so on... all this would take too much time and money for most studios, and they'd probably choose to develop for the Xbox instead.
Others have already discussed here that all contenders are probably going to end up pretty close to each other in hardware, with a PowerPC-based architecture and an SM3.0+ graphics system. Easier to develop for, easier to hire experienced people, easier to write multiplatform games... Developers probably also pushed Sony in this direction.
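
Rough back-of-envelope on those multipliers (my numbers, so take them with a grain of salt): the EE's peak was around 6.2 GFLOPS, and a 3.2 GHz Cell is roughly 25.6 GFLOPS per SPE x 8 SPEs plus the PPE, so somewhere around 230 GFLOPS single precision. 230 / 6.2 is about 37, which is where the ~35-40x figure comes from; the original 1000x talk would have meant several TFLOPS in a single chip.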
 
65nm was never going to be ready in time. At each stage of ramping up a new process to readiness, there are months of testing and formula tweaking. The giant implanters used in developing wafers spend most of their time running pilots for calibration purposes for a long time after even some small part of their recipe has been changed, and that's just one part of the fabrication process. Announcing the start of a new phase of pre-readiness still puts you months away from actually getting past that phase, and trying to launch these systems off of "risk production" is not very practical.

The era when specialized graphics processing no longer yields performance benefits over generalized processing is still a ways away, so CELL-based graphics processing wouldn't have been competitive.
 
*thumbs up*

Thanks for the angles guys.

@ Laa-Yosh I think the Cell actually hit its performance target - it was the Broadband Engine (4 Cells) that was supposed to hit 1 TFlop - but I definitely see where you're coming from. For certain it must have been clear to Sony from an early stage that they wouldn't fit that in there, and then with the 'Visualizer' on top of it all...

Anyway, I'm just a fan of novel architectures, which of course is why I have an interest in what became of the mythical Graphics Synthesizer 3 in the first place, and its would-be feasibility and strengths.

PS - Any thoughts on what Sony would have used/been left with if NVidia hadn't been there to make the RSX? I find that nearly as intriguing a question as my original - I mean, they probably would have gone with some variation along the GS evolution path, right?
 
JF_Aidan_Pryde: Do you have a link?

The Broadband Engine as we know it and discussed here (the patent) referred to a 4-CELL setup (each CELL holding 8 SPEs/APUs). Thanks.
 
Re: PS3: The GPU that will be vs the GPU that could have been

Laa-Yosh said:
xbdestroya said:
Software rendering. Ray-casting based; possibly raytracing as well.
But what I want to know is: what happened?
Advantage seen in traditional rasterizer/hardware?

It's probably related to Cell not meeting initial performance expectations. Sony was talking about 1000 times PS2 performance at first, 200-300x a year later, and ended up with ~35-40x.
If they'd ended up with something as fast as they originally wanted, then two Cell CPUs could have done some very interesting stuff. Raytracing or REYES, hardware accelerated, on a machine that sports some real 1-2 TFLOPS of performance, would've been a good contender against the more traditional Xbox360.

Also, beyond the probability of falling short of their goals, they most likely also realized that such a radically different architecture would've put them at a real disadvantage in terms of games. Developers would need a lot of learning to get into this new rendering architecture, they'd probably need to rebuild their content creation pipelines, and so on... all this would take too much time and money for most studios, and they'd probably choose to develop for the Xbox instead.
Others have already discussed here that all contenders are probably going to end up pretty close to each other in hardware, with a PowerPC-based architecture and an SM3.0+ graphics system. Easier to develop for, easier to hire experienced people, easier to write multiplatform games... Developers probably also pushed Sony in this direction.

they can always try again with PS4 :)
 
xbdestroya said:
Anyway, I'm just a fan of novel architectures, which of course is why I have an interest in what became of the mythical Graphics Synthesizer 3 in the first place, and its would-be feasibility and strengths.

PS - Any thoughts on what Sony would have used/been left with if NVidia hadn't been there to make the RSX? I find that nearly as intriguing a question as my original - I mean, they probably would have gone with some variation along the GS evolution path, right?


agreed.

I think we would have seen a Graphics Synthesizer 3 or Visualizer with pixel shader 2.0 capability, tons more fillrate and bandwidth, with eDRAM.
 
I think the mistake they made was predicting that 65nm would be ready in time, with a fast shrink to 45nm to follow. That would've given them a 2-CELL PS3 and a 2-CELL Visualizer. ;)
 
Acert93 said:
wco81 said:
What is the fillrate of the RSX?

Someone mentioned the Xenos fillrate is 34 gigapixels?

I thought Xenos was 16 Gigapixels with 4x AA??
16 GSamples/s with 4xAA, being 4 GPixels/s.

If RSX is anything like G70 on the ROP side (which is actually unlikely), it will have a theoretical 11.2 GPixels/s and 22.4 GSamples/s. However, unlike Xenos it will not be able to sustain that rate due to bandwidth limitations.
More likely numbers are 8.4/16.8 or 5.6/11.2.
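
For reference, these all come out of the same formula: fillrate = ROPs x clock. If I have the Xenos numbers right, the eDRAM daughter die has 8 ROPs at 500 MHz, so 8 x 0.5 GHz = 4 GPixels/s, and with 4 samples per pixel per clock for 4xAA that's 16 GSamples/s. The RSX figures above are just the same formula with different guesses for ROP count and clock.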
 
It would be nice if they removed some ROPs to add more ALUs ;)
But I don't think they had the time to do it...
 
Re: PS3: The GPU that will be vs the GPU that could have been

xbdestroya said:
what did/do you think about their plausibility on the whole? Was Sony naive all along to think that they could outflank the traditional rasterizers performance-wise, or was/is it an idea whose time simply has not come due to limitations?
I think Sony and Toshiba issued patents about plausible technology, but they realized too late that they didn't have the expertise to develop a completely new programmable GPU.
I don't know if the GPU Toshiba developed for the PS3 employed some exotic architecture/rendering algorithm, but I know it wouldn't have been as advanced as current GPUs with regard to vertex and pixel shaders.

(Basically asking the question I asked Gubbi: is there a future for software rendering?)
I believe there is a future for 'software rendering'.
There's still a lot of fixed-function hw on current GPUs (this is not a bad thing!!), and that means sometime in the future we might get semi-programmable rasterizers and ROPs ;)
 
Acert93 said:
Sony with one PPC core and 7 extremely fast vector units? or

MS with 3 PPC cores with beefed up VMX units?
You're doing it again, Acert o_O
SPEs are not JUST vector units! They're processors. They can do floating point, they can do integer. They can do any code a general-purpose processor can, AFAIK. They are not specialised for general-purpose, branched code, so they take a penalty when running it, but you can't go around saying, like MS are, that Cell offers only a single core for general processing code. That's like saying a P4 can't do vector maths just because its key strength is general-purpose code.
 