PS3 hardware design choices - good or bad? *spawn

A full G80 would certainly have been outside of the console's reach. But a fully customised GPU incorporating elements of the G80 design, the same way Xenos incorporated elements of R600, would have been possible if started early enough.

Xenos has maybe half the raw performance of R600 with a less advanced feature set, but it delivers it within console TDP limitations. If NV had provided something along those same lines for PS3, i.e. a DX9 variant of G80 with about half the raw performance, it would have bested the 360.
 
G80 wasn't ready. Nvidia took their time making sure it was perfect before they showed it off to the world, and it was this care and attitude of perfection that ultimately led G80 to become the monster it was. Remember that it completely destroyed the first two generations of DX10 ATI cards.

PS3 also cost a lot and Sony was already taking a big loss on it, so can you imagine the loss they would have had to endure if they'd gone fully custom?

It could also be a simple case of Nvidia just saying no to G80 because it wasn't ready. PS3 and G80 did launch days apart, but PS3 was shown off a year earlier at E3 with working hardware, and at that moment in time I doubt Nvidia would even have had test silicon for G80.

PS3 was just unlucky with timing. A year later and they could have had a G80 derivative; a year earlier and they could have got a custom part with unknown performance, or a part from ATI based on Xenos.
 
It wasn't quite that big. Only 384m transistors compared with 304m for RSX and 337m for Xenos + daughter die. R580 + Xenon comes in at the same transistor count as Cell + RSX.
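To put rough numbers on that claim (the RSX, Xenos and R580 figures are from this thread; the ~165M Xenon and ~234M Cell counts are commonly cited outside estimates, so treat this as a sanity check rather than gospel):

```python
# Rough transistor-budget comparison, in millions of transistors.
# RSX, Xenos + daughter die and R580 figures come from the posts above;
# Xenon (~165M) and Cell (~234M) are commonly cited outside estimates.
rsx, xenos_total, r580 = 304, 337, 384
xenon, cell = 165, 234

print(f"R580 + Xenon:  {r580 + xenon}M")         # ~549M
print(f"Cell + RSX:    {cell + rsx}M")           # ~538M, roughly the same
print(f"Xenon + Xenos: {xenon + xenos_total}M")  # ~502M, the 360 as shipped
```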

I was going by die size at 90nm - Xenos is something like 175 mm^2 + edram die, RSX was 240 mm^2, R580 was 350 mm^2. Big old thing, and unlike Xenos you can't split it into 2 dice that will yield better (one with lots of redundancy).

As it's die size and process that determine how expensive your chip is to fab, it seems that R580 would have mullered them on manufacturing costs. It would have needed to swell further to accommodate PS3-specific system stuff, and further still if they'd added redundancy like they did with RSX (and with a chip that big, yields would probably be bad, so they'd need it).
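To make the die-size-drives-cost point concrete, here's a very rough sketch: dies per wafer plus a Murphy-model yield estimate. The 300mm wafer and the 0.5 defects/cm^2 density are assumptions picked purely for illustration, not real 90nm fab numbers:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    """Crude dies-per-wafer estimate (ignores aspect ratio and scribe lines)."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def murphy_yield(die_area_mm2, defects_per_cm2=0.5):
    """Murphy yield model; the defect density is an assumed placeholder."""
    d = die_area_mm2 / 100 * defects_per_cm2  # average defects per die
    return ((1 - math.exp(-d)) / d) ** 2

for name, area in [("Xenos parent", 175), ("RSX", 240), ("R580", 350)]:
    dpw, y = dies_per_wafer(area), murphy_yield(area)
    print(f"{name:12s} {area:3d} mm^2: ~{dpw} dies/wafer, "
          f"~{y:.0%} yield, ~{dpw * y:.0f} good dies")
```

Even with made-up defect numbers the shape of the result holds: the 350 mm^2 part gets fewer candidate dies per wafer *and* loses a larger fraction of them, which is exactly the double penalty being described.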

R580 was a great chip though. I regretted buying a 7900 GTX instead of an X1900 XT, as the X1900 XT continued to be able to run games well long after the 7900 GTX was desperate to be replaced. Cell has surely worked wonders for RSX.

Yes, agreed. This is where the customisation would have to come in. Clock speed could maybe be dropped to 500MHz, ROPs halved and the memory bus reduced to 128-bit or maybe 192-bit, which would cost them, but balanced against the money saved on Cell R&D it may have worked out cheaper overall. Regardless though, memory bandwidth would be the biggest challenge.
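To put numbers on that bandwidth worry: peak GDDR3 bandwidth is just bus width in bytes times per-pin data rate. The 1.3 Gbps/pin figure matches the 650MHz GDDR3 the PS3 actually shipped with; the rest is illustrative:

```python
# Peak bandwidth = (bus width / 8) bytes * per-pin data rate.
# 1.3 Gbps/pin corresponds to 650 MHz (1300 MT/s) GDDR3, as used with RSX.
data_rate_gbps = 1.3

for bus_bits in (128, 192, 256):
    print(f"{bus_bits:3d}-bit bus: {bus_bits / 8 * data_rate_gbps:.1f} GB/s")
# 128-bit: 20.8 GB/s (what RSX got)
# 192-bit: 31.2 GB/s
# 256-bit: 41.6 GB/s (the full bus width of R580-class desktop cards)
```

So cutting to 128-bit halves the desktop part's bandwidth at the same memory speed, which is why the bus decision would dominate everything else in such a design.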

Maybe they could have saved overall with a downspecced R580 derivative. Reducing clocks and memory bus would seem like a good idea, as you say. Not sure what off-the-shelf CPU cores they could have used or modified though, as without Cell there would have been no PPUs to build Xenon from. A64 had its own memory controller, and P4 was hot and slow, while Intel would have mugged them for C2D. Low-clocked POWER5 maybe?
 
Overall the PS3 was packing more combined CPU/GPU transistors than the 360 (because of Cell). However, it still arguably struggles to keep up today despite that advantage, so something must be wrong somewhere.

Not that the number of transistors really matters so much as how they're used to begin with... but let's play: where doesn't it keep up today?
 
My vote goes to "bad choice", no matter its merits in isolation.
I'm close to sharing Bill Gates' early comments.
Cell is the result of too many compromises. A resident dev here (fafalada?) stated multiple times that the VU ISA in the EE was better suited for graphics than the SPU one.
RSX doesn't look like the matching GPU for something that wanted to be (from Sony's POV, though they were not alone) the EE's heir.
Still, it does the job, but looking at the expenditure it's less than a feat. Millions have been spent on software R&D by Sony and by various publishers.

The 360 is definitely not a perfect design, but the PS3 is even less optimal.

I would say that Sony lacked the means to achieve what they wanted. They had to compromise on Cell, and they could not afford (in time and/or money) a custom GPU. That, or they developed the system on flawed premises.

I believe (I opened a thread about this a long time ago) that they should have stuck to what they knew.
A natural PS2 heir for me would have consisted of two OoO low-power MIPS cores plus a few wide SIMD units. On the GPU side I think PowerVR could have been a good partner, with something akin to a super-Kyro GPU. Vertex processing would be done on the CPU/SIMD side, pixels handled by the deferred renderer.
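For anyone unfamiliar with the split being proposed there: a TBDR bins transformed, screen-space triangles into tiles and resolves visibility per tile before shading anything. A minimal sketch of the binning step, purely illustrative and not based on any actual PowerVR implementation:

```python
# Minimal tile-binning sketch for a TBDR-style pipeline (illustrative only).
# The CPU/SIMD side hands over screen-space triangles; the deferred renderer
# bins them per tile, removes hidden surfaces, then shades the survivors.
TILE = 32  # tile edge in pixels; an arbitrary choice for this sketch

def bin_triangles(triangles, width, height):
    """Assign each triangle to every tile its bounding box touches."""
    bins = {(tx, ty): [] for tx in range(width // TILE)
                         for ty in range(height // TILE)}
    for tri in triangles:  # tri = ((x0, y0), (x1, y1), (x2, y2))
        xs, ys = zip(*tri)
        for tx in range(int(min(xs)) // TILE, int(max(xs)) // TILE + 1):
            for ty in range(int(min(ys)) // TILE, int(max(ys)) // TILE + 1):
                if (tx, ty) in bins:
                    bins[(tx, ty)].append(tri)
    return bins

tris = [((10, 10), (100, 20), (50, 90)), ((600, 400), (620, 410), (610, 450))]
touched = sum(1 for tile in bin_triangles(tris, 640, 480).values() if tile)
print(touched, "tiles have work queued")
```

The appeal for a bandwidth-starved console is that each tile's colour and depth live on-chip while the tile is worked on, so overdraw never touches external memory.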
 
It's a bit different from the topic's subject, but why bad decisions were made may prove more interesting; still, we are lacking intel.
I believe Ken Kutaragi took the hit with dignity, when I suspect there must have been quite some pressure surrounding the project.
He may have had to deal with shifting timelines, shrinking budget lines, and maybe unreasonable performance goals. I mean, the EE had amazing FP performance, and besting that by an order of magnitude may have been a crushing burden on the project.
I don't know, and I don't expect anything but secrecy from Sony on the matter, or from KK (not a whiner).
 
That's because the VUs were essentially designed as geometry processors. The SPUs, OTOH, were far more general-purpose in nature; there's an order of magnitude more you can reasonably accomplish on them than would ever be achievable on the VUs. Plus the VUs as designed would never scale to 2GHz, let alone 3.2GHz...
 
Did I speak of reaching 3GHz somewhere?
It would have made sense to have a VU 2.0; the PS3 is a console pushing mostly graphics.
By the way, you have no idea which clock speed they could have reached; that clearly depends on the design.
I would have favoured low-clocked, much wider (8- or 16-wide) SIMD with a graphics-oriented ISA.
Sony and Toshiba would have been up to the task of delivering this without bringing IBM to the party.
I also believe that, evolving from the EE philosophy, a TBDR (dealing with fragment processing only) would have been a good match. It looks to me like a more "coherent" design.
I don't know much about PowerVR's policies, but in 2005 and earlier they were not in the situation they're in today; they might have welcomed a partnership with Sony. It would have proved far more worthwhile for Sony than the exaggerated focus on the CPU.
 
Did I miss something, or isn't it considered a "fact" that RSX ended up being a stopgap solution because whatever else was planned (supposedly a Toshiba solution) flopped or got dropped? Maybe another victim of the HD DVD / Blu-ray war?
 
I don't think it's known at which point the super companion chip idea was abandoned.
It's not even certain it was intended for the PS3, as there were more plans around Cell than just the PS3.
Looking at how it turned out, Sony were right to abandon it.
 
I was going by die size at 90nm - Xenos is something like 175 mm^2 + edram die, RSX was 240 mm^2, R580 was 350 mm^2. Big old thing, and unlike Xenos you can't split it into 2 dice that will yield better (one with lots of redundancy).

I think R580 was about 315mm^2, but point taken. I'd have thought they could keep it within that size though, even with some level of redundancy, if they stripped out half the ROPs and some other PC-specific logic.

It would no doubt have cost more than RSX or Xenos but the big question is would the extra cost over RSX have been offset by the savings of not developing Cell?
 
G80 wasn't ready,

It was a lot more ready when PS3 launched than R600 was when Xbox 360 launched. So I see no reason why Sony couldn't have got something as close to G80 as Microsoft's part was to R600, had they chosen to go that route. And since G80 was a fair bit faster than R600, it stands to reason that this theoretical PS3 GPU would have been a fair bit faster than Xenos.

PS3 also cost a lot and Sony was already taking a big loss on it, so can you imagine the loss they would have had to endure if they'd gone fully custom?

Yes, but one of the main reasons it cost so much is all the R&D they poured into Cell. My argument is that they could have foregone Cell and put that money into a bigger, better custom GPU.
 
It was a lot more ready when PS3 launched than R600 was when Xbox 360 launched. So I see no reason why Sony couldn't have got something as close to G80 as Microsoft's part was to R600, had they chosen to go that route. And since G80 was a fair bit faster than R600, it stands to reason that this theoretical PS3 GPU would have been a fair bit faster than Xenos.

Last I checked, PS3 wasn't finalized the day before it launched, so I have no idea why you keep bringing up G80 being ready or nearly ready when PS3 launched; that's completely irrelevant.

In fact, unless you have information from Nvidia as to how far along they were with G80 at the time of the PS3's launch, it's a moot argument.

We have no idea how far Nvidia had got with G80 when PS3 was first launched.
 
pjbliverpool said:
It was a lot more ready when PS3 launched than R600 was when Xbox 360 launched. So I see no reason why Sony couldn't have got something as close to G80 as Microsoft's part was to R600, had they chosen to go that route. And since G80 was a fair bit faster than R600, it stands to reason that this theoretical PS3 GPU would have been a fair bit faster than Xenos.

Yes, but one of the main reasons it cost so much is all the R&D they poured into Cell. My argument is that they could have foregone Cell and put that money into a bigger, better custom GPU.

Blu-ray was more of a cause than Cell R&D of the PS3 being delayed, expensive, and saddled with a weaker GPU. The cost and retail price of the console didn't factor in the R&D spent on Cell. If Sony hadn't had to worry about Blu-ray, it would likely have come with both a more powerful GPU and a more powerful CPU.
 
Weren't many of the choices made for the PS3 post-Cell adoption just patch fixes? The addition of RSX, the addition of GDDR3 memory...

What if Sony had gone for an expanded Cell with on-die TMUs and ROPs, which would've effectively been an APU? It would've been a much simpler system, configuration-wise. No need for RSX, no GDDR3 (double the XDR instead, which yes, I know would've been pricey). In the end the system config would be so much simpler, with no splitting of graphics work between RSX and Cell.

I keep thinking that route would've been a much better one in the long run, even with the probable difficulties of producing such a large die with 2 PPUs, 16 SPEs, 20+ TMUs and 8+ ROPs. It could've been done, at least I think so.
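On the "double the XDR" point, the arithmetic is straightforward (the PS3's real XDR pool is a 64-bit interface at an effective 3.2 GHz; the doubling is the hypothetical above):

```python
# PS3 XDR: 64-bit interface, 3.2 GHz effective data rate.
xdr = 64 / 8 * 3.2          # 25.6 GB/s, the shipping configuration
doubled_xdr = 2 * xdr       # 51.2 GB/s for the hypothetical unified APU
gddr3 = 128 / 8 * 1.3       # 20.8 GB/s, RSX's separate GDDR3 pool

print(f"shipped: {xdr} + {gddr3} GB/s in two split pools")
print(f"unified: {doubled_xdr} GB/s in one shared pool")
```

A single 51.2 GB/s pool slightly exceeds the two shipped pools combined (46.4 GB/s), and without the split it can all go wherever it's needed, which is part of why the unified route looks attractive in hindsight.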

Being first out the door was a powerful position for MS. They could effectively define the next generation on their own terms, so their hardware vision was certainly clearer than Sony's, who had to scramble in relation to what MS was doing.
 
Last I checked, PS3 wasn't finalized the day before it launched, so I have no idea why you keep bringing up G80 being ready or nearly ready when PS3 launched; that's completely irrelevant.

In fact, unless you have information from Nvidia as to how far along they were with G80 at the time of the PS3's launch, it's a moot argument.

We have no idea how far Nvidia had got with G80 when PS3 was first launched.

But you didn't answer the argument about the Xbox 360's Xenos being similar to R600, despite the Xbox 360's hardware being finalized almost 3 years earlier than R600.

Arguing that Nvidia couldn't have provided Sony's PS3 with a better, more customized GPU with some features taken directly from the G80 project is simply very difficult to defend.

Basically, I have seen mainly two arguments trying to defend this indefensible position in this thread:

1. The G80 wasn't ready for the PS3's timeline. The PS3's hardware was finalized a year earlier than the G80; it is only the PS3's commercial launch that got delayed, because of Blu-ray and Cell:

The delay of the PS3 is an argument against Sony for not upgrading RSX, not in Sony's favour. The delay was a great opportunity for Sony to improve RSX and implement some new G80 features. RSX was the first piece of PS3 hardware finalized (if I remember correctly, March 2006), long before Sony finalized the Cell configuration they would put in the PS3 (deciding to use only 7 SPEs instead of 8).

After March 2006, Sony didn't even try to touch the RSX design, for obvious financial reasons: they didn't want to spend more money on the PS3 project, whose development budget was skyrocketing and causing them a lot of trouble. This was the problem. It's not that they couldn't ask Nvidia to improve RSX; it's just that they didn't want to.

2. A G80-derived GPU was simply too big for the PS3, with too much heat and power consumption:

The idea is not to put a fully featured G80-derived GPU in the PS3; of course that would have been very difficult to achieve. It's to take some good features and put them in a customized PS3 GPU, exactly like the Xbox GPU sitting in the middle ground between GeForce 3 and GeForce 4, or like Xenos in the Xbox 360.


To summarize: RSX was Sony's fault. Sony put an unbalanced, underpowered, outdated GPU into the PS3 even though they had seen how much better Xenos performed in the Xbox 360. Sony didn't react; they decided to go with RSX anyway, just to avoid spending more money.

Saying that Sony didn't have another choice is really far from the truth. RSX was finalized in March 2006, 5 months after Xenos shipped in the Xbox 360. The PS3's launch was delayed, G80 was being finalized by Nvidia, and it was a great opportunity for Sony to ask Nvidia to improve the PS3's GPU. They didn't use this opportunity; they decided they had already spent too much money on the PS3 project and went with RSX anyway.

Now, saying that it was financially a good solution to go with the RSX that was cheap to buy from Nvidia, that's a different subject...
 
Do you really, honestly believe that in a span of 6 months, Sony could have approached nVidia and said, "we want an upgraded GPU using ideas from your next-gen GPU", and nVidia could have designed and produced the custom part and got it mass-produced, all for an insignificant sum that wouldn't have damaged Sony's financials any more than their investment in Cell and BRD and RSX already had?

There's an argument that Sony could have chosen to weight their console more heavily toward the GPU, deliberately creating an imbalanced design because the workloads of the games would gravitate that way. That may well be true. It's only hindsight that tells us that, though. Looking at Sony's reasoning in wanting flexible, programmable horsepower, in the belief developers would use it to create more 'immersive' games, they were right to strike a balance. It's not Sony's fault that developers chose to use that CPU power to make better visuals instead of making more varied games using AI and physics and whatnot, dressed in less pretty graphics. We'd need to see what non-graphical workloads Cell could do in games to really appreciate what PS3 could have been. Would properly realistic, living worlds be possible with simplified PS2+ graphics?
 
Do you really, honestly believe that in a span of 6 months, Sony could have approached nVidia and said, "we want an upgraded GPU using ideas from your next-gen GPU", and nVidia could have designed and produced the custom part and got it mass-produced, all for an insignificant sum that wouldn't have damaged Sony's financials any more than their investment in Cell and BRD and RSX already had?


Sony already took huge risks and spent a lot of money developing a very complex, expensive CPU and optical drive, so why not finish the job and spend even more money to develop a great GPU too? This would not only have made multiplatform games easier to develop and look and perform better on PS3, but could have justified the premium price asked of consumers for the PS3. It is no surprise that when consumers started to find out that the PS3 offered no graphical advantage for multiplatform games compared to the Xbox 360, Sony was obliged to drop the price of the PS3 and take huge losses...

It is very possible that if Sony had spent more money on its GPU, the image of the PS3 as the most powerful console could have been sustained in consumers' eyes, and paradoxically Sony could have benefited financially from this image, recovering its losses from the PS3's development costs by selling more consoles, at higher prices, for a longer period of time.


There's an argument that Sony could have chosen to weight their console more heavily toward the GPU, deliberately creating an imbalanced design because the workloads of the games would gravitate that way. That may well be true. It's only hindsight that tells us that, though. Looking at Sony's reasoning in wanting flexible, programmable horsepower, in the belief developers would use it to create more 'immersive' games, they were right to strike a balance. It's not Sony's fault that developers chose to use that CPU power to make better visuals instead of making more varied games using AI and physics and whatnot, dressed in less pretty graphics. We'd need to see what non-graphical workloads Cell could do in games to really appreciate what PS3 could have been. Would properly realistic, living worlds be possible with simplified PS2+ graphics?

The problem is not that developers didn't want to use Cell for things other than graphics (physics, AI, simulations, interactive worlds...); to the contrary, they would have loved to, especially the first-party developers! The problem is that RSX was so unbalanced, weak, bandwidth-limited and crippled in some key graphical aspects (geometry, post-processing effects, vertex shading) that developers were obliged to use Cell to help it. If they hadn't done so, they would have ended up not only with less graphically impressive games than the Xbox 360, but with a struggling, inefficient GPU whose power was largely unused and wasted on tasks it couldn't run very well.

So yes, it was Sony's fault for not providing developers with an adequate GPU so they could use Cell for more interesting tasks than just helping the GPU.
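As a concrete example of the kind of "helping the GPU" described above (a generic pre-cull pass, not actual PS3 library code): teams commonly used the SPEs to transform and cull triangles before RSX ever saw them, so the GPU's weak vertex/setup side only received geometry that could actually end up on screen.

```python
# Sketch of an SPE-style geometry pre-pass: reject backfacing triangles on
# the CPU so the GPU only processes potentially visible ones. Illustrative only.
def signed_area2(tri):
    """Twice the signed screen-space area; <= 0 means backfacing here."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    return (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)

def cull_backfaces(triangles):
    return [t for t in triangles if signed_area2(t) > 0]

tris = [((0, 0), (10, 0), (5, 8)),   # counter-clockwise: kept
        ((0, 0), (5, 8), (10, 0))]   # clockwise: culled
print(len(cull_backfaces(tris)), "of", len(tris), "triangles sent on to the GPU")
```

Every triangle rejected this way is vertex-shading and setup work RSX never has to do, which is exactly the load-balancing the post is complaining developers were forced into.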
 
I think R580 was about 315mm^2, but point taken. I'd have thought they could keep it within that size though, even with some level of redundancy, if they stripped out half the ROPs and some other PC-specific logic.

Nvidia thought R580 was 352 mm^2 (but maybe they were exaggerating for PR purposes):

http://www.nvnews.net/vbulletin/showthread.php?t=65835

G71 swelled from just under 200 mm^2 to 240 mm^2 as RSX on the PS3, and that's despite losing 8 ROPs and half its video memory bus to Cell. R580 for the PS3 could have been creeping up on 360 or 400 mm^2 (depending on which R580 figure you use) with another 40 mm^2 added on.

It would no doubt have cost more than RSX or Xenos but the big question is would the extra cost over RSX have been offset by the savings of not developing Cell?

Maybe, but it seems that the big losses come from being stuck manufacturing something that is less efficient per dollar than what your competitor makes. If spending extra on R&D could have got them a better GPU they probably would have, but by the point they knew what they were up against, it seems time could have been the bigger issue.

It was a lot more ready when PS3 launched than R600 was when Xbox 360 launched. So I see no reason why Sony couldn't have got something as close to G80 as Microsoft's part was to R600, had they chosen to go that route. And since G80 was a fair bit faster than R600, it stands to reason that this theoretical PS3 GPU would have been a fair bit faster than Xenos.

I'm pretty sure Xenos wasn't based on R600, but on an older architecture that never made its way into the PC space, as it was deemed too complex or too difficult to manufacture or something like that. ATI showed it off to MS when they were looking for something for the 360, and it was adapted and updated for 90nm and Microsoft's next console. It later fed into the development of ATI's unified shader stuff in the PC space.

I remember it being something like this:
R400 (canned) -> R500 (Xenos) -> R600
R420 (X800) -> R520 (X1800) -> R580 (X1900) -> R600

It's not that MS got R600 early, it's that they spent years working with ATI on getting an alternative line of technology to R520 and R580 because it far better suited their needs. I doubt G80 would have offered superior performance per Watt to what MS got, and even if it had, Sony would have to have been working closely with Nvidia for a long time to get it.

That's assuming Nvidia would have been prepared to use lots of resources developing highly customised architectures for partners in return for small cuts per chip, of course.
 
I would say a lot of this debate centres on the fact that the Xbox 360 was undercooked in terms of readiness for release and the PS3 was overcooked. They both ought to have been released at about the same time, and they each had their own paradigm: custom-ish GPU vs custom CPU.

Personally, the only major mistake IMO with the PS3 design, given the way things went, is that they went with an 8-SPE design cut down to 7, when in reality it probably would have made better sense to use a 6-SPE design without the deactivated SPE.
 