End of Cell for IBM

200+ mm^2 @ 90nm of quality PC-like CPU design would've allowed a broader range of workloads to perform better, benefitting PS3 game quality more.

Any silicon sacrificed to graphics on a CPU could've yielded better performance as part of the GPU.
 
200+ mm^2 @ 90nm of quality PC-like CPU design would've allowed a broader range of workloads to perform better, benefitting PS3 game quality more.

Any silicon sacrificed to graphics on a CPU could've yielded better performance as part of the GPU.

Assuming it could have been part of the GPU at all; the GPU isn't gimped just because the CPU can help it. The GPU is a full-fledged GPU, but its capabilities won't be the ceiling of graphical quality for the system. The Cell doesn't seem to be bad at game elements besides graphics at all. The main problem with the PS3 is really just bandwidth for certain graphical tasks and the split memory pool, right? The execution wasn't perfect, but I like how it's turning out; I still have something special to look forward to years after its release.

Maybe it's my love for RPGs that's speaking, but I like to see improvement over time, and you have to admit it will peak pretty soon if it's easy to develop for. It would then just become a case of... making the game. There would be nothing separating quality developers from those just pumping out the same old thing. What we are talking about could simply be an environment that pushes devs to do better, but, to me, it seems to work.

I'm no dev, so I don't quite sympathize with your desire for an easier system :p If it means you make fewer games of higher quality, so be it; I ain't got endless cash...
 
I think the idea was that the Cell would allow the PS3 to do things like KZ2 and U2 (and GT5 probably).

Having the designers put their effort into Cell, and not so much into the GPU, means that up to this point roughly 90% of all multiplats perform 10-20% worse on the PS3.
But who cares about multiplatform games when hardware exclusives will blow them graphically out of the water anyway?

It will only get better from this point on, as all developers share techniques and whatnot.

Not as a comparison, but looking at games like Gears of War 1 or PGR3 (or 4) in retrospect, I think those bars have not been raised (on the 360) since. Not that it really matters, as those games look great even today.
The PS3 was designed to go beyond that, so if that means having generally worse 3rd-party games, so be it. I wouldn't want to trade U2 or KZ2 graphics for all the multiplats in the world ;)
 
200+ mm^2 @ 90nm of quality PC-like CPU design would've allowed a broader range of workloads to perform better, benefitting PS3 game quality more.

Any silicon sacrificed to graphics on a CPU could've yielded better performance as part of the GPU.

What are the power, heat, and performance characteristics of such a unit? Would it be able to post-process graphics or run the PSEye?
 
There is also the question of what this broader range of workloads is. If the Cell were lacking the processing power required for parts of games, multiplats would be complete hell. The Cell CAN do it, and extra. We don't hear comparisons mentioning less physics or poorer sound, and they don't say the AI is not as good on the PS3 version. What is needed for gaming is there.
 
How this wasn't obvious to the Kutaragi regime is incredible, yet decisions like that are why they got themselves displaced at Sony. At least now the company's hardware engineers are free to select the most competitive solutions from the open market -- in the handheld sector, anyway, where the legacy won't be as difficult to handle -- and are not forced to use inferior, homegrown tech.

I'll re-enter the thread here to start, and say straight down the line I disagree with every statement here - which is nothing personal on you Lazy8s. :)

But the fact is that where hardware engineers were calling the shots before, now they are slaves to the bean counters, and the architecture created was hardly inferior in the least. It was designed with a purpose and a vision of the future in mind - in fact, that vision of the future is coming to pass as we speak. It is just the realities of fate that the Cell architecture launched larger and hotter (and the PPE...) than ideal, and that had a knock-on effect on its success in the PS3 (due to cost) and in other embedded markets (due to thermals and the lag in porting to bulk CMOS).

I think the architecture still rocks, and given the clear industry direction towards 'Cell-like' computing, it would be great to see the SPU2s and such that STI was messing with. As a tech enthusiast, I have my fingers crossed. But of course the reasons they would go more standard are clear as well; in my book, those reasons are driven neither by 'superior' options per se nor by hardware engineers calling the shots.

********************************************************

As far as Hirai and his quotes about lifespan and learning curve, etc., go, that's just obviously some PR BS right there, because the idea that you would purposefully build complexity into your system for its own sake gets multiple eye-rolls from me. We all know the truth of it, even if in some strange world maybe Hirai himself doesn't: it was just hard to program for given the nature of where the industry and the tools 'were.' Sometimes I think Sony execs feel the need to spin everything into seeming like an on-purpose calculation, when just admitting there were issues wouldn't tarnish the current success/image at all, and wouldn't raise any of the "oh brother..." flags that comments like these do. To say nothing of the fanboys around the Internet who actually buy into it.
 
The PS3 was not made difficult to program for in order to preserve a ramp-up on the learning curve; it was made difficult to program for so that it would have a better chance of remaining price- and power-performant as the rest of the industry continues along the Moore's Law curve.

The PS3 was more difficult because, historically, that's been the Japanese way of doing consoles: convoluted hardware with poor tools and documentation thrust upon the cattle (developers), who spend endless hours sleeping under their desks trying to figure it out. Even the PS1, which is possibly the easiest Japanese console to dev for that I can think of, didn't have proper English documentation for a long time. Sony ultimately didn't count on going up against a US console maker that actually understands coders, one who provided a well-balanced, well-documented architecture very well suited to a variety of games, along with in-person help from day one and killer tools. Combine that with some poor hardware choices on Sony's part, and they paid a heavy price for it. I doubt Sony will make the same mistake again on the PS4. If they do, then you will once again see Sony console holders playing worse versions of games four years into the console cycle, but I think they would be mad to allow that to happen again.


semitope said:
There is also the question of what this broader range of workloads is. If the cell was lacking in terms of processing power required for parts of games multiplats would be complete hell. The cell CAN do it and extra.

Where would this spare power be? The 360 has three PPUs and three VMX units, so some SPUs have to be spent to account for that. The 360's GPU is a generation ahead of the PS3's, so some more SPUs have to be spent to account for that as well, sometimes possible and sometimes impossible. So where is this spare SPU power? You guys talk about it as if there are six idling SPUs still waiting to be used. Games max them out regularly now; how else do you think it's possible to have the PS3's bottleneck-ridden architecture approach the 360 on multiplat games?


E2K said:
But who cares about multiplatform games when hardware exclusives will blow them graphically out of the water anyway?

Depends who you ask; multiplats look the best to me this gen. Fanboys will always pick their console's games as the best looking, but try blind tests with non-fanboys and the results will be totally different. Multiplats are far more important than exclusives now anyway. The majority of game purchases are multiplat; entire businesses are built on that fact. Making the lives of multiplat devs difficult is dumb at best and financial suicide at worst, something that should be abundantly obvious with even the most casual glimpse at this gen.
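For what it's worth, the "spend some SPUs to account for it" framing above assumes the work can be carved into independent slices at all. A hypothetical sketch of that fork/join slicing in plain C with pthreads; shared memory here stands in for SPU jobs, whereas real SPE code would instead DMA each slice into its 256 KB local store:

```c
#include <pthread.h>

/* Hypothetical illustration of spreading one algorithm across a
 * fixed pool of workers, the way SPU job systems slice up arrays.
 * Plain pthreads over shared memory, NOT actual SPU code. */
#define NWORKERS 6

typedef struct { const float *in; float *out; int begin, end; } slice_t;

static void *scale_job(void *arg) {
    slice_t *s = (slice_t *)arg;
    for (int i = s->begin; i < s->end; i++)
        s->out[i] = s->in[i] * 0.5f;   /* stand-in for real per-element work */
    return 0;
}

/* Fork n elements across NWORKERS threads, then join. */
void parallel_scale(float *out, const float *in, int n) {
    pthread_t tid[NWORKERS];
    slice_t slice[NWORKERS];
    int per = (n + NWORKERS - 1) / NWORKERS;   /* ceiling division */
    for (int w = 0; w < NWORKERS; w++) {
        slice[w].in = in;
        slice[w].out = out;
        slice[w].begin = w * per > n ? n : w * per;
        slice[w].end = (w + 1) * per > n ? n : (w + 1) * per;
        pthread_create(&tid[w], 0, scale_job, &slice[w]);
    }
    for (int w = 0; w < NWORKERS; w++)
        pthread_join(tid[w], 0);
}
```

The sketch only works when slices are independent; the "sometimes possible and sometimes impossible" caveat above is exactly about workloads that don't decompose this cleanly.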
 
As far as Hirai and his quotes about lifespan and learning curve, etc etc, that's just obviously some PR BS right there, because the idea that you would purposefully build in complexity into your system for its own sake gets multiple eye-rolls from me. We all know the truth of it, even if in some strange world maybe Hirai himself doesn't; it was just hard to program for given the nature of where the industry and the tools 'were.' Sometimes I think Sony execs feel the need to spin everything into seeming like an on-purpose calculation, when just admitting there were issues wouldn't tarnish the current success/image at all, and not raise any of the "oh brother..." flags that comments like these do. To say nothing of the fanboys around the Internet that actually buy into it.

His wording doesn't exactly have to be read like that. He never said it was intentionally made difficult, but it could suggest ease came second to power. As PR, I've never actually heard another person understand and appreciate the statement, so I'd say it failed. People got pissed when that was said, IIRC.

Where would this spare power be? The 360 has three ppu's and three vmx units, so some spu's have to be spent to account for that. The 360's gpu is a generation ahead of the PS3's, so some more spu's have to be spent to account for that as well, sometimes possible and sometimes impossible. So where is this spare spu power? You guys talk about it as if there are six idling spu's still waiting to be used. Games max them out regularly now, how else do you think it's possible to have the PS3's bottleneck ridden architecture approach the 360 on multi plat games?

I was speaking of the Cell alone, as a processor. It is able to do all the processing that multiplatform games are using the 360's CPU for, in addition to aiding the RSX in graphics tasks. That means it can do the work of a console CPU and then some. You are speaking of porting there, aren't you?

Yes, it depends on who you ask. No multiplatform game has made a significant impression on me this year beyond anything I have seen before.
 
The PS3 was more difficult because historically that's been the Japanese way of doing consoles, convoluted hardware with poor tools and documentation thrust upon the cattle (developers) to spend endless hours sleeping under their desks trying to figure it out. Even the PS1, which is possibly the easiest Japanese console to dev for that I can think of, didn't have proper english documentation for a long time. Sony ultimately didn't count on going up against a US console developer that actually understands coders, and who provided a well balanced well documented architecture very well suited to a variety of games, as well as providing in person help from day one and killer tools. Combine that with some poor hardware choices on Sony's part and they paid a heavy price for it. I doubt Sony will make the same mistake again on PS4. If they do then you will once again see Sony console holders playing worse versions of games 4 years into the console cycle, but I think they would be mad to allow that to happen again.

Where would this spare power be? The 360 has three ppu's and three vmx units, so some spu's have to be spent to account for that. The 360's gpu is a generation ahead of the PS3's, so some more spu's have to be spent to account for that as well, sometimes possible and sometimes impossible. So where is this spare spu power? You guys talk about it as if there are six idling spu's still waiting to be used. Games max them out regularly now, how else do you think it's possible to have the PS3's bottleneck ridden architecture approach the 360 on multi plat games?

Depends who you ask, multi plats look the best to me this gen. Fanboys will always pick their console games as the best looking, but try blind tests with non fanboys and the results will be totally different. Multi plats are far more important than exclusives now anways. The majority of games purchases are multi plat, entire business are built on that fact. Making the life of multi plat game devs difficult is dumb at best, financial suicide at worst, something that should be abundantly obvious with even the most casual glimpse at this gen.

I would say Sony's financial problems stem from the cost of the hardware and its time to market rather than from difficulty in programming for it. If the 360 and PS3 had launched on the same date at the same price, I'm sure the PS3 would be doing a lot better, easy to program for or not. It's a bit of a stretch to look at Sony's position today and blame it on being harder to program for. I would also put the differences in multiplat games down to the 360 having a superior GPU rather than it being easier to work with.

It's obvious that the PS3 was not designed to be hard to program for. It's a tradeoff between performance and ease of use. I'm sure a single-core CPU would be the easiest to program for, but that doesn't mean it's the best solution.

Going by the logic in your post, you could look at last gen, where the most difficult console to program for blew everything else away yet the easiest lost its maker billions, and come to the conclusion that being hard to program for is the best option!
 
The PS3 was more difficult because historically that's been the Japanese way of doing consoles, convoluted hardware with poor tools and documentation thrust upon the cattle (developers) to spend endless hours sleeping under their desks trying to figure it out. Even the PS1, which is possibly the easiest Japanese console to dev for that I can think of, didn't have proper english documentation for a long time. Sony ultimately didn't count on going up against a US console developer that actually understands coders, and who provided a well balanced well documented architecture very well suited to a variety of games, as well as providing in person help from day one and killer tools. Combine that with some poor hardware choices on Sony's part and they paid a heavy price for it. I doubt Sony will make the same mistake again on PS4. If they do then you will once again see Sony console holders playing worse versions of games 4 years into the console cycle, but I think they would be mad to allow that to happen again.

That's too bad. I remember Sony making a lot of noise about their standards support and improved software stack at the launch of the PS3: Collada, OpenGL ES, yada yada yada. It sounded like they had invested significantly in all of that stuff.

Microsoft had so many resources in terms of operating systems, development tools, DirectX, etc., that it's hard to imagine how any other manufacturer on the planet could have fielded a competitive development suite, really. Especially given that Sony, with the PS2, had effectively no operating system to speak of, no networking infrastructure, and no significant means of doing experimentation and development on the platform, all of which left them having to introduce and develop everything with the PS3 hardware.

None of which would seem to have much to do with Cell in particular; surely they'd have had the same problem with any hardware they fielded, unless they had waited until NVidia had the G80 ready or adopted a Microsoft OS for the PlayStation.
 
I would take Joker's commentary on public vs. private a bit further and say that Uncharted 2 'maxing out' Cell isn't even really maxing it out, and that should be readily recognized. Uncharted 2 is not the prettiest that could ever be on the system, and at the same time games not using "100%" are not automatically inferior either, multiplatform titles included. There always seems to be this desire to cast things in terms of absolutes.

Anyway, I really would like to see the architecture continue, and the PS4 would be a great vehicle towards either a spiritual or actual Cell successor. It's harsh to think that they might go elsewhere simply on the basis of 'ease,' as I do believe that tool-wise and experience-wise, the industry will be comfortable enough with the programming models by the time PS4 rolls around. But the sadder reality in the end may simply be that 'good enough' is indeed good enough when it comes to corporate goals, there's not a mad scientist in charge at SCE anymore, and a commodity part would probably get the job done fine for cheaper.

I'm keeping my fingers crossed for the sake of architectural variety and vision in the semiconductor industry though. The 65nm and 45nm cycles have been tough ones for the industry, and we've seen a lot of architectures buckle under the pressures of the market.
 
Microsoft had so many resources in terms of operating system, development tools, DirectX, etc., it's hard to imagine how any other manufacturer on the planet could have fielded a competitive development suite, really.

That's true, although that sometimes works against them because you have all sorts of internal groups trying to get their technologies shoved onto new products, leading to bloated products that try to do so much yet don't do anything particularly well. They managed to stay lean and focused with the 360 though, somehow.


None of which would seem to have much to do with Cell in particular.. surely they'd have had the same problem with any hardware they fielded unless they waited until NVidia had G80 ready, or adopted a Microsoft OS for the PlayStation.

I think Sony could have done much better with their existing hardware; they just lacked interest in helping the studios out. Take GPad, for example, their equivalent of PIX. It's done entirely by the ICE team at Naughty Dog, which is actually a very small team of engineers. Was there really no way for Sony, a huge company with ample resources, to assemble a dedicated tools team from day 1? Instead they were still stuck on the old business model of letting the devs sort the mess out. They throw staggering amounts of dollars at making PS3-exclusive games; all they had to do was take a tiny slice of that and dedicate it to a team that did absolutely nothing but make PS3 development easier. So instead of us struggling to figure out how to debug pixels and the SPUs, we could have hit the ground running with GPad and SPURS back in 2005. I think it would have made a world of difference, possibly letting us have 2009-quality PS3 multiplats in 2007. It's possible that they just expected exclusives to pull them through, but it's pretty obvious that exclusives don't have the power this gen that they had last gen; it's all about the multiplat games, which are consistently the best sellers and best earners.
 
In other words, I'd prefer great art with average tech to great tech with average art. The point being that "best graphics" is not cut and dried, and ultimately I think art direction trumps tech.

Yeah, I recently replayed the GC RE games with the pre-rendered backgrounds, and Silent Hill 3 on the PS2. It's just astonishing how well those aged, and it's all due to impeccable art direction that few games nowadays rival. SH3 has character models that IMO trump those of a lot of current-gen games. I could go on with other examples, even with PS2 and Xbox games where the latter trumped the former technically, but yeah, I completely agree there :cool:

Having said that, something like U2, which combines great art direction and great tech, really stands out. It's much more impressive technically and is prettier than RE5, but I preferred the latter as a game and, to an extent, its main characters' modeling (and they used fewer polys to boot). KZ2 is admittedly an acquired taste from an art-direction point of view, though superb tech-wise.


Given that most games bought are multiplatform, I'd say making a development platform multiplatform-friendly should be a top priority. The old strategy of putting out bizarre hardware with poor tools and support, and relying on devs spending years to figure it all out, is a dead strategy.

I don't think it was really a strategy. Kutaragi was just using SCEI to realize his vision as an engineer. Looking at the differences between the PSX and the PS2, he just got carried away even further with the PS3. The PS2 was a bigger nightmare initially than the PS3, though, from what I recall. I still remember Inafune of Capcom saying, "Basically, there are no libraries".

Back then, they had leverage because of the huge lead they had and this gen would've been the same had the PS3 launched at an appropriate price for a console. It didn't, and Kutaragi is out, and that's why we're hearing of SCEI actually reaching out to developers now to design something that's first and foremost a game console.

It just really seems wasteful not to go with a Cell successor after all the money invested in R&D. However, if the costs of developing proper libraries exceed a certain amount, it's probably best to go with the easier route and make developers happy with good support.
 
The PS3 was more difficult because historically that's been the Japanese way of doing consoles, convoluted hardware with poor tools and documentation thrust upon the cattle (developers) to spend endless hours sleeping under their desks trying to figure it out.

I agree with you, but it seems that Sony already learned the lesson with the PSP. Programming the PSP is way easier than programming the PS2. You don't need to care about DMA setup anymore, for example.

While I don't think making things unnecessarily complex is a good idea, you can only hide complexity so much. I think the Cell is a good design, and once you master it the results can be very good. A Cell2 with a good-performing GPU (AMD?) would deliver very good IQ, if we take into account what devs like Guerrilla and Naughty Dog have been able to do with a mediocre Nvidia part.
 
I think it's kind of ridiculous to suggest that the bar has not been raised above Gears 1 or PGR3. Gears 2 and PGR4 were substantial improvements, imho. Capcom games have looked progressively better with each major release. Rare has progressed from Kameo to Banjo. I don't doubt that Halo Reach and Alan Wake will be stunning as well, and way beyond the launch titles you mentioned. Also, the beauty of exclusive titles is that you can't see how they would look on another console. How do you know that KZ2 and Uncharted 2 couldn't look just as good, if not better, on the 360? We'll never know.

From what Evan Wells has said, it seems that one of the biggest tech secrets of Uncharted 2 is the guaranteed hard drive allowing better streaming than a 360 developer could count on.

It would be interesting to see what could be accomplished on the 360 if Microsoft would permit developers to pass cert with games that require the hard drive.
 
From what Evan Wells has said, it seems that one of the biggest tech secrets of Uncharted 2 is the guaranteed hard drive allowing better streaming than a 360 developer could count on.

It would be interesting to see what could be accomplished on the 360 if Microsoft would permit developers to pass cert with games that require the hard drive.

Only they would never allow a game like that to pass cert unless it had a very good reason for it, such as being an MMO requiring additional content, or the one football-manager sim for the console.
 
A high utilization of resources, realized in part by architectural approaches ranging from multithreading/superthreading, pipelining, dependency decoupling, VLIW, and instruction-level parallelism to OoO and other latency-absorbing mechanisms, is the common trait found in every winning processor design.

Designs that eschew the proper balance of control logic for higher peak performance within a relatively narrow set of conditions always find themselves performing worse under real-world workloads.
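The latency-absorbing mechanisms listed above are, notably, exactly what the SPEs omit in hardware; SPU code compensates in software, classically by double buffering DMA. A minimal, illustrative sketch of that pattern in plain C (memcpy stands in for the asynchronous DMA a real SPE would issue, so nothing truly overlaps here; it only shows the buffer rotation):

```c
#include <string.h>

/* Software double buffering: the latency-hiding idiom in-order SPE
 * cores use in place of OoO hardware. While the "DMA" fills one
 * local buffer, the core works on the other. memcpy is a stand-in
 * for an async DMA request, so this sketch shows only the buffer
 * rotation, not real overlap. */
#define CHUNK 64

static void process_chunk(float *dst, const float *src, int n) {
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * 2.0f;        /* stand-in for real work */
}

/* Scales n floats (n assumed to be a multiple of CHUNK). */
void double_buffered_scale(float *out, const float *in, int n) {
    float buf[2][CHUNK];
    int nchunks = n / CHUNK;
    if (nchunks == 0) return;
    memcpy(buf[0], in, CHUNK * sizeof(float));   /* prime buffer 0 */
    for (int c = 0; c < nchunks; c++) {
        int cur = c & 1;
        if (c + 1 < nchunks)           /* kick off "DMA" of the next chunk */
            memcpy(buf[cur ^ 1], in + (c + 1) * CHUNK, CHUNK * sizeof(float));
        process_chunk(out + c * CHUNK, buf[cur], CHUNK);  /* compute current */
    }
}
```

On a real SPE the memcpy would be a tagged DMA-get, and the process step would wait on the previous tag before touching its buffer; the point is that the "latency absorption" the post describes is done explicitly by the programmer rather than by the hardware.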
 
Games max them out regularly now, how else do you think it's possible to have the PS3's bottleneck ridden architecture approach the 360 on multi plat games?

And yet, there's no way to avoid the 360's bottlenecks (lack of BD and a guaranteed HDD) and approach the fidelity of the PS3 exclusives. So who made the better design, MS or Sony?

And Joker, as much as I respect your opinions and always look forward to your insightful and very informative posts, I have no doubt that you strongly prefer the 360... Try to make it less obvious.
 
A high utilization of resources, which is realized in part by architectural approaches which can range from multithreading/superthreading, pipelining, dependency decoupling, VLIW, instruction level parallelization, OoO, and other latency absorbing mechanisms is the common trait found in every winning processor design.

Designs that eschew the proper balance of control logic for higher peak performance within a relatively narrow set of conditions always find themselves performing worse under real-world workloads.

I agree those are traits found in the current architectural paradigm, but the world is changing. Five years from now, will folk say that x86 or some GPGPU evolution is the 'successful' model? Recognizing that each will still play a role regardless of the other's ascendancy, the point is that approaches are changing: the "need" for a forgiving CPU is reaching a point where further advances offer limited returns in most environments, while massively parallel, simpler-core designs gain in utility.

The irony for Cell is that if the PPE had been more robust, we probably wouldn't hear so many complaints, but then maybe the forced learning with the SPEs wouldn't have taken place, a la the Emotion Engine's vector units. I do think that Cell represented an advanced picture of the industry's direction, and indeed some of its real-world performance achievements certainly reflect that. If there had been the OpenCL, strength of tools, RapidMind efforts, etc., back in 2006 that there are now going into 2010, I don't think it would have been begrudged as it was.

I don't disparage the idea of beefy superscalar cores as being a great thing, mind you; I just want to defend the architecture as not being from some fringe school of thinking with no real-world merits.
 
joker454 said:
Where would this spare power be? The 360 has three ppu's and three vmx units, so some spu's have to be spent to account for that. The 360's gpu is a generation ahead of the PS3's, so some more spu's have to be spent to account for that as well, sometimes possible and sometimes impossible. So where is this spare spu power? You guys talk about it as if there are six idling spu's still waiting to be used. Games max them out regularly now, how else do you think it's possible to have the PS3's bottleneck ridden architecture approach the 360 on multi plat games?

If the design and code are tailored more to the PS3's architecture, then the developers can benefit from the architecture more... like what the Crysis engine developers were able to do.

The question is whether it's worthwhile to spend more time to experiment.

I also don't think there is a general answer to the spare-SPU-power question. It would depend on the approach. The "best" inventions are probably the "simple" ones, like nAo's HDR format and the Saboteur's AA. They may not be suitable for all situations, but they take advantage of the PS3's architecture to achieve very visible output with few resources. Because they are "simple", they may be suitable for the 360 too. The easiest and least efficient/effective "catch-all/brute force" solutions usually spread an existing algorithm across all 6 SPUs. The toughest ones rearchitect the entire pipeline (e.g., KZ2's). They all have their ways of optimizing for the PS3.
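As an aside on why "simple" packing tricks like nAo's HDR format are such cheap wins: they squeeze HDR into an ordinary 8-bit-per-channel render target. The sketch below is the classic Ward RGBE shared-exponent scheme, used purely to illustrate the general idea; it is not nAo's actual LogLuv-style layout, which stores log-luminance and chromaticity instead:

```c
#include <math.h>

/* Ward's RGBE: three 8-bit mantissas share one 8-bit exponent,
 * giving a huge dynamic range in 32 bits. Illustrates the general
 * "HDR in an 8-bit target" idea only; NOT nAo's LogLuv variant. */
typedef struct { unsigned char r, g, b, e; } rgbe_t;

rgbe_t rgbe_encode(float r, float g, float b) {
    rgbe_t out = {0, 0, 0, 0};
    float m = r > g ? r : g;           /* brightest channel sets the exponent */
    if (b > m) m = b;
    if (m < 1e-32f) return out;        /* treat as black */
    int e;
    float scale = frexpf(m, &e);       /* m = scale * 2^e, scale in [0.5, 1) */
    scale = scale * 256.0f / m;        /* = 256 / 2^e */
    out.r = (unsigned char)(r * scale);
    out.g = (unsigned char)(g * scale);
    out.b = (unsigned char)(b * scale);
    out.e = (unsigned char)(e + 128);  /* biased exponent */
    return out;
}

void rgbe_decode(rgbe_t p, float *r, float *g, float *b) {
    if (p.e == 0) { *r = *g = *b = 0.0f; return; }
    float f = ldexpf(1.0f, (int)p.e - (128 + 8));  /* 2^(e-128) / 256 */
    *r = p.r * f;
    *g = p.g * f;
    *b = p.b * f;
}
```

The appeal on console hardware is that the packed value fits formats the GPU already blends and filters cheaply; the cost is precision in channels much dimmer than the brightest one, which is part of why nAo went with a luminance/chromaticity split instead.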

I don't think people are talking "as if there are six idling SPUs still waiting to be used". They base it on the developers' existing output. After all, despite all these complaints, many great PS3 developers have already blown our minds with their contributions. Some of them are still trying.

I remember earlier on you mentioned that the PS3 had problems keeping up with 60fps @ 720p in a baseball game. In the end, MLB The Show 2009 became the best baseball game today (60fps, 1080p, realistic lighting and mo-capped animation, SPU-based custom crowd cheers via voice input). They probably did a lot of tricks to avoid bottlenecking the architecture. Then again, game consoles are small, closed platforms; optimizing for the hardware is their bread and butter.

Now in MLB The Show 2010 they are promising more improvements, including rumored motion-sensing controller support (more SPU work!). Ask them, not us, where they get the extra SPU power. I actually expect to see more improvements in the years to come.

Now, back on topic... I'd think Sony may need to include some kind of SPU compatibility in the PS4 to run PS3 software. This gen, developers rely more on DLC and online gaming to extend their revenue. The user base is also more fragmented compared to the PS2 era. Breaking it all off suddenly would likely have more impact on their bottom line. But this is just an instant guess; I have no insider info.
 