Was Cell any good? *spawn

Right but we're not talking about having part A+B vs just A. We're talking about using the area and power resources that went into the SPEs to do a better CPU
In terms of CPU, how "good" a CPU you could fit on a die that size in 2005/2006 is a wash. The comparisons I've heard from you game programmers between it and Xenon have been pretty inconclusive, with some of you preferring the former and others the latter.
and/or a better GPU.
If they were already having yield issues with the RSX, to the point where they cut pixel ALUs by 25%, what makes you think they could have gone with an even larger die? The problem with the GPU seems to have been nVidia. This is the second time they barely modified a PC design rather than really orienting a design around the specific challenges a console intended to stick around for 5-7 years presents--and maybe that's due to how Sony approached them, I don't know. Either way, I don't think there's any evidence that a more traditional CPU would have resulted in nVidia doing something particularly different than they did. Even if they had...what could you do with the XGPU and Cell? Would it be more or less than what you can do with the current 360 design?

Anyway, I think we can all agree that it absolutely wasn't an economically sustainable design and failed to meet Sony's goals.
 
If they were already having yield issues with the RSX, to the point where they cut pixel ALUs by 25%

Good thing that was never the case, though; the rumour about 32 pixel pipes that never made it in due to manufacturing woes is gunk. What the PS triple was supposed to have instead of RSX is quite interesting (intellectually), but too bad that's not going to be sprayed out into the open any time soon, AFAICT.
 
GPGPU will of course be even better for these tasks in future games, but it wasn't available 7 years ago. It's easy to criticize Cell now when we have better options (wide vector general purpose CPUs and compute capable programmable GPUs)...
I think this is the crux of it. Sony were right in wanting compute performance. There wasn't anything on the drawing board, certainly back at the turn of the millennium when they were considering this, to suggest they'd have a hardware platform to serve these tasks. The choice to design a processor for the job was a fair one. However, the rest of the industry evolved, added GPU compute, and expanded x86 to accommodate the new workloads, and Cell's particular focus no longer stands out. It sits uncomfortably in the middle between the two extremes - lacking the peak throughput of a GPU and the comfort and versatility of a conventional CPU - so it doesn't really belong any more. I can still see value in the idea of small, potent cores, though the way things have developed (both software engineering requirements on a budget and the evolution of other processors) means that such a processor won't have a place alongside CPUs and GPUs.
 
Cell forced developers to think about their data layout and memory access patterns. Efficient usage of Cell required batch processing of data (SoA vector processing of large, access-pattern-optimized structures). This resulted in a lot of research around data-driven architectures and data-oriented design (data items flowing through a pipeline of processing steps). I don't see this as a bad thing at all, because I personally see this as a very good design decision. When done right, this kind of design improves performance a lot, and at the same time improves code quality/maintainability and reduces multithreading problems (fewer synchronization errors, better threading efficiency, and automatic scaling).
See, here's the thing though: while that may be true for game developers, none of what Cell did or what people are calling "data oriented design" is anything new. It has *always* been a better-performing way to write code, even on the big cores, as you have noted. And frankly, this has been well known for decades by anyone programming for throughput. Ironically, it's C structures and pointers that have tended to push people toward less efficient data access patterns, even though C is being heralded in this latest "movement" as the second coming of high-performance programming (*snort*).

So sure, the outcome of people starting to write better code is a good one, but people have been writing that style of code for a long time before Cell...
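To make that concrete, here's a minimal AoS-versus-SoA sketch (all names invented for illustration, nothing Cell-specific; the same layout argument applies on any throughput-oriented core):

Code:
#include <cstddef>
#include <vector>

// Array-of-structures: each particle interleaves all of its fields, so a
// position-only update drags velocities and mass through the cache as well.
struct ParticleAoS { float x, y, z, vx, vy, vz, mass; };

void update_aos(std::vector<ParticleAoS>& ps, float dt) {
    for (auto& p : ps) {
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        p.z += p.vz * dt;
    }
}

// Structure-of-arrays: each field is one contiguous stream, which is exactly
// what DMA batching on an SPU (or any SIMD unit) wants to chew through.
struct ParticlesSoA { std::vector<float> x, y, z, vx, vy, vz, mass; };

void update_soa(ParticlesSoA& ps, float dt) {
    const std::size_t n = ps.x.size();
    for (std::size_t i = 0; i < n; ++i) ps.x[i] += ps.vx[i] * dt;
    for (std::size_t i = 0; i < n; ++i) ps.y[i] += ps.vy[i] * dt;
    for (std::size_t i = 0; i < n; ++i) ps.z[i] += ps.vz[i] * dt;
}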

However GPUs with flexible compute capabilities weren't available at that time and neither were general purpose CPUs with very wide vector units. Sony wanted to have lots of compute performance, and Cell was pretty much the only choice.
Meh, I don't buy this. Like I said, the time delta between PS3 release and G80 was small enough that they would have known about the architecture.
 
I think this is the crux of it. Sony were right in wanting compute performance. There wasn't anything on the drawing board, certainly back at the turn of the millennium when they were considering this, to suggest they'd have a hardware platform to serve these tasks. The choice to design a processor for the job was a fair one. However, the rest of the industry evolved, added GPU compute, and expanded x86 to accommodate the new workloads, and Cell's particular focus no longer stands out. It sits uncomfortably in the middle between the two extremes - lacking the peak throughput of a GPU and the comfort and versatility of a conventional CPU - so it doesn't really belong any more. I can still see value in the idea of small, potent cores, though the way things have developed (both software engineering requirements on a budget and the evolution of other processors) means that such a processor won't have a place alongside CPUs and GPUs.

I'm not so sure that such hybrid devices are going to be out of place any time soon. Tilera are producing stacked CPUs using small, fast cores similar to the SPUs. They are incredibly powerful but also incredibly esoteric to develop on, which isn't a bad thing; the industry needs its non-conformers, otherwise it would stagnate really quickly. Though I think I am getting too old to keep up now; I still remember bending my head around OCCAM and the Transputer. I'm sticking with a Raspberry Pi for now... once it finally arrives (~18 weeks for delivery).
 
This has been a great thread to read, some really interesting perspectives here.

Meh, I don't buy this. Like I said, the time delta between PS3 release and G80 was small enough that they would have known about the architecture.

Wouldn't resources be a possible issue in this case? Wouldn't it take a lot more work to design a G80-type card into the PS3 than the 7800 currently in the system?

Are you saying they could have dropped Cell when they signed on with Nvidia and just used a 4 core Xenon-like CPU with a G80? Sounds like a wasted investment considering how much they already put into creating the Cell at this point.
 
Meh, I don't buy this. Like I said, the time delta between PS3 release and G80 was small enough that they would have known about the architecture.

PS3 was late to market because of Blu-ray supply, so it should be considered a late-2005 design, not a late-2006 one; G80 was released simultaneously with PS3. Also, I don't think Nvidia would willingly have shared its newest unified technology with Sony, especially when Sony asked for a GPU so late in the design process (as a replacement for the second Cell).

For me, G80 was a no-go for PS3.
 
I was responding to the notion that Sony was somehow oblivious to the direction of GPUs and GPU computing. Even if they had completely nailed down everything a year earlier, if they had zero knowledge of the potential of a GPU that was only a year or two out, then they had major problems. It's just not very difficult to predict that sort of time frame... no major architectural shifts happen out of the blue with no warning. They should have been fully aware that unified shaders, high orthogonality, efficient branching, etc. were all on the way.

And let's be totally clear on one thing - I'm not complaining at all about how "hard" Cell is to program, or it being "different" or any of that. In fact I completely agree with the folks who are saying that it's really not that hard. So let's keep those Sony marketing comments out of this discussion... I'm assuming that everyone here knows how to write good code on Cell and other processors so let's move past that.

My point is that the hardware - particularly the lack of caches - makes irregular/random-access work totally infeasible, which is *kind of important* for efficiency. Without that, it's really no better than a GPU. And for all the stuff where you want to argue "oh, I'm just going to use it like 7-8 serial 3.2GHz processors", I've got news for you: you can't just throw serial streams of instructions and branches at it and expect it to run full tilt. It's fundamentally a throughput processor (like a GPU), and using it like a CPU will always be less power-efficient than using an architecture designed for those sorts of instruction streams.
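To illustrate what "throughput processor" means in practice, here's a minimal sketch of the streaming pattern a cacheless local store forces on you (everything here is invented for the example; memcpy stands in for the asynchronous DMA that real SPU code would double-buffer so the next transfer overlaps compute on the current block):

Code:
#include <cstddef>
#include <cstring>

constexpr std::size_t kBlock = 4096; // floats per block; fits easily in a 256 KiB local store

// Sum a large array by staging it block-by-block into a small on-chip buffer.
float sum_streamed(const float* main_mem, std::size_t count) {
    static float ls[kBlock]; // stand-in for a local-store staging buffer
    float total = 0.0f;
    for (std::size_t base = 0; base < count; base += kBlock) {
        const std::size_t n = (count - base < kBlock) ? count - base : kBlock;
        std::memcpy(ls, main_mem + base, n * sizeof(float)); // "DMA in"
        for (std::size_t i = 0; i < n; ++i)                  // compute on-chip
            total += ls[i];
    }
    return total;
}
// A pointer-chasing workload has no predictable blocks to stage like this,
// which is exactly why irregular access hurts so much without a cache.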
 
Meh, I don't buy this. Like I said, the time delta between PS3 release and G80 was small enough that they would have known about the architecture.
How PS3 ended up being designed is a separate discussion from whether Cell had anything to contribute, but even then, unless you expect Sony to have scrapped all their designs from the previous years and thrown together a G80 plus some CPU at the last minute, there's no legitimate reason to think Sony could have gone with a G80 system.

The thing with technology is that it's always advancing, so if you wait just a little bit longer for the next great thing to cross the horizon and be ready to use, there's another next great thing showing just over the horizon, tempting you to wait another 6+ months. Sony made a choice in ~2000 to create very fast programmable hardware, with the versatility of a CPU but much greater throughput. The result did work for their targets, even if developers hated it and more modern designs have rendered it obsolete. The worth of Cell should really be evaluated against what was available and upcoming from 2000 to 2004, when HW designs had to be finalised ready for production. IMO the only argument against Cell is if there was an alternative that was as fast, such as if Xenon could perform the same workloads Cell is doing. If nothing else was as capable in terms of what devs actually get from the hardware (say, a less powerful processor that's easier to program and so yields better real-world results), then Cell was good in some respects.
 
Given that the decision to go for Cell was made around the same time the world's leading CPU developer was aiming for nothing other than high clock speeds, and fell flat on its face with a horrible turd, I'd say that Cell was a magnificent design for its time, tackling early all the problems that the tech world only recognised later.

No, it wasn't perfect, but it identified the problems and used solutions that are now in widespread use: multiple cores, fat internal busses, and lots of internal memory/cache. The problem is that the rest of the world moved forward while Cell stood still; given how x86 evolved, I don't doubt you could add semitransparent caches (i.e. use a configurable amount of the LS as cache) and scatter/gather without penalizing code that fits in the LS and really exploits it with manual DMA-ing.
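As a rough illustration of the semitransparent-cache idea, here's a toy direct-mapped software cache carved out of a small on-chip buffer (all names and sizes invented; memcpy stands in for a DMA line fill, and addr is a byte offset into main memory):

Code:
#include <cstddef>
#include <cstdint>
#include <cstring>

constexpr std::size_t kLineBytes = 128; // one DMA-friendly line
constexpr std::size_t kNumLines  = 64;  // 8 KiB carved out of the LS

struct SoftCache {
    std::uint8_t lines[kNumLines][kLineBytes]; // cached line data
    std::size_t  tags[kNumLines];              // base offset held by each slot
    bool         valid[kNumLines] = {};

    // Read one byte through the cache; a miss pulls in a whole line.
    std::uint8_t read(const std::uint8_t* mem, std::size_t addr) {
        const std::size_t base = addr & ~(kLineBytes - 1);
        const std::size_t slot = (base / kLineBytes) % kNumLines;
        if (!valid[slot] || tags[slot] != base) {             // miss
            std::memcpy(lines[slot], mem + base, kLineBytes); // "DMA in"
            tags[slot] = base;
            valid[slot] = true;
        }
        return lines[slot][addr & (kLineBytes - 1)];
    }
};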

Guess the shoehorning into two markets proved to be too much, leaving a subpar solution on both sides (higher cost for PS3; performance characteristics fixed across refreshes/shrinks for servers).
 
Cell was done at least two years before G80, and when G80 released it cost way too much to go into the PS3; it would have taken even longer (if ever) to become price-competitive with 360. This was discussed quite extensively even back then, and there wasn't much disagreement.

Otherwise, most of what I think about Cell has been said. Successful or not, I am convinced it was a groundbreaking design (including the local store, FlexIO and XDR) that will be seen as important in history, which of course doesn't automatically mean financially and strategically successful. At the time it offered the most performance per watt, supercomputers built on it broke records, and it still holds up quite well for some workloads considering its age.
 
Is his argument that they should have ditched Cell and gone with something like G80 in the PS3, or that Cell should have been ditched way earlier, because they should have had the foresight to see it was a one-and-done single-generation architecture? There is a distinction between the two arguments. Obviously they couldn't have put G80 in the PS3, but they could have cancelled their investment in Cell earlier and gone with an architecture more in-line with the 360 and remained competitive. I'm not sure what the right call is either way, I'm just trying to discern which of the two arguments Andrew is making.
 
I do think it's fairly easy to answer the question: was Cell a good long-term investment?

If it was a good investment they'd spend the money. They didn't, so it wasn't. They have the money, so that isn't the problem. You don't need to get into the technical details to get to that answer. Whether it was a good short-term design is for the people with the technical backgrounds to sort out.
 
A G80-based card would've ended up being way slower with a similar number of transistors; flexibility comes at a (high) price.
Sure, it would've enabled better effects and better efficiency, but I doubt it would run the vast majority of games/use cases with performance similar to or better than RSX.
 
...that Cell should have been ditched way earlier, because they should have had the foresight to see it was a one-and-done single-generation architecture?

If it was a good investment they'd spend the money. They didn't, so it wasn't.
I don't think it's as straightforward as that. The design was for a versatile processor that could have been used across Sony devices. Had Cell and SPURSEngine found widespread adoption across devices, the development of the platform would have continued. There's an argument that Cell wasn't suited for high-end CE devices, but we also know that Sony as an organisation had no internal continuity, and Ken's vision may not have been impractical but may simply not have resonated with the rest of Sony. As technology progresses, Cell becomes smaller and less power-hungry; at some point it would have been suitable for low-power devices while still offering all that programmable power. If the Cell programme had continued, its flexibility versus custom ASICs may have proven very worthwhile. Sony's latest TVs could all be running a selection of PSN titles on Cell, for example, instead of Sony needing to create a platform-agnostic system and go head-to-head against MS and Apple, who are far more experienced at that sort of thing.

Those sorts of discussions will never come to a satisfactory conclusion; I mention it only to cast doubt on the idea that if Cell had value then Sony would have invested in it. The corporate politics of Sony have been crap for twenty+ years, and that is enough to kill a great idea with a lot of potential.
 
Is his argument that they should have ditched Cell and gone with something like G80 in the PS3, or that Cell should have been ditched way earlier, because they should have had the foresight to see it was a one-and-done single-generation architecture?
The latter, I've said it several times explicitly now but I guess people are just skimming posts. All of this talk of G80 was simply in response to the assertion that the folks at IBM/Sony had "no idea" that GPUs and CPUs were already on a trajectory to take over the space that Cell was aimed at. That notion is either 1) nonsense or 2) indicative of some pretty serious issues from the decision-makers.

That's all I was saying on that point. Please read the context.
 
I don't think Sony saw a future in Cell either. They just needed it for PS3, like they needed the Emotion Engine for PS2. I'm pretty sure that, just as with PS2, where they finished GS before EE work even started, Sony actually had a monster rasterizer ready to be paired with Cell. They just couldn't build the 32-SPE Cell that they wanted, so they had to stick RSX in there instead.
 
The latter, I've said it several times explicitly now but I guess people are just skimming posts. All of this talk of G80 was simply in response to the assertion that the folks at IBM/Sony had "no idea" that GPUs and CPUs were already on a trajectory to take over the space that Cell was aimed at. That notion is either 1) nonsense or 2) indicative of some pretty serious issues from the decision-makers.

That's all I was saying on that point. Please read the context.

That's what I'd gotten out of your statements, but the responses seemed to be written as if you were suggesting G80 should have been in the PS3. I didn't want to put words in your mouth, so I wrote it as a question, hoping it would be clarified for everyone.

...
Those sorts of discussions will never come to a satisfactory conclusion; I mention it only to cast doubt on the idea that if Cell had value then Sony would have invested in it. The corporate politics of Sony have been crap for twenty+ years, and that is enough to kill a great idea with a lot of potential.

I don't know. It was not just Sony. It was Sony, Toshiba and IBM that abandoned it. You could be right, but the most likely situation is that the partnership looked at the design and decided it was not an architecture and business worth continuing.
 
The latter, I've said it several times explicitly now but I guess people are just skimming posts. All of this talk of G80 was simply in response to the assertion that the folks at IBM/Sony had "no idea" that GPUs and CPUs were already on a trajectory to take over the space that Cell was aimed at. That notion is either 1) nonsense or 2) indicative of some pretty serious issues from the decision-makers.

That's all I was saying on that point. Please read the context.

I admit even when I read shifty and your posts fully, I misunderstood your original point. Sorry about that. This does make more sense now.
 
FWIW I've never considered Cell forward-looking in design.
I still think it was designed as much as an evolution of the PS2 with its VUs as it was an attempt at a change of paradigm.
I think many of the its limitations were apparent before anyone wrote a line of code on it.
I think a lot of people just loved the idea it was different, in much the same way that many here loved the idea of PVR and tiling 10 years ago.
The idea of it being used instead of custom silicon for embedded devices was brain-dead stupid, and demonstrates an inability to learn from other people's failures. Long term, custom silicon is always cheaper, and the markets always become cost-sensitive.
And I don't think it or similar architectures (many memory pools with explicit DMA) have any real future.
 