Cell/CPU architectures as a GPU (Frostbite Spinoff)

True, but those seem like the kinds of things that are easily addressed by cheap dedicated hardware now. At the time the PS3 was a really good, flexible option as a Blu-ray player, being only slightly more expensive than a dedicated player. I'm not sure there's much of a future in that space for exotic CPUs, because for codecs/disc formats to hit critical mass the hardware has to be cheap. The encoding side is another story. I'm not sure what the world of video encoders looks like right now.

Absolutely. 2011 is not 2006 in this space, no question about that.
 
And this is the interesting bit! Compute shaders are enabling rendering efficiencies. This compute-shader approach is new to PS3, right? The GPU can't do it, but DICE, for one, are using Cell as a compute-shader engine. Now, unless most of the audience at DICE's GDC presentation went, "So what? We've been doing this sort of thing for a couple of years now" (and maybe that happened, as I don't know the state of console development), this is a new way of thinking. It's a way of thinking born out of GPU evolution, which was itself born out of a gradual analysis of workloads and the development of progressive solutions. Split work into tasks and create shaders. Unify shaders for efficiency. Add compute shaders for flexibility. But no-one back in 2000 was designing 2011-class GPUs, not because of a lack of funding, but because thought patterns hadn't got that far.

Now we have new GPU designs with compute shaders, but we also have a 2005 design capable of running the same concepts, because it wasn't designed around a certain job-based way of thinking - it was designed to just crunch numbers. No-one started writing compute shaders on PS3 on day one because no-one thought about it. Heck, a lot of developers barely used the SPEs because they didn't know how and had deadlines to hit. They have since learnt to re-engineer their code to offload work typically associated with the GPU onto Cell to support the weak GPU. But, unless I'm mistaken, the whole way of thinking about graphics has always followed the DX paradigm. No-one on day one was looking at Cell and thinking, "I bet we could do some analytical AA on that." Nor was PS3 designed with MLAA in mind. It was just an idea someone had, which Cell could handle because it had a mix of performance and flexibility. Likewise, how many years have had to pass before someone thought of doing the things DICE are doing? Unless it's all old news, DICE have taken a leap forwards in thinking. This leap came from GPU evolution, and without DX11 it would likely never have happened on PS3, but it is enabled on PS3 for the same reason as MLAA.

The question, therefore, is: what else is possible when you have a truly free processing resource that can try any algorithm reasonably fast? What techniques and approaches are possible that no-one has thought of, because everyone is thinking in terms of DX-based GPUs, solving problems as they encounter them instead of dreaming up renderers from the ground up? Frostbite 2 is showing that fully programmable hardware from 2005 can implement some of the cutting-edge concepts of six years later, which surely suggests the possibility of as-yet-unknown rendering approaches. Ideas that'll appear another five years down the line. Ideas that would never appear on PS3 or Larrabee because these are commercial ventures, and which will rarely appear in academia because of limited budgets and legacy hardware designs.

It's this whole potential, whether it leads to anything or not, that I'm excited about, and I wonder whether we could ever measure or evaluate it without having a lot of clever folk sit down with a CPU architecture, funded for a couple of years, to create whatever renderers they can without any obligations, deadlines, or required end products!

I see your point, however I think the key word from your post is "gradual", as tech evolves fairly slowly. On day one the chosen GPU for a piece of hardware does everything needed; after all, if it doesn't, then a mistake was made in its choice. Over time new ideas evolve gradually, as you say, but by the time they come to fruition the general-purpose CPU hardware on the chosen platform will likely be really old. Sure, it can do it, but is that worth the overall cost? I personally don't think so.

I don't think you give the coders enough credit as well. Even if they are saddled with fixed hardware they are always thinking out of the box and can make even the most rigid hardware do bizarre things. You can date that as far back as the Atari 2600, which was fixed hardware capable of only rendering a player and a missile, but ultimately taken far beyond that. Likewise, at the end of the day even the less general-purpose 360 will be capable of running Frostbite 2, so it's not like its lack of general-purpose CPU grunt held ideas back.

I'd further argue that even if SPUs didn't exist, someone at this point would still have been trying out what Frostbite is trying today. The evolution of games is what pushes the tech needed to render them, which in turn pushes APIs like DX to evolve to support new paradigms, and in turn devs will try out new things on PC and then evolve them for console. The SPUs' ability to mimic compute shaders to a degree is convenient, but I personally do not think their existence influenced where render technology was going. I would argue that they greatly sped up the move towards multi-core support and the rethinking of data structures, which is very significant, but they did not directly lead to compute shaders on their own. Regardless of their existence, I'd say the direction of Crytek and Frostbite would have been the same. There's really no way for me or you to prove otherwise, naturally, but it's just my opinion.

Anyway, Frostbite 2 will be an interesting test case indeed. It's clear that some stuff will be tricky to do on 360; however, the 6 to 10 ms of GPU time to build the G-buffers on PS3 might be quicker on 360, the 1.3 ms resolve time to XDR won't be needed on the 360 build, and the 360 build won't have JTS stalls. So in the end any SPU gains may be a wash. We'll have to wait and see.
 
Good point but at what performance penalty?

Cell can handle future solutions no problem but is the performance it delivers in those solutions enough to win out over an older more fixed function design utilising older solutions at a higher speed?
No, it never will be. Given the discrepancy in transistor density and functional logic, Cell itself isn't going to compete in terms of output. But being programmable, PS3 seems to have got longer legs, as it can implement new ideas, and I'd say that points to a potential future. Next-(next-next-)gen could go with programmability over fixed function. Something scary like Larrabee. The initial expectation would be that performance would be terrible as it tries to run DX-type engines, but there's the possibility that some bright sparks would find a completely different way to process the graphics data and manage to make the end result comparable to other DX GPUs, with the plus point that those computing resources can be turned to any other job when graphics power isn't needed.

I don't think you give the coders enough credit as well. Even if they are saddled with fixed hardware they are always thinking out of the box and can make even the most rigid hardware do bizarre things.
*cough cough* I have always championed the developer, and that's one reason I'm an avid supporter of the single-processor solution: I know that, given something to work with, human beings would eventually find ways to make it do amazing things. NAO32 is a prime example. That's using human ingenuity to work around hardware limits and maintain DX-class rasterisers (a specification that is itself headed towards open programmability). What would happen if these same devs had the chance to work on a blank-slate solution? Any choice of any rendering technique: ray casting; rasterising; volumetrics; bozoparticulate interferraction; sonographic impulse metacalculus (okay, devs would never give their new algorithms such funky names!).

I'd further argue that even if SPUs didn't exist, someone at this point would still have been trying out what Frostbite is trying today. The evolution of games is what pushes the tech needed to render them, which in turn pushes APIs like DX to evolve to support new paradigms, and in turn devs will try out new things on PC and then evolve them for console. The SPUs' ability to mimic compute shaders to a degree is convenient, but I personally do not think their existence influenced where render technology was going.
That's what I was saying, though. SPUs haven't been a focal point for ingenuity. Ideas have come from the PC space, which is an evolution of DirectX. Every bit of silicon that goes into a GPU is being designed to run DirectX, and has to adopt the ideas discussed in the DX committees. When IHVs have tried to put in novel solutions, the lack of API support and install base means they have often been a waste of time. It's been a slow evolution of ideas that has seen the creative software rasterisers of the pre-GPU era forgotten, never explored and never given a chance to evolve in the same way, so we can't evaluate them on equal terms. It's like comparing one cake recipe you're unsure of, that's just words on a page, with another you've cooked and eaten and that tastes great. Are you going to waste ingredients trying that odd cake (turnip and marmalade) or stick with what you know tastes great?

I would argue that they greatly sped up the move towards multi-core support and the rethinking of data structures, which is very significant, but they did not directly lead to compute shaders on their own. Regardless of their existence, I'd say the direction of Crytek and Frostbite would have been the same. There's really no way for me or you to prove otherwise, naturally, but it's just my opinion.
No, I agree with you. Which is my point! Cell never had a chance to do what it was really capable of (same with Larrabee, or any other CPU concept) because it has to follow the conventions as part of its business. I was never saying Cell inspired DX11. I'm saying Cell was capable of DX11. CPUs are capable of techniques not yet thought of - we just need someone to think of them and implement them! But we also need CPU architectures capable of running them.

And this all ties back to the evolution of GPUs and CPUs. The ideal hardware, giving developers the least aggro and the most flexibility, meaning the least redundant processing resources, is a single-ISA computer architecture. This doesn't work because GPUs are more efficient, as we all know. The idea of one processor architecture that can be scaled to fit in a console, a handheld, a TV, a synthesiser, and power all the different functions has always been an unrealistic dream. What part of that is because no-one has found the ideal graphics rendering system, though? Is it actually possible, only the solution has always been overlooked? Wouldn't it be great if, in some years, there was only one ISA, and every bit of code could be ported to different devices! Okay, it's not going to happen, but there's value in determining whether that's because it is impossible, as there's no rendering solution CPUs can do that will compete with GPUs, or because, even though such a solution could exist, the world is locked into the current ways.
 
I was never saying Cell inspired DX11. I'm saying Cell was capable of DX11. CPUs are capable of techniques not yet thought of - we just need someone to think of them and implement them! But we also need CPU architectures capable of running them.

And this all ties back to the evolution of GPUs and CPUs. The ideal hardware, giving developers the least aggro and the most flexibility, meaning the least redundant processing resources, is a single-ISA computer architecture. This doesn't work because GPUs are more efficient, as we all know.

You have to consider the comparative efficiencies here. Whilst you can praise the implementation for its comparative efficiency when the comparison is solely within a single architecture like the PS3, can you say that the comparative efficiency holds up against a modern GPU, even considering a more modern Cell processor? You have to consider system-wide efficiency. It's all well and good making the most general-purpose hardware, but as 3dilitente said, when you're turning off most of the chip at any one point due to power considerations, do you really gain enough flexibility to make up for the cost to outright performance? The specialisation in game systems of having specialist CPU and GPU hardware makes sense if both are more power efficient at what they do. Consoles are running against a power wall more than anything, so performance per watt could almost be considered the most important metric, so long as it's good enough in other areas.
 
Ideas have come from the PC space, which is an evolution of DirectX. Every bit of silicon that goes into a GPU is being designed to run DirectX, and has to adopt the ideas discussed in the DX committees. When IHVs have tried to put in novel solutions, the lack of API support and install base means they have often been a waste of time.

Well, the DirectX guys don't work in a vacuum; the ideas they implement in the DX API are the result of many things, including developer feedback and trends. It behooves them to do that, otherwise they become more of a burden than anything else. Also, they can't just implement anything in there, otherwise the API would become a mess, and IHVs sometimes want to shove stuff in there that no one was necessarily crying out for, more as a way to differentiate themselves from the competition.


Cell never had a chance to do what it was really capable of (same with Larrabee, or any other CPU concept) because it has to follow the conventions as part of its business.

I'd argue that Cell totally had a chance from day one to show what it was capable of; that was the purpose of Sony first parties, after all. They only had one platform to be concerned with, so it was a clean SPU slate, and it was their job to show what SPUs can do. Some did well, but even with all those years and financial backing they still didn't eclipse the best from third parties. Some, it should also be noted, didn't do so well with the Cell clean slate. Those are pretty telling arguments right there as to how tough it is to get solutions working on fully general-purpose hardware in a competitive timeframe.

You could argue, as you say, that they were still bound by DX/GL conventions, thinking, and APIs. But you have to be realistic here. In fairy-tale land, where all studios have infinite money, no milestones, and Kate Beckinsale feeding them grapes as they code, then OK, give them a totally clean slate and see how many years it takes them to come up with something that ooohs and aahhs. But take that kind of mentality and put it in a competitive environment and the results would be brutal. The existence of DX/GL is to save time and best leverage GPU hardware. You could argue that it can be confining, but overall it leads to better and faster results.

One could even make the counter-argument that fixed-function GPUs let devs hit GPU limits quicker, because the combination of APIs like DX and fast hardware implementations lets them max out their ideas faster. That in turn *forces* them to think of new ways to do what they want to do because, well, they've maxed out the current fastest hardware implementations, so now what? Let's think of something new, try it out, show some proof-of-concept pieces, get these ideas implemented into new GPUs, rinse and repeat. In fact, taking this further, what if I were to claim that if we went with a totally clean slate of 100% general-purpose processing and no helper APIs, progress would actually slow down? I mean, that's kind of what happened back in the day with software renderers. They hit a performance wall and things weren't changing very much. There was no burden of fixed-function GPUs or graphics APIs, but instead we were all stuck waiting for one company, namely id, to move things forward because things were moving at such a glacial pace. Then the GPU came along and progress exploded.
 
I don't think you give the coders enough credit as well. Even if they are saddled with fixed hardware they are always thinking out of the box and can make even the most rigid hardware do bizarre things.

This. I'll name Crytek, since everything they've shown can run in DX9: high-quality object motion blur with edge smoothing despite DX9 API limitations. Early on there were devs saying that wasn't possible under the DX9 API.
 
I'd argue that Cell totally had a chance from day one to show what it was capable of; that was the purpose of Sony first parties, after all. They only had one platform to be concerned with, so it was a clean SPU slate, and it was their job to show what SPUs can do.
You can't buy inspiration, though. Ideas just happen, often as a coming together of existing ideas with a splash of out-of-the-blue inspiration. Once an idea has been conceived, like reconstructive AA, it is explored by the avant-garde and develops into either a useful feature or a future evolutionary path, or is found to be something of a dead end. Getting those new ideas requires particular imaginative thinkers given particular freedoms and time.

But you have to be realistic here. In fairy-tale land, where all studios have infinite money, no milestones, and Kate Beckinsale feeding them grapes as they code, then OK, give them a totally clean slate and see how many years it takes them to come up with something that ooohs and aahhs. But take that kind of mentality and put it in a competitive environment and the results would be brutal. The existence of DX/GL is to save time and best leverage GPU hardware. You could argue that it can be confining, but overall it leads to better and faster results.
I agree. Absolutely. I've never said otherwise. In terms of business, it's more cost-effective to go with what you know and develop on top of current thinking, rather than starting anew with a blank canvas and no idea whether what you're trying to put on it will ever lead to a workable product.

I'm reminded of Laa-Yosh's tale of the transition to pure raytracing. There were various modellers and renderers back on 8-bit home computers. The Amiga had a few, mostly rasterisers that rendered triangles. There was one novel program called Real3D that was a CSG raytracer, and although it couldn't model some surfaces as well as the triangle-based modellers, it was several times faster and had better accuracy and quality for those models that could be built from CSG. The offline graphics scene evolved principally on PC after the Amiga's demise, although Real3D made it across. It introduced non-tessellated HOS such as NURBS meshes and metaballs, and was capable of realistic reflection, refraction and shadows, producing images of much higher realism than the likes of StudioMAX. However, raytracing was slow, and the scanline renderers gained much more support and investment. They added features like GI and soft shadows, but each additional feature was a new engine on top of the scanline core. The raytracing solution, slow as it also was, could integrate these features naturally as part of the lighting process - it just needed more rays.

We have now got to a point where the complexity of modern renderers - needing individual lightmaps to be tweaked and loads of setup - has outweighed their raw rendering-speed advantage, and Laa-Yosh is looking at doing everything in a straight raytracer now. The speed of CPUs has caught up to the point where raytracing is faster overall in a production pipeline than all the speed-generating tricks.

Similarly, to me at least, 3D graphics has followed a path of tessellated triangles for exactly the same reason. We are now adding sophisticated lighting and shading which doesn't fit naturally with that rendering method, causing a fracturing of the rendering pipeline and a lot of juggling. If we went back to square one, given a set of resources and all the prior knowledge, is it not possible to produce a new rendering paradigm that accommodates all these problems by design, rather than the current model that deals with each new problem on top of prior solutions, because back in 1999 no-one was thinking about how to add GI, AO, surface shading etc., and every attempt to deal with them had to be backwards compatible with all the existing 3D rendering software?
 
This. I'll name Crytek, since everything they've shown can run in DX9: high-quality object motion blur with edge smoothing despite DX9 API limitations. Early on there were devs saying that wasn't possible under the DX9 API.

Brute force is brute. :p
 
The real question of course is which architecture yields A.I. Strong and which yields A.I. Weak. Who cares about graphics? ;-)
 
Actually that should perhaps be I.M. Strong and I.R. Weak. ;)

Perhaps, but moderation issues prevent this.

Infractions/warnings/bans to be precise

Actually, it does raise a point: would a CPU which is 'good' at GPU work be 'bad' at CPU work like A.I.? Does the concept of a CGPU displease the Strong A.I. for this reason alone?
 
Actually, it does raise a point: would a CPU which is 'good' at GPU work be 'bad' at CPU work like A.I.? Does the concept of a CGPU displease the Strong A.I. for this reason alone?

Traditional, weakly programmed A.I. says yes. It's often a heavily branching routine using up lots of RAM and jumping all over the place. However, imho those are crap AI routines in the first place, and badly optimised code second. A.I. code should almost always be able to be rewritten around a streaming data model, which is then more suitable for 'GPGPU'-type cores.
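To make that last point concrete, here's a minimal sketch (plain C++, with hypothetical names that aren't from any real engine) of what a streaming data model for A.I. can look like: per-agent state laid out as contiguous arrays and updated in homogeneous batches, rather than as branchy objects chasing pointers. Batches of this shape can be streamed through an SPE's local store or mapped onto a GPGPU kernel with little change.

```cpp
// A minimal, hypothetical sketch of a streaming data model for A.I. updates.
// Nothing here is from a real engine; the point is the data layout and the
// homogeneous per-batch pass, which suits SPE-like and GPGPU-like cores.
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Structure-of-arrays: each field is a contiguous stream, so a batch of agents
// can be DMA'd into a small local store (or handed to a compute kernel) whole.
struct AgentStreams {
    std::vector<float>        pos_x, pos_y;  // current positions
    std::vector<float>        tgt_x, tgt_y;  // current target positions
    std::vector<float>        speed;         // units per second
    std::vector<std::uint8_t> state;         // small state enum, kept branch-light
};

// One homogeneous pass over a contiguous range of agents: the same few
// instructions run for every element, with no pointer chasing.
void seek_targets(AgentStreams& a, std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i) {
        float dx  = a.tgt_x[i] - a.pos_x[i];
        float dy  = a.tgt_y[i] - a.pos_y[i];
        float len = std::sqrt(dx * dx + dy * dy);
        float inv = (len > 1e-6f) ? 1.0f / len : 0.0f;  // cheap, predictable branch
        a.pos_x[i] += dx * inv * a.speed[i] * dt;
        a.pos_y[i] += dy * inv * a.speed[i] * dt;
    }
}
```

The genuinely branch-heavy decision-making can stay on the PPU/host side, writing targets into streams like these for the number-crunching cores to consume.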

Of course, this is a good example of the catch: if the reality turns out to be that no-one ever cares to evolve AI in that direction, then weak 'traditional CPU' capabilities are going to hurt nevertheless. This is the same type of issue that the PS3 faced in the first place, in that there was not enough drive to make changes here, for various reasons.
 
I would love to see the evolution of this architecture in the next generation of consoles. I think it meshes very well with a 10-year lifecycle. Does anyone think it would be better to ditch all the tools born from/for Cell for something else next gen? At this point, I believe it would be easier for devs to hit the ground running with an advanced Cell next gen. To start over with another processor seems like a waste of tools and research. On top of that, it would seem to gift devs more time to be inventive and give birth to new techniques on Cell. Maybe completely ray-traced games would have a chance to be realized on PSN or even on disc.
 
Well, fixed function is obviously on the way out, if slowly and only to some extent. All GPUs are moving towards unified shaders and programmability. You have CUDA, OpenCL and compute shaders on the PC side. DX11 allows compute shaders to be grouped and to share information.

I'm not that knowledgeable about GPU hardware, really, but I'd be interested in a sort of bird's-eye comparison of an SPE vs a compute shader or another GPGPU implementation. I'm assuming the GPGPU APIs would lack some of the flexibility of the instruction set of a SIMD processor. That might be a good starting point for the conversation.

I guess something like AMD's Fusion is also very similar to a console implementation, where the CPU and GPU are closely tied and the line between the two is blurring. The point of the SPE and of GPGPU is mostly the same - fast processing of parallel data - so the solution ends up being, in theory, the same from a very high level. The PPU in Cell is more like a traditional CPU, the SPEs handle the SIMD work, and RSX provides the GPU functions. In DirectX 11 you have a CPU, and instead of a general-purpose SIMD processor you get similar functionality from the GPU side. So, in my mind, the PS3 and DirectX 11 are both heading in the same direction from opposite ends of the spectrum, I suppose.
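For the bird's-eye comparison asked for above, here's a rough conceptual sketch rather than real SPE code: the dma_get/dma_put helpers are hypothetical stand-ins (implemented as plain memcpy so the snippet compiles anywhere), standing in for the asynchronous MFC transfers a real SPE job would issue into its 256 KB local store.

```cpp
// Conceptual contrast between the SPE model and a compute shader. The dma_*
// helpers are hypothetical stand-ins, not real MFC intrinsics; on an SPE they
// would be asynchronous (and typically double-buffered) DMA transfers.
#include <algorithm>
#include <cstddef>
#include <cstring>

constexpr std::size_t kChunk = 1024;  // elements staged per chunk

void dma_get(float* local, const float* main_mem, std::size_t n) { std::memcpy(local, main_mem, n * sizeof(float)); }
void dma_put(float* main_mem, const float* local, std::size_t n) { std::memcpy(main_mem, local, n * sizeof(float)); }

// SPE-style: the programmer explicitly stages data through a small local
// buffer, runs the kernel, and writes results back, chunk by chunk.
void spe_style_scale(const float* in, float* out, std::size_t count) {
    alignas(16) float local[kChunk];  // stands in for the local store
    for (std::size_t base = 0; base < count; base += kChunk) {
        std::size_t n = std::min(kChunk, count - base);
        dma_get(local, in + base, n);
        for (std::size_t i = 0; i < n; ++i)
            local[i] *= 2.0f;         // the actual kernel: one multiply per element
        dma_put(out + base, local, n);
    }
}
// A compute shader expresses only that one-line kernel (one thread per element,
// optionally sharing data within a thread group); the chunking, staging and
// scheduling above are done for you by the GPU hardware and driver.
```

Which is roughly why the two models converge from opposite ends: the SPE gives full instruction-set flexibility but makes data movement the programmer's problem, while the compute shader hides the data movement but confines you to the kernel shapes the API allows.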
Well, I have always been a big fan of Larrabee... even though it was obvious it would never succeed in the PC world, because most of its features would go unused. In that sense I agree with Shifty: closed hardware is completely exciting to develop for.

That's what I find appealing about Cell now that Battlefield 3 is going to come out. Cell was initially meant to be a CPU and a GPU at the same time so it's not an accident that they are using it for graphics techniques.

Even so, I am a big follower of ATi and their GPUs and they will always be there to help the main CPU achieve results that you can't reach using software solutions only.

Also, I've been a fan of software rendering since the voxel days, when programmers created a game running on voxels using the CPU. I can't remember which game it was. Plus, PC games like the memorable and extraordinary Need for Speed 3 let people choose between software rendering and 3D hardware-accelerated GPU rendering.

Of course GPUs are unbeatable beasts, since a decent graphics chip can outpace even heavily optimised CPU code, so the game looked better running on 3D-accelerated PCs; but the software option allowed a few effects not available in the GPU render engine, as far as I remember.

As I said, I love software rendering, and I was hoping Larrabee - or a similar chip by AMD/ATi - would be included in next-gen consoles, along with a GPU with some necessary fixed functions. It would allow some of the crazy stuff that I've longed for this generation, and it's certainly exciting to know that your machine has a chip inside with 50+ cores.

Speaking as someone who definitely, once and for all, jumped onto the software-rendering bandwagon after reading an id Software employee - Todd Hollenshead, specifically - stating that software rendering was the future... and how much better it is compared to ONLY hardware rendering, I can only say I became a big fan of DICE and their engines. I loved how they used the eDRAM in Trials HD to achieve some extra effects that greatly improve image quality while still keeping the overall flawless smoothness of the game, for instance.

Crytek, id and DICE are nowadays my favourite developers when it comes to engines. id and Carmack have always been classics on my list.

Battlefield 3 will show that Cell was a great and really differentiating choice made by Sony, and that its possibilities weren't fully explored... even in the case of exclusive titles. "Playing God" in an attempt to help Cell and avoid RSX is very much a good thing. Anyway, the engine is written by humans so that tells how factual their work is in relation to what they want to achieve, and the PS3 is not as powerful as some people used to believe.

Cell in itself will be seen as a pioneering CPU in the future. Despite this, for a game such as Battlefield 3 I don't see it as a magic bullet that will make the graphics look great compared to the PC version or even the 360 version. It's an old CPU competing against top notch PC full-fledged GPUs. Too weak to be a great general purpose CPU and too meek to act as a GPU. But well, the potential is there.

I have a feeling that the Xbox 360 version will be also outstanding from a console point of view. The Xbox 360 CPU, while not being as versatile and powerful as Cell, is light-fingered and might steal the show in its own way. It has half of Cell's flops, performance wise, which is a great amount of power. And Xenos :) is as always of great help.

Common sense tells me this, but maybe it's just me. (The same thing happened after the rampant PS3 piracy began. I was using common sense when everything seemed lost, and thought that if it was just about some numbers, why couldn't Sony simply change those numbers and things would be fine? It turned out that the simpler thought was the solution. I didn't say a thing because I am not an expert on software and hardware subjects, and I was unable to properly explain my theory.)

Anyway, DICE developers, welcome to my top 3 best developers ever list! Feel free to hang out on here -repi, Christina, etc- and look around the boards. Great stuff. Yay for you!
 
Anyway, the engine is written by humans so that tells how factual their work is in relation to what they want to achieve, and the PS3 is not as powerful as some people used to believe.
How powerful did/do some people believe Cell was/is? I see this pop up from some people, but it's never defined. Why?

Cell in itself will be seen as a pioneering CPU in the future. Despite this, for a game such as Battlefield 3 I don't see it as a magic bullet that will make the graphics look great compared to the PC version or even the 360 version. It's an old CPU competing against top notch PC full-fledged GPUs. Too weak to be a great general purpose CPU and too meek to act as a GPU. But well, the potential is there.
Even though it's been 6 years since Cell was realized, it still beats modern CPUs at some game related tasks. This is even shown in a slide from DICE this year. In some ways, Cell is still a future model for consumer CPUs.

I have a feeling that the Xbox 360 version will be also outstanding from a console point of view. The Xbox 360 CPU, while not being as versatile and powerful as Cell, is light-fingered and might steal the show in its own way. It has half of Cell's flops, performance wise, which is a great amount of power. And Xenos :) is as always of great help.
I don't believe Xenon has half the FLOPs of Cell. Aren't the numbers 77 GFLOPS for Xenon and 204 GFLOPS for Cell? That's more like a third of Cell's FLOPs than half.
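For what it's worth, the commonly quoted theoretical peaks do work out roughly that way if you assume each VMX/SPE unit issues a 4-wide single-precision fused multiply-add per cycle (8 flops/cycle) at 3.2 GHz. A back-of-the-envelope check, treating these as paper peaks rather than measured throughput:

```cpp
// Back-of-the-envelope peak-FLOPS check, assuming 8 SP flops/cycle per unit at 3.2 GHz.
#include <cstdio>

int main() {
    const double per_unit = 3.2 * 8.0;           // ~25.6 GFLOPS per core/SPE
    const double xenon    = 3 * per_unit;        // 3 cores       -> ~76.8 GFLOPS
    const double cell_ps3 = (1 + 7) * per_unit;  // PPE + 7 SPEs  -> ~204.8 GFLOPS
    std::printf("Xenon ~%.1f, Cell ~%.1f, ratio %.2f\n", xenon, cell_ps3, xenon / cell_ps3);
    // Prints a ratio of about 0.38 - i.e. closer to a third than a half.
    return 0;
}
```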
 
How powerful did/do some people believe Cell was/is? I see this pop up from some people, but it's never defined. Why?

He was talking about the PS3 in general, not just Cell. Not sure how it was here at B3D, but plenty of people on other sites/forums I visited actually believed games would look like the PS3 tech demos Sony presented.

I don't believe Xenon has half the FLOPs of Cell. Aren't the numbers 77 GFLOPS for Xenon and 204 GFLOPS for Cell? That's more like a third of Cell's FLOPs than half.

No, and theoretical peak isn't always relevant.
 
No, and theoretical peak isn't always relevant.

That is correct. It has been proven, however, that unlike most other processors (including, afaik, Xenon) Cell can actually reach its peak. Of course, even then it is hardly relevant in many cases, but props to Cell for a great design nonetheless. In the end, though, it goes to show that clever hardware design doesn't equal success. It's much like the popular tale that Russian programmers often managed to achieve the same calculation speeds with 1/10th the hardware power of their American counterparts, simply because they had better resources for producing good software than for good hardware.

The difference between Sony and the PS3, and Microsoft and the 360, is much the same, although Microsoft also deserves credit for seeing that ATI had some great ideas for GPUs (if I remember correctly, this was a time when ATI really outpaced NVidia in the PC desktop space, at least until NVidia got back into the game with the 8x00 series?) - assuming, of course, that it wasn't just luck. ;) But being heavily involved with GPUs for developing Windows, I assume they had enough people in the know.
 
He was talking about the PS3 in general, not just Cell. Not sure how it was here at B3D, but plenty of people on other sites/forums I visited actually believed games would look like the PS3 tech demos Sony presented.
It's only a head/face. That looks and reacts like a Heavy Rain character. Of course, I think ND's recent characters look better than that video (not including the eye animations).

No, and theoretical peak isn't always relevant.
According to IBM, those theoretical max numbers are correct. Only one of those was proven to get anywhere near its theoretical max number. And I agree, theoretical peak numbers are almost always irrelevant. In this case, Cell's number was proven relevant.

That is correct. It has been proven, however, that unlike most other processors (including, afaik, Xenon) Cell can actually reach its peak. Of course, even then it is hardly relevant in many cases, but props to Cell for a great design nonetheless. In the end, though, it goes to show that clever hardware design doesn't equal success. It's much like the popular tale that Russian programmers often managed to achieve the same calculation speeds with 1/10th the hardware power of their American counterparts, simply because they had better resources for producing good software than for good hardware.

The difference between Sony and the PS3, and Microsoft and the 360, is much the same, although Microsoft also deserves credit for seeing that ATI had some great ideas for GPUs (if I remember correctly, this was a time when ATI really outpaced NVidia in the PC desktop space, at least until NVidia got back into the game with the 8x00 series?) - assuming, of course, that it wasn't just luck. ;) But being heavily involved with GPUs for developing Windows, I assume they had enough people in the know.
Agreed.
 
I think the most telling aspect of all of this is how Cell itself changed the mindset of multiplat developers.

I could have sworn I read an article one time about a developer who was struggling to bring the PS3 version of a game to parity with the 360 version. They started to use the SPUs, and in doing so realised they could do the same thing on the 360, which in turn increased the performance of the entire game. Because they had to optimise the PS3 version, they found they could optimise the 360 version too.

This is not a knock on developers, but if you gave developers everything they wanted as far as CPUs, GPUs, RAM, media and so on, the advancements being made today would not have been possible. The shortcomings of each console have forced developers to try and approach things in different ways.

I'm personally glad that the PS3 had a stripped-down GPU and a beefy CPU; in my opinion it created an entirely different way of thinking and showcased how poorly we were using our current tech. If you look back at all the advancements that have been made from a programming/design standpoint, you find that they were necessary to overcome limitations the devs were up against.

Whether the PS3 is better than the 360 is irrelevant; the fact that it competes graphically using a hindered GPU speaks volumes. If you gave someone 15 years ago the specs for Xenos and RSX and asked which machine would produce better graphics, NOBODY would say the RSX. We are now starting to look past those basic arguments of speed and transistor counts and looking at what can make those aspects of a design work better.

It basically boils down to the fact that the PS3 (for better or worse) showed that we can do things a different way and potentially get better results; that programmable brute force is more KISS than fixed-function hardware. Because at the end of the day your biggest limitation isn't what you can't do today but what you can't do in the future.
 
To clarify one thing, Cell is not a fundamentally new idea. There were heterogeneous processors with DSP-like coprocessors done years earlier, and they were abandoned years earlier too.
Difficulty in programming and getting decent utilization out of the exotic design seemed to be a major factor.

These were for a different market, however, and did not have a major driver that allowed them to persist.
Sony's largess, a stable of developers, and solid branding allowed at least some money to go into making Cell an ongoing concern, while the earlier attempts had no cash flow or backing to last for long.
I would argue that it also helped that it had RSX to prevent the PS3 from being discarded during all those years when the Cell processor's magical programmability was not sufficient.

I'm not sure, Sony's money aside, that the lesson learned is necessarily different.
Cell was a throwback to earlier failures when it was released, and while elements of the design may show up in the future, the heterogeneous model with those quirky SPEs has not shown itself to be ideal.

I had initially predicted that the PS3 would have the most technically advanced AAA first-party games, and this has arguably been the case for a period of time, though it came a few years later than I had thought.
How much of that gymnastics was done out of necessity to achieve acceptable results, I cannot say. I had expected more non-graphical advances with Cell, but with everyone spending 4-5 SPEs on graphics or stuck with the limits of console memory, that lead doesn't seem to be all that great.
 