Cell/CPU architectures as a GPU (Frostbite Spinoff)


Cost of MSAA on either 360 or PS3 GPU is roughly the same, just getting there via different methods.
360 costs are in the tiling department.
PS3 costs are in the ROP unit.
 
I don't think the amount of SPU power being used for graphics actually matters. What matters is non-graphics related SPU time. It's about how much SPU time you need for advanced A.I., audio, etc. The KZ3 screencaps give insight into those things.

It seems they didn't just go with a better GPU, because it's a less flexible rendering solution. I remember seeing something about Sony wanting to create the most flexible rendering system for that time. I think this was around 2006 or 2007. I think they succeeded in that regard.
I think you are wrong. I haven't seen anything special about A.I. in PS3 exclusive games. Yes, KZ2/3 have good A.I., but it's nothing that hasn't been done before. UC2, GOW III? Nope... like any other game.

I agree PS3 exclusives have great audio, but there are also a couple of other games that even excel at it, like for example BFBC2, the best sound I have heard as far as this gen goes.
 
I agree PS3 exclusives have great audio, but there are also a couple of other games that even excel at it, like for example BFBC2, the best sound I have heard as far as this gen goes.

Not only great sound but also very good and advanced HDR sound effects.
 
Lack of processing power is the last thing I'd blame for poor AI. Looking at how >5h single-player scenarios are relatively rare in newer FPSes, it's rather logical not to put too much effort into improving the AI there.
 
I think you are wrong. I haven't seen anything special about A.I. in PS3 exclusive games. Yes, KZ2/3 have good A.I., but it's nothing that hasn't been done before. UC2, GOW III? Nope... like any other game.
Off topic.

Not only great sound but also very good and advanced HDR sound effects.
Off topic.

Lack of processing power is the last thing I'd blame for poor AI. Looking at how >5h single-player scenarios are relatively rare in newer FPSes, it's rather logical not to put too much effort into improving the AI there.
Off topic.

Yet if I close the thread and deny people the opportunity to repeat the same tired discussions that don't belong here, I'm a Nazi moderator...
 
Cost of MSAA on either 360 or PS3 GPU is roughly the same, just getting there via different methods.
360 costs are in the tiling department.
PS3 costs are in the ROP unit.
In my other post I mentioned that the point about the different phases of code execution on Cell came from Mike Acton. It was actually you, though. This was back in 2006. I just wanted to give you your due credit. Sorry for the mix-up.

http://ps3.ign.com/articles/690/690241p1.html

CE3, Crysis 2 thread(s) drama replayed. Deja-vu sensation will be present. :p
If it's the opposite results, FXIII and NGS 2 drama will probably be replayed. :p
 
Well, I guess shaders are becoming more and more programmable, while texture units and ROPs are still fixed function?

You wouldn't want them doing anything different than what they do.

I think one of the defining elements of this generation has been the influence of previously PC-only developers. Epic's engine is used in many games, an engine that Sony sent engineers to help get running somewhat comparably on PS3. With the 360's ease of development for them and its year head start, the 360 became their priority. The PS3, however, was unfamiliar and thus lamented. You can contrast this with Namco's familiarity with PS1 hardware from arcade development, which paid off nicely.
 
If it's the opposite results, FXIII and NGS 2 drama will probably be replayed. :p

Not at this stage, with these reputable developers and their state-of-the-art, cutting-edge tech and engines. Things have progressed enough, and the developers have released public tech papers to show how they are pushing it to the limit regarding the HW. They are using methods to keep Cell working hard to assist the GPU in whatever ways possible while running non-graphics code too.
 
Not at this stage, with these reputable developers and their state-of-the-art, cutting-edge tech and engines. Things have progressed enough, and the developers have released public tech papers to show how they are pushing it to the limit regarding the HW. They are using methods to keep Cell working hard to assist the GPU in whatever ways possible while running non-graphics code too.
What developers have published public tech papers showing they are "pushing it to the limit" regarding the HW? I haven't seen these tech papers show the amount and types of jobs being run on Cell and the times associated with them. The closest I've seen to that has been from DICE, and that's still a far cry from what I mentioned a sentence ago. We are just getting thinly sliced pieces of cake (cake = information). I would also like to see more about devs taking advantage (or not) of the benefits parallelism affords (DICE handled that part nicely, IMO).

BTW, what does "at this stage" mean? Does that mean 3rd party devs are at a much higher skill and budget level than just around 1 year ago (around the time both games were released)?
 
Cost of MSAA on either 360 or PS3 GPU is roughly the same, just getting there via different methods.
360 costs are in the tiling department.
PS3 costs are in the ROP unit.

Tiling cost is more often than not less than the rop cost, hence why so many 360 games shipped with 2xmsaa and some with 4xmsaa, whereas msaa is more of a rare find in ps3 games. It's also easier to enable msaa on ps3 compared to 360, so if it were indeed a similar or lower cost then more games would have enabled it. I'd agree that some probably had to disable it due to being too tight on memory.
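To put rough numbers on the tiling side, here's a back-of-the-envelope sketch (assuming the commonly quoted 10 MB of eDRAM on Xenos and 4 bytes each of colour and depth/stencil per sample; real titles vary with FP10/FP16 targets and multiple render targets):

```c
#include <stdio.h>

/* Rough estimate of how many predicated-tiling passes a 360-style
 * 10 MB eDRAM buffer needs at a given resolution and MSAA level.
 * Assumes 4 bytes of colour + 4 bytes of depth/stencil per sample. */
static int tiles_needed(int width, int height, int msaa_samples)
{
    const long edram_bytes = 10L * 1024 * 1024;            /* 10 MB eDRAM  */
    long bytes_per_pixel   = (4 + 4) * (long)msaa_samples; /* colour+depth */
    long framebuffer_bytes = (long)width * height * bytes_per_pixel;
    return (int)((framebuffer_bytes + edram_bytes - 1) / edram_bytes);
}

int main(void)
{
    printf("720p, no MSAA : %d tile(s)\n", tiles_needed(1280, 720, 1)); /* 1 */
    printf("720p, 2xMSAA  : %d tile(s)\n", tiles_needed(1280, 720, 2)); /* 2 */
    printf("720p, 4xMSAA  : %d tile(s)\n", tiles_needed(1280, 720, 4)); /* 3 */
    return 0;
}
```

Each extra tile means re-submitting the geometry that overlaps it, which is where the 360's MSAA cost shows up; the PS3 instead pays per-sample bandwidth and fill at the ROPs.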


Because reinventing the wheel is a rare thing that happens once in a while, and there's a lot of legacy code and structures in place.

For the record, there is little legacy code out there now. Neither the 360 nor the ps3 allows it for optimal results; everything had to be re-thought and re-written to accommodate multi-core, vmx/spu, and load-hit-store. If you just take legacy C code and run it on console it will run like ass, and you would not be getting the games you see today.


Screw 360 and PS3 and PCs, and let's just look at how we get silicon to make pretty pictures

You absolutely *need* to look at existing platforms because they are the real world test cases for many people's questions. The ps3 shipped with a monstrous amount of 100% general purpose computing power for the time, more than pc's at the time and far more than its direct competitor, the 360. On top of that, its limited gpu necessitated using that general purpose computational power for graphical needs. So it serves as a perfect real world test example to answer part of your question. You are wondering where gpu's need to go: fully programmable, fully fixed function, hybrid, etc. Ok, look at both consoles not as consoles, but as real world gpu's that have been in the field for 6 years.

With that in mind, look at the ps3 again, but *not* as a console. Instead treat the entire box as a gpu. The "ps3 gpu" came out years ago and devs had years to play with it and do anything they wanted with it. What did it provide for its time? It provided *significantly* above average 100% general computing power, probably more than any other computing device at the time, and it provided average fixed function computing power. It's a hybrid gpu that leans more on the general purpose side almost as a necessity, but makes use of fixed function as well.

Now look at the 360 again, *not* as a console but treat the entire box as a gpu. The "360 gpu" came out years ago as well, just like the "ps3 gpu", but they took a different tactic. Their gpu provides devs with below average general purpose computing power and above average fixed function computing power for the time. It's far less of a hybrid setup, leaning more closely to basically being a fixed function gpu (fixed function meaning today's typical shader model).

So we have two real world gpu's to compare, and roughly 6 years of game development work to compare on them. We can do this (this being treating the entire console as just a video card) because the console's entire purpose in life is to render pretty pictures. If it can't render pretty pictures then it gets kicked to the curb.

Hence...that is a perfect real life comparison scenario. One person went to Fry's and bought the "PS3 Gpu" figuring that better 100% general purpose computing power was where it's at. Another person went to Best Buy and bought the "360 Gpu" figuring that better fixed function power is still the way to go. They slapped their respective video cards in their pc's and played games on them for years.

The result? Look at the games on console for the past 6 years to answer that question. That is your real world example of 100% above average fully programmable cores + average fixed function cores compared to 100% below average fully programmable cores + above average fixed function cores.


Which highlights exactly that you aren't understanding the very idea I wanted to explore - what the hell should a GPU be made like?! Should it be thousands of fixed-function, dedicated pipelines? Or hundreds of flexible, programmable pipelines? Or dozens of CPU-like cores that have to use a different mindset to render far more efficiently?

Compute shaders have answered that, general purpose but not 100% general like in a typical cpu, but still mixed with some fixed function hardware. That's where it's all going. I don't really think a fully 100% general purpose solution would be as good; otherwise we'd have all gone back to software renderers that let us do anything, anytime, any way we want. Just really slowly.
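To make the "general purpose but not 100% general" point concrete, here's a loose sketch (plain C standing in for a GPU compute language, all names hypothetical): the kernel body is arbitrary code, but the work is carved into fixed-size thread groups over a grid, and that structure is what lets the fixed-function parts of the chip (schedulers, samplers, ROPs) stay in the loop.

```c
#include <stdio.h>

#define GROUP_DIM 8

/* The "kernel": free-form code, invoked once per (x, y) thread. */
typedef void (*kernel_fn)(int x, int y, void *user);

/* Compute-shader-style dispatch: the grid is split into fixed 8x8
 * thread groups the way a GPU dispatch would be. Here we just loop;
 * on real hardware each group runs in lockstep on a SIMD unit. */
static void dispatch_2d(int width, int height, kernel_fn kernel, void *user)
{
    int groups_x = (width  + GROUP_DIM - 1) / GROUP_DIM;
    int groups_y = (height + GROUP_DIM - 1) / GROUP_DIM;
    for (int gy = 0; gy < groups_y; ++gy)
        for (int gx = 0; gx < groups_x; ++gx)
            for (int ty = 0; ty < GROUP_DIM; ++ty)
                for (int tx = 0; tx < GROUP_DIM; ++tx) {
                    int x = gx * GROUP_DIM + tx;
                    int y = gy * GROUP_DIM + ty;
                    if (x < width && y < height)
                        kernel(x, y, user);
                }
}

/* Trivial example kernel: count invocations, just to show the body is
 * ordinary code rather than a fixed pipeline stage. */
static void count_kernel(int x, int y, void *user)
{
    (void)x; (void)y;
    ++*(long *)user;
}

int main(void)
{
    long invocations = 0;
    dispatch_2d(1280, 720, count_kernel, &invocations);
    printf("%ld kernel invocations\n", invocations);  /* 921600 */
    return 0;
}
```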
 
I think you are wrong. I haven't seen anything special about A.I. in PS3 exclusive games. Yes, KZ2/3 have good A.I., but it's nothing that hasn't been done before. UC2, GOW III? Nope... like any other game.
What games do you have in mind?

GOW3 certainly isn't an AI showcase.
Killzone 2/3 and Uncharted 2 have very convincing AI. Especially Killzone.
It's not just their tactics but the way they behave. The AI isn't only about how "smart" they approach you in order to kill you (that can sometimes be a bad example of AI), but how convincingly they blend the correct animation (to show the correct enemy "emotion" in relation to the current situation) and express human-like vulnerabilities and wit.
For example, playing Killzone 3 on Elite, the enemies don't only become brutal with their tactics when they find the chance, but they will also get stressed, fall back and try to find ways to keep you away from them when they are under pressure. Sometimes they will try to get you from behind while you are trying to kill their comrades. It's not a one-way system.
In most games the AI simply knows the maximum-efficiency play,
like in Halo's Legendary mode, where the AI is brutally offensive and assisted by continuous cheap-ass roll-overs, unbelievably damaging weapons, and covers. The system basically knows the routes and covers that will make your life harder in killing an enemy. They don't show human-like vulnerabilities and animations.
When the AI fights like the perfect (cyborg) soldier it breaks the believability.
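Purely as an illustration of what that "falls back under pressure" behaviour might look like under the hood (a hypothetical sketch, not Guerrilla's actual system): a pressure/morale value drives stance selection instead of a fixed maximum-efficiency policy.

```c
#include <stdio.h>

/* Hypothetical sketch: instead of always playing the optimal move, the
 * agent switches stance based on accumulated "pressure" (suppression,
 * nearby casualties, low health) and what the environment offers. */
typedef enum { STANCE_ADVANCE, STANCE_HOLD_COVER, STANCE_FALL_BACK, STANCE_FLANK } stance_t;

typedef struct {
    float pressure;       /* 0 = calm, 1 = panicked                */
    int   allies_nearby;  /* squad support changes the choices      */
    int   flank_route;    /* 1 if a path behind the player exists   */
} agent_t;

static stance_t choose_stance(const agent_t *a)
{
    if (a->pressure > 0.8f)
        return STANCE_FALL_BACK;                  /* stressed: break contact        */
    if (a->pressure > 0.5f)
        return a->flank_route ? STANCE_FLANK      /* try to get behind the player   */
                              : STANCE_HOLD_COVER;
    return (a->allies_nearby >= 2) ? STANCE_ADVANCE : STANCE_HOLD_COVER;
}

int main(void)
{
    agent_t a = { 0.6f, 1, 1 };
    printf("stance = %d\n", (int)choose_stance(&a)); /* prints 3 (STANCE_FLANK) */
    return 0;
}
```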
 
Ehm, joker, a simple question. Would you say the 360 CPU has more generic-function silicon or less than the PS3 CPU? Similarly, would you say the 360 GPU has more generic-function silicon or less than the PS3 GPU?
 
Count how many games can't use msaa due to render engine incompatibility or whatever; one hand would probably have enough fingers to count them. Now count how many games can use msaa; you'll need a lot of hands to count them. All of those dozens of games were unable to use msaa due to the design choices made by Sony, specifically to use old gpu hardware that required lots of memory and provided slow msaa performance, and instead go with a heavily customized cpu. Which was my point all along: their decisions have consequences, and their decision to ignore the gpu and focus on the spu meant years of games on that platform missed out on having any form of anti-aliasing, or had to settle for image-quality-destructive qaa. I don't see how anyone can view this as a positive design choice.

IMO there's quite a few games on the 360 that can't use MSAA.

It seems they didn't just go with a better GPU, because it's a less flexible rendering solution. I remember seeing something about Sony wanting to create the most flexible rendering system for that time. I think this was around 2006 or 2007. I think they succeeded in that regard.

Actually I think they didn't go with a better GPU because they didn't have the time. Does anyone really think the current PS3 was Sony's original design? I can't understand why they would purposely plan to use a GPU that was current gen at the time of release, only gimped.

Because reinventing the wheel is a rare thing that happens once in a while, and there's a lot of legacy code and structures in place.

I believe that reinventing the wheel all at once or in huge chunks is rare, but I find it hard to believe that Frostbite, UE3, CoD, RAGE, MT Framework, etc. aren't all vastly different now from what they were 5 years ago.

If you said this 4 years ago, I'd agree with you all the way. However I think there's been enough time that there's probably little legacy code left in most engines, I could be wrong.
 
For the record, there is little legacy code out there now. Neither the 360 nor the ps3 allows it for optimal results; everything had to be re-thought and re-written to accommodate multi-core, vmx/spu, and load-hit-store. If you just take legacy C code and run it on console it will run like ass, and you would not be getting the games you see today.
Not legacy code but legacy thinking and tools. Intel had an idea for a new GPU, but they had to make it run DirectX for business reasons or it would get nowhere, and yet it wasn't ideally suited for DirectX and so would never compete with DX GPUs. Thus it has no business and no market. But it could power different engines, like... I dunno, a CSG raytracer, more effectively than a GPU, and if everyone had one to develop for, I'm sure the ways of thinking would reveal whole new paradigms.

You absolutely *need* to look at existing platforms because they are the real world test cases for many people's questions...With that in mind, look at the ps3 again, but *not* as a console. Instead treat the entire box as a gpu...
Exactly. :D

The "ps3 gpu" came out years ago and devs had years to play with it and do anything they wanted with it.
Years where the remit wasn't "go do whatever you can with this new toy" but "go make a commercially successful game that's going to work with the assets we can produce with our current toolchains." And most of those decisions would have to work on multiple platforms. Only a few developers and MS/Sony with their own technology groups had that luxury, and I imagine they always have a target in mind that means they aren't truly free to 'play'.

Compute shaders have answered that, general purpose but not 100% general like in a typical cpu, but still mixed with some fixed function hardware. That's where it's all going.
And this is the interesting bit! Compute shaders are allowing rendering efficiencies. These compute shaders are new to PS3, right? The GPU can't do it, but DICE for one are using Cell as a compute shader engine. Now, unless when DICE did their GDC presentation most of the audience went, "so what? We've been doing this sort of thing for a couple of years now," (and maybe that happened as I don't know the state of console development), this is a new way of thinking. This is a way of thinking born out of GPU evolution, which was born out of a gradual analysis of workloads and a development of progressive solutions. Split work into tasks and create shaders. Unify shaders for efficiency. Add compute shaders for flexibility. But no-one back in 2000 was designing 2011 class GPUs, not because of a lack of funding, but because thought patterns hadn't got that far.

Now we have new GPU designs with compute shaders, but we also have a 2005 design capable of running the same concepts because it wasn't designed to a certain way of thinking based on jobs, but was designed to just crunch numbers. No-one started writing compute shaders on PS3 on day 1 because no-one thought about it. Heck, a lot barely used the SPEs because they didn't know how and had deadlines to hit. They have since learnt to reengineer their code to offload work typically associated with GPU onto Cell to support the weak GPU. But, unless I'm mistaken, the whole way of thinking about graphics has always followed the DX paradigm. No-one on day 1 was looking at Cell and thinking "I bet we could do some analytical AA on that." Nor was PS3 designed with MLAA in mind. It was just an idea someone had, that Cell could handle because it had a mix of performance and flexibility. Likewise how many years have had to pass before someone thought of doing the things DICE are doing? Unless it's all old news, DICE have taken a leap forwards in thinking. This leap forwards came from GPU evolution, and without DX11 would likely never have happened on PS3, but is enabled on PS3 for the same reason as MLAA.
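For what it's worth, the "Cell as compute engine" idea mostly comes down to post-process passes of this shape; here's a toy sketch of the edge-detect stage an MLAA-style filter starts with, written as portable C (real SPU code would be vectorised and would DMA the tile into local store first, and the threshold and layout here are just made up for illustration):

```c
#include <stdint.h>
#include <stdio.h>
#include <math.h>

/* Toy first stage of an MLAA-style filter: flag pixels whose luma differs
 * from their right/bottom neighbour by more than a threshold. On PS3 this
 * kind of pass is what gets carved into tiles and farmed out to SPUs. */
static float luma(const uint8_t *p)
{
    return 0.299f * p[0] + 0.587f * p[1] + 0.114f * p[2];
}

static void detect_edges(const uint8_t *rgba, uint8_t *edges,
                         int width, int height, float threshold)
{
    for (int y = 0; y < height - 1; ++y) {
        for (int x = 0; x < width - 1; ++x) {
            const uint8_t *p = rgba + 4 * (y * width + x);
            float l  = luma(p);
            float lr = luma(p + 4);              /* right neighbour  */
            float lb = luma(p + 4 * width);      /* bottom neighbour */
            uint8_t mask = 0;
            if (fabsf(l - lr) > threshold) mask |= 1;  /* vertical edge   */
            if (fabsf(l - lb) > threshold) mask |= 2;  /* horizontal edge */
            edges[y * width + x] = mask;
        }
    }
}

int main(void)
{
    /* Tiny 2x2 test image: left column bright, right column dark. */
    uint8_t rgba[2 * 2 * 4] = { 255,255,255,255,  0,0,0,255,
                                255,255,255,255,  0,0,0,255 };
    uint8_t edges[2 * 2] = { 0 };
    detect_edges(rgba, edges, 2, 2, 32.0f);
    printf("edge mask at (0,0): %u\n", edges[0]);  /* prints 1: vertical edge */
    return 0;
}
```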

The question therefore exists, what else is possible when you have a truly free processing resource that can try any algorithm reasonably fast? What techniques and approaches are possible that no-one has thought of because everyone is thinking in terms of DX-based GPUs, solving problems as they encounter them instead of dreaming-from-the-ground-up renderers? Frostbite 2 is showing that fully programmable hardware from 2005 is able to implement some of the cutting edge concepts of 6 years later, which surely suggests the possibility of as-yet-unknown rendering approaches. Ideas that'll appear another 5 years down the line. Ideas that would never appear on PS3 or Larrabee because these are commercial ventures, and which will rarely appear in academia because of limited budgets and legacy hardware designs.

It's this whole potential, whether it leads to anything or not, that I'm excited about and wondering if we could ever measure or evaluate without having a lot of clever folk sit down with a CPU architecture and be funded for a couple of years to create whatever renderers they can without any obligations, deadlines, or necessary end products!
 
And this is the interesting bit! Compute shaders are allowing rendering efficiencies. These compute shaders are new to PS3, right? The GPU can't do it, but DICE for one are using Cell as a compute shader engine. Now, unless when DICE did their GDC presentation most of the audience went, "so what? We've been doing this sort of thing for a couple of years now," (and maybe that happened as I don't know the state of console development), this is a new way of thinking. This is a way of thinking born out of GPU evolution, which was born out of a gradual analysis of workloads and a development of progressive solutions. Split work into tasks and create shaders. Unify shaders for efficiency. Add compute shaders for flexibility. But no-one back in 2000 was designing 2011 class GPUs, not because of a lack of funding, but because thought patterns hadn't got that far.

Good point but at what performance penalty?

Cell can handle future solutions no problem but is the performance it delivers in those solutions enough to win out over an older more fixed function design utilising older solutions at a higher speed?

BF3 should provide some very interesting results regardless of the outcome.
 
The GPU? I don't know. There is a Super Companion Chip made by Toshiba for Cell. That one handles AV interfaces.

As I recall, Kutaragi said eDRAM was out because they couldn't include enough for HD resolution. Cell and RSX can work on multiple frames/tiles at the same time. DMA to Local Stores works well regardless of which source frame/tile it is.
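For reference, the SPU side of that usually looks like the classic double-buffered DMA loop sketched below, using the Cell SDK's spu_mfcio.h intrinsics. This is only a rough sketch: TILE_BYTES, process_tile() and the address layout are placeholders I've made up for illustration.

```c
#include <spu_mfcio.h>
#include <stdint.h>

/* Classic double-buffered SPU loop: DMA the next tile into one local-store
 * buffer while processing the other, so the SPU rarely stalls on memory. */
#define TILE_BYTES (16 * 1024)                 /* 16 KB = max single DMA */

static char buf[2][TILE_BYTES] __attribute__((aligned(128)));

extern void process_tile(char *tile, unsigned size);   /* user-supplied work */

void run_tiles(uint64_t ea_base, int tile_count)
{
    int cur = 0;
    /* Kick off the first transfer on tag 0. */
    mfc_get(buf[cur], ea_base, TILE_BYTES, cur, 0, 0);

    for (int i = 0; i < tile_count; ++i) {
        int next = cur ^ 1;

        /* Prefetch tile i+1 into the other buffer; wait on that buffer's
         * tag first so we never overwrite data still being written back. */
        if (i + 1 < tile_count) {
            mfc_write_tag_mask(1 << next);
            mfc_read_tag_status_all();
            mfc_get(buf[next], ea_base + (uint64_t)(i + 1) * TILE_BYTES,
                    TILE_BYTES, next, 0, 0);
        }

        /* Now wait only for the buffer we are about to process. */
        mfc_write_tag_mask(1 << cur);
        mfc_read_tag_status_all();

        process_tile(buf[cur], TILE_BYTES);

        /* Write the processed tile back under the same tag. */
        mfc_put(buf[cur], ea_base + (uint64_t)i * TILE_BYTES,
                TILE_BYTES, cur, 0, 0);

        cur = next;
    }

    /* Drain any outstanding writes before returning. */
    mfc_write_tag_mask((1 << 0) | (1 << 1));
    mfc_read_tag_status_all();
}
```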

Well that's not what I mean. Remember I-32 from GSCube? That's basically the last incarnation of the PS2 GS. I am sure Sony had something like that in mind for PS3 when they developed Cell. An I-32 version 2 with better filtering, texture compression and MSAA support @ 500 MHz would be the size of RSX in terms of transistor count.

Basically, the architecture would be similar to PS2. They could keep backward compatibility easily. Cell + I-32v2 paired with 256 MB of XDR, unified memory like PS2. But because they only managed to get one PPU and eight SPUs onto a chip, that version of PS3 would lack the FLOPS to compete with the 360; that's when they scrambled, got RSX into the picture and broke backward compatibility in the process. It's one big screw-up.
 
I'd be curious to see how much of that SPU power is being used for anything other than supporting the GPU functions. I'm sure there are a lot of things running on the SPUs; I just really doubt it's significant relative to the graphics work. The general argument here is: if the SPUs are for the most part being used to accelerate graphics-related tasks, why didn't they just go with a better GPU and a lighter CPU?

There is one other significant thing the SPUs are doing on PS3, actually... video/audio decoding for Blu-ray playback. The PS3 doesn't have dedicated decode hardware, and needed to handle MPEG-2, VC-1, and H.264 AVC at higher bitrates than HD DVD did, simultaneously with decoding all the audio codecs to 7.1 PCM... Dolby TrueHD, DTS-HD Master Audio, etc.

Nothing to do with games, but nonetheless something that Sony was aiming to support with their hardware solution.
 
There is one other significant thing the SPUs are doing on PS3, actually... video/audio decoding for Blu-ray playback. The PS3 doesn't have dedicated decode hardware, and needed to handle MPEG-2, VC-1, and H.264 AVC at higher bitrates than HD DVD did, simultaneously with decoding all the audio codecs to 7.1 PCM... Dolby TrueHD, DTS-HD Master Audio, etc.

Nothing to do with games, but nonetheless something that Sony was aiming to support with their hardware solution.

True, but those seem like the kinds of things that are easily addressed by cheap dedicated hardware now. At the time a PS3 was a really good, flexible option as a Blu-ray player, being only slightly more expensive than a dedicated player. I'm not sure there's much of a future in that space for exotic CPUs, because for codecs/disc formats to hit critical mass the hardware has to be cheap. The encoding side is another story. I'm not sure what the world of video encoders looks like right now.
 