How much work must the SPUs do to compensate for the RSX's lack of power?

A smart air conditioner can also run AI programs.

Well that's where the line gets fuzzy, and I could say similarly that face/smile-detection cameras run AI. They could earn that designation on a literal basis from some - but they won't earn it from me! :)

Yes though, something like EyePet I think does have more of a legitimate 'complete' AI dimension to it, vs something like Singstar where the results are more or less binary. So I would view EyePet as something akin to an AI application running within a strong signal processing support environment; something like Magic Mirror, but with a free-floating element where the results will at least mimic more than yes/no possibilities. Magic Mirror being my standard here for an example of Cell's signal processing prowess in a 'dumb' environment. (Though really it's the actual HPC apps in the signal space that are the real showpieces.)
 
Yes, I think I get you. The problem is when we try to divide the issue horizontally in a clean fashion (e.g., signal processing recognition vs AI recognition), we are assuming it is possible for the low-level solution to come up with an accurate result that can "float upwards" without much context.

In practice, in an application, what the low-level solution passes up may be just some primitive concept like the angle and vector of an arm motion. Many times, the application may have to apply some static/learning assumptions and some chicken soup (heuristics and application-determined context) to narrow the scope. Speech recognition may be a good example.
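To make that layering concrete, here's a purely illustrative sketch (the structures, names and thresholds are all made up, not from any real middleware): the signal layer only hands up a primitive like arm angle and speed, and the application layer applies heuristics and context on top to decide what the player actually meant.

[code]
#include <math.h>
#include <stdio.h>

/* Hypothetical primitive that the low-level (signal processing) layer passes up. */
typedef struct {
    float angle_deg;   /* direction of the arm motion */
    float speed;       /* magnitude of the motion, arbitrary units */
} ArmMotion;

typedef enum { GESTURE_NONE, GESTURE_WAVE, GESTURE_SWAT } Gesture;

/* Application layer: heuristics plus application-determined context
   ('in_menu' here) narrow the interpretation of the raw primitive. */
static Gesture interpret(ArmMotion m, int in_menu)
{
    if (m.speed < 0.2f)
        return GESTURE_NONE;                   /* too slow to mean anything */
    if (in_menu && fabsf(m.angle_deg) < 30.0f)
        return GESTURE_WAVE;                   /* roughly horizontal motion in a menu: scroll/wave */
    if (m.speed > 1.5f)
        return GESTURE_SWAT;                   /* fast motion in gameplay: swat */
    return GESTURE_NONE;
}

int main(void)
{
    ArmMotion m = { 12.0f, 2.0f };             /* what the signal layer reported this frame */
    printf("gesture = %d\n", (int)interpret(m, 1));
    return 0;
}
[/code]

The point being that the accuracy the player perceives comes as much from that top heuristic layer as from the signal processing underneath it.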
 
I mean look what we're seeing from the top graphical games on one platform and the developers are saying they can go a good deal further. Meanwhile, there is a definite gap between those and the top graphical games from the other platform. I don't think the "no one's (not even 1st party devs) pushing the 360 hardware" argument can really hold up.


I don't know, who is actually pushing 360 hardware?
It's similar to last gen, where you had GOW, GT4, Shadow of the Colossus etc push the PS2 hardware to ridiculous limits, while the more powerful Xbox probably wasn't - at least not to the same degree. Except this time both machines have quite similar capabilities, so the extra effort might be what's putting the PS3 out in front.

It's hard to deny that Sony's studios are simply more technically capable. They have a lot more of them as well, plus they all share tech and some have ridiculous budgets (see KZ, GT5 etc.).

Sony have Team ICO, Polyphony Digital, Naughty Dog, Santa Monica, Zipper, Liverpool Studio, Evolution Studios, Guerrilla and Media Molecule, all very capable.

Then there are their 2nd party and PS3 exclusive devs like Insomniac (until now), Sucker Punch, Ready at Dawn, Eat Sleep Play, Quantic Dream and Kojima Productions.

There is simply no equivalent for the 360; I would say Rare, Remedy and maybe Bungie and Turn 10 have the technical chops.

It's also telling that the best looking games on 360 are often MP titles or from 3rd parties, like AC2, ME2, RDR, Gears etc.

And yet even 4 years in, most MP titles still look better on 360, take RDR for instance, despite running with significant handicaps compared to the PS3 version (no compulsory install, some SKUs have no HDD, no Blu-ray for redundant assets, etc.).

Then we have hints from Carmack and Crytek (often quickly retracted) and tantalising tidbits such as this one from Joker (too bad for NDAs):

Don't waste your time, the kool aid is just too strong. Sometimes I wish I had permission to post verbatim what people from NVidia and certain Sony first parties have told me first hand as to what they *really* think of the PS3 hardware compared to the competition, it would end all this conspiracy nonsense for once and for all.

http://forum.beyond3d.com/showthread.php?p=1430896#post1430896
 
We are very close to one another on this, but at the same time just a tiny ways apart in our definitions. Take for example your 'signal processing recognition' line, which for me would be two different ideas: the signal processing, and the signal recognition. I am ignoring the latter entirely, and focusing on the processing. The idea that given an input, and an algorithm/some code, Cell will be able to divine for you the proper interpretation of said signal input that you are looking for. The strength here is purely on Cell's speed, and in a notoriously parallel field at that; it's an ideal Cell use scenario. And though you would need that for a good AI on a macro level, in its own right I don't consider it AI since it is itself just an interpretation algorithm with the output preset rather than learned.

I do know where you're coming from on this totally, and we can agree to disagree in terms of the nomenclature; for me, as far as gaming goes, I do prefer to isolate 'AI' in the typical gaming sense from the console's strength in interpreting inputs, even if the latter might be even more so an achievement in terms of making calls on the part of the software.
 
Other random thoughts...

1) As we know if the eDRAM had been 12MB vs 10MB, things would have been better at launch for the 360 and certainly easier to implement, both then and now. So Xenos had its own minor misstep for whatever reason (yields?)
I don't understand this question. Why does wishing for more eDRAM imply yield issues? It implies cost issues.
 
I don't understand this question. Why does wishing for more eDRAM imply yield issues? It implies cost issues.

I think yield and cost are inextricably tied to each other. If the yield is bad, the cost is going to go up. If the yield is good, the cost is going to go down.

So in a sense you could state it either way, as a cost issue or a yield issue, and be correct. Unless, of course, the yield would have been the same whether it was 12 MB or 10 MB of eDRAM.

Regards,
SB
 
Exactly. If you were to assume that yields were 'the same' 10MB to 12MB, then on a design basis it really probably would have been worth it for MS to go to 12MB. For myself, I believe that yields must've been falling off a cliff - or materially worsened - at 12MB vs 10MB.
 
Exactly. If you were to assume that yields were 'the same' 10MB to 12MB, then on a design basis it really probably would have been worth it for MS to go to 12MB. For myself, I believe that yields must've been falling off a cliff - or materially worsened - at 12MB vs 10MB.
The more the better I guess, but why 12MB? You need 15MB for 2xAA @ 720p; is 12MB more of a fit for an average render target in a deferred set-up? (I remember Sebbbi explaining that they had to play with resolution and overlay a bit to make their RT fit in eDRAM.)
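As a rough back-of-the-envelope check (assuming 32-bit colour plus 32-bit depth/stencil per sample, which is my assumption rather than a quoted spec):

1280 × 720 pixels × 2 samples × (4 B colour + 4 B depth) ≈ 14.1 MB

so a straight 2xAA 720p target is indeed well past 10MB, while a single-sampled 720p target at about 7MB fits comfortably. Whatever the exact figure, whether 12MB buys you anything useful comes down to how the render target gets carved up.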
 
And yet even 4 years in, most MP titles still look better on 360, take RDR for instance, despite running with significant handicaps compared to the PS3 version (no compulsory install, some SKUs have no HDD, no Blu-ray for redundant assets, etc.).
Multiplatform titles were never a good way to judge hardware; I thought that was pretty well established. They will almost always have the leading platform as the target, which up until recently has been the 360 among the HD consoles.
If a platform is unusual in any way, compared to what developers are used to, it will suffer in a MP title.
 
As for nAo's comments, I respect him and his work, but there are far too many examples of 1st party developers utilizing SPUs for vertex processing to ignore. Heavenly Sword was a first generation title and probably pushed the tech a lot less than today's games.
I'd say - and this is full speculation - that those 2-2.5 million triangles were mostly rendered as crowds of enemies using heavy LOD, just a few polygons in a segmented character model with no self-shadows or skinning or any complex processing at all, located in a large open arena type of environment with very little occlusion and a simple directional/spot light as the sun. There are many games today that are doing a lot more... which is pretty much expected from third generation titles.

So, I'd modify his statement like this:
Depending on what the game is doing as vertex processing, RSX may or may not be limited on its own.

But then at what framerate? My 7900GT could render 10M polygons/frame, but performance would suffer.
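Just to put numbers on that (simple arithmetic, not a measured figure): 10M polygons per frame is 300M polygons/s at 30 fps and 600M polygons/s at 60 fps, so a polygons-per-frame claim doesn't say much until the target framerate is pinned down.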
 
I like how you basically came in to say 'we've done this a long time ago, and I'm too lazy to find the appropriate threads and I'm not really interested in this topic so I'm starting a new topic, but I'm also too lazy to start a new thread'! :LOL:

Your discussion of AI is understandable, but also something that was previously discussed. ;) Not that it isn't interesting stuff of course, and it hasn't been discussed nearly as extensively as graphics, for obvious reasons. Back at university, where, when I started, AI had just been promoted to a full four-year course with specialisation programmes, the definition of AI that stuck with me the most was: AI is whatever humans can do that computers can't.

We are very close to one another on this, but at the same time just a tiny ways apart in our definitions. Take for example your 'signal processing recognition' line, which for me would be two different ideas: the signal processing, and the signal recognition. I am ignoring the latter entirely, and focusing on the processing. The idea that given an input, and an algorithm/some code, Cell will be able to divine for you the proper interpretation of said signal input that you are looking for. The strength here is purely on Cell's speed, and in a notoriously parallel field at that; it's an ideal Cell use scenario. And though you would need that for a good AI on a macro level, in its own right I don't consider it AI since it is itself just an interpretation algorithm with the output preset rather than learned.

I do know where you're coming from on this totally, and we can agree to disagree in terms of the nomenclature; for me, as far as gaming goes, I do prefer to isolate 'AI' in the typical gaming sense from the console's strength in interpreting inputs, even if the latter might be even more so an achievement in terms of making calls on the part of the software.
 
And the transistor budget analysis is a waste of time IMO between RSX and Xenos;
Not at all! These consoles are defined by their unit costs, which are defined by their chip production costs, which are defined by process and transistor count. Comparing what two different GPUs with the same transistor budget can do has to be way up there in the Beyond3D raison d'être! If all transistors are created equal, RSX should have the same performance as Xenos, only with strengths and weaknesses in different areas. However, as it is, RSX is basically on the back foot in most things (is there anything it beats Xenos on in real-world use?). Hence the existence of this thread, wondering how much the SPUs are having to help out RSX, where if PS3 had Xenos, these SPUs wouldn't be needed.

Although granted, choice of GPU is a different topic to this one.
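To make the 'SPUs helping out RSX' idea a bit more concrete, here is a purely illustrative sketch of one job commonly described for the SPUs: rejecting triangles before the GPU ever sees them. This is plain C with made-up data layouts, not real SPU code or any particular studio's pipeline, just the shape of the idea.

[code]
#include <stddef.h>

/* Made-up data layouts, for illustration only. */
typedef struct { float x, y;  } Vec2;   /* post-transform screen-space position */
typedef struct { int a, b, c; } Tri;    /* indices into the vertex array        */

/* Signed area of the screen-space triangle: <= 0 means back-facing or
   degenerate with this winding convention. */
static float signed_area(Vec2 p0, Vec2 p1, Vec2 p2)
{
    return (p1.x - p0.x) * (p2.y - p0.y) - (p2.x - p0.x) * (p1.y - p0.y);
}

/* Compact the triangle list in place and return how many survive.
   On a real SPU this would run over DMA'd chunks of the buffers,
   but the logic is the same. */
size_t cull_triangles(const Vec2 *pos, Tri *tris, size_t count)
{
    size_t kept = 0;
    for (size_t i = 0; i < count; ++i) {
        if (signed_area(pos[tris[i].a], pos[tris[i].b], pos[tris[i].c]) > 0.0f)
            tris[kept++] = tris[i];
    }
    return kept;   /* the GPU only ever rasterises the survivors */
}
[/code]

The trade is exactly the one the thread title asks about: SPU time is spent so that RSX's vertex and setup units never touch the rejected triangles.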
 
Other ramifications include the architecture of the upcoming generation of hardware. Should a relatively stronger GPU be the choice or is it better to increase the number of general purpose SPU-like resources? Is the added freedom in graphics programming worth the extra development effort required or is it better to have more standard rendering engines and allocate more manpower for the gameplay and AI systems?
 
Other ramifications include the architecture of the upcoming generation of hardware. Should a relatively stronger GPU be the choice or is it better to increase the number of general purpose SPU-like resources? Is the added freedom in graphics programming worth the extra development effort required or is it better to have more standard rendering engines and allocate more manpower for the gameplay and AI systems?

It's an interesting question. In a sense, within the GPU that question seems to have been answered already, as even for the 360's GPU one of its benefits is precisely the ability to use the unified hardware for vertex or pixel shading and balance resources between them however you want.

In theory, having specialised hardware to do exactly what you need should always be faster. However, once your actual needs vary wildly between games/applications, generic hardware is going to pay off. It's a matter of profiling the averages for your use cases and the size of the standard deviation. The result may be that some costs are fixed and others aren't, and you'll design the hardware accordingly.
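As a toy illustration of that profiling idea (the numbers and the scenario are invented): if one pipeline stage's per-frame cost barely varies across the titles you sample, a fixed block for it is easy to justify; if the spread is large, generic units are the safer bet.

[code]
#include <math.h>
#include <stdio.h>

/* Hypothetical per-frame costs (ms) of one pipeline stage, sampled across titles. */
static const double cost_ms[] = { 2.1, 2.0, 2.3, 5.8, 1.9, 2.2, 6.4, 2.0 };
static const size_t n = sizeof cost_ms / sizeof cost_ms[0];

int main(void)
{
    double mean = 0.0, var = 0.0;
    for (size_t i = 0; i < n; ++i) mean += cost_ms[i];
    mean /= (double)n;
    for (size_t i = 0; i < n; ++i) var += (cost_ms[i] - mean) * (cost_ms[i] - mean);
    var /= (double)n;

    printf("mean %.2f ms, std dev %.2f ms\n", mean, sqrt(var));
    /* A std dev that is large relative to the mean argues for generic hardware;
       a tight spread makes dedicated hardware for this stage easier to justify. */
    return 0;
}
[/code]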

Personally, I'm all for more generic hardware, if only to make sure the rendering pipeline can develop more rapidly on the software front instead of being locked down by the hardware. In a world where increase in performance on the hardware side is dependent on going massively multi-core and stream processing, relying on software to make effective use of that is going to be more and more important anyway.

In that respect, the fact that the ability to use the SPUs to improve the rendering pipeline and the flexibility of the 360's GPU in that regard are both seen as strong points of the console hardware, or the changes from DX9 to DX11 on the PC side, seems a clear indication of where we're going. But we should never forget to profile use cases, and never overestimate the speed of software development, as Larrabee seems to have demonstrated abundantly by now.
 
I think you misunderstand me...

The current generation shows an interesting divide between the casual and the more hardcore games, but even with the latter there's no direct indication that investing significant resources in advanced graphics technology would be a viable strategy.
Sure, GOW3 has a competitive advantage because of its IQ, which is in turn enabled by the PS3's more flexible architecture. But did it really lead to better scores and sales (let's temporarily ignore the fact that all Sony 1st parties and maybe even others will be able to benefit from their efforts)? Could they have developed the same gameplay on a more conventional architecture with less money spent on the game's engine, thus leading to a better return on investment?
X360 on the other hand should allow for a more straightforward engine development, cheaper and faster if you will, and would most likely be able to create the very same gameplay.

Would it be better to develop a console that's very complex, very flexible, and allows for such trickery; or build one that's quite straightforward and does a lot of load balancing and stuff on its own, requiring less programmer time? Is the gain in graphics noticeable enough for the majority of the market?
Or would it be better to spend less on the development, or allocate these resources to anything else but graphics? We now have motion control as a default, and then there's quite a lot of room for gameplay innovation.
 
I'm not speaking from the perspective that there aren't jobs that need to be picked up by the SPUs. I'm saying, with these numbers in mind, that it doesn't have to eat much of the SPUs' time. I mean look what we're seeing from the top graphical games on one platform and the developers are saying they can go a good deal further. Meanwhile, there is a definite gap between those and the top graphical games from the other platform. I don't think the "no one's (not even 1st party devs) pushing the 360 hardware" argument can really hold up.

To be honest, it seems like 3rd parties push the 360 much further than 1st parties. Look at games like RE5, AC:B, GTA IV, RDR, and now Crysis 2 and RAGE, and compare them to Gears 2, probably still the benchmark for 360 exclusive graphics. They all eclipse Gears 2 in terms of not just looks but what is happening on screen at once and at what scale. It's no wonder MS made a new partnership with Crytek; it seems they have the engine and pedigree to push it further than what's possible with UE3, for example. And while it's not fair to compare hardware based on MP games, in this case, where Sony has top-tier developers in comparison with MS, it makes even less sense to compare them by exclusives.
 
It's an interesting question. In a sense, within the GPU that question seems to have been answered already, as even for the 360's GPU one of its benefits is precisely the ability to use the unified hardware for vertex or pixel shading and balance resources between them however you want.

In theory, having specialised hardware to do exactly what you need should always be faster. However, once your actual needs vary wildly between games/applications, generic hardware is going to pay off. It's a matter of profiling the averages for your use cases and the size of the standard deviation. The result may be that some costs are fixed and others aren't, and you'll design the hardware accordingly.

Personally, I'm all for more generic hardware, if only to make sure the rendering pipeline can develop more rapidly on the software front instead of being locked down by the hardware. In a world where increase in performance on the hardware side is dependent on going massively multi-core and stream processing, relying on software to make effective use of that is going to be more and more important anyway.

In that respect, the fact that the ability to use the SPUs to improve the rendering pipeline and the flexibility of the 360's GPU in that regard are both seen as strong points of the console hardware, or the changes from DX9 to DX11 on the PC side, seems a clear indication of where we're going. But we should never forget to profile use cases, and never overestimate the speed of software development, as Larrabee seems to have demonstrated abundantly by now.
OT
I agree, but I would still not dismiss Nintendo or Sony coming out, for better or worse, with some "outside of the box" thinking/product.
In the "next gen" discussions I tried to raise the possibility of a return of the fixed-function pipeline, and I also wondered in a thread how the PS3 would have looked based on an evolution of the PS2 hardware. It wasn't much of a success, as clearly people, for good reasons I'm sure, have their eyes set on how things are evolving in the PC realm.
I think Nintendo's choice for the 3DS is interesting, as it may give us a reference in perf per Watt/mm² for an evolved fixed-function pipeline.
/OT
 
Exactly. If you were to assume that yields were 'the same' 10MB to 12MB, then on a design basis it really probably would have been worth it for MS to go to 12MB. For myself, I believe that yields must've been falling off a cliff - or materially worsened - at 12MB vs 10MB.
I don't think yield numbers have ever been released, but I doubt Microsoft knew specific yield characteristics like that at the time the decision was made to go with 10MB. Usually these decisions are made prior to the process being production ready.
 
Would it be better to develop a console that's very complex, very flexible, and allows for such trickery; or build one that's quite straightforward and does a lot of load balancing and stuff on its own, requiring less programmer time? Is the gain in graphics noticeable enough for the majority of the market?
Or would it be better to spend less on the development, or allocate these resources to anything else but graphics? We now have motion control as a default, and then there's quite a lot of room for gameplay innovation.


If that is meant for gfx and other "minor" (under the gameplay label) portions of the game, I would say yes! I prefer a cheaper game with more content, potentially fewer bugs, and maybe even running faster.


On the other side, gameplay today is getting broader...
I mean, maybe the Cell's power/flexibility will give some headroom for Move/EyeToy/microphone uses that can significantly alter the feel or even the gameplay* of a game, instead of "just" even better gfx.

With that thought in mind, I prefer a better CPU than GPU (or flexibility vs raw power).


* For feel, maybe an animation system that makes you see yourself better; I remember seeing some pretty cool demos even just for EyeToy.
 
I don't know, who is actually pushing 360 hardware?
It's similar to last gen, where you had GOW, GT4, Shadow of the Colossus etc push the PS2 hardware to ridiculous limits, while the more powerful Xbox probably wasn't - at least not to the same degree. Except this time both machines have quite similar capabilities, so the extra effort might be what's putting the PS3 out in front.

It's hard to deny that Sony's studios are simply more technically capable. They have a lot more of them as well, plus they all share tech and some have ridiculous budgets (see KZ, GT5 etc.).

Sony have Team ICO, Polyphony Digital, Naughty Dog, Santa Monica, Zipper, Liverpool Studio, Evolution Studios, Guerrilla and Media Molecule, all very capable.

Then there are their 2nd party and PS3 exclusive devs like Insomniac (until now), Sucker Punch, Ready at Dawn, Eat Sleep Play, Quantic Dream and Kojima Productions.

There is simply no equivalent for the 360; I would say Rare, Remedy and maybe Bungie and Turn 10 have the technical chops.

It's also telling that the best looking games on 360 are often MP titles or from 3rd parties, like AC2, ME2, RDR, Gears etc.

And yet even 4 years in, most MP titles still look better on 360, take RDR for instance, despite running with significant handicaps compared to the PS3 version (no compulsory install, some SKUs have no HDD, no Blu-ray for redundant assets, etc.).

Then we have hints from Carmack and Crytek (often quickly retracted) and tantalising tidbits such as this one from Joker (too bad for NDAs):



http://forum.beyond3d.com/showthread.php?p=1430896#post1430896
The real question is who ISN'T actually pushing 360 hardware? There is MS documentation showing TOTAL usage of 2 to 3 cores in launch titles. Can you say the same in the other direction? It's hard to believe a machine that used to cost $200 more than the competition at launch, while being subsidized by about $300, is similarly capable. Blu-ray and HDMI 1.3 didn't cost $400 to $500. That's been proven.

What determines whether hardware is being pushed or not, to you? Is it your belief in what you think that hardware should be able to do? What makes you think that hardware that's easier to push wouldn't be pushed further than hardware that's harder to push? Why was the easier-to-push hardware overtaken so easily by the harder-to-push hardware? It doesn't make sense. If it doesn't add up, then it's usually not true. That's the number 1 rule in investigating anything.

You say that it might be extra effort pushing the PS3 out front, but the time between Uncharted 1 and Uncharted 2 was 18 months. However, the leap was a lot larger than any leap from a sequel on any other console. Again, what you say doesn't add up. That puts us back to investigation rule #1. I'm sure MS wouldn't waste money purchasing incapable developers as 1st party developers. Sony wouldn't do that either, yet there is a huge difference in the results. The big 1st party exclusives on both sides have big budgets (Alan Wake, Halo, Too Human, Killzone, God of War, GT, etc.), so that's not really the difference.

In spite of all these things being essentially even (or in the 360's favor), the gap between each platform's 1st party games continues to widen. Yes, I believe the Kool-Aid is too strong. The real question is this: using basic investigative techniques, which point of view is the real Kool-Aid? ;) More and more evidence is being laid out as time passes. I guess another question would be: how long can it continue to be dismissed/ignored?
 