Cell/CPU architectures as a GPU (Frostbite Spinoff)

As long as you define redundant to mean that the new algorithm in software is always better (where better can mean, e.g., faster, better IQ, or lower power use), which is extremely unlikely.
Sure. A lot of this is fairly abstract theorising, and it's not often that some hardware can't be repurposed for some other task, such as the whole field of GPGPU turning graphics hardware designed for pushing polygons into generic data processors. I suppose one real-world example is fixed function shader pipelines. Fixed, discrete vertex and pixel shaders mean that if one hasn't got work to do, it's sitting idle. Replace them instead with unified shaders and it's more efficient overall. Similarly a 'GPU' architecture that has fixed function processing structures (like processing batches of pixels instead of single pixels) will at times have either idle processing units or be doing redundant work, or be locked into a way of doing things. Like Xenos was designed with a tiled, forward renderer in mind, and now that deferred rendering is becoming popular that hardware doesn't flex so well to that role.

The question I have is, wherever you sit on the flexibility:performance balance, are there ways of doing things thanks to flexibility that gain performance? Larrabee failed, but then it was being used as a DX renderer. That's like taking the SaarCOR raytracer and getting it to run DX, which it'll be poop at. What would Larrabee have achieved if put in a closed box free from legacy support and given to devs for two years to create their own Larrabee specific renderer? We'll never have a practical reference because that'd be such a costly undertaking no-one will invest in its exploration, but it's something the whole industry needs to be able to evaluate somehow. (Or not, because it's all a business, and compatibility/legacy has tied that business into various standards and toolchains that a clean break cannot compete with.)
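As a toy illustration of the unified-shader point above, here's a minimal utilisation model; every number in it is made up purely for the example and isn't taken from any real GPU.

```python
# Toy utilization model for fixed vs unified shader pools.
# All numbers are illustrative, not measurements of any real GPU.

TOTAL_UNITS = 48            # total shader ALUs on the chip
VERTEX_SHARE = 0.20         # fraction of this frame's work that is vertex work
PIXEL_SHARE = 1.0 - VERTEX_SHARE

# Fixed split: the hardware dedicates a set ratio of units to each stage.
fixed_vertex_units = 16
fixed_pixel_units = TOTAL_UNITS - fixed_vertex_units

# Time for each pool to finish its share of the frame (work / units).
t_vertex = VERTEX_SHARE / fixed_vertex_units
t_pixel = PIXEL_SHARE / fixed_pixel_units
t_fixed = max(t_vertex, t_pixel)       # frame ends when the slower pool finishes

# Unified shaders: every unit can take whichever work is pending.
t_unified = (VERTEX_SHARE + PIXEL_SHARE) / TOTAL_UNITS

print(f"fixed split frame time  : {t_fixed:.5f} (vertex pool idle {1 - t_vertex / t_fixed:.0%} of the frame)")
print(f"unified frame time      : {t_unified:.5f}")
print(f"speedup from unification: {t_fixed / t_unified:.2f}x")
```

The imbalance flips the other way on vertex-heavy passes such as shadow map rendering, which is why a unified pool tends to win on average even though each individual unit is less specialised.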

Though TBH I don't really want to go into this topic any further at the moment. Firstly I want to see how Frostbite turns out on every platform - the story is one-sided at the moment thanks to relevant interesting GDC talks. Secondly it needs to be in the general architecture forums and not the console tech forum. This particular thread just had to be created now to move the topic from the Frostbite thread. ;)
 
Firstly I want to see how Frostbite turns out on every platform - the story is one-sided at the moment thanks to relevant interesting GDC talks. Secondly it needs to be in the general architecture forums and not the console tech forum. This particular thread just had to be created now to move the topic from the Frostbite thread. ;)

Yap, what we have discussed so far is mainly at the level of engine design and implementation. To make a real game, they have to put the engine to work on a real dataset. The workflow, the artists, and the special needs of some scenes will stress the entire system in unexpected ways. That's why I feel that they still have tons of work and experimentation to do.
 
Xenos is a compromise with big plus points and some serious minus points. It's no more a magic bullet or a better design than Cell, save that its dev tools make it easier.

...and look how it all turned out. Quality content from day one, tool and dev support from day one, etc, because Xenos did all the work. If a company has the ability to shove a totally customized cpu *and* gpu in there and somehow manage the financials and heat issues then power to them, go for it and get the best of both worlds. If not then screw the cpu and go nuts on the gpu. Personally I'd still screw the cpu even if it was possible and instead go with a kicking gpu and use the money saved by ditching an exotic cpu to put more ram in there.


Theoretically though, evaluating total system performance in terms of what is done, by looking at the games achieved, isn't a fair comparison, because games are built around a business that requires efficiency and carries a lot of legacy thinking.

This type of comment always comes up and I'll always ask the same thing, what makes people think that in year 6 games are still being done with legacy thinking on ps3? If they were they would look and run like utter crap. It's a fundamental issue that comes up repeatedly most likely because people are still confused as to why the ps3 hasn't pulled ahead of the 360 graphically, so they always fall back to "not using spu's", "legacy thinking", etc type arguments. This really is nonsense at this point. As I've said before on other posts, look back at what a pc game with a 7 series nvidia card and 512mb ram looked like back in the day, and look at ps3 games today and it will give you a clue as to how heavily the spu's are being used. Anyone still running on legacy thinking on ps3 is basically dead in the water at this point.



Very few devs can afford the luxury of exploring all the weird and wacky ways to re-engineer graphics pipelines and do novel stuff.

At this point in the lifecycle many do, and ironically, as I've also said in the past, it's the multi platform studios that will be best positioned to do that because they have the most at stake. That's been borne out with Crytek and Frostbite being ahead of everyone else.


That's quite possibly true for a console business proposition, but this thread is intended more to discuss what can be achieved on the hardware, irrespective of developer requirements. If by the end of the lifecycle the programmable system overtakes the fixed-function system, that shows that programmability is an enabler and gets more from your silicon budget, even if that's a bad choice for a fixed-hardware console that needs to satisfy good business.

If you want to discuss that just out of academic reasons then sure, but it's totally unrealistic. Having to wait 6 years for well funded rockstar developers or whoever to finally show the benefits of a design choice all but demonstrates how it was the wrong design choice all along. In this case it still hasn't proven itself even after 6 years, we'll have to wait and see if Frostbite 2 finally shows some advantage. I wonder what people will do if it turns out in the end to still have no advantage overall, even with Frostbite 2 engine. It will be funny to see if the Frostbite guys get hit with the "lazy dev" tag if after it's all said and done the 360 version ends up keeping pace with the ps3 version.


The question here isn't which is better to put in a console, but which provides the best graphical returns per measure of silicon - less powerful, more flexible designs; or more powerful, less flexible designs?

In my mind that is the easiest question in the world to answer, made even easier by the lab test known as the ps3. Put all your money in gpu, ram and tools and you win in both cases, both in a console war and in graphical return.


However, because of those designs, using the CPU is an option. If Sony had gone with x86 and a customised GPU, which presumably wouldn't be any more advanced than Xenos, would it be able to achieve the same level of results? We'll be able to look at 360's version of Frostbite 2 and see how it compares. :D

Only because they've become more programmable. ;)

I'd never have suggested anything Intel; it seems unlikely they will be put in a console anytime soon. But the ps3 would have graphically been better off if they put the same piece of crap cpu as the 360 has into their machine along with a better gpu. They probably would have won the console war that way as a side bonus. If they additionally ditched bluray and instead used that money saved to put more ram into the machine then they would have handily won this gen really quickly, and had significantly better graphical return than what you see now. Spending money on the cpu in my mind is the wrong choice for so many reasons, for business reasons, development reasons, and graphical return in the long run reasons. Plus remember that a pound of art talent is worth ten pounds of tech talent, so if you really just want graphical return then put more ram in there and let the artists do their thing. I really doubt you will ever see a gaming machine as imbalanced as the ps3 ever again, a situation where a gaming machine has a rockstar cpu combined with a geriatric gpu. I'd put my money on that, never gonna happen again, the lesson has been learned.
 
The irony of it all is that the CPU should be doing gameplay-related work: more complex AI and world simulation, allowing for larger levels and such. And we're never gonna find out if the Cell could have done that, because everyone needs the SPUs to keep pace in the graphics.
 
If you look at the KZ3 debug info, you'll see that at the same time (as occlusion culling, MLAA and other post-processing), the SPUs are being used to run audio, security, complex AI, animation, game logic and simulation, motion control, assist with 3D display, and enable larger levels. On top of that, in Europe and Japan, you can also run PlayTV to record video to HDD while the game is running.
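For anyone wondering how that many systems coexist on a handful of SPUs, the usual pattern is a shared job queue that every SPU pulls work from. The sketch below is a generic illustration of that pattern only; the job names and worker count are made up, and it is not Sony's actual job manager API.

```python
# Minimal sketch of a shared job queue feeding a fixed pool of SPU-like workers.
# Generic illustration only -- not the real first-party job manager API.
import queue
import threading

NUM_SPUS = 5  # e.g. five SPUs available to the game (hypothetical split)

jobs = queue.Queue()
for name in ["occlusion_culling", "mlaa_strip_0", "mlaa_strip_1",
             "audio_mix", "ai_pathfind", "animation_blend", "physics_step"]:
    jobs.put(name)

def spu_worker(spu_id: int) -> None:
    # Each worker pulls whatever job is next, so graphics and gameplay
    # systems interleave on the same units within a frame.
    while True:
        try:
            job = jobs.get_nowait()
        except queue.Empty:
            return
        print(f"SPU{spu_id} runs {job}")
        jobs.task_done()

workers = [threading.Thread(target=spu_worker, args=(i,)) for i in range(NUM_SPUS)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```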
 
I'd bet you anything the majority of their time is spent on the single purpose of graphics, and the minority is divided among all those other systems. Goes to show the relative importance.
 
This type of comment always comes up and I'll always ask the same thing, what makes people think that in year 6 games are still being done with legacy thinking on ps3? If they were they would look and run like utter crap. It's a fundamental issue that comes up repeatedly most likely because people are still confused as to why the ps3 hasn't pulled ahead of the 360 graphically, so they always fall back to "not using spu's", "legacy thinking", etc type arguments. This really is nonsense at this point. As I've said before on other posts, look back at what a pc game with a 7 series nvidia card and 512mb ram looked like back in the day, and look at ps3 games today and it will give you a clue as to how heavily the spu's are being used. Anyone still running on legacy thinking on ps3 is basically dead in the water at this point.

Yes and no. All the PS3 devs have to learn to exploit the SPUs. But it is also true that when criticising the PS3 architecture, many only thought of standard/common features like QAA vs MSAA (instead of trying to create an even better AA solution). In the early days, people also doubted or didn't believe in deferred shading on the SPUs. So I'd say there is room for growth for the developers along the way too.

At this point in the lifecycle many do, and ironically, as I've also said in the past, it's the multi platform studios that will be best positioned to do that because they have the most at stake. That's been borne out with Crytek and Frostbite being ahead of everyone else.

Yes and no. Multi-platform and exclusive developers both have high stakes. They simply chose different ways to express themselves. But MP developers have the additional overhead of maintaining platform parity, whereas exclusive developers may earn less.

If you want to discuss that just out of academic reasons then sure, but it's totally unrealistic. Having to wait 6 years for well funded rockstar developers or whoever to finally show the benefits of a design choice all but demonstrates how it was the wrong design choice all along. In this case it still hasn't proven itself even after 6 years, we'll have to wait and see if Frostbite 2 finally shows some advantage. I wonder what people will do if it turns out in the end to still have no advantage overall, even with Frostbite 2 engine. It will be funny to see if the Frostbite guys get hit with the "lazy dev" tag if after it's all said and done the 360 version ends up keeping pace with the ps3 version.

Then we know more about where the bottleneck is.

In my mind that is the easiest question in the world to answer, made even easier by the lab test known as the ps3. Put all your money in gpu, ram and tools and you win in both cases, both in a console war and in graphical return.

...

In the first place, the GPU can also acquire CPU-traits and become a combined CPU + GPU. But that may also mean that it'd be more troublesome to program (for more advanced stuff). Definitely agree that more RAM is better though.
 
I'd bet you anything the majority of their time is spent on the single purpose of graphics, and the minority is divided among all those other systems. Goes to show the relative importance.

Those areas are not neglected. As with memory and HDD, there are always more advanced needs for computing power. Even if the PS3 had a more powerful GPU, I'm sure we'd still come back to talk about wish lists and shortfalls.
 
The irony of it all is that the CPU should be doing gameplay-related work: more complex AI and world simulation, allowing for larger levels and such. And we're never gonna find out if the Cell could have done that, because everyone needs the SPUs to keep pace in the graphics.

That is really not true. We so far never found out in multi-platform titles because nobody could be bothered, basically. Insomniac used the RSX purely for graphics, and the SPUs for all sorts of other stuff. And as patsu said, the Killzone 2 debug info screen shows a tonne of systems running on SPUs that are typically GPU work. Plenty of other examples too, like Havok 4.5 optimised for SPUs showing some great physics performance improvements and SPUs being used for physics in general (I remember something like 5 times faster). Uncharted 2 also uses SPUs for stuff like sound occlusion. AI pathfinding is also typically SPU work, at the very least in just about all of Sony's titles. I think Housemarque used SPUs for managing collision detection for the thousands of objects that game can show at the same time (endless mode is the best example in that game, awesome stuff).

There are completely different reasons why we don't hear much about SPUs, and they have very little to do with there not being enough SPU time left. I'll wager that the type of AI and physics stuff employed in current games is still so embarrassingly primitive that the SPUs spend a trivial amount of time on them even using unoptimised C code. And particularly for multi-platform development, there's just no incentive to make the most of the strength of a particular platform as long as you cannot leverage that on other platforms.

I think that's why it is interesting to see if something will change here as PC developers start experimenting with using GPGPU type applications more with DirectX11, and then perhaps with some luck we'll see some of that transfer to the PS3 too, though I'm not incredibly hopeful.
 
Try to look at all that deferred shading on the SPU issue in a wider context...

DICE has accumulated uncounted man-years of experience in multiplatform programming for their previous titles on the PS3, plus there are all the various publications released by the entire industry to build upon. So it's not like someone could sit down with the machine on day one of the devkit's arrival and the first thing to code would be such a renderer. There's millions of dollars of investment and a lot of experience behind this achievement.

Also, it isn't something a developer would usually do just for the sake of it (okay, some programmers probably enjoy the challenge and seeing the results). They're literally forced to do this if they want to reach the 50+ million user base for their game, because there's probably no other way to get to parity with the other two versions.


And on top of this all, if Crytek somehow can't get the PS3 version of their engine to a similar parity, then it'll be all about their inability to do the job.

So doing deferred shading on the SPUs is at once a great achievement and a seriously worrying sign for every developer thinking about console gaming. This is what you have to do to compete, and it's not pretty.
 
I'll wager that the type of AI and physics stuff employed in current games is still so embarrassingly primitive that the SPUs spend a trivial amount of time on them even using unoptimised C code.

But that's exactly what I'm talking about. Every dev spends all their resources just to keep up with the graphics, and all other applications remain primitive. Where are those truly next-gen games that do more with these consoles compared to a PS2/Xbox? So there are more AI characters running around in Reach, but fundamentally we're stuck playing the same gameplay, it's just prettier, louder, and there's a bit more of everything. That is not what we've been promised, but it all went sideways with the graphics race.
 
That is really not true. We so far never found out in multi-platform titles because nobody could be bothered, basically. Insomniac used the RSX purely for graphics, and the SPUs for all sorts of other stuff. And as patsu said, the Killzone 2 debug info screen shows a tonne of systems running on SPUs that are typically GPU work. Plenty of other examples too, like Havok 4.5 optimised for SPUs showing some great physics performance improvements and SPUs being used for physics in general (I remember something like 5 times faster). Uncharted 2 also uses SPUs for stuff like sound occlusion. AI pathfinding is also typically SPU work, at the very least in just about all of Sony's titles. I think Housemarque used SPUs for managing collision detection for the thousands of objects that game can show at the same time (endless mode is the best example in that game, awesome stuff).

There are completely different reasons why we don't hear much about SPUs, and they have very little to do with there not being enough SPU time left. I'll wager that the type of AI and physics stuff employed in current games is still so embarrassingly primitive that the SPUs spend a trivial amount of time on them even using unoptimised C code. And particularly for multi-platform development, there's just no incentive to make the most of the strength of a particular platform as long as you cannot leverage that on other platforms.

I think that's why it is interesting to see if something will change here as PC developers start experimenting with using GPGPU type applications more with DirectX11, and then perhaps with some luck we'll see some of that transfer to the PS3 too, though I'm not incredibly hopeful.
I don't think that's even what the argument is. Cell (and even the old XCPU) HAS to do all that physics, sound, AI, simulation, etc. work. It's what a CPU is for. But what I think Laa Yosh and Joker said is: what would happen if you didn't need the SPUs to do shit in graphics-related work? Would it greatly improve world interaction, simulation, animations, AI, physics, etc.? I think it definitely would. SPUs are damn fast, especially for those things. Hell, they're even fast at graphics tasks, and this should be a piece of cake.

Problem is that some things, like geometry rendering and post-processing, because of RSX's old design, cursed Cell to dedicate a lot of time to work that could have been done by a more advanced GPU at that time. It's really a damn shame that ND uses ~30-35% of SPU time for vertex processing to make things easier for RSX. The above-mentioned things would benefit a lot from that chunk of time.
 
Looking at the future I'd say that programmable will end up winning, simply because there has to come a time where hardware performance outstrips demand.
I am not sure this is guaranteed.
There is a steady growth in the demand for more performance, higher resolutions, more dimensions, more features, etc.
There is no hard fundamental limit I'm aware of on current graphics or current display technologies, much less any new avenues or technologies that may come in the future.

There are stronger physical limits on the ability of hardware to increase performance.
Power alone is a very hard limiter for silicon. Transistor budgets double every node, but power consumption only decreases by 20-30% in the best case. Transistor counts may rise by 16x, but per-transistor power will not be 1/16 what it is now.
I think demand may outlast our ability to grow performance.

We are already very careful about how much of our chips are active at one time, since it is already impossible for a chip to hit 100% utilization of all its components.
We will need to be even more so in the future. If you can't turn on more than 1/4 of your chip at any instant, the order of magnitude that can be gained through specialization or orders of magnitude from fixed-function can look pretty tempting.
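A back-of-the-envelope version of that argument, using the transistor and power figures quoted above (the node count and the flat power budget are assumptions for illustration):

```python
# Rough dark-silicon arithmetic using the figures quoted above:
# transistor budget doubles per node, per-transistor power drops ~25% per node.
nodes = 4                              # four shrinks => 2**4 = 16x transistors
transistor_growth = 2 ** nodes         # 16x
per_transistor_power = 0.75 ** nodes   # ~0.32x of today's per-transistor power

# If the whole chip were switched on, total power would scale by:
full_chip_power = transistor_growth * per_transistor_power
print(f"relative power at 100% utilization: {full_chip_power:.1f}x")   # ~5.1x

# Holding the power budget flat, the fraction of the chip you can light up is:
active_fraction = 1 / full_chip_power
print(f"active fraction at constant power : {active_fraction:.0%}")    # ~20%
```

Which is roughly where the "can't turn on more than 1/4 of your chip" intuition comes from.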

Similarly a 'GPU' architecture that has fixed function processing structures (like processing batches of pixels instead of single pixels) will at times have either idle processing units or be doing redundant work, or be locked into a way of doing things.
Batching isn't a fixed-function design choice so much as a compromise in silicon complexity for more throughput.
All the throughput-oriented compute designs have some kind of SIMD basis.
Larrabee had a base granularity of 16, but due to the need to hide texturing latency, the realistic batch size would have been a multiple of 16, putting it in the same league as GPUs.
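A rough sketch of why the effective batch ends up being several SIMD-widths' worth of work in flight; the latency and ALU figures below are placeholders for illustration, not Larrabee measurements:

```python
import math

# Hypothetical numbers to show the shape of the trade-off, not Larrabee specifics.
simd_width = 16          # base SIMD granularity
texture_latency = 200    # cycles a group stalls waiting on a texture fetch
alu_work_per_pass = 50   # cycles of ALU work available between fetches

# To keep the vector unit busy during a fetch, you need enough independent
# 16-wide groups in flight to cover the latency with other groups' ALU work.
groups_in_flight = math.ceil(texture_latency / alu_work_per_pass) + 1
effective_batch = groups_in_flight * simd_width
print(f"groups in flight: {groups_in_flight}, effective batch: {effective_batch} elements")
```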

What would Larrabee have achieved if put in a closed box free from legacy support and given to devs for two years to create their own Larrabee specific renderer?
Should we ask Project Offset?
 
Yes and no. All the PS3 devs have to learn to exploit the SPUs. But it is also true that when criticising the PS3 architecture, many only thought of standard/common features like QAA vs MSAA (instead of trying to create an even better AA solution). In the early days, people also doubted or didn't believe in deferred shading on the SPUs. So I'd say there is room for growth for the developers along the way too.

The irony here is that mlaa which is so often touted on forums came to be not because of spu's but because msaa, which looks better than mlaa, is only realistically feasible on pc and 360. This is because rsx sucks at it. If back in the day they instead went with crappy cpu and better gpu then for the past six years you would have been enjoying msaa on all your games. Do you really feel it's been a better ride to have dealt with no aa or the ever appaling qaa for all these years, just to get mlaa on a handful of games in year 6? Note as well that a 4xmsaa capable gpu would have looked better than mlaa does today, that just makes the cpu value proposition look even worse.


Yes and no. Multi-platform and exclusive developers both have high stakes. They simply chose different ways to express themselves. But MP developers have the additional overhead of maintaining platform parity, whereas exclusive developers may earn less.

They do, but at this stage in the cycle multi platform studios have far higher stakes. Between sharing tech across multiple projects like EA, etc. do, and between trying to become the new standard middleware, the financial stakes for multi platform tech developers are far higher than they are for exclusive studios. If a platform exclusive studio makes some funky tech then great, it affects their bottom line and perhaps partly a few others that may share that tech. If something like Frostbite 2.0 hits with thunder then it will affect all of EA's bottom line since it will be used across many games. That's why at least to me it's not at all surprising that the best tech this late in the console cycle is not coming from a studio exclusive to whatever platform, it's from purely multiple platform studios.


Then we know more about where the bottleneck is.

Do we though? In your opinion, where do you think blame will be laid, on the hardware or on the dev studio? We already have that answer with Crytek; they have already been widely labelled as noobs, etc, in the console world, as has just about every other multi platform dev. So honestly, in your opinion, if Frostbite 2 ran just as well on 360, or perhaps even a touch better, what do you think the verdict will be as to where fault lies?


In the first place, the GPU can also acquire CPU-traits and become a combined CPU + GPU.

Yes that's exactly the point! The gpu morphs over time to the needs of the studios. Carmack back in the day influenced gpu development based on where he thought it should go. The stuff Frostbite, Crytek, Naughty Dog, etc, are doing today will influence the next gpu's (actually it already has). Then in the next machine you take the latest and most bombastic gpu available, shove it in the box until it's dated, rinse and repeat. That's the progression that gives you the best graphical bang for the buck, is least likely to make studios fold, gives you quality content right at console launch, and it still doesn't stop anyone from experimenting on cpu with more kooky stuff.
 
The irony here is that mlaa which is so often touted on forums came to be not because of spu's but because msaa, which looks better than mlaa, is only realistically feasible on pc and 360. This is because rsx sucks at it. If back in the day they instead went with crappy cpu and better gpu then for the past six years you would have been enjoying msaa on all your games. Do you really feel it's been a better ride to have dealt with no aa or the ever appaling qaa for all these years, just to get mlaa on a handful of games in year 6? Note as well that a 4xmsaa capable gpu would have looked better than mlaa does today, that just makes the cpu value proposition look even worse.

We would still be at 360's "sometimes yes, sometimes no" 4xMSAA ? Why is Halo using Temporal AA ?
As for 4xMSAA looking better than MLAA, I think it depends on the game. On paper the latter is superior in some scenes, and less so in subpixel edges. But it is great for deferred renderers.
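For reference, the reason MLAA can run on a CPU/SPU at all is that it's a pure post-process over the final colour buffer. Below is a heavily simplified sketch of the idea; real MLAA classifies edge shapes (L/Z/U patterns) and derives per-pixel coverage weights from their lengths, whereas this toy version only does luma edge detection and a fixed blend.

```python
import numpy as np

def toy_mlaa(img: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """Very simplified MLAA-like post-process over an HxWx3 float image in [0,1].

    Real MLAA measures edge patterns and derives coverage-based blend weights;
    this toy keeps only the first and last conceptual steps: detect luma
    discontinuities, then blend across them with a fixed weight.
    """
    luma = img @ np.array([0.299, 0.587, 0.114])

    # Pass 1: flag discontinuities against the pixel above and the pixel to the left.
    edge_up = np.zeros(luma.shape, dtype=bool)
    edge_left = np.zeros(luma.shape, dtype=bool)
    edge_up[1:, :] = np.abs(luma[1:, :] - luma[:-1, :]) > threshold
    edge_left[:, 1:] = np.abs(luma[:, 1:] - luma[:, :-1]) > threshold

    # Pass 2 (grossly simplified): fixed blend weight instead of shape-derived coverage.
    out = img.copy()
    w = 0.25
    up_mask = edge_up[1:, :]
    out[1:, :][up_mask] = (1 - w) * img[1:, :][up_mask] + w * img[:-1, :][up_mask]
    left_mask = edge_left[:, 1:]
    out[:, 1:][left_mask] = (1 - w) * out[:, 1:][left_mask] + w * img[:, :-1][left_mask]
    return out
```

Because it's all image-space math over regular strips of pixels it maps nicely onto SPU local store, and, as noted above, being software it can keep improving in a way a fixed MSAA resolve cannot.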

They do, but at this stage in the cycle multi platform studios have far higher stakes. Between sharing tech across multiple projects like EA, etc. do, and between trying to become the new standard middleware, the financial stakes for multi platform tech developers are far higher than they are for exclusive studios. If a platform exclusive studio makes some funky tech then great, it affects their bottom line and perhaps partly a few others that may share that tech. If something like Frostbite 2.0 hits with thunder then it will affect all of EA's bottom line since it will be used across many games. That's why at least to me it's not at all surprising that the best tech this late in the console cycle is not coming from a studio exclusive to whatever platform, it's from purely multiple platform studios.

You can argue it either way, Sony first and second parties share their tech with the entire PS3 developer base. Sony has a platform to support. So far, we have cheap to expensive techniques like logluv colorspace, culling, MLAA, stereoscopic 3D techniques, etc. from them. There may be more.

Do we though? In your opinion, where do you think blame will be laid, on the hardware or on the dev studio? We already have that answer with Crytek; they have already been widely labelled as noobs, etc, in the console world, as has just about every other multi platform dev. So honestly, in your opinion, if Frostbite 2 ran just as well on 360, or perhaps even a touch better, what do you think the verdict will be as to where fault lies?

Heh, why guess now ? We would know more later. That's the point right ?

Yes that's exactly the point! The gpu morphs over time to the needs of the studios. Carmack back in the day influenced gpu development based on where he thought it should go. The stuff Frostbite, Crytek, Naughty Dog, etc, are doing today will influence the next gpu's (actually it already has). Then in the next machine you take the latest and most bombastic gpu available, shove it in the box until it's dated, rinse and repeat. That's the progression that gives you the best graphical bang for the buck, is least likely to make studios fold, gives you quality content right at console launch, and it still doesn't stop anyone from experimenting on cpu with more kooky stuff.

Not really. Platform vendors and developers will try to maximize their resources/investment. They will figure out how to use CPU together with GPU one way or another. In one of the NGP slides, it noted that the NEON media engine in the CPU can share memory with the high performance, slightly customized mobile GPU. Perhaps some more interesting use can come out of it ? There are more than one way to slice the problem.
 
Try to look at all that deferred shading on the SPU issue in a wider context...

DICE has accumulated uncounted man-years of experience in multiplatform programming for their previous titles on the PS3, plus there are all the various publications released by the entire industry to build upon. So it's not like someone could sit down with the machine on day one of the devkit's arrival and the first thing to code would be such a renderer. There's millions of dollars of investment and a lot of experience behind this achievement.

Also, it isn't something a developer would usually do just for the sake of it (okay, some programmers probably enjoy the challenge and seeing the results). They're literally forced to do this if they want to reach the 50+ million user base for their game, because there's probably no other way to get to parity with the other two versions.

Yes they are forced to do it. But the good thing is they didn't just stop there. ^_^

And on top of this all, if Crytek somehow can't get the PS3 version of their engine to a similar parity, then it'll be all about their inability to do the job.

So doing deferred shading on the SPUs is at once a great achievement and a seriously worrying sign for every developer thinking about console gaming. This is what you have to do to compete, and it's not pretty.

They are in the business of cross-platform tools. *If* a large developer (or potential customer) can do it better than them, then yes, they may well lose that customer.
 
I don't think that's even what the argument is. Cell (and even the old XCPU) HAS to do all that physics, sound, AI, simulation, etc. work. It's what a CPU is for. But what I think Laa Yosh and Joker said is: what would happen if you didn't need the SPUs to do shit in graphics-related work? Would it greatly improve world interaction, simulation, animations, AI, physics, etc.? I think it definitely would. SPUs are damn fast, especially for those things. Hell, they're even fast at graphics tasks, and this should be a piece of cake.

The SPUs would be underutilized. They are there to also work on graphics from day one.

Problem is that some things, like geometry rendering and post-processing, because of RSX's old design, cursed Cell to dedicate a lot of time to work that could have been done by a more advanced GPU at that time. It's really a damn shame that ND uses ~30-35% of SPU time for vertex processing to make things easier for RSX. The above-mentioned things would benefit a lot from that chunk of time.

Even on DX11 hardware, DICE has occlusion culling running. It's a more efficient way to use your computing power because you don't have to process hidden objects. Not sure about the exact nature of the 30-35% of SPU time for vertex processing though. The environment may be very dense since the SPUs may cull better than Xenos (according to DICE's culling slides).

As for post-processing, if you want to quote them, Naughty Dog mentioned that they can get better results on the SPUs compared to RSX _and_ 360. The results are (more ?) mathematically correct, and fast enough. Why not use the SPUs then ?

EDIT: The other (and main) reason for DICE to use occlusion culling is destructible environment.
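For context, the occlusion-culling idea is conceptually simple: rasterize a few big occluders into a small software depth buffer, then reject any object whose bounding box cannot be closer than what is already stored there. The sketch below shows only the test half, with made-up buffer sizes and structure; it is an illustration of the technique, not DICE's actual code.

```python
import numpy as np

# Small software depth buffer, filled earlier by rasterizing a few big occluders.
# Sizes and the screen mapping are illustrative, not DICE's actual numbers.
DEPTH_W, DEPTH_H = 256, 128
occlusion_depth = np.full((DEPTH_H, DEPTH_W), np.inf, dtype=np.float32)

def box_visible(screen_min, screen_max, nearest_z) -> bool:
    """Conservative visibility test for one object's screen-space bounding box.

    screen_min/screen_max: (x, y) corners in occlusion-buffer pixel coordinates.
    nearest_z: the closest depth the object's bounds can possibly reach.
    Returns True if any covered depth sample is farther than nearest_z,
    i.e. the object might poke out from behind the occluders.
    """
    x0, y0 = max(0, int(screen_min[0])), max(0, int(screen_min[1]))
    x1, y1 = min(DEPTH_W, int(screen_max[0]) + 1), min(DEPTH_H, int(screen_max[1]) + 1)
    if x0 >= x1 or y0 >= y1:
        return False                       # fully off screen
    tile = occlusion_depth[y0:y1, x0:x1]
    return bool((tile > nearest_z).any())  # visible if the box can be in front anywhere

# Typical frame loop: only objects passing the test get pushed to the GPU.
# draw_list = [obj for obj in objects if box_visible(obj.smin, obj.smax, obj.near_z)]
```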
 
...and look how it all turned out. Quality content from day one, tool and dev support from day one, etc, because Xenos did all the work. If a company has the ability to shove a totally customized cpu *and* gpu in there and somehow manage the financials and heat issues then power to them, go for it and get the best of both worlds. If not then screw the cpu and go nuts on the gpu. Personally I'd still screw the cpu even if it was possible and instead go with a kicking gpu and use the money saved by ditching an exotic cpu to put more ram in there.
It depends on what you prefer. Some people prefer to have all the improvements at the beginning and no real improvement shown later on. That is usually terrible for a long-term lifecycle. Most, though, like to see continued improvements throughout the console lifecycle. That usually helps to sell more consoles towards the end of its lifecycle. People feel like they are getting top quality at bargain-basement prices. Can't you tell how the general mentality of people is shifting all around you at the halfway mark?



This type of comment always comes up and I'll always ask the same thing, what makes people think that in year 6 games are still being done with legacy thinking on ps3? If they were they would look and run like utter crap. It's a fundamental issue that comes up repeatedly most likely because people are still confused as to why the ps3 hasn't pulled ahead of the 360 graphically, so they always fall back to "not using spu's", "legacy thinking", etc type arguments. This really is nonsense at this point. As I've said before on other posts, look back at what a pc game with a 7 series nvidia card and 512mb ram looked like back in the day, and look at ps3 games today and it will give you a clue as to how heavily the spu's are being used. Anyone still running on legacy thinking on ps3 is basically dead in the water at this point.
There was an article about the 3 phases of coding on the PS3. I believe Mike Acton was speaking on this. The first phase was having everything or almost everything on the PPU with nothing to very little on the SPUs. The second phase was moderate usage of SPUs and some offloading of coding from the PPU. The third phase was light to no PPU usage for game code and heavy use of the SPUs. Sony's 1st party are on the 3rd phase, now. If you aren't on phase 3 or close to it at this point, it's legacy thinking to me. ND said the hardest part of taking full advantage of the Cell was to "keep all the plates spinning" (UC2 interview). Do you think any 3rd party devs have reached that point, yet? Until I see a Cell usage chart with associated jobs, I can't even say DICE is there. However, from their presentation, I applaud what they have done so far.

Personally, I don't understand all the "PS3 not pulling ahead of the 360 graphically" talk. I and most others seem to believe this happened some time ago. Most seem to believe the PS3 has pulled ahead in a number of categories (graphics, audio, A.I., and scale). I understand that you and some others don't subscribe to that, though.


I wonder what people will do if it turns out in the end to still have no advantage overall, even with Frostbite 2 engine. It will be funny to see if the Frostbite guys get hit with the "lazy dev" tag if after it's all said and done the 360 version ends up keeping pace with the ps3 version.
Based on the presentation, I don't think anyone would call DICE's effort "lazy". It doesn't mean they don't have a lot of room to improve their techniques on the PS3, however. That would still be true even if the 360 version ends up not keeping pace with the PS3 version.

But the ps3 would have graphically been better off if they put the same piece of crap cpu as the 360 has into their machine along with a better gpu. They probably would have won the console war that way as a side bonus. If they additionally ditched bluray and instead used that money saved to put more ram into the machine then they would have handily won this gen really quickly, and had significantly better graphical return than what you see now. Spending money on the cpu in my mind is the wrong choice for so many reasons, for business reasons, development reasons, and graphical return in the long run reasons. Plus remember that a pound of art talent is worth ten pounds of tech talent, so if you really just want graphical return then put more ram in there and let the artists do their thing. I really doubt you will ever see a gaming machine as imbalanced as the ps3 ever again, a situation where a gaming machine has a rockstar cpu combined with a geriatric gpu. I'd put my money on that, never gonna happen again, the lesson has been learned.
So you don't care for the better physics and audio the Cell affords games like UC2, Killzone 2 and 3? You wouldn't care for the additional space of Blu-ray, which makes these games easier and better for the end user to experience (fewer discs, no loading screens, better quality audio, etc)? Would you have taken the HDD out as a standard option, for more RAM, as well? Of course, that also takes away a company's incentive to subsidize the console as much. That probably means far less of a budget to work with for the design.

But that's exactly what I'm talking about. Every dev spends all their resources just to keep up with the graphics, and all other applications remain primitive. Where are those truly next-gen games that do more with these consoles compared to a PS2/Xbox? So there are more AI characters running around in Reach, but fundamentally we're stuck playing the same gameplay, it's just prettier, louder, and there's a bit more of everything. That is not what we've been promised, but it all went sideways with the graphics race.
Flexibility means you can choose what you wish to put your resources into. It's just like some games choosing to render at a lower resolution than 720p. The whole "they have to just to keep up with the graphics" part doesn't add up. It's highly unlikely they would have tried to improve these other areas anyway. The proof is in the 360 multiplatform games. Most of those have the 360 ahead in the graphics department, but there is zero improvement in any other areas. If your theory held water, there would be some advancement in "all other applications" on the 360. After all, they are supposed to be twiddling their thumbs while the PS3 version is struggling to meet parity, right? :)
 
That is why you invent new ways to do things, in addition to just optimizing existing approaches.

I don't think it is wise to generalize an approach without looking at the specific designs and implementations.

EDIT: In general, if the gain is small, it may also mean that you picked the wrong thing to optimize.

Well, I will say... to all those points: obviously.

Yes, in my case, optimizing was "inventing" a new way to do things... similar to something mentioned in the DICE GDC presentations about reducing dot product operations. My alternate algorithm essentially involved a bit more storage and more addition ops.

Yes, one shouldn't generalize too much; there are always exceptions. Nevertheless, the majority of my engineering experience shows that the KISS principle works best when seeking a balance of benefits. Sure, there may be compromises that are unacceptable given specific requirements.

The approach I picked was dictated by profiling tools that explicitly showed a large slice of CPU resource was spent on the existing algorithm in the multiply operations. It just so turns out that the optimization, though it produced better results, didn't deliver enough of a gain to be worth it. The gain would have been more worthwhile given a larger data size. Sometimes, optimization is more art and trial & error based. You have to try various options. Perhaps an algorithm with an exponentially degrading performance curve is as good as you'll get for the given problem and is acceptable given the data limits.
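Without knowing the specifics of that algorithm, the "a bit more storage and more addition ops" trade-off usually looks like classic strength reduction / forward differencing: when a dot product varies linearly along a sequence, you can cache its value plus a precomputed delta and update with one add per element. The snippet below is a generic illustration of that pattern under that assumption, not the poster's (or DICE's) actual code.

```python
# Generic strength-reduction example: a dot product n.(p0 + i*step) varies
# linearly in i, so it can be updated with one addition per element instead
# of a full multiply-heavy dot product each time. Illustrative only.

def dots_naive(n, p0, step, count):
    # count full dot products: 3 multiplies + 2 adds each.
    return [sum(n[k] * (p0[k] + i * step[k]) for k in range(3)) for i in range(count)]

def dots_incremental(n, p0, step, count):
    # Two dot products up front, then one addition per element
    # (the "bit more storage" is the cached value and the delta).
    value = sum(n[k] * p0[k] for k in range(3))
    delta = sum(n[k] * step[k] for k in range(3))
    out = []
    for _ in range(count):
        out.append(value)
        value += delta
    return out

n, p0, step = (0.0, 1.0, 0.5), (2.0, 3.0, 4.0), (0.1, 0.2, 0.3)
assert all(abs(a - b) < 1e-9 for a, b in zip(dots_naive(n, p0, step, 8),
                                             dots_incremental(n, p0, step, 8)))
```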
 
The irony here is that mlaa which is so often touted on forums came to be not because of spu's but because msaa, which looks better than mlaa, is only realistically feasible on pc and 360. This is because rsx sucks at it. If back in the day they instead went with crappy cpu and better gpu then for the past six years you would have been enjoying msaa on all your games.
Yet, pretty much all graphically acclaimed games on PS3 came with MSAA/QAA until GoW3.
What about 360 games? Gears? Halo *? There is Alan Wake if you can count it with that resolution.
edit: Oh yeah, I forgot about Crysis 2 not using MSAA either.

And yes MLAA can be better than MSAA (definitely better than 2x). Think shader aliasing, post resolve tone mapping etc.
It's also programmable thus improvable unlike MSAA.
I think once again, you are just trying too hard.
 