How come? It still takes a couple of ms to make 2xMSAA happen on RSX. Dropping it will give you a couple of ms back on RSX in return. Granted, it will take you ~4ms of SPU time to do it...
Not really IMHO; it's mostly about having more resources to sample the depth, and some framebuffer compression to avoid moving the same pixel 4 times.
Makes you wonder if upcoming GPU designs will change to better accommodate post-process AA solutions and drop support for MSAA in turn...
RSX is bandwidth limited compared to the XGPU, which is why it takes a measurable performance hit (and why no AA penalty was once claimed for the X360); ~26GB/sec is far from unlimited.
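Just to put rough numbers behind the "couple of ms" figure above, here's a quick back-of-the-envelope in C. Everything in it is an assumption for illustration only: 720p, 32-bit colour plus 32-bit depth per sample, an average overdraw of 2.5, one read and one write per sample, ~22.4GB/s on the GDDR3 side, and no framebuffer compression or texture traffic at all.
[code]
/* Back-of-the-envelope: extra framebuffer traffic 2xMSAA adds at 720p.
   All inputs are illustrative assumptions, not measured figures. */
#include <stdio.h>

int main(void)
{
    const double width = 1280.0, height = 720.0;
    const double bytes_per_sample = 4.0 + 4.0;   /* 32-bit colour + 32-bit depth */
    const double extra_samples = 1.0;            /* 2xMSAA = 1 extra sample per pixel */
    const double overdraw = 2.5;                 /* assumed average overdraw */
    const double rw_factor = 2.0;                /* read + write per sample touched */
    const double bandwidth = 22.4e9;             /* assumed GDDR3 bytes/s for RSX */

    double extra_bytes = width * height * bytes_per_sample
                       * extra_samples * overdraw * rw_factor;

    printf("extra traffic per frame: %.1f MB\n", extra_bytes / 1e6);
    printf("time at %.1f GB/s: %.2f ms\n", bandwidth / 1e9,
           (extra_bytes / bandwidth) * 1000.0);
    return 0;
}
[/code]
With those made-up inputs it works out to roughly 37MB of extra traffic and ~1.6ms per frame, which is at least in the same ballpark as the figures quoted above; change the assumptions and the answer moves accordingly.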
Well, the heart of the PlayStation family is actually the GPU anyway. I mean, with PS2, Sony actually made the GS first, before the EE. The GS was pretty much done before they contacted Toshiba to work on the EE. That's probably why the GS was so outdated by the time the PS2 was released. So I wouldn't be surprised if Sony had their own GPU ready for PS3 too, before they went to IBM and Toshiba for Cell. Sony invested in eDRAM, and it wasn't even used for PS3 in the end. Their 32MB GS was able to output 1080; they probably had its successor for PS3 in mind before they went with Cell. They probably just wanted to put the power of the GSCube into PS3. But I guess the world had moved on by that point.
Okay, that's a fair argument, but are you suggesting the only reason we got the 7800 at that time instead of the 8800 is because nVidia/ATi didn't have enough money to invest, and that with more money they could have designed a whole generation ahead of current thinking? I don't believe any research can be accelerated just by investing money. I don't believe that if someone had handed nVidia a trillion dollars in 2000 they'd have produced the GeForce 400 series architecture. There are limits in understanding that only come with experience, and I'm not sure how much better a GPU could have been designed and manufactured just by throwing more investment at it.
Take what's learned this gen and keep it all in mind on the gpu for next gen. Look at the stuff Dice is implementing. On old gpu hardware they have to turn to cpu for help, which is less efficient but they have no choice. However, newer hardware supports what they have in mind, and hence Frostbite 2.0 will be nicely gpu-assisted on current gpu hardware.
Custom forms of aa will be next in line to get hardware assist on the gpu; it's inevitable. That's the best way to go about it, because letting hundreds of stream processors on the gpu handle all the loads automatically will always be better than having coders manually juggle it all in a cpu + gpu setup. Look at how it's handled on ps3. You've all seen 'jts' come up when it comes to coding on the ps3; that's just "jump to self", which basically tells rsx to wait until an spu tells it to keep going (there's a rough sketch of the idea below). It's the fundamental way to sync spu+gpu on ps3, and it's also basically a gpu stall. If you hit a jts, which you inevitably do, then you are wasting gpu processing time. I don't doubt that the different systems made to sync spu and gpu are clever, but they are not the optimal way to go, nor are they a good use of developer time.
It's much better to let the devs just think up the clever tasks, hurl them at a proper gpu with its bank of processors and let the machine handle all scheduling. At least that's my thoughts on it.
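For anyone who hasn't run into it, here's a conceptual C sketch of the 'jts' trick mentioned a couple of posts up. The command encoding, the names, and the way the buffer is mapped are all invented for illustration; this is not real libgcm code, just the shape of the idea: the GPU is parked on a jump that targets its own address, and an SPU later patches that word so the GPU can move on.
[code]
/* Conceptual sketch of a "jump to self" (JTS) stall in a GPU command buffer.
   The encoding and names are made up for illustration; real RSX command
   formats and libgcm calls are not shown. */
#include <stdint.h>

/* Hypothetical jump-command encoding: high bits = opcode, low bits = target. */
#define CMD_JUMP(addr)  (0x20000000u | ((addr) & 0x1FFFFFFFu))

/* Command buffer in memory visible to both the GPU and the SPUs;
   assume it is mapped and initialised elsewhere. */
volatile uint32_t *push_buffer;

/* Producer side: park the GPU by writing a jump whose target is its own slot. */
uint32_t insert_jts(uint32_t slot)
{
    push_buffer[slot] = CMD_JUMP(slot * sizeof(uint32_t));  /* GPU spins here */
    return slot;
}

/* SPU side (conceptually): once the SPU job is done, patch the word so the
   jump lands on the next command and the GPU resumes consuming the buffer. */
void release_jts(uint32_t slot)
{
    push_buffer[slot] = CMD_JUMP((slot + 1) * sizeof(uint32_t));
    /* On real hardware this write would go through the proper DMA/atomic
       path with fencing; that detail is omitted here. */
}
[/code]
While that word still points at itself the GPU is doing nothing useful, which is exactly the stall being described above.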
It's easier... which is great, but not necessarily better. It depends on the whole package (memory architecture, tools, GPU performance, CPU characteristics, etc.).
No, which is one argument for programmability. The moment any fixed hardware becomes redundant thanks to new algorithms, it's a waste of silicon, whereas truly programmable hardware can always be used to do whatever is asked.
Now, since MLAA is being done on the SPUs instead of MSAA on the GPU (from my understanding at least), can the silicon that is "reserved" for MSAA be used for other stuff? Or Quincunx, or whatever it's called, that the RSX does instead...
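For what it's worth, the SPU MLAA in question is a post-process over the finished colour buffer, so it doesn't need the MSAA hardware at all. Below is a toy C sketch of just its first step, marking luma discontinuities; the real SPU implementation is heavily vectorised and works on tiles DMA'd into local store, so treat this purely as an illustration of the kind of work involved, not how it's actually written.
[code]
/* Toy first pass of an MLAA-style post-process: flag pixels whose luma
   differs enough from their right/bottom neighbours.  Scalar and
   unoptimised on purpose; only the idea matters here. */
#include <stdint.h>
#include <math.h>

static float luma(uint32_t rgba)
{
    float r = ((rgba >> 24) & 0xFF) / 255.0f;
    float g = ((rgba >> 16) & 0xFF) / 255.0f;
    float b = ((rgba >>  8) & 0xFF) / 255.0f;
    return 0.2126f * r + 0.7152f * g + 0.0722f * b;
}

/* edges[i] gets bit 0 set for an edge against the right neighbour and
   bit 1 for an edge against the bottom neighbour. */
void detect_edges(const uint32_t *colour, uint8_t *edges,
                  int width, int height, float threshold)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int   i = y * width + x;
            float l = luma(colour[i]);
            uint8_t e = 0;
            if (x + 1 < width  && fabsf(l - luma(colour[i + 1]))     > threshold) e |= 1;
            if (y + 1 < height && fabsf(l - luma(colour[i + width])) > threshold) e |= 2;
            edges[i] = e;
        }
    }
}
[/code]
The pattern-finding and blending passes that follow are deliberately left out; this is only meant to show that the technique is a plain image-space job that any programmable processor can run.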
Xenos is a compromise with big plus points and some serious minus points. It's no more a magic bullet or better design than Cell, save that its dev tools make it easier. And we can't really compare the whole systems as better or worse, much as fanboys want to, because we never have ideal implementations of the same game/engine to compare. Take any game that runs better on 360 than PS3 (of which there are many!); we can't say for sure that the developers are using PS3's system design optimally. How many devs or engines are using DICE's Cell-based shading? If that ends up being a big win, it could be that overall Cell+RSX gets more done than Xenon+Xenos.
A GT400 back then would have been nice. But no, not that far; however, they could have implemented features back then that would still be standouts today. Look at Xenos as an example: its customization to support eDRAM and AA paid off big time, even if it may have caused some other headaches along the way.
That's quite possibly true for a console business proposition, but this thread is intended more to discuss what can be achieved on the hardware, irrespective of developer requirements. If by the end of the lifecycle the programmable system overtakes the fixed-function system, that shows that programmability is an enabler and gets more from your silicon budget, even if that's a bad choice for a fixed-hardware console that needs to satisfy good business.
The general way to go about this is to slap the biggest, baddest, most customized gpu you can possibly think of in there at the time, mate it with whatever cpu, and ride it out. That gives you the benefit of a simple pipeline and tools, and gives you great hardware support from day one.
However, because of those designs, using the CPU is an option. If Sony had gone with x86 and a customised GPU, which presumably wouldn't be any more advanced than Xenos, would it be able to achieve the same level of results? We'll be able to look at 360's version of Frostbite 2 and see how it compares.
Take what's learned this gen and keep it all in mind on the gpu for next gen. Look at the stuff Dice is implementing. On old gpu hardware they have to turn to cpu for help, which is less efficient but they have no choice.
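As a concrete example of the kind of job that can live on either side of that fence, here's a toy C sketch of binning point lights into screen tiles, the sort of work a tiled/deferred renderer can run as CPU/SPU jobs on current consoles or as a compute pass on newer GPUs. All the structures, sizes, and limits here are invented for illustration; this is not DICE's code.
[code]
/* Toy tiled light binning: for each 16x16 screen tile, collect the indices
   of the point lights that could affect it, so the shading pass only reads
   those.  Everything here is illustrative. */
#include <math.h>

#define TILE_SIZE     16   /* tile size in pixels (assumed) */
#define MAX_PER_TILE  32   /* arbitrary cap for the sketch  */

typedef struct { float x, y, radius_px; } Light;   /* already projected to screen space */
typedef struct { int count; int idx[MAX_PER_TILE]; } TileBin;

void bin_lights(const Light *lights, int num_lights,
                TileBin *bins, int tiles_x, int tiles_y)
{
    for (int t = 0; t < tiles_x * tiles_y; ++t)
        bins[t].count = 0;

    for (int i = 0; i < num_lights; ++i) {
        /* Conservative screen-space bounds of the light's area of influence. */
        int x0 = (int)floorf((lights[i].x - lights[i].radius_px) / TILE_SIZE);
        int x1 = (int)floorf((lights[i].x + lights[i].radius_px) / TILE_SIZE);
        int y0 = (int)floorf((lights[i].y - lights[i].radius_px) / TILE_SIZE);
        int y1 = (int)floorf((lights[i].y + lights[i].radius_px) / TILE_SIZE);
        if (x0 < 0) x0 = 0;
        if (y0 < 0) y0 = 0;
        if (x1 >= tiles_x) x1 = tiles_x - 1;
        if (y1 >= tiles_y) y1 = tiles_y - 1;

        for (int ty = y0; ty <= y1; ++ty)
            for (int tx = x0; tx <= x1; ++tx) {
                TileBin *bin = &bins[ty * tiles_x + tx];
                if (bin->count < MAX_PER_TILE)
                    bin->idx[bin->count++] = i;
            }
    }
}
[/code]
Whether that loop runs on SPUs, on an x86 core, or as a GPU compute pass is an implementation detail, which is really the comparison being asked about above.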
Only because they've become more programmable.
It's much better to let the devs just think up the clever tasks, hurl them at a proper gpu with its bank of processors and let the machine handle all scheduling. At least that's my thoughts on it.
I guess Intel faced the same problem with Larrabee: it takes a lot of time and effort to get the thing to perform correctly.
One of the biggest problems with using the exotic choice of Cell and RSX is that it has taken so long to really see the benefits. In a competitive business, it is just as important to have impressive results at launch as it is to have continuing progress 6 years later.
What happens if the consumer doesn't buy into your product at launch because you are betting that, in the long run, the product will yield impressive results? I think the only thing that saved the PS3 this generation was the millions of loyal fans from the PS1/PS2 era; they provided the goodwill and time to invest in further development.
I distinctly remember the "wait for (game X) to be released" refrain.
As long as you define "redundant" to mean that the new algorithm in software is always better (where better can mean, e.g., faster, better IQ, or lower power use), which is extremely unlikely.
No, which is one argument for programmability. The moment any fixed hardware becomes redundant thanks to new algorithms, it's a waste of silicon, whereas truly programmable hardware can always be used to do whatever is asked.
Well, they would have had an extra year's development time and around 30-40m additional transistors to play with (assuming the CPUs had equal transistor counts). I'm guessing they would have had more money to pour into GPU development too. So you would have to take all that into account when looking at performance on the 360.