How much work must the SPUs do to compensate for the RSX's lack of power?

I believe Epic said to go with 512MB of RAM instead of 256MB.

Which was pretty much a no-brainer and I think many other devs were pushing for it too.
It might have been because Gears was the game at the most advanced stage of development, so Epic was the one able to show off the dramatic difference in texture resolution at the time.
 
Every Xbox 360 game uses eDRAM regardless of AA implementation. Also, eDRAM was not a cost-saving measure; it's what the designers felt was necessary to achieve the highest performance. The cost-saving measure was only having 10 MB and requiring tiling for AA at 720p. 4xAA at 480p fits perfectly.
It's a cost-saving measure in that it lets you have a smaller bus on the PCB. And of course every game uses it; it has to in order to get anything on the screen.
The advantages it gives you are there, but it's not nearly as useful as, say, the PS2's eDRAM or the Dreamcast's 32×32 tile buffer, and it comes with a whole lot of negatives. You need to split the rendering up if you are going to render even 720p, and the stuff you need to alpha-blend and stencil still needs to be sent to the buffer. The AA isn't that big an advantage either, as various compression schemes in "normal" setups have brought the hit down to very acceptable levels.
The very fact that the GoW series doesn't use the AA feature, but a z-buffer method they came up with themselves, is telling.
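For what it's worth, the tile counts above are easy to sanity-check. A minimal sketch (Python), assuming 32-bit colour plus 32-bit depth/stencil per sample; real games mix formats, so treat the numbers as illustrative only:

EDRAM_BYTES = 10 * 1024 * 1024  # the 10 MB daughter die

def tiles_needed(width, height, msaa_samples, bytes_per_sample=8):
    """Tiles required to fit colour + depth for one render target in eDRAM."""
    total = width * height * msaa_samples * bytes_per_sample
    return -(-total // EDRAM_BYTES)  # ceiling division

for w, h, aa in [(1280, 720, 1), (1280, 720, 2), (1280, 720, 4), (640, 480, 4)]:
    mb = w * h * aa * 8 / 2**20
    print(f"{w}x{h} {aa}xAA: {mb:5.1f} MB -> {tiles_needed(w, h, aa)} tile(s)")

This reproduces the claims above: 720p fits in one tile without AA, but needs 2 tiles at 2xAA and 3 at 4xAA, while 640x480 with 4xAA (about 9.4 MB) squeezes into a single tile.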
 
To be frank, I don't really consider Epic to be the best or most technically advanced developer around. What they do on the 360 isn't really representative of its capabilities; they have a damn good art team and probably pretty efficient production tools, and that makes their games look good.
 
Hm, I wonder if PS3 devs only optimize for Cell nowadays?
Isn't it at least as worthwhile to find tricks and optimizations for poor ol' RSX? We only talk about Cell... is RSX maxed out, or what?
 
To be frank, I don't really consider Epic to be the best or most technically advanced developer around. What they do on the 360 isn't really representative of its capabilities; they have a damn good art team and probably pretty efficient production tools, and that makes their games look good.

Could you name some games, though, that you think are doing something extra special technically?
 
Also, all the 60 fps titles that look good: Forza 3, MW1-2, Rage. Or games pushing 4xAA, like Blur, Joker's sports games, and Sebbi's Trials HD.

Stuff that's complicated and hard to do technically won't necessarily look the best at first glance.
And I'm not saying that Epic's stuff isn't good; it's just that their work shouldn't be the measure of what the 360 can do.
Not to mention that I recall something about deferred shadow rendering being the actual reason for Gears' and UE3's lack of AA in general, and not the architecture itself.
 
That was proven looong ago to be FUD. Yes, the copy happens, but copying the buffer over to CPU RAM has little impact on rendering (see the back-of-the-envelope sketch at the end of this post).

RSX has ca. 1.25× the transistor budget of Xenos if you leave out the daughter die's eDRAM cells. Nvidia would have to be incredibly bad engineers for that not to make a difference.

The eDRAM die is mainly a cost-saving measure. It's nice for a few things, but the features it offers are far from free. When some of the most high-profile exclusive and first-party games on the platform jump through hoops to use, or not use, it, something is wrong.
As I already said, there can be many reasons other than technical superiority why some games look slightly better on 360.
What matters is that PS3 beat 360 pretty thoroughly when comparing the top games on either platform. It should; it's a year younger, and its first hardware iteration didn't look like the components were thrown in with a shovel in the mid-eighties.
I would like to see what numbers the poster who came up with the 70-80% figure used, because it can't be the pretty credible Wikipedia numbers I'm looking at, where RSX comes out on top more often than not.
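On the buffer-copy point above, a minimal back-of-the-envelope sketch (Python); the link bandwidth here is a stand-in assumption, not an official RSX figure:

def copy_cost_ms(width, height, bytes_per_pixel, bandwidth_gb_per_s):
    """Milliseconds to move one render target over a link of the given speed."""
    total_bytes = width * height * bytes_per_pixel
    return total_bytes / (bandwidth_gb_per_s * 1e9) * 1000

# A 1280x720 RGBA8 buffer (~3.7 MB) over an assumed 4 GB/s link:
print(f"{copy_cost_ms(1280, 720, 4, 4.0):.2f} ms")  # ~0.92 ms

Even at half that assumed bandwidth, the copy stays under 2 ms of a 33 ms (30 fps) frame, which is why it needn't cripple rendering.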

Pretty darn debatable at this point. A year ago, maybe, but stuff like Crysis 2 and Bulletstorm looks as good as any game out there. "Thoroughly" is a total misnomer. Games like Reach, Gears 3, Brink, and Rage also look really hot. Anyway, the best titles on PS3 don't prove anything regarding this topic, because the SPUs could be used to get them there. In other words, while a GTX470 may be 10% faster in PC games, I suspect that if they were both in consoles the HD5870 would likely trounce it, all at 50% fewer transistors.

1.25× the transistor budget means nothing. Current Nvidia desktop GPUs have 1.5× the transistors of ATI's for something like 10% more performance (GTX480 vs HD5870). At least in gaming, it's not hard to believe at all that ATI's engineers are much better than Nvidia's. If you don't believe that, ask yourself who you want in the PS4: ATI or Nvidia? The way Nvidia is burning up transistors chasing GPGPU, I don't think your answer is the latter. Consider that ATI is currently getting 2.72 teraflops out of 2 billion transistors, while Nvidia is getting 1.35 teraflops out of 3 billion. Of course flops don't tell the whole story, but they do tell an important part of it, and I have a feeling those ATI flops would be harnessed a lot better in a console, where the software is tailored to the hardware, than they currently are on PC, where it's vice versa. In other words, where the GTX470 is currently 10% faster in PC games, I suspect the HD5870 would trounce it if they were in competing consoles, all at 50% fewer transistors.
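For reference, the arithmetic spelled out: the cited peaks reconstructed from public ALU counts and clocks, using the post's round transistor counts (actual die figures differ slightly):

# Peak single-precision FLOPS = ALUs x 2 ops/clock (MAD/FMA) x clock
hd5870 = 1600 * 2 * 0.850e9  # 1600 ALUs at 850 MHz -> 2.72 TFLOPS
gtx480 = 480 * 2 * 1.401e9   # 480 cores at 1401 MHz -> 1.35 TFLOPS

for name, flops, transistors in [("HD5870", hd5870, 2e9), ("GTX480", gtx480, 3e9)]:
    print(f"{name}: {flops / 1e12:.2f} TFLOPS, {flops / transistors:.0f} FLOPS/transistor")

By these numbers ATI gets roughly three times the peak FLOPS per transistor, which is the gap the post is pointing at; whether that peak is reachable in real workloads is the contested part.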
 
RSX and Xenos are totally different architectures. I thought it was pretty well known not to compare them solely by specs and transistor count; I mean, unified shaders alone make a very big difference in Xenos's favour, and it's also considerably more efficient. And I don't know why people are dismissing eDRAM. It seems to be quite an advantage when you use it right; people just have to see how well Capcom uses it in its games. If somebody has some kind of comparison between desktop GPUs with old shader models and unified ones, it would be nice to see the performance difference between them.
 
Pretty darn debatable at this point. A year ago, maybe, but stuff like Crysis 2 and Bulletstorm looks as good as any game out there. But a lot of the time the media and the internet don't seem to want to acknowledge anything but PS3 exclusives for best graphics. "Thoroughly" is a total misnomer. Games like Reach, Gears 3, Brink, and Rage also look really hot. Anyway, the best titles on PS3 don't prove anything regarding this topic, because the SPUs could be used to get them there.

1.25× the transistor budget means nothing. Current Nvidia desktop GPUs have 1.5× the transistors of ATI's for something like 10% more performance (GTX480 vs HD5870). At least in gaming, it's not hard to believe at all that ATI's engineers are much better than Nvidia's. If you don't believe that, ask yourself who you want in the PS4: ATI or Nvidia? The way Nvidia is burning up transistors chasing GPGPU, I don't think your answer is the latter.

I can't believe I just read that first paragraph! Most people aren't visually stupid; in other words, big differences stand out. A lot of people see and hear the differences these first-party PS3 games bring. Games like Brink are nice-looking, but they're still missing many graphical features KZ2 has. I'm not going to say something looks on par or better when I know what's missing. And I'm not gullible enough to believe that the people who set this technical bar in the past haven't severely enhanced their graphical techniques over this time period (not with so much headroom and know-how left to explore on the platform).

One of the things I learned this gen is that everything is debatable, even the obvious. Things that were obvious by looking at them last gen seem to be debatable this gen. *sigh*
 
Pretty darn debatable at this point. A year ago, maybe, but stuff like Crysis 2 and Bulletstorm looks as good as any game out there. "Thoroughly" is a total misnomer. Games like Reach, Gears 3, Brink, and Rage also look really hot. Anyway, the best titles on PS3 don't prove anything regarding this topic, because the SPUs could be used to get them there. In other words, while a GTX470 may be 10% faster in PC games, I suspect that if they were both in consoles the HD5870 would likely trounce it, all at 50% fewer transistors.

1.25× the transistor budget means nothing. Current Nvidia desktop GPUs have 1.5× the transistors of ATI's for something like 10% more performance (GTX480 vs HD5870). At least in gaming, it's not hard to believe at all that ATI's engineers are much better than Nvidia's. If you don't believe that, ask yourself who you want in the PS4: ATI or Nvidia? The way Nvidia is burning up transistors chasing GPGPU, I don't think your answer is the latter. Consider that ATI is currently getting 2.72 teraflops out of 2 billion transistors, while Nvidia is getting 1.35 teraflops out of 3 billion. Of course flops don't tell the whole story, but they do tell an important part of it, and I have a feeling those ATI flops would be harnessed a lot better in a console, where the software is tailored to the hardware, than they currently are on PC, where it's vice versa. In other words, where the GTX470 is currently 10% faster in PC games, I suspect the HD5870 would trounce it if they were in competing consoles, all at 50% fewer transistors.
FLOPS in GPUs/VPUs tell as little about performance as the contrast ratios of LCD screens. It's such a malleable measure, much more so with GPUs than with CPUs.
The 10% number can be, and most likely is, a result of the APIs eating some of the advantages and differences between the two competitors. For a real estimate you have to look at a clean, real test... such as in a console, where there are relatively few and light software layers. Ask yourself why manufacturers would still choose Nvidia if they made worse products than ATI. Apple especially seems very keen on Nvidia products recently.
 
Because it's a conversation that needs more information to answer than anyone here who's allowed to talk has, and in place of real facts people pull in crazy arguments and platform allegiances and get all emotional. Such threads never work, hence they're taboo. Plus, this thread is about SPUs supporting RSX, not about how much better RSX is(n't) than Xenos.
 
Hm, I wonder if PS3 devs only optimize for Cell nowadays?
Isn't it at least as worthwhile to find tricks and optimizations for poor ol' RSX? We only talk about Cell... is RSX maxed out, or what?

Cell itself is not covered by (a lot of) NDA. RSX is.
So talking about specific RSX optimizations is out.
 
Cell itself is not covered by (a lot of) NDA. RSX is.
So talking about specific RSX optimizations is out.

Thanks for the answer, T.B.!

EDIT: Although I now know that I won't get an answer, I really would like to know which games are kings of RSX efficiency, and how those games compare to the PS3's top tech games!
Although this has been reiterated to death on this forum, it still impresses me how Ninja Theory managed the tech for HS.
Regarding the question of this thread, it seems to be a game that shows you can do a lot with RSX on its own, without relying too much on the SPUs for rendering (plus all the graphics tech knowledge and PS3 memory that was still missing back then).
 
RSX and Xenos are totally different architectures. I thought it was pretty well known not to compare them solely by specs and transistor count; I mean, unified shaders alone make a very big difference in Xenos's favour, and it's also considerably more efficient. And I don't know why people are dismissing eDRAM. It seems to be quite an advantage when you use it right; people just have to see how well Capcom uses it in its games. If somebody has some kind of comparison between desktop GPUs with old shader models and unified ones, it would be nice to see the performance difference between them.
Isn't it the unified shaders that make Xenos more efficient, and thus THE big difference? Also, when looking at the pixel and vertex capabilities of Xenos (compared to RSX), isn't EVERY pipeline counted twice? Basically, all 48 shader pipelines are counted for vertex power, which would leave zero pixel-shading power, right? The same goes for the pixel power, correct? In other words, you can't do both at the same time. When talking about things happening in parallel (SPUs helping RSX / handling RSX jobs), would that be an advantage or a disadvantage? And why?
 
Isn't it the unified shaders that make Xenos more efficient, and thus THE big difference? Also, when looking at the pixel and vertex capabilities of Xenos (compared to RSX), isn't EVERY pipeline counted twice? Basically, all 48 shader pipelines are counted for vertex power, which would leave zero pixel-shading power, right? The same goes for the pixel power, correct? In other words, you can't do both at the same time. When talking about things happening in parallel (SPUs helping RSX / handling RSX jobs), would that be an advantage or a disadvantage? And why?

Yeah, sorry, I meant that. You can do both at the same time; that's why it is so efficient. If the scene demands more pixel shading, more ALUs will automatically be dedicated to that, but there will still be ALUs left for vertex shading, both at the same time (from what I understand), while on the G70 architecture there will always be ALUs sitting idle. If a scene is more pixel-heavy, more ALUs will do pixel shading, and vice versa (see the toy sketch at the end of this post). Here is how ATI described it:

G70 architecture

http://www.elitebastards.com/pic.php?picid=/hanners/ati/dx10/dx10-14.jpg

Xenos

http://www.elitebastards.com/pic.php?picid=/hanners/ati/dx10/dx10-16.jpg

We won't discuss the 360 and PS3 GPUs anymore, since Shifty said no comparison posts :)
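Since the question of idle ALUs keeps coming up, here's a toy model (Python) of a fixed vertex/pixel split versus a unified pool; the ALU counts and workload numbers are purely illustrative, not real RSX/Xenos figures:

def fixed_split(vertex_work, pixel_work, v_alus=8, p_alus=24):
    """Time with separate vertex and pixel pools: the slower side dominates."""
    return max(vertex_work / v_alus, pixel_work / p_alus)

def unified(vertex_work, pixel_work, alus=32):
    """Time with one pool of ALUs handling both kinds of work."""
    return (vertex_work + pixel_work) / alus

for v, p in [(80, 240), (300, 20), (10, 310)]:  # balanced, vertex-heavy, pixel-heavy
    print(f"v={v:3} p={p:3}: fixed {fixed_split(v, p):5.1f}, unified {unified(v, p):5.1f}")

On a balanced load the two tie, but as soon as the scene skews one way the fixed split leaves ALUs idle while the unified pool keeps them all busy, which is the efficiency argument made above.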
 
Basically, all 48 shader pipelines are counted for vertex power, which would leave zero pixel-shading power, right? The same goes for the pixel power, correct? In other words, you can't do both at the same time. When talking about things happening in parallel (SPUs helping RSX / handling RSX jobs), would that be an advantage or a disadvantage? And why?

The best example of the difference it makes is to look at Ninja Gaiden 2 vs Ninja Gaiden Sigma 2. NG2 uses the unified shaders to do a heavy amount of vertex processing, resulting in it pushing a high number of polygons onscreen (most notable in cutscenes). Compare this to Sigma 2, where the vertex processing is pared back, but at the same time it has a huge upper hand in pixel shading, with improved lighting, specular maps, etc.

EDIT: I think all this is off topic; I won't talk about it much 'cause I fear Shifty :p
 
The best example of the difference it makes is to look at Ninja Gaiden 2 vs Ninja Gaiden Sigma 2. NG2 uses the unified shaders to do a heavy amount of vertex processing, resulting in it pushing a high number of polygons onscreen (most notable in cutscenes). Compare this to Sigma 2, where the vertex processing is pared back, but at the same time it has a huge upper hand in pixel shading, with improved lighting, specular maps, etc.

EDIT: I think all this is off topic; I won't talk about it much 'cause I fear Shifty :p
Well, I'd take the advantages of NGS2 over NG2 any day.
 