How can Nvidia be ahead of ATI but 360 GPU is on par with RSX?

It's what everyone is banking on, that Cell will help by doing things like tessellation (generating the "right" detail level for geometry) or pre-rendering particle effects like smoke.

A lot of this should work quite sweetly since Cg should be able to run on Cell's SPEs in the same way that it runs on RSX.

Jawed
 
ihamoitc2005 said:
Unified shaders do not = better performance/clock-transistor at all times

7800GTX overclocked @ 500MHz, ~304 million transistors
Xenos @ 500MHz, ~337 million transistors

Situation where 0 vertex shaders are required: o.c. 7800GTX = Xenos
Situation where 1 vertex shader is required: o.c. 7800GTX > Xenos
Situation where 8 vertex shaders are required: o.c. 7800GTX >>>> Xenos

But Xenos has AA & HDR together, while they are mutually exclusive on the 7800GTX ... trade-off.

Sorry, this is a ridiculous comparison - you are assuming that an ALU for one = an ALU for the other, and this is not the case; a single shader ALU in Xenos is more capable than a single pixel shader ALU in RSX. With the ALUs pipelined in RSX, they are not all going to be used all the time (i.e. there will be fewer occasions when RSX can achieve close to peak ALU utilisation). Xenos also has less shared resource - i.e. texture address processors are separate and do not take up ALU time, and texture addressing happens entirely independently, so ALUs aren't locked on dependent texture ops.

While many have latched on to the notion that unification means better resource utilisation, it's probably a little less known how much the multithreaded nature affects things even before unification, so we'll have to wait for an answer on that.

And, FYI, the logic portion of Xenos works out at 257M transistors.
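
To make the utilisation argument concrete, here's a minimal back-of-envelope sketch (Python). The 8/24/48 unit counts follow the commonly quoted G70 and Xenos layouts; the per-clock demand numbers are invented purely for illustration, and treating one ALU as equal to another is exactly the simplification criticised above.

```python
# Toy model (illustrative numbers only): ALU ops retired per clock by a fixed
# vertex/pixel split versus a unified pool, as the frame's demand mix varies.

def fixed_split(vertex_alus, pixel_alus, vertex_demand, pixel_demand):
    # Each pool can only pick up its own kind of work.
    return min(vertex_alus, vertex_demand) + min(pixel_alus, pixel_demand)

def unified(total_alus, vertex_demand, pixel_demand):
    # Any ALU can pick up whichever work is available.
    return min(total_alus, vertex_demand + pixel_demand)

# Hypothetical per-clock demand (units of "ALU ops wanted this clock").
scenarios = [("shadow pass: all vertex", 40, 0),
             ("balanced", 8, 30),
             ("pixel heavy", 2, 60)]

for name, v, p in scenarios:
    split = fixed_split(vertex_alus=8, pixel_alus=24, vertex_demand=v, pixel_demand=p)
    uni = unified(total_alus=48, vertex_demand=v, pixel_demand=p)
    print(f"{name:>24}: split retires {split:2d} ops/clk, unified retires {uni:2d} ops/clk")
```

The point isn't the exact numbers, just that a fixed split leaves units idle whenever the workload mix doesn't match the hardware ratio.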
 
a shadow rendering capability that runs at roughly 6x the speed of RSX
Err.. what?

ROPs that can populate the framebuffer with shadow data at three to four times RSX speed (32G samples per second) - RSX is bandwidth limited
RSX is not a freaking GS; even in old NVidia GPUs the zixel fill was more efficient than a freaking "bandwidth divided by number of 32-byte entries" calculation.
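
For what it's worth, here's a purely illustrative sketch of the two estimates being argued about: the naive bandwidth-divided figure versus the same figure with an assumed Z-compression ratio. The bandwidth value is the commonly quoted one for RSX's local GDDR3; the bytes-per-sample and compression ratio are assumptions, not measurements.

```python
# Illustrative only: naive "bandwidth / entry size" fill estimate versus the
# same estimate with an assumed average Z-compression ratio for depth writes.

gddr3_bandwidth_gb_s = 22.4   # commonly quoted RSX local GDDR3 bandwidth
bytes_per_z_sample   = 4      # 32-bit depth value, uncompressed (assumption)
z_compression_ratio  = 4      # hypothetical average compression for shadow depth writes

naive_fill = gddr3_bandwidth_gb_s / bytes_per_z_sample                        # Gsamples/s
with_zcomp = gddr3_bandwidth_gb_s / (bytes_per_z_sample / z_compression_ratio)

print(f"naive bandwidth-divided estimate: {naive_fill:.1f} Gsamples/s")
print(f"with assumed Z-compression:       {with_zcomp:.1f} Gsamples/s")
```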
 
zidane1strife said:
A unified shader arch, if I remember correctly, should be able to either:

a.) Allow similar performance to a dedicated one with less transistors/lower clock

or

b.) Allow greater performance for a similar transistor/clock budget.

Yet Xenos is not only unified but on top of that has an additional chip with eDRAM, which eases b/w constraints and allows for supposedly near performance-penalty-free 4xAA + HDR, etc.

Given that it's virtually 'free' from the penalties of AA, and that it should offer better performance for a similar clock/transistor budget, we should be seeing things that blow G70-based demos out of the water, yet we are not.

What gives? That is the question.

Are the developers still trying to learn the brand new clean-slate design?
 
Wasn't there a reason that Sony showed the Future Getaway demo? You know, the demo that showed certain effects running only on the CELL processor. Maybe that's what will help let the PS3 do 4xAA and HDR at the same time.
 
we should be seeing things that blow G70-based demos out of the water, yet we are not.
What demos? Realtime, in-game, "realtime" cinematic, prerendered, prerendered with only one 3D frame in real time...
 
Jawed said:
Ah yes, it's that argument all over again about the useless features of Xenos:
  • increased efficiency (50%+) for all shader code
  • increased efficiency (100%+) for all texture operations
  • a shadow rendering capability that runs at roughly 6x the speed of RSX
  • ROPs that can populate the framebuffer with shadow data at three to four times RSX speed (32G samples per second) - RSX is bandwidth limited

*Cough*

Random number generators are cheap these days!
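
In case anyone wonders where the quoted 32G samples per second could come from, here's one possible reconstruction; the 8 ROPs, 4xAA samples and 500MHz are the usual public Xenos figures, while the double-rate depth/stencil-only factor is my assumption, not a confirmed spec.

```python
# One possible derivation of the quoted 32 Gsamples/s figure (assumption flagged below).
rops              = 8     # ROPs on the Xenos eDRAM daughter die
msaa_samples      = 4     # samples per pixel per clock at 4xAA
clock_ghz         = 0.5   # 500 MHz
depth_only_factor = 2     # ASSUMPTION: double rate when writing depth/stencil only

print(f"{rops * msaa_samples * depth_only_factor * clock_ghz:.0f} Gsamples/s")   # -> 32
```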
 
Fafalada said:
a shadow rendering capability that runs at roughly 6x the speed of RSX
Err.. what?
48 shader pipes to run vertex programs instead of 8, when doing any kind of shadow pre-render.

RSX is not a freaking GS; even in old NVidia GPUs the zixel fill was more efficient than a freaking "bandwidth divided by number of 32-byte entries" calculation.
Yes, it works alright until you have some detailed shadows to render. I'm talking about peak bandwidth, not average.
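
The arithmetic behind "roughly 6x", under the strong assumptions that the shadow pass is vertex-throughput limited and that one Xenos ALU is worth one RSX vertex ALU per clock:

```python
# Rough arithmetic behind the "roughly 6x" shadow pre-render claim.
xenos_alus_on_vertex = 48   # unified ALUs, all free to run vertex programs in a depth-only pass
rsx_vertex_alus      = 8    # dedicated vertex shaders in a G70-style layout

print(f"{xenos_alus_on_vertex / rsx_vertex_alus:.0f}x")   # -> 6x
```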

Jawed
 
If the RSX was "just an overclocked G70", the developers would have had it since day one.

It's clearly much more than that.
 
mckmas8808 said:
Question?
Could it be that the connectivity that the CELL can pump to the RSX could put the overall graphics ahead of what the Xenon and Xenos could do? If, and I mean IF, the RSX is weaker overall than Xenos, maybe Sony didn't mind, because they helped design the RSX to communicate with the CELL in such a great way to help out. Maybe they just looked at things differently than MS?

But we won't know for sure until somebody releases some damn information about the RSX.:devilish:

If PS3 needs to use CELL to achieve the same graphics processing power as the Xenos, then doesn't much of the PS3 CPU advantage go out the window? If 4 of the 7 SPEs are dedicated to helping the GPU, you are effectively cutting your available CPU power in half...

The more we compare these systems, the more they seem to be a wash....
 
mckmas8808 said:
Could it be that the connectivity that the CELL can pump to the RSX could put the overall graphics ahead of what the Xenon and Xenos could do? If, and I mean IF, the RSX is weaker overall than Xenos, maybe Sony didn't mind, because they helped design the RSX to communicate with the CELL in such a great way to help out.

Yes.

Talking about GPUs in isolation doesn't make sense with the PS3. And any GPU-to-GPU comparison is going to have little bearing on real-world performance. I think of the RSX as more of a rasterizer.

It should be clear even from public developer statements that the RSX doesn't function like a PC GPU - or I guess you could use it that way, but that would be a waste.

So far the only real data point is what developers have shown running in public. And until something better comes from the 360 developers, the PS3 is displaying a significant real-world advantage.
 
scooby_dooby said:
If PS3 needs to use CELL to achieve the same graphics processing power as the Xenos, then doesn't much of the PS3 CPU advantage go out the window? If 4 of the 7 SPEs are dedicated to helping the GPU, you are effectively cutting your available CPU power in half...

The more we compare these systems, the more they seem to be a wash....
In that case it's nothing but the Emotion Engine all over again. On the one hand I don't think Sony would make that mistake again, but on the other hand, I have the feeling that King Kenny really wants the Cell to be a graphics chip.
 
scooby_dooby said:
If PS3 needs to use CELL

needs?

I guess you haven't even bothered to read the public PS3 docs and patents.

There is no "needs". That is how the system was designed, and it is how developers are using it.
 
Don't take my posts out of context, thx. There was an entire sentence there, you know!

"If PS3 needs to use CELL to achieve the same graphics processing power as the Xenos,"

That seems to be the main caveat everyone is using to defend (for lack of a better word) RSX. I'm just wondering if that is the case, because if it is, then much of this argument about the weak X360 CPU goes out the window, since the CELL will be doing both graphics and general processing.

Am I way off here? I feel like I'm just pointing out the elephant in the room.
 
scooby_dooby said:
If PS3 needs to use CELL to achieve the same graphics processing power as the Xenos, then doesn't much of the PS3 CPU advantage go out the window? ...

If Xenos is EXCLUSIVELY used for pixel shading with ALL its 48 ALUs, then it would fall roughly on par with the 24 pixel pipes of a hypothetical RSX (48 vec4 units etc etc...) and you'll still have the vertex shaders available on the RSX...AND CELL...

So no...it doesn't go out of the window...
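
Writing that count out, assuming the commonly cited G70-style layout of two vec4-capable ALUs per pixel pipe, and treating a Xenos ALU as equal to an RSX pixel ALU (which is itself an assumption rather than a measured fact):

```python
# Counting sketch for the parity argument above (per-ALU equivalence is assumed).
xenos_unified_alus      = 48
rsx_pixel_pipes         = 24
rsx_alus_per_pixel_pipe = 2    # vec4-capable units per pipe in a G70-style design

rsx_pixel_alus = rsx_pixel_pipes * rsx_alus_per_pixel_pipe   # 48

print(f"Xenos ALUs (all on pixels): {xenos_unified_alus}")
print(f"RSX pixel ALUs:             {rsx_pixel_alus}")
# Rough parity on the pixel side, with RSX's 8 vertex shaders (and Cell's SPEs)
# still free for geometry work -- which is the point being made.
```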
 
Jaws said:
If Xenos is EXCLUSIVELY used for pixel shading with ALL its 48 ALUs, then it would fall roughly on par with the 24 pixel pipes of a hypothetical RSX (48 vec4 units etc etc...) and you'll still have the vertex shaders available on the RSX...AND CELL...

So no...it doesn't go out of the window...

Does that assume that both RSX and Xenos are running at the same efficiency?
 
dukmahsik said:
Does that assume that both RSX and Xenos are running at the same efficiency?
Not to mention the vastly more efficient texturing that Xenos can perform, because its texture units can keep working even when a conventional GPU would have no texturing instruction ready to run.

And the viable per pixel dynamic branching, which is a technique beyond RSX's reach because the architecture is too large-grained.

etc.
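
As a rough illustration of the branching point: with coherent branching, every pixel in a batch pays for the expensive path if any pixel in it takes that path, so batch size matters a lot. The batch sizes, costs and branch probability below are assumptions chosen for the example, not spec figures, and the model treats pixels as independent (it ignores spatial coherence).

```python
# Expected shading cost per pixel under coherent branching, for two assumed
# batch sizes. If any pixel in a batch takes the expensive branch, the whole
# batch is charged the expensive cost.

def expected_cost_per_pixel(batch_size, p_expensive, cheap=1.0, expensive=10.0):
    p_batch_expensive = 1 - (1 - p_expensive) ** batch_size
    return p_batch_expensive * expensive + (1 - p_batch_expensive) * cheap

for batch in (64, 1024):   # assumed fine-grained vs coarse-grained batch sizes
    cost = expected_cost_per_pixel(batch, p_expensive=0.002)
    print(f"batch size {batch:4d}: ~{cost:.1f}x the all-cheap cost per pixel")
```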

Jawed
 