Is the Xenos a shader monster yes or no?

Status
Not open for further replies.
Why aren't developers pushing the insane amount of extra shader power the Xenos has compared to other GPUs like the R520/580 and G70/71?
The Xenos can do execute 4096 differents as opposed to the 1024 of the R520/580 and G70/71 and do 500k shader instructions as opposed to 131k of the R520/580 and G70/71.

Is the Xenos, or the Xbox 360 as a whole, just too difficult to develop for, or is it something else?
Also, why does Halo 2 stress the Xbox 360 hardware more and make it run hotter?

As a final question.
Why the hell are almost all Xbox 360 games only using bilinear filtering when the GameCube and the Dreamcast were doing trilinear filtering? Shouldn't anisotropic filtering be no problem?
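For context on that filtering question, the cost difference comes down to texel fetches per filtered sample. These are the standard textbook figures, not console-specific numbers (and real hardware caches most of these fetches), so treat this as a rough sketch:

```python
# Texel fetches per filtered texture sample (standard figures).
FETCHES = {"point": 1, "bilinear": 4, "trilinear": 8}  # trilinear reads 2 mip levels

def texels_per_sample(mode, aniso_probes=1):
    """Anisotropic filtering takes up to N bilinear/trilinear probes."""
    return FETCHES[mode] * aniso_probes

print(texels_per_sample("bilinear"))                    # 4
print(texels_per_sample("trilinear"))                   # 8
print(texels_per_sample("trilinear", aniso_probes=16))  # 128 (16x AF worst case)
```

So 16x AF can cost up to 16x the fetches of plain trilinear in the worst case, which is part of why it isn't automatically "no problem" even on hardware that does trilinear easily.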
 
The Xenos can do execute 4096 differents as opposed to the 1024 of the R520/580 and G70/71 and do 500k shader instructions as opposed to 131k of the R520/580 and G70/71.

That part of the sentence should obviously be: The Xenos can execute 4096 different shaders.
 
Answer: no, relatively speaking.

A higher "# instruction slots" allows for more complex shader programs. From your link:
Although 4000 is a reasonably large number of instructions to support in a single code block, this is a limitation on the number of instructions that can be applied to a single shader program because the full program is stored on the chip and never partially retrieved from memory.

It doesn't mean more shader power. In that department, Xenos has less than R580 and possibly G70/71--and that's before you consider that it's clocked lower. It's more powerful overall than R520, though.

What specifically do you expect to see with the extra shader power that you aren't in current games (putting aside the obvious fact that first-gen games usually don't push a console's limits b/c of developers' limited time and experience with final kits)?

Halo 2 is being emulated, which probably explains the extra heat.

Dunno about bi/tri/ani filtering, though there was a recent thread on the latter that posited lack of AF may be b/c of (again) limited time/experience.
 
Besides dev kit availability and general generation-transition issues, just looking at the meager performance gap between the X1900 and X1800 gives some hints. Xenos has 3x as many shader ALUs as it does texturing units (ignoring that some of the ALUs are also delegated to vertex processing). That means to get the most out of Xenos you need to do a lot more math per pixel, and so far developers are not doing that. There are other reasons too, like issues with tiling, the general lack of quality SM3.0 hardware for the PC, and developer familiarity (or lack thereof).

I remember when the PS2 came out and there were a lot of questions like this. There seems to be enough evidence from tech demos and snapshots of coming games (or games like FNR3, GRAW, etc.) to indicate it is fairly powerful. Ultimately it won't really matter, though, in that it is how developers use it that determines how good it is.

Kind of disappointing that the majority of MS's fall titles are UE3: Gears of War, Too Human, Mass Effect, Rainbow Six, Brothers in Arms, etc. The only solid 2006 title from MGS that is not UE3 is Viva Pinata. That is not doing much to distinguish the platform.

overclocked said:
Is it really.

Hard to compare architectures, and I am sure there are areas where the X1800 is faster, but based on what ATI has said regarding shader performance and bandwidth (with a 720p target), Xenos is faster. And based on raw shader performance from a theoretical-peak standpoint (which is misleading, but the only thing we have to go on), yes, Xenos is "faster" than the X1800. But then again, it all depends on how they are used by software; even faster hardware can lose in certain circumstances. Arguments about "faster" really depend on context: Xenos would be a poor PC GPU, and it is not faster in every situation.
 
dukmahsik said:
it's only been 6 months, look at the 2nd gen games and then compare

Which should be arriving fall 2007. In the meantime we will have 2,346 threads on this very topic ;) And even then it won't be answered completely!
 
overclocked said:
Is it really.
I was wavering on that, but 48 "full" ALUs vs. 40 (16 full + 16 mini + 8 VS) seems pretty even. Considering R520XT's 25% clock advantage, I suppose you could give it the slight edge. But if we're counting MADDs, then Xenos appears to win, hands down.

Or am I overlooking something? Are you considering threads, batch sizes, tiling with AA, maybe HDR?
 
Pete said:
I was wavering on that, but 48 "full" ALUs vs. 40 (16 full + 16 mini + 8 VS) seems pretty even. Considering R520XT's 25% clock advantage, I suppose you could give it the slight edge. But if we're counting MADDs, then Xenos appears to win, hands down.

Or am I overlooking something? Are you considering threads, batch sizes, tiling with AA, maybe HDR?

What's the MADD difference between the two?
 
On the 360, all ALUs are MADD-capable. On R520, only the 16 primary ALUs are.
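A back-of-the-envelope sketch of what that MADD difference implies. These are my assumptions for illustration, not figures from this thread: each MADD-capable ALU issues one vec4 MADD per clock, a MADD counts as 2 ops per component, and R520's mini-ALUs and vertex shaders (as well as Xenos's scalar co-issue) are ignored entirely, so both numbers undercount:

```python
# Peak vec4 MADD throughput, counting only the fully MADD-capable ALUs.
def madd_gflops(alus, clock_mhz, width=4):
    """alus * clock * width components * 2 ops (multiply + add), in GFLOPS."""
    return alus * clock_mhz * 1e6 * width * 2 / 1e9

xenos = madd_gflops(48, 500)  # all 48 unified ALUs assumed MADD-capable
r520 = madd_gflops(16, 625)   # only the 16 full pixel ALUs

print(f"Xenos ~{xenos:.0f} GFLOPS (vec4 MADD only)")  # ~192
print(f"R520  ~{r520:.0f} GFLOPS (vec4 MADD only)")   # ~80
```

Under those (crude) assumptions the MADD gap is large, which is why "if we're counting MADDs, Xenos wins" below.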

I had compared Xenos to R580 and realized it looks woefully underpowered. What I failed to remember is that R580 is so ridiculously TMU-limited that it gets nowhere near its theoretical maximums, and in fact performs only slightly better than R520, which Xenos compares much more favorably to.
 
And by the way, you don't have to worry about Xenos being TMU-limited, for several reasons. It has less shader power than R580; I believe it has better TMUs in some sense (both have 16); but most importantly, games will be coded specifically for it, so being texture-limited simply isn't really possible, within reason. If games could be coded just for R580, it would be a monster too.
 
Pete said:
I was wavering on that, but 48 "full" ALUs vs. 40 (16 full + 16 mini + 8 VS) seems pretty even. Considering R520XT's 25% clock advantage, I suppose you could give it the slight edge. But if we're counting MADDs, then Xenos appears to win, hands down.

Or am I overlooking something? Are you considering threads, batch sizes, tiling with AA, maybe HDR?


This is interesting, I love doing stuff like this.

Anyway, I have no idea if it's even remotely correct, but someone, probably Mintmaster, told me his "wild-ass guess" was that each mini-ALU might be "worth" 30% of a full ALU.

If you apply that correction, counting each mini-ALU in R520 as 0.3 Xenos ALUs, you come out with Xenos having a reasonable amount more shader power. Something like 33%?

R520: 16 full ALUs + 0.3×16 mini-ALUs + 8 VS = 28.8; ×1.25 to correct for clock speed = 36, vs. 48 in Xenos. Xenos = 133% of R520?
 
By the way, the above calculation would be very bad for Sony, because an X1800XT alone can basically give a 7900GTX a run, or damn near, most of the time.

That's BEFORE you drop the clock 100 MHz and halve the bandwidth on RSX.

Of course, my calculations could all be bunk, so take them with a heavy grain of salt.
 
In terms of pixel shading it's about the same as, or slightly ahead of, its competition, due to its ability to scale incrementally from 24 to 32 to 48 ALUs on pixel work. In terms of vertex shading it's clearly ahead, as it can scale all the way up to 48 (that, however, awaits developers actually using it).
 
So Xenos ~X1800XT+?
RSX ~7900GTX+?

Doesn't seem to be much in it.
Besides, I doubt we'll ever know the exact "power" of either chip, since they are in a closed box and will not be independently benchmarked.

As far as what graphics the PS3 or XBOX 360 will be pumping out - that will be a function of how the entire system behaves (and how developers can harness that power) rather than of one single aspect.
 
joebloggs said:
So Xenos ~X1800XT+?
RSX ~7900GTX+?

Doesn't seem to be much in it.
Besides, I doubt we'll ever know the exact "power" of either chip, since they are in a closed box and will not be independently benchmarked.

As far as what graphics the PS3 or XBOX 360 will be pumping out - that will be a function of how the entire system behaves (and how developers can harness that power) rather than of one single aspect.

correction

Xenos ~ X1900XT (minus the clockspeed, minus 128 bits of memory bandwidth, minus dedicated memory; plus the unified shader architecture, eDRAM, and the MEMEXPORT function (enslaving of a dual-thread core))

RSX ~ 7800 GTX (same clockspeed, minus 128 bits of memory bandwidth, minus 256 MB of dedicated memory; plus FlexIO, TurboCache, and the interface with Cell)
 
kabacha said:
correction

Xenos ~ X1900XT (minus the clockspeed, minus 128 bits of memory bandwidth, minus dedicated memory; plus the unified shader architecture, eDRAM, and the MEMEXPORT function (enslaving of a dual-thread core))

RSX ~ 7800 GTX (same clockspeed, minus 128 bits of memory bandwidth, minus 256 MB of dedicated memory; plus FlexIO, TurboCache, and the interface with Cell)



We've already had ATI say Xenos is comparable to an X1800XT optimised for 720p.
AGEIA has mentioned RSX is 7900GTX+.

Anyway, whatever the case may be, it's the games that will do the talking.
 
joebloggs said:
We've already had ATI say Xenos is comparable to an X1800XT optimised for 720p.
AGEIA has mentioned RSX is 7900GTX+.

Anyway, whatever the case may be, it's the games that will do the talking.

In terms of shader performance Xenos is comparable with the X1900XT, not the X1800XT. And in terms of shader performance it looks like RSX is closer to 7800 performance than to 7900 performance: the 7900 GTX+ runs at 700 MHz compared to the 7800's 550 MHz (*same as RSX*), and the 7900 also has 1800 MHz memory compared to 1200 MHz on the 7800 (*which is even higher than RSX*). AGEIA is right on the count of architecture but wrong on the count of performance.


All you have to do is look at what's available in front of you and you get a pretty concise view of the performance.
 