R520 = Disappointment

Joe DeFuria said:

That's pretty interesting. Given the fillrate parity between XL/GT and XT/GTX and their relative performance numbers, I would have to say it's clear as day that the XT's advantage is largely attributable to its higher clocks and its larger, faster memory.

The only evidence of higher efficiency is in the Splinter Cell numbers. But even there the X850XT PE is much faster than the XL. It would really be nice to see some R520 v R420 numbers at similar clocks. The big deal with R520 is supposed to be efficiency, but we haven't really seen it outside of the excellent dynamic branching performance thus far.
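
As a quick back-of-the-envelope check of that fillrate-parity point, here is a minimal C++ sketch. The clocks, texture-unit counts and bandwidth figures are the commonly cited launch specs and should be treated as assumptions rather than confirmed numbers:

Code:
#include <cstdio>

struct Card {
    const char* name;
    double core_mhz;   // core clock in MHz (assumed launch spec)
    int tex_units;     // texels sampled per clock (assumed)
    double mem_gbs;    // memory bandwidth in GB/s (assumed)
};

int main() {
    const Card cards[] = {
        {"X1800 XL", 500.0, 16, 32.0},
        {"7800 GT",  400.0, 20, 32.0},
        {"X1800 XT", 625.0, 16, 48.0},
        {"7800 GTX", 430.0, 24, 38.4},
    };
    for (const Card& c : cards) {
        // theoretical texel fillrate = core clock * texture units
        double gtexels = c.core_mhz * c.tex_units / 1000.0;
        std::printf("%-9s %5.2f Gtex/s  %4.1f GB/s\n",
                    c.name, gtexels, c.mem_gbs);
    }
    return 0;
}

Under those assumptions the XL and GT land at the same 8.0 Gtex/s and the XT/GTX pair are within a few percent of each other, while within ATI's own line the XT's edge over the XL is pure clocks and memory, which is exactly the point being made above.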
 
trinibwoy said:
The only evidence of higher efficiency is in the Splinter Cell numbers. But even there the X850XT PE is much faster than the XL.


Yeah, that's a trend I have seen far too often... that the X1800 XL is slower in lots of cases than the X850 XT PE even though they have "similar" specs. Driver issues? Design issues?

/me rubs magic 8-ball for the answer :)
 
Interesting figures, yet a few bizarre comments. Why keep mentioning how the X1600 performs in comparison to the 6800 GT? They are hardly competitors at their price levels. :?:

It seems that the X1800 has improved AA efficiency in D3D, but in Doom3 it's no better than earlier implementations. Is this due to cruddy OpenGL drivers, or is it just the rendering technique of the game engine, I wonder?

I'm surprised they didn't test Splinter Cell using SM3.0 but I assume this will be covered in their follow-up article mentioned on the last page.

I still hate that MSRPs are being compared to street prices with no caveat that MSRPs are likely to fall quickly. It just doesn't make sense when you are looking at a general product overview. :mad:

SM3.0 comparisons next, please!
 
Anybody find this interesting:

OpenGL games are still a hurdle for all the ATI cards to overcome, but it seems as if the X1600 XT is more highly capable of mitigating the impact of a non-Microsoft API on performance.
 
jb said:
Yeah, that's a trend I have seen far too often... that the X1800 XL is slower in lots of cases than the X850 XT PE even though they have "similar" specs. Driver issues? Design issues?

/me rubs magic 8-ball for the answer :)
You guys?!!! Nowhere in that Anand article is X1800XL beaten by X850XTPE. :oops:

Jawed
 
serenity said:
Anybody find this interesting:
OpenGL games are still a hurdle for all the ATI cards to overcome, but it seems as if the X1600 XT is more highly capable of mitigating the impact of a non-Microsoft API on performance.
Mistaking the double-Z/stencil capability of RV530 for OpenGL ability :rolleyes:

Jawed
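
For context on Jawed's point, a minimal sketch of the arithmetic, assuming the commonly cited X1600 XT specs (590 MHz core, 4 ROPs) and the doubled Z/stencil rate attributed to RV530:

Code:
#include <cstdio>

int main() {
    const double core_mhz = 590.0;  // X1600 XT core clock (assumption)
    const int rops = 4;             // colour pixels written per clock
    const int z_per_rop = 2;        // RV530's doubled Z/stencil rate

    const double colour_gpix = core_mhz * rops / 1000.0;
    const double z_gsamp = core_mhz * rops * z_per_rop / 1000.0;

    std::printf("colour fill: %.2f Gpix/s\n", colour_gpix);   // ~2.36
    std::printf("Z/stencil  : %.2f Gsamples/s\n", z_gsamp);   // ~4.72
    return 0;
}

Doom3-style stencil shadow passes write no colour, so they run at the Z/stencil rate; that is an architectural effect, not "OpenGL ability".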
 
Jawed said:
Mistaking the double-Z/stencil capability of RV530 for OpenGL ability :rolleyes:

Jawed

By-the-by, the legendary "OpenGL driver rewrite" reared its head again in the conference call yesterday. It threatens to join Duke Nukem at this point. (Well, okay, that's over the top, but I'm getting cranky about it.)
 
Jawed said:
You guys?!!! Nowhere in that Anand article is X1800XL beaten by X850XTPE. :oops:

Jawed
I have to say that has me stumped too - the X1800XL, running at lower clocks for both core and memory, either matches or convincingly beats the X850 in pretty much every test in that article, with or without features like AA or AF, and yet there's apparently no evidence of efficiency improvements?
 
I've been testing against an X800 XT, which matches the specs of the X1800 XL. As you'd expect, the efficiency gains are not always predictable. For instance, in pure PS tests there is very little gain, because the pixel shaders are already pretty much maxed out anyway, so the scheduler isn't helping much (it'll come into its own when there is a variety of workloads). However, in pure pixel fill the XL is 25% better than the X800 XT, getting much closer to its theoretical rate, and it maxes out its Z fill entirely (suggesting that double Z may have been useful for these high-end parts). Pure texturing gains range from 6% at one layer to 56% better than the X800 XT at eight layers.
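
To make that concrete: with matched specs, "efficiency" is just measured rate over theoretical peak. A minimal sketch; the measured X800 XT figure below is a hypothetical placeholder, and only the relative gains (+25% pixel fill, +6% to +56% texturing) come from the post above:

Code:
#include <cstdio>

int main() {
    // Both cards: 500 MHz core, 16 pipes -> the same theoretical peak.
    const double peak_gpix = 500.0 * 16 / 1000.0;     // 8.0 Gpix/s

    const double x800xt  = 6.0;            // hypothetical measured fill
    const double x1800xl = x800xt * 1.25;  // the reported 25% gain

    std::printf("X800 XT : %4.1f%% of peak\n", 100.0 * x800xt  / peak_gpix);
    std::printf("X1800 XL: %4.1f%% of peak\n", 100.0 * x1800xl / peak_gpix);

    // Texturing gains reported: +6% at one layer up to +56% at eight.
    return 0;
}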
 
Jawed said:
You guys?!!! Nowhere in that Anand article is X1800XL beaten by X850XTPE.

Edit: Never mind, I'm an idiot :oops: You're absolutely right - the XL does beat out the PE everywhere. Impressive given the fillrate and bandwidth deficit.
 
Dave Baumann said:
I've been testing against an X800 XT, which matches the specs of the X1800 XL. As you'd expect, the efficiency gains are not always predictable. For instance, in pure PS tests there is very little gain, because the pixel shaders are already pretty much maxed out anyway, so the scheduler isn't helping much (it'll come into its own when there is a variety of workloads). However, in pure pixel fill the XL is 25% better than the X800 XT, getting much closer to its theoretical rate, and it maxes out its Z fill entirely (suggesting that double Z may have been useful for these high-end parts). Pure texturing gains range from 6% at one layer to 56% better than the X800 XT at eight layers.
Indeed, I am troubled as to why ATI chose not to double Z across the board :cry:
 
trinibwoy said:
Edit: Never mind, I'm an idiot :oops: You're absolutely right - the XL does beat out the PE everywhere. Impressive given the fillrate and bandwidth deficit.
It's actually an X850XT - an XTPE might squeak past the X1800XL by 1 or 2 FPS in one or two non-AA cases, I suppose.

Jawed
 
Hi, XbitLabs posted an update on the ATI X1K series tested against NVIDIA... here's the link...
Looks like in some cases the X1800 XL can beat the 7800 GTX too :cool: ... Just in case anyone is interested.
 
satein said:
Hi, XbitLabs posted an update on the ATI X1K series tested against NVIDIA... here's the link...
Looks like in some cases the X1800 XL can beat the 7800 GTX too :cool: ... Just in case anyone is interested.

Which cases? That article is too huge to wade through :)
 
BF2, FarCry.

Then again, the 6800 Ultra does better than the X1800 XT in the OpenGL benchmarks:
CoR, Doom3, Pacific Fighters.
 
Chalnoth said:
BF2, FarCry.

Then again, the 6800 Ultra does better than the X1800 XT in the OpenGL benchmarks:
CoR, Doom3, Pacific Fighters.

Thanks. Unfortunately, that just confuses things for me. In that BF2 bench where XL > GTX, the GT and GTX are pretty close... why? Also, the GTX > XT without AA/AF in both FEAR and BF2.

I hope somebody does a really thorough investigation, like checking the performance hit of AA and AF separately, etc. I'd especially like to know what the performance of ATi's "pure" AF looks like - if it's faster than Nvidia's hacked AF that would be really cool. I know Ratchet did some of this in his previous Rage3D reviews - his presentation style is excellent for answering these types of questions.

Ratchet are you listening??!!
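
The isolation being asked for here is easy to express: bench the same scene four ways and report each feature's cost on its own. A minimal sketch; the FPS numbers below are hypothetical placeholders, not measured data:

Code:
#include <cstdio>

int main() {
    // Hypothetical FPS for one scene under four settings:
    const double base = 80.0;  // no AA, no AF
    const double aa   = 62.0;  // 4x AA only
    const double af   = 71.0;  // 16x AF only
    const double both = 55.0;  // 4x AA + 16x AF

    std::printf("AA hit alone: %.1f%%\n", 100.0 * (1.0 - aa / base));
    std::printf("AF hit alone: %.1f%%\n", 100.0 * (1.0 - af / base));
    std::printf("combined hit: %.1f%%\n", 100.0 * (1.0 - both / base));
    return 0;
}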
 
X1800 is still impressive. All the features included are worth the price tag, IMO. Especially the 512MB version. I'd pay $549 for the memory alone (512MB GDDR3 at 1600MHz), knowing the performance is on par with, if not better than, the GTX. The biggest letdown so far is the idle power consumption, as well as the accompanying heat/noise. Why is it running so hot at idle? Is the fan running at full speed even in 2D mode? Someone please explain, because I'm very confused.

Slightly off-topic:

Now that I know HDR + AA is possible with the X1800 XT, would I be able to play Splinter Cell: Chaos Theory with AA enabled (& HDR in SM3.0 mode, of course)? AOE3 is another game that I've been anxiously waiting for, and it'd be great if I could enable both of them in that game too.
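
On the HDR + AA question: in D3D9 an application can ask up front whether multisampling is available on an FP16 render target, which is exactly the combination at stake. A minimal sketch of that query; whether a given game (SC:CT, AOE3) actually exposes the option in its menus is up to the game, and NV4x/G7x parts are generally reported to fail this check for FP16 surfaces while R5xx is reported to pass it:

Code:
#include <cstdio>
#include <d3d9.h>

int main() {
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    DWORD quality = 0;
    HRESULT hr = d3d->CheckDeviceMultiSampleType(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
        D3DFMT_A16B16G16R16F,        // FP16 render target format
        FALSE,                       // fullscreen
        D3DMULTISAMPLE_4_SAMPLES,
        &quality);

    std::printf("FP16 + 4x AA: %s\n",
                SUCCEEDED(hr) ? "supported" : "not supported");
    d3d->Release();
    return 0;
}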

lop
 
trinibwoy said:
I'd especially like to know what the performance of ATi's "pure" AF looks like - if it's faster than Nvidia's hacked AF that would be really cool. I know Ratchet did some of this in his previous Rage3D reviews - his presentation style is excellent for answering these types of questions.

I thought I linked this somewhere: http://www.xbitlabs.com/articles/video/display/radeon-x1000_19.html

Not enough for a good start? It includes a link to the GTX results under the same conditions (near the bottom).
 