R520 = Disappointment

geo said:
Mebbee. Went with what I had. I know that you'll believe me when I say I'd have come back here with an opposite indicator. So, somewhat better than anecdotal (because Brent is a trained observer), but I'd agree not conclusive.
Well yeah, but isn't that just kind of obvious from its function alone? Why would increasing AF on certain angles result in a generic performance hit when plenty of scenes don't have said angles?

This is why God created synthetic benchmarks. Somebody, get to it. Make us an AF tunnel with a user-selectable number of sides.
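Something along these lines, maybe. Just a throwaway sketch of the mesh side of it, assuming you'd bolt it onto whatever renderer and mipmapped high-frequency texture you like; the function name and parameters here are invented for illustration:

```python
# Sketch of an "AF tunnel" test scene: an N-sided textured prism extruded along -Z,
# so the camera at the origin looks straight down it and every wall sits at a
# user-chosen angle. Rendering, texturing, and benchmarking are left to the reader.
import math

def build_af_tunnel(sides=8, radius=2.0, length=50.0, segments=25,
                    u_repeat=4.0, v_repeat=20.0):
    """Return (positions, uvs, indices) for a tunnel with `sides` flat walls."""
    positions, uvs, indices = [], [], []
    for seg in range(segments + 1):
        z = -length * seg / segments
        for s in range(sides + 1):            # duplicate the seam vertex for clean UVs
            ang = 2.0 * math.pi * s / sides
            positions.append((radius * math.cos(ang), radius * math.sin(ang), z))
            uvs.append((u_repeat * s / sides, v_repeat * seg / segments))
    ring = sides + 1
    for seg in range(segments):
        for s in range(sides):
            a = seg * ring + s                # vertex on the current ring
            b = a + ring                      # same vertex on the next ring
            indices += [a, b, a + 1, a + 1, b, b + 1]   # two triangles per wall quad
    return positions, uvs, indices

if __name__ == "__main__":
    pos, uv, idx = build_af_tunnel(sides=8)
    print(len(pos), "vertices,", len(idx) // 3, "triangles")
```

Bump `sides` up or down and compare the stabilized framerate with AF off versus the maximum AF level; the more walls land on the "off" angles, the more an angle-dependent implementation should be able to skimp.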
 
Junkstyle said:
Dude, that's total PR BS pulled from the ATI site.

http://apps.ati.com/ir/PressReleaseText.asp?compid=105421&releaseID=764450

They sure fooled you.

Errm, maybe that is why it is in the "Press Release" forum here that I linked? Sorry, if I'd thought there was going to be any doubt about that I would have mentioned it...

Where are all the whorish (since that seems to be your not-quite-stated conclusion) statements of this variety from developers for X850XT in an ATI press release? How about X800?

Go ahead and find those. . .I'll wait here.
 
I'm retracting this whole post because I don't want to continue talking about a PR here.
 
Chalnoth said:
You know what really pisses me off about this? The entire reason that nVidia doesn't still support their nearly angle-independent anisotropic is because of ATI's own angle-dependent anisotropic, that most users seemed to claim had a minor image quality impact at most.
Chalnoth, it's not that simple at all.

The first NV hardware supporting AF beyond 2x was the Geforce 3/4. The performance drop of AF was really high, and it had nothing to do with angle independence, since the majority of surfaces are horizontal or vertical. The software was not as far behind the hardware as it is today, and 4xAA was quite expensive (often a 70% hit), so increasing the resolution was the number one priority to improve image quality for most people. I mean really, just look at this:
http://graphics.tomshardware.com/graphic/20020206/geforce4-17.html#anisotropic_performance
1600x1200 w/o AF was notably faster than 1024x768 with 16xAF. Even I might have chosen the former, and I'm an AF junkie from the original Radeon days (even though it was only bilinear).

Then there was the FX, and to say that you're pissed off that no-one liked the image quality of that generation is ridiculous. The defaults that NVidia chose made for very poor AF quality, regardless of angle independence. Then there were all the other tricks NVidia was doing to hide its pixel shader deficiencies, the poor AA quality, etc. The good AF was buried under a pile of crap, so no wonder nobody cared.

We're at the point now where angle-dependent AF and its "minor image quality impact" is one of the last distracting artifacts left on the screen. Now we're used to being able to play most games at 1600x1200 with 4xAA/16xAF, but this wasn't the case by a long shot back in 2001/2002. People were too busy shitting in their pants when R300 hit the streets to care that its AF quality wasn't quite as good as the GF4's.

EDIT: fixed link
 
Then there was the FX, and to say that you're pissed off that no-one liked the image quality of that generation is ridiculous. The defaults that NVidia chose made for very poor AF quality,

No, actually the defaults were quite good, until the 5x.xx Detonator series arrived and so began more of the optimisation wars. I'm not going to get into an argument over who shot first, but all the optimisations that have occurred over the last 4 years were the beginning of the end of quality for AF. I'm impressed that ATI has provided an option to remove them. But I understand Chalnoth's sentiments regarding this. People at the time cared more about performance than quality when it came to AF, and now we've seen the unfortunate net result of those impressions.
 
Mintmaster said:
Chalnoth, it's not that simple at all.

The first NV hardware supporting AF beyond 2x was the Geforce 3/4. The performance drop of AF was really high, and it had nothing to do with angle independence, since the majority of surfaces are horizontal or vertical. The software was not as far behind the hardware as it is today, and 4xAA was quite expensive (often a 70% hit), so increasing the resolution was the number one priority to improve image quality for most people. I mean really, just look at this:
That's not true at all. I did some tests back in the day and found, to my surprise, that my GeForce4 Ti 4200 took less of a performance hit at 8-degree anisotropic than a Radeon 9700 Pro in a synthetic scenario where there were zero off-angle surfaces. Note that this is the opposite of what you would expect if either card was limited by any other part of the system.
 
JoshMST said:
I am somewhat disappointed in their midrange and low end parts so far, especially what I have seen in the reviews. They really don't offer much more than NV's current lineup with the 6200/6600/6800. I was expecting the X1600XT to whip the pants off of the 6800 and be fairly comparable to the 6800 GT, but that just doesn't look to be the case. This really makes you step back and wonder what NV will be pushing out here for their midrange 90 nm parts...

I agree, especially about the X1600 XT, which I expected much more from. I'm very impressed by the 1800 XT, more for its features than its performance, but there's not exactly anything wrong with the performance so that hardly matters :). But MSAA+HDR doesn't seem to be usable with anything but the high-end cards; not that I've seen any benchmarks yet, but the warnings from e.g. Dave B here seem to indicate that it won't be that usable on anything but the 1800 XT/XL. So the additional features won't matter that much on the mid/low-end cards. I'm a bit of a feature junkie, so I would take an X1600 XT rather than a 6800 GT, but that is probably not the case for the majority.
 
serenity said:
You .. just .. shot .. yourself .. in .. the .. foot. ;)

Amazingly accurate, yeah. :rolleyes:

I did take the time to look at the benches and compare them to a few of the review sites.
The performance differences for Far Cry and Doom 3 were within 1% of two of the launch sites. The Splinter Cell numbers were way off, though. I wouldn't say I shot myself in the foot. If he made the numbers up, he did a damn good job of guessing. Either way, the performance differences were remarkably similar.
 
ondaedg said:
I did take the time to look at the benches and compare them to a few of the review sites.
The performance differences for Far Cry and Doom 3 were within 1% of two of the launch sites. The Splinter Cell numbers were way off, though. I wouldn't say I shot myself in the foot. If he made the numbers up, he did a damn good job of guessing. Either way, the performance differences were remarkably similar.

Yep, I think they were fairly accurate as well; maybe testing with the 5.8 or 5.7 Cats will show the difference in Splinter Cell more in HA's favor.

Even though Sander is a big "ass", I don't think he was lying, and the official benchmarks really do show that for the most part.
 
IMO Sander did seem to cherry-pick settings that would make the X1800 look bad, but you're right, he wasn't that far off the mark, except for maybe two titles.

But I think more people were reacting to his infantile "poor me" attitude than to his results.
 
Chalnoth said:
That's not true at all. I did some tests back in the day and found, to my surprise, that my GeForce4 Ti 4200 took less of a performance hit at 8-degree anisotropic than a Radeon 9700 Pro in a synthetic scenario where there were zero off-angle surfaces. Note that this is the opposite of what you would expect if either card was limited by any other part of the system.
OK, Chalnoth, you're bordering on outright bullshit now.

Why the hell do you think I linked you to that graph and gave you those stats? Okay, here are some more:
NO AF: http://graphics.tomshardware.com/graphic/20021024/ati-06.html
AF: http://graphics.tomshardware.com/graphic/20021024/ati-08.html

And that's all due to off angle surfaces? Give me a break. Someone on these boards did a test and the performance hit for HQ AF on the X1800XT was something like 7%.

Maybe your program didn't work properly, or didn't show the true speed without AF. Maybe AA was enabled and it masked the hit. Maybe it was too different from in-game scenarios with multiple textures and blending stages. But you're saying that your one synthetic test overrules the plethora of data from actual games done by reviewers all over the web? I rarely use rolleyes, but :rolleyes: :rolleyes: :rolleyes:

You are making absolutely ridiculous claims in the face of overwhelming evidence. You think the GF4 had less of an AF hit than the R300. You think the GeForce FX (the paragon of image quality that it was :LOL:) should have been praised for its AF quality. Get real.
 
I am still trying to figure out what benchmarks people are looking at...

The X1800 XT loses most of them... I just went back through them, and in all but a very few the X1800 XT ties, loses by a little, or loses by more than a little. Where is the site that is showing ATI leading in most of the tests? I sure don't see it.

Reading the thread on this at Rage3D is a joke. I can't remember my password there or I would post a response. New king of the hill??? What hill...

Is everyone basing this on the FEAR demo?? Because that's what I see on ATI fan sites. Guess what, there are about a dozen more tests than that one benchmark. It simply favors a 512MB card. COD2 is exactly the same for the same reason.

There is not a game out there that I can see where ATI's better performance isn't simply a matter of the 512MB frame buffer (or some obvious driver glitch).
 
ZoinKs! said:
Hellbinder has a long history of irrational exuberance about ATi's cards before release, then feeling crushing disappointment after they're out.

R520 is a great chip and these are solid boards. It competes on performance and is a clear and notable win on features and image quality. Yeah, it should have been out 6 months ago, but nothing can change that now.

Hellbinder, what were you expecting?

No I don't...
 
Well I was waiting for the arrival of the R520 to make a decision on what card to replace my 9700 with. I suppose the R520 is generally G70's performance equal, but it's definitely not equal on price.

I'm not convinced of R520 having feature advantages. I don't really believe in feature advantages, to be honest. Many people seem to blather on and on about the feature advantages of each card and how they are the defining factor in judging which card is better. Well, news for y'all: Voodoo3 was better than TNT even without 32-bit. Geforce4 was far better than the feature-superior Radeon 8500. My G400MAX's fabulous EMBM and superior quality didn't make it the card to have over a Geforce 256.

Features are only useful or impressive if they give the card a significant, tangible advantage immediately. They do not future-proof a card. Otherwise they might as well be failed attempts at making said card superior that just got left on the bullet list.

NV's 7800 is a fantastic card that has been out for many months now. ATI screwed up by going 90nm, like NV did with 130nm once upon a time.

My problems with R5x0:
*Unfortunately ATI didn't match up the missing months with reasonable MSRPs.
*I'm genuinely unimpressed with the power draw of the X1800XT.
*What's up with the OpenGL performance? How can we let ATI slide on this? It has been over a year now since we saw this big time with Doom3.
*Loud as hell X1600 with inadequate performance vs. >1 year old mid/top parts.
*Lack of immediate availability even after long delay (stinks of desperation to get the thing "out" ASAP)
*Crossfire support almost nonexistent (although I doubt I will ever go crossfire)

NV isn't going to give ATI a moment of slack. I think the R5x0 series is going to be eclipsed in a few months come NV's refreshes. If ATI refreshes in 3 months they aren't going to make more friends lol. So I would say buying anything based on R5x0 is a waste of money right now. Unless the price is FAR lower than MSRP.
 
swaaye said:
I'm not convinced of R520 having feature advantages. I don't really believe in feature advantages, to be honest. Many people seem to blather on and on about the feature advantages of each card and how they are the defining factor in judging which card is better. Well, news for y'all: Voodoo3 was better than TNT even without 32-bit. Geforce4 was far better than the feature-superior Radeon 8500. My G400MAX's fabulous EMBM and superior quality didn't make it the card to have over a Geforce 256.

All of those are examples where the product with superior features had a significant performance deficit. I don't think you can say that in the case of X1800XT vs 7800GTX.
 
Mintmaster said:
OK, Chalnoth, you're bordering on outright bullshit now.

Why the hell do you think I linked you to that graph and gave you those stats? Okay, here are some more:
NO AF: http://graphics.tomshardware.com/graphic/20021024/ati-06.html
AF: http://graphics.tomshardware.com/graphic/20021024/ati-08.html

And that's all due to off angle surfaces? Give me a break. Someone on these boards did a test and the performance hit for HQ AF on the X1800XT was something like 7%.
Well, not all of it, but most. With respect to this, there are a few things to keep in mind:
1. ATI's algorithm is not nVidia's: they may use different sample positioning or a different number of samples.
2. ATI's architecture, since it doesn't have hardwired texture latency hiding, is likely to do better in situations where there's a large number of samples over a small area, which is the typical case for anisotropic filtering.
3. nVidia's GeForce4 had an additional latency associated with enabling anisotropic filtering, likely related to the circuitry that was calculating the degree of anisotropy, that resulted in a reduction in performance for face-on surfaces with anisotropic enabled.

That said, I'll describe exactly what I did to perform this test. The test was very simple. All I did was create a new level in UnrealEd. This level was a simple rectangular room with a rather low ceiling. The player was placed into the level at one corner facing the opposite corner. This resulted in a situation where the floor and ceiling were completely horizontal, and the opposing walls completely vertical. This highly synthetic scenario resulted in a situation where a very large portion of the screen had to use the maximum degree of anisotropy available (I don't have the level I used around any longer, but I would hazard a guess of 30%-40%).

Testing was done by starting the frame counter, without touching the mouse, and waiting until the average framerate stabilized. All testing was done without FSAA and with VSYNC disabled, and at all levels of anisotropic filtering.
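By "performance hit" I mean the relative drop versus the no-AF baseline rather than the absolute framerate (the 9700 Pro was of course much faster in absolute terms). A trivial sketch of that comparison, with placeholder numbers rather than the long-gone real ones:

```python
def af_hit(fps_no_af, fps_af):
    """Relative performance drop when AF is enabled (0.25 == a 25% hit)."""
    return 1.0 - fps_af / fps_no_af

# Purely hypothetical readings (no AF, 8x AF) -- NOT the original measurements.
readings = {
    "GeForce4 Ti 4200": (120.0, 96.0),
    "Radeon 9700 Pro":  (210.0, 155.0),
}
for card, (base, aniso) in readings.items():
    print(f"{card}: {af_hit(base, aniso):.0%} hit with AF")
```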

While I clearly don't have the actual numbers I had back then, I remember distinctly that the Radeon 9700 Pro had a slightly but noticeably higher performance hit than the GeForce Ti 4200.

I'd love to go back and replicate the results, but I don't have the hardware any longer.
 
Joe DeFuria said:
I would word it a different way:

The more complex a shader that you have, the better ATI's part will do.

Yup. Where Nvidia has the power, Ati has the smarts. Of course there can be scenarios where one of them suits the task much better. While expecting future-proofness is stupid in this industry, I still favor Ati's solution. It's, let's say, more elegant. :)
 