Radeon 9500 O/Cing o_O

Personally I'd be interested to see how the best of one card compares to the best of another card wrt AA, AF or some such regardless of any differences in technologies to achieve AA, AF, etc. That is to say, having a graph that shows a GF4's 8xAF vs a 9700's 16xAF is fine as long as the writer shows (or attempts to show) the IQ differences.

Kinda like different automatic transmission technologies in different cars used for 0-60 comparisons. I don't think the majority will go "But Car A is faster than Car B because it has superior auto transmission tech"... all that matters to the majority is 0-60 if they're looking for an auto transmission car.
 
Reverend said:
Kinda like different automatic transmission technologies in different cars used for 0-60 comparisons. I don't think the majority will go "But Car A is faster than Car B because it has superior auto transmission tech"... all that matters to the majority is 0-60 if they're looking for an auto transmission car.
I think that's a pretty poor analogy. Comparing 16x AF on one card to 8x AF on another, or 6x AA on one card vs 4x AA on another is more like saying: Car A is faster at going uphill than Car B, so we'll take 0-60 numbers for Car A going uphill and Car B on a flat track.

Would you consider that a fair comparison?

Now, if you want to *just* compare image quality and leave numbers out of the picture, that would be a fair comparison. But performance numbers should be done on as level (i.e. not hilly ;) ) playing field as possible.
 
Am I the only one who thinks that Anand's numbers don't seem to stack up to those of HardOCP or THG in comparison to the competition, regardless of the level of aniso employed?

Or do the 9500s lose that much performance going from 8x to 16x aniso, so that the gap to the NV25 suddenly closes that much? Doesn't make sense to me.
 
OpenGL guy said:
Reverend said:
Kinda like different automatic transmission technologies in different cars used for 0-60 comparisons. I don't think the majority will go "But Car A is faster than Car B because it has superior auto transmission tech"... all that matters to the majority is 0-60 if they're looking for an auto transmission car.
I think that's a pretty poor analogy. Comparing 16x AF on one card to 8x AF on another, or 6x AA on one card vs 4x AA on another is more like saying: Car A is faster at going uphill than Car B, so we'll take 0-60 numbers for Car A going uphill and Car B on a flat track.

Would you consider that a fair comparison?

Now, if you want to *just* compare image quality and leave numbers out of the picture, that would be a fair comparison. But performance numbers should be done on as level (i.e. not hilly ;) ) playing field as possible.
My point is "the best of A vs the best of B", OpenGL_guy, as long as the writer shows the IQ differences in addition to numbers.

I believe most folks simply want to know what the best any one card can do vis-a-vis other cards.

While I can assume that some folks are ignorant or stupid, I don't think they are ignorant or stupid enough not to know the difference between "8x" vs "16x" or "4x" vs "6x", whether in terms of screenshots or graphs.

Again, I'm not talking about "level playing field" (that, in itself, is almost impossible for reviewers unless they're privy to ALL the technology details of every single different card... I have to assume that NV's method of arriving at "8xAF" display quality is different from ATI's method of arriving at "8xAF" display quality). I'm talking about the best of A vs the best of B.
 
Reverend said:
While I can assume that some folks are ignorant or stupid, I don't think they are ignorant or stupid enough not to know the difference between "8x" vs "16x" or "4x" vs "6x", whether in terms of screenshots or graphs.

I think I'd have to assume that most folks are too ignorant to actually read the words that are written on graphs, and just compare which bar is longer. You'd be surprised if the bar for card "A" was at 1600 x 1200 and the bar for card "B" was at 640 x 480 how many people would look at the graph and then tell the news to the world how much faster card "B" is.

You'd also be surprised how many people can't locate their state on a US map (no offense to non US board members... just an example :) ).
 
Bigus Dickus said:
I think I'd have to assume that most folks are too ignorant to actually read the words that are written on graphs, and just compare which bar is longer. You'd be surprised if the bar for card "A" was at 1600 x 1200 and the bar for card "B" was at 640 x 480 how many people would look at the graph and then tell the news to the world how much faster card "B" is.

You'd also be surprised how many people can't locate their state on a US map (no offense to non US board members... just an example :) ).

To elaborate on the stupidity of people, more than half of the US population thinks that humans were around during the time of the dinosaurs.
 
Ailuros said:
Am I the only one who thinks that Anand's numbers don't seem to stack up to those of HardOCP or THG in comparison to the competition, regardless of the level of aniso employed?

It seems Anandtech used a far slower processor for the comparison than THG and [H]. THG used a 2.2GHz P4, and the R9500Pro was a little slower than the Ti4400/4600. [H] used a P4 @ 2.53GHz, and the R9500Pro kept pace with the Ti4600 and was sometimes faster. So maybe Anandtech used only a 1.8GHz P4 or a slower AMD processor (they don't give any spec info!).

So it seems to me the R9500Pro needs a fast processor to outperform the Ti4600, just like the R9700Pro (using the same core and the same drivers, this is no surprise).
 
Perspective

The great thing about the new ATI cards is how well they perform considering that all the benchmarks used today were developed and tuned for nVidia cards. In addition, fixed-function T&L is emulated through a vertex shader program, so transistors are not wasted on out-of-date technology.

If you look at the newer software coming out, the ATI cards perform much better. And when you start seeing applications with 128-bit color and large pixel shaders, anyone who buys a DirectX 8 card is toast.

It is a waste of money to buy any nVidia card or the ATI 9000. :-?
 
I think what we're clearly seeing with the Radeon 9500 right now is a lack of optimization for the subtle differences between it and the Radeon 9700. It really looks to me that ATI didn't tweak the drivers at all for the differences, just copying over the 9700 drivers (perhaps with minor essential changes, depending).

So, unless the Radeon 9500 has some architectural deficiency that causes relatively low performance at the highest-quality modes (such as having only a 2-channel memory controller), we should see significant improvements in performance in those areas over time.

Of course, I still don't really like 'em, at least not yet. Even not considering the NV30, anybody should consider waiting at least a month or two until the issues are ironed out with these cards.
 
To elaborate on the stupidity of people, more than half of the US population thinks that humans were around during the time of the dinosaurs.
You're elaborating on the stupidity of the US population, not "people".

I probably have to say "Ouch" myself :).
 
Reverend said:
To elaborate on the stupidity of people, more than half of the US population thinks that humans were around during the time of the dinosaurs.
You're elaborating on the stupidity of the US population, not "people".

I probably have to say "Ouch" myself :).

Well, some people in other countries probably don't even know that dinosaurs ever existed. The point being: if the US population is still so incredibly ignorant even with all the money and resources we have available, then what does that say about the intelligence of humanity in general? Anyway, this is a more or less pointless discussion and it's totally off topic, so I'll just drop it. ;)
 
Even not considering the NV30, anybody should consider waiting at least a month or two until the issues are ironed out with these cards.

Why would someone consider the NV30:

a) It's not a mainstream product.

b) Global mass availability is still in the air.

c) Will the NV30 ship with no issues at all? I'd bet it will run UT2003/3DMark2001 and Q3A without a glitch, but what about the rest? I'll bet the ATI fans will take careful and loooong notes after launch. Not unjustified at all, heh.
 
To be fair to Chalnoth (and I'm always fair, right?!), he said even NOT considering the NV30, one should wait a couple of months, based on needing to get "the issues ironed out."

That being said, I don't know what "issues" with the 9500 or 9700 non-Pro cards he's talking about that need to be ironed out. The 9700 Pro issues are few (though they can be significant) and pretty well known at this point, and seem to be related to power draw. If anything, the 9500 boards should be more stable / more compatible by definition, due to lower power draw.

So, unless the Radeon 9500 has some architectural deficiency that causes relatively low performance at the highest-quality modes (such as having only a 2-channel memory controller), we should see significant improvements in performance in those areas over time.

Chalnoth,

?? The "architectural deficiency" of the 9500 relative to the 9700 is a 128 bit bus! ;)

All the tests I've seen show the 9500 Pro, in NON AA / Aniso situations performing pretty much on par with the Ti 4400. Isn't that what one would more or less EXPECT to happen given the comparative raw memory bandwidth and fill rates? (Keep in mind the Ti is 4x2, and the 9500 is 8x1).

Then, with Aniso and/or AA enabled, 9500 Pro takes a significant lead.

What else should one expect from the 9500 Pro? Assuming the GeForce4 Ti architecture and drivers are "mature", how much "faster" do you think the 9500 Pro should be, based on its specs? Sure, there's always room for some performance tweaks, but I wouldn't expect too much.
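For a rough sanity check on that expectation, here's a quick back-of-the-envelope sketch comparing theoretical fillrate and memory bandwidth. The clock speeds and pipe configurations are launch-era specs from memory, so treat the exact numbers as approximate:

```python
# Rough theoretical throughput comparison (specs approximate, from memory).
# pipes = pixel pipelines, tmus = texture units per pipe,
# mem_mhz = effective (DDR) memory clock, bus_bits = memory bus width.
cards = {
    "GeForce4 Ti 4400": {"core_mhz": 275, "pipes": 4, "tmus": 2,
                         "mem_mhz": 550, "bus_bits": 128},
    "Radeon 9500 Pro":  {"core_mhz": 275, "pipes": 8, "tmus": 1,
                         "mem_mhz": 540, "bus_bits": 128},
}

for name, c in cards.items():
    pixel_fill = c["core_mhz"] * c["pipes"] / 1000       # Gpixels/s
    texel_fill = pixel_fill * c["tmus"]                  # Gtexels/s
    bandwidth = c["mem_mhz"] * c["bus_bits"] / 8 / 1000  # GB/s
    print(f"{name}: {pixel_fill:.2f} Gpix/s, "
          f"{texel_fill:.2f} Gtex/s, {bandwidth:.2f} GB/s")
```

The multitexturing fillrate (4x2 vs 8x1 both give 8 texels per clock at the same core speed) and the memory bandwidth come out essentially identical, so parity in non-AA/aniso tests is exactly what the raw math predicts; the 9500 Pro's doubled single-texture fillrate only shows up when AA and aniso shift the bottleneck.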
 
Joe DeFuria said:
That being said, I don't know what "issues" with the 9500 or 9700 non pro cards he's talking about that we need to be ironed out. The 9700 Pro issues are few (though can be significant) and pretty well known at this point, and seem to be related to power draw

No 16bit AA is still a big issue for me with the 9500/9700 series.
 
No 16bit AA is still a big issue for me with the 9500/9700 series.

Understood, but I don't think that's something Chalnoth is referring to when he says to wait a few months for issues to be ironed out.

That being said, if 16 bit AA is something you feel you require, then you shouldn't look at the Radeon 9xxx series unless they decide to implement that feature.

I personally don't blame ATI if they don't ever support it. (Not that this is any consolation to you!). There's a difference between features that gamers say they value, and a lack of said feature that would prevent a good number of them from buying the card. ;)
 
Joe DeFuria said:
No 16bit AA is still a big issue for me with the 9500/9700 series.
I personally don't blame ATI if they don't ever support it. (Not that this is any consolation to you!). There's a difference between features that gamers say they value, and a lack of said feature that would prevent a good number of them from buying the card. ;)
The whole DAoC community, then? There is no mention in the future expansion & graphics enhancements pack of a 32 bit option, so I can't even bank on that.
 
The whole DAoC community then?

Well, not the whole DAoC community, that's sort of my point:

In other words, does the whole DAoC community:

Play DAoC exclusively, such that other advantages that R9xxx brings to other games have no impact on buying decision?

Certainly, I bet the whole DAoC community SAYS they "need/want" 16 bit AA. (A reasonable request, to be sure!) However, there is only some subset of the DAoC community that fits the above description of actually not BUYING a 9x00 card because it lacks that feature.

Again, I'm not at all trying to belittle this gripe you have with the Radeon 9xxx. It's legitimate. It's just that I feel:

1) The number of people who would be prevented from buying the 9xxx because of that is currently relatively small

2) That number will only decrease over time, as 16 bit modes vanish.

So it's hard to believe that ATI would invest the resources to implement it. In all honesty, your (and the DAoC community's) efforts would be better spent getting the DAoC developers to patch the game to 32 bit.
 