Radeon 9800 Pro preview

tEd said:
Ante P said:
tEd said:
oh we surely have a winner here. Some comments from the author seemed a little off though

what comments would those be
keep in mind though that I'm rather looking at this from a consumer's perspective

as I said in the first post I'm not a "techie" :)

nothing major really, just for instance the "still only 6x" thingy was a little confusing, considering there is nothing comparable in quality to date

that's why I called it "wish list"
I also explained why I want a higher mode

for example, my monitor only does 75 Hz at 1600x1200
and 1280x1024 is still a bit jagged even with 6x AA

thus: I want 8x AA

and to make it really clear, I also stated that no other tech even comes close in quality, but that doesn't mean this card is perfect: thus there's room for improvement, and I was personally hoping for more AA


*takes a breather* :)
 
Wow, I was blown away by the improvements over the R300. I really didn't think it could be done in such a short time.

Kudos to ATI for advancing the industry in the correct direction.
 
Interesting stuff! Great preview, Ante P.

Comments:

Anyone else notice the significant relative performance benefit at 4xMSAA specifically? (Hint to Ante P--if you can, that last page with the 9700 OC'd to 9800 clocks should feature 4xMSAA tests!) I think it was UT2003 4xAA/8xAF that had the 9800 at 1.5x the 9700's performance, despite only a 15%/10% clock increase. The UT2003 high-end scores look particularly good, which is very interesting because a) it seems to be the new benchmark of choice and b) the 9700/GFfx 5800 take a *huge* hit from AF, and even more from AF+AA, at high-res UT2003.

My speculation on all this is that 9800 in some way optimizes the framebuffer compression for 4xMSAA in particular. (My guess is that it does what I also guess NV30 does--"natively" compress at 4x using a bitmask to signal whether compression is enabled or not on a per-pixel basis.)
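The per-pixel bitmask idea speculated about above can be sketched as a toy model (entirely hypothetical code -- the real hardware scheme is undisclosed): when all four MSAA samples of a pixel are identical (no triangle edge crosses it), store a single colour and set the pixel's flag bit; otherwise store all four samples uncompressed. Interior pixels then cost a quarter of the bandwidth, which would explain an outsized win at 4xAA.

```python
def compress_4x_msaa(pixels):
    """Toy model of lossless 4x MSAA colour compression.

    pixels: list of 4-tuples, one tuple of sample colours per pixel.
    Returns (bitmask, payload): bitmask[i] is True when pixel i was
    stored as a single colour (4:1), False when all 4 samples were kept.
    """
    bitmask, payload = [], []
    for samples in pixels:
        if all(s == samples[0] for s in samples):
            bitmask.append(True)         # interior pixel: store 1 colour
            payload.append((samples[0],))
        else:
            bitmask.append(False)        # edge pixel: store all 4 samples
            payload.append(tuple(samples))
    return bitmask, payload

def decompress_4x_msaa(bitmask, payload):
    """Expand each compressed pixel back to its 4 samples, losslessly."""
    return [p * 4 if flag else p for flag, p in zip(bitmask, payload)]
```

Since most pixels in a typical frame are interior rather than edge pixels, even this crude scheme would save a large fraction of framebuffer traffic.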

Also, the obvious guess for the stencil optimization is that early/Hier-Z now work in z-fail mode as well as z-pass mode, i.e. can be turned on when performing Carmack's Reverse.
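The z-fail idea can be illustrated with a toy 1D model (my own hypothetical sketch, not ATI's implementation): z-fail counts only shadow-volume faces that *fail* the depth test, i.e. lie behind the visible surface, incrementing the stencil on back faces and decrementing on front faces; a nonzero count means the fragment is inside a volume. The speculation above is that Hier-Z traditionally early-rejects on z-pass, so it would have to be extended to cull correctly in this inverted mode too.

```python
def in_shadow_zfail(fragment_depth, volume_faces):
    """Toy 1D model of z-fail ('Carmack's Reverse') stencil counting.

    volume_faces: list of (depth, kind) pairs, kind 'front' or 'back',
    describing shadow-volume faces along one pixel's view ray.
    Only faces FARTHER than the visible fragment (depth test fails)
    update the stencil: +1 for back faces, -1 for front faces.
    """
    stencil = 0
    for depth, kind in volume_faces:
        if depth > fragment_depth:       # depth test fails -> count it
            stencil += 1 if kind == 'back' else -1
    return stencil != 0                  # nonzero stencil => in shadow
```

A fragment at depth 5, inside a volume spanning depths 3 (front cap) to 8 (back cap), only has the back face behind it, so the stencil ends at +1 and the fragment is shadowed; fragments in front of or behind the whole volume see matched +1/-1 (or no) counts and come out lit.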

Pretty impressive stuff, although the list prices are on the high side IMO.
 
Luminescent said:
According to HardOCP:
The R300 supports 96bit FP (Floating Point) precision, while the GeForceFX supports up to 128bit FP precision. The 9800 Pro now supports 128bit, 64bit, and 32bit FP pixel precision.
Does this mean fp32 precision per component in the fragment color processor?
I'm not sure about the R350 (don't have one!) so I can't answer this particular puzzling bit, but I would have to assume no... i.e. the core is still 24-bit FP per component... the 96-bit *max* is written to main memory as a 128-bit value... some input/clarification from the ATI engineers on this would be appreciated.

On a related note - can anyone tell me (examples appreciated) why max-fp24 would be a drawback (compared to max-fp32 a la the NV30)? It's a personal bitch of mine re the R300, with my own reasons of course.
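For what it's worth, the gap can be sketched numerically. Treating fp24 as roughly 17 significant bits (16 stored + 1 implicit) and fp32 as 24, and ignoring exponent range, denormals and specials (a deliberate simplification), rounding a value to each width shows where fp24 starts to bite -- e.g. large texture coordinates or long dependent-read chains:

```python
import math

def round_to_bits(x, sig_bits):
    """Round x to sig_bits significant binary digits (toy float model;
    ignores exponent range, denormals and special values)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)              # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2.0 ** sig_bits
    return math.ldexp(round(m * scale) / scale, e)

fp24 = lambda x: round_to_bits(x, 17)   # s16e7: 16 stored + 1 implicit bit
fp32 = lambda x: round_to_bits(x, 24)   # s23e8: 23 stored + 1 implicit bit

x = 4096.1                              # e.g. a texel address in a 4k texture
print(abs(fp24(x) - x))                 # fp24 error (hundredths)
print(abs(fp32(x) - x))                 # fp32 error, orders of magnitude smaller
```

At values around 4096 the fp24 spacing between representable numbers is already about 1/16, which is the sort of thing that shows up as precision artifacts in dependent texture addressing.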
 
r350 is 24bit per channel
so far it is enough for everything we threw at it
i have seen spectacular examples where 32bit fp seems to fail on the same shader
 
dominikbehr said:
r350 is 24bit per channel
so far it is enough for everything we threw at it
i have seen spectacular examples where 32bit fp seems to fail on the same shader
Could you tell us what those examples are, and whether they are important (to you, for what you're referring to)?
 
I'm sure there are a lot of people out there who, like me, are wondering whether this card is going to be enough to play Doom III and games based on its engine at fairly high res and fairly good quality.

It's certainly fast enough for current games, maybe even overkill, but will it be fine for the next games? Should I whip my GF4 4400 into pulling my increasingly heavy cart just that little bit longer?
 
Typedef Enum said:
I'm looking forward to reading Anand's R350 preview...quite simply, I would like to hear his thoughts on ATI's offering(s) vs. nVidia @ this point in time. Last time, it was "delivered as promised." This time, I'm thinking it might be something like, "nVidia lost the performance race quite some time ago, and it doesn't look like they will get it back anytime soon."

Well, Anand just can't keep his head out of nVidia's....oh, well, you know:

NVIDIA will not have a chance to respond to the Radeon 9800 Pro for another couple of months, with their NV35 part. NVIDIA has NV35 up and running and it is already significantly better than the lackluster NV30; although we're not sure if it will be able to outperform the Radeon 9800 Pro, at this point we can say that from what we've seen, NVIDIA has regained some of our confidence.



Didn't find the card that great. Sure, a good card, but I'm a bit disappointed.

And this suprises whom? ;)
 
According to this page of Anand's review:
http://www.anandtech.com/video/showdoc.html?i=1794&p=7

the performance difference between enabling quality and performance aniso on the 9800 is not as great as doing so on the 9700. At 1280*1024 with 4x AA and 8x aniso the difference (in fps) between quality and performance aniso is 15.5, while the difference between the two on the 9700 Pro is 26.4. Is this due to the memory controller tweaking or possibly increased trilinear texture filtering throughput?
 
An interesting thing, from the HardOCP numbers: Nvidia's drivers seem to be slightly less CPU-heavy than ATI's. In CPU-limited situations the FX always seems slightly faster (though it doesn't really make much difference, I know). Although I suppose CPU inefficiency and other driver inefficiencies are not entirely linked, based on this it seems to me that the Nvidia driver is already pretty optimized. So, those expecting a miracle for the FX from future driver improvements may be disappointed...
 
Nagorak said:
An interesting thing, from the HardOCP numbers: Nvidia's drivers seem to be slightly less CPU-heavy than ATI's. In CPU-limited situations the FX always seems slightly faster (though it doesn't really make much difference, I know). Although I suppose CPU inefficiency and other driver inefficiencies are not entirely linked, based on this it seems to me that the Nvidia driver is already pretty optimized. So, those expecting a miracle for the FX from future driver improvements may be disappointed...

this is old news, it's been like that for ages.
here's some of my old benchmarks scores where you see the same situation:
http://www.nordichardware.se/recensioner/grafikkort/2002/GV_R9700/index.php?ez=6

http://www.nordichardware.se/recensioner/grafikkort/2002/Gainward_GF4_8X/index.php?ez=7
in this test even geforce 4 mx is faster than the 9700 when things are CPU-limited

I could dig up a lot more too
 
Club3d card with 2 fans. 8)
 