first r420 review leak

From the looks of those benches, it appears that AF drags the NV40 down quite a lot, whereas ATI doesn't seem to take much of a performance hit from enabling AF. With AA it seems to be the other way around though; NVIDIA appears to have less of a hit using 4xAA. I wonder if this is just down to an immature AF implementation, since they're using an all-new method like ATI's, and ATI simply has a much more mature implementation and drivers for handling AF. Also, the new Ultra Extreme edition will probably bring it up to parity with the XT, judging from the scores. I'd love to see some HL/Doom3/Stalker benchies; these are the games we need all this grunt for.
 
Ailuros said:
I've been twisting my fingers over it for days now and I can't seem to find any real use for it. If I have performance to spare I'd rather invest it in a higher sampling pattern and/or in higher resolutions.

Correct me if I'm wrong, but it has no appreciable performance hit?
 
Trini,
I was not claiming that NVidia was more efficient. It looks like both ATI and NVidia run using 2 ALUs per pipe now, and NVidia's only advantage is 2+2 co-issue. I would actually expect NV40 to be a little less efficient per pipe due to FP32 and complexity (SM3.0). I'm just pointing out that simply saying "muauha, a 12-pipe card runs as fast as a 16-pipe card" is relative hypocrisy, since the same people were hyping ATI's lower-clocked but wider pipes in the last generation. Doing more with fewer pipes is not necessarily better than doing more with a lower clock; it's an aesthetic viewpoint if anything. Some people feel high parallelism is "more elegant"; others feel that going for very high clocks (e.g. Intel) is the right approach.
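A back-of-the-envelope sketch of that point; the pipe counts and clock speeds below are illustrative placeholders, not official specs for either card:

```python
# Rough peak pixel throughput: pipes x core clock (one pixel per pipe
# per clock). Numbers are illustrative, not official specs.
def fillrate_mpix_per_s(pipes, clock_mhz):
    return pipes * clock_mhz

wide_and_slow = fillrate_mpix_per_s(pipes=16, clock_mhz=400)    # 6400
narrow_and_fast = fillrate_mpix_per_s(pipes=12, clock_mhz=533)  # 6396

# Nearly identical ceilings; whether you get there with width or with
# clock is an aesthetic choice, not an efficiency win by itself.
print(wide_and_slow, narrow_and_fast)
```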
 
PaulS said:
Unless it already happened, I'm counting down the seconds until the ATi fans roll up and insist that every single benchmark they lose in is either a result of cheating on the part of NVIDIA, cheating on the part of the developer, or just a really crap test. Every benchmark ATi wins is both accurate and without bias.

Tick, tock.

Reading through the thread....that's exactly what I was thinking :LOL: It's hard for these boys to share the crown after wearing it so proudly for the last two years ;)
 
mozmo said:
From the looks of those benches, it appears that AF drags the NV40 down quite a lot, whereas ATI doesn't seem to take much of a performance hit from enabling AF. With AA it seems to be the other way around though; NVIDIA appears to have less of a hit using 4xAA. I wonder if this is just down to an immature AF implementation, since they're using an all-new method like ATI's, and ATI simply has a much more mature implementation and drivers for handling AF.
Yes, I noticed the same thing. AF hurts nV more than ATI, and vice versa with AA. I wonder how the ATI AF quality is now; I can't see anything from those little thumbs.
 
Eronarn said:
Ailuros said:
I've been twisting my fingers over it for days now and I can't seem to find any real use for it. If I have performance to spare I'd rather invest it in a higher sampling pattern and/or in higher resolutions.

Correct me if I'm wrong, but it has no appreciable performance hit?

True, but what you missed in my point is that you need a certain framerate for it to operate as it should; otherwise you'll end up with horrendous edge crawling. I don't know about you, but I most certainly do not appreciate dancing meanders along my poly edges, nor do I consider them a higher EER.
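For concreteness, here's a toy sketch of that failure mode. This is not ATI's actual driver logic; the jitter patterns and the refresh-rate threshold are assumptions purely for illustration:

```python
# Toy illustration only; not ATI's actual driver logic. Temporal AA
# alternates two 2-sample patterns on successive frames.
pattern_a = [(0.25, 0.25), (0.75, 0.75)]  # hypothetical jitter offsets
pattern_b = [(0.75, 0.25), (0.25, 0.75)]

def perceived_samples(fps, refresh_hz=60):
    # Fast enough, and the eye blends consecutive frames, resolving the
    # union of both patterns (4 taps for 2 samples' worth of work).
    # Too slow, and each pattern lingers on screen long enough to be
    # seen on its own, so the edge "crawls" between the two 2-tap results.
    if fps >= refresh_hz:
        return len(pattern_a) + len(pattern_b)
    return len(pattern_a)

print(perceived_samples(90))  # 4 -> looks like 4x AA
print(perceived_samples(30))  # 2 -> dancing meanders along poly edges
```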
 
Rican said:
Is it just me, or does it look like they are pretty evenly matched?

From those images, yeah. But wait until B3D's review goes up because that will provide the REAL benchmarks. :rolleyes:
 
It is pretty darn good looking so far, though not what I had hoped for. Ah well...

Now if the Pro were ahead of the Ultra like the XT is, I would be happy though.
 
Ailuros said:
True, but what you missed in my point is that you need a certain framerate for it to operate as it should; otherwise you'll end up with horrendous edge crawling. I don't know about you, but I most certainly do not appreciate dancing meanders along my poly edges, nor do I consider them a higher EER.

Oh. Well, I don't think that will be an issue on the X800; it will likely run fast enough to make it worth it now that it's an 'official' feature and not a registry edit.
 
I think what Ailuros is saying is that if he was getting 90fps @ 1024x768 w/4xFSAA, he'd rather go to 1280x1024 w/6xFSAA @ 60fps than spend the 90fps on temporal AA.

With higher spatial antialiasing and resolutions, you don't get artifacts if the framerate is unstable. You also don't lose FPS in non-triple-buffered games because of vsync lock.
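Since temporal AA requires vsync, that vsync-lock cost is easy to put numbers on. A minimal sketch, assuming a 60Hz display and double buffering:

```python
import math

def effective_fps(raw_fps, refresh_hz=60):
    """With vsync and only double buffering, a finished frame can only
    be flipped on a refresh boundary, so the rate snaps to refresh/n."""
    if raw_fps >= refresh_hz:
        return refresh_hz
    return refresh_hz / math.ceil(refresh_hz / raw_fps)

print(effective_fps(90))  # 60   -> capped at the refresh rate
print(effective_fps(55))  # 30.0 -> one missed refresh halves the rate
print(effective_fps(29))  # 20.0 -> refresh/3
```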
 
Rican said:
Is it just me, or does it look like they are pretty evenly matched?

My initial thoughts also.

For me it's going to boil down to:

The ability of my PSU to power a dual-molex 6800 vs. the single-molex X800

And

PS3.0
 
anaqer said:
Bit of an OT: could anyone please explain to me what the point of making the heatsink have its upper-right corner "chopped off" is?
It's not like it would interfere with the power connector or capacitors (like it did on the 9800XT) or something.
I thought it was common sense to maximize heatsink surface... :?:

Also : "video processing engine"... someone will like this fo sho' 8)

I'm sure it's just the poor little cooler designer trying to come up with a sleek design; look how well the rounded upper-right corner matches the roundings on the left side ;-)

Well, I guess it shouldn't be that hard to run stuff like video compression/decompression on vertex shaders, and tricks like deinterlacing, sharpness, colour features (skintone etc.) and peaking on pixel shaders. Well... hmm... a lot of video processing is temporal, though.
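As a flavour of the non-temporal part, here's a minimal sketch (NumPy standing in for a pixel shader; the kernel and coefficients are generic assumptions, not any vendor's actual video engine) of the kind of sharpness/peaking math that maps naturally onto per-pixel hardware:

```python
import numpy as np

def sharpen(frame, amount=0.5):
    """Unsharp-mask style peaking: each output pixel depends only on a
    small neighbourhood, exactly the access pattern pixel shaders are
    good at (no temporal state needed)."""
    # 4-neighbour Laplacian as the high-pass term.
    high = (4 * frame
            - np.roll(frame, 1, axis=0) - np.roll(frame, -1, axis=0)
            - np.roll(frame, 1, axis=1) - np.roll(frame, -1, axis=1))
    return np.clip(frame + amount * high, 0, 255)

frame = np.random.randint(0, 256, (480, 640)).astype(np.float64)
print(sharpen(frame).shape)  # (480, 640)
```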
 
It looks like you can't lose buying either card. I'm just going to wait and see how this is going to affect prices.
 
I would wait for a better review. Lars is one of the most biased reviewers on the net and couldn't review a calculator properly; look at his past reviews and the crowning the moron does.

The cards will be close overall, as most games are CPU-limited anyway. The deciding factor for most will be price and power requirements, although you will actually be able to buy an R420-based card... with lots of availability.
 
Rican said:
Is it just me, or does it look like they are pretty evenly matched?

I guess it probably depends on what's going on with the test setup. One thing I've learned this semester, after reading through about 50 reviews, is that very slight things can have rather major effects on framerate. Trilinear vs. bilinear filtering is a big one, but even things like benchmarking with sound, texture compression, or other settings can have big effects on performance.

I don't think any one review is capable of telling the whole story. To me right now, it looks like the NV40 has a slight edge in Tom's benchmarks, but it's pretty difficult to say. Not only that, but simply rerunning the tests on the same system might show variances. I wouldn't consider a 2-3% (4-5fps) variance in some of these benchmarks to be all that out of the question.

Nite_Hawk
 
Stryyder said:
Yup, Tom's Hardware is as impartial as MoveOn.org is...

Except that MoveOn and, oh, Club for Growth are advocacy organizations, whereas Tom is supposed to be an impartial reviewer. I hope you weren't implying that MoveOn and other groups on the left are uniquely biased when it comes to political advocacy.
 
DemoCoder said:
I think what Ailuros is saying is that if he was getting 90fps @ 1024x768 w/4xFSAA, he'd rather go to 1280x1024 w/6xFSAA @ 60fps than spend the 90fps on temporal AA.

With higher spatial antialiasing and resolutions, you don't get artifacts if the framerate is unstable. You also don't lose FPS in non-triple-buffered games because of vsync lock.

Either it's late or your maths don't add up :)

At 1024x768 he can go 4xTFSAA (8x effective) at 90fps; IMO that's better than running 1280x1024 @ 6xFSAA @ 60fps, especially when you get into 'busy' scenes and the fps drops.

Also, the fps drop would be far bigger running 6xFSAA at 1280x1024 than it would with 4xTFSAA @ 1024x768 (quick numbers below).

:D
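For what it's worth, a rough sample-count comparison backs that up. This assumes AA cost scales with samples actually rendered, and that 4x temporal AA only ever renders 4 real samples per pixel even though it looks like 8x:

```python
def samples_per_second(width, height, aa_samples, fps):
    # Crude AA cost model: pixels x samples x frames per second.
    return width * height * aa_samples * fps

temporal_4x = samples_per_second(1024, 768, 4, 90)   # ~283M samples/s
spatial_6x = samples_per_second(1280, 1024, 6, 60)   # ~472M samples/s

print(spatial_6x / temporal_4x)  # ~1.67x the fill workload
```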
 
trinibwoy said:
PaulS said:
Unless it already happened, I'm counting down the seconds until the ATi fans roll up and insist that every single benchmark they lose in is either a result of cheating on the part of NVIDIA, cheating on the part of the developer, or just a really crap test. Every benchmark ATi wins is both accurate and without bias.

Tick, tock.

Reading through the thread....that's exactly what I was thinking :LOL: It's hard for these boys to share the crown after wearing it so proudly for the last two years ;)

I like to call that trolling.

I would wait before saying they are going to split the crown. ATi wins almost every bench except for CoD, where NV just destroys it.
 
hmmm said:
Stryyder said:
Yup, Tom's Hardware is as impartial as MoveOn.org is...

Except that MoveOn and, oh, Club for Growth are advocacy organizations, whereas Tom is supposed to be an impartial reviewer. I hope you weren't implying that MoveOn and other groups on the left are uniquely biased when it comes to political advocacy.

I think he's saying that Tom is as biased as those groups are, even though he shouldn't be, and that's the deciding factor. Bias isn't bad in a political sense, but it IS bad in an 'impartial' sense. Remember, he's judging the product, and judges are supposed to be impartial.
 