Which card is the king of the hill (NV40 or R420)?

  • NV40 wins
  • They are equally matched
  • Total voters: 415
pascal said:
We are STILL waiting for the first truly pervasive use of the DX7 feature set level in a game (DOOM3), and Carmack said last year that his next base level for development after DOOM3 will be DX9 level. My guess is we will have to wait 4 more years to see something really good with DX9 :LOL:

I'm thinking that Doom3 is now going to be more of a DX8/DX9 hybrid, though it was originally supposed to be a DX8 game with a low-quality fallback for DX7 cards.
 
The only quality improvement will come from FP precision, but see this: http://www.beyond3d.com/forum/viewtopic.php?t=10607&postdays=0&postorder=asc&start=80

DaveBaumann said:
Mordenkainen said:
My whole point in coming out from behind the curtain in this thread is that DOOM 3 is always referred to as DX7 tech. I've pointed out it uses fragment programs, which DX7 hardware doesn't support, only DX8. I've pointed out how DOOM takes advantage of floating point precision in the fragment_program, which only DX9 hardware can do.

The quote pointed to earlier is quite pertinent here. The feature set for Doom3 is designed around the capabilities of GeForce256/2 (DX7 class). To achieve this on DX7 hardware it takes something like 8 rendering passes. With later hardware he uses various vertex and fragment programs to collapse these rendering passes and reduce the quantity of work done - IIRC D3 actually uses very few shaders, and the majority of them are different methods of calculating the unified lighting model depending on the fragment capabilities of the board (NV1x / ARB path - multiple passes, NV1x/2x path - 2-3 passes, R200 path - 1 pass, NV3x path - 1 pass, ARB2 path - 1 pass). Basically all of these are doing roughly the same job, but each path implicitly takes advantage of the higher capabilities of the hardware it was written for.

It's probably disingenuous to say there will be no quality differences - JC points out himself that there are quality differences, but these are probably fairly incremental in nature. For instance, the ARB2 path will automatically make use of float precision (which he mentions), but the ARB2 shader for the unified lighting model is probably too short for the differences between int / FP16 / FP24 / FP32 precision to be hugely noticeable - he notes that you'd only spot the differences between the NV3x path and ARB2 if you knew what you were looking for.

This is the difference between a game that is designed around a DX7 feature set and one that's designed around a DX9 feature set, as JC says his next engine will be. While there are certainly going to be incremental improvements in quality between the ARB and ARB2 paths, because the game was inherently designed around the DX7-class feature set the functionality available in the ARB2 path won't show that much difference. When he starts designing specifically around the DX9/R300/NV30 (or better) feature set, that will set the baseline and he'll really be pushing the shader and precision limits of this class of hardware, such that I'd expect an enormous quality increase.

It was designed around the DX7 feature set level (GeForce256 NSR). This is why I call it a DX7-level game.
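
To make the "roughly the same job" point above a bit more concrete, here's a very rough CPU-side sketch in plain C of the kind of per-light term every one of those paths ends up computing (the function names and the specular exponent are mine for illustration, not id's actual shader code). On the DX7-class path each term below costs one or more extra rendering passes blended additively into the framebuffer; a DX8/DX9-class fragment program evaluates the whole thing in a single pass per light.

Code:
/* Illustrative sketch only - not id's code. One light's contribution to a
 * pixel under a Doom3-style unified lighting model: bump-mapped diffuse
 * plus specular, modulated by the light colour. */
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float clamp01(float v)     { return v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v); }

static vec3 light_contribution(vec3 normal,      /* from the normal map  */
                               vec3 to_light,    /* normalised L vector  */
                               vec3 half_vec,    /* normalised H vector  */
                               vec3 diffuse_map, vec3 specular_map,
                               vec3 light_color)
{
    float n_dot_l = clamp01(dot3(normal, to_light));
    float n_dot_h = clamp01(dot3(normal, half_vec));
    float spec    = powf(n_dot_h, 16.0f);          /* arbitrary exponent */

    vec3 out;
    out.x = (diffuse_map.x * n_dot_l + specular_map.x * spec) * light_color.x;
    out.y = (diffuse_map.y * n_dot_l + specular_map.y * spec) * light_color.y;
    out.z = (diffuse_map.z * n_dot_l + specular_map.z * spec) * light_color.z;
    return out;  /* multi-pass hardware adds this into the framebuffer, light by light */
}

int main(void)
{
    vec3 n = {0.0f, 0.0f, 1.0f}, l = {0.577f, 0.577f, 0.577f}, h = {0.3f, 0.3f, 0.9f};
    vec3 diff = {0.8f, 0.6f, 0.5f}, spec = {0.2f, 0.2f, 0.2f}, col = {1.0f, 1.0f, 1.0f};
    vec3 c = light_contribution(n, l, h, diff, spec, col);
    printf("one light's contribution: %.3f %.3f %.3f\n", c.x, c.y, c.z);
    return 0;
}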
 
RostokMcSpoons said:
I'm thinking I'm going to hold out for PCI-E, and the *next* gen cards... probably an R500. Which means I'll be getting an Athlon64 cpu too... no doubt SATA Raid would be 'nice' ;) I'm going to need time to save up for a *total* upgrade. I just know I'll kick myself if I buy another AGP card now, with PCI-E so close...

(I reserve the right to change my mind about all of that, if HL2 actually does come out soon, and my pc can't get over 40fps!)


Btw... I wonder what impact the widespread adoption of LCD screens will have on the future of graphics cards... where the displayable frame rate is often limited to 60 or at best 75fps... it gets harder to justify buying a mammothly powerful card when half the frames it renders can't ever be displayed... ?

2 things

1. System-wise I'm in the same boat as you - 9700Pro/AthlonXP2100@2600speed. One reason I'll probably wait for the refresh nVidia/ATI products is that I don't want to buy an AGP card when in 6-12 months' time I'll be buying a PEG mobo.

2. LCDs - IMO the ATI card is the sane choice for now. T-AA works better with more headroom on v-sync, so if you are limited in rez you can get usable 6xAA with the X800Pro with no T-AA (where currently the 8x nVidia mode does them no performance favours), or 4xAA plus 2x T-AA (8x effective). DoD will never have looked so good.
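
For anyone wondering where the "8x effective" number comes from: roughly speaking, with T-AA the driver alternates between two different 4x sample patterns on even and odd frames, so over any two consecutive frames your eye integrates eight distinct sample positions per pixel - which is also why it needs v-sync and framerate headroom to look right. A toy sketch (the offsets below are made up for illustration, not ATI's actual sparse grids):

Code:
/* Toy illustration of temporal AA: alternate two 4-sample patterns per frame. */
#include <stdio.h>

typedef struct { float x, y; } sample_pos;

static const sample_pos pattern_a[4] = {
    {0.125f, 0.625f}, {0.375f, 0.125f}, {0.625f, 0.875f}, {0.875f, 0.375f}
};
static const sample_pos pattern_b[4] = {
    {0.125f, 0.125f}, {0.375f, 0.625f}, {0.625f, 0.375f}, {0.875f, 0.875f}
};

int main(void)
{
    for (int frame = 0; frame < 4; ++frame) {
        const sample_pos *p = (frame & 1) ? pattern_b : pattern_a;
        printf("frame %d samples:", frame);
        for (int i = 0; i < 4; ++i)
            printf(" (%.3f, %.3f)", p[i].x, p[i].y);
        printf("\n");   /* any two consecutive frames cover 8 distinct positions */
    }
    return 0;
}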
 
"Dod will never have looked so good."

T-AA using Radlinker on my 9800XT makes DoD look unbelievably good. I can't wait to try out an x800 product!!
 
Evildeus said:
Well, NV should have 1 year of experience before ATI delivers an SM3.0 card, but ATI will deliver.

You know, this made an interesting point occur to me. Perhaps ATI is now in their "early GeForce period," where nVidia kept rehashing virtually the same GF1 tech with each subsequent release until they were forced to do otherwise. The R500/600 could very well end up being their own NV30. The past is no guarantee of future performance.
 
I was a little underwhelmed by ATI's latest offering. Strong DX performance, but OpenGL performance is lacking.

Framerate - tie
Featureset - NV
Drivers - NV
AA quality - ATi
Shader quality - ATI
AF quality - NV

I'd give this gen to NV.
 
ATM, I would disagree with your ranking. I don't know how much NV can improve its shader performance; we will need to wait for the first comparisons (and WHQL drivers):
Framerate DX - ATI
Framerate OGL - NV
Featureset - NV
Drivers - ATI
AA quality - Draw
Shader quality - NV
Shader performance - ATI
AF quality - Draw (NV if full-image AF is available)

So I would say it's a draw for the moment.
 
ED...um I will agree with everything you just posted, except the AA - there's no doubt that ATI has the better usable FSAA..... with the speed of both X800's, 6X is a very usable option, and that's not even counting temporal AA or gamma correction........ While nVidia deserves kudos for getting better, it's still not competitive with ATI there......

And, for those of us that really use FSAA, it is a huge difference - more than enough to tip the scales in ATI's favor......not counting the size and power requirement differences....... ;)

Hmmmm...And shader "quality"....What the heck is THAT?????
 
Personally, here's how I would rank them:

Raw speed - tie
FSAA- ATI
AF- slight ATI lead
DirectX games- ATI
OGL games- nvidia (but this counts mostly for Q3 engine)
Price- ATI
Size & heat- ATI
Driver Quality- ATI
Featureset- nvidia
Potential to wring more out of drivers: nvidia
Shader perf- atm ATI, nvidia can still pull ahead

nv40 seems like a great chip on its own, but there are a couple of little things that make me want to stick with ATI for this round. The size and heat issue alone is, I think, enough for most people to continue buying ATI. Overall the X800 just seems better. I don't see the GT winning much over the X800P, and the 6800U doesn't beat the X800XTPE completely.

I think DO really hit the nail on the head though, when he mentioned that nv40 as it stands right now is going to have a difficult transition to the mobile space. I would expect an RV4xx/M12 variant in the fall to do quite well with the low-k .13u helping out.
 
MrBond said:
I think DO really hit the nail on the head though, when he mentioned that nv40 as it stands right now is going to have a difficult transition to the mobile space. I would expect an RV4xx/M12 variant in the fall to do quite well with the low-k .13u helping out.

I agree that the mobile space will be difficult. I don't see anything that would stop Nvidia from going low-k for the mobile parts, but they do need extra transistors because of the 2D processor/feature set/FP32. But we'll see what happens. The thing I'm worried about is that the new mobile parts will be severely bandwidth-starved (as with the 9600 SE), which will create problems in new games even though the GPU itself could handle them.
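
Rough back-of-the-envelope numbers for what I mean by bandwidth-starved (the clocks and per-pixel costs here are illustrative guesses, not actual specs for any particular part):

Code:
/* Toy bandwidth estimate: a narrow 64-bit bus vs. a modest game workload.
 * All numbers are illustrative guesses, not vendor specifications. */
#include <stdio.h>

int main(void)
{
    /* Hypothetical mobile part: 64-bit memory bus, 300 MHz effective DDR rate. */
    double bus_bytes     = 64.0 / 8.0;
    double mem_rate_hz   = 300.0e6;
    double available_gbs = bus_bytes * mem_rate_hz / 1.0e9;

    /* Modest workload: 1024x768, 60 fps, ~3x overdraw.
     * Per pixel touched: colour read+write, Z read+write, texture fetches
     * (~24 bytes total as a crude guess). */
    double pixels_per_frame = 1024.0 * 768.0 * 3.0;
    double bytes_per_pixel  = 24.0;
    double needed_gbs       = pixels_per_frame * bytes_per_pixel * 60.0 / 1.0e9;

    printf("available: ~%.1f GB/s, needed (rough): ~%.1f GB/s\n",
           available_gbs, needed_gbs);
    return 0;
}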
 
Raw speed - tie
FSAA- ATI
AF- slight ATI lead
DirectX games- ATI
OGL games- nvidia (but this counts mostly for Q3 engine)
Price- ATI
Size & heat- ATI
Driver Quality- ATI
Featureset- nvidia
Potential to wring more out of drivers: nvidia
Shader perf- atm ATI, nvidia can still pull ahead

Agree on all but raw performance, in which I'd give ATi the slight edge. But driver development is everything, and ATi has an advantage here at the moment, in DX anyway. Give it a few more months and we may see a different picture in raw speed, DX games and OGL games.
 
Aren't ATI building a new OGL driver?

Oh, and surely ATI has won MSAA hands down because of 6x, gamma correction for everyone, and T-AA where it will work.

The problem is not many sites bothered to show 6xAA performance.
 
Unfortunately, they can't show 6x AA and still be fair- because Nvidia doesn't have the mode. It'd be either 6x AA vs. 4x AA (where ATI would lose) or 6x AA vs. that weird one that Nvidia uses (where ATI would win). Both situations are somewhat unfair.
 
Eronarn said:
Unfortunately, they can't show 6x AA and still be fair- because Nvidia doesn't have the mode. It'd be either 6x AA vs. 4x AA (where ATI would lose) or 6x AA vs. that weird one that Nvidia uses (where ATI would win). Both situations are somewhat unfair.

Sorry, but it's "unfair" to NOT use 6X FSAA.... If the card has the power to use it, then use it!
 
Eronarn said:
Unfortunately, they can't show 6x AA and still be fair- because Nvidia doesn't have the mode. It'd be either 6x AA vs. 4x AA (where ATI would lose) or 6x AA vs. that weird one that Nvidia uses (where ATI would win). Both situations are somewhat unfair.

That's what's wrong with the pure shoot-out mentality in reviews, IMO. I don't specifically care that nVidia doesn't have 6xAA when I want to know whether 6xAA is usable on the X800.

Also, if you benched apples to apples (as much as possible) and then went on to max quality, then it is fair to bench 6x vs 8x (where ATI's 6x has better edge quality most of the time anyway).

On a 1024x768 LCD I sure as hell want to know what max AA I can get and use, and the same should be true of any 1280x1024 LCD as well.

That's why I like HOCP's methodology - hell, enough sites do pure apples-to-apples graph wars; at least [H] are giving different information than everybody else, information that I find relevant.
 
martrox said:
Eronarn said:
Unfortunately, they can't show 6x AA and still be fair- because Nvidia doesn't have the mode. It'd be either 6x AA vs. 4x AA (where ATI would lose) or 6x AA vs. that weird one that Nvidia uses (where ATI would win). Both situations are somewhat unfair.

Sorry, but it's "unfair" to NOT use 6X FSAA.... If the card has the power to use it, then use it!

We don't need to give the nVidiots more ammo... 'HAH WE BEAT U AT TEH BENCHMARKS LOLOL'... I can see it now :cry:
 