Which card is the king of the hill (NV40 or R420)?

  • NV40 wins
  • They are equally matched
  • Total voters: 415
Status
Not open for further replies.
Often, spending $500 on something colours the perception because you don't want to feel a fool for buying it. However, it's also the case that many will not be able to do a direct comparison, so they have no competing frame of reference.

And that is exactly why people need to weigh as many professional and consumer reviews as possible IMHO.

Anyway, it's the minority of people that read "consumer reviews" in forums - the majority of perceptions will come from web and magazine reviews.

This doesn't imply that consumer reviews are not useful or helpful.
 
I don't expect to see big IQ improvements, if any. It might bring additional performance, but we don't know how much.

Like it or not, SM 3.0 is the standard that the entire industry is moving towards. Many developers are embracing SM 3.0 as we speak.

And a lot of developers said there would be no IQ gains with SM3.0.

Performance and IQ go hand in hand. If you can increase performance to any appreciable level, then you can also increase IQ while maintaining a given framerate.

Besides, I believe 3Dc is meant to improve IQ. That's different to most of the NV40's advanced features which are about ease of programming and increasing speed for the most part.

I don't think that is an accurate statement. Read above about how IQ and performance go hand in hand.

Nothing wrong with that at all. I just don't expect the performance jump to be so big that the NV40 in the end will significantly overtake the R420 in performance in the majority of games. I might be wrong, but right now it's just everybody's guess.

Who said anything about the NV40 significantly overtaking the R420 in performance in the majority of games?

My point is that we simply don't know how much of a performance advantage NV40 will have when its special features you advertised are fully used.

We don't know how much of a performance advantage any card will have when its special features, as advertised, are fully used, so what's your point? It seems like you are going very much out of your way to downplay any advantages that the NV4x cards may have over the R42x cards. I wonder why that is.

Perhaps you expect wonders in performance increase, I don't.

Who said anything about "wonders in performance increase"? You have low expectations based on what exactly? You have not even seen any benchmarks yet. Seems somewhat silly.

If you base your claims on Tom's Hardware alone, then nobody is going to give any meaning to what you say.

Care to prove Tom wrong here? I don't think you can. There are other websites that came to the same conclusions. Again, do the research and talk to the reviewers.

I like a fast card which runs cool and has good IQ.

The GeForce 6 series has been described by reviewers as a fast card that is relatively cool to the touch and has good IQ. :D

If I check out the reviews, the NV40 wins most OpenGL benchmarks. Nevertheless the R420 seems slightly faster overall, especially in high resolutions with high AA+AF.

Again, you are totally missing the point. OpenGL performance is a plus for a *gamer* who enjoys playing these games. Why is that so hard to understand? Also, you are again being overly simplistic in comparing NV40 vs R420. You have to be more specific than "NV40" or "R420", as they have more than one card in each grouping, each with different performance capabilities. For instance, while the X800 XT PE is generally slightly faster than the 6800 Ultra, the 6800 GT is generally slightly faster than the X800 Pro.

Well, you were advertising features, which might improve performance of NV40, but we don't know how much of an improvement those features will bring. So why do you suddenly have a problem with me completing the picture by also mentioning that R420 might have performance improvements in store, too?

You constantly talk about how NV's well-regarded featureset is dubious, while at the same time propping up the OpenGL rewrite that is also unquantifiable at the moment. This is just you being inconsistent.

Even the X800 XT draws less power than the 6800 GT.

And what's your point? The 6800 GT does not have power requirements that are any different from a 9800XT or 5950 Ultra, so it's somewhat of a moot point really for a *gamer* with a good 350 watt PSU.

I still think you're wrong here.

Prove me wrong.

But the feature set will probably "only" bring more speed.

LOL! Talk about grasping for straws. SM 3.0 is the standard that the entire industry is moving towards. Get used to it, because we all know you will reverse your argument in the future once ATI gets on board.

Ah, so you have experience with the cards mentioned here?

I'm not the one who fancies one card over the other, like you. :D I am open-minded enough to realize that they each have strong points that make them worth buying.
 
jimmyjames123 said:
Performance and IQ go hand in hand. If you can increase performance to any appreciable level, then you can also increase IQ while maintaining a given framerate.

Who said anything about the NV40 significantly overtaking the R420 in performance in the majority of games?

We don't know how much of a performance advantage any card will have when its special features, as advertised, are fully used, so what's your point?
So what was the advantage of the NV40 again? Remember, all the special features of the NV40 are mainly meant for improving performance. Your argumentation doesn't make much sense to me. If you think the advanced NV40 features are so important, then you should also expect the NV40 to significantly improve in speed in the future, shouldn't you? What other sense do those advanced features make? But now you seem to say that you don't expect the NV40 to overtake the R420 in performance in the future. You seem to like the word "inconsistent". That's what your argumentation sounds like to me.

It seems like you are going very much out of your way to downplay any advantages that the NV4x cards may have over the R42x cards. I wonder why that is.
So because I have (based on all the reviews and discussions I've read) come to the conclusion that personally I like the R420 better, that makes me an ATI fan? Yeah, sure, go on. Mark everybody who likes 6xAA as being an ATI fan. Perhaps that will hide your own bias. But wait - maybe it won't?

Care to prove Tom wrong here?
I've already proved you wrong about 3dCenter. So now it's up to you to prove your claim. Neither B3D nor 3dCenter supports your claim. And those are two of the most thorough reviews on the net.

Again, you are totally missing the point. OpenGL performance is a plus for a *gamer* who enjoys playing these games. Why is that so hard to understand?
Because the majority of games use Direct3D. And because the games I'm playing are all Direct3D (e.g. the Need for Speed series).

And what's your point? The 6800 GT does not have power requirements that are any different from a 9800XT or 5950 Ultra, so it's somewhat of a moot point really for a *gamer* with a good 350 watt PSU.
I'm building a silent PC. I'll downclock the R420/NV40 and put a passive heatsink on it. So it matters quite a lot to me how much power each card draws.

Get used to it, because we all know you will reverse your argument in the future once ATI gets on board.
Are you able to discuss without classifying others as fans? Before the NV30, I always recommended and used NVidia cards; I even told the firm I work for to stop buying ATI cards, because they ran so unstably at that time.

I'm not the one who fancy's one card over the other, like you. :D
Ah. So I'm not allowed to favor one card because it fits my needs/wishes better than the other. Ah yes, alright.
 
LOL! Talk about grasping for straws. SM 3.0 is the standard that the entire industry is moving towards.

Could we stop that nonsense argument? The "entire industry" did not move to PS2.0 yet.

You are confusing PR stuff with the thousands of games created each year; I have yet to see any use of PS2.0 in the MMORPG I play on a daily basis.

Sure, some games (fewer than 20?) really use PS2.0 and may benefit from PS3.0, but stop with that "entire industry" BS. It reminds me too much of the "our millions of customers" crap.
 
madshi said:
So what was the advantage of the NV40 again? Remember, all the special features of the NV40 are mainly meant for improving performance.

He explained it to you. Performance enables new effects with better IQ to become feasible.

*ANY* 3D card can do anything a PS2.0/PS3.0 can do with enough passes. It's a well known result published in a couple of papers. The old fixed function multitexture pipelines are sufficient to perform any shader calculation with enough passes. So strictly speaking, anything done in PS3.0 is "possible" in PS2.0
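[Editor's note: DemoCoder's "enough passes" point can be sketched numerically. The toy below (Python, with invented values; nothing here models real hardware) emulates a three-instruction shader on a "card" that can only do one multiply-add per pass, by writing each pass's result back to the framebuffer:]

```python
# Toy illustration of the multipass argument: a "card" limited to one
# multiply-add per pass can still evaluate a longer shader by storing
# intermediate results in the framebuffer and re-reading them.
# Purely a sketch; real multipass decomposition is far more involved.

def single_pass(framebuffer, scale, bias):
    """One fixed-function pass: out = in * scale + bias, per pixel."""
    return [px * scale + bias for px in framebuffer]

def long_shader(px):
    # The "long shader" we want: three dependent multiply-adds.
    return ((px * 2 + 1) * 0.5 + 3) * 4 - 2

fb = [0.0, 1.0, 2.5]
# Emulate the long shader with three passes over the framebuffer:
for scale, bias in [(2, 1), (0.5, 3), (4, -2)]:
    fb = single_pass(fb, scale, bias)

print(fb)                                        # multipass result
print([long_shader(px) for px in [0.0, 1.0, 2.5]])  # single-pass result
```

Each extra pass costs a full read-modify-write of the framebuffer, which is exactly why "possible" and "feasible" diverge as shaders get longer.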

The question is, is it feasible. The point about "PS3.0 can add performance" is that what was once perhaps too slow to use under 2.0 becomes feasible to use under 3.0.

Ditto goes for PS2.0 performance. There are now PS2.0 shaders which run fast enough on the X800 but are essentially too expensive in fillrate to be run on a 9600, for example.

Performance (e.g. fillrate, shader throughput) enables new effects.

The point is: the claim that "3.0 is just for performance" is disingenuous. In fact, the biggest benefit of 3.0 is ease of development.
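[Editor's note: the feasibility argument above is simple arithmetic. A rough sketch (Python; the throughput figures and shader cost are invented purely for illustration, not real X800/9600 specs):]

```python
# Back-of-the-envelope check of "the same shader is feasible on one part
# but not another": compare per-frame shader work against throughput.
# All numbers are made up for illustration only.

def feasible(ops_per_pixel, width, height, target_fps, gpu_ops_per_sec):
    """True if a shader costing `ops_per_pixel` can cover the whole
    frame at `target_fps` given the GPU's shader throughput."""
    ops_per_frame = ops_per_pixel * width * height
    return ops_per_frame * target_fps <= gpu_ops_per_sec

# Hypothetical throughputs (ops/sec), chosen only to show the idea:
high_end = 50e9   # "X800-class" part
mid_range = 10e9  # "9600-class" part

# A 300-op shader at 1024x768, 60 fps:
print(feasible(300, 1024, 768, 60, high_end))   # feasible on the fast part
print(feasible(300, 1024, 768, 60, mid_range))  # not on the slower part
```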
 
DemoCoder said:
He explained it to you. Performance enables new effects with better IQ to become feasible.
That's right. But we don't know yet how much of a performance improvement SM3.0 will bring in practice. Maybe it will really help a lot, but maybe only a tiny bit. That's still to be decided, since we haven't seen any benchmarks for it yet - not even synthetic ones!

SM3.0 may help with future games, but we don't know yet how big the improvement will be. On the other hand, 6xAA and lower power consumption is useful today. That's why I prefer the R420 right now.
 
I don't think it is as simple as just asking yourself whether new features will be very useful in the card's life time, then deciding that they probably won't be and thus choosing one card over another.

If you really want progress such as new advanced features in 3D tech, it is also about walking the way you talk: it is about putting your money on the company that gives game developers the option to start coding now on a game with the features you want to see in 18 months' time.

Think about it for a second (and forget about your fondness for either ATI or nVidia): what if small company X was the only one to support SM 4.0 with a lot of really great features, while both ATI and NV stayed put with SM 3.0? You wanted those new features badly, but card X’s performance in DX9 was merely good, and the chip ended up being sold mainly to developers in order to code for your next-generation game. Newcomer Company X would of course fold, because you chose to let others pay for the progress.

I’m not too happy to use this example on the R420 vs NV40 because I sense a lot of people here (otherwise tech aficionados) would lose the point straight away in the ATI vs NV holy war. Nonetheless, just remember that someone has to pay those engineers that deliver progress in the 3D tech industry – and in the end that is you.
 
LeStoffer said:
Think about it for a second (and forget about your fondness for either ATI or nVidia): what if small company X was the only one to support SM 4.0 with a lot of really great features while both ATI and NV stayed put with SM 3.0. You wanted those new features badly but the card X’s performance in DX9 was merely good and the chip ended up being sold mainly for developers in order to code for your next generation game.

There's a limit to how much performance you can sacrifice, though. Especially since you don't know when you'll be able to use the extra features.
 
Bjorn said:
There's a limit to how much performance you can sacrifice, though. Especially since you don't know when you'll be able to use the extra features.

Yes, of course. Maybe I went a bit too far with my example in order to stay out of the R420 vs NV40 discussion. :eek:

Hmmm, I could add that the R300 was, in my book, the perfect example of very high speed with the games at launch plus next-generation features. But I probably won't be playing any PS 2.0 games on it, and the point is that I knew I probably paid for something beyond the card's life time. It was money well spent.
 
DemoCoder said:
My method is a perfectly sensible interpretation. If I want to count the number of customers above "neutrality" who chose R420 over NV40, it is perfectly within reason to do so.
You claimed the R420 has only a 12% lead over the NV40 in the poll. This is not reasonable. Even with this skewed methodology the lead is 24%.
 
Fred da Roza said:
DemoCoder said:
My method is a perfectly sensible interpretation. If I want to count the number of customers above "neutrality" who chose R420 over NV40, it is perfectly within reason to do so.
You claimed the R420 has only a 12% lead over the NV40 in the poll. This is not reasonable. Even with this skewed methodology the lead is 24%.
Well, using Demo's interpretation, the NV40 is behind by 59%:
the number of people above "neutrality" who chose NV40 over R420...
 
Althornin said:
Fred da Roza said:
DemoCoder said:
My method is a perfectly sensible interpretation. If I want to count the number of customers above "neutrality" who chose R420 over NV40, it is perfectly within reason to do so.
You claimed the R420 has only a 12% lead over the NV40 in the poll. This is not reasonable. Even with this skewed methodology the lead is 24%.
Well, using Demo's interpretation, the NV40 is behind by 59%:
the number of people above "neutrality" who chose NV40 over R420...

He claimed this skewed methodology was reasonable. So how many polling firms use it? Because in my opinion any firm that did would be out of business pretty quickly. Does that explain how reasonable it is?
 
He claimed this skewed methodology was reasonable. So how many polling firms use it? Because in my opinion any firm that did would be out of business pretty quickly. Does that explain how reasonable it is?

There are lies, damn lies, and statistics. With the right level of biased stupidity you can make numbers say whatever you feel they should be saying.
 
Heathen said:
There are lies, damn lies, and statistics. With the right level of biased stupidity you can make numbers say whatever you feel they should be saying.

That may be true, but the 50/50 split doesn't hold water. We have three choices (nVidia, ATI, equal). If there were no bias and the products were identical, the vote would be 100% equal (not 50% nVidia, 50% ATI). The 50% split can only be argued if the voter had 2 choices (ATI or nVidia). In that scenario the likely outcome of the poll would have been 27% nVidia, 72% ATI. Even then that’s an unconventional interpretation at best. And guess what, we are close to the 24% lead again.
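[Editor's note: much of the disagreement above is a denominator choice. A quick sketch (Python, with invented vote counts, since the poll's real per-option totals aren't recoverable from the thread) of how the "all voters" and "decided voters only" readings diverge:]

```python
# How the same poll yields different "leads" depending on whether the
# 'equal' bucket is counted. Vote counts below are invented for
# illustration; they are not the thread's actual poll numbers.

nv40, r420, equal = 60, 160, 80  # hypothetical votes
total = nv40 + r420 + equal

# Straight reading: lead as a share of all voters.
straight_lead = (r420 - nv40) / total * 100

# "Above neutrality" reading: drop the 'equal' voters entirely.
decided = nv40 + r420
decided_lead = (r420 - nv40) / decided * 100

print(round(straight_lead, 1))  # lead over all voters
print(round(decided_lead, 1))   # lead among decided voters only
```

Dropping the neutral bucket shrinks the denominator, so the same raw vote gap reads as a larger lead: exactly the kind of methodology choice being argued over here.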
 
I still don't understand how anyone can say SM3 is the standard everyone is moving to.

There are still no cards out that I can go and buy that support it.

Meanwhile, nvidia is still selling crippled SM2.0 parts, yet the industry already has two years of ATI SM2.0 cards to program for.

I can't see anything but SM2 being the standard.

This is the one thing that upsets me every time I hear someone say SM3 is going to be the standard.
 
Perhaps it's because you're using "standard" in the wrong way.

3.0 is already a standard. It's part of DirectX.

If you mean "will all games be written for 3.0", I think it's a misleading question.

The question is, will 3.0 be supported in upcoming games? The answer is yes.

It's not "either/or", especially with the more modern tools for creating shaders and the D3D effects framework.
 
DemoCoder said:
Perhaps it's because you're using "standard" in the wrong way.

3.0 is already a standard. It's part of DirectX.

If you mean "will all games be written for 3.0", I think it's a misleading question.

The question is, will 3.0 be supported in upcoming games? The answer is yes.

It's not "either/or", especially with the more modern tools for creating shaders and the D3D effects framework.

It's not I that is using it that way. It is jimmy, at least through his posts.

There are only 3 reasons why I don't have a GeForce 6800 Ultra in my system (assuming I could buy it):

1) Heat and the two-slot cooling system.

2) Power supply. I have a lot of stuff on my power supply and do not want to upgrade it, or have it give out and damage anything in my PC.

3) It's mostly much slower than the XT.

Yes, it will be supported by compilers, but I believe that will mostly show speed increases. For me, I don't know if it will make it faster than the X800 XT. Not to mention that the X800 XT isn't just a faster R3x0; there are new things that can be supported to help it keep its distance.
 
jimmyjames123 said:
This is due to bad code; it has nothing to do with the hardware. And being based on code, it can be changed and/or overhauled, as ATI is currently doing.

And what's your point? Software and hardware go hand in hand. We all know that ATI is hard at work trying to overhaul their OGL code. This is a good thing. But at the same time, isn't this just another "unquantifiable" element that DaveB alluded to earlier? Whether you like it or not, NV has the clear edge in OpenGL performance at the moment. This point is hardly debatable.

I'm not debating anything. My point was that nvidia's better OGL performance, claimed as an advantage of the 6800, may very well no longer exist once ATI releases its rewritten code in the near future.

DemoCoder said:
He explained it to you. Performance enables new effects with better IQ to become feasible.

*ANY* 3D card can do anything a PS2.0/PS3.0 can do with enough passes. It's a well known result published in a couple of papers. The old fixed function multitexture pipelines are sufficient to perform any shader calculation with enough passes. So strictly speaking, anything done in PS3.0 is "possible" in PS2.0

The question is, is it feasible. The point about "PS3.0 can add performance" is that what was once perhaps too slow to use under 2.0 becomes feasible to use under 3.0.

Ditto goes for PS2.0 performance. There are now PS2.0 shaders which run fast enough on the X800 but are essentially too expensive in fillrate to be run on a 9600, for example.

Performance (e.g. fillrate, shader throughput) enables new effects.

The point is: the claim that "3.0 is just for performance" is disingenuous. In fact, the biggest benefit of 3.0 is ease of development.

I think the bigger picture here is not the benefits of SM3.0 over SM2.0, but rather when those benefits will start to be used. You keep saying we're downplaying SM3.0, but maybe it's the opposite, with you playing it up? It's obviously better than SM2.0. However, like I have already said, games aren't anywhere near the instruction limit for SM2.0; by the time they surpass that limit, the 6800's speed won't be enough to run them, even with the performance enhancements. You may argue differently, but I ask you this: how then would you explain UE3?
 