Which card is the king of the hill (NV40 or R420)?

  • NV40 wins
  • They are equally matched

Total voters: 415
Status
Not open for further replies.
DemoCoder said:
If NVidia's PR was so incredible, they would have done a much better job of evangelizing SM3.0.

I think the whole NV3x ordeal is still stuck in a lot of people's minds. It's not that NVidia failed to "evangelize" SM3.0; it's that people tend not to trust them as much anymore. Even the PR department can only do so much.
 
DaveBaumann said:
But for the end user was that much of a concern? The PS2.0 on introduction boards provided tangible benefits in almost all areas, and not many detriments.

We don't know that, because the hypothetical comparison brought up in this thread was to compare the "opportunity cost" of implementing new features. (e.g. how much more could they have done if they saved the SM3.0 transistors for something else)

Thus, we'd have to compare the PS2.0 introduction to what a hypothetical PS1.4 8-pipeline 256-bit bus card would have delivered, before we start talking about the "cost" of PS2.0 vs the benefits.

There are aspects of 3.0 that haven't even been touched on yet, like using the gradient instructions with texture fetches to increase IQ. We have no idea how this would perform or how much it would improve IQ.

NVidia did a really piss-poor job evangelizing their features. If they had written more demos comparing models and effects for performance and IQ, we would have had an easier time.

As it is right now, all we have is people speculating on performance and IQ based on paper specifications.
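For readers unfamiliar with the gradient instructions mentioned above: ddx/ddy in SM3.0 expose the per-pixel screen-space derivatives the hardware already estimates across a 2x2 pixel quad, and a shader can feed them to a gradient texture fetch to control filtering itself. A rough illustration of the underlying finite-difference idea (plain Python, not shader code; the function names and the quad layout here are illustrative assumptions, not any real API):

```python
# Rough sketch of what SM3.0 gradient instructions (ddx/ddy) compute:
# screen-space derivatives of a shader value, approximated by finite
# differences across a 2x2 pixel quad. A shader can pass these
# derivatives to a gradient texture fetch, e.g. to steer mip selection.

def ddx(quad):
    """Horizontal derivative: right pixel minus left pixel, per row."""
    return [[quad[y][1] - quad[y][0]] * 2 for y in range(2)]

def ddy(quad):
    """Vertical derivative: bottom pixel minus top pixel, per column."""
    top, bottom = quad[0], quad[1]
    return [[bottom[x] - top[x] for x in range(2)]] * 2

# A 2x2 quad holding one texture coordinate (u) per pixel
quad = [[0.10, 0.20],
        [0.12, 0.22]]

print("ddx:", ddx(quad))  # horizontal rate of change of u
print("ddy:", ddy(quad))  # vertical rate of change of u
```

Real hardware does this per quad in parallel; the point of exposing it to the shader is that the program, not the fixed-function sampler, decides what to do with the derivatives.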
 
DemoCoder said:
NVidia did a really piss-poor job evangelizing their features. If they had written more demos comparing models and effects for performance and IQ, we would have had an easier time.

It's also possible the performance difference is small enough that NVidia would rather not call attention to it. And we all know it offers no new visual enhancements.
 
DemoCoder said:
DaveBaumann said:
But for the end user was that much of a concern? The PS2.0 on introduction boards provided tangible benefits in almost all areas, and not many detriments.

We don't know that, because the hypothetical comparison brought up in this thread was to compare the "opportunity cost" of implementing new features. (e.g. how much more could they have done if they saved the SM3.0 transistors for something else)

Thus, we'd have to compare the PS2.0 introduction to what a hypothetical PS1.4 8-pipeline 256-bit bus card would have delivered, before we start talking about the "cost" of PS2.0 vs the benefits.

There are aspects of 3.0 that haven't even been touched on yet, like using the gradient instructions with texture fetches to increase IQ. We have no idea how this would perform or how much it would improve IQ.

NVidia did a really piss-poor job evangelizing their features. If they had written more demos comparing models and effects for performance and IQ, we would have had an easier time.

As it is right now, all we have is people speculating on performance and IQ based on paper specifications.

Well, let's see. ATI put out an 8500 Pro 128 MB card before that. The jump from that to the 9700 Pro was huge: not only was the 9700 Pro faster than ATI's previous cards, it was also faster than NVidia's (Ti 4600/4800) and the upcoming GeForce FX 5800 Ultra (which never shipped in volume). And as you can see from Far Cry, it still plays SM2.0 very well and quite quickly, while the NV30 does not.

Now we have the GeForce 6800, which isn't shipping yet. It is on par with ATI's current shipping product (the X800 Pro) but offers PS3.0, which is an unknown factor. Then a week or two before or after the 6800s ship, we will have the X800 XTs, which are much, much faster than the 6800 Ultras.

So it is very different from the 9700 Pro vs NV30, where card A had more advanced, usable features and was also faster.

It's more akin to the 8500 vs GeForce 4: card A had more advanced features but was slower.
 
ANova said:
NVidia did a really piss-poor job evangelizing their features. If they had written more demos comparing models and effects for performance and IQ, we would have had an easier time.

It's also possible the performance difference is small enough that NVidia would rather not call attention to it. And we all know it offers no new visual enhancements.

False assertion. You keep pretending that the only thing the 6800 adds is dynamic branches. No matter how often you repeat it, it's still wrong.
 
however, NV40's vertex performance appears to be lower than R420's; will the use of vertex instancing allow NV40 to regain that ground? We don't know until we've tested it in a wide variety of scenarios.

3dmark03 does seem to show that Vertex Shader performance is higher on the X800 Pro/XT than the 6800 cards at the moment. I'd say, however, that the NV drivers do not seem to be very mature at the moment in comparison to the ATI drivers.

however, R420's PS2.0 performance is generally higher than NV40's; will the cost of state changes for unrolled PS2.0 shaders on R420 be higher or lower than the cost of NV40's dynamic branching?

I don't think this statement is really accurate. You also have to be more specific when you say "R420" vs "NV40". The X800 XT PE generally leads in shader tests in Shadermark. However, the 6800 Ultra does win some of the tests against the X800 XT PE. Also, the 6800 GT generally seems to have higher performance in Shadermark than the X800 Pro. Let's not forget again that the NV drivers seem to be less mature in comparison too. 3dmark03 actually shows the 6800 cards as having faster pixel shader 2.0 performance.
 
DemoCoder said:
We don't know that, because the hypothetical comparison brought up in this thread was to compare the "opportunity cost" of implementing new features. (e.g. how much more could they have done if they saved the SM3.0 transistors for something else)

I think the "opportunity" cost was in relation to what else is brought to the market. In all cases the capabilities of the chips have been raised.
 
jimmyjames123 said:
3dmark03 does seem to show that Vertex Shader performance is higher on the X800 Pro/XT than the 6800 cards at the moment. I'd say, however, that the NV drivers do not seem to be very mature at the moment in comparison to the ATI drivers.

As I said, we don't necessarily know that the new instructions added to ATI's VS have been utilised by their compilers yet.

I don't think this statement is really accurate. You also have to be more specific when you say "R420" vs "NV40".

Generally speaking its assumed that we're comparing the X800 XT PE against the 6800 Ultra.

The X800 XT PE generally leads in shader tests in Shadermark. However, the 6800 Ultra does win some of the tests against the X800 XT PE.

It appears to be the usual case, with full-precision execution mostly going to the XT.

Let's not forget again that the NV drivers seem to be less mature in comparison too.

And this is an unquantifiable factor. You can't necessarily bank on ATI not gaining further performance as well: there are general architectural changes, a memory bus that clearly needs some learning, and a new shader compiler on the way. I can't quantify what performance increases, if any, these may give ATI; can you?

3dmark03 actually shows the 6800 cards as having faster pixel shader 2.0 performance.

If you look at the fill-rate graphs of the PS2.0 test, you'll note that it is not really limited by pixel shader performance.
 
Yes, but the question is how much more they could have been raised. For example, could they have had 48 or 64 FX16 ALUs instead of 16 FP ALUs?

ATI also added features which may or may not be usable in the vast majority of circumstances (HDR rendering, for example). Should we chide them for wasting transistors or congratulate them for pushing things forward? Should we demand that 100% of transistors go to performance, and none go to advanced features with uncertain uptake?

The situation of the NV40 vs R420 in terms of performance is nothing like it was with the NV3x and R3x0. The card has "in the ballpark" performance in PS2.0 games; regardless of whether the R420 ultimately squeaks out a win in shader throughput, the card performs well enough in games that the "wasted" transistors on advanced features don't hurt it.

Now, the f*nb*ys can rant all they want about how people should only buy that which has maximum performance in everything, but the greater market, especially the midrange, is not looking for maximum performance; they are looking for a balance of performance, quality, and features, and they will evaluate cards based on differing subjective needs.

I just don't think there is an "overall winner" this time around.
 
DemoCoder said:
Now, the f*nb*ys can rant all they want about how people should only buy that which has maximum performance in everything, but the greater market, especially the midrange, is not looking for maximum performance; they are looking for a balance of performance, quality, and features, and they will evaluate cards based on differing subjective needs.

And are differing polls around the web such as the one displayed in this thread not a reflection of the balance of features many people are looking at right now? I'd assume so.

I just don't think there is an "overall winner" this time around.

While some may think there is, I don't recall many saying that there is a definitive winner here; it's horses for courses, as there is a distinction between what they offer and what you get. But, again, people are voting in polls like this according to what they take out of the reviews in relation to their own needs.
 
I'd contend that online polls, especially on enthusiast boards, do not represent anything. Whenever people used to run polls about operating systems, MacOS always won (pre-OSX), due to selection bias.
 
YeuEmMaiMai said:
Since ATI will be in my PC, R420 wins. I see no reason to put an NVidia card in my PC.

Well, I'd choose NVidia if I were trying to make the loudest PC ever, ideally well over 100 dB. One of the initial FX dustbusters... it'd work awesome.

Why? No, I don't know.
 
YeuEmMaiMai said:
Since ATI will be in my PC, R420 wins. I see no reason to put an NVidia card in my PC.

Really compelling argument there. Tastes great! No, less filling! Tastes great!

Hey, if I put an NV40 in my computer and see no reason to put a different card in, does that mean "NV40 wins"?

Wow, I never knew it was so simple. Maybe I should put a Volari in. Volari wins!
 
DemoCoder said:
I'd contend that online polls, especially on enthusiast boards, do not represent anything. Whenever people used to run polls about operating systems, MacOS always won (pre-OSX), due to selection bias.

Somehow I suspected that might be the answer. However, it would appear that the polls for the graphics market over the past few years have been very reflective of where things have stood in the marketplace (and I'm not just talking about here). Two years ago you'd have been hard pushed to see ATI gather a quarter of the vote over NVIDIA's offerings, which was true of the market then, and last year that swung sharply in favour of ATI, which was, again, true of the high-end market.
 
The answer is still correct. A sample size of 350, selected from a highly zealous community, does not a good poll make. That it happened to coincide with the market for the R300/NV3x doesn't prove anything. Everyone thought Howard Dean was going to win, and then Kerry destroyed him in Iowa.

Roughly 40% of the poll here thinks the cards are equal or the NV40 is better (for the record, I voted "about equal", and I think many fair-minded people who admit things are uncertain did the same, vs others who are loyal/zealous to particular vendors). What do you think the margin of error is on this poll? ;) If you believe in your own poll, with B3D being the least biased of polls vs fansites, then ATI only has a 12% lead (with unknown error margin) in the face of:

a) unknown quality of final NV40s shipped
b) driver updates
c) future FarCry patches, and Doom3 performance
d) new SKUs to come this summer, plus NV41 and NV44 (many people might be concerned over power/size/noise issues, which might be rectified)

I wouldn't expect 50-50 parity, but a 12% lead, given the error margins and the uncertainty of the market right now, might just constitute an irrelevant or statistically insignificant data point.
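The margin-of-error question raised above is easy to put a rough number on, under the generous assumption of a simple random sample (which a self-selected web poll is not). A quick sketch using the poll's ~415 voters and a hypothetical 56/44 split; the function name and the split are my illustrative choices, not figures from the thread:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p with sample size n.
    Assumes simple random sampling, which a self-selected web poll is not."""
    return z * math.sqrt(p * (1 - p) / n)

# ~415 voters, hypothetical 56/44 split: roughly +/-4.8 points on each share
moe = margin_of_error(0.56, 415)
print(f"+/- {moe * 100:.1f} percentage points")
```

Even then, that figure only covers sampling noise; it says nothing about the selection bias being argued over here, which no formula corrects for.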

We'll find out when NVidia and ATI report sales figures in two quarters.
 
The answer is still correct. A sample size of 350, selected from a highly zealous community, does not a good poll make. That it happened to coincide with the market for the R300/NV3x doesn't prove anything.

Again, as I stated, similar results are being shown in other polls at other locations asking more or less the same questions. These same polls represented the market as it stood in terms of overall features and performance, and hence where the sales were. I see no reason why they don't stand today as the current user preferences for what people feel will best meet their needs.
 