CNET REVIEW

CNet probably realised that FS and Splinter Cell are completely CPU bound at medium resolution.
Those numbers are therefore irrelevant. People wanting to run FS2004 at 1024x768 should keep their old 9800XT....

So let's see what happens when you ignore those CPU-bound situations and just look at the high-end bars:

Comparing the high-end settings of the X800 XT PE and the 6800U:
+27%, +7%, +25%, +65%

And the X800 Pro compared to the 6800U:
+6%, -13%, -5%, +23%


Now, when a card runs on average 30% faster than its equally priced competitor while using far less power and space,
and its much cheaper little brother is as fast as that competitor, isn't it then logical to call it a stunning defeat?
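(A quick back-of-the-envelope check of that average, using just the four high-end deltas quoted above -- a rough sketch of the arithmetic, nothing more than those percentages goes into it:)

Code:
# Average the per-game high-end deltas quoted above (percent vs the 6800U).
xt_pe_vs_6800u = [27, 7, 25, 65]   # X800 XT PE
pro_vs_6800u   = [6, -13, -5, 23]  # X800 Pro

def average(deltas):
    return sum(deltas) / len(deltas)

print(f"X800 XT PE vs 6800U: {average(xt_pe_vs_6800u):+.0f}% on average")  # +31%
print(f"X800 Pro   vs 6800U: {average(pro_vs_6800u):+.1f}% on average")    # +2.8%

(That's a simple mean of the percentages, the same way as above; a geometric mean of the frame-rate ratios would be a touch more rigorous, but with these numbers it still comes out around 30% for the XT PE and roughly parity for the Pro.)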

Just consider if these weren't video cards but CPUs.... There, people argue over a 3% performance difference when deciding which one to choose...

I would almost say: It's a stunning defeat, from a certain point of view. ;) :LOL:
 
I think the main problem with this situation at the moment is that Nvidia pushed Far Cry like mad at their launch as the game which would show Nvidia's strength. The mainstream press saw this and picked up on it. :D

After that show, the same mainstream press will have been focusing on Far Cry as the example of Nvidia's new-found strength, and were then greeted with the X800 all but trouncing it in every resolution and setting while showing better quality. :oops:

At this point it SEEMS Nvidia panic-released new drivers to improve Far Cry performance on the 6800U, while introducing a huge number of bugs and flaws at the same time, which COULD be attributed to these gains in frames. :oops:

I am not for or against either IHV, but I think Nvidia brought this one on themselves: they focused everyone's attention on Far Cry, including the highly gullible mainstream press. So when the X800 came out looking better visually, with no faults and far more performance in that particular PS3.0 title (hey, I know it isn't actually PS3.0 yet, but the mainstream press are, as I said, gullible), in their words that would be a stunning defeat.

I think this is a self-inflicted wound which Nvidia have created with Far Cry. So what will they do now, denounce it like Futuremark, or cheat? So far we have what could be described as cheats in the new drivers, or they could be bugs; none of us know, and we will not know for about two months. If the problem still remains then, it is an intentional one, and it is unlikely to be removed until performance beats that of the X800, which is wrong. :devilish:
 
John Reynolds said:
I have to say I disagree with the wording of that article. It's not a "stunning defeat" by any stretch of the imagination. The two high-end boards are fairly close in performance, and what differences there are stem from the approaches the two companies took for this generation. Because of the added transistors required for features like fp32 and SM 3.0, nVidia has a larger chip with a lower clock speed and higher power consumption, while ATI has a chip that can be put on boards that'll work in even SFF rigs.

I don't think, though, that anyone put a gun to nVidia's head and forced them to go SM3.0 and fp32, right?...;) I mean, fp24 has been the full-precision baseline for DX9 for--18 months now? nVidia plowed millions last year into moving developers away from ps2.0+ and towards ps1.x, and so 3.0 is destined for a rocky start at best in terms of *real* developer support, even if nVidia's yields for nV40U are optimal. (This is an example of why negative PR is bad for companies--if they change their minds they end up having to undo what they've already done--which is costly and time consuming at the least.)

Right now there's not a single game destined to be released this year, or probably next, that will even require fp24 precision, much less fp32, to render as intended without artifacts. (One reason for that is that there are still plenty of integer-only 3d cards in circulation, which developers certainly do not wish to exclude at this time; the other is that developers haven't yet matured and refined their own internal tools to support the color precision capabilities of full-precision fp rendering.) Neither D3 nor HL2 will require it--that's a given at this stage. It's certainly no defense this product generation to claim that nVidia "didn't know" anything about DX9 and that it's "M$'s fault" again (just speaking of those who blame M$ for nV3x instead of nVidia), is it? Heh...;)

The real area to me in which this might well actually become a "stunning defeat" for nVidia is in the prospect of yields--which are usually poorer with higher transistor counts and larger dies that require more power and dissipate more heat. I think nVidia's already had to come off of its initial target of 475MHz for the 6800U and drop it back to 400MHz for the sake of yields, which may still be problematic even at that clock in terms of a profitable situation for nVidia. They'll probably yield well at 350MHz for the GT, though, I would imagine and certainly hope, for their sakes. I also think that in terms of system OEM deployment (Dell, etc.), even with acceptable nV40U yields, it will be the added power and heat dissipation of nV40U cards that heavily moves these companies into the ATi x800 Pro/PE camp (again, nVidia should have learned this lesson from the company it absorbed, 3dfx, and the V5 series.)

I mean, it all adds up to what *could be* a stunning defeat for nVidia this time as well, since nV40U yields may put nVidia in a spot similar to where it was for most of '03 with respect to yields out of nV30/5/8. And if that should happen it won't be anybody's fault except nVidia's, for over-engineering the nV40 part. But I'll agree with you that based on a purely academic comparison of the prototype cards provided to reviewers thus far (not counting marketing bullets like nVidia's "ps3.0" support versus ATi's 2.0a/b+ support, since we don't yet know if the added ps3.0 features nVidia supports, and ATi doesn't, will actually be an incentive for developers to support them or for customers to buy them), there isn't a lot of apparent difference at this point in overall performance. That's why I think the real battlefield between the nV40U and the x800 Pro/PE will be which one is brought to market in greater numbers to meet demand, and which one the system OEMs show a preference for. From what I've seen so far, ATi has a perceivable advantage over nVidia in both categories at this time.

I just checked here when writing this post:

http://www.bestbuy.com ...and both the x800 Pro and x800 PE are listed, with earliest availability dates of 5/14 and 6/15 respectively. Best Buy has no 6800U products listed whatsoever, with any availability dates whatever. So, at least right now, it sure looks like ATi handled the situation better than nVidia in regards to actually making something they could bring to market in a timely fashion. nVidia's situation could improve, of course, any day now, but right now that's the way it appears.
 
Like the previous poster, I only look at the high-res settings. That's the least CPU-limited area and thus by far the most interesting to me. I also found it interesting that they used flyby rather than botmatch--surely a more useful test in these CPU-limited times?

Trounce may be a strong word, but if they thought the nv35 was a good card, then I suppose the surprise we got over a year ago is only just hitting them.
 
WaltC said:
I don't think, though, that anyone put a gun to nVidia's head and forced them to go SM3.0 and fp32, right?...;) I mean, fp24 has been the full-precision baseline for DX9 for--18 months now?

Walt, my point was that by their own benchmarks it's hardly a "stunning defeat". My bone of contention was the word choice. I'm not trying to make fp32 or the 3.0 model out to be more important for this generation's lifespan than they are. In fact, I've argued rather strongly that 2.0 will be the inflection point amongst developers for quite some time, for obvious reasons (well, obvious to me).

And fp24 is pp for SM 3.0, so once you decide to support the latter. . . .
 
John Reynolds said:
Walt, my point was that by their own benchmarks it's hardly a "stunning defeat". My bone of contention was the word choice. I'm not trying to make fp32 or the 3.0 model out to be more important for this generation's lifespan than they are. In fact, I've argued rather strongly that 2.0 will be the inflection point amongst developers for quite some time, for obvious reasons (well, obvious to me).

And fp24 is pp for SM 3.0, so once you decide to support the latter. . . .

Yes, I was quoting you there, John, but really I was moving off of what Cnet said (I don't much care for Cnet's reporting generally, as I often find it both politically and commercially partisan) to the idea that a "stunning defeat" was possible--but not for any of the reasons Cnet presented, as you said...:) I also know your position on 3.0 and 2.0, and agree with it.

BTW, if fp24 is pp for 3.0, I guess that rules out nVidia being able to do pp under its 3.0 implementation (or is fp16 also pp under 3.0)? Anyway, I can't wait to see the first game that requires fp24 precision to render properly--so I think we'll have a wait before seeing that particular aspect of ps3.0 supported by anybody in a game--don't you?
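(To put rough numbers on that precision question -- this is just a sketch assuming the usual mantissa widths of the formats being discussed, fp16 as s10e5, ATi's fp24 as s16e7, and fp32 as s23e8. As I understand it, PS 3.0 raises the full-precision floor from fp24 to fp32 while the _pp hint stays at a minimum of fp16, so fp24 clears the pp bar but not full precision:)

Code:
# Sketch of the relative precision of the shader float formats discussed above.
# Assumed mantissa widths (implicit leading 1 not counted):
#   fp16 (s10e5) = 10 bits, fp24 (s16e7) = 16 bits, fp32 (s23e8) = 23 bits
import math

formats = {"fp16": 10, "fp24": 16, "fp32": 23}

for name, mantissa_bits in formats.items():
    eps = 2.0 ** -mantissa_bits              # spacing between 1.0 and the next representable value
    digits = mantissa_bits * math.log10(2)   # roughly how many decimal digits that buys you
    print(f"{name}: epsilon ~ {eps:.2e}, ~{digits:.1f} decimal digits")

(So roughly three, five and seven decimal digits respectively -- which is why a game that genuinely needed fp24, let alone fp32, to avoid artifacts would be the interesting test case, and why we'll probably be waiting a while to see one.)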
 
Ylandro said:
Now, when a card runs on average 30% faster than its equally priced competitor while using far less power and space,
and its much cheaper little brother is as fast as that competitor, isn't it then logical to call it a stunning defeat?
This is my thinking. The X800XT won C|Net's tests, and it did so with a smaller, more frugal card. A "stunning" defeat? Maybe if you think nV won the last gen (and thus the last umpteen gens since the TNT/2), and if you discount SM3.0 (honestly, not an entirely unwise decision, given how slowly all new features tend to be adopted, though I see SM3.0 offering more immediate benefits simply b/c it seems easier to implement). ;) But a defeat by their limited testing, nonetheless.
 