Another ATI vs NVIDIA review from GamersDepot

5 million sounds a little more reasonable, but it's still a hefty amount. I am in full agreement here with doomtrooper that it's fine to have multiple paths in a game, but benchmarking should force the ARB2 path for all DX9 hardware to show an even workload. Isn't that the point of benchmarking: seeing how well multiple types of hardware do on the same code?
 
Not really; you just want to see how well Doom 3 runs on both cards.

You should, IMO, present data for the NV30 path vs the ARB2 path, then note the quality difference in the conclusions so people can make up their own minds as to whether the higher frame rates outweigh the quality difference.

That's unless you want to use Doom 3 as an indicator of DX9 performance, which you shouldn't really do anyway.
 
So if Doom 3 is run on the ARB2 path, does that mean the NV30 does 32-bit or 16-bit? In either case it is still not the same as the R300, so basically: if it is 16-bit you can say the R300 is faster and better; if it is 32-bit you say the R300 is lower quality but faster. Either way they are still not being compared accurately.

Hmmm... perhaps nVidia chose to make it different from the R300's 24-bit simply so they could make a comparison impossible.
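For a rough sense of what those bit counts actually buy, here is a small comparison sketch; the layouts (fp16 as s10e5, fp24 as s16e7, fp32 as s23e8) are my assumptions about the parts being discussed, not something stated in this thread.

```cpp
// Rough comparison of the relative precision of the three shader formats
// under discussion. Bit layouts are assumptions:
//   fp16: 1 sign, 5 exponent, 10 mantissa bits
//   fp24: 1 sign, 7 exponent, 16 mantissa bits
//   fp32: 1 sign, 8 exponent, 23 mantissa bits
#include <cmath>
#include <cstdio>

int main() {
    struct Format { const char* name; int mantissaBits; };
    const Format formats[] = { {"fp16", 10}, {"fp24", 16}, {"fp32", 23} };

    for (const Format& f : formats) {
        // Worst-case relative rounding error is roughly 2^-(mantissa bits + 1).
        double relError = std::ldexp(1.0, -(f.mantissaBits + 1));
        std::printf("%s: %2d mantissa bits -> relative error ~%.1e\n",
                    f.name, f.mantissaBits, relError);
    }
    return 0;
}
```

The gap between fp16 and fp24 works out to roughly two decimal digits of precision per color component, which is the crux of the "faster vs lower quality" trade-off being argued here.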
 
LeStoffer said:
... not a bad one. Let's face it: if Half-Life 2 were shipped at the end of September, Doom III would still have plenty of room to sell a ton up to the holidays. Now things have changed with HL2 somewhat delayed, and id will still earn very decent bucks by selling the game through those dark get-inside-your-house! winter months (most games are still sold in the Northern Hemisphere, okay?).

The problem, I think, as ID might have seen it, is that the release of two major titles in the space of, say, a single calendar quarter would inevitably result in many direct comparisons between the two titles across the Internet. While I'm sure a great number of people would buy both, I'm also sure that a sizable number would choose between the two at release. I think ID wants to put some space between the releases, so that when D3 ships HL2 will be somewhat "stale" (maybe six months old). Additionally, this will allow ID to avoid the kind of direct comparisons I'm talking about, at least to a far greater extent than would be possible if both titles were released in a single quarter. Along those lines, I think ID wants to be able to look at HL2 to evaluate how their current build of D3 measures up, not only in graphics presentation but in the overall story itself, at a point some months before they ship D3. Putting D3 back to 2004 is simply advantageous for them on many fronts, I think.
 
Sxotty said:
So if Doom 3 is run on the ARB2 path, does that mean the NV30 does 32-bit or 16-bit? In either case it is still not the same as the R300, so basically: if it is 16-bit you can say the R300 is faster and better; if it is 32-bit you say the R300 is lower quality but faster. Either way they are still not being compared accurately.

Hmmm... perhaps nVidia chose to make it different from the R300's 24-bit simply so they could make a comparison impossible.

Or, I would imagine, they were thinking that their fp16 support would give them a performance leg up on competitors running fp24 (which it really hasn't).
 
On the original topic: I see Tom's Hardware has finally cottoned on to this report, decided it is newsworthy, and reported it as such on their main page. Boy, weren't they fast on the uptake.
 
digitalwanderer said:
PaulS said:
Whatever your feelings on the matter, iD need to ensure the game runs acceptably on as many cards as possible. There's no point in them forcing the NV3x cards to run the ARB2 path just to try and make a statement.

This is a game, and it needs to be playable; iD are ensuring that's the case, no matter what they have to do. In the real world, that's how it works.
Well, let's at least be fully honest about it and admit that nVidia is paying iD up the wazoo to ensure that "the game runs acceptably on as many cards as possible"... it's not like JC just wrote nVidia their own path out of the goodness of his heart. :rolleyes:

EDITED BITS: Added "JC"; it makes more sense that way.

Do you know how insulting that statement is? It's not far from accusing a sportsman of match fixing. JC has probably worked long nights and put in a lot of effort just so his customers get the best experience of his art, regardless of which hardware they happen to have. No conspiracy, no cheating, just a bunch of guys working for the love of it so you get a better game.

As for your evidence of a 'bung', that's marketing dollars and standard business for big titles. iD probably had no say or involvement in it.

The few times I talked with John (it was a few years ago, on a Mac OpenGL mailing list) he was always very helpful, and he generally goes out of his way to help.

Maybe I'm just pissed off by that statement because I've been sitting here programming for the last 13 hours for no reason other than to produce the best game possible, just out of the goodness of my heart. I get paid the same if I only do 8 hours a day rather than 12, but still I (and quite a few others in the office) am here trying to improve a game, likely no different from what's happening in the iD office a couple of thousand miles away.
 
dan2097 said:
Not really; you just want to see how well Doom 3 runs on both cards.

You should, IMO, present data for the NV30 path vs the ARB2 path, then note the quality difference in the conclusions so people can make up their own minds as to whether the higher frame rates outweigh the quality difference.

That's unless you want to use Doom 3 as an indicator of DX9 performance, which you shouldn't really do anyway.
This depends on your reason for benchmarking. If you are trying to see which card runs Doom 3 better, then you are correct. If you are trying to ascertain which card is better at running a myriad of games, then it is better to use the same path. There is no guarantee that other game developers will go to the same lengths as iD to level the playing field by providing different paths.
 
DeanoC said:
digitalwanderer said:
PaulS said:
Whatever your feelings on the matter, iD need to ensure the game runs acceptably on as many cards as possible. There's no point in them forcing the NV3x cards to run the ARB2 path just to try and make a statement.

This is a game, and it needs to be playable; iD are ensuring that's the case, no matter what they have to do. In the real world, that's how it works.
Well, let's at least be fully honest about it and admit that nVidia is paying iD up the wazoo to ensure that "the game runs acceptably on as many cards as possible"... it's not like JC just wrote nVidia their own path out of the goodness of his heart. :rolleyes:

EDITED BITS: Added "JC"; it makes more sense that way.

Do you know how insulting that statement is? It's not far from accusing a sportsman of match fixing. JC has probably worked long nights and put in a lot of effort just so his customers get the best experience of his art, regardless of which hardware they happen to have. No conspiracy, no cheating, just a bunch of guys working for the love of it so you get a better game.

As for your evidence of a 'bung', that's marketing dollars and standard business for big titles. iD probably had no say or involvement in it.

The few times I talked with John (it was a few years ago, on a Mac OpenGL mailing list) he was always very helpful, and he generally goes out of his way to help.

Maybe I'm just pissed off by that statement because I've been sitting here programming for the last 13 hours for no reason other than to produce the best game possible, just out of the goodness of my heart. I get paid the same if I only do 8 hours a day rather than 12, but still I (and quite a few others in the office) am here trying to improve a game, likely no different from what's happening in the iD office a couple of thousand miles away.
What's the word I'm looking for... oh yeah, "Meh." :p

JC showed his bias with the so-called Doom 3 "benchmark" that nVidia heralded far and wide as proof of the FX's superiority over ATi, and I pretty much lost all respect for his business ethics at that point.

I really don't give a damn what your feelings on it are, I know what mine are. :)
 
Ack.

The issues with the Doom 3 benchmarks were, AFAICS, access, driver behavior, and presentation. I am personally severely disappointed with the occurrence, and I am indeed surprised that JC allowed his name to be associated with it in the way that it was without making some comment to address these issues.

But I don't see how this supersedes the technical details of Doom 3 development. It reminds me of the "MS and all game developers are blindly ATI biased" argument that disregards that ATI hardware can fairly be said to be more capable for shader-using games, and that the ways offered by MS and games to expose this have independent merit. In this case, the NV3x can fairly be said to excel at Doom 3's particular demands, and what Doom 3 tries to achieve can fairly be said to have IHV-independent merit and logical reasons not to be aimed at "DX 9".

Doom 3 is not a "DX 9" game (please note the use of quotes throughout the "DX" references). It is an approximate "DX 8.1" game, with modifications added to enhance it based on new capabilities offered by cards in excess of "DX 8.1"-level capabilities.

Therefore, a special Doom 3 path for the NV3x makes perfect sense, as the NV3x "sucks" at "DX 9", and performs best at "approximate 'DX 8.1'", maybe including some things "in excess of 'DX 8.1'".

There is also a "DX 8.1" path specifically for ATI's applicable cards. The effort for these paths doesn't seem trivial enough to dismiss due to the above issues, and nVidia requiring a larger "count" of specific paths is just a consequence of their current hardware lacking the gaming shader capability to exceed Doom 3's requirements.

Perhaps considering all of it together, just maybe a case of blind nVidia bias isn't the most complete, or even accurate, explanation at the moment? :-?
 
demalion said:
Perhaps considering all of it together, just maybe a case of blind nVidia bias isn't the most complete, or even accurate, explanation at the moment? :-?
Mebbe not, and I'm not knocking JC's technical prowess at all, nor am I saying he ain't a helpful, friendly, cool/nice fella. All I really have problems with are...

The issues with the Doom 3 benchmarks were, AFAICS, access, driver behavior, and presentation. I am personally severely disappointed with the occurrence, and I am indeed surprised that JC allowed his name to be associated with it in the way that it was without making some comment to address these issues.
Yeah, I'm STILL pissed about that, and I learned another lesson about idols with feet o' clay. :(
 
DeanoC said:
Do you know how insulting that statement is? It's not far from accusing a sportsman of match fixing. JC has probably worked long nights and put in a lot of effort just so his customers get the best experience of his art, regardless of which hardware they happen to have. No conspiracy, no cheating, just a bunch of guys working for the love of it so you get a better game.

As for your evidence of a 'bung', that's marketing dollars and standard business for big titles. iD probably had no say or involvement in it.

The few times I talked with John (it was a few years ago, on a Mac OpenGL mailing list) he was always very helpful, and he generally goes out of his way to help.

Maybe I'm just pissed off by that statement because I've been sitting here programming for the last 13 hours for no reason other than to produce the best game possible, just out of the goodness of my heart. I get paid the same if I only do 8 hours a day rather than 12, but still I (and quite a few others in the office) am here trying to improve a game, likely no different from what's happening in the iD office a couple of thousand miles away.

I am somewhat sympathetic with JC, but at the same time he's got to realize that it's his own portrayal of himself over the last few years as an unalloyed nVidia backer that creates this kind of sentiment. I understand why he did it (he's been an OpenGL evangelist for quite a while, and for a long time nVidia had the best OpenGL drivers going). I think the situation in that regard has changed, though, and I think JC knows it very well. I mean, the fact that he publicly stated last year that "nVidia had nothing competitive" to the R300 to run his software at the time should satisfy even his most ardent critics in that regard.

However, I will say this: if JC is still using the NV3x as his "primary development platform" (and I don't know that he is), then he undoubtedly has a bit of explaining to do, IMO, because the objective facts don't really back him up on that decision (if, indeed, that is still his decision). I mean, I can't conceive of a developer who would have preferred the slings and arrows of the outrageously noisy 5800U to the 9700P last year, but JC made just such a declaration as I recall (even though he said he loathed the fan noise). OTOH, I haven't seen any pro-nVidia comments out of him lately... ;)

The thing that bothered me about ID this year was the nVidia-sponsored D3 "demo" (not for public release) that nVidia commissioned out of ID, which did Lord knows what to the code and whose results were released by selected sites to help prop up the NV35 announcement marketing. People requesting the demo for themselves were actually detoured from ID to nVidia; ID itself wanted as little to do with that software publicly as it could manage. That's the kind of thing ID needs to avoid going forward, IMO.
 
JC (Son of GOD) has been reported (by the Matrox BBz) to have backed out of making an optimised Parhelia path at the very last minute before its release, despite receiving a card and promising to write a path for it. :rolleyes:
 
dw, remind me again why the Doom 3 benchmarks were misleading? No one knows how D3 will play, and JC himself has indicated that the NV30 was faster than the R300, IIRC. I'm still skeptical about the state of ATi's drivers, but my real disappointment was the generally uninformed sites that were invited to test. None of them seemed to know the difference between MQ and HQ (and if they did, they didn't publish it), and THG didn't press nV too hard about its apparently aberrant HQ scores. And don't forget JC said he created his own demo in place of the ones nV provided, as he considered nV's demos potentially playing favorites.

WaltC, The Carmack already explained why he uses an FX card as his primary development platform, and it should be obvious even without his explanation: the FX allows him to code for more codepaths on a single card. FX gives him NV2x, NV3x, and ARB2 on one card; R3xx gives him only R2xx and ARB2. I'm sure he has the (dozen) odd R3xx's in various machines to test performance, but I'd imagine the convenience of one machine to test more codepaths is more important to him.
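To make the codepath point concrete, here is a hypothetical sketch of which of the thread's named back-ends a single board would let you exercise; the function and names are illustrative, not id's actual code.

```cpp
// Hypothetical illustration of the "more codepaths on one card" argument.
// Path names mirror the ones used in this thread; this is not id's code.
#include <string>
#include <vector>

enum class RenderPath { NV2x, NV3x, R2xx, ARB2 };

std::vector<RenderPath> pathsTestableOn(const std::string& board) {
    if (board == "GeForce FX")        // NV3x hardware
        return { RenderPath::NV2x, RenderPath::NV3x, RenderPath::ARB2 };
    if (board == "Radeon 9700/9800")  // R3xx hardware
        return { RenderPath::R2xx, RenderPath::ARB2 };
    return { RenderPath::ARB2 };      // assume ARB2 as the generic fallback
}
```

One FX box covers three of the four paths listed above, which is presumably the development convenience being described.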

Don't be so fast to attribute bad intentions to a coder who has been repeatedly heralded as brilliant and (bluntly) honest. And don't forget that nV was in the driver's seat for quite some time.

Still, I've been rooting for AMD + ATi for a while now. We'll see how HL2 and D3 perform on an Opteron/Athlon64 + 9800/XT setup when they all debut. :)
 
Pete said:
dw, remind me again why the Doom 3 benchmarks were misleading? No one knows how D3 will play, and JC himself has indicated that the NV30 was faster than the R300, IIRC. I'm still skeptical about the state of ATi's drivers, but my real disappointment was the generally uninformed sites that were invited to test. None of them seemed to know the difference between MQ and HQ (and if they did, they didn't publish it), and THG didn't press nV too hard about its apparently aberrant HQ scores. And don't forget JC said he created his own demo in place of the ones nV provided, as he considered nV's demos potentially playing favorites.
I believe JC said the NV30 ran the nVidia-coded path faster than ATi ran the regular one, if we want to be exact.

Also, nVidia had a chance to optimize their drivers for a benchmark that, as you point out, is NOT indicative of the final product... whenever that is coming out.

The whole benchmark was run without even mentioning it to ATi; they actually learned about it on the boards just like we did... that's pretty cheesy and very much blatant favoritism.

No, I don't have any evidence to point to any other discrepancies... as far as I know the benchmark was never given/leaked anywhere besides to nVidia and [H].

Kudos to Johnny-boy on his brilliant programming, but he lost me when he took part in perpetrating fraud on consumers. (Label it how you want; that's what the result was.)
 
digitalwanderer said:
I believe JC said the NV30 ran the nVidia-coded path faster than ATi ran the regular one, if we want to be exact.
Right, I forgot to add that. The important point is that JC said there wasn't a significant IQ difference between the NV30 and ARB2 paths. Obviously, we'll have to see it to believe it, but I still trust the guy.

Kudos to Johnny-boy on his brilliant programming, but he lost me when he took part in perpetrating fraud on consumers. (Label it how you want; that's what the result was.)
Again, I don't think he was the one perpetrating "fraud." It was the sites invited to benchmark the game that were responsible for clarifying the conditions and potential pitfalls of the benchmark, IMO; JC provided a tool, no more, no less. Again, we'll soon see for ourselves whether JC was less than forthright.
 
Pete said:
Right, I forgot to add that. The important point is that JC said there wasn't a significant IQ difference between the NV30 and ARB2 paths. Obviously, we'll have to see it to believe it, but I still trust the guy.

This coming from a guy who is pushing for 64-bit color. :rolleyes: :LOL:

http://www.projectdoom.com/info.html

'We need more bits per color component in our 3D accelerators. I have been pushing for a couple more bits of range for several years now, but I now extend that to wanting full 16 bit floating point colors throughout the graphics pipeline. A sign bit, ten bits of mantissa, and five bits of exponent (possibly trading a bit or two between the mantissa and exponent). Even that isn't all you could want, but it is the rational step.

To accurately model the full human sensible range of light values, you would need more than even a five bit exponent. This wasn't much of an issue even a year ago, when we were happy to just cover the screen a couple times at a high frame rate, but real-time graphics is moving away from just "putting up wallpaper" to calculating complex illumination equations at each pixel. It is not at all unreasonable to consider having twenty textures contribute to the final value of a pixel. Range and precision matter.

A few common responses to this pitch:

"64 bits per pixel??? Are you crazy???" Remember, it is exactly the same relative step as we made from 16 bit to 32 bit, which didn't take all that long. Yes, it will be slower. That's ok. This is an important point: we can't continue to usefully use vastly greater fill rate without an increase in precision. You can always crank the resolution and multisampling anti-aliasing up higher, but that starts to have diminishing returns well before you use up the couple gigatexels of fill rate we are expected to have next year. The cool and interesting things to do with all that fill rate involve many passes composited into less pixels, making precision important.

"Can we just put it in the texture combiners and leave the frame buffer at 32 bits?" No. There are always going to be shade trees that overflow a given number of texture units, and they are going to be the ones that need the extra precision.

"Do we need it in textures as well?" Not for most image textures, but it still needs to be supported for textures that are used as function look-up tables.

There is nothing like actually dealing with problems that were mostly theoretical before... 64 bit pixels. It is The Right Thing to do. Hardware vendors: don't you be the company that is the last to make the transition.'
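As a concrete illustration of the s10e5 layout described above, here is a minimal conversion sketch of my own (not from the .plan or from id); denormals and NaN handling are omitted for brevity.

```cpp
// Packs a 32-bit float into the 16-bit format Carmack describes:
// 1 sign bit, 5 exponent bits (bias 15), 10 mantissa bits per component.
// Rounds toward zero; denormals and NaN are not handled.
#include <cstdint>
#include <cstdio>
#include <cstring>

uint16_t floatToHalf(float value) {
    uint32_t bits;
    std::memcpy(&bits, &value, sizeof bits);              // raw IEEE 754 single
    uint32_t sign     = (bits >> 16) & 0x8000u;           // sign moves to bit 15
    int32_t  exponent = int32_t((bits >> 23) & 0xFFu) - 127 + 15;  // rebias 8 -> 5 bits
    uint32_t mantissa = (bits >> 13) & 0x03FFu;           // keep top 10 mantissa bits
    if (exponent <= 0)  return uint16_t(sign);            // underflow -> signed zero
    if (exponent >= 31) return uint16_t(sign | 0x7C00u);  // overflow  -> infinity
    return uint16_t(sign | (uint32_t(exponent) << 10) | mantissa);
}

int main() {
    std::printf("1.0f  -> 0x%04X\n", unsigned(floatToHalf(1.0f)));   // expect 0x3C00
    std::printf("-2.5f -> 0x%04X\n", unsigned(floatToHalf(-2.5f)));  // expect 0xC100
    return 0;
}
```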

Hypocritical, I say (seems that word can be used a lot in the graphics world), if FX12 is now OK for future precision.
 
You guys are unbelievable, as well as being childish.

JC's .plan file should be appreciated for what it is: snippets of information. If he wants a floating-point frame buffer, he will use the .plan to say he wants it.

It doesn't mean he will get it, nor that he will get it with hardware that runs it fast enough.

I find it incredible, and childish, that DT wrote what he wrote.

If you'll read what JC has said thus far, you'll know what video card he recommends without him saying it. Picking on words he has posted publicly (OT: do you know how careful he is in replying to emails?) is stupid and lacks maturity and foresight.

As well as not knowing precisely what JC's involvement with id Software is about.
 
Childish... riiight.

Here we have a developer who has clearly been pushing for higher precision, but who then turns around and states "well, the NV30 path is lower precision and there is no image loss"... Since we know the NV30, NV31, and NV34 operate mostly at FX12 precision, it is simply hypocritical to state that FX12 is acceptable, especially for their 'flagship', super-duper hyped Doom 3.

He should make up his mind; Doom 3 obviously isn't a good example for precision... but it sure is mentioned a lot in .plan updates.


If you'll read what JC has said thus far, you'll know what video card he recommends without him saying it

Sure, the latest .plan update states he uses a 5800 Ultra as his primary development card... the card that has been cancelled.
That is a snippet of information that was written by him, no one else.
 