Another ATI vs NVIDIA review from gamersdepot

Well, the review sucked; here is why I think it did.

1) All he did was parrot stuff others have already said.
2) He was "in yer face" simply to gain notoriety and make people look at it.

As someone said, he comes across as simply liking to slam people. You see, a lot of reviewers think they seem more credible if they slam someone, as if saying "this sucks, they are pathetic" and whatnot proves you are honest. I don't see why the numbers cannot speak for themselves. The conclusion can state the usual: Nvidia's DX9 performance is sub-standard and the price is higher, therefore there is little reason to buy.

Ending an "review" with a PR message from one IHV is silly. I can just see the letter they sent Nvidia, "we were wondering why your DX9 feature set on your cards is terrible, it is extremely slow and we have been writing an article to describe how it sucks, we were just wondering if you wanted to give us any input hint hint nudge nudge$$$. "

:rolleyes:
 
Dave H said:
JC has said that the 9700 runs the R200 path slightly faster than the R300 path. Both use a shader to implement the lighting model in a single pass; the R200 path uses PS 1.4 functionality via ATI_fragment_shader, while the R300 path uses PS 2.0 functionality via ARB_fragment_program.

Right.

Whether the R200 path is faster due to not using FP (but would this matter on an R300, which has FP24 pipes throughout? Perhaps if FP values are ever read/written to graphics DRAM), due to ATI_fragment_shader being for some reason slightly more efficient than ARB_fragment_program on R300 (or perhaps it was back when Carmack made his .plan update), or due to the R300 path enabling a few extra features, isn't completely settled, I don't think.

R300 uses FP no matter which path you use; this is not what makes the difference in speed or in quality.
FP values are definitely not read/written, as this is a one-pass shader (you just said that), and the artwork is in R8G8B8.

The difference is:
- the R200 path uses normalization cubemaps for vector normalization, and uses a low order approximation to specular power.
- the R300 path uses arithmetic vector normalization (dp3/rsq/mul), and arithmetic specular power calculation (log/mul/exp).
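If it helps, here is roughly what that difference amounts to per pixel, written out in C purely as an illustration (the function names, the cubemap size, and the face-mapping convention are mine, not id's code):

Code:
#include <math.h>

typedef struct { float x, y, z; } vec3;

/* R300 path style: arithmetic normalization, i.e. the dp3/rsq/mul sequence. */
static vec3 normalize_arith(vec3 v)
{
    float d   = v.x*v.x + v.y*v.y + v.z*v.z;    /* dp3: squared length         */
    float inv = 1.0f / sqrtf(d);                /* rsq: reciprocal square root */
    vec3  n   = { v.x*inv, v.y*inv, v.z*inv };  /* mul: scale to unit length   */
    return n;
}

/* R200 path style: a normalization cubemap. Every texel stores a
 * pre-normalized direction, so the shader only does a texture lookup with
 * the unnormalized vector as the coordinate. The dependent, cache-unfriendly
 * read pattern and the 8-bit-per-channel storage are the cost.
 * (The table size and face-selection convention are simplified here.) */
#define CUBE_SIZE 32
static vec3 cube_face[6][CUBE_SIZE][CUBE_SIZE]; /* assumed filled at startup */

static vec3 normalize_cubemap(vec3 v)
{
    float ax = fabsf(v.x), ay = fabsf(v.y), az = fabsf(v.z);
    int face; float m, u, w;
    if (ax >= ay && ax >= az) { face = v.x > 0 ? 0 : 1; m = ax; u = v.y; w = v.z; }
    else if (ay >= az)        { face = v.y > 0 ? 2 : 3; m = ay; u = v.x; w = v.z; }
    else                      { face = v.z > 0 ? 4 : 5; m = az; u = v.x; w = v.y; }
    int s = (int)((u / m * 0.5f + 0.5f) * (CUBE_SIZE - 1));
    int t = (int)((w / m * 0.5f + 0.5f) * (CUBE_SIZE - 1));
    return cube_face[face][t][s];
}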

The R200 path is likely bandwidth-limited (the cubemap lookups can be stressful because of the non-standard read pattern).
The R300 path is likely shader-computation-limited.
The fact that the difference is small only shows how well balanced the R300 really is.

Clearly the reason to prefer the R300 path to the R200 is IQ; I believe at least a large part of the IQ benefit is from the use of FP color, but, again, perhaps other features are involved as well.

No, not the FP color (you have that in the R200 path too), but the availability of the rsq/rcp/exp/log instructions.
Those are hard to approximate, and the approximations are imprecise.
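To put some numbers behind that: with log/mul/exp you get the specular power exactly (pow(x, n) = exp(n * log(x))), while without them you are stuck with something like repeated squaring. A rough C sketch of my own, not Doom3 code:

Code:
#include <math.h>

/* R300 path style: exact specular power, since log/mul/exp are available. */
static float specular_exact(float n_dot_h, float power)
{
    float x = fmaxf(n_dot_h, 1e-6f);   /* clamp so log stays defined */
    return expf(power * logf(x));      /* log, mul, exp              */
}

/* R200 path style: without log/exp the exponent has to be faked with a
 * low-order term such as repeated squaring. The choice of x^8 below is
 * illustrative only; the point is that the curve is noticeably different
 * from the exponent you actually wanted. */
static float specular_approx(float n_dot_h)
{
    float x  = fmaxf(n_dot_h, 0.0f);
    float x2 = x * x;
    float x4 = x2 * x2;
    return x4 * x4;                    /* x^8 as a cheap stand-in */
}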

Meanwhile, for NV30-34, clearly the use of FX12 is speeding things up greatly, as one would expect given their architecture. Just as clearly, the use of FP color must have some positive IQ benefit, because otherwise the NV30 path would be entirely FX12, which it is certainly not.

What is not clear is whether the NV30 path uses cubemaps and what kind of approximations are used. I wouldn't be surprised if using the FP unit for the rsq in normalization were faster than using cubemaps (if everything else is using the FX combiners).
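For what it's worth, the IQ side of FX12 is easy to picture: it is just a narrow fixed-point format. A rough illustration in C (the exact layout I'm assuming here, roughly a [-2, 2) range in steps of 1/1024, is my reading of the format, not something stated in this thread):

Code:
#include <math.h>

/* Snap a value to an assumed FX12 grid: fixed point, ~[-2, 2), step 1/1024.
 * FP16/FP32 values keep far more precision over the same range, which is
 * where the image-quality difference comes from. */
static float quantize_fx12(float x)
{
    if (x < -2.0f)             x = -2.0f;
    if (x > 2047.0f / 1024.0f) x = 2047.0f / 1024.0f;
    return roundf(x * 1024.0f) / 1024.0f;
}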
 
Here is the verbatim exchange:

GD: John, we've found that NVIDIA hardware seems to come to a crawl whenever Pixel Shader's are involved, namely PS 2.0..

Have you witnessed any of this while testing under the Doom3 environment?

"Yes. NV30 class hardware can run the ARB2 path that uses ARB_fragment_program, but it is very slow, which is why I have a separate NV30 back end that uses NV_fragment_program to specify most of the operations as 12 or 16 bit instead of 32 bit."

While Carmack doesn't actually use the word "crawl", neither does he object to the terminology, and his own words are "it is very slow." So, I don't think using the word "crawl" is that inappropriate...;)
 
Here's a quote from the Tom's Hardware Doom3 benchmarks. In all fairness, it was the only site (with the benchmarks) to address the rendering-path issue, and it got this response from Nvidia.

Due to a bug, ARB2 currently does not work with NVIDIA's DX9 cards when using the preview version of the Detonator FX driver. According to NVIDIA, ARB2 performance with the final driver should be identical to that of the NV30 code. So a lot has happened since John Carmack's latest .plan update (www.bluesnews.com).

It seems that things are back to how they were at the .plan update! How did that happen? :)

link
http://www20.tomshardware.com/graphic/20030512/geforce_fx_5900-10.htm
 
Question is, is it really a bug ... or are they just going to do what Carmack does by hand now in the driver?
 
MfA said:
Question is, is it really a bug ... or are they just going to do what Carmack does by hand now in the driver?

As I said, I had a meeting at ECTS with a number of NVIDIA's dev rel guys. I was talking about the Doom3 paths and said "seeing as id will be using the NV fragment program extensions, I assume you won't need to optimise the shaders yourselves" - their reply: "Oh, no, we've optimised them".
 
DaveBaumann said:
MfA said:
Question is, is it really a bug ... or are they just going to do what Carmack does by hand now in the driver?

As I said, I had a meeting at ECTS with a number of NVIDIA's dev rel guys. I was talking about the Doom3 paths and said "seeing as id will be using the NV fragment program extensions, I assume you won't need to optimise the shaders yourselves" - their reply: "Oh, no, we've optimised them".

Doom3 is the GFFX killer app, and should have been here a long time ago. It's all but been coded with a special GFFX-only path, so I expect to see every honest and dishonest "optimisation" you can think of from Nvidia. D3 has to be good on GFFX, because HL2 will make GFFX cards look terrible. Without D3, Nvidia's GFFX cards have nothing to show.

It's worth noting that what we see on GFFX cards running D3 in terms of IQ and performance will be totally unrepresentative of almost every other game. AFAIK, there is no other major game that has had the extreme customisation from both Nvidia and the developer in order to make it run this well.
 
Anyone here not think we'll see another Doom3 benchmark exclusive when the halflife 2 benchmark is released? A way for Nvidia to try to save face.
 
DaveBaumann said:
Mr Perez was suggesting that we might not see a Doom3 benchmark.
Do you mean, "we might not see a Doom3 benchmark before it's released" or "we might not see a Doom3 benchmark ever"?!?
 
Myrmecophagavir said:
DaveBaumann said:
Err... :eek: :oops: ?

I second that :oops:

Dave, you're saying that iD won't have benchmarking capability in the Doom3 game? (Makes no sense to me).

Or are you saying that NVIDIA won't ever sponsor any more Doom3 benchmarks, ever? Makes a lot of sense to me, especially considering that Carmack probably did not like how the whole NV30 benchmark thing ended up going down... meaning he won't want ANY IHV sponsoring Doom3 benchmarks going forward...
 
Joe DeFuria said:
Dave, you're saying that iD won't have benchmarking capability in the Doom3 game? (Makes no sense to me).

Or are you saying that NVIDIA won't ever sponsor any more Doom3 benchmarks, ever?
What's the difference anymore? nVidia has got iD on a tight enough leash that it's all the same thing, isn't it? :(
 
jjayb said:
Anyone here not think we'll see another Doom3 benchmark exclusive when the halflife 2 benchmark is released? A way for Nvidia to try to save face.

Yes. Since Doom III has slipped into 2004, it could be troublesome for nVidia to co-market such a demo. They could be torn between marketing a top-of-the-line NV40 with excellent image quality and very good speed (ARB2 path) or an NV3x with excellent speed and fairly good image quality for its price (NV30 path).

They would not have a sure win against ATI during a time when they might want to sell off their stock of NV3x parts, or when they would be shifting their PR focus from raw speed to speed plus image quality (I presume R420 launches after NV40).
 
Joe DeFuria said:
Dave, you're saying that iD won't have benchmarking capability in the Doom3 game? (Makes no sense to me).

My understanding was that there is the possibility that Doom3 will not have a benchmarking facility. And that does actually make some sense to me as well.
 
digitalwanderer said:
Or are you saying that NVIDIA won't ever sponsor any more Doom3 benchmarks, ever?
What's the difference anymore? nVidia has got iD on a tight enough leash that it's all the same thing, isn't it? :(

And to move away from that type of remark is exactly why it makes sense.
 
DaveBaumann said:
My understanding was that there is the possibility that Doom3 will not have a benchmarking facility.

Wow...that's...disturbing to me.

And that does actually make some sense to me as well.

Care to elaborate? Is it more of what you implied above... that iD just might not want its games being used as a tool for p*ssing matches?
 
DaveBaumann said:
As I said, I had a meeting at ECTS with a number of NVIDIA's dev rel guys. I was talking about the Doom3 paths and said "seeing as id will be using the NV fragment program extensions, I assume you won't need to optimise the shaders yourselves" - their reply: "Oh, no, we've optimised them".

Heh-Heh...:) Don't doubt that at all....

My own theory about D3 slipping to next year is that ID didn't want to go head-to-head with HL2 this fall. Probably, too, they looked at some of the HL2 movies and decided to take six extra months to polish things up a bit... just my opinion, of course.
 
DaveBaumann said:
My understanding was that there is the possibility that Doom3 will not have a benchmarking facility.

Wow... That would break a long tradition over at ID.
I guess it's because NV was scared it would show their boards in a "bad light".
 