Carmack on Geforce FX and HL2

Check it out:
http://english.bonusweb.cz/interviews/carmackgfx.html
"No doubt you heard about GeForce FX fiasco in Half-Life 2. In your opinion, are these results representative for future DX9 games (including Doom III) or is it just a special case of HL2 code preferring ATI features, as NVIDIA suggests?" (Bonusweb)

"Unfortunately, it will probably be representative of most DX9 games. Doom has a custom back end that uses the lower precisions on the GF-FX, but when you run it with standard fragment programs just like ATI, it is a lot slower. The precision doesn't really matter to Doom, but that won't be a reasonable option in future games designed around DX9 level hardware as a minimum spec." (Carmack)
 
Wow, you guys are a quick draw. Figures, since WaltC practically sleeps in the Rage3D forums. I wonder which thread he sleeps in? ;)
 
Luminescent said:
Wow, you guys are a quick draw. Figures, since WaltC practically sleeps in the Rage3D forums. I wonder which thread he sleeps in? ;)

Heh...:) Believe it or not, I saw that item on Blue's News...:) And then I came over here and checked out several forums and finally concluded (to my surprise) that no one had posted it--so I posted it.....

Actually, I've spent most of my available forum time over here for the last couple of months--no reflection on R3d, mind you, at all--just a matter of priorities and time...:D
 
Yes, it's just that JC has never stated it so plainly that even the uninitiated could realize the truth - does that sound like him?

I'd love for him to have said it, but he has always been guarded about saying anything bad about NVidia - this is so direct, are we sure it's an official quote?
 
g__day said:
I'd love for him to have said it, but he has always been guarded about saying anything bad about NVidia - this is so direct, are we sure it's an official quote?

Never said anything "bad" about nVidia?

How about:

Carmack said:
Do not buy a GeForce4-MX for Doom.

Nvidia has really made a mess of the naming conventions here. I always
thought it was bad enough that GF2 was just a speed bumped GF1, while GF3 had
significant architectural improvements over GF2. I expected GF4 to be the
speed bumped GF3, but calling the NV17 GF4-MX really sucks.

Carmack calls it as he sees it.

That is both good and bad, btw, because how "Carmack sees it" is usually restricted to how it impacts his own games and projects, not others'. So people tend to read his statements and assume they apply across the board. (NV30 is a classic example: people don't understand that DX9 games can't necessarily just add a special NV30 path and be fine, the way Doom 3 can.)

The quote sounds official to me.

Even if it isn't, it's getting enough press that I'd expect Carmack to denounce the quote if he did not, in fact, make it.
 
I believe I recall someone mentioning that John Carmack initially stated that with NV35 he should be able to run the ARB path as fast as ATI - is this the case and if so, does anyone have a link to it?

Cheers
 
Due to a bug, ARB2 currently does not work with NVIDIA's DX9 cards when using the preview version of the Detonator FX driver. According to NVIDIA, ARB2 performance with the final driver should be identical to that of the NV30 code. So a lot has happened since John Carmack's latest .plan update (www.bluesnews.com). The questions about floating point shader precision will soon be answered, as well.

You might be thinking of this, said by Nvidia via Tom's!

http://www6.tomshardware.com/graphic/20030512/geforce_fx_5900-10.html#doom_iii_special_preview
 
Vortigern_red said:
Due to a bug, ARB2 currently does not work with NVIDIA's DX9 cards when using the preview version of the Detonator FX driver. According to NVIDIA, ARB2 performance with the final driver should be identical to that of the NV30 code. So a lot has happened since John Carmack's latest .plan update (www.bluesnews.com). The questions about floating point shader precision will soon be answered, as well.

You might be thinking of this, said by Nvidia via Tom's!

http://www6.tomshardware.com/graphic/20030512/geforce_fx_5900-10.html#doom_iii_special_preview

LOL... in hindsight that's pretty funny. Of course the ARB2 performance will be *identical* to that of the NV30 path... because they'll just replace all the shaders in the driver and make ARB2 = the NV30 path. Funny how past comments look through the lens of Nvidia's conduct this last year...
 
The Carmack has Spoken

 
DaveBaumann said:
I believe I recall someone mentioning that John Carmack initially stated that with NV35 he should be able to run the ARB path as fast as ATI - is this the case and if so, does anyone have a link to it?

This is the closest he came to that; he basically repeated what nVidia had assured him:

The R200 path has a slight speed advantage over the ARB2 path on the R300, but only by a small margin, so it defaults to using the ARB2 path for the quality improvements. The NV30 runs the ARB2 path MUCH slower than the NV30 path. Half the speed at the moment. This is unfortunate, because when you do an exact, apples-to-apples comparison using exactly the same API, the R300 looks twice as fast, but when you use the vendor-specific paths, the NV30 wins.

The reason for this is that ATI does everything at high precision all the time, while Nvidia internally supports three different precisions with different performances. To make it even more complicated, the exact precision that ATI uses is in between the floating point precisions offered by Nvidia, so when Nvidia runs fragment programs, they are at a higher precision than ATI's, which is some justification for the slower speed. Nvidia assures me that there is a lot of room for improving the fragment program performance with improved driver compiler technology.

http://www.webdog.org/cgi-bin/finger.plm?id=1
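For context on the precisions Carmack is comparing: NV3x offered FP16 (10 explicit mantissa bits) and FP32 (23 bits), while ATI's R300 ran everything at FP24 (16 bits), which is why he says ATI's precision sits "in between" Nvidia's. A rough sketch of the rounding-error gap between the three formats (the `quantize` helper is my own illustration; it ignores each format's exponent-range limits):

```python
import math

def quantize(x, mantissa_bits):
    """Round x to a value representable with the given number of explicit
    mantissa bits (exponent-range limits are ignored for simplicity)."""
    if x == 0.0:
        return 0.0
    exp = math.floor(math.log2(abs(x)))          # exponent of the leading bit
    scale = 2.0 ** (exp - mantissa_bits)         # weight of the last kept bit
    return round(x / scale) * scale

# Explicit mantissa bits for the formats discussed in the thread:
# NV3x FP16 -> 10, ATI R300 FP24 -> 16, FP32 -> 23.
for name, bits in [("FP16", 10), ("FP24", 16), ("FP32", 23)]:
    err = abs(quantize(math.pi, bits) - math.pi)
    print(f"{name}: rounding error for pi ~ {err:.2e}")
```

Each extra mantissa bit roughly halves the rounding error, so FP16 is orders of magnitude coarser than FP24 or FP32, which is where the speed/quality trade-off in the vendor paths comes from.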
 
Deathlike2 said:
According to NVIDIA, ARB2 performance with the final driver should be identical to that of the NV30 code

Hm... doesn't that tell everyone something's wrong here?

No, because at the time NV35 was said to have twice the shading speed of NV30. So people believed the problems might have been fixed. It later became clear that this wasn't the case.
 
Deathlike2 said:
According to NVIDIA, ARB2 performance with the final driver should be identical to that of the NV30 code

Hm... doesn't that tell everyone something's wrong here?

Yes, the quote is old, and the phrase "final driver" should be the tip off...:)
 
No, because at the time NV35 was said to have twice the shading speed of NV30. So people believed the problems might have been fixed. It later became clear that this wasn't the case.

I think you missed my point, Stealth...

It's not that the NV35 would be 2x as fast...

It's the idea the ARB code would be "as fast" as the NV30 code...

The ARB code technically couldn't be faster than the NV30 specific code...

The NV30-specific code may amount to forcing lower precision, while the ARB code SHOULD be rendering at full precision... (and the NV3X hardware is much weaker when running full FP32 on all operations)...

What I am suggesting is that the ARB code (on the NV3X series) will be the EQUIVALENT of the NV30 code (forcing lower precision on ALL operations)
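To illustrate that point, here is a rough sketch (the helper names are my own, and real shader hardware is more involved): forcing every intermediate operation down to partial precision, rather than quantizing only at the end, drifts the result away from the full-precision answer, which is why "ARB2 as fast as the NV30 path" would imply ARB2 quietly running at the NV30 path's precision.

```python
import math

def quantize(x, mantissa_bits=10):
    """Round x to roughly FP16 precision (10 explicit mantissa bits);
    exponent-range limits are ignored for simplicity."""
    if x == 0.0:
        return 0.0
    exp = math.floor(math.log2(abs(x)))
    scale = 2.0 ** (exp - mantissa_bits)
    return round(x / scale) * scale

def dot_low_precision(a, b):
    """Dot product with EVERY intermediate forced to low precision,
    mimicking a path that runs all operations at partial precision."""
    acc = 0.0
    for x, y in zip(a, b):
        acc = quantize(acc + quantize(x * y))
    return acc

a = [0.123 + 0.001 * i for i in range(50)]
b = [0.456 - 0.002 * i for i in range(50)]
full = sum(x * y for x, y in zip(a, b))   # full double precision throughout
low = dot_low_precision(a, b)             # partial precision on every op
print(f"full precision: {full:.6f}, all ops quantized: {low:.6f}")
```

The per-operation error is tiny, but it accumulates across a long chain of operations, which is the quality cost Deathlike2 is pointing at.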
 