Carmack's comments on NV30 vs R300, DOOM developments

Reverend said:
Hmm... while informative, I'm shaking my head (like you guys here) with new questions raised pertaining to JC's .plan

Will shoot him an email and hope he replies (dang, hope he remembers "Reverend"!).

Please do. I forgot to point out that while NV30's issues with ARB_fragment_program (ARB2) are mitigated in Doom III because of the NV30 path/backend, ARB_fragment_program could pose similar problems for other OpenGL engines. :?
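To make the "path/backend" point concrete for readers who haven't seen how engines end up on different render paths: below is a minimal, purely hypothetical sketch of the kind of extension-based backend selection an OpenGL engine might do. The extension strings are real OpenGL extension names; the function and path labels are made up for illustration and are not Doom III's actual code.

```python
# Hypothetical sketch: pick a fragment-shading backend from the driver's
# advertised extension string, preferring a vendor-specific path when one
# exists, falling back to the cross-vendor ARB path, then to a legacy path.

def pick_fragment_path(extensions: str) -> str:
    """extensions: space-separated string as returned by glGetString(GL_EXTENSIONS)."""
    ext = set(extensions.split())
    if "GL_NV_fragment_program" in ext:
        return "NV30"   # vendor-specific backend (fixed/half/full precision)
    if "GL_ARB_fragment_program" in ext:
        return "ARB2"   # cross-vendor high-quality backend
    return "NV10"       # register-combiner / legacy fallback
```

On an R300 exposing only the ARB extension this sketch picks "ARB2"; on an NV30 it picks the vendor path, which is why an engine without such a vendor backend would be stuck with whatever ARB2 performance the driver delivers.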
 
You're drawing conclusions as to his motives and feelings, and clearly none of what you're drawing is stated in his words which is all we have to go on.

Surely, your theory is a possibility, but not a 'certainly'.
 
antlers4 said:
Since I was talking about emulating existing functionality, and since DX9 wasn't released for months after the R300 was released, the availability of DX9 drivers is hardly relevant here, is it? :D ATI drivers were plenty buggy initially, of course, but they worked fine in many games and cleaned up quickly.
They could have released drivers supporting the functionality in OpenGL.

But why does it need 125M transistors and excessive clock speeds to keep up? I argue that it is because NVidia didn't make the bold choice that ATI made to run everything through their FP pipe. NVidia made this decision because they didn't believe that they could, or needed to, make their FP pipe run fast enough. And if you don't try, you don't succeed.
I don't think so. I think that drivers will very significantly improve the performance in later revisions.
 
You're drawing conclusions as to his motives and feelings, and clearly none of what you're drawing is stated in his words which is all we have to go on.

Surely, your theory is a possibility, but not a 'certainly'.

Surely, I didn't say it was a certainty. Why do you insist on arguing against points I don't make? Talk about going "on words that people didn't say."

Surely, I did say that I don't see how you can call my "theory" far-reaching, considering that you have not supplied any alternative other than "it's not the only possible reason."

I said that I see no other explanation. That doesn't mean other explanations don't exist. I just don't see them.

The implication being.....YOU try and provide me with some other plausible explanation, considering you assert that my explanation is far-reaching. Again, I ASK for "some other reason" why Carmack would support an NV30 path, if he has confidence that FX's ARB2 performance and quality would be "up to snuff" by the time Doom3 ships.
 
Joe DeFuria said:
So certainly, Carmack would drop the NV30 path altogether if he felt it would offer no significant performance or quality increase over the ARB2 path.

I see no valid reason for still supporting the NV30 path.

I'm calling your theory far-reaching because you're basing speculations on speculations. And the above looks like you've discounted every other reason, including the possibilities I've presented in passing (that he, in fact, is not "supporting" the vendor-specific path. Goodness, he even says he's not supporting the vendor-specific path any more).
 
I ASK for "some other reason" why Carmack would support an NV30 path, if he has confidence that FX's ARB2 performance and quality would be "up to snuff" by the time Doom3 ships.

There's no reason to suppose that the ARB2 path will ever be as fast as NV30-specific code on NVidia hardware. NVidia has as much as said so; it's part of their justification for Cg.
 
antlers4 said:
I ASK for "some other reason" why Carmack would support an NV30 path, if he has confidence that FX's ARB2 performance and quality would be "up to snuff" by the time Doom3 ships.

There's no reason to suppose that the ARB2 path will ever be as fast as NV30-specific code on NVidia hardware. NVidia has as much as said so; it's part of their justification for Cg.

Eh? The "ARB2 path" means he's using the ARB2 method of specifying the shader. There's no particular reason it can't be compiled to be as efficient as what Cg can do.
 
I'm calling your theory far-reaching because you're basing speculations on speculations.

Huh?

Where is there speculation in the fact that the performance of the ARB2 path on NV30 is 50% of the NV30 path?

This is speculation based on fact.

And the above looks like you've discounted every other reason, including the possibilities I've presented in passing

Yes, I just said that I cannot think of any other possibility. And I'm waiting to hear of some other valid one.

(That he, in fact, is not "supporting" the vendor specific path. Why, he even says he's not supporting the vendor specific path any more. I do declare! What a revelation!).

And I REBUTTED that "possibility" by using Carmack's words to prove that vendor-specific vertex programs are being DROPPED. That is, it is in fact a PROOF, unless you "don't just read" Carmack's words and throw in speculation yourself.

I repeat. CARMACK SAID:

but the final decision was that we are going to require a modern driver for the game to run in the advanced modes. Older drivers can still fall back to either the ARB or NV10 paths.

HE SAID REQUIRE. Surely, you're not going to "SPECULATE" that Carmack didn't really mean "require", when he said require, but that he meant "vendor specific vertex programs will still be in there and usable, just that we won't actively support it."
 
RussSchultz said:
Eh? The "ARB2 path" means he's using the ARB2 method of specifying the shader. There's no particular reason it can't be compiled to be as efficient as what Cg can do.

Except Cg will theoretically know more about what the shader is "supposed" to be doing than can be expressed in the ARB2 assembly language; information is lost in compiling. (I'd have to check up on what you can do with ARB_fragment_program compared with NV_fragment_program before I talk more about this, though...)
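A small illustration of both sides of this exchange, as a hedged sketch rather than anything a real driver does: a driver can run a peephole pass over ARB2-style assembly and recover some structure (e.g. fusing a MUL followed by a dependent ADD into a single MAD), but only when the pattern is still visible in the instruction stream; structure the high-level compiler knew but didn't encode is gone. The opcode mnemonics follow ARB_fragment_program; the pass itself is a toy (a real one would also verify the intermediate register is dead afterwards).

```python
# Toy peephole pass: fuse a MUL whose result feeds the very next ADD
# into a single MAD instruction. Instructions are (opcode, dest, *srcs).

def fuse_mad(instrs):
    out = []
    for ins in instrs:
        if (ins[0] == "ADD" and out and out[-1][0] == "MUL"
                and out[-1][1] in ins[2:]):          # ADD reads the MUL result
            mul = out.pop()
            # the ADD operand that is NOT the MUL result becomes the addend
            other = ins[3] if ins[2] == mul[1] else ins[2]
            out.append(("MAD", ins[1], mul[2], mul[3], other))
        else:
            out.append(ins)
    return out
```

So `MUL r0, t0, c0` followed by `ADD r1, r0, c1` becomes `MAD r1, t0, c0, c1`, while two unrelated instructions pass through untouched; that's the sense in which the assembly both does and doesn't limit what the backend can optimize.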
 
Sigh. What is in the code now doesn't mean that's what will be in the code when the game ships. And the inverse of that is also true--simply because he's saying Doom WON'T support it doesn't mean it isn't in the code base right now.

It may very well work better RIGHT NOW using the NV30 extensions, but he's said quite clearly he's dropping vendor-specific instructions, which we can assume include the NV30 ones.

But regardless, none of that says he has no confidence ARB2 will be up to snuff for the NV30, which is your conclusion that is going well beyond what is written.
 
Does anyone else find it strange that this is indeed a compiling problem? With all of NVidia's expertise in compiler technology from introducing Cg, you'd think they could at least feed the PS 2.0 instructions correctly to the chip. ARB2 instructions should also be a breeze. AFAIK, his ARB2 code path doesn't include any HLSL, so the translation to NV30's machine code should be very straightforward.

All this just doesn't make any sense. I don't think it's very likely that the early drivers are so pathetic at putting shader code on the chip, especially when JC was bragging about how GF3 was so flawless even in beta stages.

My best guess is there is something wrong with the current revision of the chip. Maybe half the shaders are disabled or something.
 
Yes, and here comes the typical influx of "nVidia sympathizers" once they feel "one of their kind" is in trouble....

[edit]...to be clear...this is directed at Diespinnerz
 
Diespinnerz said:
There goes that thread thanks as usual Joe. Your "right" as usual, although what your "right" about I have no idea...

I'm sorry, Diespinnerz, but it's you that seems to be having problems here. The last couple of posts you've made are only about personalities - and are negative. And, you've only made 17 posts total. Joe & Russ may disagree, but they both have that right. Either start adding to these threads, or just stop posting, period.
 
Sigh...

Russ, we're going full circle here. And you're still not giving any alternatives other than "That might not be the case."

What is in the code now doesn't mean that's what will be in the code when the game ships.

Agreed. Has no bearing on my argument though, so I don't see the point.

And the inverse of that is also true--simply because he's saying Doom WON'T support it doesn't mean it isn't in the code base right now.

Again, agreed. And again, has no bearing on the argument.

It may very well work better RIGHT NOW using the NV30 extensions, but he's said quite clearly he's dropping vendor-specific instructions, which we can assume include the NV30 ones.

Again...agreed. And again, has no bearing on the argument.

But regardless, none of that says he has no confidence ARB2 will be up to snuff for the NV30, which is your conclusion that is going well beyond what is written.

Of course my conclusion goes beyond what is written. That's what speculation is, isn't it?

Please, tell me again (actually, tell me for the first time) WHY Carmack would be "currently supporting" and developing the NV30 pipeline, IF HE IS CONFIDENT that he WILL BE DROPPING IT.

That just makes absolutely no sense to me.

AGAIN, I am NOT SAYING that the FX ARB2 pipeline won't be up to snuff. I'm NOT saying that the NV30 path won't be dropped in the final product. I'm asking for any reason WHY there WOULD BE a "currently supported" NV30 path, IF Carmack believed it would be ultimately dropped?
 
Mintmaster said:
My best guess is there is something wrong with the current revision of the chip. Maybe half the shaders are disabled or something.

I don't think that has to be the case. I just think NVidia designed the chip thinking that a certain FP shader performance was adequate, and compared to the R300 it is looking less adequate (in some circumstances; I'm not forgetting that the NV30-specific path runs fine).

As I argued a couple of pages back, the R300 runs everything through its FP24 path, so that path has to be fast. NVidia chose to maintain an integer path for speed, so it didn't have to make its FP path as fast.
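The precision trade-off behind this argument can be sketched numerically. Assuming the commonly cited mantissa widths (10 explicit bits for FP16, 16 for R300's FP24, 23 for FP32), the toy rounding function below shows how error shrinks as the mantissa widens; it deliberately ignores exponent range, denormals, and NV30's separate fixed-point format, so it's an illustration of mantissa precision only, not a model of either chip.

```python
import math

def quantize(x: float, mantissa_bits: int) -> float:
    """Round x to a float with the given number of explicit mantissa bits.
    Ignores exponent-range limits and denormals - precision only."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                  # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2 ** (mantissa_bits + 1)      # +1 for the implicit leading bit
    return math.ldexp(round(m * scale) / scale, e)

# Example: representing 1/3 at each precision
for bits, name in [(10, "FP16"), (16, "FP24"), (23, "FP32")]:
    print(name, abs(quantize(1/3, bits) - 1/3))
```

The printed errors drop by roughly a factor of 2 per extra mantissa bit, which is the quantitative sense in which running *everything* through FP24 is a reasonable single compromise, while a chip that keeps a low-precision path for speed can afford a slower full-precision path.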
 
Joe DeFuria said:
AGAIN, I am NOT SAYING that the FX ARB2 pipeline won't be up to snuff. I'm NOT saying that the NV30 path won't be dropped in the final product. I'm asking for any reason WHY there WOULD BE a "currently supported" NV30 path, IF Carmack believed it would be ultimately dropped?

Because maybe he wanted to see what the capabilities of the NV30-specific extensions are? Because he was curious? Because, like any good engineer, he wanted to explore the capabilities of a piece of hardware that he was excited about? :rolleyes:
 