Which path will NV40 use in Doom3?

I asked him directly about the validity of the quote and for some more details, and he just answered me this:

The quote is from me. Nvidia probably IS "cheating" to some degree,
recognizing the Doom shaders and substituting optimized ones, because I
have found that making some innocuous changes causes the performance to
drop all the way back down to the levels it used to run at. I do set the
precision hint to allow them to use 16 bit floating point for everything,
which gets them back in the ballpark of the R300 cards, but generally still
a bit lower.

Removing a back end driver path is valuable to me, so I don't complain
about the optimization. Keeping Nvidia focused on the ARB standard paths
instead of their vendor specific paths is a Good Thing.

The bottom line is that the ATI R300 class systems will generally run a
typical random fragment program that you would write faster than early NV30
class systems, although it is very easy to run into the implementation
limits on the R300 when you are experimenting. Later NV30 class cards are
faster (I have not done head to head comparisons with non-Doom code), and
the NV40 runs everything really fast.

Feel free to post these comments.

John Carmack

But it's a chicken-and-egg problem: now you have to believe me that this really comes from John Carmack :D

I just hope that this will clear things up and that it will stop people from implying that he is biased :rolleyes:
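For reference, the "precision hint" Carmack mentions is the fastest-precision option defined by the ARB_fragment_program extension. Below is a minimal sketch of what such a program looks like; the shader text is purely illustrative, not id's actual Doom 3 shader, and in a real application the string would be uploaded with glProgramStringARB:

```python
# Illustrative ARB_fragment_program text -- NOT id's actual shader.
# The OPTION line is the precision hint: it tells the driver it may run
# the whole program at reduced precision (e.g. FP16 on NV3x) instead of
# full FP32, which is exactly the trade-off Carmack describes.
program = "\n".join([
    "!!ARBfp1.0",
    "OPTION ARB_precision_hint_fastest;",
    "TEMP c;",
    "TEX c, fragment.texcoord[0], texture[0], 2D;",
    "MUL result.color, c, fragment.color;",
    "END",
])
print(program)
```

Without that single OPTION line the spec obliges the implementation to honour full precision, which matches Carmack's observation that the hint is what "gets them back in the ballpark of the R300 cards".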
 
Zeross said:
I just hope that this will clear things up and that it will stop people from implying that he is biased :rolleyes:
Great quote and info, but I'm not getting over Johnny-boy's Doom3/nV30 benchmarking fiasco quite that easily.

He's still biased, at least IMHO.
 
Zeross said:
I asked him directly about the validity of the quote and for some more details, and he just answered me this:

The quote is from me. Nvidia probably IS "cheating" to some degree,
recognizing the Doom shaders and substituting optimized ones, because I
have found that making some innocuous changes causes the performance to
drop all the way back down to the levels it used to run at.

Well, that pretty much confirms my earlier suspicions. Again, what's unfortunate about this is people believing that NV3x and R3xx running the ARB2 paths are doing "apples to apples" work, because they are using the "same path."

And it just reinforces the notion that NV3x performance is highly dependent on either

1) The developer having the time / resources to code specifically for NV3x hardware

or

2) nVidia believing that the app is important enough (in other words, that it will be used for benchmarks) to write app-specific code (shader replacement, etc.) on a game-by-game basis.
 
The way I see it, at least now the early NV30 cards' poor performance is something that Nvidia will have to deal with and prop up, rather than forcing the developer to do it for them.

If anything has caused debates over the last year or so to do with D3, it's the special path for the NV30. Now at least he can hold his hands up and say "What special path?" It almost seems as though he might be trying to distance himself from all the accusations of favouritism.

Mark
 
Zeross said:
I asked him directly about the validity of the quote and for some more details, and he just answered me this:

The quote is from me. Nvidia probably IS "cheating" to some degree,
recognizing the Doom shaders and substituting optimized ones, because I
have found that making some innocuous changes causes the performance to
drop all the way back down to the levels it used to run at. I do set the
precision hint to allow them to use 16 bit floating point for everything,
which gets them back in the ballpark of the R300 cards, but generally still
a bit lower.

Removing a back end driver path is valuable to me, so I don't complain
about the optimization. Keeping Nvidia focused on the ARB standard paths
instead of their vendor specific paths is a Good Thing.

The bottom line is that the ATI R300 class systems will generally run a
typical random fragment program that you would write faster than early NV30
class systems, although it is very easy to run into the implementation
limits on the R300 when you are experimenting. Later NV30 class cards are
faster (I have not done head to head comparisons with non-Doom code), and
the NV40 runs everything really fast.

Feel free to post these comments.

John Carmack

But it's a chicken-and-egg problem: now you have to believe me that this really comes from John Carmack :D

I just hope that this will clear things up and that it will stop people from implying that he is biased :rolleyes:


This disappoints me. He does seem to imply that NV35/NV36 cards run his game very well.
 
anaqer said:
ChrisRay said:
This disappoints me. He does seem to imply that NV35/NV36 cards run his game very well.

Uhm... got some solid ( heck, any ) info contradicting that? :rolleyes:

I'm not sure what you are implying. From this quote I gather he's saying that the NV35 and NV36 are running his game quite a bit better than the NV31/NV34/NV30 cards in his ARB code.

Later NV30 class cards are
faster (I have not done head to head comparisons with non-Doom code), and
the NV40 runs everything really fast.
 
It doesn't surprise me that the later NV3x chips can run this game faster than their R3x0 counterparts.

After all, we already know that this chip architecture is virtually built around Doom3's rendering techniques, enabling the 8x0 "zixel" mode to be used. NV then have higher clocks (in general) than competing parts, as well as "UltraShadow" on some chips and their shader replacements, so it seems entirely logical to think that they will be faster - this game is the best possible case for NV3x.

Wider use of PS2.0 shaders in other games will undoubtedly redress the balance in favour of the R3x0 series.
 
Zeross said:
But it's a chicken-and-egg problem: now you have to believe me that this really comes from John Carmack :D
In my experience, JC will readily answer any e-mail that is interesting to him, so I, for one, believe it :)

Anyway, if you remember, I quoted some time ago that the NV40 architecture's compiler was written during the design of the architecture, and the hardware was modified to improve its compilability, whereas the NV30's compiler was written after the hardware was done.

It may therefore be the case that there is no fundamental difference between how the NV40's shaders and the NV30's shaders handle FP registers. It's just that the NV40's hardware has an additional hardware scheduler that allows optimal use of the FP registers (i.e. it may still be possible to find a particularly bad scenario where there is a register performance hit on the NV40. This may, for example, be why some synthetic benchmarks show a performance drop when using FP16: FP16 registers may be harder for the hardware to "swap around" dynamically).

If this is the case, then it is conceivable that much of the NV30's FP register performance hit is not related to some baked-in hardware penalty, but rather to how well the drivers can schedule register usage on the fly. A hand-written shader (by nVidia, as it can be done at a much lower level) may be able to schedule FP register usage better - a problem that nVidia has apparently not solved in their runtime shader compiler in the drivers.

Still, it does seem apparent that even hand-written shaders cannot completely remove the FP performance hit, though it is obvious that hand-written shaders can hugely increase performance on the NV3x. (For absolute proof of this concept we'd need to see one hand-written shader that doesn't drop any precision but still provides a huge performance increase; I'm not sure we can say we've seen one, but I think it is something to consider.)
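The register-scheduling point above can be made concrete with a toy model. This is purely illustrative - the instruction lists and the liveness rule are my own assumptions, not anything from nVidia's drivers - but it shows how the same arithmetic, ordered two ways, needs a different peak number of temporary registers, which is exactly the quantity that reportedly cost performance on NV3x:

```python
# Toy register-pressure model (NOT nVidia's actual compiler).  Each
# instruction is (destination_temp, source_temps); a temp is assumed
# live from its definition until its last use, and the destination is
# counted as live alongside its sources during the instruction.
def peak_live_temps(instrs):
    last_use = {}
    for i, (_, srcs) in enumerate(instrs):
        for s in srcs:
            last_use[s] = i
    live, peak = set(), 0
    for i, (dest, srcs) in enumerate(instrs):
        live.add(dest)
        peak = max(peak, len(live))
        for s in srcs:
            if last_use.get(s) == i:
                live.discard(s)  # source temp dies here
    return peak

# Same computation, (a*b + c*d) + e*f, in two orders.
# Naive order: evaluate every product first, then combine.
naive = [
    ("t1", ["a", "b"]),
    ("t2", ["c", "d"]),
    ("t3", ["e", "f"]),
    ("t4", ["t1", "t2"]),
    ("t5", ["t4", "t3"]),
]
# Scheduled order: combine partial results as soon as possible.
scheduled = [
    ("t1", ["a", "b"]),
    ("t2", ["c", "d"]),
    ("t3", ["t1", "t2"]),
    ("t4", ["e", "f"]),
    ("t5", ["t3", "t4"]),
]
print(peak_live_temps(naive))      # 4 temps live at once
print(peak_live_temps(scheduled))  # only 3
```

A runtime compiler has to find the cheaper ordering on the fly for arbitrary shaders, whereas a hand-written replacement shader can bake it in - which is consistent with Chalnoth's suggestion about where the NV30's penalty comes from.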

digitalwanderer said:
He's still biased, at least IMHO.
Just remember that JC sees a completely different side of these companies than most of the people on these boards see.
 
Chalnoth said:
In my experience, JC will readily answer any e-mail that is interesting to him, so I, for one, believe it :)
No he doesn't! He never once replied to all the e-mails I sent him... :oops:

Just remember that JC sees a completely different side of these companies than most of the people on these boards see.
That don't make him right, not by a long shot. ;)
 
That don't make him right, not by a long shot.

I don't think that is what he was saying - just that JC's reality is separate from the consumers', whose needs are themselves fragmented, not to mention the various values people attribute to different qualities of graphics boards.
 
digitalwanderer said:
Zeross said:
I just hope that this will clear things up and that it will stop people from implying that he is biased :rolleyes:
Great quote and info, but I'm not getting over Johnny-boy's Doom3/nV30 benchmarking fiasco quite that easily.

That benchmark was organized by nVidia. He even changed the demo because he felt nVidia had picked a "best-case scenario" for their hw, and JC wanted a fair benchmark. For all we know, if ATi had organized it he would have agreed as well - unless you have solid information that invalidates what I said.

digitalwanderer said:
He's still biased, at least IMHO.

He used the word "cheating" when he could have used "optimisation". I don't know how he could have hit nVidia any harder.

You made elitebastard's front page with "nVidia discrepancies in FarCry benchmarks?" when all that article proved is what we all suspected: NV3x shenanigans are throwing a curveball at the NV40. Why don't you quote JC's latest email on your front page? How's this for a headline:

"John Carmack says nVidia is 'Cheating'!" :devilish:

What I'm trying to say is that, if there was any doubt, this last email (if true, of course) reveals that JC is partial to nVidia when it comes to development, because they give him (almost) everything he wants for his future engines. This doesn't prevent him, however, from pointing out when nVidia is doing "bad things" that hurt Joe Public.
 
digitalwanderer said:
Zeross said:
I just hope that this will clear things up and that it will stop people from implying that he is biased :rolleyes:
Great quote and info, but I'm not getting over Johnny-boy's Doom3/nV30 benchmarking fiasco quite that easily.

He's still biased, at least IMHO.
So? What was he supposed to do? He didn't let nVidia use their "own" demo. He did what he did and it was the right thing to do, and that was nothing, because it was SO plain to see the PR BS of the HardOCP "DEMO". Why would he go out of his way to say anything? EVERYBODY knows it was BS, except for the blind fans at [H]. Yeah, he is biased - he's JUST like you and I (except with way more money and brains) :LOL:
 
That's really an excellent quote 8)
Has it made the Inq yet?
Assuming it gets linked widely, I expect to see references in many Doom3 reviews & benches.
It can certainly be brought up any time someone tries to say 'nv30 roksors coz it beats r300 in doom3' :)

back in the ballpark of the R300 cards, but generally still a bit lower...
The bottom line is that the ATI R300 class systems will generally run a
typical random fragment program that you would write faster than early NV30 class systems...
Later NV30 class cards are faster
That doesn't quite add up to the FX 5950 being faster at Doom3 than the 9800 XT, to me.
It depends quite a bit on what he means by the last 'faster':
a> faster than early nv30
b> faster than r300

I read it as 'a', and possibly but not necessarily 'b' - but then I have a bias towards ATI...
Even if he does mean 'b', does he mean early or late R300 class?
 
Zeross said:
I asked him directly about the validity of the quote and for some more details, and he just answered me this:

The quote is from me. Nvidia probably IS "cheating" to some degree,
recognizing the Doom shaders and substituting optimized ones, because I
have found that making some innocuous changes causes the performance to
drop all the way back down to the levels it used to run at. I do set the
precision hint to allow them to use 16 bit floating point for everything,
which gets them back in the ballpark of the R300 cards, but generally still
a bit lower.

Removing a back end driver path is valuable to me, so I don't complain
about the optimization. Keeping Nvidia focused on the ARB standard paths
instead of their vendor specific paths is a Good Thing.

The bottom line is that the ATI R300 class systems will generally run a
typical random fragment program that you would write faster than early NV30
class systems, although it is very easy to run into the implementation
limits on the R300 when you are experimenting. Later NV30 class cards are
faster (I have not done head to head comparisons with non-Doom code), and
the NV40 runs everything really fast.

Feel free to post these comments.

John Carmack

But it's a chicken-and-egg problem: now you have to believe me that this really comes from John Carmack :D

I just hope that this will clear things up and that it will stop people from implying that he is biased :rolleyes:

kinda ot but jc's email is still johnc@idsoftware.com right?
 
Mordenkainen said:
Why don't you quote JC's latest email in your front page? How's this for a headline:

"John Carmack says nVidia is 'Cheating'!" :devilish:
Tempting headline, but I'm not posting up any HL2 or D3 news until they announce a release date....I'm just tired of all the pre-hype. :rolleyes:
 
Waltar said:
kinda ot but jc's email is still...

Just a note - don't everyone start e-mailing him now, okay? Not necessarily directed at you Waltar, but just to everyone. Too often I've seen it just get out of hand.
 
digitalwanderer said:
Chalnoth said:
In my experience, JC will readily answer any e-mail that is interesting to him, so I, for one, believe it :)
No he doesn't! He never once replied to all the e-mails I sent him... :oops:
That is interesting to him :)

He responded to a couple of mine. I do expect that he gets a lot of e-mails, however, and so even if he might otherwise like to respond to one that the "average Joe" sends, he might miss one in the spam.
 
PaulS said:
Just a note - don't everyone start e-mailing him now, okay? Not necessarily directed at you Waltar, but just to everyone. Too often I've seen it just get out of hand.
Yeah, I can see how that would get annoying. I think a generally good rule is simply that you shouldn't bother to e-mail somebody in a public position at random unless you have something specific to say/ask that can't be answered by somebody on the forums, for example. These people are sure to get many more e-mails, so I would think it would just be a common courtesy to not add to the noise.
 