Carmack's comments on NV30 vs R300, DOOM developments

OpenGL guy said:
Hey now, I've heard you on conference calls (and in person! ;)). The other day someone said an average talking level is about 75 dB... apparently they'd never met Dio :D
I was expecting that, but from Andy.

It's not my fault I project well. It's 'leadership qualities' you know - i.e. I'm tall and have a very loud voice. :)
 
Doomtrooper said:
Bioware chose a proprietary Nvidia extension to show those effects, not exactly great for the consumer, is it?
In defense of Bioware, at the time, ATI only had their command-line shader extension. Now that they have added the ATI_text_fragment_shader extension, Bioware has a much nicer codepath available for supporting ATI graphics cards. I think ATI really screwed up with their original ATI_fragment_shader extension.
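For anyone who hasn't touched it, the difference in flavour is roughly this: the original R200 extension is driven entirely through function calls rather than a program string. A sketch from memory (enum names approximate, error checking omitted), with the text-style equivalent shown as an ARB_fragment_program string purely for comparison:

Code:
/* Call-based interface: GL_ATI_fragment_shader */
GLuint fs = glGenFragmentShadersATI(1);
glBindFragmentShaderATI(fs);
glBeginFragmentShaderATI();
/* r0 = sample of texture unit 0 at interpolated texcoord 0 */
glSampleMapATI(GL_REG_0_ATI, GL_TEXTURE0_ARB, GL_SWIZZLE_STR_ATI);
/* r0 = r0 * primary colour */
glColorFragmentOp2ATI(GL_MUL_ATI,
                      GL_REG_0_ATI, GL_NONE, GL_NONE,
                      GL_REG_0_ATI, GL_NONE, GL_NONE,
                      GL_PRIMARY_COLOR_ARB, GL_NONE, GL_NONE);
glEndFragmentShaderATI();

/* Text-style interface: the same thing as one program string
   (ARB_fragment_program syntax, shown only for comparison) */
const char *fp =
    "!!ARBfp1.0\n"
    "TEMP base;\n"
    "TEX base, fragment.texcoord[0], texture[0], 2D;\n"
    "MUL result.color, base, fragment.color.primary;\n"
    "END\n";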
 
GL_ATI_text_fragment_shader is only available on Mac platforms, so the situation hasn't changed. I sense that this extension has a very low priority and probably won't ever be supported on Windows.
 
Chalnoth said:
In defense of Bioware, at the time, ATI only had their command-line shader extension. Now that they have added the ATI_text_fragment_shader extension, Bioware has a much nicer codepath available for supporting ATI graphics cards. I think ATI really screwed up with their original ATI_fragment_shader extension.

Morrowind was released as a DX8 title and guess what... shiny water, no issues. A good example of why I'm starting to prefer DX titles.
 
Humus said:
GL_ATI_text_fragment_shader is only available on Mac platforms, so the situation hasn't changed. I sense that this extension has a very low priority and probably won't ever be supported on Windows.
Interesting. I guess that changes the situation somewhat, but I can still see why somebody would have supported the NV shader extensions and not the ATI ones (no reason now... but back in the 8500 days...).
 
Really? GL_NV_register_combiners is much more confusing and inconvenient than GL_ATI_fragment_shader ever was. Smack on GL_NV_register_combiners2 + GL_NV_texture_shader{2|3} and it's even worse.
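For reference, here's roughly what even a plain "texture0 modulated by vertex colour" turns into with register combiners - a sketch from memory, enum names approximate; a real Doom-style bump/specular setup needs several general combiner stages plus texture shaders on top of this:

Code:
glEnable(GL_REGISTER_COMBINERS_NV);
glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

/* General combiner 0, RGB portion: spare0 = A * B,
   with A = texture0 and B = primary colour */
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                  GL_TEXTURE0_ARB, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                  GL_PRIMARY_COLOR_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                   GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                   GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);

/* Final combiner computes A*B + (1-A)*C + D; wire it to pass spare0 through */
glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV,
                       GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_ZERO,
                       GL_UNSIGNED_INVERT_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO,
                       GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO,
                       GL_UNSIGNED_IDENTITY_NV, GL_RGB);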
 
Humus said:
Really? GL_NV_register_combiners is much more confusing and inconvenient than GL_ATI_fragment_shader ever was. Smack on GL_NV_register_combiners2 + GL_NV_texture_shader{2|3} and it's even worse.

I think Chalnoth might have been referring to other-than-technical reasons (such as marketshare) for Bioware to have preferred the NVidia extensions.
 
Interesting reading the cover interview between Reverend and John Carmack on Beyond3D today. So in JC's view the NVIDIA drivers really aren't fully tuned, not by a long shot, going by page 2.

Does anyone have a guess how much longer it will take them to iron out the major performance bottlenecks - days, weeks, or months?
 
What exactly does that mean for FX users?
Does anyone follow what JC's "significant improvements with future drivers" refers to in terms of ARB2 performance? Are massive 40-50% performance gains coming across the board, or 4-5%? Are we only talking about drivers not yet optimized for DX9? Or is it that, like any other driver, they just need tweaking here and there to ensure compatibility with all games/APIs? What's with the generic and almost misleading statements?
:rolleyes:
 
Thanks demalion, that's a bit clearer. Carmack's statements still show nothing definitive. It's going to take a lot of work by NVIDIA and a lot of working with developers before what Carmack is saying means anything at all. Is there a tangible guarantee we'll ever see what he's talking about in practice?
 
My opinion is that John had a whole lot of NV goodness during Quake3 development work. He probably got (most of) what he asked for from NV during that time, and NV probably responded the quickest. I believe NV delivered most of what they promised to John during that time. That is probably why John appears to continue (i.e. from GF1 to GF2 to GF3 to GF4 to GFFX) to give NV the benefit of the doubt wrt DOOM3 and the NV30. What I read from John's .plan and his answers to my questions is simply that, and no more - giving NV the benefit of the doubt.

I honestly think all John wants is to have DOOM3 run the best, all things considered. I doubt he'd ignore the fact that the Radeon 9700 Pro was available commercially 6 months ago.
 
Humus said:
Really? GL_NV_register_combiners is much more confusing and inconvenient than GL_ATI_fragment_shader ever was. Smack on GL_NV_register_combiners2 + GL_NV_texture_shader{2|3} and it's even worse.
Well, Humus, that may be your opinion, but which one did you learn first? Often people tend to prefer the one that they program with first. And please note that this has purely to do with the programming interface, not the underlying assembly code.

Anyway, I was going directly off of John Carmack's statements that ATI's programming path is messier than nVidia's (w/ respect to the GF3/R8500 extensions).

And no, I wasn't talking about marketshare at all. In particular, I find that it could be a fair bit more cumbersome to support both types of extensions than it would be if the extensions were much more similar in structure.
 
Reverend said:
My opinion is that John had a whole lot of NV goodness during Quake3 development work. He probably got (most of) what he asked for from NV during that time, and NV probably responded the quickest. I believe NV delivered most of what they promised to John during that time. That is probably why John appears to continue (i.e. from GF1 to GF2 to GF3 to GF4 to GFFX) to give NV the benefit of the doubt wrt DOOM3 and the NV30. What I read from John's .plan and his answers to my questions is simply that, and no more - giving NV the benefit of the doubt.
From what he's said in the past, I really believe that he'd like to drop NV30-extension support if he has the chance (assuming he doesn't move to any HLSL, which seems unlikely at this juncture). He seems to prefer industry-standard extensions by quite a bit over proprietary extensions. For example, the NV30 vertex program extension obviously offers a fair bit more functionality than the ARB vertex program extension, so why doesn't he use the NV30 extension as well? I really don't think it has much to do with programming ease as far as he's concerned, but more of a "purist" standpoint to 3D coding.

Anyway, if nVidia brings the ARB2 performance up to the NV30 fragment program extension performance, I really think that JC will drop the NV30 fragment program support (I doubt the performance can actually be identical, but it seems possible that it will get close).
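For what it's worth, dropping or keeping a path is cheap from the engine side - the back ends get picked once at startup by sniffing the extension string, something along these lines (a hypothetical sketch; the path names mirror the ones Carmack uses in his .plans, everything else is made up for illustration):

Code:
#include <string.h>
#include <GL/gl.h>

typedef enum {
    PATH_ARB2,   /* ARB_vertex_program + ARB_fragment_program */
    PATH_NV30,   /* NV_fragment_program */
    PATH_R200,   /* ATI_fragment_shader */
    PATH_NV20,   /* NV_register_combiners (+ texture shaders) */
    PATH_ARB     /* plain multitexture fallback */
} render_path_t;

static int has_ext(const char *name)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    return exts && strstr(exts, name) != NULL;
}

render_path_t choose_render_path(void)
{
    /* The ordering is the whole policy: prefer a vendor path only where it
       is measurably faster, otherwise take the common ARB2 path. */
    if (has_ext("GL_NV_fragment_program"))
        return PATH_NV30;
    if (has_ext("GL_ARB_fragment_program") && has_ext("GL_ARB_vertex_program"))
        return PATH_ARB2;
    if (has_ext("GL_ATI_fragment_shader"))
        return PATH_R200;
    if (has_ext("GL_NV_register_combiners"))
        return PATH_NV20;
    return PATH_ARB;
}

Swapping the first two checks is more or less all "dropping the NV30 path" would amount to, once ARB2 performance is close enough.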
 
Chalnoth...

Well, Humus, that may be your opinion, but which one did you learn first? Often people tend to prefer the one that they program with first. And please note that this has purely to do with the programming interface, not the underlying assembly code.

Anyway, I was going directly off of John Carmack's statements that ATI's programming path is messier than nVidia's (w/ respect to the GF3/R8500 extensions).

So, um, did you ask Carmack which path he coded for first, GF3 or Radeon 8500? Why do you apply this restriction to Humus, and not Carmack?

In any case, I don't believe Carmack criticized the ATI fragment extensions at all. If anything, it was the other way around. IIRC he favored the nVidia vertex program extensions over ATI's, was agnostic about the vertex array extensions, and favored the ATI fragment extensions.
 
Joe DeFuria said:
In any case, I don't believe Carmack criticized the ATI fragment extensions at all. If anything, it was the other way around. IIRC he favored the nVidia vertex program extensions over ATI's, was agnostic about the vertex array extensions, and favored the ATI fragment extensions.

I think you hit the nail on the head there.

Carmack .plan, Feb 11, 2002:

The vertex program extensions provide almost the same functionality. The ATI hardware is a little bit more capable, but not in any way that I care about. The ATI extension interface is massively more painful to use than the text parsing interface from nvidia. On the plus side, the ATI vertex programs are invariant with the normal OpenGL vertex processing, which allowed me to reuse a bunch of code. The Nvidia vertex programs can't be used in multipass algorithms with standard OpenGL passes, because they generate tiny differences in depth values, forcing you to implement EVERYTHING with vertex programs. Nvidia is planning on making this optional in the future, at a slight speed cost.

And for the fragment extensions:

The fragment level processing is clearly way better on the 8500 than on the Nvidia products, including the latest GF4. You have six individual textures, but you can access the textures twice, giving up to eleven possible texture accesses in a single pass, and the dependent texture operation is much more sensible. This wound up being a perfect fit for Doom, because the standard path could be implemented with six unique textures, but required one texture (a normalization cube map) to be accessed twice.
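(As an aside on the vertex program invariance issue in the first quote: the ARB extension later made that trade-off explicit - a program can declare itself position-invariant and reuse the fixed-function transform, at whatever speed cost that implies. A rough sketch, assuming ARB_vertex_program syntax, written as a C string here:)

Code:
/* Minimal position-invariant vertex program. With the option set, the program
   must NOT write result.position; the standard OpenGL transform is used
   instead, so depth values match fixed-function passes exactly - the
   multipass problem described above for the NV path. */
const char *vp =
    "!!ARBvp1.0\n"
    "OPTION ARB_position_invariant;\n"
    "MOV result.color, vertex.color;\n"
    "MOV result.texcoord[0], vertex.texcoord[0];\n"
    "END\n";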
 
Bjorn said:
And for the fragment extensions:

The fragment level processing is clearly way better on the 8500 than on the Nvidia products, including the latest GF4. You have six individual textures, but you can access the textures twice, giving up to eleven possible texture accesses in a single pass, and the dependent texture operation is much more sensible. This wound up being a perfect fit for Doom, because the standard path could be implemented with six unique textures, but required one texture (a normalization cube map) to be accessed twice.
What he is talking about there is the hardware of the GF4 vs. the 8500 (the fragment shaders themselves), not the software interface to access the hardware (the fragment shader extension).

He is saying that the fragment shader of the 8500 is better than the fragment shader of the GF4. He is not saying anything about the fragment extensions. The reason the GF4 can't sample more than 4 textures in a pass is not because NVIDIA made a worse fragment extension than ATI, it's because NVIDIA made less flexible hardware than ATI.
 
Reverend said:
I honestly think all John wants is to have DOOM3 run the best, all things considered.

Why would it matter to him? He doesn't play any games and his "game" (and I use the term loosely) isn't out yet.
 