Carmack's comments on NV30 vs R300, DOOM developments

If Nvidia made the standards, I am sure they could support them well too. Whoever has a next-gen card out first (from ATI or Nvidia) will define the standard, so it is completely unimpressive from my point of view. Do you, OpenGL guy, work for ATI?

Luminescent said:
I guess you/we could ask OpenGL guy or someone else from ATI. OpenGL guy, are you there? :D


If you do work for ATI, that is awesome, and I am happy you are not a crazy zealot; I figured people who worked for one of the companies might be.
 
The standards are not made by ATI; they are made with input from all the ARB members:

The OpenGL Architecture Review Board (ARB), an independent consortium formed in 1992, governs the OpenGL specification. Composed of many of the industry's leading graphics vendors, the ARB defines conformance tests and approves new OpenGL features and extensions. As of June 2002, voting members of the ARB include 3Dlabs, Apple, ATI, Dell Computer, Evans & Sutherland, Hewlett-Packard, IBM, Intel, Matrox, NVIDIA, Microsoft, SGI, Sun.

http://www.opengl.org/developers/about/arb.html
 
Sxotty said:
If Nvidia made the standards, I am sure they could support them well too. Whoever has a next-gen card out first (from ATI or Nvidia) will define the standard, so it is completely unimpressive from my point of view.
The Radeon 9700 Pro has been out for months. I don't know when the extension was adopted, but don't you think nvidia could have adopted it as well considering how much later their part is?

Checks web: I assume the extension in question is "ARB_fragment_program". According to http://oss.sgi.com/projects/ogl-sample/registry/ARB/fragment_program.txt it was adopted on Sept. 18, 2002. That's over 4 months ago.
Do you, OpenGL guy, work for ATI?
That would be a good assumption, but it doesn't answer my question.
If you do work for ATI, that is awesome, and I am happy you are not a crazy zealot; I figured people who worked for one of the companies might be.
Oh, I'm just as crazy as the next guy ;)
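
For reference, a minimal sketch of what application-side support for ARB_fragment_program looks like: the app hands the driver a program string through the extension's entry points. The one-instruction shader here is hypothetical, and the ARB entry points are assumed to already be resolved (GL_GLEXT_PROTOTYPES, or wglGetProcAddress/glXGetProcAddress on the relevant platform).

```c
#include <string.h>
#define GL_GLEXT_PROTOTYPES      /* expose the ARB entry points directly */
#include <GL/gl.h>
#include <GL/glext.h>

/* Hypothetical one-instruction ARB fragment program: pass the
 * interpolated primary colour straight through. */
static const char *fp_src =
    "!!ARBfp1.0\n"
    "MOV result.color, fragment.color;\n"
    "END\n";

void bind_passthrough_fragment_program(void)
{
    GLuint prog;

    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(fp_src), fp_src);
    glEnable(GL_FRAGMENT_PROGRAM_ARB);   /* subsequent draws use the program */
}
```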
 
Actually most of the people who work for companies tend to be relatively reserved, largely because you want your work to speak for itself and because this industry is far too small to want to make any enemies.

OpenGL_guy is just the loudest :)
 
If Nvidia made the standards, I am sure they could support them well too. Whoever has a next-gen card out first (from ATI or Nvidia) will define the standard, so it is completely unimpressive from my point of view.

The Radeon 9700 Pro has been out for months. I don't know when the extension was adopted, but don't you think nvidia could have adopted it as well considering how much later their part is?

How many times has it been said? The NV30 extension has more features than the ARB_fragment_program extension. The NV30 does support the ARB2 path perfectly well, except for the issue of calculation accuracy differing between the ARB2 and NV30-specific paths.
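
The "paths" being discussed come down to an extension check at startup. Here is a rough sketch, assuming a current GL context, with use_arb2_path/use_nv30_path as hypothetical stand-ins for an engine's backends (this is not Carmack's actual code):

```c
#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

/* Hypothetical backends standing in for an engine's cross-vendor ARB2
 * path and its NV30-specific path. */
static void use_arb2_path(void) { puts("using ARB_fragment_program path"); }
static void use_nv30_path(void) { puts("using NV_fragment_program path"); }

void select_fragment_path(void)
{
    /* Requires a current GL context. */
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);

    if (ext && strstr(ext, "GL_NV_fragment_program"))
        use_nv30_path();          /* vendor extension: extra features, precision hints */
    else if (ext && strstr(ext, "GL_ARB_fragment_program"))
        use_arb2_path();          /* the standardized path */
    else
        fprintf(stderr, "no fragment program support\n");
}
```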
 
OpenGL guy said:
The Radeon 9700 Pro has been out for months. I don't know when the extension was adopted, but don't you think nvidia could have adopted it as well considering how much later their part is?
Considering how long it took ATI to get any of its advanced programmability supported, I really don't see how this has any relevance. nVidia's drivers are fully-functional, though they appear to be running at a fraction of the performance that should be there.
 
The NV30 extension has more features than the ARB_fragment_program extension

Of course it does; it was designed by Nvidia. The ARB2 extension was designed by the OpenGL Architecture Review Board to be OPEN (you know, OpenGL); it's not a proprietary extension meant to make their hardware look superior. In the end, if all the card companies wanted to write their own extensions, they very easily could. The whole idea is to simplify programming on a broad range of hardware (like DX9). :rolleyes:
 
Chalnoth said:
Considering how long it took ATI to get any of its advanced programmability supported, I really don't see how this has any relevance. nVidia's drivers are fully-functional, though they appear to be running at a fraction of the performance that should be there.

The reason it took time is that ATi didn't want to repeat the NV_register_combiner/ATI_fragment_shader debacle, so instead of rushing out a GL_ATI_ extension to get it supported, they made sure to get it standardized. That takes time. Not to mention that the extension is quite complex and most likely takes quite some time to implement. nVidia could start coding their ARB_fragment_program support at the same time.
 
Doomtrooper said:
Of course it does; it was designed by Nvidia. The ARB2 extension was designed by the OpenGL Architecture Review Board to be OPEN (you know, OpenGL); it's not a proprietary extension meant to make their hardware look superior. In the end, if all the card companies wanted to write their own extensions, they very easily could. The whole idea is to simplify programming on a broad range of hardware (like DX9). :rolleyes:
ARB2 was designed by ATI, then accepted by the OpenGL Architecture Review Board after some modifications.
 
Humus said:
The reason it took time is that ATi didn't want to repeat the NV_register_combiner/ATI_fragment_shader debacle, so instead of rushing out a GL_ATI_ extension to get it supported, they made sure to get it standardized. That takes time. Not to mention that the extension is quite complex and most likely takes quite some time to implement. nVidia could start coding their ARB_fragment_program support at the same time.
Now that we're moving into a time when higher-level languages should become the norm, I think that proprietary shader assembly is nothing but a good thing.

As a side note, I would really like to know how well Cg compiles to ARB2.
 
Chalnoth said:
Doomtrooper said:
Of course it does; it was designed by Nvidia. The ARB2 extension was designed by the OpenGL Architecture Review Board to be OPEN (you know, OpenGL); it's not a proprietary extension meant to make their hardware look superior. In the end, if all the card companies wanted to write their own extensions, they very easily could. The whole idea is to simplify programming on a broad range of hardware (like DX9). :rolleyes:
ARB2 was designed by ATI, then accepted by the OpenGL Architecture Review Board after some modifications.

And? He talked about NV30 extensions... :rolleyes:
 
I don't understand:

Now that we're moving into a time when higher-level languages should become the norm, I think that proprietary shader assembly is nothing but a good thing.

How is that a good thing? Any time you have to create a separate path or do extra work for proprietary "stuff", the cost usually isn't justified...
 
jb said:
I don't understand:

Now that we're moving into a time when higher-level languages should become the norm, I think that proprietary shader assembly is nothing but a good thing.

How is that a good thing? Any time you have to create a separate path or do extra work for proprietary "stuff", the cost usually isn't justified...

He specifically mentioned higher-level languages, which means it won't be any extra work for the developers, since it will be up to the compiler to use (or not) the proprietary extensions.
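
A small sketch of that point, assuming the Cg runtime: the developer ships one high-level shader (the file "shade.cg" and its "main" entry point are hypothetical) and the runtime compiler picks whichever back-end profile the hardware exposes, for example fp30 on an NV30 and arbfp1 elsewhere.

```c
#include <Cg/cg.h>
#include <Cg/cgGL.h>

void load_fragment_shader(void)
{
    CGcontext ctx = cgCreateContext();

    /* Ask the runtime for the best fragment profile this GPU/driver supports;
     * the proprietary profile is only used when it is actually available. */
    CGprofile profile = cgGLGetLatestProfile(CG_GL_FRAGMENT);

    /* The same source file is compiled no matter which profile was chosen. */
    CGprogram prog = cgCreateProgramFromFile(ctx, CG_SOURCE, "shade.cg",
                                             profile, "main", NULL);

    cgGLLoadProgram(prog);
    cgGLEnableProfile(profile);
    cgGLBindProgram(prog);
}
```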
 
I think he means, as per DX 9 versus Cg, that if the intermediate layer of abstraction is removed, then, theoretically, further optimization can happen before the shader needs to be sent to the GPU for execution.

For one example I recall being mentioned previously, which I think was a math operation, I tend to disagree, since optimizing a standardized assembly for such an operation should be just as trivial. But for other theoretical optimizations, I think it depends on how fundamental the concepts of the LLSL are.

For a language maintained by more than one interest, such as the OpenGL HLSL, I tend to agree it can only offer advantages, but I don't know how significant they will be. It depends on how fast and how optimized the LLSL "JIT" remapping to the architecture is... things like inefficient looping and high-level optimizations would have been done by the HLSL->LLSL compilation anyway.

ATI seems to argue (in the case of DX 9) that HLSL->LLSL, then LLSL optimization/recompiling to GPU code, can achieve the same level of efficiency. Since, to my current level of understanding, this is based on the High->Low level optimization opportunities for both approaches being the same, it becomes a question of how effective the Low level->Architecture level remapping is for the "DX 9 HLSL" approach, and what opportunities, if any, for High->Architecture level optimizations might be missed by it.

Also, if the drivers are given the opportunity to remap LLSL to the architecture beforehand, and not "JIT", I don't think the HLSL->Architecture approach will offer any advantages for the foreseeable future (the entire advantage is based on getting more optimizations done earlier, AFAICS).

It seems to be the case, depending on how closely NV_Fragment instructions are mapped and optimized to the hardware functionality, that such optimizations (at current) do not yield significant gains, but that is only in one sample case (DOOM 3), and certainly doesn't establish anything as of yet.
 
Dio said:
Actually most of the people who work for companies tend to be relatively reserved, largely because you want your work to speak for itself and because this industry is far too small to want to make any enemies.

OpenGL_guy is just the loudest :)
Hey now, I've heard you on conference calls (and in person! ;)). The other day someone said an average talking level is about 75 dB... apparently they'd never met Dio :D
 
jb said:
I don't understand:

Now that we're moving into a time when higher-level languages should become the norm, I think that proprietary shader assembly is nothing but a good thing.

How is that a good thing? Any time you have to create a separate path or do extra work for proprietary "stuff", the cost usually isn't justified...
It's a good thing because it allows for the highest possible performance on various hardware, as well as allowing for possible future hardware improvements.

That is, the code the programmer writes (HLSL) should be identical across all similar video cards, just with different runtime compiler targets. Having proprietary assembly forces HLSLs to be flexible enough to allow future hardware designs not to be backwards-compatible with older shaders, potentially allowing for very significant performance improvements as time goes on (particularly if branching ever becomes relatively commonplace).
 