Rendering paths in Doom3 revisited..

Hi guys,

I haven't seen anything about this, but maybe someone has..

We know that the NV30 path is optimized for the GeForce FX line, but is the ARB2 path optimized for the R3x0?

Silhouette
 
?

The ARB2 path is, well, the ARB2 path. It is not optimized for any particular hardware; it is up to the hardware to deal with the ARB2 path, usually through drivers.

So, to answer your question - No, the ARB2 path is not optimized for the R3x0, or any other hardware for that matter. More than anything, it should be the other way round -- hardware and drivers may have to be optimized for the ARB2 path.
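
For the curious, "ARB2" in practice just means feeding standard ARB_vertex_program / ARB_fragment_program assembly to whatever driver is present. A minimal sketch of what that looks like at the API level (illustrative only, not id's actual code; on Windows you would fetch these entry points via wglGetProcAddress):

/* Load a standard ARB fragment program. Any driver that exposes
   GL_ARB_fragment_program consumes the exact same string; no
   vendor-specific path is involved. */
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>
#include <string.h>

static const char *fp =
    "!!ARBfp1.0\n"
    "TEX result.color, fragment.texcoord[0], texture[0], 2D;\n"
    "END\n";

void loadFragmentProgram(void)
{
    GLuint prog;
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(fp), fp);
}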
 
chavvdarrr said:
hm, what OTHER game cards with ARB2 support in drivers exist?

Well, I thought that the GF-FX series of cards could use this path, and that they were simply uncomfortably slow. At least, that's what Carmack wrote. Is this incorrect? (As for why the specific drivers that the recent DOOM3 benchies were run with DIDN'T support that path - draw your own conclusions. But it's not an inability to support the path per se.)

Furthermore, the beauty of OpenGL is that it is available on a wide range of platforms and operating systems, and drivers exist for a range of hardware that is not limited to PC gaming. To many, this is seen as a desirable quality in and of itself.
The restriction you are making is artificial. DOOM3, like previous id titles, will run on a much wider gamut of machines than Windows PCs. Additionally, the ARB2 path makes excellent sense in terms of future proofing.

Entropy
 
Assembly standards for shading are going the way of the dinosaur, at least in OpenGL (in DX, too, if Microsoft has any brains).
 
Entropy said:
Well, I thought that the GF-FX series of cards could use this path, and that they were simply uncomfortably slow. At least, that's what Carmack wrote. Is this incorrect?

Due to a bug, ARB2 currently does not work with NVIDIA's DX9 cards when using the preview version of the Detonator FX driver. According to NVIDIA, ARB2 performance with the final driver should be identical to that of the NV30 code.

http://www.tomshardware.com/graphic/20030512/geforce_fx_5900-10.html#doom_iii_special_preview
 
chavvdarrr said:
hm, what OTHER game cards with ARB2 support in drivers exist?

Tenebrae has support for the ARB vertex and fragment program extensions (dunno why they are commonly called ARB2).
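
Whether a driver exposes those extensions is just a string check, by the way. A rough sketch (not Tenebrae's actual code; real code should match whole tokens rather than take the strstr shortcut):

#include <GL/gl.h>
#include <string.h>

/* Returns nonzero if the driver advertises both ARB program
   extensions, i.e. can run an "ARB2"-style path at all. */
int hasARBPrograms(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL
        && strstr(ext, "GL_ARB_vertex_program") != NULL
        && strstr(ext, "GL_ARB_fragment_program") != NULL;
}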
 
jpaana said:
chavvdarrr said:
hm, what OTHER game cards with ARB2 support in drivers exist?

Tenebrae has support for the ARB vertex and fragment program extensions (dunno why they are commonly called ARB2).

I think Carmack called the ARB_fragment_program path "ARB2" because he already had a standard ARB path.
 
Indeed. I just wish people would recognize that ARB2 is nothing but a Doom III term and has nothing to do with the OpenGL API itself.
 
Humus said:
Indeed. I just wish people would recognize that ARB2 is nothing but a Doom III term and has nothing to do with the OpenGL API itself.

Heh.
All renderpaths in the Doom3 engine are OpenGL, obviously.
Chavvdarrr implied that the ARB2 path is made for the R300, i.e. that it is the equivalent to the NV30 path, only for the R3X0 family of processors.

This, to the best of my understanding, is simply wrong.

Entropy
 
Entropy said:
Chavvdarrr implied that the ARB2 path is made for the R300, i.e. that it is the equivalent to the NV30 path, only for the R3X0 family of processors.

This, to the best of my understanding, is simply wrong.

Entropy
I'm not so sure about that. What other video card available today will use that rendering path? The path is going to be optimized using the R3xx, and therefore it is going to be optimized for the R3xx.
 
Humus said:
Indeed. I just wish people would recognize that ARB2 is nothing but a Doom III term and has nothing to do with the OpenGL API itself.

Ditto!

According to NVIDIA, ARB2 performance with the final driver should be identical to that of the NV30 code.

I'd be interested to see what comes of that - can't see how it'd be possible (without falling back to FP16). They aren't even honest with their own employees these days. :rolleyes:

MuFu.
 
Chalnoth said:
Entropy said:
Chavvdarrr implied that the ARB2 path is made for the R300, i.e. that it is the equivalent to the NV30 path, only for the R3X0 family of processors.

This, to the best of my understanding, is simply wrong.

Entropy
I'm not so sure about that. What other video card available today will use that rendering path? The path is going to be optimized using the R3xx, and therefore it is going to be optimized for the R3xx.

Sure, there is a point there, but it's fairly weak. Carmack has implied nothing of the sort. Rather, the implication has been that if the R3xx had required something different from the default, he would have coded a path for it, just as he did for the NV30.

By all accounts, what he calls the ARB2 path will by default be used by all upcoming cards that support it. I wouldn't be surprised at all if all nVidia cards from the NV40 and onwards used it, for instance. That rendering path is meant to be the standard one for the foreseeable future of the Doom3 engine. A lot will happen on the graphics card front during the effective lifetime of the D3 engine, and I have seen no indication that Carmack has tailored it to the R3xx family specifically. It would be foolish, and by his own words, unnecessary.
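
To make that concrete, the selection logic described in his .plan updates amounts to something like the following. The path names are his; the code is a speculative sketch (reusing the hasARBPrograms() check from above, plus a hypothetical hasExtension() helper), not id's actual source:

typedef enum { PATH_ARB, PATH_NV10, PATH_NV20, PATH_R200,
               PATH_NV30, PATH_ARB2 } renderPath_t;

/* Speculative sketch: prefer the generic ARB2 path wherever the two
   ARB program extensions exist, and fall back to vendor paths on
   older hardware. (An NV30 driver exposes ARB_fragment_program too,
   so a real engine would also let the user override the default.) */
renderPath_t selectRenderPath(void)
{
    if (hasARBPrograms())                          /* R3x0 and anything newer */
        return PATH_ARB2;
    if (hasExtension("GL_NV_fragment_program"))    /* GeForce FX */
        return PATH_NV30;
    if (hasExtension("GL_ATI_fragment_shader"))    /* Radeon 8500/9000 */
        return PATH_R200;
    if (hasExtension("GL_NV_texture_shader"))      /* GeForce 3/4 */
        return PATH_NV20;
    if (hasExtension("GL_NV_register_combiners"))  /* GeForce 1/2 */
        return PATH_NV10;
    return PATH_ARB;                               /* lowest common denominator */
}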

Entropy
 
Nice o_O this is what I expected anyway... So this means that Doom III is optimized for the GF FX, but not for the R3x0 :cry:

I guess that's the reason why the R3x0 series does not look very good in the initial benchmarks :( Hopefully the game gets optimized for it as well before release..

Silhouette
 
Is there anybody here who can put this question to John Carmack? (like Reverend)

Is he going to write an R3x0-optimized path? I mean, still use the same ARB2 path, but have it utilize R3x0-optimized shader code when an R3x0 is detected..

Or will he just let ATI do the same job in the drivers :?

Silhouette
 
Entropy said:
and I have seen no indication that Carmack has tailored it to the R3xx family specifically. It would be foolish, and by his own words, unnecessary.

Entropy
Which is the point. The R3xx architecture is so closely tied to the specs (it appears that ATI got pretty much exactly what they wanted in the standard assembly shader language specification in both DX9 and OpenGL) that there won't be much difference between optimizing for the R3xx in ARB2 and optimizing for any other video card in ARB2.

By all accounts, what he calls the ARB2 path will by default be used by all upcoming cards that support it. I wouldn't be surprised at all if all nVidia cards from the NV40 and onwards used it, for instance.
ARB2 still has one glaring flaw: no instruction-level specification for data types. I see no reason why future hardware should only support one data type.

And, very soon, I expect JC to move the standard to the GL2 HLSL (GLSlang, I believe it is). ARB2 is just a stopgap: video card architectures just should not be tied to a specific assembly language specification.
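
To illustrate the difference: ARB_fragment_program only has a whole-program precision hint, while NV_fragment_program types each instruction individually (R = fp32, H = fp16, X = fx12). Trivial made-up programs, for illustration only:

/* ARB: one OPTION line hints precision for the entire program. */
static const char *arbFP =
    "!!ARBfp1.0\n"
    "OPTION ARB_precision_hint_fastest;\n"
    "MOV result.color, fragment.color;\n"
    "END\n";

/* NV: the H suffix requests fp16 for this one instruction only. */
static const char *nvFP =
    "!!FP1.0\n"
    "MOVH o[COLH], f[COL0];\n"
    "END\n";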
 
Chalnoth said:
Entropy said:
and I have seen no indication that Carmack has tailored it to the R3xx family specifically. It would be foolish, and by his own words, unnecessary.

Entropy
Which is the point. The R3xx architecture is so closely tied to the specs (it appears that ATI got pretty much exactly what they wanted in the standard assembly shader language specification in both DX9 and OpenGL) that there won't be much difference between optimizing for the R3xx in ARB2 and optimizing for any other video card in ARB2.

By all accounts, what he calls the ARB2 path will by default be used by all upcoming cards that support it. I wouldn't be surprised at all if all nVidia cards from the NV40 and onwards used it, for instance.
ARB2 still has one glaring flaw: no instruction-level specification for data types. I see no reason why future hardware should only support one data type.

And, very soon, I expect JC to move the standard to the GL2 HLSL (GLSlang, I believe it is). ARB2 is just a stopgap: video card architectures just should not be tied to a specific assembly language specification.

Carmack said his next engine would definitely be coded using a gfx HLSL. I haven't heard anything more about that project, though, nor anything further regarding his view on the current alternatives, other than that he views the direction as more important than the specifics at this point. It may or may not be a practical proposition to build an engine on the current alternatives, but he has been quite emphatic that this is the direction he envisions the industry moving in. Which seems to make excellent sense, although he pointed out that it may make it harder for IHVs to differentiate their products based on feature set.
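
For anyone who hasn't looked at GLSlang: instead of the assembly interfaces, you hand the driver C-like source and it compiles for its own hardware. A minimal sketch in the proposed language (details may well change before ratification, and the uniform name is mine):

static const char *glslFragment =
    "uniform sampler2D diffuseMap;\n"
    "void main(void)\n"
    "{\n"
    "    gl_FragColor = texture2D(diffuseMap, gl_TexCoord[0].xy);\n"
    "}\n";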

On the other hand, he has given no indication that I'm aware of that he would recode the Doom3 engine using an HLSL. That would seem like quite a bit of work for little practical gain, and a likely lowering of performance unless the compilers did a better job than JC. I seriously doubt they would. And the Doom3 engine is likely to be with us for a fairly long time. HLSL prime time doesn't seem to be here just yet, and frankly the need for them isn't nearly as acute in gfx as in general purpose computing. Nice and useful, of course, but hardly essential at this point in time.
We'll see. But they have obviously had no impact on the upcoming generation of games, and it is not a certainty that they will dominate the development of the next generation either.

Whether the data-types issue is a major problem is debatable. I would tend to see it as the domain of the programmer, but these are GPUs we are talking about, not CPUs, and there is no doubt whatsoever that there are quite competent folks on that OpenGL standards committee, with quite wide-ranging needs and priorities. If they don't think it's needed, they probably know what they are talking about, regardless of my personal scepticism. FP16 is more of a stopgap than OGL2, IMHO.
The view of some actual game coders would be welcome on this issue.

Entropy
 
Entropy said:
On the other hand, he has given no indication that I'm aware of that he would recode the Doom3 engine using an HLSL.
Right. I think it'll come sooner than you'd think, however.

Regardless, DOOM3 will always have available all rendering modes, and I'm certain that you'll always be able to force the use of one or another.
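
If memory serves, the leaked alpha exposed this as a console cvar, and presumably the shipping game will keep something similar (the exact name and values are unverified):

seta r_renderer arb2    // or: best, arb, nv10, nv20, nv30, r200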

That means that future nVidia cards (nVidia being the only company with proprietary shader assembly extensions at the DX9 level) will not need to be great at ARB_fragment_program: they'll always be able to use NV_fragment_program.

HLSL prime time doesn't seem to be here just yet, and frankly the need for them isn't nearly as acute in gfx as in general purpose computing. Nice and useful, of course, but hardly essential at this point in time.
We'll see. But they have obviously had no impact on the upcoming generation of games, and it is not a certainty that they will dominate the development of the next generation either.
The issue isn't how useful they are now. It's about the advancement of 3D graphics. If they don't come into widespread use ASAP, we may always be stuck with standardized assembly. This will lead to problems like we have today with the NV3x vs. R3xx, and will prevent innovation in hardware design.

If they don't think it's needed, they probably know what they are talking about, regardless of my personal scepticism. FP16 is more of a stopgap than OGL2, IMHO.
Just food for thought:
8-bit data types are still used commonly in software made for CPUs.
 
Chalnoth said:
Just food for thought:
8-bit data types are still used commonly in software made for CPUs.
Looking through my own stuff... small-range counters, strings (collections of 8-bit values), booleans, small constants... not much else. Mostly I use full 32-bit integers or floats.

In shaders, the only use I can think of for integers is for counters and some constants (used in loops), but that's it...
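
The closest thing to an integer in the current assembly interfaces is the address register: ARB_vertex_program's ARL floors a float into an index for constant arrays, which is exactly the counter/constant-indexing case. A sketch of matrix-palette skinning (projection omitted for brevity, attribute slot arbitrary):

/* The per-vertex bone index (premultiplied by 3 on the CPU, since
   each matrix occupies 3 rows) lands in the A0 address register and
   selects the bone matrix: the one "integer" in the language. */
static const char *skinVP =
    "!!ARBvp1.0\n"
    "ADDRESS A0;\n"
    "PARAM bone[96] = { program.local[0..95] };\n"
    "ARL A0.x, vertex.attrib[6].x;\n"
    "DP4 result.position.x, bone[A0.x + 0], vertex.position;\n"
    "DP4 result.position.y, bone[A0.x + 1], vertex.position;\n"
    "DP4 result.position.z, bone[A0.x + 2], vertex.position;\n"
    "MOV result.position.w, vertex.position.w;\n"
    "END\n";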
 