Is OpenGL hardware specific (PC)?

Ken2012

Newcomer
Simply put: generally, if, say, a D3D9 game is running on D3D8 hardware, various features will be optimised or disabled so the game runs correctly on that hardware (given that an additional D3D8 render path has been coded, obviously).

How does OpenGL work in the PC space? Does this same down-scoping of features, so to speak, occur, or is it merely a question of raw performance (i.e. fillrate, or lack thereof)?

The reason I ask is that I recently acquired my friend's old GF Ti4200 (DX8.1-class, I think) and compared it to my GF 7800GTX (DX9.0c), both running Doom 3. Although I was using two different monitors (a 15" CRT and a 19" LCD), I noticed very little difference in IQ at a glance... I know that Doom 3 was originally optimised for the GeForce 3, so perhaps this game/engine is too good an example of OGL structure for the test.

Thanks.
 
Ken2012 said:
How does OpenGL work in the PC space? Does this same down-scoping of features, so to speak, occur, or is it merely a question of raw performance (i.e. fillrate, or lack thereof)?

Yes, depending on which extensions (which may or may not be ARB certified) you use. See below.

The reason I ask is that I recently acquired my friend's old GF Ti4200 (DX8.1-class, I think) and compared it to my GF 7800GTX (DX9.0c), both running Doom 3. Although I was using two different monitors (a 15" CRT and a 19" LCD), I noticed very little difference in IQ at a glance... I know that Doom 3 was originally optimised for the GeForce 3, so perhaps this game/engine is too good an example of OGL structure for the test.

Btw, the GF3/4 (except the MX) are DX8.0-class parts. Now onto D3: the game was made to look very similar between the low and high end, but the engine does have different paths depending on your video card.

Your friend's GF4 is using the NV20 path while your GF7 is using the ARB2 path. You have fragment programs (pixel shaders) that the GF4 doesn't (heat haze, mirror distortion) and your light interactions are higher quality too. There are also other more subtle differences.
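Roughly speaking, the path selection boils down to checking which extensions the driver exposes. A hypothetical sketch of the idea (the path names mirror Doom 3's, but this is only an illustration, not id's actual code; has_extension() is assumed to be a small helper that scans the GL extension string, like the one sketched further down the thread):

    typedef enum { RP_ARB, RP_NV20, RP_R200, RP_ARB2 } renderPath_t;

    renderPath_t choose_render_path(void)
    {
        if (has_extension("GL_ARB_fragment_program"))
            return RP_ARB2;   /* DX9-class parts: GeForce FX/6/7, Radeon 9500+ */
        if (has_extension("GL_ATI_fragment_shader"))
            return RP_R200;   /* Radeon 8500/9000/9200 */
        if (has_extension("GL_NV_texture_shader"))
            return RP_NV20;   /* GeForce 3/4 Ti */
        return RP_ARB;        /* lowest-common-denominator path */
    }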
 
Mordenkainen said:
You have fragment programs (pixel shaders) that the GF4 doesn't (heat haze, mirror distortion) and your light interactions are higher quality too. There are also other more subtle differences.

Ahh, of course. I was concentrating on wall textures for the most part, which indeed look virtually identical on both cards/paths. Need to pay more attention :oops:

"Light interactions": including shadow quality I take it?

Thanks again.
 
Ken2012 said:
"Light interactions": including shadow quality I take it?

Not shadows; basically, better looking specular highlights. Shadow-wise there are no visual differences between paths (though D3 does take advantage of the two-sided stencil OGL extension for speed if your card/driver supports it).
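If you're curious what that looks like, here's a rough sketch, assuming the EXT_stencil_two_side and EXT_stencil_wrap extensions are available and the glActiveStencilFaceEXT entry point has been fetched from the driver (illustrative only, not Doom 3's actual code; draw_shadow_volume() is a hypothetical helper):

    /* Fill the stencil buffer for a shadow volume in one draw instead of two. */
    glEnable(GL_STENCIL_TEST);
    glEnable(GL_STENCIL_TEST_TWO_SIDE_EXT);
    glDisable(GL_CULL_FACE);                          /* render front and back faces together */

    glActiveStencilFaceEXT(GL_BACK);
    glStencilOp(GL_KEEP, GL_KEEP, GL_DECR_WRAP_EXT);  /* back faces decrement on depth pass */
    glStencilFunc(GL_ALWAYS, 0, ~0u);

    glActiveStencilFaceEXT(GL_FRONT);
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR_WRAP_EXT);  /* front faces increment on depth pass */
    glStencilFunc(GL_ALWAYS, 0, ~0u);

    draw_shadow_volume();                             /* one pass; without the extension it takes two */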
 
Textures are just textures, too... Normal maps won't look any better because of some newer DX/OGL feature set (ignoring compression, for which the available formats aren't generally used beyond DXTC anyway, so it's somewhat of a moot point). Textures only look better with higher resolution and/or good artists.

edit: I suppose you meant differences between regular textures, normal maps, virtual displacement mapping... well... Mordenkainen already mentioned the bit about Doom3's development. :p
 
Ken2012 said:
Simply put: generally, if, say, a D3D9 game is running on D3D8 hardware, various features will be optimised or disabled so the game runs correctly on that hardware (given that an additional D3D8 render path has been coded, obviously).

How does OpenGL work in the PC space? Does this same down-scoping of features, so to speak, occur, or is it merely a question of raw performance (i.e. fillrate, or lack thereof)?
OpenGL works pretty much the same way as D3D in almost all games, as long as the programmer does the equivalent of D3D's CAPs check; in OpenGL, this means checking which of the extensions the game uses are supported by the host hardware.

So, yes, when you start up an OpenGL game, it may "down-scope" available features in the menu depending on what video card you have.
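A minimal sketch of what that check looks like in C, assuming a GL context is already current (the helper name is just for illustration, and a robust version would match whole tokens rather than substrings):

    #include <GL/gl.h>
    #include <string.h>

    /* OpenGL's rough equivalent of a D3D CAPs check: look for an extension
       name in the extension string (the GL 1.x/2.x way of doing it). */
    int has_extension(const char *name)
    {
        const char *all = (const char *)glGetString(GL_EXTENSIONS);
        return all != NULL && strstr(all, name) != NULL;
    }

    /* e.g. if (has_extension("GL_ARB_fragment_program")) expose the DX9-class
       options in the menu, otherwise grey them out. */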

The reason I ask is that I recently acquired my friend's old GF Ti4200 (DX8.1-class, I think) and compared it to my GF 7800GTX (DX9.0c), both running Doom 3. Although I was using two different monitors (a 15" CRT and a 19" LCD), I noticed very little difference in IQ at a glance... I know that Doom 3 was originally optimised for the GeForce 3, so perhaps this game/engine is too good an example of OGL structure for the test.

Thanks.
Doom3 does what I explained above. The reason you didn't immediately notice much difference between the DX8- and DX9-class video cards is primarily what Mordenkainen explained, plus whatever a given gamer does or doesn't notice according to his own preferences when judging image quality. For example, if you have a DX9-class video card, the game will auto-detect that and use the ARB2 path, which means using floating-point calculations, something a DX8-class video card can't do. Even so, in this particular DX8-vs-DX9 comparison, the image quality differences won't jump out at you. The other possible reason you don't immediately notice any image quality differences is that, on a DX8-class video card, the game uses non-DX9-class (or "just" DX8-class) features (they more or less fall between DX8-class and DX9-class features, going strictly by the MS DX CAPs adherence rules for hardware specifications at each new major iteration of DirectX), and those really don't provide the usual impact/wow factor.

Based on your observation of what may constitute "immediate impact", maybe the IHVs (and MS, wrt its DX adherence rules) ought to do more when it comes to things like textures (almost everyone notices higher-resolution textures more immediately than anything else; for all I care, I can live with 24-bit floating-point precision for another 3 years, as an example of where my preferences and priorities lie!) and texture compression.

Hope this helps and that I didn't ramble on unnecessarily!
 
Regarding OpenGL, it's a bit fucked up for the DX8 era. There are no official (ARB) extensions for "pixel shader 1.x"; you have the NVIDIA extension for the GeForce 3/4 and the ATI extension for the R2xx, and nothing for the Matrox Parhelia and SiS Xabre I think (but who cares).
Now there's the ARB_fragment_program extension, which is roughly the equivalent of Pixel Shader 2.0 (where 3.0 fits, I don't know).

For Doom3, yes, it's the same rendering. On the GeForce 4, though, I notice much more banding/checkerboarding on the specular maps (which are, er, some form of light on a surface, I'd say).
 
Blazkowicz_ said:
Regarding OpenGL, it's a bit fucked up for the DX8 era. There are no official (ARB) extensions for "pixel shader 1.x"; you have the NVIDIA extension for the GeForce 3/4 and the ATI extension for the R2xx, and nothing for the Matrox Parhelia and SiS Xabre I think (but who cares).
Now there's the ARB_fragment_program extension, which is roughly the equivalent of Pixel Shader 2.0 (where 3.0 fits, I don't know).

Now I don't know Direct3D at all, but I got the impression that ARB_fragment_program was the 1.x era equivalent, and GLSL/ARB_shading_language_100 cover the 2.0-3.0 range.
 
ARB_fragment_program corresponds to PS2.0, not 1.x. Much like PS2.0, it requires at least FP24 floating-point precision (whereas PS1.x generally only required 9-bit fixed-point); its program lengths are similar to PS2.0 and much longer than PS1.x (PS1.4: 6 tex + 16 arith, PS2.0: 32 tex + 64 arith, ARB: 24 tex + 48 arith); also, its dependent-texturing constraints (4 levels, no particular limits on how much code you can run per level) are similar to PS2.0 and very, very different from what appeared in PS1.x.
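To give a feel for the assembly-style interface, here is a tiny illustrative ARB_fragment_program (just a diffuse-map modulate) and the call that loads it, assuming the ARB_fragment_program entry points have already been fetched from the driver:

    /* Modulate a diffuse map by the interpolated vertex colour. */
    static const char fp[] =
        "!!ARBfp1.0\n"
        "TEMP diffuse;\n"
        "TEX diffuse, fragment.texcoord[0], texture[0], 2D;\n"
        "MUL result.color, diffuse, fragment.color;\n"
        "END\n";

    GLuint prog;
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)(sizeof(fp) - 1), fp);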

As for GLSL (OpenGL 2.0 standard), its current feature set rather closely matches PS3.0, except that it gets rid of a few restrictions (mainly temporaries and program length) and - like PS4.0 - no longer permits the shader program to be supplied in assembly form.

GLSL is to some extent available on sub-PS3.0 platforms as well; on these platforms, trying to use GLSL language features that aren't actually supported by the underlying hardware (e.g. dynamic branching on a PS2.0-class GPU) will cause the GL driver to revert to software rendering.
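As a rough GL 2.0-era sketch of the GLSL side (a trivial shader, just to show the compile/status check; some drivers of the time would note a software fallback in the info log):

    /* Compile a trivial GLSL fragment shader and check the result
       (assumes <stdio.h> and a current GL 2.0 context). */
    const char *src = "void main() { gl_FragColor = vec4(1.0); }";

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &src, NULL);
    glCompileShader(fs);

    GLint ok = 0;
    glGetShaderiv(fs, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(fs, sizeof(log), NULL, log);
        fprintf(stderr, "GLSL compile failed: %s\n", log);
    }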
 
Thank you all for the responses so far.

arjan de lumens said:
As for GLSL (OpenGL 2.0 standard), its current feature set rather closely matches PS3.0, except that it gets rid of a few restrictions (mainly temporaries and program length) and - like PS4.0 - no longer permits the shader program to be supplied in assembly form.

Perhaps out of the scope of the thread, but does a high-level language for shaders have any immediate advantages over assembly code in terms of performance?
 
Ken2012 said:
Perhaps out of the scope of the thread, but does a high-level language for shaders have any immediate advantages over assembly code in terms of performance?
Not in the short term, as they generally compile down to an assembly-like format in the first place. In the medium-to-long term, however, if the assembly optimization rules change, it is generally less effort to tweak the high-level language compiler than it would be to hand-re-schedule long reams of assembly code. Also, getting rid of top-down-imposed assembly languages would potentially allow the IHVs to better tweak the underlying shader instruction sets to provide better matches between constructs commonly used in high-level shaders and the hardware that is going to run them.

The short-term benefit of high-level shaders is more that they are much easier to write, debug, maintain and even generate on the fly, greatly increasing game developer productivity.
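For instance, a hypothetical GLSL 1.10-era fragment shader for per-pixel diffuse lighting reads almost like the textbook formula, whereas the equivalent hand-written assembly would be a long list of TEX/DP3/MUL instructions:

    // hypothetical example: per-pixel diffuse lighting
    uniform sampler2D diffuseMap;
    varying vec3 normal;
    varying vec3 lightDir;

    void main()
    {
        vec3  N     = normalize(normal);
        vec3  L     = normalize(lightDir);
        float NdotL = max(dot(N, L), 0.0);
        gl_FragColor = texture2D(diffuseMap, gl_TexCoord[0].xy) * NdotL;
    }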
 