What is the OpenGL equivalent to SM2.0/3.0?

You know how DX9 has PS2.0 and PS3.0? Well, what's the OpenGL equivalent to these? Or does OpenGL just use features on cards through extensions, regardless of shader model?
 
GLSL covers most of PS2.0/3.0. Since it doesn't compile down to any particular assembler instruction set, the implementation is entirely up to the IHV (though there are some minimum requirements). DirectX's HLSL seems a lot more rigid in this sense, compiling down to the assembler specifications.
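To make that concrete, here's a minimal sketch of a GLSL fragment shader (the sampler name is arbitrary). This high-level source string is handed straight to the driver, which compiles it for the hardware directly; there is no fixed assembler target in between, unlike DirectX HLSL:

```glsl
// Modulate a texture by the interpolated vertex colour.
// The driver compiles this source directly for the GPU;
// there is no intermediate assembler step as with HLSL.
uniform sampler2D baseMap; // hypothetical sampler name

void main()
{
    vec4 texel = texture2D(baseMap, gl_TexCoord[0].xy);
    gl_FragColor = texel * gl_Color;
}
```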
 
Hmm... so I still don't understand, though. In a game, say Doom 3, in its ARB2 path, how is it doing the shader stuff, and what DX9 thing is it comparable to? I mean, what's "ARB_fragment_program"?

Also, in Doom 3 there's a Cg path; let's assume it works. What is different between that and HLSL, and what PS version is it equivalent to? I'm so confused about the OpenGL shader stuff.
 
XxStratoMasterXx said:
Hmm... so I still don't understand, though. In a game, say Doom 3, in its ARB2 path, how is it doing the shader stuff, and what DX9 thing is it comparable to? I mean, what's "ARB_fragment_program"?

Also, in Doom 3 there's a Cg path; let's assume it works. What is different between that and HLSL, and what PS version is it equivalent to? I'm so confused about the OpenGL shader stuff.

ARB_fragment_program is basically the OpenGL equivalent of PS2.0.
There's no ARB extension for PS1.x functionality, and there won't be any more low-level extensions, so to get PS3.0 functionality you'll have to use GLSL.
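For reference, this is roughly what an ARB_fragment_program shader looks like: an assembler-style listing, much like PS2.0 assembly on the DirectX side. A minimal sketch doing a texture modulate:

```
!!ARBfp1.0
# Sample texture unit 0 and modulate by the interpolated colour.
TEMP texel;
TEX texel, fragment.texcoord[0], texture[0], 2D;
MUL result.color, texel, fragment.color;
END
```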
 
Ostsol said:
ARB_fragment_program is a low level shading specification equivalent to PS2.0.

Equivalent in functionality? Low level? Does that mean you have to write it in assembly or something?

Also, what about Cg?
 
XxStratoMasterXx said:
Ostsol said:
ARB_fragment_program is a low level shading specification equivalent to PS2.0.

Equivalent in functionality? Low level? Does that mean you have to write it in assembly or something?
Yes, it's basically the same as PS2.0 without HLSL.
Also, what about Cg?
Cg can compile down to multiple shader specifications (including DirectX ones), but only as long as a profile is written for it. It can support NVidia's OpenGL extensions for fragment shading that were written for the GeForce 3 and 4 (I really don't know much about them; I started out learning on a Radeon 8500). It doesn't support ATI's extensions, but this comes as no surprise. Cg does support ARB_vertex_program and ARB_fragment_program, though, and ATI does support those. However, there may still be an issue with how Cg optimizes its code, which may not be particularly good for ATI cards.
 
Ok... is HLSL a quality-improvement thing, or does it just ease the creation of shaders?


Also, someone knowledgeable told me that with OpenGL, to take advantage of the features on cards (like PS3.0 stuff), all you need to do is modify the actual fragment program to include those features, and no modifications to an OpenGL-based engine are needed. How would this work?
 
XxStratoMasterXx said:
Ok... is HLSL a quality-improvement thing, or does it just ease the creation of shaders?
HLSL is an acronym for High Level Shading Language. Instead of using an assembler-like language, one codes in a language similar in syntax to C/C++ or Java. It's simply a much easier way to code and is much more readable to programmers (especially those who are too lazy to add comments to their code).
Also, someone knowledgeable told me that with OpenGL, to take advantage of the features on cards (like PS3.0 stuff), all you need to do is modify the actual fragment program to include those features, and no modifications to an OpenGL-based engine are needed. How would this work?
Well, assuming the inputs and outputs are all the same, then yes, all you have to do is modify the fragment program (and hope that the video drivers support those features).
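As a sketch of that idea: if the engine always binds the same uniforms and textures, the shader file can be swapped for a fancier one without touching engine code. Both hypothetical versions below consume the same inputs (`baseMap`, `lightColor` — names made up for illustration) and write the same output:

```glsl
// Version 1: plain modulation.
uniform sampler2D baseMap;
uniform vec4 lightColor;

void main()
{
    gl_FragColor = texture2D(baseMap, gl_TexCoord[0].xy) * lightColor;
}

// Version 2 (drop-in replacement): identical inputs and output, but
// the body does extra work. The engine needs no changes as long as
// the driver accepts the new program.
// void main()
// {
//     vec4 texel = texture2D(baseMap, gl_TexCoord[0].xy);
//     gl_FragColor = vec4(pow(texel.rgb * lightColor.rgb, vec3(1.1)), texel.a);
// }
```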
 
Ostsol said:
HLSL is an acronym for High Level Shading Language. Instead of using an assembler-like language, one codes in a language similar in syntax to C/C++ or Java. It's simply a much easier way to code and is much more readable to programmers (especially those who are too lazy to add comments to their code).
So it's an ease-of-use thing; there are no visual benefits to be gained by using a high-level shading language over a low-level one. What is the performance hit of using HLSL over an assembly shader, say, like the ones Carmack wrote for Doom 3?

Well, assuming the inputs and outputs are all the same, then yes, all you have to do is modify the fragment program (and hope that the video drivers support those features).

What do you mean by inputs and outputs? Sorry, I'm very illiterate at advanced graphics coding (as many of you can tell :p)

But I gather from what you said that it was a yes to my question, as long as the video card and its drivers support the feature? Does that put OpenGL one up on D3D in that respect?


Also, how does the shader interface in Doom 3 compare to that of a D3D9 game such as HL2 or Far Cry? Any major differences?
 
Ultimately, whichever API you're using, you're targeting the same hardware. There are benefits to DX ASM, OpenGL ASM, GLSL, HLSL, Cg, and micro-code level (for consoles), but ultimately they all achieve roughly the same thing.

People get very religious that one way is 'correct', but in reality it varies. MS HLSL has the advantage of MS's compiler team (they are shit hot) but the disadvantage of compiling to an intermediate language. GLSL has the advantage of direct compilation to the hardware, but in all likelihood the compiler tech isn't quite as good as MS's.

ASM might shave a few cycles in theory, but sometimes having access to higher-level syntax can help the driver optimisers (GLSL). Micro-code should be the fastest, but it's usually the hardest to write.
 
Assembler will be compiled by the driver for the specific architecture, so it would be better to speak of P-code. Trying to write in real assembler (if that were possible), code that could be executed by the hardware directly, would be a really bad idea, as it takes lots of time to get right and it would only run on the chips you targeted.
 
XxStratoMasterXx said:
So it's an ease-of-use thing; there are no visual benefits to be gained by using a high-level shading language over a low-level one. What is the performance hit of using HLSL over an assembly shader, say, like the ones Carmack wrote for Doom 3?
Nope, there are no visual benefits, except perhaps where a feature of the high-level language is not supported by the assembler language. With DirectX's HLSL, everything has an assembler equivalent. With GLSL, the only equivalents are in some of NVidia's proprietary extensions. Still, ATI doesn't currently seem to support much beyond what is in ARB_fragment_program, so right now there's no visual benefit to using GLSL. For the future, though, high-level languages are certainly the way to go.
What do you mean by inputs and outputs? Sorry, I'm very illiterate at advanced graphics coding (as many of you can tell :p)
I mean that as long as the textures and other data (constants and other parameters) are the same, and as long as what the engine does to the framebuffer afterwards is the same, there is no problem. Problems might occur if the programmer wants to use dynamic branching to handle multiple lights but the engine is designed to process a fixed number of lights per pass. In that case, some changes to the engine will have to be made.
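For instance, a shader might loop over a variable light count (a PS3.0-style feature). That only works if the engine actually supplies `numLights` and the light arrays — all names here are hypothetical:

```glsl
// Hypothetical many-lights shader using dynamic branching.
// The engine must be written to supply numLights and the arrays;
// a fixed lights-per-pass engine would need changes to use this.
const int MAX_LIGHTS = 8;
uniform int numLights;
uniform vec3 lightDir[MAX_LIGHTS];   // directions toward each light
uniform vec4 lightColor[MAX_LIGHTS];

varying vec3 normal;

void main()
{
    vec4 accum = vec4(0.0);
    // Loop count is not known at compile time: dynamic branching.
    for (int i = 0; i < numLights; ++i)
        accum += max(dot(normalize(normal), lightDir[i]), 0.0) * lightColor[i];
    gl_FragColor = accum;
}
```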

Of course, some engines are more versatile, and their shader files contain much more than just the shader programs themselves. ATI's Sushi engine, for example, defines render passes and several other items in the shader file. This is one of the reasons why wrappers for the Ruby demo are possible.
But I gather from what you said that it was a yes to my question, as long as the video card and its drivers support the feature?
Yes.
Does that put OpenGL one up on D3D in that respect?
I'm not entirely sure about that. I'm not a DirectX programmer, so I can't give a definite answer. I know that the compiler requires the shader version to be specified, so a basic engine may not be able to automatically support PS3.0 features. However, I suppose an engine could store the shader version in the shader file. Of course, there's still the matter of detecting support for PS3.0. I suppose it's actually the same for OpenGL: if the engine doesn't know how to detect these new capabilities, it might try to use them on hardware that doesn't support them, which could cause errors.
Also, how does the shader interface in Doom3 compare to a D3D9 game such as HL2's or FarCry's? Any major differences?
I have no idea about that. I only have Doom 3 and I haven't taken much of a look at the tools for it or at its material files.
 
Go to www.opengl.org and download the glSlang specification, read it, then go to the online MSDN library and read about DirectX. Compare them. That's the only way to get the answers you're looking for. You can't ask these questions if you don't understand the basics of shading and the fundamental differences between these two APIs.

Hope it helps
 
DeanoC said:
GLSL has the advantage of direct compilation to the hardware, but in all likelihood the compiler tech isn't quite as good as MS's.

Who is "they"? GLSL only defines the language; the driver is responsible for taking care of the code. ATI provides its own compiler, NVIDIA does the same, as does 3Dlabs, etc. That means each IHV can optimize the code properly for the architecture it is currently running on. Not just the code, but also the uniform and varying management. Also, with GLSL you just have to wait for a driver release for speed improvements and bug fixes, instead of a DirectX installment for HLSL.

EDIT: and since GLSL eliminates a layer (compared to HLSL), there is no need to target the code at something (like PS3.0, etc.). The driver does what is best. It can even compile to PS1.0 if that fits, or take immediate advantage of PS3.0 features, or PS4.0, or whatever...
 
Curious: does ATI's GLSL compiler support in software the features the hardware doesn't actually support (gradient and noise functions, off the top of my head)?

Now that I think about it, are there functions for testing which features are actually supported in GLSL on a specific platform? I've not actually done much testing with GLSL, so I only remember learning the functions for compiling and using shaders.
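Within GLSL itself there is at least the `#extension` mechanism: a shader can demand an extension, and compilation fails (with a message in the info log) if the implementation doesn't expose it. From the application side you can also check the `glGetString(GL_EXTENSIONS)` list before trying. A sketch using a real extension name:

```glsl
// Refuse to compile unless the implementation exposes the extension;
// "warn" instead of "require" would merely issue a warning.
#extension GL_ARB_draw_buffers : require

void main()
{
    gl_FragData[0] = vec4(1.0); // write to the first draw buffer
    gl_FragData[1] = vec4(0.0); // and to the second
}
```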


Also, does anyone think we will get programmable alpha blending of some sort? You could always use render-to-texture and flip-flopping, but programmable alpha blending would be much more useful at some point in time.
 
Cryect said:
Also, does anyone think we will get programmable alpha blending of some sort? You could always use render-to-texture and flip-flopping, but programmable alpha blending would be much more useful at some point in time.
Certainly. The Wildcat Realizm already has a programmable FP16 blending stage (unfortunately, 3Dlabs use the term "pixel shader" for this), but I haven't seen a GL extension for it yet.
 
Cryect said:
Also, does anyone think we will get programmable alpha blending of some sort? You could always use render-to-texture and flip-flopping, but programmable alpha blending would be much more useful at some point in time.
Oh, probably. Sooner or later, framebuffer access will be granted, and at that point alpha blending will be rolled into pixel shading (just as fog calculation is in PS3.0, IIRC).

That's the only reasonable way I can think of doing it, anyway. Otherwise, you need to make another stage programmable and define yet more instruction limits, etc.
 