GL2 Shading spec released

texture lookup in a vertex program :)

I wonder how many of the current crop of cards can support this feature (it's also part of DX vertex shader 3.0).
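Something like displacement mapping, I'd guess; here's a minimal sketch in the GL2 language, assuming the application binds a height texture to a sampler I've called heightMap (both uniform names are made up for the example):

uniform sampler2D heightMap;    // hypothetical displacement texture bound by the app
uniform float displaceScale;    // hypothetical scale factor set by the app

void main()
{
    // Texture lookup in the vertex stage; the LOD has to be given explicitly
    // since there are no screen-space derivatives in the vertex processor.
    float height = texture2DLod(heightMap, gl_MultiTexCoord0.xy, 0.0).r;
    vec4 displaced = gl_Vertex + vec4(gl_Normal * height * displaceScale, 0.0);
    gl_Position = gl_ModelViewProjectionMatrix * displaced;
}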
 
It looks like Microsoft and 3DLabs made sure the two shading languages became as incompatible as possible. :(
 
Hyp-X said:
It looks like Microsoft and 3DLabs made sure the two shading languages became as incompatible as possible. :(
I thought from the description given in nvidia's Siggraph 2003 paper that there weren't that many differences between the various "C-like" shading languages. Perhaps I've just not looked closely enough at them all?
 
Well, they made some really dumb choices, like instead of calling something float4 to remain in sync with HLSL and Cg, they call it vec4. They discuss the issue, but still end up with vec4. There are other places where the syntax could have been harmonized but has been kept different for no real reason. At least with C, C#, and Java, most of the primitive types have similar names.
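Just to illustrate the kind of divergence I mean, the same declarations in both dialects (variable names invented for the example):

// GL2 / glslang                  // HLSL / Cg equivalent
vec4  diffuseColor;               // float4   diffuseColor;
vec3  lightDir;                   // float3   lightDir;
mat4  modelView;                  // float4x4 modelView;
ivec2 tileIndex;                  // int2     tileIndex;

Same semantics, same layout, different keywords for no good reason.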

It's not that big of a deal, just slightly annoying. I don't quite get their rationale against other datatypes or precision hints besides FP32. They sweep it away with a few arguments (that I disagree with), yet they leave an int datatype in the language that on most hardware gets promoted to FP32 anyway and has many of the same disadvantages they argue against.

They see the need for framebuffer reads, stencil ops, and aux buffers, but leave them out of this spec. Why not bite the bullet all at once? No one's hardware runs this at the moment anyway (although I suspect 3DLabs' will, since this is essentially a language designed for 3DLabs' HW rammed through the ARB).

I find it a real letdown. I thought the Stanford guys had a much better approach to abstracting and unifying everything.
 
DemoCoder said:
Well, they made some really dumb choices, like instead of calling something float4 to remain in sync with HLSL and Cg, they call it vec4.
Please feel free to tell me I'm deranged, but wasn't the OGL 2 spec/proposal out before the others?
 
I must admit, after skimming through the spec, I was a bit bewildered by them using column-major order for matrices when C uses row-major. I can't quite see the point.
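If I'm reading the spec right, both the constructors and the indexing go column-first, which is backwards from what a C programmer writing a 2D array initializer would expect:

// Column-major: consecutive constructor arguments fill a column at a time.
mat2 m = mat2(1.0, 2.0,    // first column,  m[0]
              3.0, 4.0);   // second column, m[1]

vec2 col  = m[0];          // (1.0, 2.0) -- a column, not a row
float m10 = m[1][0];       // column 1, row 0 -> 3.0

In C, float m[2][2] = {{1,2},{3,4}}; would give you rows instead.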
 
Simon F said:
DemoCoder said:
Well, they made some really dumb choices, like instead of calling something float4 to remain in sync with HLSL and Cg, they call it vec4.
Please feel free to tell me I'm deranged, but wasn't the OGL 2 spec/proposal out before the others?

Float4 comes from Stanford's RTSL, published in 2000; Cg and DX9 HLSL picked it up from there. Both Cg and HLSL shipped as products. Fact is, they had ample time to rename some things to be in line not only with the historical source material (Stanford RTSL) but also with two shipping products that people are using today and for which published books and SDKs exist.

Like I said, it's not a big issue, but since all three of the main HLSLs are mainly C subsets and don't really have any features that differentiate their semantic power, why not harmonize? It's kind of lame that I have to write in one HLSL for OpenGL and another for DX9.

I guess this will be taken care of once people start producing cross compilers, but again, the nature of the GL2 binding to OGL might make this problematic. The DX9 HLSL is actually more abstract.
 
One of the striking differences is how shader inputs/outputs are declared.

I think that as long as you model yourself on C, it's more natural to pass arguments to a function as, well, arguments, instead of using global variables.

It's just not good programming practice.

On the other hand, I'm not too fond of the I/O semantics in HLSL; they could have just defined "reserved" variable names as in SLang.
Matching VS output to PS input by name instead of by register number is a cool thing. Unfortunately, it's impossible to do in HLSL due to architectural decisions.
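Roughly what the global-variable style and the matching-by-name look like in the GL2 language, as far as I can tell (shadeColor is just a made-up varying name):

// Vertex shader: outputs are globals rather than return values
varying vec3 shadeColor;          // linked to the fragment shader purely by name

void main()
{
    shadeColor  = gl_Color.rgb;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// Fragment shader: declaring the same varying name picks up the interpolated value
varying vec3 shadeColor;

void main()
{
    gl_FragColor = vec4(shadeColor, 1.0);
}

No TEXCOORDn-style semantics or register numbers anywhere; the identifier itself is the link.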
 
DemoCoder said:
Float4 comes from Stanford's RTSL, published in 2000; Cg and DX9 HLSL picked it up from there. Both Cg and HLSL shipped as products. Fact is, they had ample time to rename some things to be in line not only with the historical source material (Stanford RTSL) but also with two shipping products that people are using today and for which published books and SDKs exist.

vec4 or something similar (vector4, Vector4, Vec4, etc.) is pretty much the standard name in almost any 3D library used in game programming you care to mention, including Direct3D (D3DVECTOR4). Float4 is the aberration.
 
Typedefs are irrelevant. The issue is the primitive type keywords and the fact that float4 derives from Stanford's original work.

I could just as easily argue that "Integer" is way more prevalent in libraries than "int", so their int type should be named "Integer"
 
IIRC, MS and the ARB had some dispute about the HLSL. MS claimed some patents, etc. I don't know the details, but maybe this has something to do with the naming convention too.
 
Popnfresh said:
I thought your original argument was that they should harmonize with existing practice, which they have done.

You are arguing they should harmonize their primitive type names with conventions that some APIs use for Class or Struct names. I am arguing they should harmonize their primitive type names with the C language, previous shading languages (Stanford RTSL), and existing shipping standards (MS HLSL).

vec4 is a vector of what? ints? floats? doubles? float4 is a vector of floats. "float" implies IEEE. What does "vec" imply?

Again, according to your argument, existing practice in high level languages is to have "Int32" or "Integer" and "Float" objects defined as typedefs or in apis. Therefore, if they harmonized against existing practice across all programming languages, they should have used "Integer" or "Int32" instead of "int"

My opinion is that the harmonization should be against C, IEEE, and currently shipping products (like it or not, DX9 has been shipping for a while now and has a significant position in the market due to MS). At best, you could argue they should harmonize against the OpenGL C binding, but then you'd either have to use arrays (float vec[4]) or GL-style typedefs (GLint, GLvec4).


I mean, why take an adversarial not-invented-here position on something so trivial to fix? ARB considered a request to alter the type names and turned it down for rather irrelevant reasons.

The result will be a real headache in the future when trying to harmonize MS HLSL and GL2 HLSL. Just because Microsoft is doing something doesn't mean you have to go against it.

IMHO, there should be one C-style HLSL, and the only differences between the MS and GL versions would be the standard library of functions provided, and the global variables and API state bindings, just like with C compilers today. Single syntax, multiple compiler implementations, and provided libraries and OS bindings.

The fact that the two HLSLs are so similar, yet subtly different, is worse than if they had been radically different, since it is likely to lead people who work in both worlds (DX and GL) to make dumb errors.
 
I think the point that's been missed here is that GL has always attempted to set its standards based on principle rather than expediency. Whether that's right or wrong (there are many arguments on both sides), it's the GL way.

I'll answer your specific points with this in mind. Note that I have not been involved in any ARB discussions on GL2 and speak only from what I know of GL and GL2.

DemoCoder said:
vec4 is a vector of what? ints? floats? doubles? float4 is a vector of floats. "float" implies IEEE. What does "vec" imply?
Nothing, and that's the point. It explicitly gets away from 'This is represented as this' in the same way that everything else in GL is.

DemoCoder said:
Again, according to your argument, existing practice in high level languages is to have "Int32" or "Integer" and "Float" objects defined as typedefs or in apis. Therefore, if they harmonized against existing practice across all programming languages, they should have used "Integer" or "Int32" instead of "int"
I'm a big C fan, but C is terrible in this respect, with its wishy-washy rules of 'int is this big, but might be bigger'. Why else would so many projects explicitly use 'int32', 'uint32', and similar typedefs, and standardise on 'byte', 'word', and 'dword' types?

Here GL2 defines a semantic with only limited implementation constraints, in the same way that texture mapping, geometry, etc. in GL are semantics with only very limited constraints such as repeatability. To specify a data type would imply an implementation.

DemoCoder said:
I mean, why take an adversarial not-invented-here position on something so trivial to fix?
I don't think it's meant to be adversarial - just wanting to do it 'properly'.

DemoCoder said:
IMHO, there should be one C-style HLSL, and the only differences between the MS and GL versions would be the standard library of functions provided, and the global variables and API state bindings, just like with C compilers today. Single syntax, multiple compiler implementations, and provided libraries and OS bindings.
Would having only one HLL be a good thing? Whether it would be or not, it's not likely to happen, because everyone believes every HLL has its flaws.

I think a lot of this will come down to personal preference. Since it is possible that a single included header file could make the type syntax pretty much identical, I don't think it's a particularly big headache.
 
Nothing, and that's the point. It explicitly gets away from 'This is represented as this' in the same way that everything else in GL is.

Well, if you want to go "typeless" for your main primitive, why bother with INT? Why bother with all the verbiage on precision? Fact is, GL is specifically based on saying "this is represented as this with these relaxations" to make outcomes precise. With regards to numerical computation, standards are needed, that's why we have IEEE. If the primitive is an IEEE float, all developers will understand what that means. If it is abstract, results will vary, which will explode the amount of testing that has to be done.

We want write-once, run-anywhere (with varying performance).




I don't advocate a single HLL; I have always argued for multiple HLSLs. My argument is: don't confuse developers with multiple but subtly different C look-alikes.
 
I prefer vec4 over float4 because traditionally in C/C++, when 'name-mangling' variables, you write <type><sizeoftype>, such as int32 (an integer made up of 32 bits), vec4 (a vector made up of 4 floats), or mat44 (a matrix of 4 rows and 4 columns), rather than <type><numberoftype>. Seeing float4 makes me think of a 4-byte float, which is the meaning assigned to it in SQL systems. vec4 matches well with existing practice in a great deal of the C/C++ in use today, including Direct3D.

You prefer float4 because that is what is used in the Stanford Shading Language and its derivatives. float4 matches existing practice for shading languages, and vec4 does not and seems deliberately unharmonious. (Yes, I gave my side a bigger paragraph than yours :devilish: )

I don't think this is the sort of argument where either side can be proven 'correct'. Is it necessary to continue?
 
No, it's an aesthetic preference and a political argument, not a technical one. Ultimately, I don't care, I just want a single C-like syntax harmonized between DX9 and GL2. It would be great if I could go to a library of shaders and just import them into DX9 or GL2 projects without a lot of hassle, save maybe a few #ifdefs based on the way API state is bound to shader state.

If the languages supported typedefs, you could make shaders portable across compilers with something like:

#ifdef GL2
typedef vec4 float4;
#endif

(or the converse on the HLSL side: typedef float4 vec4;)
I guess you could also do it with the preprocessor:
#define float4 vec4

Of course, there are other incompatibilities that can't be #defined and typedef'ed away.
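For instance, the GL2 language pulls fixed-function state in through built-in globals, while DX9 HLSL expects you to declare the constant yourself, have the app set it, and tag the output with a semantic; no amount of #define gets you from one model to the other. A trivial GL2 vertex shader already leans on that built-in state:

void main()
{
    // gl_ModelViewProjectionMatrix and gl_Vertex come straight from GL state;
    // the HLSL version needs an explicitly declared float4x4 constant and a
    // POSITION output semantic instead.
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}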
 
On another note....

It seems ATI has been tracking the beta specifications within their drivers. Someone has already gotten the 3DLabs SDK code partially working on their Radeon 9500 card.

I got the simple shader example working on my radeon 9500 Pro by hacking around a bit. Basically I just took out some error checking (the ATI driver doesn't accept some of the newer enums like GL_OBJECT_LINK_STATUS_ARB) and changed the extension name suffixes to GL2.

Full message in link.

Considering ATI's driver release schedule, this bodes quite well for it being in a lot of developers' hands quickly.
 