Deano Calver
Spent today down at a 3DLabs seminar on GLSL and OpenGL 2.0. I'm going to do a proper write-up for Beyond3D, but thought I'd jot down a few thoughts before I forget them.
Overall GLSL looks good, and the seminar managed to convince me that it's the right model (compiler in the driver and state tracking) compared to D3D. It's going into core by Siggraph 2004, so we can expect everybody to support it.
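For anyone who hasn't poked at it yet, here's roughly what the driver-side compile looks like through the current ARB entry points. This is just a sketch from memory, and the names will presumably lose the ARB suffixes once it folds into the 2.0 core:

    /* Minimal sketch: hand raw GLSL source straight to the driver, which
       compiles and links it; no offline compile step as with D3D HLSL. */
    const char* src = "void main() { gl_FragColor = vec4(1.0); }";

    GLhandleARB shader  = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
    GLhandleARB program = glCreateProgramObjectARB();
    GLint compiled = 0;

    glShaderSourceARB(shader, 1, &src, NULL);
    glCompileShaderARB(shader);              /* the compiler lives in the driver */
    glGetObjectParameterivARB(shader, GL_OBJECT_COMPILE_STATUS_ARB, &compiled);

    glAttachObjectARB(program, shader);
    glLinkProgramARB(program);
    glUseProgramObjectARB(program);          /* built-in state (matrices etc.) is tracked for you */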
There are still a few issues (as I see them) that need to be sorted before it gets moved from an ARB extension into the OpenGL core.
The big one is shader language extension semantics: currently it's not defined how IHVs will extend the language. NVIDIA have taken the lead with a bunch of extensions for their NV40 implementation, which seems to have caused some friction in the ARB. ATI are working out an official extension naming convention before GLSL becomes core.
This is already an issue, as GLSL doesn't natively support multiple render targets, so an extension is already on its way to add that support.
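To give an idea of the shape that will probably take (a sketch based on the ARB_draw_buffers work, so the details may change before it ships): the fragment shader writes gl_FragData[n] instead of gl_FragColor, and the app says which colour buffers those outputs land in:

    /* Sketch only: MRT via the ARB_draw_buffers route. */
    const char* mrtFrag =
        "void main() {\n"
        "    gl_FragData[0] = vec4(1.0, 0.0, 0.0, 1.0);\n"   /* first target  */
        "    gl_FragData[1] = vec4(0.0, 0.0, 1.0, 1.0);\n"   /* second target */
        "}\n";

    /* Map the two shader outputs onto whatever colour buffers you have set up. */
    GLenum bufs[2] = { GL_AUX0, GL_AUX1 };
    glDrawBuffersARB(2, bufs);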
Another issue is vertex texturing (vertex shader texture reads). The standard defines that a vertex texture unit must support all sampler types (sampler1D, sampler2D, sampler3D, samplerCube and the shadow samplers), even though some of these are of little use per-vertex. A lot of hardware may only support sampler2D and so will have to claim no vertex texture units at all (GLSL allows a valid implementation to have zero vertex texturing units, which actually means GLSL hardware may be less capable than D3D VS 3.0). While it keeps things orthogonal, it seems silly to deny support for no good reason (hands up, how many people need to do shadow mapping in a vertex shader?)
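For what it's worth, the case most people actually want is plain heightmap displacement, which only needs sampler2D. A rough sketch (assuming the implementation reports at least one vertex texture unit):

    /* Sketch: displacement mapping in the vertex shader, the common case
       that only needs a single sampler2D lookup. */
    const char* displaceVert =
        "uniform sampler2D heightMap;\n"
        "uniform float scale;\n"
        "void main() {\n"
        "    float h = texture2D(heightMap, gl_MultiTexCoord0.xy).r;\n"
        "    vec4 displaced = gl_Vertex + vec4(gl_Normal * h * scale, 0.0);\n"
        "    gl_Position = gl_ModelViewProjectionMatrix * displaced;\n"
        "}\n";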
Another sad thing was the acceptance that some things are just going to be ignored, noise() being the big one. While no hardware has a noise function, OpenGL has a clear rule for this: fall back to software. But it seems to have been accepted that in this case it's OK to just run the shader and ignore the noise instruction.
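For reference, this is the sort of shader that will quietly come out wrong on such an implementation (sketch only):

    /* Sketch: fragment shader using the built-in noise1(), which returns
       values in roughly the -1..1 range. On an implementation that ignores
       noise, the colour just comes out flat instead of falling back to software. */
    const char* noiseFrag =
        "varying vec3 objPos;\n"
        "void main() {\n"
        "    float n = noise1(objPos * 4.0);\n"
        "    gl_FragColor = vec4(vec3(0.5 + 0.5 * n), 1.0);\n"
        "}\n";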
But overall it looked and felt good; for the first time in a long time, OpenGL has managed to get ahead. The shading language and API seem to be the best implementation of hardware shaders so far. It has had the benefit of seeing the other implementations' faults, so it looks like this one should last in its current form for some time.
Hopefully OpenGL 2.0 will also take the opportunity to bring things like floating-point textures and render targets into the core, so that these vital features are properly supported without extensions.
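Right now that means going through vendor extensions; for example, something along these lines with ATI_texture_float (the internal format token is vendor specific, which is exactly the problem):

    /* Sketch: a 32-bit float RGBA texture today needs a vendor internal
       format token; a core format would make this portable. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA_FLOAT32_ATI,
                 256, 256, 0, GL_RGBA, GL_FLOAT, NULL);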