Where is OpenGL 3?

When OpenGL 3 comes out it will only target Shader Model 2 cards; a few months later the spec for Shader Model > 2 will come out. In the meantime, how will developers interface with GL for SM3.0+ support?

Will they use the Mt Evans interface with extensions to access SM3+ features or will things be as they are now?
 
For someone not initiated in the world of API developers, I just have a question about the group's scheduling. It doesn't seem like they (the OGL group) are in any hurry to improve; could they be waiting and talking to ATi and nVidia about what to push for, to make something more evolutionary (revolutionary?) than DirectX 10.1 was/is?

For an outside observer like me, Microsoft seems very prolific compared to OGL, with what, 2-3 revisions of DX9 and now DX10 & DX10.1 within the API's lifetime. Is that a good thing for OGL devs or...?

Lots of questions, I hope you'll bear with me. I do know there is a consortium or forum behind OGL with a different name than "OGL Group", I just can't remember right now what it was, hehe.
 
Khronos is made of IHVs (ATi, nVidia, PowerVR, 3DLabs...), so yes, they are pretty well aware of where they are going...

Each new DirectX version required a lot of code changes from programmers, and each is tailored to a set range of hardware (more or less; very true for D3D10), whereas OpenGL uses extensions to provide new functionality (so you keep your existing codebase and extend it; this will change with OpenGL 3.0, though).
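To make the extension mechanism concrete: pre-3.0, an app discovers extensions at runtime by scanning the space-separated string returned by glGetString(GL_EXTENSIONS). A minimal sketch of that check in C (the helper name is mine, not a GL function; note that a plain strstr() is unsafe because one extension name can be a prefix of another):

```c
#include <string.h>

/* Check for an extension name in a space-separated extension list,
 * like the string returned by glGetString(GL_EXTENSIONS).
 * A bare strstr() is not enough: "GL_EXT_gpu_shader" would falsely
 * match inside "GL_EXT_gpu_shader4". */
int has_extension(const char *ext_list, const char *name)
{
    size_t len = strlen(name);
    const char *p = ext_list;

    while ((p = strstr(p, name)) != NULL) {
        /* must be preceded by start-of-string or a space ... */
        int starts_ok = (p == ext_list) || (p[-1] == ' ');
        /* ... and followed by a space or end-of-string */
        int ends_ok = (p[len] == ' ') || (p[len] == '\0');
        if (starts_ok && ends_ok)
            return 1;
        p += len; /* false hit, keep scanning */
    }
    return 0;
}
```

An app would then branch to an extension-specific code path only when the check succeeds, which is exactly how existing codebases get extended rather than rewritten.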

OpenGL 3.0 is the first major OpenGL rewrite since the API was born in the early 90's.
They are making sure it'll be as simple as possible, so that drivers are more robust and faster, and of course as future-proof as previous OpenGL versions were...

That said it's been a long time since the "OpenGL 2.0 Pure" 3DLabs proposal, which was all about making the API simpler for both programmers and driver writers, so as to make it faster and drivers more reliable...
 
When OpenGL 3 comes out it will only target Shader Model 2 cards; a few months later the spec for Shader Model > 2 will come out. In the meantime, how will developers interface with GL for SM3.0+ support?

I thought it was actually targeting up to SM3.0, and Mt. Evans was going to target SM4.0+.
 
Will they use the Mt Evans interface with extensions to access SM3+ features or will things be as they are now?
OpenGL doesn't have "shader models". A GLSL if statement will compile down to a branch on SM3 hardware if the driver feels it'll be faster that way. Any formats for textures, buffers and render targets will also be supported out of the box as creation will just fail if the hardware doesn't support the chosen format.

It's just not correct to say that OpenGL 3 won't support SM3+ hardware. It'll support a lot of this hardware's features, but not all of them (geometry shaders, for instance).
 
For an outside observer like me, Microsoft seems very prolific compared to OGL, with what, 2-3 revisions of DX9 and now DX10 & DX10.1 within the API's lifetime. Is that a good thing for OGL devs or...?

At least with respect to the shader languages, GLSL didn't need the same kind of revisions that DX did, because the language doesn't have quite the same concept of resources that DX does. So from day one it was basically on par with SM4.0+ in supported features: http://en.wikipedia.org/wiki/High_Level_Shader_Language

Of course the problem with that is that the hardware does not necessarily support it, which leaves the developer to deal with the difference.
 
OpenGL 3.0 is the first major OpenGL rewrite since the API was born in the early 90's. They are making sure it'll be as simple as possible, so that drivers are more robust and faster, and of course as future-proof as previous OpenGL versions were...

Starting anew, I take it they will be using today's hardware as a baseline and their imagination as the future? How do you see to it that extensions are supported by HW devs?

Thank you for the informative link Killer-Kris
 
Starting anew, I take it they will be using today's hardware as a baseline and their imagination as the future? How do you see to it that extensions are supported by HW devs?

Thank you for the informative link Killer-Kris

GF5+/RadeOn 9500+ as the baseline for OpenGL 3.0; 3.1 should add support for GF8+/RadeOn HD2+. As for the future, they can use their own roadmaps and future-product research to know where to look...

Not sure what you mean about extensions, but Khronos is made up of hardware vendors: they design the API and add extensions, and software vendors use the API and those extensions.
 
OpenGL doesn't have "shader models". A GLSL if statement will compile down to a branch on SM3 hardware if the driver feels it'll be faster that way. Any formats for textures, buffers and render targets will also be supported out of the box as creation will just fail if the hardware doesn't support the chosen format.

It's just not correct to say that OpenGL 3 won't support SM3+ hardware. It'll support a lot of this hardware's features, but not all of them (geometry shaders, for instance).

Surely geometry shaders will be supported via extensions within the object model when GL 3.0 comes out?
 
So from day one it was basically on par with SM4.0+ with supported features
That's terribly untrue... language design issues aside, GLSL needs at the very least EXT_gpu_shader4 to get anywhere near SM4.0-style features and functionality... and that extension changes a big chunk of stuff in GLSL!

Of course the problem with that is that the hardware does not necessarily support it and it leaves it up to the developer to deal with it.
Incidentally, this is one of the biggest issues with using GL in a production environment. You may write some nice code that works flawlessly on whatever hardware you are testing/debugging on, but something as seemingly innocuous as a driver update could make everything break. New cards and different IHVs almost always require new paths for non-trivial programs, which poses a maintainability and support nightmare. FBOs are a particularly terrible spec where these issues are concerned (there's no guarantee that any combination of formats will ever work... the driver is free to just veto stuff at will with no way to query what DOES work), but GLSL has its share of similar issues.
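To illustrate that FBO point, here's a toy C sketch of the probe-until-complete loop apps end up writing. Everything here is mocked: fbo_status() stands in for glCheckFramebufferStatus() with a made-up driver policy, since there's no way to ask up front which format combinations will work.

```c
/* Mocked sketch of the FBO "probe until something works" dance.
 * fbo_status() is a stand-in for glCheckFramebufferStatus(), with an
 * invented driver policy; real apps loop over candidate formats the
 * same way because the driver can veto any combination. */

enum { COMPLETE, UNSUPPORTED };

typedef enum { COLOR_RGBA8, COLOR_RGBA16F, COLOR_RGB10A2 } ColorFmt;

/* pretend this particular driver only likes RGBA8 render targets */
static int fbo_status(ColorFmt fmt)
{
    return (fmt == COLOR_RGBA8) ? COMPLETE : UNSUPPORTED;
}

/* Try candidate formats in preference order; return the index of the
 * first one the driver accepts, or -1 if none work. */
int pick_working_format(const ColorFmt *candidates, int n)
{
    for (int i = 0; i < n; i++)
        if (fbo_status(candidates[i]) == COMPLETE)
            return i;
    return -1;
}
```

The maintainability problem follows directly: a driver update can change which branch of this loop you land on, silently moving you onto a code path you never tested.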

Don't get me wrong, the D3D model has its share of issues (Microsoft controlling the real-time graphics industry anyone? Bad bad bad!)... maybe something in between would be better. That's supposed to be what EXT_ extensions are for, but in several cases the compromises that need to be made on those are just too severe and numerous, as noted above.

I too am anxiously awaiting a long-overdue cleanup of the GL API, and I hope we hear at least a news update on it soon.
 
Surely geometry shaders will be supported via extensions within the object model when GL 3.0 comes out?
Possibly, but I think they'd rather wait until 3.1 to implement it, when they're going to add SM4 support "for real". It's supposed to be released about 6 months after the 3.0 spec. (So that's at least a year if history repeats. ;))

The driver is free to just veto stuff at will with no way to query what DOES work
Isn't that one of the main things they're looking to change in OGL 3 though? Hardware is still fairly free to expose what features it supports, but if you try to do something it doesn't, creation of that format-object, VAO, render-target, etc. will fail. GL3 should never fall back to software rendering like GL2 can do at times.
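A rough illustration of that "create and see" model, with entirely made-up names and a mocked driver capability table (not the real GL3 object API, whose final names weren't public at the time): the application never queries capabilities, it simply attempts creation and checks for failure.

```c
#include <stddef.h>

/* Hypothetical sketch of the "create and see" model. All names here
 * are invented for illustration; the point is only the pattern:
 * creation fails if the (private) driver table says "unsupported",
 * and there is never a software-rendering fallback. */

typedef enum { FMT_RGBA8, FMT_RGBA32F, FMT_COUNT } Format;

/* stand-in for the driver's private capability table */
static const int driver_supports[FMT_COUNT] = {
    1, /* FMT_RGBA8: everything supports this    */
    0, /* FMT_RGBA32F: pretend this GPU doesn't  */
};

typedef struct { Format fmt; } Texture;

static Texture tex_pool[16];
static size_t tex_used = 0;

/* Returns NULL on failure -- that failure is the only way the app
 * learns the format is unsupported. */
Texture *create_texture(Format fmt)
{
    if ((int)fmt < 0 || (int)fmt >= FMT_COUNT || !driver_supports[fmt])
        return NULL;
    if (tex_used == sizeof tex_pool / sizeof tex_pool[0])
        return NULL;
    tex_pool[tex_used].fmt = fmt;
    return &tex_pool[tex_used++];
}
```

The design trade-off is exactly the one debated above: the app finds out what works only by trying, but in exchange it can never be silently dropped onto a slow emulated path.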

Now you can of course argue that a capability lookup system would be better than the "create and see" scheme (many have), but that's another story.
 
So do you believe we will see a break in backwards compatibility at OGL3, Andy? I mean, for hardware devs and their drivers to work well with "legacy" code, they would need a solid and untouched part of the driver, wouldn't they?
 
So do you believe we will see a break in backwards compatibility at OGL3, Andy
I'm not Andy, but GL3 will be a clean break from GL2. This is done to clean up the API (there's just too much legacy crud in there that no one should use) and consequently make the driver writers' jobs easier. There is supposed to be some interoperability between GL2 and GL3, though, meaning you can switch to GL3 while still using some of your legacy GL2 code (resources like textures and buffers can be used across versions, etc.). I don't think this has been properly confirmed yet, though; it's more of a thing they hope they're able to do, I think.
 
That's terribly untrue... language design issues aside, GLSL needs at the very least EXT_gpu_shader4 to get anywhere near SM4.0-style features and functionality... and that extension changes a big chunk of stuff in GLSL!

OK, so I definitely overstated the issue. But given that the topic I was addressing was the multiple revisions of DX, I still think it's fair to say that GLSL was at or very near fully SM3.0 feature-complete on day one, and it still exceeds SM4.0 when it comes to resource limits.
 
Possibly, but I think they'd rather wait until 3.1 to implement it, when they're going to add SM4 support "for real". It's supposed to be released about 6 months after the 3.0 spec. (So that's at least a year if history repeats. ;))

Isn't that one of the main things they're looking to change in OGL 3 though? Hardware is still fairly free to expose what features it supports, but if you try to do something it doesn't, creation of that format-object, VAO, render-target, etc. will fail. GL3 should never fall back to software rendering like GL2 can do at times.

Now you can of course argue that a capability lookup system would be better than the "create and see" scheme (many have), but that's another story.
I've never really understood how OpenGL works. From a Direct3D developer's point of view, it is pretty simple: DirectX 9.0 supports SM2, DirectX 9.0c supports SM3, DirectX 10 supports SM4. Each DirectX revision contains a "must-have" set of features (especially true of DirectX 10, where most features are guaranteed to "just work").

But as I understand it, OpenGL doesn't have the concept of shader models. Instead the core API is a lot more uniform, with extra functionality being added with extensions. Is this correct?

If so, then how do developers specify minimum requirements for their software? I mean, in DirectX, you can say "minimum of DirectX10 is required", and you're guaranteed that SM4, geometry shaders, etc. will just work. How would this be expressed in OpenGL terms?
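For what it's worth, pre-GL3 apps typically express "minimum requirements" by checking the version string plus the extension list at startup. A small sketch of the version part in C (meets_min_version is my own helper, not a GL call; the string format matches what glGetString(GL_VERSION) returns, e.g. "2.1.2 NVIDIA 169.12"):

```c
#include <stdio.h>

/* Parse the leading "major.minor" of a GL version string (the form
 * returned by glGetString(GL_VERSION)) and compare it against a
 * required minimum. Combined with per-extension checks, this is how
 * an app states "you need at least GL X.Y plus these extensions". */
int meets_min_version(const char *version, int req_major, int req_minor)
{
    int major = 0, minor = 0;

    if (sscanf(version, "%d.%d", &major, &minor) != 2)
        return 0; /* unparseable string: refuse to run */

    return (major > req_major) ||
           (major == req_major && minor >= req_minor);
}
```

The catch, as noted in the thread, is that a high version number alone guarantees less than a D3D version does, which is why the extension checks have to come along for the ride.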
 