OpenGL 4.3 (with compute shaders!)

I literally have no idea what you are talking about. I don't know why you think I'm against extensions in general (I'm not!). There are situations where the extension system in OpenGL can work well (e.g. closed systems), but for the most part they are unused. Thus when people say things like "OpenGL 4.3 offers features beyond DX11.1 even on older non-DX11.1 hardware or OSes, thanks to a vendor extension mechanism" or "It is a cycle which enables innovation and rapid progress, where no single vendor has absolute control, so things are relatively fair for everyone involved: including consumers, hardware vendors, software vendors, and os vendors" I can't help but roll my eyes. They play such an insignificant part!!!
 
I disagree here. I get the impression that he believes extensions are a benefit to everyone, and that their main purpose is to make OpenGL more agile. Whereas I believe extensions are only useful in a small subset of situations and that, in reality, they do not make OpenGL any more agile. I also don't think extensions make OpenGL "fairer" as he claims.
It may be true that a minority of developers directly benefits from using extensions, but like Timothy Lottes I believe that extensions have a substantial impact on shaping the API. Making a feature accessible to developers and essentially beta-testing it and its API is very useful for deciding what goes into the next core API update.

I didn't mean to imply we should remove extensions. I just don't think they've ever been a good tool at making OpenGL more agile. If Khronos' aim is to allow OpenGL to change more rapidly than Direct3D, they would be better served to do it through small updates instead of relying on extensions.
Again, I don't think there's an "instead" here. But core API updates only make sense when you have an agreed upon and tested set of new features, however large or small, and that's where the "extension first, core later" model helps.
 
Again, I don't think there's an "instead" here. But core API updates only make sense when you have an agreed upon and tested set of new features, however large or small, and that's where the "extension first, core later" model helps.

And yet, despite that, DirectX has been more nimble and far faster to innovate and implement than OGL has. That is due almost directly to Microsoft's ability to talk to the various IHVs, not only to determine where they think the technology is going, how feasible it is to implement, and how long it will take to implement, but also to talk to software developers to find out where they think the technology is going, what they wish to see in the near and far future, and what priority they place on things.

Their ability to work with both sides of the aisle (hardware and software) in order to come up with a coherent and standard set of features (that is both possible to implement on the hardware side and desirable to developers on the software side) and to enforce those has allowed for far more rapid development in the 3D space than OGL has. NOTE - this isn't to say that the extension system in OGL hasn't led to some interesting innovations.

The closest thing to the extension system in D3D was caps, but those were unpopular with software developers and led to product confusion among consumers, hence MS tried to drop them in Dx10 and Dx11. Only, apparently, to see them possibly make a reappearance in Dx11.1 (likely at the behest of one IHV or another, but not at the request of software devs). I have a feeling they will remain just as unpopular. Basically, extensions mostly benefit the hardware vendors, might benefit a "few" software vendors, and are mostly a mess for the end consumer (do they have the right hardware? did they buy from the right IHV? does their OGL level X card support some esoteric extension only available on one card from one vendor? Etc.).

Regards,
SB
 
I didn't mean to imply we should remove extensions. I just don't think they've ever been a good tool at making OpenGL more agile. If Khronos' aim is to allow OpenGL to change more rapidly than Direct3D, they would be better served to do it through small updates instead of relying on extensions.
Extensions don't substitute for core functionality; they are fundamentally a transport for what gets into the core. As such, they are indispensable for the standard. As a 'side-effect', extensions also allow vendors to properly expose their hardware to developers when needed - think embedded development.
 
And yet, despite that, DirectX has been more nimble and far faster to innovate and implement than OGL has. That is due almost directly to Microsoft's ability to talk to the various IHVs, not only to determine where they think the technology is going, how feasible it is to implement, and how long it will take to implement, but also to talk to software developers to find out where they think the technology is going, what they wish to see in the near and far future, and what priority they place on things.

Their ability to work with both sides of the aisle (hardware and software) in order to come up with a coherent and standard set of features (that is both possible to implement on the hardware side and desirable to developers on the software side) and to enforce those has allowed for far more rapid development in the 3D space than OGL has.
OpenGL has been lagging behind for a while, but I doubt that has anything to do with extensions. Unlike Microsoft, Khronos essentially is the sum of its members, so it simply can't operate the way Microsoft does. Though you'd be mistaken to think that OpenGL does not take in voices from both sides of the aisle.

The closest thing to the extension system in D3D was caps, but those were unpopular with software developers and led to product confusion among consumers, hence MS tried to drop them in Dx10 and Dx11. Only, apparently, to see them possibly make a reappearance in Dx11.1 (likely at the behest of one IHV or another, but not at the request of software devs). I have a feeling they will remain just as unpopular. Basically, extensions mostly benefit the hardware vendors, might benefit a "few" software vendors, and are mostly a mess for the end consumer (do they have the right hardware? did they buy from the right IHV? does their OGL level X card support some esoteric extension only available on one card from one vendor? Etc.).
Unless you absolutely require an extension feature (which only makes sense in very specific circumstances) I'm not sure that consumers would have to care about that kind of low-level information.
Caps blur the line between core and optional feature sets. With extensions that line is quite clear.
 
I have to agree that I think OpenGL - while slowly improving - has been in a unique position to actually change up the landscape of rendering APIs by doing truly forward-looking stuff like moving to explicit command buffer creation, dependencies, etc. And yet they seem content to just follow DX (almost exactly), which is kind of sad at this point. And with all of their added checkbox features, they still haven't addressed some of the basic problems in the API (direct state access is a good example, but I want to see far more than just that done)!

But that said, the reality is that the core concept of GL (design by democracy) has just proven far less effective than DX's "benevolent dictator" model. In the GL space there is no one that can really force people to standardize on hardware and features and quite frankly, that's why they end up following as well: because they can count on the features in DX being available on all GPUs as that is the primary design target.

Secondarily but not less important is the fact that no one writes good software infrastructure around GL and drivers. The Khronos conformance tests are a joke in terms of actually testing the entire API (including failure mode testing), which directly impacts driver quality. I shouldn't harp on GL too much here though as CL is 100x worse.

Regarding extensions, it's nice in theory that they exist formally in GL but it doesn't really matter much in practice. GL itself is already dicey enough that you can't really trust it to be fully portable (due to not really knowing if you've inadvertently relied on a non-standard feature that just "happens to work" on one implementation), and once you add extensions you might as well be programming directly to a card's command buffer format.

Extensions are a proof of concept that allows key stakeholders to fool around with a feature and then pressure the relevant standards, but none of these uses require a public mechanism like GL has. And let's not forget that all of the big GPU vendors have "unofficial" DX extension APIs as well that actually get used in games, unlike a lot of GL extensions.
 
OpenGL is pretty dominant in the non-gaming realtime 3D rendering sector of the market. D3D has mostly failed there as far as I can tell.
 
OpenGL is pretty dominant in the non-gaming realtime 3D rendering sector of the market. D3D has mostly failed there as far as I can tell.
True, but that has almost nothing to do with the API. In fact, those folks are still mostly stuck on OGL 1.x. I'll also note that the folks that choose "open" APIs for philosophical reasons (like academics) are beside the point of this discussion. While that's great and all, it's unrelated to which API is actually more practically useful for shipping a game on a wide variety of hardware and earning money.
 
But that said, the reality is that the core concept of GL (design by democracy) has just proven far less effective than DX's "benevolent dictator" model.

I disagree that OpenGL is democratic in any way. The extension mechanism is anarchic (non-hierarchical, non-authoritative, and voluntary), and I wouldn't call the core the result of a democratic process, as it's not shaped by consensus; if consensus had been enough to get features into the core, it would have been a very different story. Instead it's plutocratic: members with enough power set the rules.

In my opinion, part of the reason the ideal doesn't coincide much with reality is that there is a great deal of authoritarian acceptance on the supposed consumer (developer) side. They accept that change a) has to be approved, b) has to be supported by hardware vendors, and c) comes from the board and/or hardware vendors.

It certainly isn't clear to most who are connected to it that they can think as far out-of-the-box as they want to and are collectively able to contribute to the shape of it.

Let's take an example. If you know as a software vendor that hardware supports something in a way you need and an interface doesn't exist, it would be smart to conceptualize and isolate a possible interface, off-load it into extension form, do a reference implementation, and submit it to the database. Other software vendors can pick it up, and in the end even hardware vendors may pick it up.

This of course is idealistic. But you'll recognize a lot of the cathedral vs. bazaar philosophy discussions from open source. As long as people believe in the authoritative component in all of OpenGL, it can't become the rich hub of experimental evolution which is certainly imaginable with the current structure of the board and the openness to any kind of extension.

At the end of the day, OpenGL+ext fails the way it fails not because of a technical limitation, but because of the mental model and actions of all the individuals involved in it. OpenGL didn't manage what Linux managed.
 
I have to agree that I think OpenGL - while slowly improving - has been in a unique position to actually change up the landscape of rendering APIs by doing truly forward-looking stuff like moving to explicit command buffer creation, dependencies, etc. And yet they seem content to just follow DX (almost exactly), which is kind of sad at this point. And with all of their added checkbox features, they still haven't addressed some of the basic problems in the API (direct state access is a good example, but I want to see far more than just that done)!

But that said, the reality is that the core concept of GL (design by democracy) has just proven far less effective than DX's "benevolent dictator" model. In the GL space there is no one that can really force people to standardize on hardware and features and quite frankly, that's why they end up following as well: because they can count on the features in DX being available on all GPUs as that is the primary design target.

Secondarily but not less important is the fact that no one writes good software infrastructure around GL and drivers. The Khronos conformance tests are a joke in terms of actually testing the entire API (including failure mode testing), which directly impacts driver quality. I shouldn't harp on GL too much here though as CL is 100x worse.

Regarding extensions, it's nice in theory that they exist formally in GL but it doesn't really matter much in practice. GL itself is already dicey enough that you can't really trust it to be fully portable (due to not really knowing if you've inadvertently relied on a non-standard feature that just "happens to work" on one implementation), and once you add extensions you might as well be programming directly to a card's command buffer format.

Extensions are a proof of concept that allows key stakeholders to fool around with a feature and then pressure the relevant standards, but none of these uses require a public mechanism like GL has. And let's not forget that all of the big GPU vendors have "unofficial" DX extension APIs as well that actually get used in games, unlike a lot of GL extensions.

I think GL's stagnation is just a sign of Khronos' inability to agree on anything. That and the gradual decay of desktop as a platform.

First the latter. If you have to use Linux/Mac, then there is no option whatsoever. Even on Mac, if and when Apple deigns to upgrade to a ~3 year old API, we'll get what has been available to every DX client for 3 years now.

And even on desktop, I see MS as gradually focusing away from it. Next-gen consoles will be DX11 class, and when they come out everyone will switch to DX11 as their primary platform, leaving evolution of the API in nobody's hands.

As for the former, GL's bind-to-use model is atrocious. While extensions are talked up all the time, the good ones are languishing. Nvidia has been the real innovator here. Direct state access and bindless graphics are great extensions that have been around for several years, so why the hell aren't they in core?

The situation on the CL front is worse. The hw is miles ahead of CL, and all CL has to offer is G80-class programmability. Its API is far better than GL's, but the lack of updates is showing. Some years ago, IIRC, there was a presentation going around saying OCL will be here in 2012. Well, it's 2012 by my watch and.... It's telling that AMD decided not to work with Khronos for HSA. Let's hope they will have better luck.

Finally, Khronos has never developed/evolved a major standard as far as I can tell. GL was donated by SGI. CL was donated by Apple. GL has just been following DX since ~2000 and CL has nobody to follow so it is stuck.

The sad reality is, IMO, the only people who use GL are those who are out of choices.
 
True, but that has almost nothing to do with the API. In fact, those folks are still mostly stuck on OGL 1.x. I'll also note that the folks that choose "open" APIs for philosophical reasons (like academics) are beside the point of this discussion. While that's great and all, it's unrelated to which API is actually more practically useful for shipping a game on a wide variety of hardware and earning money.

Universities have to support a student body that is increasingly turning to Macs, ruling out DX. There's more than just philosophy there.
 
Universities have to support a student body that is increasingly turning to Macs, ruling out DX. There's more than just philosophy there.
Maybe today, but GL has always been in wide use in academia, regardless of platform limitations. It was definitely philosophical at my university, although part of it was of course that universities still by and large teach immediate mode OpenGL 1.0 :S

Which actually is another part of the problem... OpenGL doesn't know whether it wants to be a high- or low-level API. While DX has been driving as quickly towards being low-level as possible and somewhat dragging OpenGL along with it, there is still a vocal minority of people who think that how "easy" something is to write in GL should factor into it (I am not one of them obviously). That sort of fundamental disagreement on where to target the API certainly contributes to its current state.
 
Maybe today, but GL has always been in wide use in academia, regardless of platform limitations. It was definitely philosophical at my university, although part of it was of course that universities still by and large teach immediate mode OpenGL 1.0 :S

Which actually is another part of the problem... OpenGL doesn't know whether it wants to be a high- or low-level API. While DX has been driving as quickly towards being low-level as possible and somewhat dragging OpenGL along with it, there is still a vocal minority of people who think that how "easy" something is to write in GL should factor into it (I am not one of them obviously). That sort of fundamental disagreement on where to target the API certainly contributes to its current state.

I doubt such people sit on the committee.
 
If anybody has gone through the compute shader spec, is there anything you can do in CL but not with the compute shader?

I couldn't find anything in a quick pass.
 
I doubt such people sit on the committee.
You'd be surprised...

And yes there are a few things you can do in CL but not compute shader, but it's sort of hard to tell for sure since it's unclear what exactly is legal in CL from the spec... it's not the best written spec, which is another part of the problem.
 