OpenGL deprecation myths

That pretty obviously has to be taken with a grain of salt. NVIDIA doesn't want to lose any OpenGL customers because of deprecated features. So they'll go to great lengths to ensure that old and new can live peacefully together.

Deprecation mainly helps the smaller companies. Writing an OpenGL implementation was increasingly becoming something for a select few who have been in the business since the beginning and have been able to build the whole lot up incrementally over many years.

So everything he writes is entirely true - from NVIDIA's point of view.
 
Previously I was under the impression that Nvidia was trying to maintain ISV mindshare with their compatibility support. This presentation puts it in another light. They are clearly not happy about the deprecation model at all, given how strongly they are arguing against it. So this whole thing seems to be driven entirely from within Nvidia rather than by the typical "slow to adopt CAD application programmers" who usually get the blame for slowing down OpenGL's progress. The deprecation model is the best thing that has happened to OpenGL in ages. It has, in my mind, returned OpenGL to being a viable API. It surprises me that Nvidia is the blocker here. If they are so against it, it just casts another shadow of doubt over the future of OpenGL for me.
 
Well, working and well-tested legacy OpenGL is one of the things that Nvidia (arguably, I guess) does better than anyone else. It's in their interest to keep that advantage, and the deprecation model works against that. It doesn't help OpenGL improve as an API, of course, which is something they need the other IHVs for, so it feels a bit like they're cutting off their nose to spite their face.
 
I don't understand what the objections are to slide 37. It's perfectly accurate.

You don't have to use every tiny corner feature of OpenGL if you don't want to. If you don't use stipple, it doesn't cost you anything to not turn it on.
 
My questions are

1) doesn't the core profile allow the driver to be much smaller than the compatibility profile?

2) NV is free to ship drivers which make legacy code run, but how does blocking the template/object based API help make ogl good for real time tasks?

3) For developers interested in cross platform real time 3d, shouldn't nv/ati/intel be working on a real-time profile to make ogl suitable/usable instead of pretending the compatibility profile works best?

4) why does compatibility profile have to be mutually exclusive to the real-time features?
 
You don't have to use every tiny corner feature of OpenGL if you don't want to. If you don't use stipple, it doesn't cost you anything to not turn it on.
No, but it costs me when Mesa's GLSL compiler still sucks because the developers are spending their time making legacy stuff like stipple work. It doesn't hurt Nvidia because to them this is already a solved problem, but it hurts all the other implementations, which in turn hurts OpenGL.
 
rpg.314 said:
My questions are

1) doesn't the core profile allow the driver to be much smaller than the compatibility profile?

2) NV is free to ship drivers which make legacy code run, but how does blocking the template/object based API help make ogl good for real time tasks?

3) For developers interested in cross platform real time 3d, shouldn't nv/ati/intel be working on a real-time profile to make ogl suitable/usable instead of pretending the compatibility profile works best?

4) why does compatibility profile have to be mutually exclusive to the real-time features?

1) Sure, you could save a few hundred KB of hard drive space with a smaller driver. Not a problem even for netbooks.

For embedded devices, there's already the much simpler OpenGL ES.

2) Huh? First of all, you're blaming the wrong party. Second of all, how does that have anything to do with Mark's presentation? And thirdly, have you tried to write OpenGL code using the template mechanism? After double-facepalming at the glaring complexity of doing even the simplest of tasks, I don't think you'll be wanting to use it either.

3) What's a "real-time profile"? If you mean a "core profile", it's there, and it works. Try it.

4) What are "real-time features"? If you mean "core features", they work in the compatibility profile too.

NVIDIA is pushing for the ability to use new core features along with deprecated features. Look elsewhere for IHVs/ISVs that want to make them exclusive to one profile or another.

Edit: See here for details.

wien said:
No, but it costs me when Mesa's GLSL compiler still sucks because the developers are spending their time making legacy stuff like stipple work.
Mesa already spent the time to make stipple work. The code is done. No need to tinker with it. It was done for OpenGL 1.0 compatibility.

If you want to roll out a new software renderer, you're free to ignore the compatibility profile for OpenGL. Everybody wins.
 
1) Sure, you could save a few hundred KB of hard drive space with a smaller driver. Not a problem even for netbooks.

I am surprised that the drivers would be smaller by only a few hundred KB. I thought it would be more.
For embedded devices, there's already the much simpler OpenGL ES.

2) Huh? First of all, you're blaming the wrong party. Second of all, how does that have anything to do with Mark's presentation? And thirdly, have you tried to write OpenGL code using the template mechanism? After double-facepalming at the glaring complexity of doing even the simplest of tasks, I don't think you'll be wanting to use it either.

AFAIK, the template API was never put into a spec or made public. So no, I haven't tried using the template-based API, and AFAIK nor has anyone else, publicly at least.
3) What's a "real-time profile"? If you mean a "core profile", it's there, and it works. Try it.

Quoting from here
The GL 2.1 object model was built upon the state-based design of OpenGL. That is, in order to modify an object or to use it, one needs to bind the object to the state system, then make modifications to the state or perform function calls that use the bound object.
Because of OpenGL's use of a state system, objects must be mutable. That is, the basic structure of an object can change at any time, even if the rendering pipeline is asynchronously using that object. A texture object can be redefined from 2D to 3D. This requires any OpenGL implementations to add a degree of complexity to internal object management.
Under the Longs Peak API object creation would become atomic, using templates to define the properties of an object which would be created with a single function call. The object could then be used immediately across multiple threads. Objects would also be immutable; however, they could have their contents changed and updated. For example, a texture could change its image, but its size and format could not be changed.

Reading this, I was under the impression that an object-based API would be more multithreading-friendly and faster.
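To make that concrete, here is a minimal sketch of the GL 2.1 bind-to-modify pattern the quote describes (plain C; the Longs Peak template side was never published, so only the existing API is shown, and the function and variable names are just illustrative):

Code:
#include <GL/gl.h>

/* GL 2.1 style: the object is mutable and every edit goes through
 * whatever is currently bound to the texture unit. */
GLuint make_mutable_texture(GLsizei w, GLsizei h, const void *pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);                /* a name only, no storage yet   */
    glBindTexture(GL_TEXTURE_2D, tex);     /* all edits go via the binding  */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    /* Nothing stops later code from calling glTexImage2D again on the same
     * name with a different size or format, so the driver has to cope with
     * the object's shape changing at any time. The Longs Peak idea was to
     * gather these properties into a template and create the object, fully
     * formed and immutable, in one call. */
    return tex;
}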

4) What are "real-time features"? If you mean "core features", they work in the compatibility profile too.
I mean API paths which are fast. I mean something like bindless graphics. glBindX() isn't as fast as it could be. At this point it seems like a good idea to ask: just how fast is the GL_EXT_direct_state_access API? Does it lower driver overhead compared to the classic bind-to-use API?
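For reference, this is roughly what the two paths look like, as a sketch in C assuming the GL_EXT_direct_state_access entry points are available (the function and variable names are made up for illustration). Whether the DSA path actually lowers driver overhead is exactly the open question:

Code:
/* Classic bind-to-use: save/bind/edit/restore the texture unit binding. */
void update_texture_classic(GLuint tex, GLuint previously_bound,
                            GLsizei w, GLsizei h, const void *pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glBindTexture(GL_TEXTURE_2D, previously_bound);   /* restore state */
}

/* GL_EXT_direct_state_access: the object name is an explicit parameter,
 * so the driver does not have to chase the current binding. */
void update_texture_dsa(GLuint tex, GLsizei w, GLsizei h, const void *pixels)
{
    glTextureSubImage2DEXT(tex, GL_TEXTURE_2D, 0, 0, 0, w, h,
                           GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}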

NVIDIA is pushing for the ability to use new core features along with deprecated features. Look elsewhere for IHVs/ISVs that want to make them exclusive to one profile or another.

Agreed, but that doesn't mean ogl as an api is competitive with dx for real time tasks just yet. And we need some new features pushed into ogl to make it competitive. And in some areas, it seems that big changes are necessary which aren't happening.
 
I don't understand what the objections are to slide 37. It's perfectly accurate.

The first three points are obviously inaccurate. Fourth is anyone's guess, and fifth is subjective.
Fewer features means smaller driver binary which means faster running code.
Fewer features means less functionality to maintain and more driver writer time available for the rest.
How anyone can consider an API without all the legacy crap not to be cleaner just boggles my mind.

You don't have to use every tiny corner feature of OpenGL if you don't want to. If you don't use stipple, it doesn't cost you anything to not turn it on.

It costs me to have it there. My API calls will pass through code in the driver that has to check whether stipple is currently enabled or not. Those checks cost cycles, and take space in the instruction cache, whether I'm using it or not.
 
You don't have to use every tiny corner feature of OpenGL if you don't want to. If you don't use stipple, it doesn't cost you anything to not turn it on.
That's a bit naive; API features are almost never orthogonal. They might cost you extra memory, extra development/debugging time, extra complexity where it wouldn't otherwise be needed, etc.
 
Since when is everyone worried about how much something costs NVIDIA? It's nice to know everyone cares about the company so much.

If you want to use the core profile without any of the old legacy features, it's already there for you. Go ahead. Try it.

Humus said:
The first three points are obviously inaccurate. Fourth is anyone's guess, and fifth is subjective.
Fewer features means smaller driver binary which means faster running code.
Fewer features means less functionality to maintain and more driver writer time available for the rest.
How anyone can consider an API without all the legacy crap not to be cleaner just boggles my mind.
Parts of the driver that aren't used aren't even loaded from disk. I'm not sure why you think larger binaries necessarily translate to faster or slower running code.

The driver code for the old features is already written. If it's being worked on, it's to make them better, which means that those features are actually being used by important customers.

Cleaner API != Easier to use. See below...

My API calls will pass through code in the driver that has to check whether stipple is currently enabled or not. Those checks costs cycles, and take space in the instruction cache, whether I'm using it or not.
How do you even know whether that's true or not? Enabling stipple could just be setting a single bit in the hardware, and the hardware takes care of the details. Btw, the hardware still needs to support stipple whether or not OpenGL 3.2 has it.

rpg.314 said:
AFAIK, template API was never put into spec/public. So no I haven't tried using the template based API. And afaik, nor has anyone else, publicly atleast.
I don't believe the spec was made public, no. That said, there were enough samples out there to make it clear that it was a bad idea in general.

My simple texture creation test took 14 OpenGL-LP calls to create what would require just 1 function call in OpenGL 1.2. It gets worse if you actually wanted to draw stuff.

[quote="Davros]q: does this stop old opengl games from working ?[/quote]
Not on NVIDIA systems. Old games and applications will continue to run, and can continue to be developed with new feature additions to OpenGL.
 
Cleaner API != Easier to use. See below...

I have seen people pine for glVertex3f(x,y,z) in ogl 3.x. Sure it is easier to use. But by pandering to these "needs", drivers have more overhead which kills performance. And if you want speed (I do), then you have to pay the price somewhere. There is no free lunch.

That's what I meant by a "real time profile". Something/some approach that lets me get more fps, even if it means a bit more code.
I don't believe the spec was made public, no. That said, there were enough samples out there to make it clear that it was a bad idea in general.

My simple texture creation test took 14 OpenGL-LP calls to create what would require just 1 function call in OpenGL 1.2. It gets worse if you actually wanted to draw stuff.

Linky?

Bottom line, even though nv is the last vendor standing pushing ogl, it is spouting fud about deprecation and is possibly a hindrance to api modernization.
 
rpg.314 said:
But by pandering to these "needs", drivers have more overhead which kills performance. And if you want speed (I do), then you have to pay the price somewhere. There is no free lunch.
Where is this magic overhead you speak of? You do realize that OpenGL still has lower draw-call overhead than D3D, right? Besides, having glVertex3f() does not slow down glDrawElements(), the latter being the fast path.
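To be clear about which paths are being compared, here is a rough sketch (C, with buffer setup and shaders omitted; identifiers are illustrative) of immediate mode next to the indexed, buffer-object fast path:

Code:
/* Legacy immediate mode: one API call (or more) per vertex. */
void draw_triangle_immediate(void)
{
    glBegin(GL_TRIANGLES);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
}

/* Fast path: indexed draw out of buffer objects, one call per batch. */
void draw_batch_fast(GLuint vbo, GLuint ibo, GLsizei index_count)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);
    glEnableVertexAttribArray(0);
    glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT,
                   (const void *)0);
}

The point is that the existence of the first function's entry points doesn't make the second function any slower.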

Now, you can quibble on whether VBOs as they are were a good idea or not, but a) that problem was NOT addressed by OpenGL-LP, and b) bindless graphics fixes that anyway, without throwing away compatibility.
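For those who haven't seen it, the bindless path looks roughly like this, a sketch assuming NV_shader_buffer_load and NV_vertex_buffer_unified_memory are exposed (buffer creation is omitted and the names are illustrative):

Code:
/* Fetch the buffer's GPU address once, then draw using addresses instead
 * of rebinding buffer names every frame. */
void draw_bindless(GLuint vbo, GLsizeiptr vbo_size, GLsizei vertex_count)
{
    GLuint64EXT vbo_addr;

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glMakeBufferResidentNV(GL_ARRAY_BUFFER, GL_READ_ONLY); /* pin in GPU memory */
    glGetBufferParameterui64vNV(GL_ARRAY_BUFFER,
                                GL_BUFFER_GPU_ADDRESS_NV, &vbo_addr);

    /* From here on the driver works with raw addresses, not bindings. */
    glEnableClientState(GL_VERTEX_ATTRIB_ARRAY_UNIFIED_NV);
    glEnableVertexAttribArray(0);
    glVertexAttribFormatNV(0, 3, GL_FLOAT, GL_FALSE, 0);
    glBufferAddressRangeNV(GL_VERTEX_ATTRIB_ARRAY_ADDRESS_NV, 0,
                           vbo_addr, vbo_size);
    glDrawArrays(GL_TRIANGLES, 0, vertex_count);
}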

You're right that there is no free lunch. Someone at NVIDIA will have to support those code paths. But that's a cost that NVIDIA will have to determine (why is everyone so worried about NVIDIA all of a sudden? It's like this wave of warm and fuzzy just went through the universe. Have we been visited by Carebears?).

it is spouting fud about deprecation and is possibly a hindrance to api modernization.
The FUD is all in your imagination. Here's a little homework question for you: list all the people who pushed for OpenGL-LP, then find out who they work for. You'll be surprised.

If you want to use OpenGL 3.2 Core-only, you can do that today. Go ahead. Try it. Really. If you think that non-accessible features in the driver are slowing you down, then maybe we can talk about that, and perhaps NVIDIA could go and do some profiling.

Arm-waving about a non-existent problem is not helpful.
 
Where is this magic overhead you speak of? You do realize that OpenGL still has lower draw-call overhead than D3D, right? Besides, having glVertex3f() does not slow down glDrawElements(), the latter being the fast path.
I did not know that gl had lower draw-call overhead.
Now, you can quibble on whether VBOs as they are were a good idea or not, but a) that problem was NOT addressed by OpenGL-LP, and b) bindless graphics fixes that anyway, without throwing away compatibility.
I have no idea what was planned for ogl-lp, and unless bindless graphics gets at least an ati implementation, its use will be very limited. Which is why I'd like it to be present in the gl spec or as an arb extension.

You're right that there is no free lunch. Someone at NVIDIA will have to support those code paths. But that's a cost that NVIDIA will have to determine (why is everyone so worried about NVIDIA all of a sudden? It's like this wave of warm and fuzzy just went through the universe. Have we been visited by Carebears?).
ATI apparently isn't pushing ogl ahead, so we are really left with only 1 serious ogl vendor. They are slower with api updates (still not at 3.2, afaik, and bindless is apparently out of the question).

The FUD is all in your imagination. Here's a little homework question for you: list all the people who pushed for OpenGL-LP, then find out who they work for. You'll be surprised.

I have no idea about who pushed for ogl-lp, and why it was shelved. If you can share some details about it, that would be helpful. Also, I don't buy the argument that implementing ogl-lp would have broken code. The old api would have been there for backward compatibility, and nobody would have broken a sweat over it.

A bigger question is: if deprecation isn't helpful, then why was it done in the first place? Also, how excited are devs about 3.2 core these days?

If you want to use OpenGL 3.2 Core-only, you can do that today. Go ahead. Try it. Really. If you think that non-accessible features in the driver are slowing you down, then maybe we can talk about that, and perhaps NVIDIA could go and do some profiling.

My issues are solely with how much mindshare ogl has lost over the years, and with what people here think of ogl 3.2 as an api. If they aren't happy with the new api, or think that 3.2 core still has more overhead than dx11, then we need a change in approach.

The real question is: how suitable is ogl 3.2 core for real time tasks compared to dx11? What are the performance penalties of this abstraction layer? When those questions are answered, I'll have a better sense of what's going on.

Also, if any amd employees are reading this, do you plan to support the bindless graphics extension anytime soon, i.e. as an ext/arb extension of course? Or is it a no-go.
 
q: does this stop old opengl games from working ?

No.

so that would mean people wanting to play glquake for example couldn't with an ati card ?

No, you'll be able to play glquake in the future as well. But the question is whether an OpenGL 3.2 app should have access to glVertex. My answer is no; it's a waste of resources and impacts the performance of OpenGL 3.2 apps.

Since when is everyone worried about how much something costs NVIDIA?

It's about the performance of my application. I don't want my code to pass through a bunch of legacy crap.
Developer time is a finite resource. Supporting stipple in a 3.2 context is a waste of driver writer time and leaves less time for fixing important bugs and the performance of the features that matter.

Parts of the driver that aren't used aren't even loaded from disk. I'm not sure why you think larger binaries necessarily translate to faster or slower running code.

Because we all know that all code related to legacy features is neatly packed together at the end of the dll file. Yeah, sure.

Cleaner API != Easier to use.

OpenGL has spent way too much time on "easy to use". It's about performance. DX10 is harder to use than DX9 too. It's easier to be able to set individual render states than using immutable state blocks, but that's not why we're doing it.

How do you even know whether that's true or not? Enabling stipple could just be setting a single bit in the hardware, and the hardware takes care of the details. Btw, the hardware still needs to support stipple whether or not OpenGL 3.2 has it.

Not sure how much hardware has supported stipple at all. But in any case, when you create a 3.2 context the driver can pass you pointers to functions that do not contain any of the legacy crap, so that you get a clean copy. It'll have to provide a backward compatible function as well for older apps, but newer apps that ask for a context without the crap won't necessarily have to suffer.
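On Windows, for example, asking for such a "clean" context goes through WGL_ARB_create_context_profile. A rough sketch, assuming a dummy legacy context is already current so the entry point can be resolved, with error handling omitted:

Code:
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   /* WGL_ARB_create_context(_profile) tokens */

HGLRC create_core_context(HDC hdc)
{
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)
            wglGetProcAddress("wglCreateContextAttribsARB");

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 2,
        WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
        0                              /* attribute list terminator */
    };
    HGLRC core = wglCreateContextAttribsARB(hdc, NULL, attribs);
    wglMakeCurrent(hdc, core);
    return core;   /* 3.2 core: immediate mode and stipple aren't part of it */
}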

My simple texture creation test took 14 OpenGL-LP calls to create what would require just 1 function call in OpenGL 1.2.

How many calls happen at creation is totally irrelevant for anything. The question is if it provides the driver with useful information and an ability to work more efficiently.

Besides, having glVertex3f() does not slow down glDrawElements(), the latter being the fast path.

Of course it does. Not all sequences of API calls are valid and the driver has to check whether to issue an error if you make invalid calls. If that glDrawElements() call is happening between glBegin() and glEnd() you must issue a GL_INVALID_OPERATION error. The code to check for this error condition has to exist as long as immediate mode exists in the OpenGL API. And it's sprinkled all over the API because most API calls are in fact invalid between begin and end, and they all have to check this.
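To illustrate the kind of check I mean, here is a purely hypothetical sketch (not taken from any real driver) of what an entry point ends up looking like for as long as immediate mode is part of the API:

Code:
#include <GL/gl.h>

typedef struct DriverContext {      /* illustrative driver-side state     */
    int    inside_begin_end;        /* set by glBegin, cleared by glEnd   */
    GLenum last_error;
} DriverContext;

static void driver_DrawElements(DriverContext *ctx, GLenum mode,
                                GLsizei count, GLenum type,
                                const void *indices)
{
    if (ctx->inside_begin_end) {    /* a branch paid on every single call */
        ctx->last_error = GL_INVALID_OPERATION;
        return;
    }
    /* ...validate mode/type/count, then build the hardware command... */
    (void)mode; (void)count; (void)type; (void)indices;
}

Multiply that check by the hundreds of entry points that are invalid between glBegin and glEnd and you get the kind of fixed cost I'm talking about.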
 
What it really boils down to, is how an application developer moves forward. "Does my application need to stop using old features to make use of new features?".

If the answer is No, then that is the old-style GL model (also used by GL3 + Compatibility Extension).
If the answer is Yes, then you have the Direct3D model (also used by GL3 and OpenGL ES).

Practically, all vendors are going to have to support all the old API features, because they need to keep existing software working. But that happens in the D3D world too: D3D6 apps still run. However, with the D3D model, there is no need to document/implement/test the interactions between all these various features (what if you want to use anti-aliased line stipple and LogicOp along with a sample-rate shader and CSAA?).

In the latter case, all historical features are still supported, but the implementation and QA burden is greatly reduced. It seems that is the direction quite a few GL implementors want to go, except for Nvidia, who seem to feel that sticking with the old-style model will give them a competitive advantage with some (presumably important) ISVs.

Myself? As a GL developer, I would be perfectly happy with the D3D model. Microsoft has proven, beyond any doubt, that ISVs are perfectly capable of handling that tradeoff.
 
It's about the performance of my application. I don't want my code to pass through a bunch of legacy crap.
This really cuts to the core of the issue... APIs and languages should guide the developer onto the fast path and ideally prevent them from shooting themselves in the foot. The problem with OpenGL without feature deprecation is that it's often unclear what 5% subset of the API you should be using at any given time, and how it interacts with features from previous versions. That is often nontrivial, since we've figured out a few things about how to write graphics code and APIs that were non-obvious back when OpenGL 1.0 was being spec'ed, not to mention the shift in how OpenGL is used.

The biggest reason for deprecation IMHO is to steer developers into the model that they should be using to write *new* code. Obviously you can still write OpenGL 1.0 code if you want, but there's no reason to try and figure out how all that wackiness should interact with modern features... especially since in most cases the features *don't* interact, and just do something random (or more often nothing, or something IHV-defined) at runtime.

OpenGL has spent way too much time on "easy to use". It's about performance. DX10 is harder to use than DX9 too. It's easier to be able to set individual render states than using immutable state blocks, but that's not why we're doing it.
Yeah, precisely, but I'd go a step further and put forward that ease of use is *not* about providing every single possible feature as a high-level API entry point, but rather about having very clear and totally orthogonal ways to do specific things that can be straightforwardly combined together with predictable results. D3D can possibly be criticized for stepping too far in the direction of "is there already a way to do this efficiently in user code? Then take it out of the API", but OpenGL needs to take a few gigantic steps in that direction to become relevant again IMHO.

There clearly needs to be a path for new applications to write to the most orthogonal, "no performance cliffs", simplest and most current API. That's the point in deprecating irrelevant features, particularly those with non-trivial feature interactions.
 