Where is this magic overhead you speak of? You do realize that OpenGL still has lower draw-call overhead than D3D, right? Besides, having glVertex3f() does not slow down glDrawElements(), the latter being the fast path.
I did not know that GL had lower draw-call overhead.
Now, you can quibble on whether VBOs as they are were a good idea or not, but a) that problem was NOT addressed by OpenGL-LP, and b) bindless graphics fixes that anyway, without throwing away compatibility.
I have no idea what was planned for OpenGL-LP, and unless bindless graphics gets at least an ATI implementation, its use will be very limited. That is why I'd like to see it in the GL spec, or as an ARB extension.
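For readers who haven't seen it, the bindless path mentioned above replaces per-draw glBindBuffer traffic with raw GPU addresses. A rough sketch of the call sequence, based on NVIDIA's NV_vertex_buffer_unified_memory / NV_shader_buffer_load extension specs (treat the exact tokens as approximate):

```
GLuint64EXT vboAddr;

/* Once, at setup: make the VBO resident and fetch its GPU address */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glMakeBufferResidentNV(GL_ARRAY_BUFFER, GL_READ_ONLY);
glGetBufferParameterui64vNV(GL_ARRAY_BUFFER, GL_BUFFER_GPU_ADDRESS_NV, &vboAddr);

/* Per frame: no glBindBuffer, feed the address directly */
glEnableClientState(GL_VERTEX_ATTRIB_ARRAY_UNIFIED_NV);
glVertexAttribFormatNV(0, 3, GL_FLOAT, GL_FALSE, stride);
glBufferAddressRangeNV(GL_VERTEX_ATTRIB_ARRAY_ADDRESS_NV, 0, vboAddr, vboSize);
glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, 0);
```

(The index buffer can likewise be supplied by address via GL_ELEMENT_ARRAY_ADDRESS_NV.) The point is that the driver no longer has to chase the name-to-object binding on every draw, which is the overhead the quoted post says bindless fixes without breaking compatibility.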
You're right that there is no free lunch. Someone at NVIDIA will have to support those code paths. But that's a cost that NVIDIA will have to determine (why is everyone so worried about NVIDIA all of a sudden? It's like this wave of warm and fuzzy just went through the universe. Have we been visited by Carebears?).
ATI apparently isn't pushing OpenGL forward, so we are really left with only one serious GL vendor. ATI is also slower with API updates (still not at 3.2, AFAIK, and bindless is apparently out of the question).
The FUD is all in your imagination. Here's a little homework question for you: list all the people who pushed for OpenGL-LP, then find out who they work for. You'll be surprised.
I have no idea who pushed for OpenGL-LP or why it was shelved. If you can share some details about it, that would be helpful. Also, I don't buy the argument that implementing OpenGL-LP would have broken code: the old API would have remained for backward compatibility, and nobody would have broken a sweat over it.
A bigger question: if deprecation isn't helpful, why was it done in the first place? And how excited are devs about 3.2 core these days?
If you want to use OpenGL 3.2 Core-only, you can do that today. Go ahead. Try it. Really. If you think that non-accessible features in the driver are slowing you down, then maybe we can talk about that, and perhaps NVIDIA could go and do some profiling.
My issues are solely with how much mindshare OpenGL has lost over the years, and with what people here think of 3.2 as an API. If they aren't happy with the new API, or think that 3.2 core still has more overhead than DX11, then we need a change in approach.
The real question is how suitable OpenGL 3.2 core is for real-time work compared to DX11, and what performance penalty its abstraction layer carries. Once those questions are answered, I'll have a better sense of what's going on.
Also, if any AMD employees are reading this: do you plan to support the bindless graphics extensions any time soon (as EXT/ARB extensions, of course)? Or is it a no-go?