Waiting for OpenGL 2.0

K.I.L.E.R

The specs haven't even been finalised.

Once they are finalised, do we have to wait another year for them to put the API together?

This is getting ridiculous. How many features are they planning for GL 2.0?
Will it include more features than DX9?

I have read a few spec sheets that have been posted all over the forums, and GL 2.0 looks almost finalised. Every time I read a new spec sheet it looks almost finalised.

We have DX9-capable hardware from the low end to the high end. Software is still playing catch-up with hardware.

I thought the industry would speed up with the release of low-end DX9 cards. Both ATI and nVidia have low-end DX9-capable hardware.

So why is software still playing catch-up?

There have been a lot of DirectX 9 hardware sales ever since the R300 first came out.
Over 1M R300 units shipped, and there are still news reports of shortages.
That isn't counting other hardware.
 
The spec has been approved. If you pop over to http://www.3dlabs.com/opengl2/ you can get the ARB-approved shading language spec, and an SDK (don't know what's in it, I haven't looked).

I think the plan is that it's all going to be released as ARB extensions initially, and once people have some experience implementing it, it'll be incorporated into the core. That lets any nasty gotchas that people have missed come to light before it's in the core, but still lets people use it now. It seems to be the standard process for all OpenGL development.
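In the meantime you can just probe for it at runtime. Here's a minimal C sketch of the kind of check I mean (assumes a GL context is already current; the glsl_available helper is just something made up for illustration, but the extension names are the real ARB ones):

/* Naive runtime check for the ARB shading language extensions.
   Assumes a GL context is already current. */
#include <GL/gl.h>
#include <string.h>

static int has_extension(const char *name)
{
    /* Simple substring search of the extension string; good enough
       for a sketch, though a real check should match whole tokens. */
    const char *ext = (const char *) glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}

int glsl_available(void)
{
    return has_extension("GL_ARB_shading_language_100")
        && has_extension("GL_ARB_shader_objects")
        && has_extension("GL_ARB_vertex_shader")
        && has_extension("GL_ARB_fragment_shader");
}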
 
I asked this about a year ago, and all I got back then was shrugs. Again: does OpenGL 2 address processing of higher-order primitives, displacement mapping, or tessellation issues generally?
From the presentations I have seen, it does not; its pipeline organization only deals with triangles. That does not sound like a future-proof architecture.
 
OpenGL already has functions for accelerating higher order primitives - and has since version 1.0.

Up to now there's been no consumer-level hardware capable of taking advantage of this. When it becomes available, the existing method in OpenGL will (I would guess) prove to be a decent baseline but somewhat inadequate, at which point it will get extended (much as immediate mode was largely superseded for performance applications by vertex arrays, which started out as an extension and then became standard in OpenGL 1.1).
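To make the vertex-array comparison concrete, here's a rough C sketch of the same triangle submitted both ways (just an illustration; the data and the function names other than the GL calls are made up):

#include <GL/gl.h>

/* One triangle: three vertices of three floats each (placeholder data). */
static const GLfloat tri[9] = {
     0.0f,  1.0f, 0.0f,
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
};

void draw_immediate(void)          /* GL 1.0 immediate mode */
{
    glBegin(GL_TRIANGLES);
    glVertex3fv(&tri[0]);
    glVertex3fv(&tri[3]);
    glVertex3fv(&tri[6]);
    glEnd();
}

void draw_with_vertex_array(void)  /* extension first, core since GL 1.1 */
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, tri);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}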

When the hardware comes, GL will support it.
 
Dio said:
OpenGL already has functions for accelerating higher order primitives - and has since version 1.0.
Could you list some examples? AFAIK, anything starting with "glu-" isn't actually hardware accelerated (though those functions do, of course, call hardware-accelerated ones).
 
Ostsol said:
Dio said:
OpenGL already has functions for accelerating higher order primitives - and has since version 1.0.
Could you list some examples? AFAIK, anything starting with "glu-" isn't actually hardware accelerated (though those functions do, of course, call hardware-accelerated ones).

OpenGL 1.4 spec, Chapter 5, Section 1 (Evaluators), p. 188.
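For anyone who hasn't looked at that chapter, this is roughly what the evaluator path looks like in C: a 4x4 bicubic Bezier patch handed to glMap2f and tessellated with glEvalMesh2 (just a sketch; the control-point data is a placeholder):

#include <GL/gl.h>

/* 4x4 grid of 3-component control points (placeholder values). */
static GLfloat ctrl[4][4][3];

void draw_bezier_patch(void)
{
    /* Define the bicubic patch: u range 0..1, stride 3, order 4;
       v range 0..1, stride 12, order 4. */
    glMap2f(GL_MAP2_VERTEX_3,
            0.0f, 1.0f, 3, 4,
            0.0f, 1.0f, 12, 4,
            &ctrl[0][0][0]);
    glEnable(GL_MAP2_VERTEX_3);

    /* Evaluate on a 20x20 grid and draw it as a filled mesh. */
    glMapGrid2f(20, 0.0f, 1.0f, 20, 0.0f, 1.0f);
    glEvalMesh2(GL_FILL, 0, 20, 0, 20);
}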
 
Dio said:
OpenGL already has functions for accelerating higher order primitives - and has since version 1.0.
I had thought that the OpenGL Evaluators were rather restrictive and not used much anywhere. The spec for nVidia's NV_evaluators extension appears to indicate that it solved many of the problems of the OpenGL Evaluators (problems which, by the way, bore a striking resemblance to the problems with Quake3's curved surfaces, so I bet Q3 used GL Evaluators), but nVidia's implementation of the extension was too slow, and I'm not sure it was really flexible enough, either.

The rumored primitive processor in the NV40 will certainly need something more than GL Evaluators for full functionality.
 
Agreed that the current evaluator spec isn't the solution, but some extension of it may well be.

Q3 did not use GL evaluators.
 