Joe DeFuria said:
zeckensack said:
That's just Joe talking out of his ass again.
So, what's your estimate of when Unextended GL will gain DX Next functionality?
Don't know. Depends on the feature set of DX Next, in particular on whether or not PPPs are in. If PPPs are not in DX Next, the answer is GL 2.0.
Joe DeFuria said:
Unextended OpenGL 1.2 already has more features than a DX9 device with no caps bits set. But who cares about that?
1) As a gamer I care what the hardware capabilities are, and don't care much about software fall-backs. I'm pretty sure that game developers care about hardware capabilities in GL as well, even if transparent fall-back exists.
Exactly, hardware capabilities. Installing DX9 on a system doesn't guarantee that PS2.0 will work. It merely guarantees that PS2.0 can be used by applications written to the DXG API, if the device supports it. Not all devices do. A GeForce4 MX certainly won't.
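For illustration, a minimal sketch of what that check looks like against the D3D9 caps (pD3D is assumed to be an already created IDirect3D9 interface; the SupportsPS20 helper name is mine):

[code]
#include <d3d9.h>

// Ask the runtime what the HAL device on the default adapter can actually do.
bool SupportsPS20(IDirect3D9* pD3D)
{
    D3DCAPS9 caps;
    if (FAILED(pD3D->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        return false;
    // PixelShaderVersion is just another caps field; a GeForce4 MX reports less than 2.0 here.
    return caps.PixelShaderVersion >= D3DPS_VERSION(2, 0);
}
[/code]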
That's a fine line, and it's the basic problem with your comparison. Core GL doesn't have a lot of features because it doesn't need a lot of features. The driver writers can expose whatever they want through extensions (and many extensions are cross-vendor and widely supported). DXG, OTOH, needs new runtime versions that accommodate lots of features, because any DX version limits the maximum usable feature set of the device (with the interesting exception of FourCC codes).
It's just not right if you're comparing the maximum feature set you can use via a version of DirectX Graphics to the minimum feature set allowed to claim conformance to an OpenGL version. This is where I was going with the "no set caps bits" thing. Compare the minimum possible feature set for both. That would be silly, but fair.
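To illustrate the extension side of that: this is roughly how a GL application finds out what the driver exposes, independent of which core version it reports (a sketch; it needs a current GL context, the HasExtension helper is mine, and a robust check would match whole space-separated tokens rather than substrings):

[code]
#include <GL/gl.h>
#include <string.h>

// Query the driver's extension string and look for a given extension name.
bool HasExtension(const char* name)
{
    const char* ext = (const char*)glGetString(GL_EXTENSIONS);
    return ext != 0 && strstr(ext, name) != 0;
}

// e.g. if (HasExtension("GL_ARB_fragment_program")) { /* use it */ }
[/code]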
Joe DeFuria said:
2) I'll take your word about GL 1.2...though I was under the impression that DX9 (PS2.0) type fragment shader standard was only recently ratified? And in any case, how long after they were exposed in the DX9 API was the standard in GL ratified?
A DX9 device with zero set caps bits cannot even perform texture filtering. An OpenGL 1.2 implementation can. But as we all know, such things don't exist in practice. In practice, for any given graphics card, you'll have the same hardware capabilities available through both APIs, with possibly a few extras in OpenGL, and possibly more unified cross-vendor access to these features in DXG.
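A sketch of that contrast (helper names are mine): in D3D9 even plain bilinear filtering sits behind caps bits, while a conformant GL implementation simply has to accept GL_LINEAR.

[code]
#include <d3d9.h>
#include <GL/gl.h>

// D3D9: bilinear minification/magnification are advertised via TextureFilterCaps.
bool CanFilterBilinear(const D3DCAPS9& caps)
{
    return (caps.TextureFilterCaps & D3DPTFILTERCAPS_MINFLINEAR) != 0 &&
           (caps.TextureFilterCaps & D3DPTFILTERCAPS_MAGFLINEAR) != 0;
}

// GL: no caps query needed; this is valid on any conformant implementation,
// applied to the currently bound 2D texture.
void SetBilinearFiltering()
{
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
[/code]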
ARB_fragment_program (the assembly-style, floating-point-aware fragment shading API) is a bit older. See the revision notes at the end of its spec.
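For anyone who hasn't seen it, a minimal sketch of loading an ARB_fragment_program (the three entry points are extension functions, assumed to have been fetched via wglGetProcAddress/glXGetProcAddress elsewhere; the program itself just modulates a texture with primary color):

[code]
#include <GL/gl.h>
#include <GL/glext.h>
#include <string.h>

// Extension entry points, assumed to be resolved at startup.
extern PFNGLGENPROGRAMSARBPROC   glGenProgramsARB;
extern PFNGLBINDPROGRAMARBPROC   glBindProgramARB;
extern PFNGLPROGRAMSTRINGARBPROC glProgramStringARB;

// Trivial fragment program: texture * primary color.
static const char* fp =
    "!!ARBfp1.0\n"
    "TEMP tex;\n"
    "TEX tex, fragment.texcoord[0], texture[0], 2D;\n"
    "MUL result.color, tex, fragment.color;\n"
    "END\n";

GLuint LoadFragmentProgram()
{
    GLuint prog;
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(fp), fp);
    // A failed load (syntax error, resource limits) raises GL_INVALID_OPERATION
    // and sets GL_PROGRAM_ERROR_POSITION_ARB.
    return prog;
}
[/code]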
GLslang (the high-level language stuff) came much later, and still appears to be in flux. Should be "final" RSN.
However, you can already play around with it on Radeon 9500+ and GeForce FX cards. Compilation will obviously fail if you exceed the hardware limits.
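A sketch of what playing around with it looks like through the ARB_shader_objects / ARB_fragment_shader entry points those drivers expose (again, the extension functions are assumed to be fetched already; the shader source and helper name are just illustrative):

[code]
#include <GL/gl.h>
#include <GL/glext.h>

// Extension entry points, assumed to be resolved at startup.
extern PFNGLCREATESHADEROBJECTARBPROC   glCreateShaderObjectARB;
extern PFNGLSHADERSOURCEARBPROC         glShaderSourceARB;
extern PFNGLCOMPILESHADERARBPROC        glCompileShaderARB;
extern PFNGLGETOBJECTPARAMETERIVARBPROC glGetObjectParameterivARB;

// Trivial GLslang fragment shader: output solid red.
static const char* src =
    "void main()\n"
    "{\n"
    "    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);\n"
    "}\n";

bool CompileFragmentShader()
{
    GLhandleARB sh = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
    glShaderSourceARB(sh, 1, &src, 0);
    glCompileShaderARB(sh);

    // Exceeding the hardware limits shows up here: the compile simply fails.
    GLint ok = 0;
    glGetObjectParameterivARB(sh, GL_OBJECT_COMPILE_STATUS_ARB, &ok);
    return ok != 0;
}
[/code]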