Humus said:
True. It's the regular MS way of doing things, putting up arbitrary dependencies that don't make sense. As if instancing has anything to do with shaders.
They used to have way too high granularity, with a caps flag for every tiny bit of hardware behaviour, so that you had to query loads of caps even for simple tasks to ensure everything was supported. Now they've tried to move away from that model with shaders, adding shader versions instead, which bundle a large chunk of functionality behind a simple version number to check. But it seems they couldn't just stop there. Now everything comes in huge packages under the name of the shader versions, even if the feature in itself has nothing to do with shaders as such. Soon I guess they'll put FP blending and filtering under the same umbrella too.
It's a fundamental problem that both major APIs have suffered from since at least the time I started doing hardware-accelerated PC stuff (which is over a decade ago now).
Let's go back in time to before D3D existed: MS wanted to create an API for hardware-accelerated 2D operations (sprites etc.). Unfortunately the hardware was extremely variable, so there were only four choices:
1) Emulate in software whatever the card didn't support, without telling the programmer.
2) Emulate in software, but let the programmer know.
3) Don't emulate, just let the programmer detect missing features and write extra code.
4) Require all features to be supported in hardware, with no options.
GDI is option 1. MS originally decided on 2 for DirectDraw, but time constraints meant that much of the emulation code couldn't be written in time for release, so a lot of it ended up as 3.
Now jump forward to hardware-accelerated 3D. OpenGL was a combination of 1 and 3: any feature in core had to be software emulated if the hardware couldn't do it, while new platform-specific features took route 3 (extensions).
Games programmers moaned about it because, at the time, no video cards were even close to supporting all core features in hardware, and minor changes in code could drop you back into software rendering. It was practically impossible to write performant 3D code that worked on multiple platforms (even if you could get the drivers); miniGL effectively replaced the software fallbacks with crashing!
D3D also decided on a combination of 1 and 3, but not at the same time: the software device did everything, but in software; the HAL did only what it said it could, but completely in hardware.
Game developers moaned about it:
Too many caps bits, especially as the cards got better. OpenGL was now starting to work well: the cards were rapidly doing most of core OpenGL in hardware, so you could largely program OpenGL without checking caps or extensions and it just worked. There were a few extensions for things like better palette support, but only a few, and they were largely cross-vendor supported. This was OpenGL's heyday in the PC arena; lots of people made their minds up that OpenGL was better than D3D back then and haven't bothered looking again since.
D3D started defining fewer caps bits and 'bundling' features: if a card could do hardware transforms it must also be able to do lighting and clipping. But lots of other features still attracted caps bits. Hardware features started going beyond core OpenGL, and with the ARB ignoring the game market and the PC 3D revolution taking place (this was a time when 3DFX, NVIDIA and ATI weren't major players at the ARB), the only option for the IHVs was extensions. So each chip got a new set of extensions just for it, and the extension explosion started.
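To give a feel for what that meant in practice, the detection code of that era looked roughly like the sketch below (the extension names are just illustrative picks, not a complete or authoritative list):

    #include <cstring>
    #include <GL/gl.h>

    // The classic (naive) check of the day: scan the driver's one big extension string.
    // strstr can false-positive on name prefixes, but plenty of shipping code did exactly this.
    static bool HasExtension(const char* name)
    {
        const char* all = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
        return all != NULL && std::strstr(all, name) != NULL;
    }

    // Illustrative fan-out: the same basic feature often meant a different
    // vendor extension (and a different code path) per chip before a
    // cross-vendor version existed.
    void ChooseVertexPath()
    {
        if (HasExtension("GL_ARB_vertex_program")) {
            // cross-vendor path, once it finally arrived
        } else if (HasExtension("GL_NV_vertex_program")) {
            // NVIDIA-specific path
        } else if (HasExtension("GL_EXT_vertex_weighting")) {
            // vendor-specific corner of the same problem space
        } else {
            // fixed-function fallback
        }
    }

Multiply that by every feature and every chip family and you see why the extension explosion was such a pain.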
D3D was rapidly dropping to API type 4 with a number of grades (shader models). They started bundling completely unrelated features because, even though they're called shader models, they're really just grades of hardware now. The problem with this is that they frequently included stuff some cards could support without the rest of the bundle. The first big mistake was the vertex stream caps in VS2.0: if you read the original headers they were meant to be supported only by VS2.0 hardware, but the ATI 8500 supported them and there was no reason for them not to be usable (they even had caps bits). Newer examples are the centroid and vertex frequency caps.
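For contrast, here's a rough D3D9-style sketch of the two granularities being argued about: the coarse shader-version check next to the individual caps bits that still sit alongside it (the particular fields and flags are just examples):

    #include <d3d9.h>

    // Sketch only: assumes 'd3d' is an already-created IDirect3D9* (creation omitted).
    bool CheckCaps(IDirect3D9* d3d)
    {
        D3DCAPS9 caps;
        if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
            return false;

        // The bundled model: one version number stands in for a whole block of features.
        bool hasVS2 = caps.VertexShaderVersion >= D3DVS_VERSION(2, 0);
        bool hasPS2 = caps.PixelShaderVersion  >= D3DPS_VERSION(2, 0);

        // The old fine-grained model still living alongside it: one bit per behaviour.
        bool hasHwTnL    = (caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT) != 0;
        bool hasSepAlpha = (caps.PrimitiveMiscCaps & D3DPMISCCAPS_SEPARATEALPHABLEND) != 0;

        return hasVS2 && hasPS2 && hasHwTnL && hasSepAlpha;
    }

The complaint is about what happens when a feature gets welded to a version number like that: a card that can do the feature but not the whole bundle has no legitimate way to advertise it.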
OpenGL started an experiment as the PC card makers took over: they started allowing rapid ARB promotion. Instead of the years of field testing that extensions were originally meant to have before entering core, they are sometimes promoted to ARB status without even going through IHV extensions first. It looks like it's solved most of the problems with cross-platform adoption speed, but it has also inherited the mis-specification D3D is prone to (moving things too quickly, so quirks don't get ironed out). The classic example of this is ARB_fragment_program, which has a mis-specification that one vendor basically had to break from day one (the shadow buffer issue). It also suffers from D3D's bundling problem: if you don't support all the features the extension talks about, you can't expose it at all. That means some cards that can almost support an extension expose it anyway, but with 'bugs', or you get multiple versions of the extension with minor differences.
But OpenGL is looking much healthier, with features going cross-platform sometimes even faster than in D3D (which requires MS to issue a new release). Of course, the slow adoption and then the extension explosion meant that many programmers who loved OpenGL in its heyday (when it worked as advertised) have already moved to D3D, and getting them back is difficult as D3D is fairly good these days.
At the moment the D3D bundling is looking a bit too harsh, but then DX10 is trying for no caps bits at all, so who knows...
No API has got the balance right except back when OpenGL was at the top of its game. But it only got there because the API was years ahead of the hardware.
Here endeth the history lesson; god, I'm feeling old.
It's never as simple as the API zealots make out. I've programmed both over the years and sworn at both until the air was blue.