Usually, you specify the video card (like "This program requires a 6xxx series or newer card").
Actually, the OpenGL shading language was a bit of a design failure. The idea was very nice, but the implementation was really messy. Sometimes you compile a shader, it compiles fine, starts fine, and then runs in software, without any warning. Very inconvenient. It's similar with floating-point textures: you may be able to create one, but it can't use linear filtering. Really inconvenient.
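To illustrate the problem: about the best you could do in that era was scan the shader info log for hints, since there was no formal query for a software fallback. This is a minimal sketch; the strstr check on the log text is a vendor-specific heuristic (some drivers happened to mention "software" in the log), not anything the API guarantees.

#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

/* Compile a shader and try to detect a silent software fallback.
 * GL gives no formal flag for this; scanning the info log for a
 * "software" hint is a vendor-specific heuristic, nothing more. */
GLuint compile_checked(GLenum type, const char *src)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, NULL);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);

    char log[1024];
    glGetShaderInfoLog(shader, sizeof(log), NULL, log);

    if (!ok) {
        fprintf(stderr, "compile failed: %s\n", log);
        glDeleteShader(shader);
        return 0;
    }
    /* Some drivers mention a software path only here, in free text. */
    if (strstr(log, "software"))
        fprintf(stderr, "warning, possible software fallback: %s\n", log);

    return shader;
}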
The new API will provide a very nice approach: you say what you want to do, and it tells you whether it can do it. So you don't really need to know what the hardware is capable of. Like this:
App "I want to do HDR with MSSA: can you create such rendering buffer? BTW. I want to use blending with it"
GL3: "Nope".
App: "Ok, I just want 32-bit FP RGB buffer with blending for HDR, screw the MSAA"
GL3: "Nope, can't do"
App: "Grr. Then give me at least a 16-bit FP RGB buffer!"
GL3: "This is ok".
I personally find this approach to be very intuitive.
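Here is a sketch of that negotiation using the framebuffer-completeness check that actually shipped with OpenGL 3.0 / ARB_framebuffer_object, which is the closest real mechanism to the dialogue above: you ask for a config, the driver says yes or no, and you fall back. The particular formats and the 4x sample count are just illustrative, and note that completeness only answers the format/MSAA question; blending support on FP targets still had to be checked separately back then.

#include <GL/gl.h>

/* Try a list of render-buffer configs in order of preference and keep
 * the first one the driver accepts, mirroring the dialogue above. */
GLuint make_hdr_target(GLsizei width, GLsizei height)
{
    struct { GLenum format; GLsizei samples; } wish[] = {
        { GL_RGBA32F, 4 },  /* HDR with MSAA */
        { GL_RGBA32F, 0 },  /* HDR, no MSAA */
        { GL_RGBA16F, 0 },  /* half-float fallback */
    };

    GLuint fbo, rbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    for (int i = 0; i < 3; ++i) {
        glGenRenderbuffers(1, &rbo);
        glBindRenderbuffer(GL_RENDERBUFFER, rbo);
        glRenderbufferStorageMultisample(GL_RENDERBUFFER, wish[i].samples,
                                         wish[i].format, width, height);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, rbo);

        /* "Can you do it?" -- the driver answers here, not at draw time. */
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE)
            return fbo;

        glDeleteRenderbuffers(1, &rbo);
    }

    glDeleteFramebuffers(1, &fbo);
    return 0; /* nothing worked */
}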