Simply put: generally, if a D3D9 game is running on D3D8 hardware, various features will be scaled back or disabled so the game still runs correctly on that hardware (assuming an additional D3D8 render path has been coded, obviously).
How does OpenGL work in the PC space? Does this same down-scoping of features, so to speak, occur, or is it merely a question of raw performance (i.e. fillrate, or lack thereof)?
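For context, my rough understanding is that OpenGL has no fixed "DX8/DX9" caps levels: the engine queries the driver's extension string at startup and drops to an older code path when a feature is missing (I gather Doom 3 itself ships several back ends, NV10, NV20, R200, ARB2, and picks one this way). Here's a minimal sketch of the pattern I mean; it's hypothetical, not Doom 3's actual code, and uses GLFW purely to get a context:

```c
/* A minimal sketch (hypothetical, NOT Doom 3's actual code) of
 * extension-based render-path selection. Uses GLFW only to obtain a
 * current GL context; the check itself is the classic pre-GL3 pattern
 * of scanning one space-separated extension string. */
#include <stdio.h>
#include <string.h>
#include <GLFW/glfw3.h>

typedef enum {
    PATH_FIXED_FUNCTION,    /* older path: multitexture only            */
    PATH_FRAGMENT_PROGRAM   /* DX9-class path: ARB_fragment_program     */
} render_path_t;

/* strstr is a simplification; robust code tokenises the string so that
 * "GL_FOO" doesn't accidentally match "GL_FOO_extended". */
static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}

static render_path_t choose_render_path(void)
{
    /* A DX9-class part like the 7800 GTX exposes this extension; a
     * DX8-class part like the Ti 4200 does not, so the engine silently
     * falls back to an older path on that card. */
    if (has_extension("GL_ARB_fragment_program"))
        return PATH_FRAGMENT_PROGRAM;
    return PATH_FIXED_FUNCTION;
}

int main(void)
{
    if (!glfwInit())
        return 1;

    /* Any window will do; we only need a current context to query. */
    GLFWwindow *win = glfwCreateWindow(64, 64, "probe", NULL, NULL);
    if (!win) {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(win);

    printf("Chosen path: %s\n",
           choose_render_path() == PATH_FRAGMENT_PROGRAM
               ? "fragment programs (ARB2-style)"
               : "fixed-function/multitexture fallback");

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
```

Is that roughly right, or is there more to it than extension checks?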
The reason I ask is that I recently acquired my friend's old GeForce Ti 4200 (DX8.1-class, I think) and compared it to my GeForce 7800 GTX (DX9.0c), both running Doom 3. Although I was using two different monitors (a 15" CRT and a 19" LCD), I noticed very little difference in IQ at a glance... I know Doom 3 was originally optimised for the GeForce 3, so perhaps this game/engine handles older hardware too gracefully to be a fair test.
Thanks.