Let me get this straight: during the development of a PC game, the renderer can be updated to take more advantage of the current graphics hardware. For example, the Doom 3 engine was originally built around roughly GeForce 3-level hardware, but towards release it was scaled up to take better advantage of the raw horsepower and/or features that NV40 and R420 had to offer.
In a word (or several, even), why then do people sometimes claim that PC graphics hardware (in this case doing what it's primarily designed to do: shader/texture-intensive work) isn't really being pushed to its limits? CPU bottlenecking aside, pump up the AA, AF and screen resolution in F.E.A.R. or AoE3 and they'll bring a pair of SLI'd 7800 GTXs to their knees*.
Is it really just a case of 'badly' written drivers giving the end-user sub-par performance, or is it the renderer itself? Either way, in an ideal world where the devs make no mistakes, should the end-user be getting more for his/her money right now? Is current PC hardware really being pushed, or is there some lack of optimisation somewhere down the line?
*SLI or Crossfire is perhaps a bad example, assuming the current generation of games doesn't natively support multi-GPU and only gets it via the drivers forcing AFR/SFR etc. (correct me if I'm wrong). A single 512 MB GTX or X1800/X1900 XT would probably be a better example.
Apologies for the waffling; hopefully someone can shed some light on this little conundrum of mine.
Thanks in advance.