While it makes sense to roll your own libraries and go all out on closed hardware if you want to invest in it, in the PC space it is much better to stick religiously to the platform's APIs and design rules, and simply wait for faster hardware.
Algorithmic optimizations are the single best way to go, especially when you take how you access your data into account. Relying on hardware quirks and/or implementation details is bad, because you want your software to keep working when newer hardware and APIs become available.
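A minimal sketch of what "algorithmic optimization" means here (the function names and the duplicate-detection task are just an illustration, not from the original text): the same question answered in O(n²) with nested loops, and in O(n) average time with a hash set. The second version wins on any hardware, old or new, without touching a single platform-specific feature.

```cpp
#include <unordered_set>
#include <vector>

// O(n^2): compare every pair of elements.
bool hasDuplicateQuadratic(const std::vector<int>& v) {
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j]) return true;
    return false;
}

// O(n) on average: one pass, remembering what we've seen.
bool hasDuplicateLinear(const std::vector<int>& v) {
    std::unordered_set<int> seen;
    for (int x : v)
        if (!seen.insert(x).second)  // insert fails -> already present
            return true;
    return false;
}
```

The point is that this kind of speedup carries over to every future machine and API revision for free, which is exactly what tuning to one chip's quirks does not.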
In that context: while it was long common and advisable to use a custom memory manager for Windows apps, Vista changed the rules there as well.
While many old (DOS, Windows 95) applications fail to run on NT, Windows 2000, or XP, the ones that stuck to the specs still work, and without any performance issues.
Then again, the graphics of most games that are a few years old don't look very hot by today's standards. But for the most part that can be improved by adding higher-resolution models and textures.
So the best way to optimize your PC game is simply to ship the higher-resolution artwork that is unusable on launch hardware. Then again, with the very short sales window for any game that isn't considered AAA, it won't matter much in any case; and programming the game is only a small part of the overall budget.
So, to maximize the money earned, programming close to the metal does make sense, unless you're in it for the long haul.
But then again, what platforms are you going to support? If you dig too deep for those small speed increases, a game you meant to run on any hardware and software, old or new, will end up running on only a small number of PCs.
What's more: multi-platform is the way to go. You want everyone and his sister to buy your game, no matter what hardware they run it on, be it an Xbox 360, a PS3, or a Windows, Mac, or Linux computer. And you're definitely not going to be able to port it if you don't program for the lowest common denominator.
Which leaves only algorithmic improvements and limiting random memory access and large structures as much as possible.
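To make the "limit random memory access and large structures" advice concrete, here is one common, portable technique (my own illustration; the `Particle` example is hypothetical, not from the original text): a struct-of-arrays layout. When a loop only needs positions and velocities, packing each field into its own contiguous array means every cache line fetched is full of useful data, instead of dragging along color and other unused fields.

```cpp
#include <cstddef>
#include <vector>

// Array-of-structs: an update that only touches positions still pulls
// each particle's velocity and color through the cache alongside them.
struct Particle {
    float x, y, z;
    float vx, vy, vz;
    unsigned color;
};

// Struct-of-arrays: each field lives in its own contiguous array, so a
// position update streams through exactly the memory it needs.
struct Particles {
    std::vector<float> x, y, z;
    std::vector<float> vx, vy, vz;
    std::vector<unsigned> color;

    void integrate(float dt) {
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] += vx[i] * dt;
            y[i] += vy[i] * dt;
            z[i] += vz[i] * dt;
        }
    }
};
```

Nothing here depends on a particular CPU, OS, or API, which is why this kind of data-layout work survives hardware generations in a way that vendor-specific tricks do not.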