The sense in programming to the metal

Frank

While it does make sense on closed hardware to roll your own libraries and go all out if you want to invest in it, in the PC space it is much better to stick religiously to the platform's API and design rules and simply wait for faster hardware.

Algorithmic optimizations are the single best way to go, especially when you take into account how you access your data. Relying on hardware-specific and/or implementation-specific features is bad, because when newer hardware and APIs become available, you do want your software to keep on working.
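To make the data-access point concrete, here's a minimal C sketch (names and sizes invented for the example): both functions compute the same sum, but the traversal order decides whether memory is read sequentially or with large strides, and that alone can make a big difference without touching any hardware-specific feature.

```c
#include <stddef.h>

#define N 1024

/* Cache-unfriendly: each read strides N floats past the previous one. */
float sum_column_major(const float grid[N][N])
{
    float sum = 0.0f;
    for (size_t col = 0; col < N; ++col)
        for (size_t row = 0; row < N; ++row)
            sum += grid[row][col];
    return sum;
}

/* Cache-friendly: reads memory sequentially, one cache line at a time. */
float sum_row_major(const float grid[N][N])
{
    float sum = 0.0f;
    for (size_t row = 0; row < N; ++row)
        for (size_t col = 0; col < N; ++col)
            sum += grid[row][col];
    return sum;
}
```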

In that context, while it was very common and advisable to use a custom memory manager for Windows apps, Vista changed the rules there as well.

While many old (DOS, W95) applications fail to run on NT/W2000/XP, the ones that stuck to the specs still work. And without any performance issues. ;)


Then again, the graphics of most games that are a few years old don't look very hot by today's standards. But that can largely be improved by adding higher-resolution models and textures.

So, the best way to future-proof your PC game is simply to ship the higher-resolution artwork that is still unusable on launch-day hardware. But then again, with the very short sales window for any game that isn't considered AAA, it won't matter much in any case. And programming the game is only a very small part of the overall budget.

So, purely to maximize the money gained, programming close to the metal does make sense, unless you go for the long haul.

But then again, which platforms are you going to support? If you want your game to run on old and new hardware and software alike, digging too deep for those small speed increases will leave you with a game that only runs on a small number of PCs.



What's more: multi-platform is the way to go. You want everyone and his sister to buy your game, no matter what hardware they're going to use to run it. Be it an Xbox 360, PS3, Windows, Mac or Linux computer. And you're definitely not going to be able to port it if you don't program for the most common denominator.

Which only leaves algorithmic improvements and trying to limit random memory access and large structures as much as possible.
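For instance, one way to read "limit random memory access and large structures" in practice, sketched in C with made-up entity fields: split the hot data out of a large struct, so the update loop only touches small, contiguous arrays instead of dragging cold fields through the cache.

```c
#include <stddef.h>

/* Array-of-structs: updating positions also pulls the cold fields
 * (name, inventory, ...) into the cache for every entity. */
struct EntityAoS {
    float x, y, z;
    float vx, vy, vz;
    char  name[64];       /* cold data mixed in with the hot data */
    int   inventory[32];
};

/* Struct-of-arrays: the update loop reads only the hot fields,
 * packed tightly, so far more entities fit per cache line. */
struct EntitiesSoA {
    float *x, *y, *z;
    float *vx, *vy, *vz;
};

void update_positions(struct EntitiesSoA *e, size_t count, float dt)
{
    for (size_t i = 0; i < count; ++i) {
        e->x[i] += e->vx[i] * dt;
        e->y[i] += e->vy[i] * dt;
        e->z[i] += e->vz[i] * dt;
    }
}
```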
 
I indeed don't really get what this is about either. DX10 vs CUDA/CTM? CUDA vs CTM? DirectX vs OpenGL? ARB OpenGL extensions vs proprietary OpenGL extensions? EAX vs the world? Your point really doesn't seem applicable to much of anything to me, so a bit more context would be useful. Also:
Quote: "In that context, while it was very common and advisable to use a custom memory manager for Windows apps, Vista changed the rules there as well."
Errr, Vista didn't change anything. If your programming paradigm means malloc() is a POS, you're going to use your own solution. It's not like I was interfacing straight with the motherboard's drivers to reserve a memory pool - I'm still using malloc or equivalent memory allocation functions...
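For what it's worth, here's a toy sketch of that idea: a pool "reserved" with a single malloc(), with a bump-pointer allocator on top. All names are invented for illustration; a real memory manager would handle alignment policy, per-allocation freeing, and growth far more carefully.

```c
#include <stdlib.h>
#include <stddef.h>

typedef struct {
    char  *base;
    size_t used;
    size_t size;
} Pool;

/* One big OS-level allocation up front; everything else is pointer math. */
int pool_init(Pool *p, size_t size)
{
    p->base = malloc(size);
    p->used = 0;
    p->size = size;
    return p->base != NULL;
}

void *pool_alloc(Pool *p, size_t n)
{
    n = (n + 15) & ~(size_t)15;    /* keep 16-byte alignment */
    if (p->used + n > p->size)
        return NULL;               /* pool exhausted */
    void *ptr = p->base + p->used;
    p->used += n;
    return ptr;
}

void pool_reset(Pool *p)   { p->used = 0; }   /* free everything at once */
void pool_destroy(Pool *p) { free(p->base); }
```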
 
Since this topic is befuddled enough already ... I'd like to offer this:

Consider that there are, say, 50,000,000 consoles out there. Say you could purchase a 2x performance upgrade for only $40 per console; that makes a total cost of $2,000,000,000. Now, say that you can reach the same kind of performance increase by spending 2-4 additional man-years on programming more efficiently, or even 'to the metal'. If you've got some really expensive coders to do this job really well, then say that costs you about $2,000,000.
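Spelling that arithmetic out in a throwaway snippet (the per-man-year rate is my assumption, chosen so that 4 man-years matches the quoted $2,000,000):

```c
#include <stdio.h>

int main(void)
{
    double consoles      = 50e6;   /* installed base from the example   */
    double upgrade_cost  = 40.0;   /* $ per console for 2x hardware     */
    double man_year_cost = 0.5e6;  /* $ per coder-year (assumed rate)   */
    double man_years     = 4.0;    /* upper end of the 2-4 year estimate */

    printf("hardware route: $%.0f\n", consoles * upgrade_cost);   /* 2,000,000,000 */
    printf("software route: $%.0f\n", man_years * man_year_cost); /*     2,000,000 */
    return 0;
}
```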

Now translate this situation to the complex PC space, with lots of different graphics cards. Is it conceivable that, within the timeframe you are targeting for your game, there are 2 or 3 dominant graphics cards in people's PCs for which additional 'code to the metal' work would make the game perform 2x better? If so, and this offers you a significant advantage over your direct competitors in that segment, then it may be worth it. If not, then it probably isn't.

Simple, no? ;)
 