Questions regarding the target platform / development path of PC games

Ken2012

Newcomer
Let me get this straight: during the development of a PC game, the renderer can be updated to take more advantage of the current (graphics) hardware. For example, the Doom 3 engine was originally built around effectively GeForce 3-level hardware, but towards the release date it was scaled up to take more advantage of the raw horsepower and/or features that NV40 and R420 had to offer.

In a word (or several, even), why then do people sometimes claim that PC graphics hardware (in this case running what it's primarily designed to do: shader/texture-intensive operations) isn't really being pushed to its limits? CPU bottlenecking aside, pump up the AA, AF and screen res in F.E.A.R. or AoE3 and they'll bring a pair of SLI'd 7800 GTXs to their knees*.

Is it really a case of 'badly' written drivers giving the end-user sub-par performance, or is it the renderer itself? Regardless, in an ideal world where the devs make no errors, should the end-user right now be getting more for his/her money? Is current PC hardware really being pushed, or is there some kind of lack of optimisation somewhere down the line?

*SLI or CrossFire is perhaps a bad example, assuming the current generation of games doesn't natively support multi-GPU, only via the drivers forcing AFR/SFR etc.; correct me if I'm wrong. A single 512 MB GTX or X1800/X1900 XT would be a better example, I suppose.

Apologies for my waffling; hopefully someone will be able to shed some light on this little conundrum of mine.

Thanks in advance ;).
 
Ken2012 said:
In a word (or several, even), why then do people sometimes claim that PC graphics hardware (in this case running what it's primarily designed to do: shader/texture-intensive operations) isn't really being pushed to its limits?
Because people are ... difficult. People like to think in simplified categories. Some people go too far.

As you've already given the prime example with Doom 3: its renderer is actually fully functional on a GeForce 1 or even a Radeon VE (which "lacks" transform hardware). People have claimed, based on that simple fact, that it's "only a DX7 game". "DX7 game" is one of those simplified categories; it is meaningful in some circumstances (though certainly not for Doom 3, which uses OpenGL), but it is irrelevant. It doesn't tell you anything. It is an utter waste of time to reach these kinds of conclusions, but people continue to do so. Just search the forums for Direct3D 10 or WGF2.0 or SM4.0 [sic] or whatever. The perpetual, ignorant hunt for higher version numbers, as the proverbial "executive summary", seems to be some sort of entertainment in itself, but the problem remains that if you simplify things down to that level you strip away all substance. The result is as easy to remember as it is meaningless. Square one. You might as well have picked your nose instead.
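
Just to illustrate what "fully functional on a GeForce 1" means in practice: a renderer like that simply picks a back-end based on what the driver exposes. Here's a very rough sketch of the idea in C++; the path names are loosely modelled on the back-ends Doom 3 is known to have (ARB2/NV20/NV10), but the capability checks and the decision order are my own simplification, not id's actual code:

```cpp
// Capability-based back-end selection, heavily simplified for illustration.
#include <cstdio>
#include <string>

struct GpuCaps {
    bool hasFragmentPrograms;   // ~R300/NV30 class (ARB_fragment_program)
    bool hasRegisterCombiners;  // ~NV20 class, vendor extensions
    bool hasCubeMaps;           // ~NV10/DX7 class
};

std::string pickRenderPath(const GpuCaps& caps) {
    if (caps.hasFragmentPrograms)  return "ARB2"; // one pass per light, full quality
    if (caps.hasRegisterCombiners) return "NV20"; // a few passes per light
    if (caps.hasCubeMaps)          return "NV10"; // many passes, fixed-function tricks
    return "unsupported";
}

int main() {
    GpuCaps geforce1 = { false, false, true };
    std::printf("GeForce 1 class hardware -> %s path\n",
                pickRenderPath(geforce1).c_str());
    return 0;
}
```

Same image on screen (more or less), wildly different amounts of work per pixel, and the "DX7 game" label tells you nothing about any of it.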

Targeting a broad range of hardware that shifts significantly during a development cycle is a science in itself. You want to keep your geometry amounts, overdraw, shader cycles per pixel, number of dynamic lights etc. in check at all times, based on what you expect to be a reasonably affordable gaming rig at the time of release.
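
Purely for illustration, the kind of bookkeeping I mean might look like the sketch below. Every number and name in it is invented; the point is only that the budget is explicit and gets checked against a guess at the release-day mid-range rig:

```cpp
// Hypothetical per-frame content budget check -- all figures made up.
#include <cstdio>

struct FrameBudget {
    int   maxTriangles;      // geometry amount
    float maxOverdraw;       // average shaded depth complexity
    int   maxShaderCycles;   // ALU cycles per pixel
    int   maxDynamicLights;  // dynamic lights touching any one surface
};

struct FrameStats {
    int   triangles;
    float overdraw;
    int   shaderCycles;
    int   dynamicLights;
};

// What we expect a reasonably affordable card to cope with at release.
const FrameBudget target = { 250000, 2.5f, 40, 4 };

bool withinBudget(const FrameStats& s) {
    return s.triangles     <= target.maxTriangles
        && s.overdraw      <= target.maxOverdraw
        && s.shaderCycles  <= target.maxShaderCycles
        && s.dynamicLights <= target.maxDynamicLights;
}

int main() {
    FrameStats scene = { 310000, 2.1f, 35, 3 };
    if (!withinBudget(scene))
        std::printf("Over budget: cut content or scale detail before ship.\n");
    return 0;
}
```

When the hardware target shifts mid-development, you revise the numbers in that budget, not necessarily the renderer itself.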

Another problem, which I personally perceive as much worse than the former, is public perception of the work that actually is done. You can throw a lot of very realistic (as in approaching a match with physical reality) lighting models around, do subsurface scattering, soft shadows, base your water on real fluid dynamics or what-have-you, and invariably no one will appreciate it because "it's not shiny", because "it doesn't even use shaders, so where is all my performance going?".

Sorry, that was probably a rant. I feel much better now. May come back later, calmer.
 
zeckensack said:
Sorry, that was probably a rant. I feel much better now. May come back later, calmer.

Heh, not to fuel your personal fire, but the situation is like this: for the 3D hardware novice/learner/non-programmer/yet rather open-minded enthusiast, it can get more than a tad confusing when people throw subjective opinions around regarding what this or that piece of hardware/instruction set is capable of, or whatever.

In all honesty, if someone were to come up to me and randomly tell me that, for example, CELL/RSX were capable of producing LOTR-quality CG in absolute real time, I'd doubt their claims based on what amounts to not much more than common (if moderately technical) sense. On the one hand I'd obviously think that the amount of data that needs to be generated in a single frame is simply way beyond the scope of current frame buffer (or GPU or CPU) bandwidth/width to run at a stable 30+ FPS in real time; yet at the same time there would be no reason to say they were "wrong", because chances are they'd have more intricate knowledge of the inner workings of the PS3 hardware than I do...

Rather exaggerated example there, but you get my point I hope.
 
Ken2012 said:
In a word (or several, even), why then do people sometimes claim that PC graphics hardware (in this case running what it's primarily designed to do: shader/texture-intensive operations) isn't really being pushed to its limits? CPU bottlenecking aside, pump up the AA, AF and screen res in F.E.A.R. or AoE3 and they'll bring a pair of SLI'd 7800 GTXs to their knees*.

I have a GeForce 4 Ti 4200. Counter-Strike can push it to its limit in one sense, if I run it at 8x AA and 4x AF. But on the other hand, it's a low-poly / low-res-texture game which was made for the Voodoo 1 and doesn't take advantage of the GF4's features (T&L/vertex shading, pixel shading, even 32-bit rendering).
Doom 3 and Far Cry really push it to its limits, as they are probably the most advanced stuff the GF4 can run with decent IQ, a decent framerate and the highest detail (except for the texture compression in Doom 3).

I guess the people who complain about that are the ones who buy the latest $500 thing when it comes out. When the real DX8-level stuff came out (the aforementioned Doom 3 and Far Cry), they had already upgraded from a GF4 to a 9800 Pro or some such. And when the real DX9 stuff comes, they go to an X800 XT or NV40, without ever having enjoyed what a 9800 Pro can really do.
Rinse, lather, repeat :)
 
I'd think anyone telling you that LOTR realtime stuff is just trying to impress or woo you.
But are we talking about consoles now? Because if we are, we're probably going to places I don't like, as stated above, and we're probably in the wrong forum section. The executive summary I would reach is "PS3 performance > Xbox 360 performance", and I could actually make an argument to support that, using the system spec details I know. And that is where it gets tricky. The executive summary is really not worth remembering. The details are worth remembering, and as many of them as you can manage.

As for your initial question as I understood it, I believe my rant wasn't that far off. It's going to take a while to explain myself, though.

The Quake 1/2/3 type of rendering engine, where you rely on light maps for most of the "realism", just performs blazingly fast on modern hardware, and people (reasonably) expect that a game which performs significantly worse must also look significantly better. The problem is that it's hard for, e.g., a globally unified lighting system to look significantly better than static light maps with just small splotches of dynamic lighting "effects" bolted on. IOW: it may be hard to detect or even appreciate the differences visually, but everybody will notice that the performance went waaaay down. IMO we're already very far out on the curve of diminishing returns.
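
To put some numbers on that (completely made up, but the right order of magnitude): per pixel, a lightmapped engine does a couple of texture fetches and a multiply, while a unified dynamic lighting pass repeats a chunk of math plus a shadow lookup for every light. A toy sketch:

```cpp
// Back-of-the-envelope per-pixel cost comparison. The "op" counts are
// invented round numbers, only meant to show why visually similar output
// can cost several times more with fully dynamic lighting.
#include <cstdio>

int lightmappedCost() {
    // diffuse fetch + lightmap fetch + one multiply
    return 2 /* texture fetches */ + 1 /* ALU op */;
}

int dynamicLightingCost(int lights) {
    // per light: attenuation, N.L, a specular term, a shadow map lookup
    const int perLight = 3 /* ALU ops */ + 1 /* shadow fetch */;
    return 1 /* diffuse fetch */ + lights * perLight;
}

int main() {
    std::printf("lightmapped pixel       : ~%d ops\n", lightmappedCost());
    std::printf("pixel with 4 dyn. lights: ~%d ops\n", dynamicLightingCost(4));
    return 0;
}
```

Several times the per-pixel work for a difference most players will never consciously notice -- that's the diminishing returns curve I mean.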

What people do immediately appreciate, however, are ... gimmicks. The proverbial water shader, lens flares, exaggerated shininess, and all that "HDR" craziness about blurry vision and blindingly bright crates made of super-reflective wood. At least I know that the gaming gazettes in my country always, predictably, drool themselves wet when they notice such "effects", while I myself find them outright unrealistic and cheesy.

So let's take FEAR for a second. That game, IMO, is super-ambitious and technologically effective. At the same time, everyone hates the low performance. That's a given, because most of the processing power goes into subtleties. I postulate that had FEAR been sprinkled with more gimmicky effects that jump in your face, there would be far fewer complaints about its performance.

FEAR may be mistargeted for the PC hardware available at its release. And perhaps it is just too ambitious on the shader front. Perhaps nobody needs more than light maps plus "effects" to enjoy a game. That would be worth a discussion.
But I don't think it's fair to say that FEAR is poorly coded, or anything in that vein.

Next one: quality options. This issue is actually twofold. First, many people stubbornly max out all quality settings and then start complaining that performance stinks. Thus it becomes a liability to include higher quality settings in a game at all; you get far fewer angry customers if you just leave them out of the product. Second, there are indeed games where quality scaling just does not work, where the low quality options look like ass and still perform terribly. I won't name names, but we all know they exist, and that's genuinely poor design.
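
For what it's worth, quality scaling that does work isn't rocket science: each step down has to visibly cut work. A trivial sketch with hypothetical settings (not taken from any particular game):

```cpp
// Hypothetical quality presets -- lower levels should both look worse
// AND cost measurably less, otherwise the scaling is broken.
#include <cstdio>

struct QualitySettings {
    int  shadowMapSize;     // resolution of shadow maps
    int  maxDynamicLights;  // per-surface dynamic light cap
    bool softShadows;       // extra filtering passes
    int  textureLodBias;    // higher = lower-resolution mips
};

QualitySettings presetFor(int level /* 0 = low .. 2 = high */) {
    switch (level) {
        case 0:  return {  512, 1, false, 2 };
        case 1:  return { 1024, 2, false, 1 };
        default: return { 2048, 4, true,  0 };
    }
}

int main() {
    QualitySettings low = presetFor(0);
    std::printf("low preset: %dpx shadows, %d dynamic lights\n",
                low.shadowMapSize, low.maxDynamicLights);
    return 0;
}
```

If dropping from "high" to "low" barely changes a table like that internally, the low setting will look like ass and still perform terribly, and that's on the developers.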
 
zeckensack said:
I'd think anyone telling you that LOTR realtime stuff is just trying to impress or woo you.

Worry yourself not, no-one has. Yet.

I do agree with you on the over-use of HDR in various titles.

EDIT:

zeckensack said:
Targeting a broad range of hardware that shifts significantly during a development cycle is a science in itself. You want to keep your geometry amounts, overdraw, shader cycles per pixel, number of dynamic lights etc. in check at all times, based on what you expect to be a reasonably affordable gaming rig at the time of release.

Well, as expected, I guess that's what the point of this thread boils down to. Thanks for that.

However, I must ask: during the latter stages of the dev process, the then-current hardware is tested and targeted, right? The renderer is re-optimised for whatever hardware is available. So how time-consuming is it for the programmer to take the original, say, SM2.0 code and rewrite it for effectively SM3.0 (or at least to use some of the newer hardware/API's features)? Is this even possible, and how is it even done?
 