What is the problem with today's developers?

And there's just a limited number of truly great artists to go around, and artwork in general can be a very time-consuming process, so we're back to the "games take too long to develop" syndrome as well. The reliance on artists, rather than the engine programmer, to control the quality of a game's graphics will likely continue, especially as shaders get easier to work with.

Quite an interesting point, that one, because as the complexity rises there is also a blurring of the boundary between what an artist needs to know and what a developer needs to know.

I've been talking with an Xbox developer recently who's been getting very frustrated with his artists because they think they need to know what a normal map is and how it works. He's spent countless hours trying to explain that they in fact don't need to concern themselves with that, as that's his job.
 
Doomtrooper said:
I disagree here :)

WinXP is much more efficient than Win 9.x at memory management and task management. Sure, the GUI is more advanced and slightly more CPU intensive, but testing here I installed XP on a PIII 333 and it runs as well as 9.x.

I've been dual-booting XP and ME for some time now, although I usually just use XP for dialing into work. The problem is that from my 900MHz Tbird/GF2MX system (which is now at my girlfriend's place) to my current XP 1800 + GF2 Ti system, XP is noticeably slower than ME when playing games. I was quite surprised and disappointed to find this was the case when I upgraded to an XP 1800; I assumed the extra CPU power would reduce the overhead, but even with the latest drivers I still find there's a noticeable improvement in FPS with ME. Yes, I'm aware of XP's refresh rate quirks, but for the games I tested that was fixed.

Now, the one commonality between both systems is the sound card - the dreaded Live. While I haven't had any compatibility problems, I'm wondering if the reason I see this disparity while others don't (or choose to ignore it) is because of this. For XP I have to use the WDM drivers, which offer noticeably poorer performance in ME as well when compared to the VXD version. So I'm wondering if the Live is the reason that I just can't get into XP for games - the fact that it's using WDM, and from what I've heard not an exactly stellar implementation in XP may be the cause.

Has anyone else who switched away from an SB Live noticed a speed boost in gaming under XP?
 
I don't know how many of you are programmers, but this thread and its bashing of programmers is driving me crazy. Programmers have not gotten sloppier. For a retro example, take RBI Baseball (made by Tengen in 1987) for the NES. It is still my favorite game, despite having more than its fair share of bugs that actually dictate the outcome of some ballgames. So why care so much about getting more fps or having a few bugs if the game is still fun? That is the point of a game, right? To have fun!

Sure, there are lazy people, and some of them happen to be programmers, but I suspect that some of your complaints about today exist only because you're euphoric about yesterday. Shit happens, and so do bad games. Think E.T. for the Atari 2600.
 
Diminishing returns...

Without reading all of the thread, the quote with "diminishing returns" caught my eye. This would be rather tragic...

Where are the specular highlights? Where are the shadows? Where is the better lighting?

Compare today's computer games to a five year old raytracing image and the new computer game looks pathetic. Compare the five year old raytracing image to a new one with global illumination, caustics, subsurface scattering etc. etc. and the old image looks pathetic.

So if the quote is correct, you'd better prepare to wait a few million years before you see realistically rendered images in real time.

But I don't believe that. I haven't done any actual research, but I think that a game that used, say, a Ti4600's resources for better lighting, more geometry etc. rather than just for cranking up the resolution and enabling AA would look much better and be much more immersive.

Chris
 
Re: Diminishing returns...

ChrisK said:
But I don't believe that. I haven't done any actual research, but I think that a game that used, say, a Ti4600's resources for better lighting, more geometry etc. rather than just for cranking up the resolution and enabling AA would look much better and be much more immersive.
It's not only about what you have but how you use it:

Programming quality - needs no explanation

Art/design quality - same as above

Pervasive use - some technologies are used only in certain cases/places in a game and not in all situations (Doom 3 uses DOT3 pervasively, which is why it looks good).

Realistic art - the artists/designers keep doing cartoon-like things with flashy colours (UT2003 has flashy colours and cartoon-like art/design), and IMHO it is not immersive. Flashy colours also put new-tech games on a level with old-tech ones (why bother with 32-bit colour, then?).

Sound immersion - I miss A3D. Have you heard Abuse in the dark?

Gameplay/game design - no word about that.
 
You know what I really want to see, other than the fancy graphics options?

Real-looking human characters. Characters that animate smoothly: you see their feet actually touch the ground, you see their knees bend correctly, their legs move smoothly, and their whole body doesn't look stiff but flows while running, like in real life. I also want five fingers on each hand that can move independently; I don't want any more hands that look like they're wearing mittens or have all their fingers sewn together, lol. I want hair that blows in the wind and clothes that actually move and shift as characters move, not something that just looks painted on.

Do we have the power for this in current graphics cards?

IMO things like that would help tremendously in bringing realism and incredibly fun gameplay into games. Do this, and it would be a revolutionary step.
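
The "knees bend correctly" wish is, at heart, a skeletal-animation problem rather than a raw-horsepower one: each bone is positioned relative to its parent, so a shin inherits the thigh's rotation and a knee can only hinge the way a real knee does. Here's a toy 2D sketch of that parent-child hierarchy (the function and all numbers are invented for illustration):

```python
import math

def two_bone_leg(hip, thigh_len, shin_len, hip_angle, knee_angle):
    """Forward kinematics for a simple 2-bone leg in 2D.

    Angles are in radians, measured from straight down. knee_angle is
    relative to the thigh (the parent bone), so bending the knee
    automatically follows whatever the thigh is doing.
    """
    hx, hy = hip
    # knee position: rotate the thigh about the hip
    kx = hx + thigh_len * math.sin(hip_angle)
    ky = hy - thigh_len * math.cos(hip_angle)
    # ankle position: the shin inherits the thigh's rotation (hierarchy)
    ankle_angle = hip_angle + knee_angle
    ax = kx + shin_len * math.sin(ankle_angle)
    ay = ky - shin_len * math.cos(ankle_angle)
    return (kx, ky), (ax, ay)
```

Real engines do the same accumulation with a 4x4 matrix per bone and blend each vertex between nearby bones (skinning), which is what makes the mesh flow instead of looking stiff.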
 
Re: Diminishing returns...

ChrisK said:
Without reading all of the thread, the quote with "diminishing returns" caught my eye. This would be rather tragic...

Where are the specular highlights? Where are the shadows, where the better lighting?

I don't know about others, but I was talking about gameplay, not graphics.

Actually, if you just want per-pixel lighting, it won't be too hard. You'll need some rearrangement in your code, but you don't need any new art assets. Shadows will be much harder: volumetric shadowing wasn't practical until recently, and shadow mapping is supported by only a few pieces of hardware, still with some problems.

I wrote a program using per-pixel lighting (no bump mapping). It runs reasonably fast with simple scenes and models. However, another program using volumetric shadowing wasn't so lucky: although the effects look good, it runs quite slowly even on a GF3, and that's without per-pixel lighting. I don't know how slow it would be if the two were combined.

Hopefully Doom 3 will lead other game developers to seriously consider a complete per-pixel lighting and shadowing environment.
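
To illustrate why per-pixel lighting is mostly a code rearrangement rather than new art: per-vertex (Gouraud) shading computes N·L at the vertices and interpolates the resulting colours, while per-pixel shading interpolates the normal itself, renormalizes, and evaluates N·L at every pixel. A small sketch with made-up normals, one interpolated span standing in for a triangle edge:

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

LIGHT = (0.0, 0.0, 1.0)           # light pointing straight at the surface
N0 = normalize((-1.0, 0.0, 1.0))  # vertex normals tilted away on each side
N1 = normalize((1.0, 0.0, 1.0))

def gouraud(t):
    # per-vertex: light the vertices, then interpolate the *colours*
    c0 = max(0.0, dot(N0, LIGHT))
    c1 = max(0.0, dot(N1, LIGHT))
    return (1 - t) * c0 + t * c1

def per_pixel(t):
    # per-pixel: interpolate the *normal*, renormalize, then light it
    n = normalize(tuple((1 - t) * a + t * b for a, b in zip(N0, N1)))
    return max(0.0, dot(n, LIGHT))
```

At the middle of the span the interpolated normal points straight at the light, so the per-pixel path finds the full brightness that colour interpolation smears away. The geometry and textures are untouched; only where the N·L happens changes.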
 
Graphical quality isn't just about technology, it's also about artwork.

Take Warcraft III and compare it to Starcraft, and in some scenes you'll find that Starcraft actually looks better. It's 2d, 640*480*8, and can run on a 486-DX100... but Starcraft looks damn good, because Blizzard was really good with its 2d artwork.

A lot of old games can look stunningly good even though their technology is ancient. Any idiot can make 2048*2048 bumpmapped textures look good, but to make a low-res 2d game look good takes a lot of skill. ^^

IMHO though, a lot has to do with too high expectations. Back in the old days, the difference between Keen's 2d and Wolf's 3d was awesome. The difference between Wolf3d's small flat levels and DOOM's huge multilevel arenas was insanely impressive. The difference between DOOM's sprites and Quake's polygons was a quantum leap. But you can only make the jump from 2d to 3d once, (its not like we're going to use full-surround virtual reality anytime soon) and the incremental improvements we see today are less impressive.

Another thing is that computer games have gone mainstream. Back in the old days, JC was elated to sell 10,000 copies. These days, Warcraft III sells 4.5 million copies before release, and The Sims sells that many copies every month. What this means is that the potential market for computer games is no longer limited to an exclusive geek club with hyper-powered computers. Back when only the true geeks had 486's, and the average computer-user had 286s used only for word processing, DOOM could be a success because the target market was only the 486-equipped geeks in the first place.

Nowadays, that same geek population is armed with AthlonXP2000+ and GeForce4Ti4600, while the average gamer has a Celeron-600 and TNT2 or GeForce2Mx. (or, God forbid, 8 MB Intel Integrated graphics) However, while in the old days the gaming company could ignore the "average computer" and aim for the "geek's computer" as their system specs, a modern computer gaming company cannot afford to do so. The population of TNT2-toting newbies is at least a hundred times larger than the GeForce4 crowd. Look at any computer you can find on a store shelf. They are probably something like "Celeron 1.4 GHz with NetBurst architecture, Intel Integrated Graphics". It's a brand new computer, but no way will it run UT2003, let alone DOOM3. And there is no reason for it to - the buyer of such a computer is likely to play Minesweeper, and may play The Sims, NeoPets, but probably will never even hear about Starcraft or DOOM. If he ever discovers that his precious computer lacks graphics horsepower, it will be because he picked up Black & White out of the bargain bin.

And although most of the "non gaming population" does not play "real" computer games, the computer gaming manufacturer cannot ignore that population. They are at least 20 times larger in size than the computer gaming population, so even if only 10% of them buys a computer game, it will vastly outweigh all of the non-clueless computer gamers. (For proof of this, log onto warcraft 3 battle.net at any time)

Thus, even though DX8 cards have been out for a long time, no one has actually used DX8 on PCs; the DX8 effects are limited to the Xbox. The fact is, the majority of people with computers don't have the capability to run DX7 effects, let alone DX8. Talk about potential DX9 games at this point is absurd.

One of the major problems with the GeForce was that its main feature, hardware T&L, took so long to be implemented in games that the GeForce3 was out by the time an average game would show any benefit from HW T&L. Likewise, the GeForce3 wasn't able to see any benefit from DirectX8 shaders, and now that UT2003 is about to be released, so is the DX9-compatible R300. I don't see any end to this trend soon - by the time DirectX9 features are widely used in games, there will be a DX10 card out with speed and features far surpassing any NV30 or R300.
 
It's just all in the artwork. Take skinning in FPSes, for example. The art style itself has hardly changed (note this is my opinion) from the old days of 8-bit to now, with full 32-bit source art. In the old 8-bit days it was hard to make things look realistic with just a limited number of colours, so the art needed to be somewhat stylized. The problem is that the art in new games is still generally stylized, except that more colours are now used or the textures are bigger. It is in no way more realistic.

A nice example would be comparing Unreal 2 to Doom 3. I doubt anyone would say that either game is being written with a 'bad' engine, but which one looks more realistic, Unreal 2 or Doom 3? Well, just have a look. Here's a screenshot from Doom 3, and here's a recent one from Unreal 2. It should be immediately obvious that Doom 3 just looks better, but is it the per-pixel lighting? I don't think so. Per-pixel lighting isn't normally something that is so obvious in screenshots; it makes much more of a difference when you are actually moving around in the game. In a screenshot it's difficult to tell whether the shadowing is part of the source art or is being generated in real time. Back to my comparison: the difference between Doom 3 and Unreal 2 is that the character from Unreal 2 looks almost like a hand-drawn cartoon character, which of course she effectively is.

The reason for the difference is the way the skins were generated. The woman from Unreal 2 had her skin hand-drawn. The skins in Doom 3, on the other hand, were rendered from the high-polycount (untextured???) models. Unreal used the conventional method that's been used since Quake (if not before), while Doom 3 is using a completely different method that is rarely used in 3D games. The Doom 3 artists are unable to make skins the way they did before, so those skins look completely different from before, while the Unreal 2 artists are doing exactly what they have always done.

One could argue that the Unreal 2 artists wanted the non-realistic look, but then that is just the point. It's the artists who are not making realistic-looking art, not the programmers, whom many people tend to blame. It's not the 3D accelerators either: they do not enforce a limitation that says you cannot use photorealistic source art unless you are doing per-pixel lighting. It's all down to the artists.

While these observations are focused only on FPSes, they apply somewhat to other game types as well. It doesn't matter how well an RPG engine can render the world if the artists are incapable of making a realistic-looking character skin; things just will not look real. Unless a change occurs here, which hopefully the Doom 3 engine will help to force, we will still be complaining about how unrealistic our games look in years to come.

-Colourless
 
It's just all in the artwork.

I don't agree completely. Specifically, I understand the differences in the "methods" by which characters are skinned by the artists. However, it is the hardware and the technical rendering target that allow the Doom 3 character generation to work. The fact that the Doom 3 characters are generated on the assumption that they will be lit per pixel is what allows the "new method" to be used (starting with a high-poly model, then auto-generating a low-poly model with several texture/light/bump maps).
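
As a much-simplified sketch of that auto-generation step (an illustration of the general principle, not id's actual tool chain): if the high-poly detail is expressed as a height field over the low-poly surface, a tangent-space bump/normal map falls out of finite differences:

```python
import math

def bake_normal_map(height, scale=1.0):
    """Derive a tangent-space normal map from a height field.

    A toy stand-in for baking high-poly detail onto a low-poly model:
    the surface detail ends up in a texture, so the low-poly mesh can
    be lit per pixel as if the detail geometry were still there.
    """
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # central differences, clamped at the texture edges
            dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
            dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
            n = (-dx * scale, -dy * scale, 1.0)
            l = math.sqrt(sum(c * c for c in n))
            row.append(tuple(c / l for c in n))
        normals.append(row)
    return normals
```

The point matches the post: once the renderer's target is per-pixel lighting, this kind of detail can live in a generated texture instead of in geometry or a hand-painted skin.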

It should be immediately obvious that Doom 3 just looks better, but is it the per-pixel lighting.... I don't think so.

Again, IMO, it's not the per-pixel lighting per se, but the per-pixel lighting target that allows the artists to do what they are doing with the Doom 3 characters.
 