Quickest way to advance games?

Which would lead to the implementation of advanced graphics features in actual games quicker?

  • Many new features per generation, but with performance issues associated with their implementation

    Votes: 0 0.0%

  • Total voters
    251

indio

Newcomer
Each new card generation seems to bring us new features. Unfortunately, it takes a significant amount of time for games to implement these features. Sometimes it's because the features have performance issues; other times it's because the features are not widely enough distributed across cards to justify the extra work it takes to implement them.
The basis of the question: is it better to have one well-implemented feature introduced in a new card generation, or several new features exposed each generation, even though the performance of each feature is not great?
 
Feature usage is sometimes slow for two reasons:

1. Time lag between conception and completion of a computer game.
2. Lack of saturation of a feature in the market.
3. Problematic to both support a feature and not support it (i.e. have an alternate path for cards that don't support said feature).
4. Occasionally, a feature is poorly implemented in hardware, making software usage unfeasible, undesirable, or just not worth the effort.

Basically, feature adoption will come faster with more features implemented in video cards, purely because putting features out there is the only way we'll find out which ones are good. It's going to be hit and miss.
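Reason 3 above is the one that shows up most directly in engine code: every optional feature needs its own path plus a fallback for cards that lack it. A minimal, purely illustrative sketch of that kind of capability-based path selection (all path names and feature strings here are hypothetical, not from any real engine):

```python
# Hypothetical capability-based render-path selection.
# Paths are listed best-first; the engine picks the first path whose
# required feature set the card actually supports, so every feature
# implicitly needs a fallback below it.

RENDER_PATHS = [
    ("dx9_shaders",    {"ps_2_0", "vs_2_0"}),
    ("dx8_shaders",    {"ps_1_1", "vs_1_1"}),
    ("fixed_function", set()),  # always supported
]

def pick_render_path(card_features):
    """Return the name of the best render path the card can run."""
    for name, required in RENDER_PATHS:
        if required <= card_features:  # subset test: all features present?
            return name
    return "fixed_function"

# A DX9-class card gets the top path...
print(pick_render_path({"ps_2_0", "vs_2_0", "ps_1_1", "vs_1_1"}))  # dx9_shaders
# ...while an older card silently falls back.
print(pick_render_path(set()))  # fixed_function
```

The testing burden the list describes comes from the fact that every entry in a table like this is a separate code path that has to look acceptable on its own hardware.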
 
The only way I see it is this: when a new API comes out, the companies need to put out cards that can support it across the board.

Like when the 9700pro came out: instead of just having the 9700pro, they should have had the 9500pro, another lower-level card, and an integrated chipset that supported DX9. That way, right off the bat, all systems sold that year would have the hardware. Right now we still don't have a DX9 chipset. We just got DX8 chipsets around Christmas. That's the problem. Wow, I hope this makes sense.
 
No DX9 chipset? Are you talking about integrated motherboard video?

Anyway, we do have a low-cost DX9 part: the GeForce FX 5200.

And yes, the rapid release of low-cost parts that support the features of their high-end counterparts is necessary for the rapid adoption of new features.
 
Chalnoth said:
No DX9 chipset? Are you talking about integrated motherboard video?

Anyway, we do have a low-cost DX9 part: the GeForce FX 5200.

And yes, the rapid release of low-cost parts that support the features of their high-end counterparts is necessary for the rapid adoption of new features.
Sorry, my painkillers are making it hard for me to concentrate, and it's actually making my spelling worse, if there was any way that could happen haha.

Yes, I mean integrated motherboard video.

Yes, we now have the 5200. How long ago did DX9 come out, though? There was about a six-month difference between the first DX9 cards and a low-end one.
 
jvd said:
Yes, we now have the 5200. How long ago did DX9 come out, though? There was about a six-month difference between the first DX9 cards and a low-end one.
Which is the fastest it's ever been.

The TNT2 Vanta was released approximately 6 months after the TNT (it's hard to remember the exact number... the Vanta was so under-marketed).
The GeForce2 MX was released about 6 months after the GeForce256.

Anyway, you can't really expect updated low-cost architectures to be released in sync with the high-end cards, due to cost. And you also can't expect release schedules faster than 6 months (on average), due to OEM release schedules.
 
Chalnoth said:
jvd said:
Yes, we now have the 5200. How long ago did DX9 come out, though? There was about a six-month difference between the first DX9 cards and a low-end one.
Which is the fastest it's ever been.

The TNT2 Vanta was released approximately 6 months after the TNT (it's hard to remember the exact number... the Vanta was so under-marketed).
The GeForce2 MX was released about 6 months after the GeForce256.

Anyway, you can't really expect updated low-cost architectures to be released in sync with the high-end cards, due to cost. And you also can't expect release schedules faster than 6 months (on average), due to OEM release schedules.

I know, but if it were to happen, it would speed up adoption, wouldn't it? That's what he asked.
 
It's not about not being able to support feature "X" in time, but about having something that works generally across all target hardware and doesn't look visually different on those bits of hardware.

"Effects" are gimmicky. Imagine if Quake3 had supported EMBM (DX6, Matrox). It would be totally out of place, and not a very well-used "feature".

Things like TruForm as it currently stands are exactly the same. It isn't supported well enough for developers to use it, and it just doesn't mix well with other algorithms or "effects".

Vertex and fragment programs are becoming more widely used by game developers because they are useful: either they give a performance advantage or they allow you to do certain algorithms. The key is not to make these things stick out like a dog's balls and then not be supported in the same way on other hardware. This is the main drive behind things like Doom3's architecture. Even Doom3 uses vertex and fragment programs in a minimalistic way; not because these aren't good "features", or because Carmack couldn't come up with other uses for them, but because mixing them with other things is hard to do, for both performance and visual reasons.

Having fur on a Doom3 or HL2 character might look kewl, but does it fit?
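For concreteness, the per-pixel work a simple fragment program does is only a handful of vector operations — for example the diffuse term of DOT3-style bump mapping, which is just a clamped dot product of the per-pixel normal and the light direction. A sketch of that math only (in Python rather than shader code, purely to show the arithmetic):

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot3_diffuse(normal, light_dir):
    """Diffuse term of DOT3 bump mapping: N.L, clamped at zero."""
    n = normalize(normal)
    l = normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

# Light shining straight along the normal: full brightness.
print(dot3_diffuse((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # 1.0
# Light behind the surface: clamped to zero, i.e. unlit.
print(dot3_diffuse((0.0, 0.0, 1.0), (0.0, 0.0, -1.0)))  # 0.0
```

A fragment program evaluates something like this per pixel, with the normal fetched from a normal map — cheap individually, but it has to coexist with every other per-pixel effect in the pass, which is the mixing problem described above.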
 
Bump mapping and TruForm are good examples. The original Radeon had software support, the 8500 had hardware support, and the 9700 has software support. It seems like games are just starting to support TruForm on a fairly broad scale, and then ATI goes and cripples it. Bump mapping doesn't work extremely well in hardware, but it looks very good: a cursory test on Bloodrayne shows up to a 30% hit with bump mapping enabled. I would rather have one effect with almost no performance hit than 10 effects that are nearly worthless. I think companies do this to hedge their bets, and then make a greater commitment on future products based on developer feedback. In the meantime, you're standing around with 3-4 unusable effects.
 
Although slightly off-topic: I think games will advance faster when MS releases Longhorn. Making sure that new computers are at the very least DX9-capable will surely lead developers to raise their minimum target group.

later,
 
Chalnoth said:
Feature usage is sometimes slow for two reasons:

There are 3 kinds of people in the world: those who can count, and those who can't. :LOL:

1. Time lag between conception and completion of a computer game.

That depends on whether the feature requires a change in the artwork.
That's why adoption of bump mapping is slow.
Some features, like T&L, need no change in artwork at all and can therefore be implemented later in the development of the game.

2. Lack of saturation of a feature in the market.

Yeah, I want displacement mapping!

3. Problematic to both support a feature and not support it (i.e. have an alternate path for cards that don't support said feature).

That's why pixel shader effects are so uncommon.
They're usually used as optional decoration for water and ice.
Even Doom3 won't have an effect that cannot be done on a GF2.

4. Occasionally, feature is poorly-implemented in hardware, making software usage unfeasible, undesirable, or just not worth the effort.

A not-so-widely-used example is cube mapping.
It can be used for real-time reflections,
but it requires 6 passes to render the environment into the cube map.
With a change in the triangle setup only, rendering could be done in a single pass.
(That would be useful for shadow-mapping point lights as well.)
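The six-pass cost can be made concrete: each pass re-renders the entire scene through a 90-degree camera aimed down one axis, one face of the cube per pass. A sketch of the per-face camera setup (the face orientations follow the usual OpenGL cube-map convention; `render_scene` is a hypothetical stand-in for the actual scene draw):

```python
# One full render pass per cube-map face: the camera looks down each
# axis with a 90-degree field of view. Look/up vectors follow the
# standard OpenGL cube-map face orientations.
CUBE_FACES = [
    ("+x", ( 1, 0, 0), (0, -1,  0)),
    ("-x", (-1, 0, 0), (0, -1,  0)),
    ("+y", ( 0, 1, 0), (0,  0,  1)),
    ("-y", ( 0,-1, 0), (0,  0, -1)),
    ("+z", ( 0, 0, 1), (0, -1,  0)),
    ("-z", ( 0, 0,-1), (0, -1,  0)),
]

def render_cube_map(render_scene, eye):
    """Render the environment into all six faces: six full scene passes."""
    passes = []
    for face, look_dir, up in CUBE_FACES:
        # Hypothetical draw call: 90-degree FOV, square aspect.
        passes.append(render_scene(eye=eye, look=look_dir, up=up, fov=90))
    return passes

# Counting the passes shows the cost being complained about:
count = len(render_cube_map(lambda **kw: kw["look"], eye=(0, 0, 0)))
print(count)  # 6
```

The single-pass idea in the post amounts to the hardware replicating each triangle across all six faces during triangle setup, so the scene is submitted once instead of six times.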
 
The problem is DX: it's too slow-moving and not nearly as dynamic as it could be. The problem with OGL is that it's too slow at creating an across-the-board standard, but I think that's more Nvidia's doing than anything.
 