New Cards - more complexity - longer wait for game adoption?

g__day

Regular
I rant on and off about how long it takes for games to be released that utilise the architecture of new leading-edge cards.

I am wondering (even with Cg): is the gap between h/w becoming available and games commonly using its new capabilities only increasing?

As things stand, a new h/w generation comes out every 12 months, but it takes 12-18 months for games to commonly start really using its features. Will things only become worse as top-end cards and games become more complex?


A friend at 3DGPU asked: where is the Radeon 9700 Pro bottlenecked?

My view (below): the bottleneck is the lack of games using its new features appearing before the card is well and truly obsolete!!! :(

* * * What are your thoughts? Is the gap lengthening between h/w appearing and games using it?


I believe the ATi Radeon 9700 PRO is a very well balanced card. What it most needs are games written for its underlying architecture and DirectX 9. By the time these are common, we will be midway through the NV40 / R400 cycle, waiting for their point releases.

This has happened time and again. A new capability is unveiled, and it takes 12-18 months for mainstream games to adopt it; just look at Voodoo 1 through to GeForce 3. How long did it take for, say, five games to appear that utilised more than 50% of each card's new features?

The sad trend is that it will be mid to late 2004 before games commonly appear that really utilise the full features of even the NV30 and R300.

Reading the .plan updates, John Carmack wants more passes (100) rather than more bandwidth to do effects. This is where new cards are going. Beyond that, in say 2004-2006, hardware will be fast enough, I reckon, to do proper ray tracing. That will be an incredible feat, but how long will it take to come into mainstream games?

The answer is that 80% of games are written for the midpoint of two-year-old OEM video cards; today that means a high-end TNT2.

Only now are games appearing from JC that insist on any type of GeForce-level card, with bells and whistles enabled on GF 3, 4, 5 level cards. So until the NV30 is an entry-level card - or bundled on a motherboard - expect a plethora of games designed for top-end video cards only in your dreams!!!
 
:)
Well, this particular lament seems to come up every 3 months or so.
Games are introduced to the market in a state that a reasonable percentage of that same market can take advantage of. This makes perfect sense, btw. It varies a bit, largely with how delayed the game has become.

It takes time for new hardware to penetrate the market. Even those rare whackos who actually want to upgrade once a year, and know of no higher form of entertainment than installing and upgrading drivers, still have to have sufficient spare money to support their hardware urges. That part of the market does indeed exist, but it is relatively small.

Incidentally, games are often more challenging in terms of performance than features. Performance needs can be dialed down (or up) by straightforward techniques; coding for different feature sets is a larger undertaking.
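To make that distinction concrete, here's a minimal sketch (all names hypothetical, not from any real engine): performance usually scales through data feeding one code path, while feature sets force separate code paths that each have to be written and tested.

```cpp
// Minimal sketch; all names are hypothetical, not from any real engine.

// Performance scaling: one code path, tuned by numbers.
struct QualitySettings {
    float viewDistance;   // shorter far plane => fewer polygons drawn
    int   textureDetail;  // higher value => skip more top mip levels
    int   maxParticles;   // fewer particles on slower machines
};

// Feature scaling: each hardware class needs its own rendering code.
void drawSurface(bool hasPixelShaders) {
    if (hasPixelShaders) {
        // DX8-class path: per-pixel lighting via pixel shaders
    } else {
        // DX7-class path: multitexture approximation, extra passes
    }
}
```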

Whatsamatter, AA and AF aren't enough for ya? If you want to watch pretty pictures, well, that's what technology demos are for. :) Content is trickier. People who are tech-hungry enough to buy an R9700 are not likely to keep the card for as long as it takes for its feature set to be exploited. And they know this at the time of purchase.

Entropy
 
Hmm...pixel and vertex shaders are a major change in the way of doing things. T&L on the card was a change before that (less of one, I'd say). Enhanced precision is not...it is just a quality enhancement of the existing way of doing things.

That is not to say that the difference between the two isn't significant, but that significance is in what the hardware is capable of, not what the developer is required to do to achieve it. So, as far as just that aspect of DX PS and VS 2.0 goes, I don't think we'd necessarily have as much delay as we've had for PS 1.x and VS 1.x.
 
I think Carmack's 100 passes thing is pretty much ridiculous. It's not going to happen within the next decade.
 
Keep in mind that Carmack includes overdraw and # of lights in his pass counts. So, what he calls 100 passes is more like 10 :)
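A back-of-the-envelope illustration of that accounting (the numbers are made up purely to show the multiplication, not taken from any .plan):

```cpp
// Counting every time a pixel gets touched, a modest number of real
// rendering passes multiplies out quickly once you fold in per-light
// re-rendering and overdraw.
int    passesPerLight = 10;   // real rendering passes done for each light
int    lights         = 4;    // lights hitting the scene
double avgOverdraw    = 2.5;  // average surfaces covering each pixel
// 10 * 4 * 2.5 = 100 "passes" by that style of counting, even though
// the engine only codes 10 distinct passes.
double countedPasses  = passesPerLight * lights * avgOverdraw;
```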
 
Let me quote the man himself! :LOL:

Fuz said:
On a dif note, I must say that I am getting sick and tired of low-res textures! When I walk up to a wall, or look down at the ground, I want to be able to see detail. Is that too much to ask for? Don't give me this crap about "but the game has to run on all systems".... total crap. At least give us an option, damn it! If my system can handle it, then let me have it. Bloody hell!!!

I know the major reason for games not using the latest tech is that the devs have to cater for the lowest common denominator (GF4 MX), but how hard is it to make a game engine that is scalable? That way, game developers won't be limiting their market, and everyone is happy!

Scalability is the key, imo.
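For what it's worth, the texture half of that is cheap to offer. A hedged sketch of one common approach (OpenGL-style, loading code elided, names mine): ship the full-resolution mip chain and simply skip the top levels on low-end cards, so a texture detail slider costs almost nothing to implement.

```cpp
#include <GL/gl.h>

// mips[0] is the full-resolution image, mips[1] is half size, and so on.
// detail = 0 uploads everything; detail = 2 skips the two largest levels,
// sharply cutting texture memory use on low-end cards.
void uploadTexture(const unsigned char* const* mips, int topSize,
                   int numLevels, int detail)
{
    int size = topSize >> detail;   // resolution of the first kept level
    for (int level = detail; level < numLevels; ++level) {
        glTexImage2D(GL_TEXTURE_2D, level - detail, GL_RGB,
                     size, size, 0, GL_RGB, GL_UNSIGNED_BYTE, mips[level]);
        size >>= 1;
    }
}
```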
 
It needs more collaboration between the card makers and the game makers.

Otherwise, as games and cards get vastly more complex, the lag will go from the 12-month minimum we have today to 2 years or more by 2005.

I understand the economics fine. But a 12-18 month software development cycle synchronising badly with a 6-12 month hardware development cycle for 7+ years is the pits!!!

Will things get better or worse? That was my question - and why do you hold your belief?
 
Are there any sites that keep track of sales figures for the latest cards, like the 9700 for example?

I mean, if it has a user base of 250,000, it might be worthwhile to develop games especially for it.
 
g__day said:
I understand the economics fine. But a 12-18 month software development cycle synchronising badly with a 6-12 month hardware development cycle for 7+ years is the pits!!!

Will things get better or worse? That was my question - and why do you hold your belief?

OK, serious now...
I feel things will largely stay the same, or get slightly worse.
There are two factors at play:
* the time needed for developers and content creators to make use of new features.
* the need for a realistic target audience at the release of the game.

The first point gets argued a lot, usually focussing on tools and developer relationships. I'd say that part will remain roughly as it is, but the overall time span for bringing a game to market also includes content creation, QC, et cetera, and I can only see that span increasing, which would imply that time-to-new-feature-adoption typically increases as well.

The second point is trickier since it depends on what type of game you develop. I think the trend is clear that computers will get replaced less frequently, implying that more time is needed for a hardware feature to penetrate the market. But what is true for the average computer user is not necessarily true for people who buy games. (It may still be true, I just haven't seen any data.) Different game genres probably have different buyer demographics, muddying the waters further. Overall though, it is difficult to see computers getting updated at an increasing pace.

Overall - the time required for new features to penetrate the market is most likely to increase, although individual titles may buck this trend.

Entropy
 
You also have to take into account that it takes much longer to create content for these high-end games. You've got all sorts of maps now (bump this, gloss that, etc.), and more polys = more detail for the artist to put in! Coding this shit up is pretty quick; it's the art and the pipelines that are always a lot of work.
 
Maybe we need a standard for scene management? If we had that, different software vendors could compete directly with pure rendering engines, and such engines could be delivered in time with new hardware features (provided those development houses have good enough relationships with the IHVs).
If scene management were standardized, game developers could freely use any rendering engine that suits their needs, and changing it would be a snap.
Of course, such rendering engines would need to take care of converting high-res artwork into their corresponding internal formats (Crytek would probably Polybump the models, say, while some other vendor might use tessellation/displacement mapping instead to retain the high-res look of the models).
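Something like this tiny interface sketch is what I mean (entirely hypothetical, just to make the idea concrete):

```cpp
// A standardized scene description that every engine vendor accepts.
struct Scene;   // geometry, materials, high-res source art, etc.

class IRenderer {
public:
    virtual ~IRenderer() {}
    // Each vendor maps the source art to its own internal formats here,
    // e.g. baking normal maps vs. setting up displacement mapping.
    virtual void importScene(const Scene& scene) = 0;
    virtual void renderFrame() = 0;
};

// The game talks only to IRenderer, so swapping engines is a snap.
```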
 
Good points folks - esp Entropy and no_way

My feeling is that to prevent new games taking years to catch up with two-cycle-obsolete technology cards, perhaps we just need more specialisation, collaboration and standards between all of:

1. Major gaming houses and thought leaders (id Software, Crytek etc)
2. Content creators
3. NVidia and ATi
4. SGI OpenGL and Microsoft DirectX
 
g_day,

most of these features have to be built into the engine from the ground up. Some, like TruForm, are a bit easier to hack in (not saying TruForm is a hack). Others, like T&L, will require a complete overhaul of at least the rendering pipeline. There is not much you're going to be able to do, as developers always target the mainstream, which always lags behind the high-end cards that have all the new features.
 
Even Carmack, with DOOM3, made a game engine that would look exactly the same on DX7, DX8, and DX9 cards. The only difference would be performance. I think the expense of making a game, versus the small target audience of people with high-end video cards, has gotten to the point where game developers simply HAVE to target their 3D engine at the lowest common denominator. You spend millions of dollars on artwork, sound, etc; there's no way to turn a profit by targeting a crowd (high-end computer users) which may only have a few million people in it, compared to the general computer-gaming crowd of hundreds of millions of people (worldwide).

The major problem with new features is that they won't run at any decent speed on the video cards they are introduced on. A GeForce SDR's T&L is often slower than no T&L at all, and just try using widespread Dot3 bumpmapping or cubemapping on it - you'll be running games at single-digit framerates (like UT2003 on a GeForce1... o_O). Try running large amounts of pixel and vertex shaders on a GeForce3; it has trouble even handling the uglified Final Fantasy demo... Similarly, I doubt a Radeon 9700 or NV30 could actually run those long looping cinematic shaders at any reasonable speed. Those features aren't so much useful on the video card they debut on; they are useful so that two or three years down the line game developers can put stuff (like Dot3 bumpmapping) into games... it won't run fast on a GeForce, but it will run, so at least GeForce owners can make up a small part of the market. =/
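In code, that "it will run, just slowly" philosophy usually ends up as a capability check choosing between back ends. A rough sketch (the split loosely mirrors DOOM3's publicized NV10/NV20/R200/ARB2 back ends; the detection details are my own invention):

```cpp
enum RenderPath { PATH_FIXED,   // DX7 class: multitexture, many passes
                  PATH_PS11,    // GF3/GF4 class: pixel shader 1.x
                  PATH_PS20 };  // R300/NV30 class: pixel shader 2.0

// Pick the best path the detected hardware supports; older cards still
// run the game, just with more passes and lower performance.
RenderPath choosePath(bool hasPS20, bool hasPS11) {
    if (hasPS20) return PATH_PS20;
    if (hasPS11) return PATH_PS11;
    return PATH_FIXED;
}
```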
 
well

I have my own opinions.


Well, instead of a new DX version every year, maybe we should move it to every 2 years. That way, say DX9 comes out for Xmas of this year: we already have hardware for it (the 9700 Pro), and a year after it comes out we will have hardware that runs its features at fast enough speeds. The following year we get new features, and after that a new DX version.

Also, the feature set of the next API should be known a year before its release, so software makers will know what to target.
 
Re: New Cards - more complexity - longer wait for game adopt

g__day said:
The answer is that 80% of games are written for the midpoint of two-year-old OEM video cards; today that means a high-end TNT2.

Only now are games appearing from JC that insist on any type of GeForce-level card, with bells and whistles enabled on GF 3, 4, 5 level cards. So until the NV30 is an entry-level card - or bundled on a motherboard - expect a plethora of games designed for top-end video cards only in your dreams!!!


One of the truly attractive things about the 9700 Pro for me is how well it straddles the line between newer and older 3D software. For the older stuff you have gobs of fill rate and the bandwidth to match, and for the stuff yet to come the card is better positioned than anything else shipping to take advantage of it. So in a sense you want these qualities in any 3D card you buy. I think it's a bit unrealistic to think that if the software doesn't use 100% of the given properties of a product you aren't "getting your money's worth." Rather, it's better, I think, to have a product that can handle most any piece of 3D software you throw at it competently.

As far as the hardware coming before the software, I've never known that not to be the case in the industry. Software developers need firm targets to shoot for, and every year the hardware bar is raised a bit higher and the resulting software gets better. I can't see how the situation could be any different, because having developers write software for hardware products they hope will be released seems nonsensical...;) Of course, developers have a great deal of input into the hardware creation/standards-making process, but they still cannot create the software before the hardware that will run it has been sold into the market in sufficient quantity to create a market for their software - doing so would be a huge waste of time and resources, of course. And all of that takes time. As the saying goes, it's an imperfect world but it's all we've got.
 
Although I'm sure it would be nice to see games that take full advantage of modern graphics cards, it just ain't happening. When any of us start to see our systems chug playing the latest game, we know it's time for an upgrade. OTOH, I know people who are complaining because they're afraid (and rightly so) that they're not going to be able to play Thief 3 on their TNT2.

I think the reason why the 9700 was hailed as such a great card wasn't because it can do some new fancy trick that no other card can do. When the GeForce 3 was launched, nVidia showed off all sorts of spiffy tech demos and went "look what our card can do! Oooo." Shortly after, we heard claims like "can render 'The Spirits Within' in real-time". Yet we'll never see any games that look anything like their tech demos.

The reason why the 9700 is such a solid card is that it lets you play all of the current games with FSAA and AF turned on and still keep high framerates, not because it has some fancy new special effects. Pixel shaders, for example, were basically introduced with the advent of the GF3 line of cards, yet there are still hardly any games which actually use them (Morrowind and Asheron's Call 2 are the only ones which come to mind). I think focusing on playing today's games with all the bells and whistles, instead of creating features that will never be used, was the smartest thing ATi could have done.

Cheers,
Nup
 
Re: well

jvd said:
I have my own opinions.


Well, instead of a new DX version every year, maybe we should move it to every 2 years. That way, say DX9 comes out for Xmas of this year: we already have hardware for it (the 9700 Pro), and a year after it comes out we will have hardware that runs its features at fast enough speeds. The following year we get new features, and after that a new DX version.

Also, the feature set of the next API should be known a year before its release, so software makers will know what to target.
1) DX8 was released late 2000. DX9 will be released late this year or early next. That's at least 2 years.

2) DX10 will probably debut with Longhorn sometime in winter 2004/2005 (just a guess). I don't think yearly DX versions are really an issue.
 