On the other side of the AGP port lurks the Nemesis

Entropy

The CineFX thread brought up (and buried) an issue that I've spent some time mulling over lately: the differences between interactive and cinematic-style rendering, and how they stress the rendering system. I'll avoid cluttering up that thread and transfer a couple of relevant quotes here instead.
Reverend said:
A sidenote. More and more games are becoming increasingly dependent on the CPU for obtaining maximum framerates. All such games definitely have better gameplay. What is the correlation here?

BoardBonobo said:
I would guess games are becoming more CPU dependent because:

a. Physics engines are becoming more precise and are covering the whole scene and all the objects in the world. Most games would just use collision detection and a discrete way of measuring and plotting the result. Now we have games that contain variables for wind resistance and flow, and proper particle dynamics for smoke and fog. Lots of things that weren't possible before.

b. The AI engines are becoming 'smarter', and some games seem to be using GAs to evolve better responses rather than relying on fixed paths. ANNs are also starting to make an appearance in simplistic form, and that's quite an exciting thing to see. When games use GAs to produce ANNs, so that distinct responses can evolve over the course of a game, we'll see emergent behaviour that looks pretty realistic.

Minus points for elitist use of acronyms. ;)
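To put point (a) in more concrete terms, here is a minimal sketch (entirely my own, with made-up constants and particle layout) of the per-particle work a wind-plus-drag smoke simulation does on the host every frame:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Particle { Vec3 pos, vel; };

// One simulation step over every particle. The cost scales with particle count,
// and all of it lands on the host CPU every frame.
static void stepParticles(std::vector<Particle>& ps, Vec3 wind, float drag, float dt)
{
    for (Particle& p : ps) {
        // Drag accelerates the particle toward the local wind velocity.
        Vec3 rel = { wind.x - p.vel.x, wind.y - p.vel.y, wind.z - p.vel.z };
        Vec3 acc = { rel.x * drag, rel.y * drag, rel.z * drag };
        acc.y -= 9.81f;                              // gravity

        // Explicit Euler integration.
        p.vel.x += acc.x * dt;  p.vel.y += acc.y * dt;  p.vel.z += acc.z * dt;
        p.pos.x += p.vel.x * dt; p.pos.y += p.vel.y * dt; p.pos.z += p.vel.z * dt;
    }
}

int main()
{
    std::vector<Particle> smoke(20000);              // a modest smoke plume
    const Vec3 wind = { 2.0f, 0.0f, 0.0f };
    for (int frame = 0; frame < 60; ++frame)         // one second at 60 fps
        stepParticles(smoke, wind, /*drag*/ 0.5f, 1.0f / 60.0f);
    return 0;
}
```

Twenty thousand particles at sixty frames a second is over a million integrations per second before the renderer has seen a single vertex.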

And of course, here on B3D the recent articles on GeForce4 Ti cards showed that none of the games tested were affected by gfx-card limitations until 1280x1024, and as often as not were CPU bound all the way past 1600x1200. The R9700 extends this further still.

This is not a problem in and of itself, quite the contrary. More powerful graphics cards mean we can raise resolutions and add higher quality filtering and anti-aliasing. The resulting increase in visual quality is worthwhile and can be taken advantage of in all games, past, present and future. And other advances open up further improvements in visuals.

But the graphics card is not the whole of the graphics engine, nor is a gfx demo a game. And this is where the future seems less rosy as far as the development of our interactive virtual worlds goes. In recent material from the major gfx-card players, the 3D pipeline is typically depicted as a row of boxes, and they invariably give the CPU its own single little box, which then passes data on to the following dozen boxes where they outline and sell their newest products. However, quite a lot happens in that first box, and as the complexity of the interactive environment grows, so does the load on the host.

Future capabilities of the host are fairly easy to predict. They still roughly follow Moore's law, doubling every 18 months; five years is a little over three doublings, which gives us a factor of about ten. The problem is, that is not a whole lot compared to what we would like. The speed of the AGP port evolves even more slowly.

It would seem that the complexity of our interactive environments will be limited not by the capabilities of the gfx cards, but by those of the host system. Furthermore, many of us do not envision that future games will work just as today's do, only with more geometry and better shading, but that the environment will be more interactive, more accurately modeled, and more "alive", for want of a better word. All of which puts even more strain on the host, on top of the growing complexity alone.

Frankly, I can't see anything but a huge bottleneck, but then again I don't know in what directions game engines are heading. So please chip in with any thoughts, info or ideas you have on this issue. Or is the outlook truly as bleak as I see it?

Entropy
 
I sort of touched on this in the console forum a while back.
The problem is the target platforms that PC developers are aiming at. They usually target a relatively powerful CPU paired with a very underpowered graphics card; as a result, they tend to do a lot of work on the graphics data with the CPU in order to minimise the load on the GPU.

Here's an example of something that came up in a game I worked on recently. Drawing a shadow required redrawing a largish object (about 2000 tris), but only about 6 of those tris would actually have any pixels redrawn shadowed. On a PC targeting a TNT or GeForce 1 you'd probably process all those tris with the CPU, and only resubmit the ones that could receive the shadow. On Xbox I ended up just dumping the entire object back to the GPU, because the GPU could reprocess all the verts in about 1% of the time it took the CPU to determine whether they should be resubmitted.
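Purely for illustration, here is a rough, self-contained sketch of the two paths; the types and the submitToGpu() stub are hypothetical stand-ins, not anything from the actual engine:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
struct Tri  { Vec3 v[3]; };
struct Aabb { Vec3 min, max; };   // bounds of the projected shadow

// Conservative test: does the triangle's bounding box overlap the shadow bounds?
static bool triTouchesShadow(const Tri& t, const Aabb& s)
{
    Vec3 lo = t.v[0], hi = t.v[0];
    for (int i = 1; i < 3; ++i) {
        lo.x = std::min(lo.x, t.v[i].x); hi.x = std::max(hi.x, t.v[i].x);
        lo.y = std::min(lo.y, t.v[i].y); hi.y = std::max(hi.y, t.v[i].y);
        lo.z = std::min(lo.z, t.v[i].z); hi.z = std::max(hi.z, t.v[i].z);
    }
    return lo.x <= s.max.x && hi.x >= s.min.x &&
           lo.y <= s.max.y && hi.y >= s.min.y &&
           lo.z <= s.max.z && hi.z >= s.min.z;
}

// Stand-in for handing a batch of triangles to the driver/GPU.
static void submitToGpu(const std::vector<Tri>& tris)
{
    std::printf("submitted %zu tris\n", tris.size());
}

// Path A (TNT / GeForce 1 class target): spend CPU time walking every triangle
// and resubmit only the few that can actually receive shadowed pixels.
static void drawShadowCpuCulled(const std::vector<Tri>& mesh, const Aabb& shadow)
{
    std::vector<Tri> receivers;
    for (const Tri& t : mesh)              // ~2000 iterations of host work per object
        if (triTouchesShadow(t, shadow))
            receivers.push_back(t);        // in the case above, only ~6 would survive
    submitToGpu(receivers);
}

// Path B (Xbox class GPU): skip the per-triangle CPU test entirely and let the
// GPU re-transform the whole object; its vertex throughput makes that cheaper.
static void drawShadowResubmitAll(const std::vector<Tri>& mesh)
{
    submitToGpu(mesh);
}

int main()
{
    std::vector<Tri> mesh(2000);                     // placeholder geometry
    Aabb shadow = { {0, 0, 0}, {1, 1, 1} };
    drawShadowCpuCulled(mesh, shadow);
    drawShadowResubmitAll(mesh);
    return 0;
}
```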

The problem is that in a graphics engine that touches individual verts, high polygon counts mean you simply run out of CPU memory bandwidth (most of which gets eaten up by latency). So even on very fast processors you end up CPU constrained.
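A crude way to see this is to touch the same vertex array once in order and once through a shuffled index list; the arithmetic is identical, but the scattered pass is typically several times slower because nearly every access misses the cache. A hedged sketch, with made-up sizes and vertex layout:

```cpp
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

struct Vertex { float pos[3], normal[3], uv[2]; };   // 32 bytes per vertex

int main()
{
    const std::size_t count = 1 << 20;               // ~1M vertices, ~32 MB
    std::vector<Vertex> verts(count);

    std::vector<std::uint32_t> linear(count);
    std::iota(linear.begin(), linear.end(), 0u);
    std::vector<std::uint32_t> scattered = linear;
    // Shuffle the visit order to defeat the prefetcher, much as irregular
    // mesh traversal does.
    std::shuffle(scattered.begin(), scattered.end(), std::mt19937(42));

    auto touch = [&](const char* label, const std::vector<std::uint32_t>& order) {
        float acc = 0.0f;
        auto t0 = std::chrono::steady_clock::now();
        for (std::uint32_t i : order)
            acc += verts[i].pos[0];                  // one trivial read per vertex
        auto t1 = std::chrono::steady_clock::now();
        std::printf("%s: %.2f ms (acc=%f)\n", label,
            std::chrono::duration<double, std::milli>(t1 - t0).count(), acc);
    };

    touch("linear   ", linear);     // streaming access: limited by bandwidth
    touch("scattered", scattered);  // random access: limited by memory latency
    return 0;
}
```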
 
It seems to me that, the way the technology is evolving, we are already working around the problem. It is only graphics-related data that is really a concern for the AGP bottleneck, and moving more and more capabilities onto the graphics chip decreases the amount of data that needs to be sent in real time, even as rendering detail and data volumes increase.

EDIT: To expand and clarify: as we advance along this path, the previous technology will migrate downwards, also alleviating the "minimum spec" problem mentioned above... the hardware exists to work around that stated problem, it is just a matter of it becoming the common denominator.
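As a sketch of what keeping data off the per-frame path amounts to, assuming a hypothetical GpuBuffer handle in place of whatever the real API exposes: the full vertex set crosses AGP once at load time, and each frame only a handle and a few constants go over the bus.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Vertex { float pos[3], normal[3], uv[2]; };   // 32 bytes per vertex

// Pretend handle to a buffer living in video memory; the real thing would be
// whatever the driver/API hands back.
struct GpuBuffer { std::uint32_t id; std::size_t bytes; };

// Hypothetical upload: the full vertex set crosses the bus once, at load time.
static GpuBuffer uploadStaticMesh(const std::vector<Vertex>& verts)
{
    std::size_t bytes = verts.size() * sizeof(Vertex);
    std::printf("load time: %zu bytes across the bus, once\n", bytes);
    return GpuBuffer{1, bytes};
}

// Hypothetical per-frame draw: only a handle and a few constants cross the bus.
static void drawMesh(const GpuBuffer& mesh, const float modelMatrix[16])
{
    (void)modelMatrix;
    std::printf("per frame: buffer id %u plus a 64-byte matrix\n", (unsigned)mesh.id);
}

int main()
{
    std::vector<Vertex> verts(100000);               // placeholder geometry, ~3.2 MB
    GpuBuffer resident = uploadStaticMesh(verts);

    const float identity[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
    for (int frame = 0; frame < 3; ++frame)          // per-frame traffic stays tiny
        drawMesh(resident, identity);
    return 0;
}
```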
 