The CineFX thread brought up (and buried) an issue that I've spent some time mulling over lately: the differences between interactive and cinematic-style rendering, and how each stresses the rendering system. I'll avoid cluttering up that thread and transfer a couple of relevant quotes here instead.
Reverend said: A sidenote. More and more games are becoming increasingly dependent on the CPU for obtaining maximum framerates. All such games definitely have better gameplay. What is the correlation here?
BoardBonobo said: I would guess games are becoming more CPU dependent because:
a. Physics engines are becoming more precise and are covering the whole scene and all the objects in the world. Most games used to rely on simple collision detection and a discrete way of measuring and plotting the result. Now we have games with variables for wind resistance and flow, and proper particle dynamics for smoke and fog. Lots of things that weren't possible before.
b. The AI engines are becoming 'smarter', and some games seem to be using GAs to evolve better responses rather than following fixed paths. ANNs are also starting to show up in simplistic form, and that's quite an exciting thing to see. When games use GAs to produce ANNs, so that distinct responses can evolve over the course of a game, we'll see emergent behaviour that looks pretty realistic.
Minus points for elitist use of acronyms.
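To put point (a) in concrete terms: where an older engine might play a smoke puff as a canned sprite animation, a scene-wide physics model has to integrate every particle on the CPU, every frame. A minimal sketch of what that involves, with made-up types and constants purely for illustration:

```cpp
// Illustrative only: the names and constants here are invented for this
// sketch, not taken from any real engine.
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct Particle {
    Vec3 pos;
    Vec3 vel;
};

// One explicit Euler step per particle: wind drags the particle toward the
// air's velocity, gravity pulls it down. This runs on the host for every
// particle in the scene, every frame.
void StepParticles(std::vector<Particle>& ps, const Vec3& wind, float dt)
{
    const float drag    = 0.5f;   // crude linear air-resistance coefficient
    const float gravity = -9.8f;  // m/s^2

    for (std::size_t i = 0; i < ps.size(); ++i)
    {
        Particle& p = ps[i];

        Vec3 acc;
        acc.x = (wind.x - p.vel.x) * drag;
        acc.y = (wind.y - p.vel.y) * drag + gravity;
        acc.z = (wind.z - p.vel.z) * drag;

        p.vel.x += acc.x * dt;   p.vel.y += acc.y * dt;   p.vel.z += acc.z * dt;
        p.pos.x += p.vel.x * dt; p.pos.y += p.vel.y * dt; p.pos.z += p.vel.z * dt;
    }
}
```

Ten thousand particles means ten thousand trips through that loop per frame, all before the card gets a single vertex.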
And of course, here on B3D the recent articles on GeForce4 Ti cards showed that none of the games tested were affected by gfx-card limitations until 1280x1024, and as often as not were CPU bound all the way past 1600x1200. The R9700 extends this further still.
This is not a problem in and of itself, quite the contrary. More powerful graphics cards mean we can raise resolutions and add higher quality filtering and anti-aliasing. The resulting increase in visual quality is worthwhile and can be taken advantage of in all games, past, present and future. And other advances open up further improvements in visuals.
But the graphics card is not the whole of the graphics engine, nor is a gfx-demo a game. And this is where the future seems less rosy as far as the development of our interactive virtual worlds goes. In recent material from the major gfx-card players, the 3D pipeline is typically depicted as a row of boxes, and they invariably give the CPU its own single little box, which then passes data to the dozen boxes that follow, where they outline and sell their newest products. However, quite a lot happens in that first box, and as the complexity of the interactive environment grows, so does the load on the host.
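To spell out what I mean by "that first box": before any geometry reaches the card, the host has already run input, AI, physics, animation and visibility for the whole frame. A rough, entirely hypothetical outline of the per-frame CPU work, with placeholder names rather than any particular engine's API:

```cpp
// Rough shape of the host-side work per frame. Every type and function here
// is a placeholder for illustration, not a real engine's interface.
struct World {};
struct Renderer {};
struct VisibleSet {};

void ReadInput(World&) {}                                    // player and network input
void UpdateAI(World&, float) {}                              // pathfinding, decision making
void UpdatePhysics(World&, float) {}                         // collision, dynamics, particles
void UpdateAnimation(World&, float) {}                       // skeletal animation, skinning setup
VisibleSet CullScene(const World&) { return VisibleSet(); }  // frustum/occlusion culling
void SortForBatching(VisibleSet&) {}                         // minimise state changes on the card
void SubmitDrawCalls(Renderer&, const VisibleSet&) {}        // finally feed the GPU

void GameFrame(World& world, Renderer& renderer, float dt)
{
    ReadInput(world);
    UpdateAI(world, dt);
    UpdatePhysics(world, dt);
    UpdateAnimation(world, dt);

    VisibleSet visible = CullScene(world);
    SortForBatching(visible);
    SubmitDrawCalls(renderer, visible);   // only this last step touches the gfx-card
}
```

Everything above that last call scales with scene and gameplay complexity, and none of it is helped by a faster graphics card.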
Future capabilities of the host are fairly easy to predict. They still roughly follow Moore's law, doubling every 18 months. That gives us a factor of ten in five years. The problem is, that is not a whole lot compared to what we would like. The speed of the AGP port evolves even more slowly.
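(For reference, the factor of ten is just the doubling compounded over five years: 60 months / 18 months per doubling ≈ 3.3 doublings, and 2^3.3 ≈ 10.)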
It would seem that the complexity of our interactive environments will be limited not by the capabilities of the gfx-cards, but by those of the host system. Furthermore, many of us do not envision that future games will work just as today, only with more geometry and better shading; rather, the environment will be more interactive, more accurately modeled, and more "alive", for want of a better word. All of which puts even more strain on the host, on top of the growing complexity alone.
Frankly, I can't see anything but a huge bottleneck, but then again I don't know in what directions game engines are moving for the future. So please chip in with any thoughts, info or ideas you have on this issue. Or is the outlook truly as bleak as I see it?
Entropy