Best % split between CPU and GPU?

Seiko

Newcomer
With CPU prices so low now compared with GPUs, I'm just wondering whether developers should try to split the load between CPU and GPU a little differently?

An obvious comparison can be made between Q3 and UT/UT2003.
Q3 to me represents a perfect example of placing the engine's resource requirements firmly on the shoulders of the GPU.

UT/UT2003, on the other hand, places its resource requirements firmly on the CPU.

Until recently UT seemed a perfect balance. Then I dropped the 8500 (not exactly a cheap card at the time) into the rig and bang: Q3 runs amazingly well, in some cases 75% quicker. Poor old UT however received no boost at all :( Of course it didn't help that I was previously using a V5 and Glide, so although the 8500 allowed me to enable aniso for free in FPS terms, the overall feel and speed of the game remained the same.

I just wondered how people here would like to see the split in future games. If we hope that the CPU will be freed to handle AI, physics etc., then just what should be deemed a baseline system? UT2003 seems to be asking quite a lot, and yet in terms of new gameplay I found it rather lacking.
Simple things like no collision detection on your weapons, no terrain damage or involvement, etc. all left me feeling rather unimpressed.
 
I think we have to wait for the final UT2003 to understand it better.

From Anandtech's two articles (I don't have the beta) it looks like a GPU-bound game.

IMHO the right approach is to focus the effort on the GPU. GPUs have a lot of unused power, and some are much cheaper than a new CPU/memory/mobo/cooler/case/PSU upgrade.

How many people out there are still running 1GHz CPUs?

I would rather see some elegant use of the GPU's power, combined with great art/design ;)
 
Given $300, I'd probably spend $200 on the graphics and $100 on the CPU. But given $500, I'd probably spend $300 on the graphics and $200 on the CPU.

It depends on what is available, though. Right now, the $100 and $150 cards (8500 and Ti4200) are excellent price points. The way the market is at this very moment, a $150/$150 split might make more sense.
 
pascal said:
IMHO the right approach is to focus the effort on the GPU.
The unfortunate thing here is the OEMs still selling TNT M64s alongside their Top-O-Da-Line(tm) processors. But screw them, is my opinion.
 
Graphics cards are now way more important for games than the CPU. You can easily get away with spending half or less on the CPU compared to the GPU.

Don't expect this to change in the future either. With more programmable GPUs, performance is going to rely more and more on graphics cards and less on the CPU. In a way, Intel and AMD should be concerned, because with the cutting-edge cards ATI and Nvidia are making, their CPUs are going to be somewhat obsoleted in terms of gaming. Since one of the only reasons to buy a fast CPU is gaming (no need for a fast CPU for word processing or e-mail), this could potentially be a serious problem for them in the future.
 
I would think, though, that as GPUs get more and more powerful and capable of taking the majority of the strain off the CPU, developers would then be able to use all the extra processing power available to do all kinds of extra funky AI and physics.

Bayesian or Hopfield-based nets plugged into reinforcement-learning black boxes using decision theory to outguess the player. A game so smart that after you've made your first move it's already worked out exactly what you'll do and which path you'll take right to the end. And then told the GPU, which has pre-rendered the whole thing and worked out the exact physics of the world to that point. Meanwhile the soundcard has been told by the GPU what materials are involved in each scene and has already produced a personalised soundtrack and a completely realistic 3D sound environment. And finally it's been informed by the fridge that you're out of milk and puts hints to this fact into the plot.

A world so realistic and indistinguishable from your normal one that when you exit the game and switch the computer off, all the lights go out... everywhere.

But seriously, I think that programmers will always find a use for the power provided by the CPU, regardless of how much of the rendering/sound engine is processed by dedicated hardware. Think William Gibson...
 
BoardBonobo said:
I would think, though, that as GPUs get more and more powerful and capable of taking the majority of the strain off the CPU, developers would then be able to use all the extra processing power available to do all kinds of extra funky AI and physics.

While this is true, it's going to take a major paradigm shift on the part of game programmers, and it might take a while for them to come around. Let's face it, the AI in many games even today is still pretty basic. Epic seems to be pushing the envelope in terms of bot AI (UT's bots were pretty realistic for their era, although they were still stupid in CTF); it remains to be seen if id is going to follow suit. I hope Doom3 doesn't end up being eye candy with demons as dumb as rocks, but I'm kind of expecting it to be that way.
 
Q3 to me represents a perfect example of placing the engine's resource requirements firmly on the shoulders of the GPU.

I disagree; I think Q3 is an excellent example of a game that quite nicely distributes the workload between the CPU and GPU. The nice thing about Q3 is that with a slow CPU you simply need to tone down the geometric detail, and with a slow graphics card you simply need to tone down the texture detail. I find it rather easy to get the game to run on most any system that meets the requirements. Unlike Unreal's system requirements, which were completely pulled out of someone's ass, and I believe UT falls in a similar vein. I believe UT2003 finally changes this, with the GPU having a much larger role, though I'm not sure if it'll be as responsive to upgrades as Q3 is.
 
Seiko said:
With CPU prices so low now compared with GPUs, I'm just wondering whether developers should try to split the load between CPU and GPU a little differently?

There are things that you can only do on the CPU (with today's hardware, anyway).
I mean things like collision detection, physics simulation and AI.
Future games will increase the complexity of these.

But this has hardly anything to do with graphics or the GPU.

Most games are fillrate bound, not vertex-processing bound. So doing some vertex calculation on the CPU doesn't necessarily help; it would only jam the AGP bus. (One of the advantages of the vertex shader is that you don't have to push the computed vertex data through the AGP bus every frame.)
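
To put rough numbers on that point, here's a back-of-the-envelope sketch in C++ (the mesh size, vertex layout and frame rate are made-up figures, purely for illustration):

```cpp
// Back-of-the-envelope comparison of per-frame AGP traffic when vertices
// are transformed on the CPU versus by a vertex shader on the card.
// The mesh size, vertex layout and frame rate are made-up numbers.
#include <cstdio>

int main()
{
    const int vertices      = 100000;  // assumed scene size
    const int bytes_per_vtx = 32;      // position + normal + uv, roughly
    const int fps           = 60;

    // CPU path: every transformed vertex is pushed across the bus each frame.
    double cpu_path_mb_per_s =
        double(vertices) * bytes_per_vtx * fps / (1024.0 * 1024.0);

    // Vertex shader path: the mesh sits in video memory; only a 4x4 matrix
    // (16 floats) of shader constants crosses the bus each frame.
    double shader_path_kb_per_s = 16.0 * sizeof(float) * fps / 1024.0;

    std::printf("CPU-transformed: %.1f MB/s over AGP\n", cpu_path_mb_per_s);
    std::printf("Vertex shader:   %.2f KB/s over AGP\n", shader_path_kb_per_s);
    return 0;
}
```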

So to help, the CPU has to reduce the fillrate!
This is possible with analytical visibility (occlusion) calculations.
It is a complex subject, but this is probably what you're looking for...
Think BSP trees, portal engines, etc.
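
As a rough illustration of the portal idea, a minimal sketch (the Room/Portal/Frustum types and the stub helpers are invented for illustration, not taken from any real engine):

```cpp
// Minimal portal-culling sketch: the CPU walks the room graph and only
// submits geometry for rooms reachable through visible portals, so the
// card never spends fillrate on hidden areas.
#include <vector>

struct Portal  { int to_room; /* plus the portal polygon in a real engine */ };
struct Room    { std::vector<Portal> portals; bool visited = false; };
struct Frustum { /* clip planes, shrunk as we pass through each portal */ };

// Stub: a real engine would clip the frustum against the portal polygon.
bool portal_visible(const Frustum&, const Portal&) { return true; }

// Stub: hand the room's polygons to the graphics card.
void submit_room_geometry(int /*room_index*/) {}

void render_visible(std::vector<Room>& rooms, int current, const Frustum& frustum)
{
    if (rooms[current].visited) return;  // don't walk in circles
    rooms[current].visited = true;

    submit_room_geometry(current);       // only visible rooms reach the card

    for (const Portal& p : rooms[current].portals)
        if (portal_visible(frustum, p))
            render_visible(rooms, p.to_room, frustum);
}
```

The point is that a little CPU time spent walking the room graph saves the card from rasterizing areas the player can't see.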

One pitfall is rushing into too much "helping" of the GPU and ending up with a CPU-bound game. (Like MOHAA with its LOD system.)
 
I think in the near future the CPU's only graphics-related task will be compiling code written in some high-level shading language (RenderMonkey, Cg, or anything) for the GPU at runtime.
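
Something like this, sketched from memory of the early Cg runtime (treat the exact function names and signatures as assumptions to be double-checked against NVIDIA's SDK docs):

```cpp
// Rough outline: load high-level shader source, let the CPU compile it for
// whatever profile the installed card supports, then hand it to the GPU.
#include <Cg/cg.h>
#include <Cg/cgGL.h>
#include <cstdio>

CGprogram compile_at_runtime(CGcontext ctx, const char* source)
{
    // Ask the driver for the best vertex profile this card supports.
    CGprofile profile = cgGLGetLatestProfile(CG_GL_VERTEX);

    // The CPU compiles the source for that profile at runtime...
    CGprogram prog = cgCreateProgram(ctx, CG_SOURCE, source,
                                     profile, "main", 0);
    if (!prog) {
        const char* listing = cgGetLastListing(ctx);
        std::printf("Cg compile failed:\n%s\n", listing ? listing : "(none)");
        return 0;
    }

    // ...and from here on the shader itself runs on the GPU; the CPU's
    // per-frame graphics work shrinks to setting parameters.
    cgGLLoadProgram(prog);
    return prog;
}
```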
 
opy said:
I think in the near future the CPU's only graphics-related task will be compiling code written in some high-level shading language (RenderMonkey, Cg, or anything) for the GPU at runtime.

I think whilst the CPU is still passing the verts, developers really need to keep the vertex count to a low level. At least that way, by reducing the graphics level/texture quality etc., we can ensure machines can still deliver a playable level of performance.
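
For example, a minimal sketch of distance-based LOD selection, one common way to keep the per-frame vertex count under control (the Mesh/Model structs and the distance thresholds are invented for illustration):

```cpp
// Minimal distance-based LOD selection: pick a cheaper mesh as a model
// gets further from the camera, with a global "reduce detail" knob for
// low-end machines. Structures and thresholds are invented for illustration.
#include <cstddef>
#include <vector>

struct Mesh  { int vertex_count; /* vertex data lives elsewhere */ };
struct Model { std::vector<Mesh> lods; };  // lods[0] = full detail, last = cheapest

const Mesh& select_lod(const Model& m, float distance, bool low_end_machine)
{
    std::size_t level = 0;                 // assumes m.lods is not empty
    if (distance > 20.0f) level = 1;
    if (distance > 60.0f) level = 2;
    if (low_end_machine) ++level;          // drop everything one level

    if (level >= m.lods.size())
        level = m.lods.size() - 1;         // clamp to the cheapest mesh
    return m.lods[level];
}
```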

Of course, the newer cards with RT/N-patches can help with model complexity etc., and with vertex shader power increasing it's a shift and ruleset I'd like to see.

To have a game that simply can't run without a 2GHz CPU/266MHz DDR memory subsystem is going the wrong way, in my opinion.

A disappointed CeleronII 850 system owner. ;)
 