CPU / Graphics Load Balancing

Dave Baumann

Reuters has an article talking to ATI's director of software, Ben Bar-Haim - in it, this crops up:

Reuters said:
'With a game like Jedi Academy you will get exactly the same speed with a Radeon X800 and a Radeon 9800, which has half the number of transistors, because it is the CPU that is holding the performance back.'

Source: http://www.forbes.com/technology/feeds/infoimaging/2004/11/12/infoimagingvnunet_2004_11_12_eng-vnunet_eng-vnunet_022509_6370189890051010114.html?partner=yahoo&referrer=

Some tasks may be done by either processor, leaving the driver to decide which to use.

'We call it load balancing,' said Bar-Haim. 'If there is a very fast CPU we might move some of the functionality to it.

'Some of the decisions are made on the fly, some in advance. Some will depend on the load on the CPU. The software can be very clever, finding out how much load there is on the CPU.'

'We might see more of that in the future, seeing the GPU or VPU [video processing unit - the card] as a co-processor... We will start to look at the whole system, how we can optimise it to get the best performance.'

I wasn't aware of ATI doing this before, and I'm wondering to what extent they are doing it. At the moment, especially given the comments about Jedi Academy, this would appear to be a one-way process, with some vertex shader work being pushed back to the CPU if the graphics chip is a little preoccupied - and it would seem less able to occur when Vertex Texturing is used.
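For illustration, here's a minimal sketch (in C++) of the kind of per-batch decision a driver might make - every name and threshold below is invented, not anything from ATI's actual driver:

// Hypothetical sketch of the kind of decision a driver might make.
// None of these names come from ATI's driver; they are placeholders.
#include <cstddef>

enum class VertexPath { Gpu, CpuFallback };

struct BatchInfo {
    std::size_t vertexCount;
    bool usesVertexTexturing;   // vertex texture fetch can only run on the GPU
};

struct SystemLoad {
    float cpuIdleFraction;      // 0.0 = fully busy, 1.0 = fully idle
    float gpuVertexQueueDepth;  // how backed up the GPU's vertex engines are
};

// Decide, per batch, whether vertex work stays on the GPU or is pushed
// back to the CPU.  Pure illustration: the thresholds are made up.
VertexPath chooseVertexPath(const BatchInfo& batch, const SystemLoad& load) {
    if (batch.usesVertexTexturing)
        return VertexPath::Gpu;                 // CPU path can't emulate this cheaply
    if (load.cpuIdleFraction > 0.5f && load.gpuVertexQueueDepth > 0.8f)
        return VertexPath::CpuFallback;         // GPU is the bottleneck, CPU has headroom
    return VertexPath::Gpu;
}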
 
Maybe he misspoke, though? I'm just saying it might be premature to draw too many conclusions from something that might not even be accurate.
 
Didn't NV do this with a driver update fairly early on in the GF3's life, when the score for the 8-light 3DMk2001 test suddenly jumped by something like 50%? People at least speculated that they did vertex processing on the CPU as well, but I don't think it was ever confirmed either way - and if it was true, whether it was merely for that subtest or a general solution...
 
Well, that gives a whole new meaning to CPU-bound. Will all new B3D graphics card reviews be done with Pentium II class cpus to remove this factor?

I wonder if an app could be written to detect and/or block that kind of request to the cpu?
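Something along these lines could at least measure it from the application side - a rough sketch using only standard C++, on the assumption that a real tool would use OS-specific performance counters instead:

// Rough sketch of how an application might estimate per-frame CPU usage,
// e.g. to notice when the driver starts doing heavy work on the CPU.
#include <chrono>
#include <cstdio>
#include <ctime>

int main() {
    using clock = std::chrono::steady_clock;

    for (int frame = 0; frame < 100; ++frame) {
        std::clock_t cpuStart = std::clock();     // process CPU time
        auto wallStart = clock::now();

        // ... render one frame here ...

        double cpuMs  = 1000.0 * (std::clock() - cpuStart) / CLOCKS_PER_SEC;
        double wallMs = std::chrono::duration<double, std::milli>(clock::now() - wallStart).count();

        // A ratio near 1.0 means the frame was spent almost entirely on the CPU.
        if (wallMs > 0.0)
            std::printf("frame %d: %.1f%% of frame time on CPU\n",
                        frame, 100.0 * cpuMs / wallMs);
    }
    return 0;
}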

Actually, Dave, it might be interesting to try the current top contenders on the slowest cpu you can find on a motherboard that will support them, and compare the relative results against the top-end (or near it) cpus they are usually paired with.
 
Oh, and isn't it amusing how far we've come. . .it used to be one of the chief reasons put forward in favor of brawny discrete graphics was to offload the cpu so it could do the things it does best, like AI and such. . .and now as our graphics are getting faster and faster, we're pushing work back at the cpu. Something wrong there philosophically.
 
geo said:
Oh, and isn't it amusing how far we've come. . .it used to be one of the chief reasons put forward in favor of brawny discrete graphics was to offload the cpu so it could do the things it does best, like AI and such. . .and now as our graphics are getting faster and faster, we're pushing work back at the cpu. Something wrong there philosophically.

Well, you can pump graphics up to levels where even modern graphics cards will choke, but you generally can't push up physics and AI calculations enough to make a top-of-the-line cpu drop even below 60 fps, so why not use that extra power?
 
DaveBaumann said:
Ostsol said:
Might it be related to that "HyperMemory" thing that ATI announced?

Read the Radeon XPRESS 200 reviews - HyperMemory is a method for addressing from either a local frame buffer, or from system memory.
Yeah, there is that, but couldn't the HyperMemory concept be expanded into managing memory usage in such a way that certain data - data whose processing is more likely to be performed on the CPU, for example - is stored in system memory?
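As a purely hypothetical illustration of that placement idea (none of these names are HyperMemory's real API, just invented types):

// Place data the CPU is likely to touch in system memory,
// and GPU-only data in local video memory.
enum class Pool { LocalVideoMemory, SystemMemory };

struct Resource {
    bool cpuWillProcess;   // e.g. vertex data the driver may transform on the CPU
    bool readEveryFrame;   // bandwidth-critical data wants local memory
};

Pool choosePool(const Resource& r) {
    if (r.cpuWillProcess)
        return Pool::SystemMemory;        // cheap for the CPU to read and write
    if (r.readEveryFrame)
        return Pool::LocalVideoMemory;    // keep the GPU's hot data on the card
    return Pool::SystemMemory;            // spill everything else over the bus
}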
 
Fox5 said:
geo said:
Oh, and isn't it amusing how far we've come. . .it used to be one of the chief reasons put forward in favor of brawny discrete graphics was to offload the cpu so it could do the things it does best, like AI and such. . .and now as our graphics are getting faster and faster, we're pushing work back at the cpu. Something wrong there philosophically.

Well, you can pump graphics up to levels where even modern graphics cards will choke, but you generally can't push up physics and AI calculations enough to make a top-of-the-line cpu drop even below 60 fps, so why not use that extra power?

Are you sure that games can't push AI and physics demands up, or that they haven't done so because technology that offloaded significant graphics work from the CPU is relatively new?

I feel it's a step towards stagnation to assume CPUs can't be used on more intense non-graphics processing.
Considering how inconsistent modern games sometimes are when it comes to AI and physics, I feel it's a step in the wrong direction to assume pumping up the intensity of non-graphical calculation can't be done.
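For what it's worth, scaling the non-graphics side to the CPU you actually have doesn't need to be exotic - a minimal sketch, with invented names and thresholds:

// Give the simulation a per-frame time budget and add detail
// (extra physics iterations) only while the budget holds.
#include <chrono>

struct SimSettings {
    int physicsIterations = 4;    // baseline that every machine must handle
    int maxIterations     = 16;
};

// Called once per frame with how long the last simulation step took.
void adaptSimulation(SimSettings& s, std::chrono::microseconds lastStep,
                     std::chrono::microseconds budget) {
    if (lastStep < budget / 2 && s.physicsIterations < s.maxIterations)
        ++s.physicsIterations;    // plenty of headroom: spend it on fidelity
    else if (lastStep > budget && s.physicsIterations > 4)
        --s.physicsIterations;    // over budget: back off toward the baseline
}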
 
3dilettante said:
Fox5 said:
geo said:
Oh, and isn't it amusing how far we've come. . .it used to be one of the chief reasons put forward in favor of brawny discrete graphics was to offload the cpu so it could do the things it does best, like AI and such. . .and now as our graphics are getting faster and faster, we're pushing work back at the cpu. Something wrong there philosophically.

Well, you can pump graphics up to levels where even modern graphics cards will choke, but you generally can't push up physics and AI calculations enough to make a top-of-the-line cpu drop even below 60 fps, so why not use that extra power?

Are you sure that games can't push AI and physics demands up, or that they haven't done so because technology that offloaded significant graphics work from the CPU is relatively new?

I feel it's a step towards stagnation to assume CPUs can't be used on more intense non-graphics processing.
Considering how inconsistent modern games sometimes are when it comes to AI and physics, I feel it's a step in the wrong direction to assume pumping up the intensity of non-graphical calculation can't be done.

Well, it can be pumped up, but I'd imagine it's harder to do than going to a higher resolution or putting in higher-res textures. Besides, can you really make any critical part of the game better based on someone having a better hardware configuration? It could change the experience of the game, and it definitely wouldn't work while playing multiplayer (unless it's something unimportant like a death animation - how a vehicle crashes and then bounces around couldn't be scaled, if the wreckage could affect other players).
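That cosmetic-versus-gameplay split is straightforward to express, though - a hypothetical sketch with made-up types, not from any real engine:

// Detail that only looks different can scale with the local machine,
// while anything that affects other players must be identical for everyone.
struct EffectDesc {
    bool affectsGameplay;   // e.g. wreckage that blocks or damages players
    int  baseParticleCount;
};

int particleCountFor(const EffectDesc& e, float localCpuHeadroom /* 0..1 */) {
    if (e.affectsGameplay)
        return e.baseParticleCount;     // must match on every client
    // Cosmetic-only effects (death animations, debris) can scale freely.
    return static_cast<int>(e.baseParticleCount * (1.0f + 3.0f * localCpuHeadroom));
}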
 
Well, if you are going to make your drivers support rendering over multiple video cards and virtual memory, why not scale over cpus as well? The driver developers already have hyperthreading to work with, and soon they will have multiple cpus.
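A rough sketch of what "scale over cpus" could look like at its simplest, using standard C++ threads (a real driver would do this very differently):

// Split a software vertex transform across however many hardware
// threads the machine reports.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

void transformRange(float* verts, std::size_t begin, std::size_t end) {
    for (std::size_t i = begin; i < end; ++i)
        verts[i] *= 2.0f;   // stand-in for a real vertex transform
}

void transformAll(std::vector<float>& verts) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    std::size_t chunk = (verts.size() + n - 1) / n;
    for (unsigned t = 0; t < n; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end   = std::min(verts.size(), begin + chunk);
        if (begin < end)
            workers.emplace_back(transformRange, verts.data(), begin, end);
    }
    for (auto& w : workers) w.join();
}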
 
Very Interesting,

I just ran some benchmarks on EQ2 on a 1.6GHz AMD Athlon. A 9500 Pro 128MB, an oc'd 9500 Pro, and a 9800 256MB all performed the same, so it looks like I am close to 100% CPU bound. I hope to get my wife a Sempron 3100+ and crank it up to Athlon 64 3400+ speeds, but for a while she will still have a GeForce Ti4200. It would be very cool if any extra CPU cycles could be put to work on video rendering until I get her a new card.

Sounds cool,
Dr. Ffreeze
 