Will multicore take over 3D rendering?

Let me explain my point of view. As we all know, a lot of CPU cores can be good for rendering. So if you consider the addition of GPU-like (but not as powerful as a normal GPU) instructions to the x86 ISA, plus the large number of cores on future CPUs, you might start thinking the way I am thinking. I heard there is an analyst event during December, and there are rumours that more Fusion info will be unveiled there.
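To make the "lots of cores can be good for rendering" point concrete, here is a minimal sketch of why rendering scales with core count; the shade() function is a made-up placeholder for real per-pixel work, not any actual renderer's code:

```python
# Rendering is embarrassingly parallel: every scanline can be shaded
# independently, so throughput scales almost linearly with core count.
from multiprocessing import Pool

WIDTH, HEIGHT = 640, 480

def shade(x, y):
    # Hypothetical stand-in for real per-pixel work
    # (ray casts, shading, texture lookups, ...).
    return (x * 31 + y * 17) % 256

def render_scanline(y):
    # No scanline depends on any other, so they can run on any core.
    return [shade(x, y) for x in range(WIDTH)]

if __name__ == "__main__":
    with Pool() as pool:  # defaults to one worker per CPU core
        framebuffer = pool.map(render_scanline, range(HEIGHT))
    print(f"rendered {len(framebuffer)} scanlines")
```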
 

Instructions themselves aren't the whole picture. The number of functional units that a core can fit into a given area is very important as well.

Unless the CPU cores can fit close to as many floating-point units per mm² as a GPU can, there will always be a gap.

A current CPU is lucky to fit more than 4 FP units in ~200 mm².

A G80 packs at least 128 units, not including the large number of specialized units and texturing hardware.

A G80 is a fair bit larger, but it isn't 32x the size of a CPU.
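Putting rough numbers on that gap (a back-of-envelope sketch; the ~480 mm² G80 die size is an assumed ballpark figure, not an official spec):

```python
# FP-unit density, using the thread's figures: ~4 FP units in ~200 mm²
# for a CPU, vs. 128 stream processors in an assumed ~480 mm² for G80.
cpu_units, cpu_area = 4, 200.0      # units, mm²
gpu_units, gpu_area = 128, 480.0    # units, mm² (ballpark)

cpu_density = cpu_units / cpu_area
gpu_density = gpu_units / gpu_area

print(f"CPU: {cpu_density:.3f} FP units/mm²")
print(f"GPU: {gpu_density:.3f} FP units/mm²")
print(f"GPU packs ~{gpu_density / cpu_density:.0f}x more FP units per mm²")
```

Even being generous to the CPU, the GPU comes out around an order of magnitude denser, and that is the gap instructions alone can't close.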

Just having instructions lying around isn't enough. If the CPUs aren't also crammed so full of units that they pretty much cease to be x86 CPUs, then they won't compete.

By that point, they're just GPUs that happen to run off an x86 ISA, and we're just throwing names around.
 

All the proposed solutions provide a more expensive answer to the question.

GPU-like instructions? What will that net you in a Vista environment? I don't think anyone is waiting for an extended instruction set on modern CPUs just for the sake of... what, exactly?
And limiting the instruction set... wouldn't that bring it down to an ASIC level?

And yes, Fusion: everyone here can see the IGP part moving to the CPU, with the obvious benefits and the inherent drawbacks. I would love to see a 200 W GPU tacked onto a quad-core CPU and watch everything fail to impress because of bandwidth limitations on current PC memory, or someone propose placing both DDR2 and GDDR4 on the same motherboard.
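To put the bandwidth point in numbers, here is a quick sketch; the dual-channel DDR2-800 and G80-class GDDR figures below are approximate, era-typical specs, not measurements:

```python
# Rough memory-bandwidth comparison: desktop dual-channel DDR2-800
# vs. a high-end GPU's wide GDDR bus (G80-class, 384-bit at ~1.8 GT/s).
def bandwidth_gb_s(bus_bits, transfers_mt_s):
    # bytes/s = (bus width in bytes) * (transfers per second)
    return (bus_bits / 8) * transfers_mt_s * 1e6 / 1e9

ddr2_dual = bandwidth_gb_s(128, 800)   # 2 x 64-bit channels, 800 MT/s
gddr_gpu = bandwidth_gb_s(384, 1800)   # 384-bit bus, 1800 MT/s

print(f"dual-channel DDR2-800: {ddr2_dual:.1f} GB/s")
print(f"G80-class GDDR3:       {gddr_gpu:.1f} GB/s")
print(f"ratio: ~{gddr_gpu / ddr2_dual:.0f}x")
```

A GPU in the CPU socket fed by the first number instead of the second is exactly the "fails to impress" scenario.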
 
I'll have to admit that you guys convinced me, especially 3dilettante.


We at B3D are all glad to be of service. We've seen people walk away after a conversation like this; they don't even put up a Don Quixote fight.
 