Would a GPU+SSD combo card make sense?

Oh OK, I guess it would have to be faster than current SSDs by an order of magnitude or more, but still somehow a lot cheaper than just having a whole lot more GDDR5 or what have you. Maybe it doesn't make as much sense as I was thinking.
 
Lol no. GPUs are not (yet?) general-purpose devices. They still require a lot of work on the CPU to feed them data and commands in a specific format before they can process anything. You could have done something like that with Larrabee, but alas, it was not meant to be.

Furthermore, even if you could do it, it would run *very* slowly. GPUs are not good at a lot of the kinds of work games have to do, and that would bottleneck the whole thing even if the parallel bits were very fast. The CPU/GPU combo is here to stay for the foreseeable future.
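To make the "CPU feeds the GPU" point concrete, here's a minimal CUDA C sketch (purely illustrative, not any real engine's code) of how much of even a trivial GPU job is host-side work: the CPU builds the data, allocates device memory, queues the copies and the kernel launch, and collects the result.

```
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// The GPU only ever runs what the CPU explicitly launches.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // CPU-side setup: build the input in host memory.
    float *host = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i)
        host[i] = 1.0f;

    // More CPU-side work: allocate device memory, copy the data over,
    // launch the kernel with an explicit grid/block shape, copy back.
    float *dev = NULL;
    cudaMalloc(&dev, bytes);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("host[0] = %.1f\n", host[0]); // 2.0
    free(host);
    return 0;
}
```

Everything outside the kernel body runs on the CPU; multiply that by a real frame's worth of draw calls and state changes and the CPU has plenty to do.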


Aren't nVidia's Project Denver and AMD's reportedly deeper CPU+GPU integration in future Fusion products supposed to change all that?

Complete convergence seems pretty much a given in ~6 years or less.

In a way, I guess we could say Larrabee was actually too far ahead of its time.
 
Fusion-like things are nice when you don't need to move around a lot of data. To provide the GPU with enough bandwidth you're pretty much forced to solder the chips to the motherboard, so it won't really work all that well. Also, if you have the GPU and CPU as separate entities, you can cool them separately. Keeping a 130W CPU and a 200W GPU at normal working temperatures isn't too hard, but cramming them into a single 330W unit makes things tricky, not to mention you'd have to provide it with the equivalent of the bandwidth they would have had separately (2-4 channel DDR3 + 256-bit GDDR5).
 
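For a rough sense of the numbers in that parenthesis, here's a back-of-the-envelope peak-bandwidth calculation (the transfer rates are illustrative period-typical figures, not the specs of any particular product):

```
#include <cstdio>

// Peak theoretical bandwidth = bus width in bytes * transfer rate.
static double peak_gb_per_s(int bus_bits, double megatransfers_per_s)
{
    return (bus_bits / 8.0) * megatransfers_per_s / 1000.0;
}

int main()
{
    // Dual-channel DDR3-1600: two 64-bit channels at 1600 MT/s.
    double cpu_bw = peak_gb_per_s(2 * 64, 1600.0);  // 25.6 GB/s
    // 256-bit GDDR5 at an effective 4000 MT/s (4 Gbps per pin).
    double gpu_bw = peak_gb_per_s(256, 4000.0);     // 128.0 GB/s

    printf("CPU memory: %6.1f GB/s\n", cpu_bw);
    printf("GPU memory: %6.1f GB/s\n", gpu_bw);
    printf("Single APU socket would need: %6.1f GB/s\n", cpu_bw + gpu_bw);
    return 0;
}
```

Pushing roughly 150 GB/s through a single socket is exactly why the chips and their memory end up soldered next to each other.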

Joining a CPU and a GPU into a single chip doesn't result in the sum of their separate power consumption, as proven by current Fusion chips, or even by looking at the power-draw difference between mid-range and high-end GPUs (Cypress doesn't consume twice as much as Juniper at the same clocks, for example).
You can't say that joining a 130W CPU + 200W GPU would result in a 330W APU.
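As a purely illustrative sketch of that reasoning (the savings figures below are made-up placeholders, not measurements of any product), the idea is that integration removes duplicated interfaces instead of simply summing the two TDPs:

```
#include <cstdio>

int main()
{
    // Discrete parts, using the figures from the discussion above.
    double cpu_tdp = 130.0;
    double gpu_tdp = 200.0;

    // Hypothetical savings from integration (placeholder watts only):
    // a shared memory controller/PHY, no PCIe link between the parts,
    // and a single set of voltage-regulator losses.
    double shared_memory_io  = 20.0;
    double dropped_pcie_link = 10.0;
    double shared_vrm_losses = 10.0;

    double apu_tdp = cpu_tdp + gpu_tdp
                   - shared_memory_io - dropped_pcie_link - shared_vrm_losses;

    printf("Naive sum:    %.0f W\n", cpu_tdp + gpu_tdp); // 330 W
    printf("With sharing: %.0f W\n", apu_tdp);           // 290 W
    return 0;
}
```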

Besides, the CPU area in desktop motherboards and cases usually has a lot more space for heat dissipation than the PCI-Express slots. I'd rather have a powerful APU consuming 250W under a large heatsink with a couple of quiet 120mm fans than the usual setup, where the GPU's bundled heatsink makes a lot of noise because it can't occupy more than two slots.


Regarding memory bandwidth, DDR4 should change things a bit, using point-to-point links instead of the usual 1/2/3-channel controllers.


In the end, you can't just assume these future high-performance APUs will use 2010-2011 technology, since that's probably the main reason they haven't come out yet.
 
Joining a CPU and a GPU into a single chip doesn't result in the sum of their separate power consumption, as proven by current Fusion chips
Aren't those Fusion chips using much weaker CPU cores and far weaker GPUs than are available on a PCIe card?
You can't say that joining a 130W CPU + 200W GPU would result in a 330W APU.
Yeah, the difference is probably a bit smaller, but not by a huge amount, at least not if you want the CPU and GPU to be as fast, and each to have as much memory bandwidth, as they would if they were separate.

Besides, the CPU area in desktop motherboards and cases usually has a lot more space for heat dissipation than the PCI-Express slots.
That might be true for SLI-like setups, but I don't believe the difference is that big. The highest-TDP desktop CPU I know of was some ancient NetBurst part at around 150W, while GPUs have been well over 250W with dual-slot cooling.

Regarding memory bandwidth, DDR4 should change things a bit, using point-to-point links instead of the usual 1/2/3-channel controllers.
True, but again you'd be giving up separate memory pools for the GPU and CPU, and you'd have to provide the APU socket with more bandwidth than either of them has on its own.
In the end, you can't just assume these future high-performance APUs will use 2010-2011 technology, since that's probably the main reason they haven't come out yet.
No argument there. Though alongside those next-gen APUs there will also be FAR stronger standalone GPUs and CPUs.

Basically, my point is that an integrated CPU+GPU solution can never provide as high maximum performance, at a given technology level, as two separate units.
 