G80 is more CPU than GPU (really interesting)

It just doesn't work that way; you can't simply stick the two together. To make a highly specialized processor (GPU) that can run CPU tasks, you will need additional transistors, silicon and whatnot for the flexibility a CPU needs (at least if we are talking about x86); otherwise performance will suffer.

AMD/ATi's approach is quite different, using the GPU as a coprocessor; for all intents and purposes they are still separate pieces. They might share one die, but both have their own paths.
 
AMD/ATi's approach is quite different, using the GPU as a coprocessor; for all intents and purposes they are still separate pieces. They might share one die, but both have their own paths.

That's sort of the approach I was going for: CPUs still do what they do now, but the math/encoding/decoding parts are removed to a large degree and left to the GPU.
 
I am inclined to believe the reason for FUSION is not performance, but to save costs. Adding a graphics card, sound card, network card, RAID card, BitTorrent card, physics card and AI card means $$$/power/power/power. Putting a CPU on a GPU means an indirect sacrifice in performance, because if you want the crippled CPU to have reasonable performance, a significant amount of the die that's supposed to be doing graphics work (or CPU work) is going to be CPU. You are just moving the problem to another place.

I believe there will come a time when, to play the latest games, you'll need a computer the size of a 4-processor box commonly used for workstations, or else sacrifice performance.
 
Even if a GPU could run x86 code, I don't think anyone would adopt it, since most current desktop apps are single-threaded; they would have to be made multi-threaded to take advantage of the GPU. A rough sketch of what that restructuring looks like follows below.
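Just to illustrate (the function names and sizes here are made up for illustration, not from anything in this thread): the same scaling loop written serially for one CPU thread, and as a data-parallel CUDA kernel where the work is split across thousands of lightweight GPU threads.

    // Serial CPU version: a single thread walks the whole array.
    void scale_cpu(float *data, int n, float k) {
        for (int i = 0; i < n; ++i)
            data[i] *= k;
    }

    // Data-parallel CUDA version: one element per GPU thread.
    __global__ void scale_gpu(float *data, int n, float k) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= k;
    }

    // Hypothetical launch: scale_gpu<<<(n + 255) / 256, 256>>>(d_data, n, k);

Unless the application is reorganized into that kind of independent, per-element work, all those GPU ALUs sit idle.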
 
I am inclined to believe the reason for FUSION is not performance


I completely agree with the above statement, with the addition that they may well be chasing the SFF and laptop markets here as well. A low-power computer priced at £500 or less, with the onboard GPU pitching in to help with video acceleration and physics processing, will appeal to the non-cutting-edge gaming demographic.
 
I am inclined to believe the reason for FUSION is not performance, but to save costs.

Just because you focus on one metric does not mean that you can completely disregard the other.

If you qualify your solution with cost or power, then FUSION will have higher performance/W and performance/€ for a whole range of power consumption and price points.

You don't do that in the gamers/workstation market, but in the mass market for mobile, home and work PCs you very much do.
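As a purely hypothetical back-of-the-envelope illustration (every number below is invented, not from AMD or anywhere else), the comparison the mass market cares about looks something like this:

    #include <stdio.h>

    // All figures invented for illustration only: a discrete CPU + low-end
    // GPU pair versus a hypothetical fused part with lower peak throughput.
    int main(void) {
        double discrete_perf = 100.0, discrete_watts = 65.0 + 30.0, discrete_cost = 150.0 + 60.0;
        double fused_perf    =  85.0, fused_watts    = 70.0,        fused_cost    = 140.0;

        printf("discrete: %.2f perf/W, %.2f perf/EUR\n",
               discrete_perf / discrete_watts, discrete_perf / discrete_cost);
        printf("fused:    %.2f perf/W, %.2f perf/EUR\n",
               fused_perf / fused_watts, fused_perf / fused_cost);
        return 0;
    }

With made-up numbers like these, the fused part gives up peak performance yet still comes out ahead on both performance/W and performance/€, which is exactly the trade the mobile and office segments care about.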

Cheers
 
Just because you focus on one metric does not mean that you can completely disregard the other.

I agree with you there; it's even true of integrated graphics, though they have to cripple it to the point that, while you can expect performance to double every year for high-end GPUs, you can expect less than a 50% gain for the integrated GPU. The main reason for FUSION is cost.
 
All the threads on a Niagara core are visible to the outside world.

Thanks to the layers of abstraction and internal sleight of hand, there can be many more semi-independent program counters being maintained in a GPU that are not visible to the system. A CPU doing the same thing would have many more threads visible.

Though with CUDA the threads are becoming more exposed on G80, in Rock there are going to be internal threads of execution which aren't visible, not to mention any DMT projects which may or may not find their way into future CPUs. Funny how the world revolves.
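To make the visibility point concrete, here's a minimal CUDA sketch (my own illustration, not from the article): the operating system only ever sees the single host thread making the launch call, while the tens of thousands of device threads behind it never show up to the system at all.

    #include <cuda_runtime.h>

    __global__ void work(float *out) {
        // 65536 of these run on the GPU, but none of them is visible
        // to the host OS as a thread.
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        out[i] = i * 0.5f;
    }

    void launch(float *d_out) {   // d_out assumed to hold 256*256 floats
        // From the system's point of view this is just one host thread
        // making a call; the device threads stay internal to the GPU.
        work<<<256, 256>>>(d_out);
        cudaDeviceSynchronize();
    }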
 
Though with CUDA the threads are becoming more exposed on G80, in Rock there are going to be internal threads of execution which aren't visible, not to mention any DMT projects which may or may not find their way into future CPUs. Funny how the world revolves.

We'll see. I haven't looked at the CUDA stuff yet, so I can't speak to that.

The examples of helper threads I've seen are somewhat close, but I've only looked at the proposals that use dynamically spawned threads to warm up caches and fire off prefetches. They do not actually advance the state of computation.

There are probably other variations; I've fallen behind on the latest work in that area.
 
We'll see. I haven't looked at the CUDA stuff yet, so I can't speak to that.

The examples of helper threads I've seen are somewhat close, but I've only looked at the proposals that use dynamically spawned threads to warm up caches and fire off prefetches. They do not actually advance the state of computation.

While it may not directly advance the state of computation, it certainly paves the way, especially when you look at where the major performance enhancements in current processors have come from. If I'm not mistaken, most of K8's improvement came from the memory controller and the subsequently reduced memory latency, and I've heard similar statements about Conroe's aggressive prefetcher being the major source of its performance gains. So if Hardware Scout-like threads give most of the benefits while allowing wider chips when necessary, it looks like a good trade-off.
 
While it may not directly advance the state of computation, it certainly paves the way, especially when you look at where the major performance enhancements in current processors have come from. If I'm not mistaken, most of K8's improvement came from the memory controller and the subsequently reduced memory latency, and I've heard similar statements about Conroe's aggressive prefetcher being the major source of its performance gains. So if Hardware Scout-like threads give most of the benefits while allowing wider chips when necessary, it looks like a good trade-off.

Perhaps, but that wouldn't make that kind of threading comparable to a GPU's internal threading. Those threads in the end produce externally visible data changes, but never have to be seen by the system.
A scout thread in the end will produce no software-visible changes. Instead, the original thread will hopefully produce the exact same results faster.
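A rough software analogue, just to pin down what "no software-visible changes" means (this is my own sketch, not Rock's actual hardware scout mechanism): the helper thread only touches memory to warm the cache, while the main thread is the only one that advances architected state.

    #include <cstddef>
    #include <thread>
    #include <vector>

    // Helper ("scout") thread: prefetches data but writes nothing visible.
    // __builtin_prefetch is a GCC/Clang builtin.
    void scout(const int *data, std::size_t n) {
        for (std::size_t i = 0; i < n; i += 16)      // roughly one cache line apart
            __builtin_prefetch(&data[i], 0, 1);      // read hint, low temporal locality
    }

    // Main thread: the only one producing a software-visible result.
    long long sum(const int *data, std::size_t n) {
        long long s = 0;
        for (std::size_t i = 0; i < n; ++i)
            s += data[i];
        return s;
    }

    int main() {
        std::vector<int> v(1 << 24, 1);
        std::thread helper(scout, v.data(), v.size());  // warms caches, changes no state
        long long result = sum(v.data(), v.size());     // same answer, hopefully sooner
        helper.join();
        return static_cast<int>(result & 0x7f);
    }

Kill the helper and the program produces exactly the same output, only (hopefully) slower.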
 
Perhaps, but that wouldn't make that kind of threading comparable to a GPU's internal threading. Those threads in the end produce externally visible data changes, but never have to be seen by the system.
A scout thread in the end will produce no software-visible changes. Instead, the original thread will hopefully produce the exact same results faster.


I certainly didn't intend to imply that the two were directly comparable; after all, they are intended to solve entirely different problems. It's just that on the surface there are similarities, and it is those surface similarities that likely prompted the original article which spawned this discussion. My main intent was to point out that thread visibility, just like programmability, isn't a major deciding factor in what constitutes a CPU versus a GPU anymore.
 
Did this pass under the radar? ;)

NVIDIA Invests In GPU Accelerated Software Company

"We're delighted to make this strategic investment in Acceleware. The investment reflects our ongoing commitment and enthusiasm toward the ecosystem of high performance computing based on GPU technologies," said Jeff Herbst, Vice President of Business Development at NVIDIA. "We have the highest regard for Acceleware and its management team, and believe they will quickly emerge as leaders in the GPU Computing revolution."

"We've been collaborating with NVIDIA for over two years on the deployment of GPUs to accelerate non-graphics software applications such as cell-phone design, seismic data processing, printed circuit board design, drug discovery, nanophotonic communications device design, reservoir simulation, lithography mask design, and others," said Sean Krakiwsky, CEO of Acceleware. "NVIDIA's investment in Acceleware is an endorsement of these efforts and provides us with significant additional funding to execute our business plan and to capitalize on the rapidly growing demand for GPU-based computing products."



http://www.vr-zone.com/?i=4512
 
Would it be more reasonable to stick a lightweight CPU into a GPU and make that a processor? You're almost better off letting each chip do what it does best. Strip the heavy math processing from the CPUs and leave it to a GPU.
CELL? Granted - it's not a GPU, but then: What is a GPU?

wrt the thread: Why do many people define a CPU as being able to execute x86 code?

Just some things to think about....
 