G80 is more CPU than GPU (really interesting)

This information is completely incorrect.
The G80 is not a CPU. It's just a GPU with a 100% programmable shader core.
The big differences with a CPU are that it is massively parallel, branching requires coherence, and there are plenty of CPU-only things that just aren't supported, including interrupts.
From that point of view, the G80 is much more akin to a parallel DSP (with FP32!) than to a CPU.
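
To make the "branching requires coherence" point concrete, here's a minimal C sketch of how a wide SIMD group handles a divergent branch: both sides get walked, and a per-lane mask picks which results are kept. The lane count and the masking scheme are illustrative assumptions, not actual G80 details.

#include <stdio.h>

#define LANES 8  /* illustrative SIMD width, not the real G80 batch size */

/* Simulate one SIMD group executing "if (x < 0) y = -x; else y = x;".
 * When lanes disagree on the branch, both paths are executed and a
 * per-lane mask decides which result survives, so a divergent branch
 * costs the sum of both paths, not the cheaper of the two. */
void simd_abs(const int *x, int *y)
{
    int take_if[LANES];
    int all_if = 1, all_else = 1;

    for (int i = 0; i < LANES; ++i) {
        take_if[i] = (x[i] < 0);
        all_if   &= take_if[i];
        all_else &= !take_if[i];
    }

    /* "if" path: skipped only when every lane went the other way */
    if (!all_else)
        for (int i = 0; i < LANES; ++i)
            if (take_if[i]) y[i] = -x[i];

    /* "else" path: skipped only when every lane took the "if" side */
    if (!all_if)
        for (int i = 0; i < LANES; ++i)
            if (!take_if[i]) y[i] = x[i];
}

int main(void)
{
    int x[LANES] = { -3, 5, -7, 2, 9, -1, 4, -6 };
    int y[LANES];
    simd_abs(x, y);
    for (int i = 0; i < LANES; ++i)
        printf("%d ", y[i]);
    printf("\n");
    return 0;
}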


Uttar
 
This information is completely incorrect.

Then again, it's the ... inquirer. ;)

Maybe they should rename themselves thenarrator, or thestoryteller, or therumormill, or thegossiper. Calling yourself theinquirer creates expectations, you know. ;)
 
I think you're being too harsh on Fuad.

With just a little extra hardware, only some extensive software hacks, just a few complete redesigns, and a couple of process shrinks and clock increases, I'm sure the G80 would be an unmitigated disaster of a CPU.
 
A WELL-CONNECTED source says Nvidia's G80 is as programmable as the average CPU.
Well, if you consider an ARM7 an average CPU, then maybe this unnamed source has a point.
 
I think you're being too harsh on Fuad.

With just a little extra hardware, only some extensive software hacks, just a few complete redesigns, and a couple of process shrinks and clock increases, I'm sure the G80 would be an unmitigated disaster of a CPU.

Only sarcastic for now, though; perhaps it will turn out to be right with the G90?

About 1.5 years ago, AMD said that beyond dual core, general-purpose cores would be thought of as almost specialized hardware for 3D rendering. Here is how I put it all together: beyond dual core (i.e. Fusion), general-purpose cores (i.e. GPGPU cores, provided that by G90/R700, GPUs can run x86 code well).
 
About 1.5 years ago, AMD said that beyond dual core, general-purpose cores would be thought of as almost specialized hardware for 3D rendering. Here is how I put it all together: beyond dual core (i.e. Fusion), general-purpose cores (i.e. GPGPU cores, provided that by G90/R700, GPUs can run x86 code well).

AMD has been backtracking on its expectations and release dates roughly in proportion to how close it has come to the dates when its promises would have come due.

The programming model and semantics of x86 are very different from what GPUs can use. What runs well on one is pretty much the opposite of what runs well on the other.

If there is a way of making their excessively optimistic early predictions work, I wouldn't count on AMD getting it out on time, or while we're young.
 
I don't see the point in forcing them to do that.

If the circuitry were designed so that part of the GPU could handle things like interrupts, precise exceptions, software permissions, and the more complicated memory addressing, it could be made functional, but it wouldn't be very good.

The fact that there is a software driver sitting over the GPU is also something of a problem. It's not impossible, as Transmeta had a code translation layer over its chip, but that layer meant the best that could be hoped for was half the IPC, or worse, on most code.

Going from the CTM spec, the closest fit to what current x86 CPUs do would be assigning a thread to the command processor (it would have to be much more robust than it is currently, just reading off what it is fed by the CPU), which would send a bit of code to one local array scheduler, which would then use one array processor. That's one of the 16 array processors in one of several arrays.

If there are four arrays, that's 1/64 of the total capability of the chip, at a lower clock speed, and a whole slew of problems getting single-threaded performance. Because the array processor is dependent on the array scheduler, which in turn is dependent on the command processor, a large amount of hardware goes unused.
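
Spelling out that arithmetic (the 4-array, 16-processors-per-array counts come from the paragraphs above; everything else is hypothetical):

#include <stdio.h>

int main(void)
{
    /* Counts from the discussion above: 4 arrays of 16 array processors,
     * with one single-threaded x86-style context pinned to one of them. */
    const int arrays = 4;
    const int procs_per_array = 16;
    const int used = 1;

    const int total = arrays * procs_per_array;        /* 64 */
    printf("utilisation: %d/%d = %.2f%%\n",
           used, total, 100.0 * used / total);          /* 1/64, ~1.56% */
    return 0;
}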

Because CPU code has so much extra context attached to its threading, changing this would mean making the chip closer to a Niagara-type implementation, with peak execution resources in the neighborhood of 8-16 semi-independent cores where at the same process there could have been something like 128+ shader cores.

Obviously, this is a pretty naive implementation. With some creative design and software work, it might not have to be a 1:1 correspondence to just one array processor.
 
Do you think we will ever have GPUs running our PCs?

What would you have them doing, precisely? Talking to your hard-drive? Talking to your keyboard? Compiling your C++ code? Running your SQL database?

If a GPU can do everything a CPU can do, why can't a CPU do everything a GPU can do equally well (at the speed a GPU can do it that is)?

Put another way, CPUs and GPUs are different, and they're different for a reason. GPUs are good at what they do precisely because they aren't good at some of the things CPUs are good at.

GPUs are specialised, CPUs are generalised. If GPUs became as generalised as CPUs, why would you expect GPUs to suddenly be better at being CPUs than CPUs are? Because Intel and AMD engineers are stupid, and ATI and NVIDIA engineers are all Nobel Prize winners?
 
Put another way, CPUs and GPUs are different, and they're different for a reason. GPUs are good at what they do precisely because they aren't good at some of the things CPUs are good at.

And that's why generic metrics like FLOP ratings are pretty much worthless.
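
For what it's worth, a FLOP rating is just units × ops per clock × clock, which is exactly why it's so easy to abuse: the peak says nothing about how much of it survives branchy, latency-bound code. The numbers below are placeholders, not real G80 or CPU specs.

#include <stdio.h>

/* Peak GFLOPS = execution units * FLOPs per unit per clock * clock (GHz). */
static double peak_gflops(int units, int flops_per_clock, double clock_ghz)
{
    return units * flops_per_clock * clock_ghz;
}

int main(void)
{
    /* Hypothetical parts: placeholder numbers, not measured specs. */
    double cpu = peak_gflops(  2, 4, 2.4);   /* 2 cores, 4 SP FLOPs/clock */
    double gpu = peak_gflops(128, 2, 1.35);  /* 128 ALUs, one MADD/clock  */

    printf("CPU peak: %6.1f GFLOPS\n", cpu);
    printf("GPU peak: %6.1f GFLOPS\n", gpu);
    printf("...which says nothing about IPC on branchy, pointer-chasing code.\n");
    return 0;
}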

GPUs are specialised, CPUs are generalised.

I'd actually say that CPUs are quite specialized as well. After all, branch prediction, OoOE, forwarding, etc. are all techniques to improve IPC and/or ILP; a CPU could certainly do without those techniques and use substantially fewer transistors.
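
As a rough illustration of what those IPC/ILP tricks buy: the two loops below do the same number of additions, but the first is one long dependence chain, while the second exposes four independent chains that forwarding and out-of-order issue can overlap. The 4-way split and variable names are just for illustration.

#include <stdio.h>

#define N (1 << 20)

int main(void)
{
    static float data[N];
    for (int i = 0; i < N; ++i)
        data[i] = 1.0f;

    /* One serial dependence chain: each add must wait on the previous
     * result, so OoOE and forwarding find no independent work to overlap. */
    float serial = 0.0f;
    for (int i = 0; i < N; ++i)
        serial += data[i];

    /* Four independent chains: an out-of-order core can keep several
     * adds in flight at once and get much closer to its peak IPC. */
    float a = 0.0f, b = 0.0f, c = 0.0f, d = 0.0f;
    for (int i = 0; i < N; i += 4) {
        a += data[i];
        b += data[i + 1];
        c += data[i + 2];
        d += data[i + 3];
    }
    float parallel = a + b + c + d;

    printf("serial sum   = %f\n", serial);
    printf("parallel sum = %f\n", parallel);
    return 0;
}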

Of course, most of the confusion about the difference between CPU and GPU comes from sticking to the historical definitions of these devices. At this point the primary difference between the G80 and, say, Niagara is interrupts and system I/O. Computationally/algorithmically (from what little I know), a G80 can compute anything a CPU can compute. It won't be good at it, but it can do it.
 
I'd actually say that CPUs are quite specialized as well. After all, branch prediction, OoOE, forwarding, etc. are all techniques to improve IPC and/or ILP; a CPU could certainly do without those techniques and use substantially fewer transistors.

Point taken.
 
Of course, most of the confusion about the difference between CPU and GPU comes from sticking to the historical definitions of these devices. At this point the primary difference between the G80 and, say, Niagara is interrupts and system I/O. Computationally/algorithmically (from what little I know), a G80 can compute anything a CPU can compute. It won't be good at it, but it can do it.

All the threads on a Niagara core are visible to the outside world.

Thanks to the layers of abstraction and internal sleight of hand, there can be many more semi-independent program counters being maintained in a GPU that are not visible to the system. A CPU doing the same thing would have many more threads visible.
 
I did a little research on this and found out what Fuad was getting at. It wasn't about the capabilities of today's GPUs, although he talked about the x86 crap. He just wanted to tell us that Nvidia can easily design a CPU, since the G80 shares a lot of CPU features; all Nvidia needs is a small redesign and it's good to go.
 
I did a little research on this and found out what Fuad was getting at. It wasn't about the capabilities of today's GPUs, although he talked about the x86 crap. He just wanted to tell us that Nvidia can easily design a CPU, since the G80 shares a lot of CPU features; all Nvidia needs is a small redesign and it's good to go.

If a chip runs any sort of a program, it has CPU features.

It would take something a bit more than a minor redesign to go from "can function as a CPU" to being competitive.

Fuad's article is arguably more wrong with that second interpretation than it was with the first way it was read. Then again, I still don't know what he's trying to get at.
 
I did a little research on this and found out what Fuad was getting at. It wasn't about the capabilities of today's GPUs, although he talked about the x86 crap. He just wanted to tell us that Nvidia can easily design a CPU, since the G80 shares a lot of CPU features; all Nvidia needs is a small redesign and it's good to go.

I don't think anyone has questioned Nvidia's ability to make a CPU, and I wouldn't be surprised if it wouldn't require much effort to convert G80 into a relatively capable throughput processor to compete with the likes of Cell or Niagara. The only problem is that both Sparc and Power have relatively well-established bases, whereas G80's ISA would have zero support.

Of course, I think Fuad is full of something when he starts talking about G80 and x86. I would be very surprised if Nvidia bothered making an x86 GPU/CPU through any mechanism other than what Transmeta did.

With any degree of luck x86 can continue being the dominant single threaded performance ISA, and parallel processing can be exposed through an API + driver. But I haven't heard anything about any of the obvious parties wanting to go that route, and I fear in the end it might be their undoing.
 
The Inq keeps confusing the evolution of the Geforce and Goforce families of processors, when in fact they are evolving in very different directions...
 
GPUs are specialised, CPUs are generalised. If GPUs became as generalised as CPUs, why would you expect GPUs to suddenly be better at being CPUs than CPUs are? Because Intel and AMD engineers are stupid, and ATI and NVIDIA engineers are all Nobel Prize winners?

Perhaps some people get that idea because GPUs double performance over their predecessors every year while CPUs can't, and they don't understand how hard it is to get that kind of additional performance out of a CPU, so they assume that if a GPU became a CPU it would be a much faster one. Which is obviously flawed logic.

I like the last sentence.
 
Would it be more reasonable to stick a lightweight CPU into a GPU and make that a processor? You're almost better off letting each chip do what it does best. Strip the heavy math processing from the CPUs and leave it to a GPU.

Heck, a single-core Celeron is about the same speed as a quad-core Core chip if all you're doing is moving things around in memory and doing a lot of logic. CPUs generally only get hammered when they have to start doing 3D math or encoding/decoding, something a GPU is really good at. Pair the two together and you can have the best of both worlds.
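
Here's a rough sketch of that division of labour, with the "GPU" side reduced to a plain C function standing in for whatever kernel you would actually dispatch through a driver/API; the function names and the saxpy example are my own, not any real interface.

#include <stdio.h>

#define N 1024

/* Stand-in for work you'd hand to the GPU: wide, regular, branch-free math.
 * In a real system this would be a shader/kernel dispatched via an API. */
static void gpu_style_saxpy(float a, const float *x, float *y, int n)
{
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

/* The kind of work the CPU keeps: branchy bookkeeping and control flow. */
static int cpu_style_logic(const float *y, int n)
{
    int interesting = 0;
    for (int i = 0; i < n; ++i)
        if (y[i] > 2.5f && (i % 7) != 0)   /* arbitrary, branch-heavy test */
            ++interesting;
    return interesting;
}

int main(void)
{
    float x[N], y[N];
    for (int i = 0; i < N; ++i) {
        x[i] = (float)i / N;
        y[i] = 1.0f;
    }

    gpu_style_saxpy(2.0f, x, y, N);      /* bulk math: the GPU's job      */
    int hits = cpu_style_logic(y, N);    /* logic and branches: CPU's job */

    printf("%d elements passed the check\n", hits);
    return 0;
}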
 