Educate us fools: 512 vs 256 bit GPUs

Ante P said:
http://www.nvnews.net/vbulletin/showthread.php?s=&threadid=12216

What would the tangible differences be if we were to have a hypothetical 512-bit R350 or NV35 GPU?
What areas would improve the most, would anything be worse, etc.?

Nothing, it's just a meaningless number. I personally blame Tom's Hardware for spreading that bullshit number.
 
Thowllly said:
Ante P said:
http://www.nvnews.net/vbulletin/showthread.php?s=&threadid=12216

What would the tangible differences be if we were to have a hypothetical 512-bit R350 or NV35 GPU?
What areas would improve the most, would anything be worse, etc.?

Nothing, it's just a meaningless number. I personally blame Tom's Hardware for spreading that bullshit number.

If it doesn't make a difference, then why are we steadily moving forward?
 
I think it isn't quite "meaningless" as a number, but that it is meaningless to focus on it. For instance, I would be surprised if the R300 couldn't be marketed as a "512 bit" GPU already...I'm pretty sure just 256-bit DDR being utilized by the chip would be enough indication that the label could be justified.

It is meaningless because you are attaching significance to the label "512-bit GPU" as if it would add anything by itself. If the Parhelia had been more successful in the mainstream, ATI might have pushed the idea.
 
Ante P said:
Thowllly said:
Nothing, it's just a meaningless number. I personally blame Tom's Hardware for spreading that bullshit number.

If it doesn't make a difference, then why are we steadily moving forward?
I'm not sure I understand the question. "We" are moving forward, but it's not due to this number. To say that something is so many bits is generally quite useless.

Like current CPUs. Why is the P4 a 32-bit CPU? It has a 64-bit data bus, a 36-bit address bus, and registers that are 80 and 128 bits wide. The double-speed ALUs are 16-bit (AFAIK). It does have 32-bit GPRs (general purpose registers), so you could say that the size of the GPRs determines how many bits a CPU is (ignoring the 68000, which had 32-bit GPRs but was considered a 16-bit CPU).

So, what does it mean that the GF1-4 is 256-bit? That it has 256-bit wide GPRs? No. So what does it mean? I think that for the original GeForce 256 they counted the bits something like this: (32-bit color + 32-bit z&stencil) * 4 pipes = 256 bits. That is of course nonsense (or else the P4 should be considered a 128-bit CPU). And according to that calculation the GFFX should be a 640-bit GPU (or a 1280-bit GPU according to NVIDIA's way of counting pipelines).
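If you spell out that counting, it comes to something like the sketch below. This is just a rough illustration of the marketing arithmetic; the 640/1280 lines assume the GFFX figure is counted with 128-bit floating point colour per pipe, which is my reading of it, not something stated outright.

def marketing_bits(color_bits, z_stencil_bits, pipes):
    # "Bits" as the marketing seems to count them: per-pipe color width
    # plus per-pipe z/stencil width, multiplied by the number of pipes.
    return (color_bits + z_stencil_bits) * pipes

print(marketing_bits(32, 32, 4))    # GeForce 256: (32 + 32) * 4 = 256
print(marketing_bits(128, 32, 4))   # GFFX, assuming 128-bit FP color: (128 + 32) * 4 = 640
print(marketing_bits(128, 32, 8))   # ...or 1280 if you count 8 "pipes" the NVIDIA way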


Edit: I realised I might have come across as a bit aggressive in my posts. I'm not trying to flame; I'm just tired of those "xxx-bit" numbers being thrown about as if they were actually important. They can be useful when talking about how something was implemented in HW and so forth, but they don't really tell you anything about actual performance; the PR people have simply given the number far more focus than it deserves.
 
Thowllly is right - the whole GPU labelling business is marketing-speak. At least Matrox had the courage to point out that they called the Parhelia a 512-bit GPU because it had 4 x 128-bit vertex pipelines.
 
I was thinking more along the lines of the actual core, not vertex pipelines etc.

If the TNT had a 128-bit core, what did they count as 256 bits?

I mean, is it like the console market where they just choose whatever part of the console has the highest "bits" and use that to specify whether it's 64 or 128 bits?
If so, just ignore the question about "moving forward".
I meant that if pre-TNT was 64 bits, post-TNT was 128 bits, and now we're at 256 bits, there's obviously some advantage brought by it.
But if it's just as THG pointed out with the GF256, then the numbers are self-explanatory, so just ignore my question.


Oh man, I'm having "some" problems expressing what I mean here. You need to take Swedish lessons ya know ;)
 
How do you define the "actual core"? There is no one set way to arrive at the name "thismany-bit GPU" for a given core, so one can be made up.

The number made up that way might then have a meaning, but it would still be meaningless to focus on it as a feature without isolating what that particular meaning is (which can be accomplished by discussing the feature used for the label, rather than the "thismany-bit" label itself).

I think that's what others have been saying, as well.
 
demalion said:
How do you define the "actual core"? There is no one set way to arrive at the name "thismany-bit GPU" for a given core, so one can be made up.

The number made up that way might then have a meaning, but it would still be meaningless to focus on it as a feature without isolating what that particular meaning is (which can be accomplished by discussing the feature used for the label, rather than the "thismany-bit" label itself).

I think that's what others have been saying, as well.

That's the answer I was looking for. I was trying to figure out whether "GPU bits" is a standard or whether the marketing teams just whip together whatever numbers they like.
 
Thowllly said:
Like current CPUs. Why is the P4 a 32-bit CPU? It has a 64-bit data bus, a 36-bit address bus, and registers that are 80 and 128 bits wide. The double-speed ALUs are 16-bit (AFAIK). It does have 32-bit GPRs (general purpose registers), so you could say that the size of the GPRs determines how many bits a CPU is (ignoring the 68000, which had 32-bit GPRs but was considered a 16-bit CPU).
I don't want to nitpick, but the 68000 was never considered a 16-bit CPU. It was often referred to as a 16/32-bit CPU; the official word, however, is that it's a 32-bit CPU (http://e-www.motorola.com/webapp/sps/site/prod_summary.jsp?code=MC68000).
So for CPUs I'd agree that the "bit-ness" is determined by the size of the numbers it can process in its "general purpose execution units" (which should be the same as the size of the general purpose registers), disregarding all special purpose units like the FPU, SIMD/SSE or whatever.
And for GPUs, the bitness is whatever the manufacturer decides it should be...
 
You can't really assign GPUs an "xxx-bit" number and have it mean anything similar to what it would mean on a CPU... yet. CPUs are generally referred to as "xxx-bit" because of the max size of the instructions they can handle. The first thing that has to happen is for the pixel and vertex units to merge (using the same physical units for either pixel or vertex work), and then you could assign a number based on the precision at the least precise point in the pipeline, for the maximum precision it can output in one clock. Yeah, I know that's confusing...
 
There are many processors whose max instruction length != their "bitness". It's the size of the general purpose registers and ALUs that is the usual qualifier.

And when you add special SIMD registers, the "bitness" of the CPU starts to become less descriptive. For GPUs it's pretty much completely useless. You'd have to give one number for each part instead.
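To illustrate the "one number per part" point, something like the rough breakdown below is what you'd actually have to quote for an R300-class chip. The figures are illustrative ballpark values I've picked (only the 256-bit DDR bus is mentioned earlier in the thread), not a spec sheet.

# Illustrative only: the separate "bit" figures that a single
# marketing number tries to collapse (rough R300-era ballpark values).
widths = {
    "external memory bus": 256,           # 256-bit DDR, as mentioned earlier in the thread
    "framebuffer color (per pixel)": 32,
    "z/stencil (per pixel)": 32,
    "pixel shader precision": 96,         # FP24 x 4 components on an R300-class part
    "vertex shader registers": 128,       # FP32 x 4 components
}
for part, bits in widths.items():
    print(f"{part}: {bits}-bit")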
 
Yes, when you start getting into modern CPUs and SIMD and all that sort of thing, then everything gets out of whack.
 
Sage said:
You can't really assign GPUs an "xxx-bit" number and have it mean anything similar to what it would mean on a CPU... yet. CPUs are generally referred to as "xxx-bit" because of the max size of the instructions they can handle. The first thing that has to happen is for the pixel and vertex units to merge (using the same physical units for either pixel or vertex work), and then you could assign a number based on the precision at the least precise point in the pipeline, for the maximum precision it can output in one clock. Yeah, I know that's confusing...

that's not confusing, that makes perfect sense IMHO
 