NVIDIA: Beyond G80...

Stupid guessing on my part, but it could well be that the MUL isn't working right in the G80 due to some sort of defect, and that it will in the tweaked version.
That's entirely possible. Or it could be a fundamental limitation in the design, where the MUL never really was accessible for anything but interpolation. We'll have to see what happens when nVidia releases their next high-end part, and the next major driver release.
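
For a rough sense of what that MUL is worth on paper, here's a back-of-the-envelope sketch (assuming the commonly quoted 128 SPs and 1.35 GHz shader clock for the 8800 GTX; illustrative numbers only, not measurements):

[CODE]
# Back-of-the-envelope: peak programmable shading throughput of an 8800 GTX-class
# part, with and without the co-issued MUL being usable by general shading.
# Assumed figures: 128 scalar ALUs at a 1.35 GHz shader clock.

sps = 128
shader_clock_ghz = 1.35

mad_flops = 2   # one multiply-add per SP per clock
mul_flops = 1   # the extra MUL, if the compiler/hardware can actually issue it

peak_without_mul = sps * shader_clock_ghz * mad_flops                 # ~345.6 GFLOPS
peak_with_mul = sps * shader_clock_ghz * (mad_flops + mul_flops)      # ~518.4 GFLOPS

print(f"MAD only:  {peak_without_mul:.1f} GFLOPS")
print(f"MAD + MUL: {peak_with_mul:.1f} GFLOPS")
print(f"Upside:    {peak_with_mul / peak_without_mul - 1:.0%}")
[/CODE]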
 
But I agree with you, I don't expect anything drastic either. It will be G80 done right, which makes you wonder why nVidia had to launch last year instead of waiting another spin and coming out with a 100% functional unit.
Are you serious? This really is such a minor, minor issue that it was vastly better for them to release then instead of months later. They could have lost huge amounts of sales if they had missed Christmas.
 
That's entirely possible. Or it could be a fundamental limitation in the design, where the MUL never really was accessible for anything but interpolation. We'll have to see what happens when nVidia releases their next high-end part, and the next major driver release.

I have severe doubts that the MUL cannot be used for general shading as it is. It rather sounds like the compiler needs some more work. I'll bend over laughing if that thingy "coincidentally" starts to work after March heh.
 
Are you serious? This really is such a minor, minor issue that it was vastly better for them to release then instead of months later. They could have lost huge amounts of sales if they had missed Christmas.


Pardon, I'm not really sober yet. Obviously it was such a minor issue that they could save it for the 8850/8900.
They could afford to ship a product that satisfied the customers but not the engineers.
 
Looks like vrzone's response to ocworkbench's table. :LOL: I don't know which one is funnier. ;)

And this one has a glaring error, too.
Take a look at the 8900 GTX and 8900 GTS:

Both are listed as 128-shader parts.
But the GTS keeps the 320-bit bus, when we know that ROP partitions are tied to bus width, so if the cheaper part has a narrower bus, it has to have fewer scalar ALUs than the GTX, not the same number (at least I think so).


edit
Another error detected in the top-end 8950GX2, with a 256-bit bus (presumably x2) coupled to 96 SPs (it should be either 64 or 128, judging by the memory buses).


edit 2
Yet another error: 8800 GTS vs. 8900 GS. Same number of ALUs (96), but different bus widths (320-bit vs. 256-bit).
 
Both are full of glaring errors. The guy who made this chart is dismissing ocworkbench's chart as a fake; how ironic. :p
 
when we know that ROP partitions are tied to bus width, so if the cheaper part has a narrower bus, it has to have fewer scalar ALUs than the GTX, not the same number (at least I think so).
The SPs and the ROPs/memory channels are decoupled.

Jawed
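
To put numbers on that decoupling, a small sketch using the shipped 8800 GTX/GTS configurations (the 4-ROPs-plus-64-bit-channel-per-partition and 16-SPs-per-cluster figures are the usual ones quoted for G80):

[CODE]
# ROP count follows the number of memory partitions (4 ROPs + one 64-bit
# channel each), while SP count follows the number of shader clusters
# (16 SPs each). The two are scaled independently.

def rops_and_bus(partitions):
    return partitions * 4, partitions * 64   # (ROPs, bus width in bits)

def shader_count(clusters):
    return clusters * 16

print(rops_and_bus(6), shader_count(8))  # 8800 GTX: (24, 384) 128
print(rops_and_bus(5), shader_count(6))  # 8800 GTS: (20, 320) 96

# The ratios differ (6:5 partitions vs 8:6 clusters), so a 320-bit part with
# 128 SPs isn't contradictory by itself.
[/CODE]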
 
The SPs and the ROPs/memory channels are decoupled.

Jawed

Yes, but I still think this table looks suspicious.
Couldn't they have done a 512-bit single-GPU 90nm G80 GTX then?
Or a 256-bit 90nm G80 GTS?

Why the odd 320- and 384-bit memory buses?
 
Yes, but I still think this table looks suspicious.
Couldn't they have done a 512-bit single-GPU 90nm G80 GTX then?
The extra 128 bits would have required another 8 ROPs + 2 channels of memory bus + L2 cache, since ROPs and memory are tied together. That could have cost another 50M+ :?: transistors.

Jawed
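
The arithmetic behind that, as a quick sketch (the 4-ROPs-per-64-bit-partition ratio is the assumption; the 50M+ transistor figure above is a guess, not something the sketch can derive):

[CODE]
# Going from a 384-bit to a 512-bit bus when ROPs are tied to memory partitions.
extra_bits = 512 - 384
extra_partitions = extra_bits // 64   # one 64-bit channel per partition -> 2
extra_rops = extra_partitions * 4     # 4 ROPs per partition -> 8
print(extra_partitions, "more partitions,", extra_rops, "more ROPs")
# Plus the matching slice of L2 and pad area, hence the guessed 50M+ transistors.
[/CODE]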
 
Heh, the ROPs and memory channels are coupled - increase in bus width = increase in ROPs. On the other hand the shader/TMU arrays are not coupled to the ROPs - they use a crossbar.
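
A toy model of why the crossbar makes the two counts independent: the ROP partition that handles a pixel is picked from its screen/framebuffer address, not from whichever cluster shaded it (the tile size and mapping below are invented purely for illustration):

[CODE]
# Toy model only: any shader cluster can hand a finished pixel to any ROP
# partition; the partition is chosen from the pixel's screen-space tile.
# The tile size and interleaving pattern here are made up for illustration.

TILE = 16  # hypothetical tile size in pixels

def rop_partition_for_pixel(x, y, num_partitions):
    tile_id = (y // TILE) * 1024 + (x // TILE)
    return tile_id % num_partitions

# Output from all 8 clusters funnels into whichever of the 6 (GTX) or
# 5 (GTS) partitions owns the pixel's tile:
print(rop_partition_for_pixel(300, 200, 6))
print(rop_partition_for_pixel(300, 200, 5))
[/CODE]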
 
Distinct possibility. The R420 was originally touted as a 12-pipeline GPU but really had 16. It's not out of the question that G80 could have 160 total shaders for redundancy.

Well, this *could* explain the apparently good yield rate from a 90nm, 681M-transistor chip, in theory... ;)
And if this turns out to be correct, then the 80nm shrink may be out of the question, at least for now.

But, as I said before, I'm having a hard time swallowing this story, especially considering the source.
These were the same guys who said G80 wasn't even unified in the first place, mere weeks before the official unveiling...
 
Well, this *could* explain the apparently good yield rate from a 90nm, 681M-transistor chip, in theory... ;)
And if this turns out to be correct, then the 80nm shrink may be out of the question, at least for now.

Why is that? Shrinks are usually a good idea.

But, as I said before, I'm having a hard time swallowing this story, especially considering the source.
These were the same guys who said G80 wasn't even unified in the first place, mere weeks before the official unveiling...

I think nVidia fooled more than just theinq. ;)

Yes, I know this is a problematic source, but it is very similar to a previous GPU, and spare units are almost a requirement if you want redundancy. There almost certainly have to be some spare shader units, or the 8800 GTX wouldn't even exist.
 
Why is that? Shrinks are usually a good idea.

Not when the cost of redesigning for the new half-node outweighs the gains.
I'm not certain of the prototyping/sampling costs these days, so take it as you will.



I think nVidia fooled more than just theinq. ;)

Yes, I know this is a problematic source, but it is very similar to a previous GPU, and spare units are almost a requirement if you want redundancy. There almost certainly have to be some spare shader units, or the 8800 GTX wouldn't even exist.

Some spares are one thing, but an extra 32 ALUs on top of 128? That is the same as the difference between the GTS and the GTX, and, as we know from the benchmarks, that is a lot.
I don't know... :???:
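
For what it's worth, here is the cluster arithmetic behind that scepticism (the 10-cluster/160-SP total is the rumour under discussion, not an established spec):

[CODE]
# Redundancy scenario from the rumour: 10 physical clusters of 16 SPs,
# with whole clusters disabled per SKU for yield.
SPS_PER_CLUSTER = 16

physical_sps = 10 * SPS_PER_CLUSTER   # 160 (rumoured, not confirmed)
gtx_sps = 8 * SPS_PER_CLUSTER         # 128, 8800 GTX as shipped
gts_sps = 6 * SPS_PER_CLUSTER         # 96,  8800 GTS as shipped

print("spare over the GTX:", physical_sps - gtx_sps)  # 32 = two whole clusters
print("GTX vs GTS gap:    ", gtx_sps - gts_sps)       # also 32, hence the doubt
[/CODE]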
 