NVIDIA: Beyond G80...

I am unsure that a large shader clock bump would do much, though, as the thread dispatch unit runs at the core speed, not the shader speed, if I remember correctly. That is, I seem to remember nVidia stating at the G80 launch that they could have increased the shader clock quite a bit more, but the rest of the core couldn't keep up.

Granted, they might have adjusted those parts so that they're no longer a bottleneck, but we'll see.
 
Basing it on Anandtech's findings, where overclocking the core gave more than twice as big an increase as overclocking the shaders.

Oh yeah, I remember that article. Those Anandtech numbers would indicate that even at 640MHz core you still don't gain anything by bumping the shaders up from 1350MHz. So yeah, I'd have to agree with you here - 1800MHz seems like it would be a fantastic waste even at 700MHz core, unless per-pixel math increases significantly in upcoming titles.

Edit: Though it does seem somewhat application-dependent. The following link shows cases where a 25% shader advantage has either a significant or a negligible effect on performance, depending on the game -
http://www.overclock3d.net/reviews....0gts__xxx__vs_gainward_8800gts__golden_sample_
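One way to picture why the extra shader clock can go to waste - a toy bottleneck model in Python. The workload split and the clocks below are made-up assumptions for illustration, not numbers from those reviews:

[code]
# Toy bottleneck model (illustrative only): frame time is set by whichever
# domain - core (setup/ROP/texturing) or shader ALUs - takes longer per frame.
CORE_WORK = 1.0     # assumed units of core-clock-bound work per frame
SHADER_WORK = 0.6   # assumed units of shader-clock-bound work per frame

def fps(core_mhz, shader_mhz):
    core_time = CORE_WORK / core_mhz
    shader_time = SHADER_WORK / shader_mhz
    return 1.0 / max(core_time, shader_time)

base = fps(575, 1350)            # stock 8800 GTX clocks
print(fps(640, 1350) / base)     # core OC helps while we're core-bound
print(fps(575, 1800) / base)     # shader OC alone does nothing here
print(fps(700, 1800) / base)     # raise both and the gain tracks the core bump
[/code]

Flip the workload split the other way (shader-heavy) and the shader clock suddenly matters, which is roughly what the game-to-game variation in that link looks like.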
 
But last page he was just some guy with no proof whatsoever! What makes his opinion any better than mine? :p

I'm not sure how G80 having 160 shader units matters to him or his project if they're not accessible. I'm also skeptical of G80 hiding extra units for the same reason every other rumor of hidden pipes has come to naught. True, cores are getting way bigger, so the chance of a defect on a given die is greater, but so much so as to lug around a third more shader units as spares? And shader units that take up a relatively smaller proportion of the die to begin with b/c they're double-clocked. Then again, he's got friends with x-ray machines, so I suppose I should defer to FUDz's reporting.

That just feels so wrong, though.

As for per-pixel apps that may stress shaders, I'd say there's a new 3DMark a-comin' (along with xth-gen DX9 titles and console ports with more shader power than bandwidth to spare), but I've already exceeded my talking-out-the-wrong-hole allowance for the month, and I'd hate to lose face on the internet.

Ah, what the heck.
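Back to the spare-units point above, here's a back-of-the-envelope yield sketch in Python. Every number in it (defect density, die area spent on shaders, 160 physical vs 128 enabled units) is an assumption for the sake of argument, not anything known about G80:

[code]
import math

# Poisson yield model: P(a block is defect-free) = exp(-defect_density * area).
DEFECT_DENSITY = 0.4     # defects per cm^2 (assumed)
SHADER_AREA_CM2 = 1.0    # die area spent on shader units (assumed)
UNITS = 160              # hypothetical physical shader units
NEEDED = 128             # units that must work for the shipping SKU

per_unit_area = SHADER_AREA_CM2 / UNITS
p_unit_good = math.exp(-DEFECT_DENSITY * per_unit_area)

def p_at_least(k, n, p):
    """Probability that at least k of n independent units are defect-free."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print("all 160 units good:       %.3f" % (p_unit_good ** UNITS))
print("at least 128 of 160 good: %.3f" % p_at_least(NEEDED, UNITS, p_unit_good))
[/code]

With a modest assumed defect density the "all good" case is already in decent shape and a couple of spares would mop up most of the rest, which is why carrying 32 of them as pure redundancy looks like overkill.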
 
I am unsure that a large shader clock bump would do much, though, as the thread dispatch unit runs at the core speed, not the shader speed, if I remember correctly. That is, I seem to remember nVidia stating at the G80 launch that they could have increased the shader clock quite a bit more, but the rest of the core couldn't keep up.

Granted, they might have adjusted those parts so that they're no longer a bottleneck, but we'll see.

- Confirmation that the scheduler clock runs at half the shader clock (675MHz for the 8800 GTX, for example).

Chalnoth,

So a theoretical 1800MHz shader domain would have a 900MHz thread scheduler clock.
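Trivial, but to make that arithmetic explicit (just taking the half-rate relationship above at face value):

[code]
# Scheduler clock at half the shader clock, per the point above.
def scheduler_mhz(shader_mhz):
    return shader_mhz / 2

print(scheduler_mhz(1350))  # 8800 GTX -> 675 MHz
print(scheduler_mhz(1800))  # hypothetical 1800 MHz part -> 900 MHz
[/code]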
 
But last page he was just some guy with no proof whatsoever! What makes his opinion any better than mine? :p ..... Then again, he's got friends with x-ray machines, so I suppose I should defer to FUDz's reporting.

A new page can make a world of difference :p

But yeah I'm with ya on the unlikelihood of there being 160 shaders. But I was leaning more towards Fudo's bungling of the whole thing instead of the guy misinterpreting some performance numbers on new drivers. The > 1x MUL rate uncovered in the B3D article is still less than Nvidia claims is possible on G80 so I doubt anyone in his position would get too excited over it. Especially since we "know" that the current SKU only has 128 shaders. When I read it I figured if Fudo was right and this guy did explicitly claim that G80 has 160 shaders total then he either has inside info or he reads too many 3D rumour sites!
 
Let's dissect the latest Fudzilla story, shall we ? ;)

Supposedly there is an "8850 GX2" just waiting for ATI's R600.
But what really caught my eye was that odd memory amount (896MB, wtf ?!?).
If they were two 8800 GTS cores, then it would have to be either 640MB or 1280MB, right ? Or are we talking about 448MB per GPU ?


So, what do you think ?
(my take: fake)
 
Heh, so 896MB total = a 224-bit bus, 448MB and 7 memory modules / 14 ROPs per card? That's about as weird as you can get - is Nvidia getting into the Frankenstein business? But there's no way I'm going to believe that Nvidia did a GX2 at > 600MHz with 7 ROP clusters enabled on 90nm.

And that would mean the memory bus isn't limited to 64-bit chunks. It's just too stupid to be true. Why not just do the tried and true 256-bit on each card for a total of 1GB?
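For what it's worth, the arithmetic behind that 224-bit / 7-module guess, assuming the same 64MB, 32-bit GDDR3 chips that the 8800 GTX/GTS boards use:

[code]
# Working backwards from the rumoured 896MB total, assuming two GPUs and the
# usual G80-era 64MB, 32-bit GDDR3 chips (same parts as on the GTX/GTS).
TOTAL_MB = 896
CHIP_MB = 64
CHIP_WIDTH_BITS = 32

per_card_mb = TOTAL_MB // 2                    # 448MB per GPU
chips_per_card = per_card_mb // CHIP_MB        # 7 chips
bus_width = chips_per_card * CHIP_WIDTH_BITS   # 224-bit

print(per_card_mb, chips_per_card, bus_width)
# Compare: 8800 GTX = 12 chips x 64MB = 768MB on 384-bit,
#          8800 GTS = 10 chips x 64MB = 640MB on 320-bit.
[/code]

Seven chips per GPU is 3.5 of G80's 64-bit memory partitions, which is exactly why the number smells wrong.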
 
Heh, so 896MB total = a 224-bit bus, 448MB and 7 memory modules / 14 ROPs per card? That's about as weird as you can get - is Nvidia getting into the Frankenstein business? But there's no way I'm going to believe that Nvidia did a GX2 at > 600MHz with 7 ROP clusters enabled on 90nm.

And that would mean the memory bus isn't limited to 64-bit chunks. It's just too stupid to be true. Why not just do the tried and true 256-bit on each card for a total of 1GB?

I'd agree, very odd. What's up with the strange RAM amounts?
 
Maybe there's 512MiB per chip, but they just reserve 64MiB per GPU for, err, erm.......shits and giggles?
 
Reserved for when you finally write that port of Linux, in CUDA?
 
Indeed. After all it is more CPU than GPU. :LOL:

64MB is reserved for the on-board eDRAM.
Oh wait, *G80* ... must be for the PS3 emulator. ;^)

I'm still a lot more partial to the 10 cluster rumor than the GX2. Unfortunately, if I were to guess which were more likely....
 
I'm still not putting too much stock into the GX2 stuff. G80 is a beast compared to G71 - it won't be so easy to just slap two of them together, even at 80nm. And I'm thinking going to 80nm is a must if there's any chance of them doing SLI-on-a-stick again. So that obviously leads me to ask - why not just create a souped-up G80 on 80nm? Might be wishful thinking, but I just don't see GX2 happening.
 
What about the power draw of... even two GTSes with 320MB? I'd like to know how far along the mobile G80s are before even announcing it as a possibility.
 
I also heard today that the GX2 is coming, which is supposedly based on two G81 cores. The GF8950GX2 cooling solution shown at CeBIT was already a hint of what was to come. And the price... a staggering $999.
 
I also heard today that the GX2 is coming, which is supposedly based on two G81 cores. The cooling solution shown at CeBIT was already a hint of what was to come. And the price... a staggering $999.

:oops:

A grand for two unknown processors...
They must feel really confident to release such a card, especially if the R600 beats G80 as is generally expected.
Also, what about single GPU "G81's" ? Any word on that ?
 
I also heard today that the GX2 is coming, which is supposedly based on two G81 cores. The GF8950GX2 cooling solution shown at CeBIT was already a hint of what was to come. And the price... a staggering $999.

/me prepares to collect on a bet with Uttar. . . . :D
 
:oops:

A grand for two unknown processors...
They must feel really confident to release such a card, especially if the R600 beats G80 as is generally expected.
Also, what about single GPU "G81's" ? Any word on that ?

the plot thickens......
 