G70 Core Clock Variance

digitalwanderer said:
Did y'all see nVidia's response to Hiiiiilbert's inquiry?

Yeah, we saw it :). Hopefully Unwinder will have some answers for us soon. He's been polling people at nV News for RivaTuner register dumps and the like to aid his investigation as well.
 
Yeah, it could. If it does turn out to be a real phenomenon, it'll make editors pretty peeved, since they weren't told about it and didn't account for it in their 7800 GTX launch reviews (as far as I know).

And if it is something that happens, I'm not sure I can see different clocks being applied to different parts of the chip. The issues with buffering output from one part of the chip to another, on clock boundaries, might be too hard to manage in a chip like G70. I'd guess the whole chip is clocked up or down instead.
 
Rys said:
And if it is something that happens, I'm not sure I can see different clocks being applied to different parts of the chip. The issues with buffering output from one part of the chip to another, on clock boundaries, might be too hard to manage in a chip like G70. I'd guess the whole chip is clocked up or down instead.

Isn't there already buffering between the pixel and vertex engines? Would this become that much more complex if they had different clocks?
 
Sounds like multiple clock domains to me. The 40MHz discrepancy was first encountered with 3DMark, when the ORB would report higher clocks than the ones shown by Coolbits.
 
Rys said:
If it does turn out to be a real phenomenon, it'll make editors pretty peeved, since they weren't told about it and didn't account for it in their 7800 GTX launch reviews (as far as I know).
They didn't mention it to you, I take it; and if you're planning on being peeved, I wonder if Kyle will blow a gasket over it? :|

I gotta go check [H] now and see if they've heard about this yet... ;)

EDITED BITS: Nope, looks like he was up all night playing with a BFG 7800 GTX and loving it...the lucky barstard!
(His post about it was at 4:09am)
 
trinibwoy said:
Isn't there already buffering between the pixel and vertex engines? Would this become that much more complex if they had different clocks?

Yeah, but it helps if the gates that drive the buffers are ticking at the same clock as the rest of the chip. Or so I assume.
 
digitalwanderer said:
EDITED BITS: Nope, looks like he was up all night playing with a BFG 7800 GTX and loving it...the lucky barstard!
(His post about it was at 4:09am)

Yeah, two days ago. They're having fun playing with the news menu again. I hate it when they do that.

Edit: Brent seems bored by the whole thing (see Trini's thread), and so does Hanners.
 
russo121 said:
To me it's like ATI's overdrive... good idea!

True dat.

It's not like you should lie awake over this, I guess. It would actually be good practice to run different clocks for 2D and 3D mode.
While Overdrive ties a percentage-based clock speed increase to GPU temperature, most people, like me, would rather underclock their cards (heavily) in 2D.
I like my machine silent, so for 2D my card (X800 Pro VIVO) drops to 350/375, and 3D brings it up to 520/560.

Now, a 40MHz increase for normal cards and 70MHz for the OC'd ones is nice, but wouldn't anyone want to advertise the highest available clock speed?
It would seem like standard marketing to me to say that, although you clock 40MHz lower than your competitor, you're still way faster...
 
If the clock was actually jumping, shouldn't performance go up to match? I just did a quick single-board set of tests, and at 430MHz the fillrate is around 6600 Mtexels/sec. For a card with 16 ROPs clocked at core frequency, that's pretty spot on for 430MHz.

I get the same ratio of Mtexels/sec to core clock on a 6800 Ultra, which has the same ROP count. And on NV43 (where the ratio should match its ROP count fairly closely). And on pretty much any board I have results for from the fillrate tests I use.

Seems to me that the clock is actually 430MHz, when doing 3D, unless I'm completely missing something.
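Rys's sanity check can be sketched as a quick calculation (a back-of-the-envelope sketch, assuming single-textured fillrate is ROP-limited and scales linearly as ROP count times core clock; the figures come from the posts above):

```python
# Sketch of the fillrate sanity check: for a ROP-limited test, peak
# fillrate in Mtexels/sec should be roughly ROP count x core clock (MHz).

def expected_fillrate_mtexels(rops: int, clock_mhz: int) -> int:
    """Theoretical peak fillrate in Mtexels/sec, assuming linear scaling."""
    return rops * clock_mhz

# G70 at its reported 430MHz core clock:
print(expected_fillrate_mtexels(16, 430))  # 6880, close to the ~6600 measured
# If the core were secretly running at 470MHz, we'd expect:
print(expected_fillrate_mtexels(16, 470))  # 7520, well above what the test shows
```

The measured ~6600 Mtexels/sec sits just under the 430MHz theoretical peak, which is why the numbers argue against the whole core jumping to 470MHz.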
 
Rys said:
If the clock was actually jumping, shouldn't performance go up to match? I just did a quick single-board set of tests, and at 430MHz the fillrate is around 6600 Mtexels/sec. For a card with 16 ROPs clocked at core frequency, that's pretty spot on for 430MHz.

I get the same ratio of Mtexels/sec to core clock on a 6800 Ultra, which has the same ROP count. And on NV43 (where the ratio should match its ROP count fairly closely). And on pretty much any board I have results for from the fillrate tests I use.

Seems to me that the clock is actually 430MHz, when doing 3D, unless I'm completely missing something.
I don't think you are. It's definitely not actually running at 470MHz, because fillrate tests and everything else would show that. Are there any benchmarks anywhere that don't really make sense, or that show a greater-than-expected gain? Are there any aspects of G70 performance that don't line up at all with NV40's? It's not any sort of Overdrive or anything silly like that, as the core (in the sense that everyone here has always thought of a GPU core) isn't running at 470MHz.
 
Hmm, reread the posted article, please? Apparently the VS units, and perhaps other parts of the chip, are running at 470MHz; the PS and ROPs are running at 430MHz, aka the "primary clock speed". I thought the link was quite clear about this? How come so many people are jumping to weird conclusions so fast? :?
 
Uttar said:
Hmm, reread the posted article, please? Apparently the VS units, and perhaps other parts of the chip, are running at 470MHz; the PS and ROPs are running at 430MHz, aka the "primary clock speed". I thought the link was quite clear about this? How come so many people are jumping to weird conclusions so fast? :?

That's just speculation. In vertex-bound tests I get differences that indicate two more vertex units only (if that), not a bump in clocks to 470MHz or so on top of that.

For example, on the complex vertex shader test in 3DMark05:

430MHz G70 - 45.1
425MHz NV45 - 38.5

Simple test:

430MHz G70 - 61.5
425MHz NV45 - 50.2

If I overclock the G70 to 470MHz using Coolbits, I get 65.2 in the simple test and 48.4 in the complex test. That's plain clock scaling, paired with the rough increase from the extra vertex units.

I still can't see it :?
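Rys's scores can be put against the theoretical ceiling (a sketch, assuming vertex throughput scales linearly with unit count times clock, and using the 8-VS G70 vs 6-VS NV45 counts mentioned elsewhere in the thread):

```python
# If the G70's vertex units were really at 470MHz, the G70/NV45 score gap
# should approach or exceed the unit-count-plus-clock ratio. Scores are the
# 3DMark05 vertex shader results quoted above.

def vs_throughput_ratio(units_a: int, clock_a: int,
                        units_b: int, clock_b: int) -> float:
    """Theoretical vertex throughput ratio, assuming linear scaling."""
    return (units_a * clock_a) / (units_b * clock_b)

# 8 VS @ 430MHz (G70) vs 6 VS @ 425MHz (NV45):
print(round(vs_throughput_ratio(8, 430, 6, 425), 3))  # 1.349 theoretical ceiling
# Measured ratios fall short of even that:
print(round(61.5 / 50.2, 3))  # 1.225 (simple test)
print(round(45.1 / 38.5, 3))  # 1.171 (complex test)
# With VS at a supposed 470MHz the ceiling would be higher still:
print(round(vs_throughput_ratio(8, 470, 6, 425), 3))  # 1.475
```

Since the measured gaps sit below even the 430MHz ceiling, the scores give no room for a hidden 470MHz vertex clock domain.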
 
from the B3D review:

In these tests the performance difference between the two boards is a little less than the increase in theoretical vertex rates, which doesn't necessarily point to much in the way of changes in the geometry pipeline beyond the extra two vertex shader units for G70.

Most of the tests in this case indicate a similar pattern to the performance differences between the two boards under the various different shader profiles. However, in this instance we do see the Ambient test is slightly above the pure theoretical geometry performance differences, which does suggest that there are some changes to the Vertex Shader ALUs.

so the "VS units are running at 470MHz" explanation doesn't make a whole lot of sense to me.

(and you know, I could be Rys too if I had a G70 myself so I didn't have to rely on finding quotes in reviews to back up my positions and could just run the damn benches myself ;) )
 
Sinistar said:
Why go to the trouble of having them run at different speeds? What's the point?

Higher geometry throughput? 8 VS at 470 should be able to feed 24 pipes at 430 better than 8 VS at 430, but as Rys and Baron noted, we don't see evidence of higher VS performance either.
 
Stencil?
As we've seen before, the double Z rate of previous NVIDIA parts carries across to G70, hence the 7800 GTX still has a performance advantage over the 6800 Ultra in this test. The difference is, in fact, larger than the pure stencil performance difference would suggest, which is likely due to the increased shader capabilities.
 