G70 Core Clock Variance

  • Thread starter Deleted member 2197
Well, the best thing to do at this point would be to test the triangle setup rate. This implies basically abusing the vertex cache to the maximum, so that the VS idles.
I don't have a framework to do this efficiently right now, and I'm not sure VBO would be best to test this (since it tends to preprocess a bit; if the same triangles came up, perhaps the driver would detect it; but then again it's always possible to use different triangles but you're limited to n*n possibilities then). If nobody feels like bothering, I always could see what I can come up with though...
Honestly, I can't believe NVIDIA's claim that they improved triangle setup/rasterisation efficiency. These parts must have been pretty much unchanged since the TNT2! Sure, they might have had a revamp or two since then, but there's no reason to redesign them completely for every new generation of GPUs. Which would point towards an asynchronous clock rate being used in that area, at least imo.
As for people asking "Why should part of the chip run at 470MHz, and not all of it?" - from my very limited understanding of hardware engineering (I'm sure some other members could give a much better answer ;)), different parts of all chips have different clocking potential. The NetBurst architecture, for example, runs some of its ALUs at a much higher clock rate than the rest of the chip, because they were hand-tuned to support such a high frequency and because ALU design, if done properly, tends to be slightly more tolerant of higher clock rates.
And let me reiterate that I'm talking about Triangle Setup performance here - "geometry tests" wouldn't highlight this, should it be those units that are running at a higher frequency.

Uttar
 
It does make sense to change the triangle setup/rasterization for generating more general pixel positions for multisampling and supersampling. This is probably the change that was talked about for the 7800: it allows RGSS. You can, of course, hack RGSS/RGMS by having the triangle setup engine just output many more triangles (and have them recombined so they're rendered in an efficient order), but it'd be better to increase the efficiency of the triangle setup engine when setting up a non-ordered grid.
 
Honestly, I can't believe NVIDIA's claim that they improved triangle setup/rasterisation efficiency.

You can't or refuse to? ;)

A claim does not automatically mean that it's the whole story.

And let me reiterate that I'm talking about Triangle Setup performance here ...

I understood that and to that I replied; see your PM.
 
AndrewM said:
I don't know how Dave came to the conclusion that the increased stencil performance had anything to do with the shaders.

You need to hit the link to see which test is being used; the quote is out of context without it.
 
Crap, I thought I'd compared the 6800U and 7800GTX on a "fair" basis using the same clock and 16pp/6vs config; this proves me wrong :(

- Oh well, the results were surprisingly positive for the G70 chip compared to the NV40.

Great for the users, but not so great for reviewers or others trying to tease some interesting results out of the chip.
 
DaveBaumann said:
AndrewM said:
I don't know how Dave came to the conclusion that the increased stencil performance had anything to do with the shaders.

You need to hit the link to see which test is being used; the quote is out of context without it.

Ah, yes, sorry Dave, I must have read that part in haste. Don't you have any other benchmarks for testing stencil performance?
 
Strangely, this ties in exactly with something interesting I observed. A card manufacturer came in with an, um, interestingly clocked 7800GTX: 490/1.3GHz.
What's interesting is that their default-clocked 7800GTX is 450/1.25, AND installed in certain motherboards it reads 450, in others 490. Curiouser and curiouser.
 
ben6 said:
Strangely, this ties in exactly with something interesting I observed. A card manufacturer came in with an, um, interestingly clocked 7800GTX: 490/1.3GHz.
What's interesting is that their default-clocked 7800GTX is 450/1.25, AND installed in certain motherboards it reads 450, in others 490. Curiouser and curiouser.

The XFX I have right now is at 450/1.25GHz.
 
I've just noticed that Overclockers.co.uk have started selling the XFX "Extreme Edition" with the 490/1300 clocks. I assume the name is a bit of marketing to link it with Intel's Pentium "Extremely Expensive" chips, as the price of £460 including taxes is equivalent to a mere $824 for our US chums. :oops:
 
ben6 said:
Strangely, this ties in exactly with something interesting I observed. A card manufacturer came in with an, um, interestingly clocked 7800GTX: 490/1.3GHz.
What's interesting is that their default-clocked 7800GTX is 450/1.25, AND installed in certain motherboards it reads 450, in others 490. Curiouser and curiouser.
This has nothing to do with the graphics card; some motherboard manufacturers have an option in the BIOS which overclocks the graphics card (PEG Link for ASUSTeK). You have to disable this option!
 
Some updated info about investigation progress:

1) G70 has 3 independently clocked domains, without doubt. I've already fully ripped the clock detection logic from the driver, so the new RT will replace the single "Core clock" graph with 3 graphs, representing the current clocks of each domain. I still have no strict info about the functional purpose of these domains; most likely they are the geometric domain (that clock is currently read by the publicly available RT and 3DMark), the shader domain and the ROP domain. So this naming scheme ("Core clock \ geometric domain", "Core clock \ shader domain" and "Core clock \ ROP domain") is currently used in the beta of RT 15.7. However, we're still waiting for NV's official comments on it, as well as performing our own synthetic testing to determine the domains' functional purposes ourselves. So the domain naming scheme will probably change in the future.
2) The shader/ROP domains are clocked by a more primitive PLL than the geometric domain (which allows per-1MHz clock frequency adjustment), and currently the NVIDIA driver is able to adjust the clocks of these domains only in 27MHz (oscillator frequency) steps. For example, the default 430MHz ROP clock is generated as 27MHz * 16 (i.e. 432MHz), and the next ROP clock the driver is able to set is 459MHz (27MHz * 17). This results in a rather interesting effect when overclocking G70: often overclocking will adjust the geometric domain clock only, e.g. an attempt to set 440MHz instead of the default 430MHz will result in generating the same 432MHz for the ROP domain clock, but 480MHz (440+40) for the geometric domain clock. This effect is clearly visible on the new RT's core clock monitor; G70 owners may also verify it with fillrate tests and see that there is no change in fillrate in such a case.
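The quantization described above can be sketched in a few lines. This is purely illustrative (the function names and the round-to-nearest behaviour are my assumptions, not real driver logic), but it reproduces the worked example: a 440MHz request lands on the same 432MHz ROP clock as the 430MHz default, while the geometric domain moves from 470 to 480MHz.

```python
# Illustrative sketch of the G70 clock quantization described above.
# Assumptions (not confirmed driver behaviour): the ROP/shader PLL can
# only multiply the 27MHz oscillator by an integer, rounding to the
# nearest multiple, and the geometric domain runs at requested + delta.

OSC_MHZ = 27  # reference oscillator frequency

def rop_shader_clock(requested_mhz: int) -> int:
    """Nearest clock the coarse PLL can generate (27MHz multiples)."""
    return round(requested_mhz / OSC_MHZ) * OSC_MHZ

def geometric_clock(requested_mhz: int, delta_mhz: int = 40) -> int:
    """Geometric domain runs at the requested clock plus the BIOS delta."""
    return requested_mhz + delta_mhz

# Default 430MHz request: ROP/shader actually runs at 27 * 16 = 432MHz
print(rop_shader_clock(430), geometric_clock(430))  # 432 470
# Overclocking to 440MHz moves only the geometric domain:
print(rop_shader_clock(440), geometric_clock(440))  # 432 480
# The next reachable ROP clock is 27 * 17 = 459MHz:
print(rop_shader_clock(450))  # 459
```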
3) The target 3D clock for the shader/ROP domains (430MHz) and the delta for the geometric domain clock (40MHz) are explicitly specified in the VGA BIOS performance table. The new RT is able to display this delta in the "NVIDIA VGA BIOS information" section of the diagnostic report. An example of such info is shown below:

$ffffffffff ---------------------------------------------------
$ffffffffff NVIDIA VGA BIOS information
$ffffffffff ---------------------------------------------------
$1100000000 Title : GeForce 7800 GTX VGA BIOS
$1100000002 Version : 5.70.02.11.01
$1100000100 BIT version : 1.00
$1100000200 Core clock : 275MHz
$1100000201 Memory clock : 1200MHz
$1100010000 Perf. level 0 : 275MHz/600MHz/1.20V/100%
$1100010001 Perf. level 1 : 415(+35)MHz/600MHz/1.40V/100%
$1100010002 Perf. level 2 : 430(+40)MHz/600MHz/1.40V/100%
$1100020000 VID bitmask : 00000011b
$1100020100 Voltage level 0 : 1.20V, VID 00000000b
$1100020101 Voltage level 1 : 1.30V, VID 00000010b
$1100020102 Voltage level 2 : 1.40V, VID 00000001b

Please pay attention to the performance level 1 descriptor. It is the so-called low-power 3D performance level, and the system uses it as a temporary step when switching from 3D (performance level 2) to 2D (performance level 0). As you see, it has a different geometric clock delta, and it perfectly matches the temporary 450MHz clock (415+35) which you can see on many clock graphs.
During overclocking the driver uses this BIOS-defined delta: it generates the closest possible clock to the desired one for the ROP/shader domains, and the closest possible to (desired clock + delta) for the geometric domain. => We'll probably see some BIOS editors in the future allowing us to alter this delta, or even independent clock control sliders for each domain (of course, if NV decides to provide such functionality in their driver).
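To make the report format above concrete, here is a small parser for the performance-level lines, extracting the core clock and the geometric delta. The line format is taken from the dump above, but the regex and field names are my own guesses at a stable layout, not anything official from RT:

```python
import re

# Parse RivaTuner-style performance-level lines such as
#   "Perf. level 2 : 430(+40)MHz/600MHz/1.40V/100%"
# where "(+40)" is the BIOS-defined geometric-domain delta (absent at
# perf. level 0). The layout is inferred from the report dump above.
LINE_RE = re.compile(
    r"Perf\. level (\d+)\s*:\s*(\d+)(?:\((\+\d+)\))?MHz/(\d+)MHz/([\d.]+)V"
)

def parse_perf_level(line: str) -> dict:
    m = LINE_RE.search(line)
    if not m:
        raise ValueError("not a performance-level line")
    level, core, delta, mem, volt = m.groups()
    core = int(core)
    delta = int(delta) if delta else 0
    return {
        "level": int(level),
        "core_mhz": core,               # ROP/shader target clock
        "geometric_mhz": core + delta,  # geometric domain = core + delta
        "memory_mhz": int(mem),
        "voltage": float(volt),
    }

info = parse_perf_level("Perf. level 1 : 415(+35)MHz/600MHz/1.40V/100%")
print(info["geometric_mhz"])  # 450, the temporary clock seen on graphs
```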
4) Digit-Life are currently preparing a review summing up all this info.
5) Unfortunately I'm leaving the city for vacation soon, so I'll probably be offline from the 11th of July until the 8th of August. The new RT will be publicly launched when I return.

Stay tuned ;)
 
Thanks for the update, Unwinder --I'm sure we all appreciate you taking the time to keep the denizens at B3D informed.

Sounds like you have the situation well in hand and the mysteries on the way to resolution.
 
Marc said:
ben6 said:
Strangely, this ties in exactly with something interesting I observed. A card manufacturer came in with an, um, interestingly clocked 7800GTX: 490/1.3GHz.
What's interesting is that their default-clocked 7800GTX is 450/1.25, AND installed in certain motherboards it reads 450, in others 490. Curiouser and curiouser.
This has nothing to do with the graphics card; some motherboard manufacturers have an option in the BIOS which overclocks the graphics card (PEG Link for ASUSTeK). You have to disable this option!

Nope, not quite. The card is default-clocked at 450/1.25 (according to Coolbits), but clocked at 490/1.3 according to RivaTuner and 3DMark. Strange, actually, as the PR person who dropped it off said it was supposed to be 490/1.3 on the AMD SLI board.

On the Asus and Gigabyte Intel SLI boards, strangely, it's clocked at 450MHz in all instances.
 
Yep, thanks for your efforts, Unwinder.

So, Dave, is this news or old hat to ATI? Any traffic spikes from certain IP ranges?
 
Just to back Unwinder up (not that he needs it!) and prove my earlier farting about wrong (always nice :LOL: ), NVIDIA confirmed to me today that there are indeed discrete clock domains for G70.

They'd only confirm that the pixel processing and output hardware uses the primary clock, so FPs and ROPs are at 430MHz for a reference 7800 GTX. Everything else is a 'no comment' and 'Unwinder will get the info :D'.
 
Rys said:
Everything else is a 'no comment' and 'Unwinder will get the info :D'.
I don't get that, they know Unwinder is gonna tell us about it. :rolleyes:

Lemme guess, that was either a BB or DP reply....right? ;)
 
Unless it was a prediction rather than a promise. They can read too --they probably know better than UW that the game is up. :)
 