Tech Power Up's article on the 9600 "Dirty Trick"

Post
Wrong guess. I still read the clocks from the PLL. I just use a hardcoded 27 MHz crystal clock for G94-based display adapters now.

Interesting...
 
http://en.expreview.com/2008/03/11/follow-up-to-nvidias-shady-trick-on-9600gt/

Will an nForce mobo automatically start LinkBoost?
No, because nForce boards no longer have this feature.
Does the 9600GT run better when used with nForce?
No. There is only a 0.3% increase in the multi-texture test, which is negligible. Comparing 3DMark06 total scores, an Intel P35 board shows no difference from a 780i.
Does the 9600 GT GPU clock increase 1:1 when we increase the PCIe clock?
No. The PCIe clock change does not affect only the GPU clocks. First, the scaling is not 1:1; second, the scaling patterns are not the same.
Do other cards also have this issue?
For now, no. But in the future we will encounter more 9-series cards with this kind of behavior.
Now here is a new question which we cannot answer: what does the PCIe clock affect?
It is clear that the GPU clock and PCIe clock patterns are not the same, so the PCIe clock does not affect only the GPU clock. But we cannot be sure which part is the key. Maybe only NVIDIA can answer this question.

Case closed. End of thread. :p
 
Probably because only Asus makes decent nForce Intel boards for one. :D

Of course marketing does pay off in a free mobo or two, and reviewers can't be picky either way I suppose...
 
Case closed. End of thread. :p

I'm still trying to figure out what the hubbub was all about in the first place. :oops: Do people still think that reviewers are overclocking their PCIe buses and making the 9600GT look better than it should? Aren't most reviews done on Intel chipset boards anyway?
 
I'm still trying to figure out what the hubbub was all about in the first place. :oops: Do people still think that reviewers are overclocking their PCIe buses and making the 9600GT look better than it should? Aren't most reviews done on Intel chipset boards anyway?

It's the reverse. Quite a few review sites use a 780i board for everything besides Crossfire.

I think someone over at Xtremesystems compiled a list of HD 3870X2 reviews. A lot of them used nF780i boards in particular, and even if a direct conclusion can't be drawn, the X2 seems to pale against the competition more in those setups.


Well, considering the tech press thrives on sensationalism, it's not uncommon; See TLB bug. ;)
 
So what exactly happened to the LinkBoost feature in the nForce 590 and 680 chipsets anyway? Lots of boards were shipped with that feature, and I'm sure NVIDIA couldn't magically disable it just by clapping their hands, so what happened to it? Maybe boards are still out there with LinkBoost working, and they just happened to make it to test beds? It's entirely possible that NVIDIA cards are still being "boosted"....

LinkBoost was removed a year ago:
EVGA Bios Version : P26
BIOS Date : 3/14/2007
Bios File : NF68_P26.exe (filesize N/A)
Notes : The following was updated in release P26:

Wireless PCI card fixes
Removed LinkBoost support (Please see below)
“LinkBoost was removed from nForce 680i SLI because it did not show significant demonstrable benefit in games. We had hoped newer games would take advantage of this additional bandwidth but this has not been the case. Please note that future BIOS upgrades will only remove the automatic overclocking component of LinkBoost. Users can still manually overclock the PCI-Express and HyperTransport buses in the BIOS.”

I doubt many reviewers who use a 680i are using such an old BIOS. The 650i never had LinkBoost.

I'm still trying to figure out what the hubbub was all about in the first place. :oops: Do people still think that reviewers are overclocking their PCIe buses and making the 9600GT look better than it should? Aren't most reviews done on Intel chipset boards anyway?

Agreed. It seems there is a lot of lack of comprehension about this around the web, and titles like "shady trick" don't help. Indeed, Intel boards were used more, the 650i never had LinkBoost, and LinkBoost was removed from the 680i a year ago. From my list, that leaves one EVGA 780i.

9600GT launch reviews:
Intel
AnandTech - Intel D5400XS Skulltrail
bit-tech - Gigabyte GA-X38-DS5
ChileHardware - abit IP35 PRO
Elite Bastards - Gigabyte GA-X38-DS5
[H]ard|OCP - Gigabyte X38-DQ6
HardwareZone - Intel D975XBX
Legit Reviews - Gigabyte X38-DQ6
OCWorkbench - Gigabyte GA-P35T-DQ6 Intel P35
Overclock3D - Asus P5E3 X38
Overclockers Club - Gigabyte GA-X48-DQ6
t-break - ASUS P5E3 X38
techPowerUp! - Gigabyte P35C-DS3R Intel P35
Tom's Hardware - Asus P5E3 Deluxe X38
TweakTown - GIGABYTE X48-DQ6

Nvidia
Ars Technica - Asus P5N-E 650i SLI
ComputerBase - Asus Striker Extreme 680i
FiringSquad - EVGA nForce 780i SLI
Guru3D - eVGA nForce 680i SLI
HEXUS - eVGA NF68 680i SLI
HotHardware - eVGA Nvidia nForce 680i LT SLI
I4U News - XFX 680i
nV News - BFG nForce 680i SLI
PC Perspective - EVGA nForce 680i
 
I'm in the same boat as you are. I think this has been blown way out of proportion by some hardware sites and users alike.


Well, somebody possibly 'fed' it to them... the tech press rarely comes up with labels like "shady" or "dirty trick" by themselves. It just appears to be negative PR.

Have we heard from NVIDIA on it?
 
Some days after TPU's "NVIDIA's shady trick" article and our follow-up, NVIDIA replied to our questions. This is the first time NVIDIA has officially answered us on the 9600GT. But as you can expect, they are still hiding some of the truth. However, we can already guess what's on their mind.

1. Is the 9600GT using LinkBoost to cheat?

NVIDIA said: “NVIDIA nForce boards no longer provide LinkBoost.”

TPU said: “This feature was pioneered with the NVIDIA 590i chipset and is present in the NVIDIA 680i chipset too, but has recently been disabled as far as I know. Also some motherboards from ASUS and other companies increase the PCI-Express bus frequency beyond 100 MHz when the BIOS option is set to “auto”.



The automatic increase of 25 MHz on the PCI-Express bus frequency yields an increase of 25% or 162.5 MHz over the stock clock (assuming a 650 MHz clock board design). With a final clock of 812.5 MHz you can bet this card will perform much better, when used by an unsuspecting user, on an NVIDIA chipset motherboard with LinkBoost.”...
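The arithmetic in the quote above can be sketched in a few lines. This is a minimal illustration assuming, as the quote does, that the core clock scales linearly with the PCIe reference clock; the function name is my own, not anything from a real tool.

```python
# Sketch of the clock arithmetic from the TPU quote: the GPU core clock is
# assumed to scale linearly with the PCIe reference clock (nominally 100 MHz).

def derived_core_clock(nominal_core_mhz: float, pcie_clock_mhz: float,
                       pcie_nominal_mhz: float = 100.0) -> float:
    """Effective core clock when the PCIe bus runs off-spec."""
    return nominal_core_mhz * (pcie_clock_mhz / pcie_nominal_mhz)

# LinkBoost-style +25 MHz on the PCIe bus, 650 MHz board design:
effective = derived_core_clock(650.0, 125.0)
print(effective)  # 812.5 MHz, a 25% overclock the driver never reports
```

This is exactly why an "unsuspecting user" on a board that quietly raises the PCIe clock ends up benchmarking a silently overclocked card.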

3. Is the 9600GT's core clock derived from the PCIe clock, and does it scale 1:1?

NVIDIA said: “The core and SP clocks on 9600GT are derived from PCIe (PEX) clock. There are two reference clocks available on the chip: the 100 MHz PEX clk and the 27 MHz Crystal clock. Either can be chosen, but using a higher reference clock provides better clock stability (less jitter).

If users were to inadvertently set the PCIe clock really high, it could cause an excessive GPU clock speed increase, but no chip damage would ever occur because the GPU’s thermal protection circuitry would be triggered, and the chip would slow down.”...
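NVIDIA's explanation can be sketched numerically: a PLL multiplies whichever reference clock is selected up to the target frequency, and only the PEX-derived path inherits bus overclocks. The multiplier values here are illustrative assumptions of mine, not taken from any G94 documentation.

```python
# Hypothetical sketch of the two reference-clock paths NVIDIA describes.
# The multiplier values are illustrative, not from any G94 datasheet.

PEX_REF_MHZ = 100.0      # PCI-Express reference clock (moves with bus OC)
CRYSTAL_REF_MHZ = 27.0   # on-board crystal (fixed)

def pll_output(ref_mhz: float, multiplier: float) -> float:
    """A PLL multiplies its reference clock up to the target frequency."""
    return ref_mhz * multiplier

# Both references can synthesize a 650 MHz core clock...
from_pex = pll_output(PEX_REF_MHZ, 6.5)
from_crystal = pll_output(CRYSTAL_REF_MHZ, 650.0 / 27.0)

# ...but only the PEX-derived clock moves when the bus is overclocked:
overclocked_bus = pll_output(110.0, 6.5)  # +10% PCIe -> +10% core
```

The trade-off NVIDIA names is jitter: the higher 100 MHz reference gives a cleaner synthesized clock than the 27 MHz crystal, at the cost of tying the core clock to whatever the motherboard does with the PCIe bus.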

Because the new clocking method exists and NVIDIA did not want us to know about it, calling it cheating is not wrong; NVIDIA's PR has only itself to blame...
Source. More info can be found if you read the article.

So, it looks like the TPU article was correct after all. Also, I still find it odd that there are no examples using XFX and EVGA 680i motherboards.

One more note regarding the 9800 GX2:
The final overclocks of our card are 749 MHz core (25% overclock) and 1106 MHz memory (11% overclock). I was totally surprised by these overclocks. From a dual-GPU design like this I expected little or no overclocking headroom at all. This is further evidence that NVIDIA's new cooler design works well and is an excellent innovation.

Please note that the core clock of both GPUs is dependent on the PCI-Express clock signal, just like on the GeForce 9600 GT. Please read our article here for additional details.
Source
 
So, it looks like TPU article was correct after all.
Has anyone said otherwise? Seems to me the issue was always how many people this affected (especially reviewers) and hence the appropriateness of calling it a "dirty trick," not if it actually happens to LinkBoost MBs.
 
Did this get mentioned?

http://www.nordichardware.com/news,7538.html

NVIDIA responds to the GeForce 9600GT PCIe irregularity
Written by Andreas G 21 March 2008 16:35

"There has been a lot of buzz around the web regarding an article published by W1zzard over at TechPowerUp. He found an irregularity when testing the just launched GeForce 9600GT. It seemed that the card would perform exceptionally better when it was running with an overclocked PCIe bus, which isn't normal under these circumstances. He investigated it further and found that it seemed like the card used the PCIe frequency as a reference crystal, instead of the on-board physical crystal. A follow-up investigating the oddity was also posted.

The problem with this isn't so much that the card overclocks with the PCIe bus, it's actually quite nifty, but that the increased frequency wasn't reported by the drivers. The card seemed to operate at default frequency when it was not. People have been wondering why NVIDIA didn't reveal this to people reviewing the card, as they may have been lured into making the card look better than it was. That would be the paranoid angle of it, but right now it's the one dominating the discussions.

We still don't know why this information was omitted. It might just have been some sort of miscommunication at NVIDIA, because it has now made an official response saying that the card does indeed have two crystals: one on-board 27MHz crystal and one clock which is derived from the PCIe bus. The thing is, GeForce 9600GT isn't the only card that behaves this way. TechPowerUp discovered that GeForce 9800GX2 behaves the same way, and chances are that the rest of the GeForce 9 series cards do too."
 
because it has now made an official response saying that the card does indeed have two crystals: one on-board 27MHz crystal and one clock which is derived from the PCIe bus. The thing is, GeForce 9600GT isn't the only card that behaves this way. TechPowerUp discovered that GeForce 9800GX2 behaves the same way, and chances are that the rest of the GeForce 9 series cards do too.

Now that it's official we can congratulate Nvidia on a very innovative technique and hope they have patented the feature. ;)
 
Could this PCI Express based clocking be a "power saving" feature? Until Hybrid SLI with power saving is launched, I presume NVIDIA was trying to keep this under wraps.

Jawed
 
The 9600GT doesn't support HybridPower (the 9800GX2 is the first board to support it; don't get me started on this one...) so I doubt it, sadly.
 
Could this PCI Express based clocking be a "power saving" feature? Until Hybrid SLI with power saving is launched, I presume NVIDIA was trying to keep this under wraps.

Jawed

Maybe a botched try at it?
 