Interestingly, from the standpoint of FP16 filtering, the 680 is Nvidia's R600.
It seems AMD was about half a decade early on that one.
Because... GF104 and onwards also did it single-cycle?
jimbo75 said:
Ryan says it only boosted by 3% on average - of course he tried it at 2560 when really he should have tried it at 1080p.

First people made fun that testing at 1080p is BS for a card of this caliber. Then they speculated that, for sure, it would trail at 25x16. And now, with that out of the way too, they complain that a test was done at 25x16, the resolution of choice, remember, when it should have been done at 1080p? Are we in the bargaining phase of the 5 stages of grief?
Also, the way boost works, it's clear Nvidia has cherry-picked review samples.
But that was all just you guys. It should have been tested at 1080p when so much is being made of the 680's 1080p performance - not lauding the performance at 1080p and then benching turbo at 1600p.
If there is anyone cherry picking here it is you.
Manually setting a specific GPU clock on the GeForce GTX 680 is not possible. You can only define a certain offset that the dynamic overclocking algorithm will _try_ to respect. If the card runs into the power limit or something else comes up, the clocks will be lower than requested. Think of it more as a "best effort plzplz" value than a hard setting.
NVIDIA has defined a hard limit of +549 MHz for the clock offset, which will certainly upset some extreme overclockers with liquid nitrogen. As mentioned several times before, there is no way to turn off dynamic clocking, which means there is no way to go back to the classic overclocking method. Also, directly accessing hardware to write clocks to the clock generator won't work, as the algorithm in the driver will instantly overwrite it with what it thinks is right. The clock offset simply acts as an additional input variable for the dynamic clock algorithm, which also takes into account things like power consumption, temperature, and GPU load.
This means that no matter how hard you try using clock offsets, power limits and voltage settings, the card will always reduce clocks when it thinks it has to.
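To make the "additional input variable" point concrete, here is a minimal sketch of a clock-selection loop in that spirit. Everything in it - the function name, the thresholds, and the simple min() over proposed ceilings - is an assumption for illustration only; the actual GPU Boost algorithm lives inside NVIDIA's driver and is not documented.

```python
# Illustrative sketch only: the structure, thresholds and fallback values
# below are assumptions chosen to mirror the description above, not
# NVIDIA's actual algorithm.

BASE_CLOCK_MHZ = 1006   # GTX 680 base clock
MAX_OFFSET_MHZ = 549    # hard limit on the user-requested clock offset

def target_clock(offset_mhz, board_power_w, power_limit_w,
                 temperature_c, temp_limit_c, gpu_load):
    """Return the clock (MHz) the algorithm will *try* to run.

    The user offset is only one input; power draw, temperature and GPU
    load can all pull the result back below what was requested.
    """
    requested = BASE_CLOCK_MHZ + min(offset_mhz, MAX_OFFSET_MHZ)

    # Every constraint proposes a ceiling; the card honours the lowest one,
    # which is why an offset is a request rather than a setting.
    ceilings = [requested]
    if board_power_w >= power_limit_w:
        ceilings.append(BASE_CLOCK_MHZ)        # power limited: no boost
    if temperature_c >= temp_limit_c:
        ceilings.append(BASE_CLOCK_MHZ)        # thermally limited: no boost
    if gpu_load < 0.3:
        ceilings.append(BASE_CLOCK_MHZ // 2)   # light load: clock down

    return min(ceilings)

# A +100 MHz offset that cannot be honoured because the card is already
# sitting at its power limit:
print(target_clock(100, board_power_w=195, power_limit_w=195,
                   temperature_c=75, temp_limit_c=98, gpu_load=0.99))  # 1006
```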
The only method of discerning the clockspeed is, currently, to use EVGA's Precision tool, which provides an overlay of the frequency. The two pictures show the GTX 680 operating at 1,097MHz and 1,110MHz for Batman: Arkham City and Battlefield 3, respectively.

Looks like Hexus got lucky with their card! I wonder how many other reviewers did.
Wow, does this thing suck at compute. Now we know where Nvidia cheated. Should AMD follow suit in the future with a gaming only GPU?
Nah.

Actually, the 7970 is significantly closer to the 680 than the 6970 was to the 580 (7% deficit vs. 19%).
This again! Any clock increase AMD would be able to raise, NVIDIA could just do the same.

In fact, it seems that if AMD hadn't fubared the 7970's clocks - by Dave's admission - they might be right there.
The HD 7970's memory clocks at 5500 MHz; that is what I am referring to.

At least according to the Anand review, the memory bandwidth is the same:
256-bit × 6 GHz = 384-bit × 4 GHz
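For what it's worth, here is the arithmetic behind those products worked out in GB/s. The post doesn't say which card the 384-bit × 4 GHz side refers to; the GTX 580 is the card whose published specs actually match it, and the HD 7970 row uses the 5500 MHz figure from the post above.

```python
# Memory bandwidth from bus width and effective data rate:
#   bandwidth (GB/s) = (bus width in bits / 8) * data rate in GT/s

def mem_bandwidth_gbs(bus_width_bits, data_rate_gtps):
    return bus_width_bits / 8 * data_rate_gtps

cards = [
    ("GTX 680, 256-bit @ 6.0 GHz", 256, 6.0),
    ("GTX 580, 384-bit @ 4.0 GHz", 384, 4.0),
    ("HD 7970, 384-bit @ 5.5 GHz", 384, 5.5),
]

for name, width, rate in cards:
    print(f"{name}: {mem_bandwidth_gbs(width, rate):.0f} GB/s")

# GTX 680, 256-bit @ 6.0 GHz: 192 GB/s
# GTX 580, 384-bit @ 4.0 GHz: 192 GB/s
# HD 7970, 384-bit @ 5.5 GHz: 264 GB/s
```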
It's the average card, not the average boost clock. My point stands in any case, whichever definition it is. It's known that reviewers typically get good overclocking samples, and since Nvidia itself states that boost will vary at the individual card level, it is going to affect benchmarks.
http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/4

Accordingly, the boost clock is intended to convey what kind of clockspeeds buyers can expect to see with the average GTX 680. Specifically, the boost clock is based on the average clockspeed of the average GTX 680 that NVIDIA has seen in their labs.
Link please?
RE: Why no CUDA-Test? by Ryan Smith on Thursday, March 22, 2012
We have the data, although it's not exactly a great test (a lot of CUDA applications have no idea what to do with Kepler right now). It will be up later today.
Wow, does this thing suck at compute. Now we know where Nvidia cheated.
HAHAHA, by that reasoning, thank goodness ATI did all that "cheating" in compute during previous generations or they would have been really screwed.