NVIDIA Kepler speculation thread

jimbo75 said:
Ryan says it only boosted by 3% on average - of course he tried it at 2560 when really he should have tried it at 1080p.
First people made fun that testing at 1080p is BS for a card of this caliber. Then they speculated that, for sure, it would trail at 25x16. And now with that out of the way too, they complain that a test was done at 25x16, the resolution of choice remember, when it should have been done at 1080p? Are we in the bargaining phase of the 5 stages of grief?
 

It should have been tested at 1080p when so much is being made of the 680's 1080p performance - not lauded at 1080p and then benched for turbo at 1600p.

He should also have run power draw, heat and noise benchmarks on one of the games Nvidia won instead of one of the very few that AMD won (Metro).
 
If there is anyone cherry-picking here, it is you.

I'm cherry-picking by pointing out that it really seems to have fallen off a cliff in compute? Sorry, the benchmarks don't lie. In that test it gets creamed by a 580, never mind the 7970.

It seems that when Trinibwoy reveled in AMD taking the compute hit, he was only half right. Nvidia did the reverse: they un-took the compute hit.

Also, overclocking is weird on this thing.

Manually setting a specific GPU clock on the GeForce GTX 680 is not possible. You can only define a certain offset that the dynamic overclocking algorithm will _try_ to respect. If the card runs into the power limit or something else comes up, the clocks will be lower than requested. Think of it more as a "best effort plzplz" value than a hard setting.

NVIDIA has defined a hard limit of +549 MHz for the clock offset, which will certainly upset some extreme overclockers with liquid nitrogen. As mentioned several times before, there is no way to turn off dynamic clocking, which means there is no way to go back to the classic overclocking method. Directly accessing the hardware to write clocks to the clock generator won't work either, as the algorithm in the driver will instantly overwrite them with what it thinks is right. The clock offset simply acts as an additional input variable for the dynamic clock algorithm, which also takes into account things like power consumption, temperature and GPU load.

This means that no matter how hard you try using clock offsets, power limits and voltage settings, the card will always reduce clocks when it thinks it has to.
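
To make the "offset is just another input" point concrete, here is a toy sketch in Python - purely an illustration with made-up numbers, thresholds and step sizes, not NVIDIA's actual controller - of why a requested offset is a best-effort value rather than a hard setting:

[code]
# Toy model only: the requested offset is merely one input to the boost
# controller, alongside power, temperature and load. All limits below are
# invented for illustration.

BASE_CLOCK_MHZ = 1006     # GTX 680 base clock
MAX_OFFSET_MHZ = 549      # hard offset limit mentioned in the review
POWER_LIMIT_W = 195       # hypothetical board power limit
TEMP_LIMIT_C = 98         # hypothetical throttle temperature

def effective_clock(requested_offset_mhz, board_power_w, gpu_temp_c, gpu_load):
    """Return the clock the controller actually applies (best effort)."""
    offset = min(requested_offset_mhz, MAX_OFFSET_MHZ)
    target = BASE_CLOCK_MHZ + offset

    # Step the target back down whenever a constraint would be violated,
    # regardless of what the user asked for.
    while target > BASE_CLOCK_MHZ:
        over_power = board_power_w * (target / BASE_CLOCK_MHZ) > POWER_LIMIT_W
        over_temp = gpu_temp_c > TEMP_LIMIT_C
        if not (over_power or over_temp):
            break
        target -= 13          # step down one (hypothetical) boost bin

    # Under light load the card may not bother boosting at all.
    if gpu_load < 0.3:
        target = BASE_CLOCK_MHZ

    return target

# Example: asking for +200 MHz does not guarantee getting it.
print(effective_clock(200, board_power_w=180, gpu_temp_c=80, gpu_load=0.95))
[/code]

In this toy model a +200 MHz request still gets clamped whenever the power or temperature checks fail, which is exactly the behaviour the review describes.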

Hardcore overclockers are such a huge part of the enthusiast market that, hell, I've sometimes wondered if it isn't best to artificially limit high-end part clocks a little bit so that the OC'ers can feel smug about their great OCing results.

This overclocking weirdness with Kepler, I'm not sure how it's going to play, but I suspect it might be a negative with that crowd. Regardless, their love for Nvidia will probably overcome it to some extent. They'll likely grin and bear it for Nvidia's sake, even if they don't like it.
 
http://hexus.net/tech/reviews/graphics/36509-nvidia-geforce-gtx-680-2gb-graphics-card/?page=4

The only method of discerning the clockspeed is, currently, to use EVGA's Precision tool, which provides an overlay of the frequency. The two pictures show the GTX 680 operating at 1,097MHz and 1,110MHz for Batman: Arkham City and Battlefield 3, respectively.
Looks like Hexus got lucky with their card! I wonder how many other reviewers did. :p

Average 1097 - http://translate.googleusercontent....e.html&usg=ALkJrhjWLuwQUkB2NNH7HwOmqGZsAYb_dA
 
GPU boost obviously leaves the door open to benchmark shenanigans and weird, inconsistent results from one review to another. It's basically an out-of-the-box overclocked card, only the OC is range-based.

And it seems Nvidia gave up compute for gaming performance.
 
Wow, does this thing suck at compute. Now we know where Nvidia cheated. Should AMD follow suit in the future with a gaming-only GPU?

It's not necessarily cheating that the 680 fares worse in various compute scenarios, particularly since it still beats AMD in others.
What we do see is a de-emphasis on all areas of compute and less consistency, which can be a valid tradeoff for a target market that is not dominated by compute loads.
One of the few areas of interest for most consumers is now handled by a hardware encoder.

Lesser capability needs to be weighed against the relative significance of the feature in the market where the card is sold.
That's not to say that this couldn't hurt in the future, since some games do make better use of compute shaders.
However, the rest of the architecture on the graphics side is so strong that it can frequently more than compensate.
 
Actually the 7970 is significantly closer to the 680 than the 6970 was to the 580. (7% deficit vs 19%)
nah
http://www.hardwarecanucks.com/foru...616-nvidia-geforce-gtx-680-2gb-review-29.html

In fact it seems that if AMD hadn't fubared the 7970's clocks, by Dave's admission :p, they might be right there.
This again! :rolleyes: Any clock increase AMD could manage, NVIDIA could just match.

At least according to the Anand review, the memory bandwidth is the same:

256-bit × 6 GHz = 384-bit × 4 GHz
The HD 7970's memory clocks at 5500 MHz; that's what I am referring to.
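
For reference, a quick back-of-the-envelope check on the numbers being compared (spec-sheet figures may round slightly differently):

[code]
# Memory bandwidth = bus width (bits) x effective memory clock (Gbps) / 8 bits per byte
def bandwidth_gbs(bus_bits, effective_gbps):
    return bus_bits * effective_gbps / 8

print(bandwidth_gbs(256, 6.0))   # GTX 680: 256-bit @ 6 Gbps        -> 192 GB/s
print(bandwidth_gbs(384, 5.5))   # HD 7970: 384-bit @ 5.5 Gbps      -> 264 GB/s
print(bandwidth_gbs(384, 4.0))   # the 384-bit @ 4 GHz from the post -> 192 GB/s
[/code]

So the "256 × 6 = 384 × 4" equality holds, but since the 7970's memory actually runs at 5.5 Gbps the two cards' bandwidth is not the same.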
 
It's the average card, not the average boost clock. My point stands in any case, whichever definition it is. It's known that reviewers typically get good overclocking samples, and since Nvidia themselves state that boost will vary at the individual card level, it is going to affect benchmarks.

Accordingly, the boost clock is intended to convey what kind of clockspeeds buyers can expect to see with the average GTX 680. Specifically, the boost clock is based on the average clockspeed of the average GTX 680 that NVIDIA has seen in their labs.
http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/4
 
Link please?

Comments under the review.

Edit: Comment from Ryan:
RE: Why no CUDA-Test? by Ryan Smith on Thursday, March 22, 2012
We have the data, although it's not exactly a great test (a lot of CUDA applications have no idea what to do with Kepler right now). It will be up later today.
 
HAHAHA, by that reasoning thank goodness ATI did all that "cheating" in compute during previous generations, or they would have been really screwed.

Lol I'm glad somebody said it. The coo-coo train is really rolling now. The fact that it doesn't make toast is probably cheating too.

Jawed was right about the greater compiler dependency though. He probably saw the white paper before starting that little diatribe :LOL: In any case it's obvious nVidia's static scheduling needs some work. It's only dual issue, dammit - how hard can that be? AMD had to deal with 2.5x that.
 
The NVIDIA card appears to suck badly in the original Crysis and Crysis Warhead; it is hardly faster than the GTX 580! I suspect this could improve with future driver updates.

Performance in Crysis 2 DX11 seems superior to the competition though, and that is mainly because of the tessellation, I think - without it we are looking at a tie.
 