NVIDIA Kepler speculation thread

itsmydamnation said:
everyone seems fixated on comparing GK104 to Tahiti, but if you stuck 6 Gbps memory on Pitcairn and upped its clock, GK104 and Tahiti both wouldn't look that great, as a GPU with a ~210 mm² die would be in spitting distance.
If you stuck 6 Gbps memory on the Pitcairn PCB, you probably still wouldn't be able to up the clock, because the MC is likely not sized for those clocks...

So you add area to fix that and... you'll end up exactly where a scaled down GK104 or stripped 7970 would end up?
 
The particular hd7970 benchmarked Batman AC screenshot you are referring to
I was referring to your post, but it looks like your "130" was a typo. Checking the link shows 103, which is indeed slower than 112. Makes sense now.

Anyway, if the 680 is faster than a 580, I can buy it being faster than a 7970 in Batmang judging from TR's FRAPSed run-through (19x12, 4xAA, "very high," DX11 shows a 580 ~= a 7970).
 
If you stuck 6 Gbps memory on the Pitcairn PCB, you probably still wouldn't be able to up the clock, because the MC is likely not sized for those clocks...
Many review sites actually achieved memory speeds >1500 MHz when overclocking the HD7870... so the memory controller is certainly up to the task.
 
Many review sites actually achieved memory speeds >1500 MHz when overclocking the HD7870... so the memory controller is certainly up to the task.


1500 MHz isn't really an achievement; that is the rated speed of the GDDR5 used, even if it is set lower (on the reference card, anyway).

Chipworks
Surrounding the graphics chip are 12 256 MB GDDR5 chips, for a total of 3 GB of graphics RAM. The part number is the Hynix H5GQ2H24MFR GDDR5, which is a 2 Gb device rated at 6 Gbps at 1.6 V. Twelve of them are used to give a 384-bit memory bus and memory bandwidth of 264 GB/s. The x-ray shows that they are single-die 2 Gb chips, as opposed to 2 x 1 Gb, which would have been common a few months ago.
Use CCC Overdrive, don't touch any voltage, set it to 1500, no problem.
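
For what it's worth, the rated speed and the bandwidth figure in the Chipworks quote are easy to reconcile. Here is a minimal sketch of the arithmetic in Python (the 1375 MHz reference memory clock for the HD7970 is my own figure, not something stated in the quote):

Code:
# GDDR5 transfers 4 bits per pin per command-clock cycle, so a 1500 MHz
# memory clock corresponds to the chips' rated 6 Gbps data rate.
def gddr5_bandwidth_gb_s(mem_clock_mhz, bus_width_bits):
    data_rate_gbps = mem_clock_mhz * 4 / 1000   # per-pin data rate in Gbps
    return data_rate_gbps * bus_width_bits / 8  # bytes per second across the whole bus

print(gddr5_bandwidth_gb_s(1500, 384))  # 288.0 GB/s at the rated 6 Gbps
print(gddr5_bandwidth_gb_s(1375, 384))  # 264.0 GB/s at the HD7970's stock memory clock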


Quick question:
Why, then, does this "AMD Optimized" setting exist at all (as the driver default)? As far as I can see, the only thing it has accomplished is raising concerns about cheating.


So far, nothing; there is no profile associated with any game or benchmark yet. But I believe what it means is that it can set the tessellation factor artificially (so you can't use a score if it does that).

Now, the result given by this guy differs completely from what we had previously seen from HKEPC in Unigine. Maybe they were just using 2.5 (Unigine 3.0 was released only a few days ago; as always, a new version is released when a new Nvidia card is released).
 
From what I've found out today in Korea (can't say the source),
Performance-wise, the average difference between the GTX680 and the 7970 is very similar to (if not the same as) the difference between the GTX580 and the 6970.

Well, since GTX580 was considered to be roughly 15% faster than 6970, it looks like nVIDIA in this generation (GTX680) can be considered 15% faster than AMD's counterpart (7970).

And TDP of GTX680 seems to be between 6970 and 6950.

To me, nVIDIA looks like a clear winner this time round: almost the same performance lead (~15%) as in the previous generation (GTX580 vs 6970) *and* a better TDP.
We'll find out soon enough (the 22nd).
 
Lol. So you want a tilted playing field? Unchoke one and leave the other choked? Why not disable AF or AA or simply lower the resolution for one?

I think the point is to have the settings at a reasonable level that isn't choking all cards (some more than others) for no discernible IQ gain.
 
So? Any reasonable reviewer won't bench these games/synthetics, then. The benchmark may be excluded, that is completely ok. And even if he/she uses them, they will stand on their own, not distorting any kind of performance rating. But to change the workload and then do benchmarks is cheating, that's a fact.

And what about the reviewers that aren't "reasonable"? AMD Optimized exists due to unrealistic tessellation loads that provide no IQ gains but choke graphics cards. There is only one company cheating here, and it's the one who buys benchmarks like HAWX 2 and Crysis 2.
 
I think the point is to have the settings at a reasonable level that isn't choking all cards (some more than others) for no discernible IQ gain.

Basically, if you want to use a benchmark, you use the same settings for all.

Unigine isn't representative of anything anyway, and aside from updating the benchmark each time a new card is released, it still has some of the bugs we found back in 1.0 (like that streaming problem).


Well, so far we have three numbers:
- The HKEPC ones, with a small difference between the two (the only problem is that they may just have been using Unigine 2.5; 46 fps at 1920x1080).
- The score posted yesterday, which used Unigine 3.0.
- The third one, which had the problem with the clocks and the i3.

So before deciding which numbers to believe, I will wait for the full reviews. If the 80 fps figure is true, nice (nearly 2x a GTX580...).
 
I think the point is to have the settings at a reasonable level that isn't choking all cards (some more than others) for no discernible IQ gain.

I understand, but to "optimize" one and not the other is hardly "benchmarking".
The point of benchmarking is behavior under *identical* conditions. If I ran a mathematical benchmark on one computer using 16-bit values and on another using 64-bit values, the results might be indistinguishable, but it wouldn't be a reasonable benchmark.
 
And what about the reviewers that aren't "reasonable"? AMD Optimized exists due to unrealistic tessellation loads that provide no IQ gains but choke graphics cards. There is only one company cheating here, and it's the one who buys benchmarks like HAWX 2 and Crysis 2.

No one absolves the customer from switching on his grey matter and putting these results into perspective. Comparing under different workloads is cheating, plain and simple. You can say what you want; that's the way it is.
Anyway, this was not even my point. My point was: why ship this slider at that default if it doesn't do anything at all? That is just silly. There was no such slider with NV-optimized profiles when they sucked at 8xMSAA, which also provided little to no quality gain. Either the driver does what the application tells it to, or it allows optional tuning of the setting, like the AF and AA sliders.

Take these benchmarks as what they are: synthetics. I imagine they can still be useful to compare an older NV card with a newer NV card like Kepler and Fermi.
 
From what I've found out today in Korea (can't say the source),
Performance-wise, the average difference between the GTX680 and the 7970 is very similar to (if not the same as) the difference between the GTX580 and the 6970.

Well, since GTX580 was considered to be roughly 15% faster than 6970, it looks like nVIDIA in this generation (GTX680) can be considered 15% faster than AMD's counterpart (7970).

And TDP of GTX680 seems to be between 6970 and 6950.

To me, nVIDIA looks like a clear winner this time round: almost the same performance lead (~15%) as in the previous generation (GTX580 vs 6970) *and* a better TDP.
We'll find out soon enough (the 22nd).

If NVIDIA doesn't have anything like ZeroCore, I would argue that their power advantage under load essentially vanishes, except maybe for people who only turn their computer on for gaming.
 
I see your point. Well, let's say the "AMD Optimized" option should have been set up differently; basically, you have three positions:

"AMD Optimized" (the same as the application's default), "use application settings", or, if you set it to override the application's settings, the slider is enabled and you can choose your tessellation level.

I haven't really played with it, because so far I have never played any game where tessellation would be a problem on my system.
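
To make those positions concrete, here is a purely illustrative sketch in Python (my own, not AMD's actual driver logic or any real API) of what a driver-side tessellation cap like the override slider conceptually does: the application still requests whatever factor it wants, but the value that reaches the hardware is clamped to the user-selected maximum.

Code:
from typing import Optional

def effective_tess_factor(app_requested: float, driver_cap: Optional[float]) -> float:
    """Illustrative only: driver_cap=None models "use application settings";
    a number models the override slider's maximum tessellation factor."""
    if driver_cap is None:
        return app_requested               # pass the application's request through
    return min(app_requested, driver_cap)  # otherwise clamp to the slider value

# e.g. a game requesting factor 64 with the slider capped at 16:
print(effective_tess_factor(64, 16))    # 16
print(effective_tess_factor(64, None))  # 64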
 