Nvidia BigK GK110 Kepler Speculation Thread

Really expensive!
No backplate
Voltage control very limited
Power limit can only be adjusted by +6%
Boost 2.0 adds more complexity to overclocking


http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_Titan/35.html

Single GPU, no multi-GPU issues
Very fast
Quiet during gaming
Low power consumption
Good overclocking potential
Sexy high-quality design
Extremely quiet in idle
Boost clock 2.0 adds new overclocking features
6 GB memory
Support for voltage control
Up to four active outputs
Native full-size HDMI & DisplayPort
Adds support for Display Overclocking
Support for CUDA and PhysX
 
NVIDIA pays little attention to OpenCL, and there's no reason Titan should change that.

CUDA is the target.

Can we debunk this myth? The whole Kepler family lacks good performance in most FP32 GPGPU applications.

Just try to run Blender Cycles or Octane (both CUDA-only off-line renderers) on a GTX 580 and then on a GTX 680.
 

Single GPU, no multi-GPU issues
Very fast
Quiet during gaming
Low power consumption
Good overclocking potential
Sexy high-quality design
Extremely quiet in idle
Boost clock 2.0 adds new overclocking features
6 GB memory
Support for voltage control
Up to four active outputs
Native full-size HDMI & DisplayPort

Adds support for Display Overclocking
Support for CUDA and PhysX

It's a joke. :LOL:
All of these are true for almost every card out there on the market.
 
I guess there are driver issues in a few games still but I think it goes both ways. I notice AMD still lags badly in Shogun II for some reason. But yeah ~30% seems about right, or at least it'll be that once the drivers are sorted.

Very concerned about this boost shenanigans though. Cold cards adding 10% or so performance is just not on, because they won't be cold while gaming. I really believe this needs looking at HARD by the tech press.
 
OpenCL and CUDA are very, very similar, and they perform similarly in almost all scenarios.

The problem with Kepler is that it doesn't have a lot of register space and cache vs. its computing resources, so it tends to choke on somewhat complex workloads.

CUDA lets you tap into the TMU memory to alleviate this problem, but as far as I'm aware, OpenCL doesn't. Even so, it requires a bit of extra effort, and typical CUDA applications don't necessarily do it.
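
For what it's worth, here's a minimal sketch of what "tapping into the TMU memory" can look like in CUDA on GK110 (sm_35); the kernel and its parameters are made up purely for illustration:

__global__ void scale_by_k(const float* __restrict__ in, float* out, float k, int n)
{
    // Marking the input const __restrict__ (or calling __ldg explicitly) lets the
    // compiler route these loads through the read-only texture-path cache instead
    // of hammering L1 and registers alone. On GK104 (sm_30) you'd have to go
    // through texture objects instead, which is the "extra effort" mentioned above.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = k * __ldg(&in[i]);
}

Nothing magic, but it shows why it doesn't happen for free: the programmer has to opt in per pointer or per load.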
 
The aggressive power and temperature throttling is a surprise, but it makes sense: the defaults keep the card quiet and there's not too much heat to get rid of. You can install it and forget about it if you wish.

Value for the price is dubious of course (give me a vanilla GTX 660 and I'll be happy), unless you're a PhD who badly needs such a new toy: 6 GB, CUDA, and 1.5 teraflops of DP.

Deal with it! It's hardware you can't afford, so pretend it's not there and you will be fine.

I might compare it to the iPhone with maximum flash storage, at 900 euros (taxes included). Rofl!
That computer-phone is terribly overpriced, isn't it.
 
The problem with Kepler is that it doesn't have a lot of register space and cache vs. its computing resources, so it tends to choke on somewhat complex workloads.

CUDA lets you tap into the TMU memory to alleviate this problem, but as far as I'm aware, OpenCL doesn't. Even so, it requires a bit of extra effort, and typical CUDA applications don't necessarily do it.

I agree regarding the caches; I don't regarding the registers.

http://www.realworldtech.com/kepler-brief/

You can use TMU only under special conditions.
 
I agree regarding the caches; I don't regarding the registers.

http://www.realworldtech.com/kepler-brief/

Why don't you agree?

[Image: brief-kepler-1.png, from the RealWorldTech Kepler brief linked above]


Register file size (KB) per SP flop per clock, for each SM(X) or CU:

GF100: 128/64 = 2
GF104: 128/96 = 1.33
GK104: 256/384 = 0.67

GCN: 256/128 = 2
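
If anyone wants to sanity-check those ratios, this is the arithmetic I'm assuming (register file per SM(X)/CU in KB, divided by FP32 lanes x 2 flops per clock; the per-SM register counts are my own assumptions, not from the linked article):

#include <cstdio>

int main()
{
    // name, register file per SM(X)/CU in KB, FP32 lanes per SM(X)/CU
    struct Chip { const char* name; int regfile_kb; int fp32_lanes; };
    Chip chips[] = {
        { "GF100", 128,  32 },   // 32768 x 32-bit regs, 32 cores per SM
        { "GF104", 128,  48 },   // 32768 regs, 48 cores per SM
        { "GK104", 256, 192 },   // 65536 regs, 192 cores per SMX
        { "GCN",   256,  64 },   // 4 x 64 KB vector register files per CU
    };
    for (int i = 0; i < 4; ++i)
        printf("%s: %d/%d = %.2f\n", chips[i].name, chips[i].regfile_kb,
               chips[i].fp32_lanes * 2,
               (double)chips[i].regfile_kb / (chips[i].fp32_lanes * 2));
    return 0;
}

which prints the same 2 / 1.33 / 0.67 / 2 numbers as above.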
 
I guess there are driver issues in a few games still but I think it goes both ways. I notice AMD still lags badly in Shogun II for some reason. But yeah ~30% seems about right, or at least it'll be that once the drivers are sorted.

Very concerned about this boost shenanigans though. Cold cards adding 10% or so performance is just not on, because they won't be cold while gaming. I really believe this needs looking at HARD by the tech press.


Boost 2.0 is probably somewhat bugged right now; it works based off temperature and less off power consumption than the GTX 680's Boost 1.0 did. As you said, wait till this is sorted via drivers. There may be no deliberate shenanigans of the kind you seem to be implying....
 
Boost 2.0 is probably somewhat bugged right now; it works based off temperature and less off power consumption than the GTX 680's Boost 1.0 did. As you said, wait till this is sorted via drivers. There may be no deliberate shenanigans of the kind you seem to be implying....

If it's based on temperature, lowering clocks when the card gets hot (and raising them when it's cold) doesn't seem like a bug to me; it seems like the feature doing exactly what it's supposed to do.
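
Right, a temperature target basically behaves like this (a toy illustration only, not NVIDIA's actual Boost 2.0 logic; the 80 °C target and 13 MHz step are placeholder numbers):

// Toy controller: a cold card steps up until it reaches the temperature target,
// a heat-soaked card steps back down -- so short "cold" benchmark runs will
// naturally report higher clocks than a long gaming session.
int next_clock_mhz(int clock_mhz, float temp_c)
{
    const float target_c = 80.0f;   // placeholder temperature target
    const int   step_mhz = 13;      // placeholder clock bin
    if (temp_c < target_c - 2.0f) return clock_mhz + step_mhz;
    if (temp_c > target_c + 2.0f) return clock_mhz - step_mhz;
    return clock_mhz;
}

Which is exactly why reviewers need to heat-soak the card before benchmarking rather than call the behaviour a bug.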
 