Nvidia BigK GK110 Kepler Speculation Thread

Discussion in 'Architecture and Products' started by A1xLLcqAgt0qc2RyMz0y, Apr 21, 2012.

  1. Alexko

    Alexko Veteran Subscriber

  2. UniversalTruth

    UniversalTruth Veteran

  3. HKS

    HKS Newcomer

    How about reading the entire page before you post:

    AnandTech also notes that it is an OpenCL bug in the current driver. That's why Anand isn't testing with OpenCL.
     
  4. HKS

    HKS Newcomer


    Single GPU, no multi-GPU issues
    Very fast
    Quiet during gaming
    Low power consumption
    Good overclocking potential
    Sexy high-quality design
    Extremely quiet in idle
    Boost clock 2.0 adds new overclocking features
    6 GB memory
    Support for voltage control
    Up to four active outputs
    Native full-size HDMI & DisplayPort
    Adds support for Display Overclocking
    Support for CUDA and PhysX
     
  5. Dade

    Dade Newcomer

    Can we debunk this myth? The whole Kepler family lacks good performance in most FP32 GPGPU applications.

    Just try running Blender Cycles or Octane (both CUDA-only offline renderers) on a GTX 580 and then on a GTX 680.
     
  6. elect

    elect Newcomer

    OpenCL and CUDA are very similar, and they perform similarly in almost all scenarios.

    I just hope for Nvidia's sake that this turns out to be true...
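    To see how close the two really are, here's a minimal vector add in both (a hypothetical example, with the OpenCL version sketched in the comments):

        // CUDA kernel: element-wise vector addition.
        __global__ void vadd(const float* a, const float* b, float* c, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                c[i] = a[i] + b[i];
        }

        // The OpenCL version is nearly line-for-line identical:
        // __kernel void vadd(__global const float* a, __global const float* b,
        //                    __global float* c, int n)
        // {
        //     int i = get_global_id(0);
        //     if (i < n)
        //         c[i] = a[i] + b[i];
        // }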
     
  7. UniversalTruth

    UniversalTruth Veteran

    It's a joke. :lol:
    All of this is true for almost every card out there on the market.
     
  8. jimbo75

    jimbo75 Veteran

    I guess there are still driver issues in a few games, but I think it goes both ways. I notice AMD still lags badly in Shogun 2 for some reason. But yeah, ~30% seems about right, or at least it will be once the drivers are sorted.

    Very concerned about these boost shenanigans, though. Cold cards adding 10% or so performance is just not on, because they won't be cold while gaming. I really believe the tech press needs to look at this HARD.
     
  9. CarstenS

    CarstenS Legend Subscriber

    Last edited by a moderator: Feb 21, 2013
  10. Alexko

    Alexko Veteran Subscriber

    The problem with Kepler is that it doesn't have a lot of register space and cache relative to its compute resources, so it tends to choke on somewhat complex workloads.

    CUDA lets you tap into the TMU path (the texture cache) to alleviate this problem, but as far as I'm aware, OpenCL doesn't. Even in CUDA it requires a bit of extra effort, and typical CUDA applications don't necessarily do it.
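    For what it's worth, here is a minimal sketch of what "tapping the TMU" looks like on GK110: the __ldg() intrinsic routes read-only loads through the 48 KB texture/read-only cache instead of L1. The kernel and its names are hypothetical, and __ldg() needs compute capability 3.5, so on GK104 you'd have to go through explicit textures instead:

        // Hypothetical kernel: a small read-only coefficient table is
        // fetched through the texture (read-only data) cache via __ldg(),
        // easing pressure on L1/shared memory. Requires sm_35 (GK110).
        __global__ void scale(const float* __restrict__ coeff,
                              const float* __restrict__ in,
                              float* out, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                out[i] = __ldg(&coeff[i % 16]) * in[i];  // TMU-path load
        }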
     
  11. elect

    elect Newcomer

  12. Blazkowicz

    Blazkowicz Legend

    The aggressive power and temperature throttling is a surprise, and it makes sense: the default is not noisy and there's not too much heat to get rid of. You can install it and forget about it if you wish.

    Value for the price is dubious, of course (give me a vanilla GTX 660 and I'll be happy), unless you're a PhD student who badly needs such a new toy: 6 GB of CUDA-addressable memory with 1.5 teraflops of DP.

    Deal with it! It's hardware you can't afford, so pretend it's not there and you'll be fine.

    I'd compare it to the iPhone with maximum flash storage, at 900 euros (taxes included). Rofl! That computer-phone is terribly overpriced, isn't it?
     
  13. CarstenS

    CarstenS Legend Subscriber

    Ever heard of something called overclocking? ;)
     
  14. DSC

    DSC Banned

  15. elect

    elect Newcomer

    I agree regarding the caches; I don't regarding the registers.

    http://www.realworldtech.com/kepler-brief/

    You can use the TMU only under special conditions.
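    Right: roughly, the data has to stay read-only for the kernel's lifetime and be bound through the texture machinery. A hedged host-side sketch with CUDA texture objects (CUDA 5.0+ on Kepler; d_buf and n are hypothetical, error checks omitted):

        // Bind a plain linear device buffer to a texture object so a
        // kernel can read it through the TMU path.
        cudaResourceDesc res = {};
        res.resType = cudaResourceTypeLinear;
        res.res.linear.devPtr = d_buf;                         // device buffer
        res.res.linear.desc = cudaCreateChannelDesc<float>();
        res.res.linear.sizeInBytes = n * sizeof(float);

        cudaTextureDesc tex = {};
        tex.readMode = cudaReadModeElementType;

        cudaTextureObject_t texObj = 0;
        cudaCreateTextureObject(&texObj, &res, &tex, nullptr);
        // In the kernel: float v = tex1Dfetch<float>(texObj, i);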
     
  16. lanek

    lanek Veteran

    Anyway, is there any place to get this OCL benchmark? I can't find it, or find only bad/removed download links (even via Inpei).
     
  17. elect

    elect Newcomer

    Ah, I see :grin:

    I had skipped past that spot before; this time I saw it all, very nice :wink:
     
  18. Alexko

    Alexko Veteran Subscriber

    Why don't you agree?


    Register file size (KB) per SP flop per clock, for each SM(X) or CU:

    GF100: 128/64 = 2
    GF104: 128/96 = 1.33
    GK104: 256/384 = 0.67

    GCN: 256/128 = 2
     
  19. LittleJ

    LittleJ Newcomer


    Boost 2.0 is probably somewhat buggy right now, since it works based on temperature and less on power consumption than the GTX 680's Boost 1.0 did. As you said, wait till this is sorted via drivers. There may be no deliberate shenanigans of the kind you seem to be implying....
     
  20. Alexko

    Alexko Veteran Subscriber

    If it's based on temperature, lowering clocks when the card gets hot (and raising them when it's cold) doesn't seem like a bug to me; it seems like a feature doing exactly what it's supposed to do.
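    In pseudo-driver terms, the behaviour described is just a thermostat-style control loop. A toy sketch, not NVIDIA's actual algorithm (all helper names are made up; the 80 °C figure matches Boost 2.0's advertised default target):

        // Toy temperature-target boost loop: step clocks up while the die
        // is cool, step them back down as it approaches the target.
        const int kTargetC = 80;                 // Boost 2.0 default target
        int clock_mhz = base_clock_mhz;          // hypothetical baseline

        while (running) {
            int t = read_die_temperature_c();    // hypothetical sensor read
            if (t < kTargetC && clock_mhz < max_boost_mhz)
                clock_mhz += bin_mhz;            // cold card: take the headroom
            else if (t >= kTargetC && clock_mhz > base_clock_mhz)
                clock_mhz -= bin_mhz;            // hot card: back off
            apply_core_clock(clock_mhz);         // hypothetical driver call
            sleep_ms(100);
        }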
     