Nvidia BigK GK110 Kepler Speculation Thread

Discussion in 'Architecture and Products' started by A1xLLcqAgt0qc2RyMz0y, Apr 21, 2012.

  1. tunafish

    Regular

    Joined:
    Aug 19, 2011
    Messages:
    476
    The professional segment is no longer just "high-margin", it's also high-revenue. Last quarter, the "Professional Solutions" business unit (Quadros and Teslas) brought in $221M, while all the rest of their GPU products brought in $621M. There's no way, no way at all, that the GF100-based products were a third of their consumer sales. I'd wager that all the GTX models together aren't a third of their consumer sales. The consumer GPUs really aren't needed to support the professional business anymore.
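    The split quoted above invites a quick back-of-the-envelope check. A minimal sketch, using only the two quarterly figures cited in the post (the percentage is just derived arithmetic, not a reported number):

```python
# Sanity check of the revenue split quoted above
# ($221M professional vs. $621M for all other GPU products).
professional = 221e6  # "Professional Solutions" (Quadro + Tesla)
consumer = 621e6      # all the rest of their GPU products

total = professional + consumer
pro_share = professional / total

print(f"Professional share of total GPU revenue: {pro_share:.1%}")  # ~26.2%
```

    So the professional unit was roughly a quarter of total GPU revenue that quarter, on presumably far fewer units, which is the "high-revenue, high-margin" point being made.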
     
  2. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    12,711
    In terms of unit volume, I wouldn't be surprised if the 580/570 still sold more units than their Quadro and Tesla variants.

    However, in terms of revenue I wouldn't be surprised if the Quadro and Tesla variants brought in more revenue.

    The consumer side still serves as a good avenue to bleed off excess supply (more wafers = cheaper per wafer costs I'd imagine) as well as a good area to dump salvage chips. And additional revenue even at significantly lower margins is still additional revenue. But it probably is true that they don't absolutely need to sell their big die GPUs in the consumer market to make a profit anymore.

    I suspect that's going to be the main function of the consumer version of GK110 (assuming there is one), as I wouldn't be at all surprised if GK110 were only marginally faster than GK114 (assuming there's a GK114) in the majority of gaming workloads. All pure speculation, of course. :) It may or may not end up like this.

    Regards,
    SB
     
  3. Gipsel

    Veteran

    Joined:
    Jan 4, 2010
    Messages:
    1,510
    Location:
    Hamburg, Germany
    IIRC most of that revenue comes from the Quadro line. And you should have a look at what kinds of GPUs are sold there: it's quite a bit more than just the top-of-the-line model (they sell every crappy, slow GPU there too; they're just more expensive than the consumer versions). So I doubt that more than a third of the Professional Solutions revenue actually comes from GF100/110.
     
    #63 Gipsel, May 3, 2012
    Last edited by a moderator: May 3, 2012
  4. swaaye

    swaaye Entirely Suboptimal
    Legend

    Joined:
    Mar 15, 2003
    Messages:
    7,870
    Location:
    WI, USA
    So perhaps BigK is going to be heavily slanted towards GPGPU then? That would be interesting to see...
     
  5. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    12,774
    Fermi was, wasn't it?
     
  6. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    9,237
    Location:
    La-la land
    Considering the brutal specs that are being bandied about, I think that's very much a given.

    And yes, it will be very interesting to see...! :) I myself would like to see proper preemptive task switching support on GPUs. I don't know how much extra hardware that would require; probably quite a lot, since no moves in that direction have been made so far. But it would be quite helpful in a lot of situations, if GPGPU is to move out of the fringe rut it's been stuck in so far.
     
  7. 3dilettante

    Legend

    Joined:
    Sep 15, 2003
    Messages:
    6,749
    Location:
    Well within 3d
    I thought the general directions for GCN and Nvidia's compute architectures had preemption planned as one of the next steps, if not buried somewhere in current hardware.
     
  8. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,124
    Location:
    Switzerland
    For Nvidia this is the case, but they are in a bad position with their 28nm chips (not their cards; the GK104 is excellent). For some unknown reason, it seems they have not been able to transition from Fermi to Kepler in this "first" (I insist on "first") generation of 28nm...

    Let's look at the facts: the GK110 will be released in place of the initially planned Kepler GK100... the GK104 has been pushed to its limits to serve as the flagship... The good parts of Kepler have been pushed in marketing like crazy, asking us to forget the bad.

    The GK110 will appear, but after Nvidia's compute conference it is fairly clear it is not especially (or maybe not at all) aimed at being a "mainstream" flagship (understand: a new flagship among gaming cards).

    This will likely lead to the birth of a hybrid card derived from GK110 at the end of the year... something in between.

    I followed the conference, and... nothing. I imagined that, as with the GCN conference in June 2011, we would get some solid info on what this so-called "BigK" architecture is... nothing came. We were flooded with future possibilities, but nothing solid.
     
    #68 lanek, May 5, 2012
    Last edited by a moderator: May 5, 2012
  9. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Location:
    /
    IIRC, for AMD compute preemption is planned for 2013 and graphics preemption is planned for 2014.
     
  10. Blazkowicz

    Legend Veteran

    Joined:
    Dec 24, 2004
    Messages:
    5,359
    Thanks. I was going to point out that they aim for relatively high volume, just like other vendors that build big, non-consumer chips: POWER7, SPARC, etc.

    I can see even African universities buying BigK cards. Just stuff one card in a cheap PC with 32GB of memory, and add another card later to upgrade. It's easy.

    Though I have to wonder which is easier to program for, or able to run more simulations and such: GPGPU or a small cluster?

    For about the price of a Quadro 6000, you could in the near future run four consumer Haswell PCs, connected to each other with cheap dedicated 10Gb RJ45 Ethernet (one four-port card in each PC).


    Great, I've been wanting this to happen. I would like to run multiplayer gaming for at least four players from a single GPU :), with multi-seat and thin clients.
    I have a feeling this might require FireGL/Quadro drivers and Windows Server, though; Linux hacking will get there, but with lower performance.
     
  11. 3dcgi

    Veteran Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    2,335
    Why not just use 4 lower cost GPUs if that is your use case?
     
  12. denev2004

    Newcomer

    Joined:
    Apr 28, 2010
    Messages:
    143
    Location:
    China
    I guess it's probably a driver problem. If you put them in a single computer, then you'd better just use one or two GPUs.
     
  13. Blazkowicz

    Legend Veteran

    Joined:
    Dec 24, 2004
    Messages:
    5,359
    Laziness, and it isn't really guaranteed to work for now. Nowadays you can even use a real graphics card from a virtual machine, on Intel server hardware or on AMD server or consumer hardware, but from what I've read it breaks down beyond one card. Or some people can't get their storage card recognized, or have to apply a patch if they run an AMD card, etc.
    The tech, or at least the software and support, is still in its infancy. I'm waiting it out a bit; I want to tinker with this, but I'm sure I'd hit quirks and hurdles that would drive me totally mad. :oops:
     
  14. Blazkowicz

    Legend Veteran

    Joined:
    Dec 24, 2004
    Messages:
    5,359
    I'm pretty sure it will be a Quadro feature.
    Nowadays you can run one instance of Windows on one big PC and have it drive forty $100 thin-client terminals; the tech is fully included in XP Pro and 7 Pro, but you need a Server edition of Windows with a lot of licensing.

    So yeah, one BigK running the show for the whole household, beamed to desktops, tablets, laptops... you get what I mean :) You could even put it in the attic or basement, or just in the most badass desktop alongside another GPU. A private OnLive [strike]cloud[/strike] network.
    I'm not sure they will give that away and lose revenue on more, smaller consumer GPUs.

    Well, if someday I have a big household with a nice income and many kids, I'll be tempted to do this and pay the big Microsoft and Quadro licenses, just to make a point.
     
  15. Psycho

    Regular Subscriber

    Joined:
    Jun 7, 2008
    Messages:
    711
    Location:
    Copenhagen
    Why would you need preemption for that?
    Our rendering servers deliver the highest throughput with 2 instances (of our application, i.e. 1 D3D9 context per instance) per graphics card, and adding another card with its own load barely affects performance (i.e. it scales nicely).
    This is Windows 7 (for the particular machine with 2 cards, but 2008 R2 works the same) and Radeons.
    With GeForces, things really slow down with several contexts per card, though (the CPU load is much higher, and the drivers break down after a few weeks, so that's not what we use in servers anyway...).

    Sure, you could have more fine-grained multitasking than this with preemption, but for normal realtime loads it shouldn't be necessary. I don't know how fine-grained it is on the Radeons, but it's more than once per Present() at least.

    Yeah, there seem to be some problems with acceleration when using the 2nd card from remote or service contexts; I will have another look at that soon. (Traditionally our app runs from an auto-logged-in local user to get acceleration.)
     
  16. A1xLLcqAgt0qc2RyMz0y

    Regular

    Joined:
    Feb 6, 2010
    Messages:
    807
  17. Blazkowicz

    Legend Veteran

    Joined:
    Dec 24, 2004
    Messages:
    5,359
    Wow. It's a GK104-based Tesla, if you follow the link.
    That explains why there was going to be a conference introducing a Tesla with no GK110 on the horizon, and also why the GTX 690 has such unusual build quality :).

    I knew there was something wrong with the forum comments (not especially on this forum) that said "the GTX 680 is bad for GPGPU".
     
  18. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    6,531
  19. trinibwoy

    trinibwoy Meh
    Legend Alpha

    Joined:
    Mar 17, 2004
    Messages:
    10,314
    Location:
    New York
    GK104-based Quadros I can see, but Tesla? The thing has no DP or ECC.
     
  20. Blazkowicz

    Legend Veteran

    Joined:
    Dec 24, 2004
    Messages:
    5,359
    It does have ECC; ECC on Tesla or Quadro is just a software trick. But that means there's ECC in the L1 and L2, so they planned for the eventuality.

    DP is there, it's just slow. I believe even the low-end GeForces support DP. But you can read about it there yourself:
    http://www.brightsideofnews.com/new...esla-card3b-8gb-ecc-gddr52c-weak-dp-rate.aspx

    Basically, since even hugely fast FP32 alone can be useful, some industries apparently said "shut up and take my money".
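    If ECC is done in software, the check bits have to live in the same GDDR5 as the data, which is why enabling it costs usable capacity (and bandwidth). A back-of-the-envelope sketch; the SECDED-style layout (8 check bits per 64 data bits) and the 8 GB figure are assumptions for illustration, and the reservation Nvidia actually makes may differ:

```python
# Rough cost of storing ECC check bits in-band in ordinary memory,
# ASSUMING a SECDED-style code: 8 check bits per 64-bit data word.
data_bits = 64
check_bits = 8

overhead = check_bits / (data_bits + check_bits)
card_memory_gb = 8  # the 8 GB card mentioned in the linked article

usable_gb = card_memory_gb * (1 - overhead)
print(f"In-band ECC overhead: {overhead:.1%}")            # ~11.1%
print(f"Usable memory with ECC on: {usable_gb:.2f} GB")   # ~7.11 GB
```

    That is the basic trade-off of "soft" ECC versus the dedicated extra DRAM chips used on registered server DIMMs: no extra hardware, but you give up a slice of capacity and bandwidth when it's switched on.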
     
