NVIDIA Kepler speculation thread

Discussion in 'Architecture and Products' started by Kaotik, Sep 21, 2010.

  1. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,062
    Likes Received:
    3,119
    Location:
    New York
Lol, what? Why did they even bother hyping up that shit when they knew people would be expecting Kepler news and hence be disappointed?
     
  2. Sinistar

    Sinistar I LIVE
    Regular Subscriber

    Joined:
    Aug 11, 2004
    Messages:
    660
    Likes Received:
    74
    Location:
    Indiana
    I suspect in a week or two another announcement will be coming, so we should all wait before buying an AMD card. Then the cycle will repeat itself, until Kepler actually comes out.
     
  3. AlexV

    AlexV Heteroscedasticitate
    Moderator Veteran

    Joined:
    Mar 15, 2005
    Messages:
    2,535
    Likes Received:
    144
In my opinion, they did it precisely in order to get people to wait for Kepler some more. And because they're trying to turn Tegra into a core business asset (it could be argued that it already is), piggybacking a bit on desktop users' expectations can't hurt, au contraire!
     
  4. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,062
    Likes Received:
    3,119
    Location:
    New York
  5. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,716
    Likes Received:
    2,137
    Location:
    London
Someone needs to start a "Kepler is cancelled, next stop Maxwell" rumour.
     
  6. jimbo75

    Veteran

    Joined:
    Jan 17, 2010
    Messages:
    1,211
    Likes Received:
    0
    Kepler and Maxwell are both cancelled as Nvidia is moving directly to mass production of their new chip "Osborne". It is so fast that new physics need to be created in order to understand what is actually going on.

    Rumours of Nvidia's 20nm yield issues due to a lack of unobtanium and wishalloy are thought to be no more than semi-accurate lies straight out of AMD HQ.
     
  7. Psycho

    Regular

    Joined:
    Jun 7, 2008
    Messages:
    746
    Likes Received:
    41
    Location:
    Copenhagen
    Not sure how much secret info he claims to have, but:
    http://vr-zone.com/articles/nvidia-...sure-on-both-high-end-and-low-end-/14937.html
     
  8. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,062
    Likes Received:
    3,119
    Location:
    New York
    I could believe that. It would explain Tahiti's otherwise questionable pricing.
     
  9. Malo

    Malo Yak Mechanicum
    Legend Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    8,931
    Likes Received:
    5,533
    Location:
    Pennsylvania
ok so they're focusing on getting their mid-range, and possibly lower-end, parts out sooner? Isn't that where the money is anyway? Maybe they just don't care about the enthusiast crown anymore and are focusing on the mainstream, tablet and HPC markets. If they've already garnered a number of large contracts and perf/$/TDP wins for HPC with Fermi, then they can ride that for quite a while longer?
     
  10. TKK

    TKK
    Newcomer

    Joined:
    Jan 12, 2010
    Messages:
    148
    Likes Received:
    0
    Who knows, maybe Charlie was right and GK100 is cancelled while GK110 is basically to GK100 what GF110 was to GF100.
     
  11. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    19,426
    Likes Received:
    10,320
If true, this would make two nodes in a row where Nvidia's big-die strategy has been problematic when moving to a new node. Even if GK100 isn't cancelled, it'll still arrive far later than the competition, similar to GF100 compared to Cypress. Except this time there are no radical changes like Fermi's; if there were, white papers would already have been released to brief HPC companies on the compute changes, as happened with Fermi.

    Regards,
    SB
     
  12. mczak

    Veteran

    Joined:
    Oct 24, 2002
    Messages:
    3,022
    Likes Received:
    122
Wasn't that (GK104 launching first, GK110 later) what the rumors have suggested for quite a while already (though the timeframe might not be right)?
I can't really see the news here. The only question seemed to be whether GK104 can beat (or equal) Tahiti or not. Ok, so VR-Zone thinks it can't, but they don't say why. I'm not so sure anymore that it really can't, given that it's most likely a chip of very similar size to Tahiti; factor in that (in contrast to Tahiti) it won't have any features just for compute, and it looks even better. Given the relatively conservative clocks/TDP AMD have chosen, that might enable nvidia to compete quite easily (at least with the 7950).
     
    #1812 mczak, Feb 20, 2012
    Last edited by a moderator: Feb 20, 2012
  13. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,247
    Likes Received:
    4,465
    Location:
    Finland
Though if you consider the past, AMD has had leaps-and-bounds better perf/mm² before - the real question is whether they can hold onto it with the GCN architecture?
     
  14. whitetiger

    Newcomer

    Joined:
    Feb 5, 2012
    Messages:
    57
    Likes Received:
    0
    There's always a small but finite chance that Charlie is right
    - even a broken clock is right twice a day ...
    - eventually a proton will decay ...

    :lol:
     
  15. whitetiger

    Newcomer

    Joined:
    Feb 5, 2012
    Messages:
    57
    Likes Received:
    0
I agree, it's just saying what has been rumored for several weeks now:
- the GK104 will likely not beat the HD7970, but maybe the HD7950...
- and the GK110 is due in the Autumn

Here's my guess: the GK104 uses TSMC's 28HP process
- however, they canned the GK100 because it used too much power
- and the GK114 & GK110 will move to the 28HPL process, just like AMD is using for Tahiti...
     
  16. whitetiger

    Newcomer

    Joined:
    Feb 5, 2012
    Messages:
    57
    Likes Received:
    0
    AMD had better perf/mm^2 in the past, mainly because
    1) They didn't have all the GPGPU gunk that nVidia chose to put in
    2) They didn't have a hot clock

    With this generation
    1) AMD has now gone down the GPGPU route, much like Fermi
    2) nVidia has allegedly dropped the hot-clock

    So, I would expect things to be much closer now
    - though AMD would still have an advantage if they are on 28HPL, and NV is on 28HP...
    Edit: Actually, that's a perf/w issue, not a perf/mm^2 issue, AFAIK
     
    #1816 whitetiger, Feb 20, 2012
    Last edited by a moderator: Feb 20, 2012
  17. mczak

    Veteran

    Joined:
    Oct 24, 2002
    Messages:
    3,022
    Likes Received:
    122
That depends on the exact chip you're looking at; it wasn't really all that much recently. Cayman vs. GF114 is just a tiny bit better in perf/mm². Barts, being only very slightly larger than GF116, is of course better there than all of Cayman, GF114 and GF116, quite massively so compared to GF116, though that one seems to be larger than it should be (if you compare how it scales down from GF114, I guess the mostly unnecessary 192-bit bus when coupled with gddr5 and the 8 excess ROPs are at least partly to blame). Juniper is also much better in perf/mm² than GF116 (but again, GF116 just seems too big).
Turks (or Redwood) against GF108: nvidia seems at a disadvantage again, but the die size advantage is small and performance quite close if equipped with ddr3.
GF119 vs. Caicos, I don't know who wins. Caicos is smaller and faster with gddr5, but paired with ddr3 (which is the only version you can buy) GF119 easily wins the "best of the crap" title.
So nvidia catching up with GCN in terms of perf/area seems quite doable. I think some performance "sacrifice" in overall perf/area was expected for GCN in exchange for the much more predictable compute performance (and an easier compiler too...), whereas I don't see any reason why it would change a lot for Kepler relative to Fermi (apart from dropping the hot clock and doubling the ALUs, which should result in a larger area but potentially also higher achievable clocks).
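The last point, dropping the hot clock while doubling the ALU count, can be sketched numerically. A minimal Python sketch: the GF114 figures are the shipping GTX 560 Ti clocks (822 MHz base, shader domain at 2x), while the no-hot-clock layout is purely hypothetical, assuming the same base clock and twice the ALUs.

```python
# Peak single-precision throughput: each ALU retires one FMA (2 FLOPs) per clock.
def gflops(alus, clock_ghz, flops_per_alu_per_clock=2):
    return alus * flops_per_alu_per_clock * clock_ghz

# GF114 (GTX 560 Ti): 384 ALUs on a hot clock (2 x 822 MHz base clock)
fermi_style = gflops(384, 1.644)

# Hypothetical no-hot-clock layout: twice the ALUs at the base clock
kepler_style = gflops(768, 0.822)

print(fermi_style, kepler_style)  # identical peak throughput
```

Same theoretical throughput either way; the trade-off the posts describe is in area (more ALUs) versus the design and power cost of the separate shader clock domain.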
     
  18. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Part of the answer could be computed from gaming perf/mm² when comparing Cayman and Tahiti.
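That comparison is straightforward to set up. A hedged sketch: the die areas are the published figures for the two chips, but the relative gaming performance index is an illustrative placeholder (Tahiti assumed roughly 35% faster), not a measured number.

```python
# Gaming perf/mm^2, Cayman vs. Tahiti. perf_index values are illustrative
# placeholders normalized to Cayman = 1.00, NOT benchmark results.
chips = {
    "Cayman (HD 6970)": {"area_mm2": 389, "perf_index": 1.00},
    "Tahiti (HD 7970)": {"area_mm2": 365, "perf_index": 1.35},  # assumed
}

perf_per_area = {name: c["perf_index"] / c["area_mm2"] for name, c in chips.items()}
for name, ppa in perf_per_area.items():
    print(f"{name}: {ppa:.5f} perf/mm^2")
```

Under these assumed numbers Tahiti comes out ahead on perf/area despite the GCN compute additions, since it is both smaller and faster; plugging in real benchmark indices would settle the question CarstenS raises.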
     
  19. itsmydamnation

    Veteran

    Joined:
    Apr 29, 2007
    Messages:
    1,349
    Likes Received:
    470
    Location:
    Australia
but that's just a massive generalization in itself.

first off you have our beloved :lol: saying:
2nd if you look in the GCN thread, lots of people were borderline orgasmic about GCN's ALU architecture, specifically around scheduling and how it is far simpler than Fermi but almost as functional.

3rd when has more cache ever been a bad thing?

4th you can't blame the hot clock; it's an assumption that NV can get better performance per mm of shader ALU without a hot clock. As everyone says, ALUs are cheap; effectively moving data around is much more expensive.

5th we don't really have a real GCN driver yet, so let's hope that comes soon and we can see what we can really expect; it could still be current level or we could get a nice boost.

6th ALUs take up what, 25-30% of the die on most GPUs?
     
    #1820 itsmydamnation, Feb 20, 2012
    Last edited by a moderator: Feb 20, 2012
  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.