Nvidia Turing Speculation thread [2018]

Discussion in 'Architecture and Products' started by Voxilla, Apr 22, 2018.

Thread Status:
Not open for further replies.
  1. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
    Several people stated that this "long time" phrase was actually an attempt to shut down the hordes of journalists asking the same question repeatedly, as no company is stupid enough to announce that its next-gen product is a long time away, especially not NVIDIA.
     
  2. entity279

    Veteran Subscriber

    Joined:
    May 12, 2008
    Messages:
    1,332
    Likes Received:
    500
    Location:
    Romania
    Lying hurts business. So I, on the other hand, say they aren't stupid enough to do it.
     
  3. McHuj

    Veteran Subscriber

    Joined:
    Jul 1, 2005
    Messages:
    1,613
    Likes Received:
    869
    Location:
    Texas
    Can you define a long time?

    It's a vague term that different people take to mean different things.
     
  4. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    19,418
    Likes Received:
    10,312
    Yup, for me, a long time is 1 year or more.

    However, my nephew last night let me know that 2 hours is a VERY long time. :p

    Regards,
    SB
     
  5. entity279

    Veteran Subscriber

    Joined:
    May 12, 2008
    Messages:
    1,332
    Likes Received:
    500
    Location:
    Romania
    Obviously it's vague. I already stated my personal definition in this context: more than 3 months.
    The context being that of a corporate entity.
     
  6. CSI PC

    Veteran

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    You also have the semantics of the actual launch (still vague enough to have several interpretations), the review window before that, and potentially a soft launch and announcement; looking at each individually changes the context of "when".
     
  7. Frenetic Pony

    Regular

    Joined:
    Nov 12, 2011
    Messages:
    807
    Likes Received:
    478
    Or, entirely plausibly, they've given up on 12nm as a bad node to put a year-plus worth of products into, when AMD seems adamant about getting products out on the much, much better "7nm" node. There's little sense, perhaps, in investing in an entire product lineup that could be outdated in less than a year, especially when Nvidia is already on top.

    Why not wait till sometime next year, when TSMC's 7nm (already ahead of everyone else) should be mature enough, and have enough production capacity, to support large-scale, large-die manufacturing? AMD has already announced that its (no doubt very limited and expensive) TSMC 7nm card will be shipping by the end of the year. What's good news for them is good news for Nvidia too, as far as scheduling is concerned.
     
    nnunn likes this.
  8. CSI PC

    Veteran

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    A primary driver will be whether they feel they can sustain GP102 and GP104 for inferencing rather than moving to Tensor-equipped Gx102 and Gx104 versions, along with, separately, what solutions are required for datacenters/clouds.
    There is a lot of competition out there beyond GeForce gaming, and as I mentioned earlier it is very difficult to truly split GeForce/Quadro from Tesla (apart from possibly node size; one could expect Tesla at 7nm). No Tesla models beyond V100 have been announced yet (Tesla parts can be announced and presented months before launch), and V100 does not fit all of the Tesla segment requirements/deployments. I can see you could argue this point fits your post, but it would apply more to GeForce. One also needs to consider the recent Hot Chips schedule change due to high-profile visibility.
    But yes, I agree: given the lateness of the cycle, Nvidia is now entering closer to the 7nm phase for all models, not just certain Tesla parts. However, they also have contractual obligations for GDDR6 this year (probably not their only supply-chain commitment), so from a logistics/line-process perspective it can become a headache; quite a few of the larger international tech companies use a formula for ideal logistics/manufacturing-line efficiency.
    I think multiple factors have caused this delay, from technical maturity to BOMs (GDDR6 fitting into both being one example, and maybe GDDR5X was a misstep) to situations such as the mining craze, which will cause a business drop (there is no point selling a new model into the mining craze, and even less once sales start to dry up, from a business-growth-narrative perspective). Then, on the Tesla side, it is about getting a clear narrative around their recent announcements and the steps toward their all-in-one solutions, which are high-margin even by Nvidia standards.

    But as you say, they are in a situation where they may have had to change their strategy very late in the day.
    Beyond your points/context, which are all valid, I would say it is very difficult to conclude or predict either way from what has been said publicly so far.
     
    #88 CSI PC, Jun 6, 2018
    Last edited: Jun 6, 2018
    iMacmatician likes this.
  9. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    832
    Likes Received:
    505
    I was under the impression that AMD uses the GloFo 7 nm (EUV) process:
    "In a fairly unexpected move, AMD formally demonstrated at Computex its previously-roadmapped Vega GPU made using GlobalFoundries’ 7 nm (7LPP) process technology"
    Nvidia may not have access to such an advanced process. AFAIK the TSMC 7 nm (non-EUV) process is for relatively small, low-power smartphone SoCs such as the A12(X).
     
    Picao84 likes this.
  10. Samwell

    Newcomer

    Joined:
    Dec 23, 2011
    Messages:
    149
    Likes Received:
    183
    I think that's wrong. GF was never mentioned as far as I know, and GF themselves said they are a bit behind. This year no fab other than TSMC can deliver 7nm products. Epyc is also 7nm TSMC, because TSMC is ready first.
     
    pharma likes this.
  11. CSI PC

    Veteran

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    I think that might be underplaying the first-stage CLN7FF from TSMC; remember, quite a number of tech manufacturers have taped out on it, and TSMC and others position it as a first-stage replacement for 16FF+.
    The issue may be more to do with manufacturing/scaling cost-performance due to DUV multi-patterning, but it is still viable as stage 1, although the move to EUVL next year still brings some performance/current/scaling improvements over CLN7FF.
    Separately, a fairly recent article at EE Times.
    May 2nd article: https://www.eetimes.com/document.asp?doc_id=1333244
     
    pharma likes this.
  12. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,465
    Location:
    Finland
    In their last quarterly investor call they worded it pretty straight: the first 7nm chips will be from TSMC. They will be using both GloFo and TSMC 7nm regardless.
    edit: also, wasn't GloFo supposed to start risk production in early H2? That would rule out working 7nm silicon from them.
     
  13. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    The last slide of the deck posted at AnandTech says AMD has working Zen2 silicon in the lab:
    https://www.anandtech.com/Gallery/Album/6399#24
    Zen2 was supposed to be fabbed at GloFo, no?
     
    #93 CarstenS, Jun 6, 2018
    Last edited: Jun 6, 2018
  14. Samwell

    Newcomer

    Joined:
    Dec 23, 2011
    Messages:
    149
    Likes Received:
    183
    I think it was on their last conference call when Lisa mentioned Zen2 will come from both TSMC and GF, and it seems Epyc will be produced at TSMC.
     
  15. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    832
    Likes Received:
    505
    If AMD can already demonstrate a working GPU on 7 nm, more than likely Nvidia also has a working one in their labs.
    That would be a post-Volta HPC GPU in the first instance.
    I'm speculating on 6 HBM2 stacks for this one.
    The question is when it will be announced.
    Feeling the heat from AMD now, it might be sooner rather than later (Hot Chips?).
     
    iMacmatician likes this.
  16. Samwell

    Newcomer

    Joined:
    Dec 23, 2011
    Messages:
    149
    Likes Received:
    183
    March 18, 2019 is the date for the Volta successor announcement. There's no reason to announce earlier.
     
    pharma likes this.
  17. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    832
    Likes Received:
    505
    Volta V100 was announced at May GTC 2017
    Pascal P100 was announced at April GTC 2016

    So either Nvidia is not in a hurry or something else is going on.
     
    BRiT likes this.
  18. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Their roadmaps seem to be very … let's say flexible. In one earlier iteration, Maxwell was supposed to be the big leap from Kepler in terms of DGEMM/W. Pascal appeared only later on, and originally Volta was scheduled to introduce stacked DRAM.
     
  19. CSI PC

    Veteran

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    Volta was also meant to introduce Tensor Cores according to one of the R&D CUDA engineers, even if it was not in the public presentations; at the very least it became part of the design's function scope in the early stages.
    That makes sense, as it is not something they could have developed and integrated as cleanly as they did unless it had been part of the R&D design scope for the Volta project for a while, although it's fair to say the push was a mixed-precision narrative for a long time with regard to Pascal-to-Volta.
    That is possibly part of the reason to introduce Pascal first without it, while also ticking off some technical-risk milestones with Pascal for their high-profile commitments.
     
    pharma and nnunn like this.
  20. Wynix

    Veteran

    Joined:
    Feb 23, 2013
    Messages:
    1,052
    Likes Received:
    57
    Perhaps Nvidia has done what AMD did: focus on AI (or something else) at the expense of gaming.
    Nvidia does appear to have their fingers in many pies lately.
     
