Nvidia Pascal Speculation Thread

Discussion in 'Architecture and Products' started by DSC, Mar 25, 2014.

Thread Status:
Not open for further replies.
  1. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    What was your point again?
    The whole discussion about whether or not it's a mockup is just stupid.
     
  2. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    Is A1 or A0 the first spin?

    Nothing to do with this discussion btw, just wanted to know. I know AMD and nV number their first spins differently, I just don't remember which convention is which.
     
    #602 Razor1, Jan 13, 2016
    Last edited: Jan 13, 2016
  3. Benetanegia

    Newcomer

    Joined:
    Sep 4, 2015
    Messages:
    222
    Likes Received:
    136
    Nah, forget about it.

    My point is that although it's most likely a 980M, the evidence presented is weak, so jumping to the conclusion that it is definitely not Pascal, and thus that Pascal is in trouble as CharlieD said, is stupid. It's the jumping to conclusions part that bugs me, so maybe I brought up a few strawman arguments to bring balance. Hehe.

    One example of the weak evidence: if, say, they were using a hypothetical 256-bit Pascal (GP106) with the exact same TDP as the 980M, the module could look the same. It may not, it probably wouldn't, but people's response to me was basically that it's completely impossible and that I'm stupid for even suggesting it. Also, a few components on the top are in a different position, and being a noob, that looks like a difference to me, but instead of explaining to me why that doesn't matter...

    Another stupid suggestion I brought up was that maybe they made a 28nm Pascal chip in order to have the Drive PX tested and qualified by system integrators and, most importantly, to have software developed a year in advance, so as to grab hold of this lucrative market as soon as possible. Apparently making a 28nm Pascal chip, for this or any other purpose, would automatically make Nvidia go under. And the one main reason this is impossible, of all things, is that Anandtech quotes them saying the Pascal chips in the PX2 are 16FF+. The idea here is that Nvidia is lying about them being Pascal altogether, but apparently the 16FF+ part needs to be taken at face value. So instead they are still 28nm, only they are Maxwell instead of Pascal, but I am an idiot for suggesting that maybe they are 28nm Pascal chips, because they clearly state they are 16FF+.
     
  4. Rys

    Rys PowerVR
    Moderator Veteran Alpha

    Joined:
    Oct 9, 2003
    Messages:
    4,156
    Likes Received:
    1,433
    Location:
    Beyond3D HQ
    A0 is very commonly first, and due to the complexity of these things it's almost unheard of to be able to go to mass production with it, so usually you see A1 or A2 make it out to market. If the design needs fixes you can't do in metal, off you go to B0. I believe NV stick to that nomenclature.

    If the chips really were GM204, I'd like to understand the thinking behind the decision that lets them misrepresent them as Pascal. It's 2016, and while the number of people who really care about this stuff is probably at an all-time low, it's just cheap wool to pull over folks' eyes.
     
    Lightman and Razor1 like this.
  5. Frenetic Pony

    Regular Newcomer

    Joined:
    Nov 12, 2011
    Messages:
    331
    Likes Received:
    85
    So far as I can tell they never said specifically that the board they showed had a pair of Pascal GPUs on it, they just implied it. The motivation might be that they believe, and/or want others to believe, that Pascal GPUs will be ready by the time the board ships. Showing off an "actual physical product in hand!" is better for investors and/or potential customers than waving around airily and saying "it'll be ready, just imagine I have something in my hand!"
     
  6. Benetanegia

    Newcomer

    Joined:
    Sep 4, 2015
    Messages:
    222
    Likes Received:
    136
    GM204 took only about 6 months from when it was "reported" as taped out to the day it launched, and it's A1, right? Is there time for a respin in that timeframe? Or did it tape out a lot earlier than was "reported"?
     
  7. Rys

    Rys PowerVR
    Moderator Veteran Alpha

    Joined:
    Oct 9, 2003
    Messages:
    4,156
    Likes Received:
    1,433
    Location:
    Beyond3D HQ
    There's time for a spin, especially on a process they knew so well at that point (which implies all manner of things about really knowing tooling, etc), and which didn't have capacity issues and wasn't ramping through risk production to mass production, using an evolutionary GPU design.

    Pascal on FinFETs fits precisely none of that.
     
  8. Rys

    Rys PowerVR
    Moderator Veteran Alpha

    Joined:
    Oct 9, 2003
    Messages:
    4,156
    Likes Received:
    1,433
    Location:
    Beyond3D HQ
    If I'm a big automotive module supplier, I already know it's not a real product because NV told me the PX2 roadmap ages ago. So NV are left with misleading the public and investors. Golf clap.
     
    Malo likes this.
  9. dbz

    dbz
    Newcomer

    Joined:
    Mar 21, 2012
    Messages:
    98
    Likes Received:
    41
    That was my understanding while watching the live stream. Jen-Hsun was talking about Pascal but didn't actually say that he was holding the finished article. He was quick to note that the board wasn't going into production until towards the end of the year and that Volvo were the launch partner (both these tidbits came towards the end of the presentation). Talking about Pascal while holding the module implies a direct linkage, but he didn't say that he was actually holding the finished article as far as the hardware fit-out was concerned - either the Pascal GPUs or the SoCs on the reverse side of the module.
     
  10. Benetanegia

    Newcomer

    Joined:
    Sep 4, 2015
    Messages:
    222
    Likes Received:
    136
    I didn't know it could be so fast. I've definitely heard that it takes 2-3 months to get silicon back, as a minimum. Shouldn't believe the internet.

    After a bit of searching, I'm leaning more towards Nvidia using a different nomenclature though. GK110 can be found in two revisions, A1 and B1. Is there any compelling reason why they could have f'd up a B0? B1, which I believe appeared with the 780 Ti, came pretty late in the game too.
     
  11. psurge

    Regular

    Joined:
    Feb 6, 2002
    Messages:
    939
    Likes Received:
    35
    Location:
    LA, California
    Clueless non-noob here: are there reasons to hide an existing production board at this stage? I'm thinking along the lines of wanting to stop a competitor from getting a rough die size estimate and using that together with announced specs to get an idea of product placement and perf/watt.

    On another note - based on the table in the presentation, it seems like perf/watt isn't going up much relative to Titan X, at least where SP is concerned. Is that something that is affected much by the higher ambient temperature and reliability requirements of an automotive environment?
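
    For what it's worth, a back-of-envelope comparison, assuming the commonly quoted figures of 8 TFLOPS FP32 at 250 W for the PX 2 module and ~6.6 TFLOPS FP32 (boost) at 250 W for Titan X - those numbers are my assumptions here, not taken from the table:

    [code]
    // Rough FP32 perf/W comparison under assumed (announced) figures:
    //   Drive PX 2: 8 TFLOPS FP32, 250 W module
    //   Titan X:    ~6.6 TFLOPS FP32 (boost), 250 W TDP
    #include <cstdio>

    int main()
    {
        const double px2_tflops = 8.0, px2_watts = 250.0;
        const double tx_tflops  = 6.6, tx_watts  = 250.0;

        const double px2_gflops_per_w = px2_tflops * 1000.0 / px2_watts; // ~32.0
        const double tx_gflops_per_w  = tx_tflops  * 1000.0 / tx_watts;  // ~26.4

        printf("PX 2:    %.1f GFLOPS/W\n", px2_gflops_per_w);
        printf("Titan X: %.1f GFLOPS/W\n", tx_gflops_per_w);
        printf("Ratio:   %.2fx\n", px2_gflops_per_w / tx_gflops_per_w); // ~1.21x
        return 0;
    }
    [/code]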
     
  12. tunafish

    Regular

    Joined:
    Aug 19, 2011
    Messages:
    542
    Likes Received:
    171
    2-3 months is about right for a major revision, that is A0 vs B0. Minor revisions (A0 vs A1) happen faster. The difference is that minor revisions update only some of the metal layers, while major revisions can have changes at all layers, including the transistors. If you only update the metal layers, you can use the older revision wafers that have had the transistors baked in. This greatly reduces turnaround time, as transistors are the most time-consuming part of chip manufacturing. For this reason, all modern designs incorporate spare transistors on die wherever there is empty room to facilitate fixing mistakes without having to touch the transistor layer.

    Yes, many. Chip design is very hard. Chips that work at minor revision 0 either didn't have many changes made to them or the designers got fantastically lucky.
     
  13. Benetanegia

    Newcomer

    Joined:
    Sep 4, 2015
    Messages:
    222
    Likes Received:
    136
    I know there are plenty of reasons. My question was more about which is most likely, all things considered.

    I think the answer is that Nvidia starts with A1 anyway. CharlieD stated as much in this (in)famous piece:

    http://semiaccurate.com/2010/02/17/nvidias-fermigtx480-broken-and-unfixable/

    I don't take CharlieD very seriously in most things involving Nvidia, but I think we can trust him on that 100%; he wouldn't have missed the opportunity to say that Nvidia was on its 4th spin, if that had actually been the case.
     
  14. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,079
    Likes Received:
    648
    Location:
    O Canada!
    Did anyone consider doing a die size to FLOP scaling calculation?
     
  15. dbz

    dbz
    Newcomer

    Joined:
    Mar 21, 2012
    Messages:
    98
    Likes Received:
    41
    Sounds like a fun exercise - all you have to do is estimate clock speed and ALU count, specify the precision, and decide what your baseline is. Nvidia's own specifications use only the base clock, i.e. 4.29 TF FP32 for the K40 is obviously based on the 745 MHz base clock, since the 875 MHz boost works out to 5.04 TF. Obviously the boost frequency isn't guaranteed, but how many boost-capable Nvidia cards actually operate at their base clock?
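
    As a quick sanity check of those numbers (a sketch, assuming 2880 CUDA cores for the K40 and 2 FP32 FLOPs per core per clock from FMA):

    [code]
    // Theoretical FP32 throughput: ALUs * 2 (FMA) * clock
    #include <cstdio>

    static double tflops(int alus, double clock_mhz)
    {
        return alus * 2.0 * clock_mhz * 1e6 / 1e12;
    }

    int main()
    {
        printf("K40 @ 745 MHz base:  %.2f TFLOPS\n", tflops(2880, 745.0)); // ~4.29
        printf("K40 @ 875 MHz boost: %.2f TFLOPS\n", tflops(2880, 875.0)); // ~5.04
        return 0;
    }
    [/code]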
     
  16. Frenetic Pony

    Regular Newcomer

    Joined:
    Nov 12, 2011
    Messages:
    331
    Likes Received:
    85
    To hide something from being photographed? Absolutely. AMD definitely showed off one, if not two, Polaris GPUs at CES but didn't allow any photos. How you price, how you spin PR, etc. can all change depending on how good your product is compared to a direct competitor's. A photo can allow estimates of die size, overall size, RAM size and type, and so on.

    Which doesn't necessarily mean Nvidia's Pascal isn't facing problems. But given that they were showing off a chip for the auto industry, and the GPU itself wasn't in the spotlight so to speak, it doesn't necessarily mean it's delayed either. It just means the PX 2 itself isn't coming out any time soon, and Nvidia didn't say it was.
     
  17. gamervivek

    Regular Newcomer

    Joined:
    Sep 13, 2008
    Messages:
    715
    Likes Received:
    220
    Location:
    india
    Nope. You jump in with a comment that is wrong from the get-go, and you don't bother to see that the die is from Jan 2015.

    You don't have a point, they do.

    Considering your visual prowess in telling us that the 480M was almost the same component-wise, forgive me for not giving it any consideration at all when you think that die is 5% smaller.

    The lady doth protest too much.

    If you didn't get that I was mocking the nonsensical scenarios you're building up with your magical thinking, then it's worthless to bother with you any further.

    "I said a lot of things which fall within the realm of what's possible, regardless of how crazy or unlikely they are."

    And as I said, there's speculation and then there's lala land.

    A 28nm 'prototype' Pascal for Drive PX(2) from almost a year ago, so that they can get to market faster....

    The epicycles don't end, do they? I thought razor1 was a waste of time on this forum; you've exceeded him with spectacular ease.
     
  18. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    I consider the odds that they made a 28nm Pascal version to be nil. Especially if you take this into account: https://mobile.twitter.com/scottgray76/status/601900512741498880

    Nvidia could develop all their software on a Maxwell-based PX 2 and then just plug in a Pascal board and get the speed they need.
     
  19. Benetanegia

    Newcomer

    Joined:
    Sep 4, 2015
    Messages:
    222
    Likes Received:
    136
    But it wouldn't have all the enhanced deep learning capabilities. Whatever it does to be able to perform 3 DL ops per SP op, I'd guess it involves some sort of vectoring at least, or some sort of specialized instruction (set?) at the other end of the spectrum.
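
    Purely as an illustration of what I mean by vectoring - a sketch, not a claim about how Pascal actually implements its DL ops - packed FP16 math in CUDA already looks like this on sm_53+ parts:

    [code]
    // Hypothetical illustration: one __hfma2 performs two FP16 multiply-adds
    // on packed half2 operands (requires cuda_fp16.h and compute capability 5.3+).
    #include <cuda_fp16.h>

    __global__ void fma_half2(const __half2* a, const __half2* b, __half2* c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            // c[i].x = a[i].x * b[i].x + c[i].x
            // c[i].y = a[i].y * b[i].y + c[i].y
            c[i] = __hfma2(a[i], b[i], c[i]);
        }
    }
    [/code]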

    Thanks for continuing to be useless. Will you care to contribute anything anytime? As you can see, silent_guy agrees with you that the idea is crazy, but he has something to say, which is something for me to learn from, as did Rys and tunafish on other matters. You, sir, are as useless as I am, if not more so, which makes it all the more shameful given your supposedly higher knowledge.

    Despite my ignorance, at least my short conversation with Rys and tunafish resulted in learning that Nvidia's naming convention for first silicon is A1 instead of A0, which, funnily enough, combined with all the info they provided regarding fab time, could mean that technically - even if very, very unlikely - it could actually be first-silicon 16FF+ Pascal, if only by a stretch. And with this I'm not claiming that it is Pascal, just that the evidence for it not being Pascal is not as strong as it was made out to be. I've never challenged the idea that it is actually a 980M (I've stated multiple times that it most probably is); what I've challenged is the thought process by which you reached that conclusion with such certainty (where there is none to be had), without taking everything into consideration. Don't bother making another "you're a noob" post with nothing else to contribute. You made your opinion on that point abundantly clear.
     
  20. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    When silicon comes back from the fab, everything has already been validated in simulation and emulation. Compilers have already been written. Programs have already been executed and tested for correctness.

    Whatever these DL related capabilities may be, they're probably minor additions in the grand scheme of things. Nobody would ever consider spending $20M on a piece of silicon that will never see mass production to make sure that a minor feature really, really works.
     