Nvidia Pascal Announcement

Discussion in 'Architecture and Products' started by huebie, Apr 5, 2016.

  1. CSI PC

    Veteran

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    You know that the R&D budget is rather diverse and not all of it is hardware; Pascal R&D would also include the massive amount spent on automotive/Tegra X2/etc.
    But even then it does not matter, because profits from all sales segments and markets (this comes down to margins) contribute to R&D even if it is different HW.
    Also part of this is building for the future, the value of HPC-ML-auto is meant to be massive but time will tell.
    To see how well HPC and ML influence growth and revenue you need to watch from Q3 FY16 onwards, due to the various projects coming online or newly signed since the P100 released.
    Also do not forget profit margin.
    The GP102 card is $1,200 (and the trend shows only around 1% goes to consumers), the cheapest PCIe GP100 card is around $5,300, and the largest projects can be direct sales (meaning all margins go to Nvidia and none are shared with sales-channel partners/integrators such as Cray).
    So factor that into your revenue picture, especially as even the higher-margin gaming cards have historically only had a consumer demographic of 0.8% to 1% each; that is the 980, the 980 Ti, and the Titan X.
    So the greatest gaming revenue comes from the lower-margin models, while net profit between segments would be much closer.
    The interesting slide would be profit for each segment-market.
    Just from a HW context, HPC could be seen to generate 3x to 4x more profit margin (yes, generalising is not usually a good idea, I agree, but it is going to be high). This is made more complex by the 'soft costs' of software, which apply to both HPC/ML and gaming in different ways, but optimisation/API/library/coding teams exist in each.
    Cheers
     
    #1941 CSI PC, Jul 30, 2016
    Last edited: Jul 30, 2016
  2. smw

    smw
    Newcomer

    Joined:
    Sep 13, 2008
    Messages:
    113
    Likes Received:
    43
    Again, considering how close Pascal is architecturally to Maxwell, and that Volta is supposedly much more different, is already spoken for by several supercomputers, and is hence on a tight schedule, I would expect that even graphics R&D was about split between Pascal and Volta. But three years ago also overlaps with Maxwell development (Gen 2 was released less than two years ago), and taking their other ventures into account, 66% of the budget for the last three years going into Pascal seems highly unlikely. And the things that they did improve, like the critical-path optimization allowing for higher clocks, would most probably be shared with future arches, unless you think that they will scale Volta back to 1 GHz :)

    Now, developing a whole new arch only for HPC is also probably not financially viable, if for no other reason than that current graphics development is also heading heavily in the compute direction. Hence, any major compute improvements you would make for the HPC arch would probably benefit GPUs as well. However, I don't see why, once the arch is ready, making a single offshoot HPC chip for, say, a further $10-20 million of R&D would be something extraordinary.

    Anyway, my original point was just that, considering the fraction $2.5B is of their total R&D budget, the statement from the chief engineer seems incorrect and most likely he meant the total R&D budget for the duration. However, as I personally have no idea how NV spends their budget, it is entirely possible that they spent it all on Pascal :)
     
  3. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    Does GP102 (the one in the consumer Titan X) have double precision? How fast is it? 1/2 rate, 1/3 rate, 1/32 rate?
     
  4. RecessionCone

    Regular Subscriber

    Joined:
    Feb 27, 2010
    Messages:
    505
    Likes Received:
    189
    It's the same as the other GP10x chips. So I think that's 1/32.
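    To put the 1/32 rate in perspective, here is a back-of-the-envelope throughput sketch. The core count and boost clock are the published Titan X (Pascal) launch specs, and the 1/32 FP64 ratio is the figure discussed above; treat all three as assumptions, not measurements.

    ```python
    # Rough theoretical-throughput sketch for the Titan X (Pascal, GP102).
    # All inputs are published launch specs or the rate discussed above.

    CUDA_CORES = 3584          # Titan X (Pascal) shader count
    BOOST_CLOCK_GHZ = 1.531    # advertised boost clock
    FP64_RATE = 1 / 32         # FP64:FP32 ratio on GP10x consumer parts

    # Each core can retire one fused multiply-add (2 FLOPs) per clock.
    fp32_gflops = 2 * CUDA_CORES * BOOST_CLOCK_GHZ
    fp64_gflops = fp32_gflops * FP64_RATE

    print(f"FP32: {fp32_gflops / 1000:.2f} TFLOPS")   # ~11 TFLOPS
    print(f"FP64: {fp64_gflops:.0f} GFLOPS")          # ~343 GFLOPS
    ```

    So at 1/32 rate the card lands in the ~340 GFLOPS FP64 range, versus multiple TFLOPS for the 1/2-rate GP100.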
     
  5. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
    Wow, I don't know how I'd managed that, but I'd completely missed the Radeon Pro Duo. I did not know that thing existed.
     
  6. kalelovil

    Regular

    Joined:
    Sep 8, 2011
    Messages:
    568
    Likes Received:
    104
  7. Voxilla

    Regular

    Joined:
    Jun 23, 2007
    Messages:
    832
    Likes Received:
    505
    I've been wondering about the specs of a 1080 Ti.
    It could be based on GP102 with 1 disabled GPC and 5 channels of GDDR5X.
    That would result in 3200 shaders and 10 GB of memory at 400 GB/s.
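    The numbers above check out arithmetically. A quick sketch, assuming a full GP102 of 6 GPCs x 5 SMs x 128 shaders and a 384-bit bus treated as six 64-bit GDDR5X channels (the "1 disabled GPC, 5 channels" configuration is the post's hypothesis, not a confirmed SKU):

    ```python
    # Arithmetic check of the speculated cut-down GP102 "1080 Ti" config.

    SHADERS_PER_SM = 128
    SMS_PER_GPC = 5
    GPCS_ENABLED = 5            # 6 on the full die, 1 disabled
    CHANNELS_ENABLED = 5        # of 6 x 64-bit memory channels
    GDDR5X_GBPS = 10            # per-pin data rate
    GB_PER_32BIT_CHIP = 1       # one 8 Gb chip per 32-bit sub-channel

    shaders = GPCS_ENABLED * SMS_PER_GPC * SHADERS_PER_SM
    bus_bits = CHANNELS_ENABLED * 64
    bandwidth_gbs = bus_bits / 8 * GDDR5X_GBPS
    memory_gb = (bus_bits // 32) * GB_PER_32BIT_CHIP

    print(shaders, memory_gb, bandwidth_gbs)  # 3200 10 400.0
    ```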
     
  8. homerdog

    homerdog donator of the year
    Legend Subscriber

    Joined:
    Jul 25, 2008
    Messages:
    6,294
    Likes Received:
    1,075
    Location:
    still camping with a mauler
    I posted this in the AMD thread but since it's way OT there I thought I'd put it here. Does anyone know what's up with TR's results here? Which one is correct for the 980? The random-texture results look fine, but look at the black-texture results. Man, I hope I'm not missing something plainly obvious...

    From RX480 Review:
    [IMG]

    From GTX1080 Review:
    [IMG]
     


    Alexko likes this.
  9. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    FWIW, in my tests (avg. of multiple runs) I'm seeing results more akin to TR's RX480 review - assuming that's the test with 1 random/1 black texture.
     
    homerdog and Lightman like this.
  10. xpea

    Regular

    Joined:
    Jun 4, 2013
    Messages:
    551
    Likes Received:
    783
    Location:
    EU-China
  11. snc

    snc
    Veteran

    Joined:
    Mar 6, 2013
    Messages:
    2,115
    Likes Received:
    1,745
    xEx likes this.
  12. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    It's not a rumor. It's related to the small Pascal chips which haven't been announced yet and not any refresh.

    Final frequencies of those chips should clarify the picture. Even if it IS worse, it certainly doesn't come with any manufacturing constraints.
     
  13. xpea

    Regular

    Joined:
    Jun 4, 2013
    Messages:
    551
    Likes Received:
    783
    Location:
    EU-China
    Well, the Apple A9 doesn't say that.
    We will see if Samsung 14nm is really worse than TSMC 16FF+, or if AMD is just too far behind Nvidia in power-efficiency optimization (uarch and/or die layout). My guess is that TSMC 16FF+ and Samsung 14LPP are very close...
    The interesting question is whether Nvidia will use 14LPH (High Performance), which would give some nice improvement, or stay with 14LPP.

    Edit: BTW, I think Nvidia moving to Samsung is about being free of TSMC's limited wafer capacity (thanks to Apple...)
     
    #1954 xpea, Aug 12, 2016
    Last edited: Aug 12, 2016
    pharma and BRiT like this.
  14. Erinyes

    Regular

    Joined:
    Mar 25, 2010
    Messages:
    808
    Likes Received:
    276
    Yep, aka GP107 and GP108.
    From what I've heard, speeds are similar. Aside from wafer availability, another possibility is lower per-transistor cost.
     
    #1955 Erinyes, Aug 12, 2016
    Last edited: Aug 12, 2016
  15. iMacmatician

    Regular

    Joined:
    Jul 24, 2010
    Messages:
    797
    Likes Received:
    223
    From WCCFTech: "NVIDIA GeForce GTX 1060 3 GB Specifications Leaked – Aiming $199 US Price, GTX 1050 Expected in October."

    [IMG]

    Other slides at the link show that the GPU is GP106-300. The specs are a bit higher than I expected (I thought it would have 1024 SPs), and I'm surprised that the TDP is the same.

    As for the rumored "1050," is it more likely that it uses a GP107 or a GP106 given the rumored October release date, TweakTown's claim that "Chosun Biz said that Samsung will make the next-gen GPUs using its 14nm process before the end of the year, based on the Pascal architecture," and Ailuros's and Erinyes's posts above? If the GPUs are made in late 2016 then wouldn't they be released a few months later (so not October)?
     
    pharma and spworley like this.
  16. mczak

    Veteran

    Joined:
    Oct 24, 2002
    Messages:
    3,022
    Likes Received:
    122
    Since this is a GTX 1060 and not a GTX 1050 Ti or some such, it would be sort of surprising if it performed quite a bit worse. Ditching only one SM and keeping the same TDP should mean it performs at least similarly (an OC 3GB 1060 would still be able to beat a non-OC 6GB 1060).
    Honestly, I'm more surprised to see the same 8 GHz memory clock, if anything...
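    A quick sketch of why the cut should be mild, assuming the leaked configuration (9 of GP106's 10 SMs) and the reference 192-bit, 8 Gbps GDDR5 setup, both of which are the rumored/published figures rather than confirmed specs:

    ```python
    # GTX 1060 3 GB vs 6 GB: one fewer SM, identical memory subsystem.

    SHADERS_PER_SM = 128
    BUS_BITS = 192             # same 192-bit bus on both variants
    MEM_GBPS = 8               # same 8 Gbps GDDR5 on both variants

    full_shaders = 10 * SHADERS_PER_SM    # GTX 1060 6 GB: 1280
    cut_shaders = 9 * SHADERS_PER_SM      # GTX 1060 3 GB: 1152
    compute_deficit = 1 - cut_shaders / full_shaders
    bandwidth_gbs = BUS_BITS / 8 * MEM_GBPS

    print(f"shader deficit: {compute_deficit:.0%}")             # 10%
    print(f"bandwidth (both cards): {bandwidth_gbs:.0f} GB/s")  # 192 GB/s
    ```

    A 10% shader deficit at the same TDP and bandwidth is easily covered by a modest overclock, which is the point made above.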
     
  17. Erinyes

    Regular

    Joined:
    Mar 25, 2010
    Messages:
    808
    Likes Received:
    276
    They really shouldn't be naming it GTX 1060 if it is a cut-down part. Marketing at its worst. Regarding the TDP, the boost clock is similar at 1.7 GHz, so I'm not surprised that the TDP is the same.
    Well, "before the end of the year" could even mean tomorrow, really. Either way, the GPUs are actually already in production. We should see them by October.
     
    liolio, pharma and iMacmatician like this.
  18. snc

    snc
    Veteran

    Joined:
    Mar 6, 2013
    Messages:
    2,115
    Likes Received:
    1,745
    The A9 made by Samsung has lower scores in benchmarks and 10% worse battery life.
     
  19. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    Is it the same for the A9X in the iPad? And will it be the same for the A10? Because as far as I know, Samsung still has a good share of that deal (shared with TSMC).

    That said, we have seen everything and nothing: here TH sees a performance difference averaging around 0.5%, with differences as low as 0.1% and one figure at 3.7% (a strange case).

    But battery life is better on Samsung ... http://www.tomshardware.com/news/iphone-6s-a9-samsung-vs-tsmc,30306.html

    Honestly, for this one review I have not been able to find the same numbers in other reviews; results seem all over the place, because smartphones are nearly impossible to benchmark and test correctly.

    Benchmark suites for smartphones are a nightmare, let alone the battery-life tests.
     
    #1960 lanek, Aug 13, 2016
    Last edited: Aug 13, 2016