Nvidia Ampere Discussion [2020-05-14]

Discussion in 'Architecture and Products' started by Man from Atlantis, May 14, 2020.

  1. DegustatoR

    DegustatoR Veteran

  2. Scott_Arm

    Scott_Arm Legend

    Man from Atlantis likes this.
  3. Digidi

    Digidi Regular

    The main issue is the silicon. Nvidia had some very good samples for testing, which is why they thought they could push the card to this limit. Now in production they have silicon that clearly has issues with the voltage curve. That's why you see good silicon on bad capacitors running without any failure, while another card with good caps still crashes. It's all about the silicon lottery, and that Nvidia went to the limit of the silicon this time.
     
    LeStoffer likes this.
  4. pharma

    pharma Veteran

    It seems people who purchased the card on day one also had very good samples. They pushed the card with no problems (using the Quadro driver). :lol:
     
    Lightman and PSman1700 like this.
  5. Ext3h

    Ext3h Regular

    That means nothing. A crash to desktop usually means that the Windows kernel complained about a triggered watchdog and decided to kill the driver. Call it a lesson learned back in the times of Windows Vista, when a certain vendor's drivers were responsible for the majority of crashes blamed on Windows.

    The naked, proprietary driver under Linux continues to operate under conditions where Windows would long since have bailed out. You could even survive unstable PCIe links without noticing, and a significant number of other hardware failures resulting in corrupted results, long before you would get a "device lost" message.
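
    The watchdog referred to here is Windows' Timeout Detection and Recovery (TDR): if the GPU fails to respond within roughly two seconds, Windows resets the driver, which the user sees as a crash to desktop. For what it's worth, the timeout itself is configurable via a documented registry value (a sketch for debugging purposes only; the 10-second value is an arbitrary example, and changes like this are at your own risk):

    ```shell
    :: Extend the GPU watchdog timeout from the default ~2s to 10s (run as admin).
    :: TdrDelay controls how long Windows waits before deciding the GPU has hung.
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers" /v TdrDelay /t REG_DWORD /d 10 /f
    :: A reboot is required for the change to take effect.
    ```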
     
    Pete, Lightman, Malo and 2 others like this.
  6. Jawed

    Jawed Legend

  7. iroboto

    iroboto Daft Funk Legend Subscriber

    Serious note though: we are increasingly moving away from this because of how inefficient it is. See UE5; once that is mainstream, the FF hardware is going to go largely unused compared to how much it was used this generation. But if you're benchmarking older titles, this matters a great deal.

    It may be a PITA for Nvidia or even AMD to showcase a heavy compute GPU, unable to really distance itself from older GPUs on older benchmarks, but it seems to be forward-looking for graphics.
     
  8. Digidi

    Digidi Regular

  9. DegustatoR

    DegustatoR Veteran

  10. trinibwoy

    trinibwoy Meh Legend

    Some of the many passes required to render a frame are shader bound, but others aren't. E.g. shadow map and gbuffer creation do basically zero shading but need tons of fillrate and bandwidth. Also, like DegustatoR said, even shading-heavy passes aren't necessarily ALU heavy; shading requires a lot of bandwidth too.
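
    A quick back-of-the-envelope sketch of why a gbuffer pass is bandwidth rather than ALU bound (illustrative numbers; the target count and formats are assumptions about a typical deferred layout, not any specific engine):

    ```python
    # Rough gbuffer write bandwidth at 4K, 60 fps (illustrative assumptions).
    width, height = 3840, 2160
    render_targets = 4        # e.g. albedo, normals, material params, motion vectors
    bytes_per_target = 8      # assuming RGBA16F-class formats
    depth_bytes = 4           # 32-bit depth

    bytes_per_frame = width * height * (render_targets * bytes_per_target + depth_bytes)
    gbps_at_60fps = bytes_per_frame * 60 / 1e9
    print(f"{gbps_at_60fps:.1f} GB/s just writing the gbuffer")  # ~17.9 GB/s
    ```

    And that's the floor: overdraw, gbuffer reads in the lighting pass, and the shadow maps on top of it all multiply that figure, while almost no ALU work is involved.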
     
    Digidi likes this.
  11. Kaotik

    Kaotik Drunk Member Legend

    BRiT likes this.
  12. Benetanegia

    Benetanegia Regular

    Nah, you are overcounting; there's no way it's anything but GA104, and that's 392.5mm2 according to the appendix in the GA102 whitepaper. 17.4 billion transistors.
     
    pharma and PSman1700 like this.
  13. Kaotik

    Kaotik Drunk Member Legend

    Oh, it's mentioned there? Must have missed it. Pixel counting from an angled shot (even after correcting the angle somewhat) can easily be off.
     
  14. trinibwoy

    trinibwoy Meh Legend

    Hmmm the obvious answer is that Nvidia felt it had no choice but to push the envelope to fend off AMD. Clearly this decision was made a while ago given the new (and effective) cooler design that can handle a lot more watts.

    Given that a 68 SM 3080 with 10GB of GDDR6X pulls ~250W at 1850MHz, it'll be interesting to see how high the 220W 3070 boosts, considering its more svelte 46 SMs and 8GB of GDDR6.
     
    Lightman likes this.
  15. Putas

    Putas Regular

    We don't know what Nvidia knows. They might have sacrificed efficiency for higher margins, betting it all on the performance leap over the previous gen.
     
  16. Benetanegia

    Benetanegia Regular

    To me it looks more like a really bad silicon lottery and Nvidia playing it safe; we saw that from AMD in the past, though the range of voltages on Ampere seems way higher than what I recall for e.g. Fiji.

    Also, if Nvidia indeed pushed the envelope a bit too far, a motivation could simply be being able to claim the "biggest generational leap ever", which is true, but barely, with some other generations, including Pascal, close on its heels.

    The fact that it's 6 GPCs just like the 3080 is likely to be a very interesting data point.
     
  17. trinibwoy

    trinibwoy Meh Legend

    Yes it's possible that they simply decided that coolers have advanced sufficiently to support higher power targets and didn't want to leave performance on the table.

    Are we seeing large variances in stock voltages on retail cards?
     
  18. DegustatoR

    DegustatoR Veteran

  19. Benetanegia

    Benetanegia Regular

    Sorry, I didn't mean that. Some outlets are seeing a ~10% performance decrease when undervolting to 0.8V, like for example the hardwareluxx one above (still pretty good results), while others are seeing less than a 1% decrease, and some users are even reporting higher performance when undervolting mildly. Just the fact that you can undervolt by more than 20% and still keep it in the same performance range is crazy. Like I said, I don't remember such a massive range for such a small impact from any previous cards, including the ones that were notorious for it, like Fury.

    EDIT: And of course I don't believe Nvidia engineers are just idiots who set the voltage 10-20% higher than needed for the sake of it. We might not be seeing it, but there are likely many dies which take a much bigger hit when undervolting.
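
    The huge undervolting headroom is less surprising once you remember how dynamic power scales with voltage, roughly P ∝ C·V²·f. A sketch with the numbers from the thread (assuming the clock is held constant, which real boost behavior won't exactly honor, and ignoring static leakage):

    ```python
    # Dynamic power scales roughly with the square of voltage at a fixed clock:
    # P = C * V^2 * f, so only the V^2 ratio matters for the comparison.
    v_stock, v_undervolt = 1.0, 0.8   # illustrative voltages from the thread

    power_ratio = (v_undervolt / v_stock) ** 2
    print(f"dynamic power drops to {power_ratio:.0%} of stock")  # → 64%
    ```

    A ~36% cut in dynamic power for a 20% undervolt is why the performance hit can stay so small: the card simply stops bouncing off its power limit.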
     
  20. trinibwoy

    trinibwoy Meh Legend

    Gotcha. It certainly seems to be worth trying if you can actually get your hands on a card.
     