AMD: Navi Speculation, Rumours and Discussion [2019]

Discussion in 'Architecture and Products' started by Kaotik, Jan 2, 2019.

  1. anexanhume

    Veteran Regular

    Joined:
    Dec 5, 2011
    Messages:
    1,705
    Likes Received:
    956
    Apple reportedly balked at 7nm+ pricing. And with 6nm offering a path to EUV for 7nm, I don’t see why consoles would want 7nm+.
     
  2. dobwal

    Legend Veteran

    Joined:
    Oct 26, 2005
    Messages:
    5,153
    Likes Received:
    1,168
    6nm doesn’t offer better density or power efficiency than 7nm+. It’s an upgrade path for 7nm products that’s cheaper than migrating to 5nm or 7nm+.
     
  3. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,106
    Likes Received:
    1,071
Let's add some more information.
As far as I can see, the source of this is probably this article at Motley Fool, which in turn quoted Bluefin Research Partners, who wrote this:

Note that there is nothing about actual manufacturing choices in that quote, only a claim that TSMC had wanted a price premium for 7nm+ wafers that Apple found a bit much. Ashraf Eassa (at Motley Fool) then went through options for Apple - mainly staying at their current process or negotiating the pricing.
Can anyone bring another source to the table that actually claims the manufacturing of Apple's next SoC will not use EUV for some layers?
     
  4. DmitryKo

    Regular

    Joined:
    Feb 26, 2002
    Messages:
    714
    Likes Received:
    616
    Location:
    55°38′33″ N, 37°28′37″ E
For all we know, the architecture will be 'Navi' and it will include raytracing. I'm assuming hardware BVH tree traversal, as in the DirectX Raytracing pipeline. That would align the release of the RDNA2 desktop GPU part with the console APU part(s).

    Console APUs have to use the most current process available in early 2020 - AMD needs to build the inventory for the November launch, so they can't wait for the '7nm+' process to mature.

    I take it as 'Navi plus hardware raytracing based on BVH tree traversal'.

    I would be perfectly OK with vague references to the architecture used. I assume they have a selection of IP building blocks to suit a specific application.

Looking at the changes from GCN to RDNA, the improvements are mainly in task scheduling, memory caches, and fixed-function blocks for color compression and geometry culling. The compute front-end - the register model and the instruction set of the GCN compute units - did not change much; it's really a hybrid of GCN and 'next gen'.

Hardware raytracing puts a lot of stress on memory bandwidth, so we will probably see further improvements to caches, memory, and other fixed-function blocks. This would warrant the 'RDNA2' / 'next-gen' designation, even if the compute blocks don't change much.
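A toy sketch of why that is (my own illustration, nothing AMD-specific): BVH traversal is pointer-chasing, with one node fetch per traversal step, and incoherent rays make those fetches scattered rather than cache-friendly. The node layout below is made up for illustration.

```python
# Minimal, illustrative BVH traversal: each step fetches a node from
# memory, so deep trees with incoherent rays turn into scattered,
# latency-bound memory traffic.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Node:
    lo: Tuple[float, float, float]      # AABB min corner
    hi: Tuple[float, float, float]      # AABB max corner
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    leaf_id: Optional[int] = None       # set on leaves only

def hit_aabb(origin, inv_dir, lo, hi) -> bool:
    """Slab test: does the ray intersect the axis-aligned box?"""
    tmin, tmax = 0.0, float("inf")
    for o, inv, a, b in zip(origin, inv_dir, lo, hi):
        t1, t2 = (a - o) * inv, (b - o) * inv
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def traverse(root, origin, direction):
    """Iterative traversal; returns (hit leaf ids, nodes fetched)."""
    inv_dir = tuple(1.0 / d if d != 0 else float("inf") for d in direction)
    stack, hits, fetched = [root], [], 0
    while stack:
        node = stack.pop()
        fetched += 1                    # one memory transaction per node
        if not hit_aabb(origin, inv_dir, node.lo, node.hi):
            continue
        if node.leaf_id is not None:
            hits.append(node.leaf_id)
        else:
            stack += [node.left, node.right]
    return hits, fetched
```

Every `fetched` increment is a dependent memory access that can't start until the parent node has arrived, which is why caches and bandwidth matter so much here.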


    IMHO there was too much emphasis on 'GCN' in AMD marketing. It's cool to know every little detail of the architecture down to instruction mnemonics, but the focus on compute units downplays other significant changes in the architecture. They need to start marketing specific GPU generations, even when the changes between them are incremental, so we don't go like 'Oh, they made yet another GCN part... meh.'
     
    #1004 DmitryKo, Jun 19, 2019
    Last edited: Jun 19, 2019
  5. anexanhume

    Veteran Regular

    Joined:
    Dec 5, 2011
    Messages:
    1,705
    Likes Received:
    956
    No, but it’s an optical shrink compatible with 7nm. That’s what makes it attractive given the 7 to 7+ gains are meager.

This is the path the RX 590 took with 12nm.

Ming-Chi Kuo has not stated which process they're on. He's the only source I'd trust unfailingly.
     
    #1005 anexanhume, Jun 19, 2019
    Last edited: Jun 19, 2019
  6. del42sa

    Newcomer

    Joined:
    Jun 29, 2017
    Messages:
    173
    Likes Received:
    93
that's why they call it RDNA :wink4:
     
  7. w0lfram

    Newcomer

    Joined:
    Aug 7, 2017
    Messages:
    189
    Likes Received:
    35
RDNA is scalable. There is nothing stopping AMD from merging 4 units instead of 2, changing the ratios of compute vs. rasterization, tying in more special-function units, etc.

Imagine those Navi ring buses extending out to more "cores", making RDNA chips as big as one would want. Something like 80 CUs would not use double the space and would be knocking on the 2080 Ti's door (in games).
     
  8. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,060
    Likes Received:
    3,063
  9. anexanhume

    Veteran Regular

    Joined:
    Dec 5, 2011
    Messages:
    1,705
    Likes Received:
    956
Pretty sure an 80 CU RDNA part would beat a 2080 Ti by a clear margin, barring scaling issues (bottlenecks, bandwidth).
     
  10. dobwal

    Legend Veteran

    Joined:
    Oct 26, 2005
    Messages:
    5,153
    Likes Received:
    1,168
Yes. 6nm makes sense when you have a 7nm product and you don't want to make the investment of moving to 7nm+ or 5nm.

Moving from 7nm to 6nm is cheaper than moving from 7nm to 7nm+, but that doesn't mean 7nm followed by 6nm is cheaper than 7nm+ alone.

Apple may have scoffed at 7nm+ pricing, but Apple doesn't want 7nm+ in 2020. It wants it now, which comes at a premium if you want one of the first products on a leading-edge node.
     
    #1010 dobwal, Jun 19, 2019
    Last edited: Jun 19, 2019
  11. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,508
    Likes Received:
    928
    It might also melt a hole into the motherboard, however.
     
  12. Bondrewd

    Regular Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    572
    Likes Received:
    256
    Clock it low enough and you're fine.
     
    anexanhume likes this.
  13. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,709
    Likes Received:
    122
Then it wouldn't beat the 2080 Ti. It is a simple matter of power efficiency and maximum power consumption. Pick any max power target: the design with greater perf/W will always provide greater performance at equal consumption.
     
    Cuthalu, milk, xpea and 2 others like this.
  14. anexanhume

    Veteran Regular

    Joined:
    Dec 5, 2011
    Messages:
    1,705
    Likes Received:
    956
Perf/Watt changes with clocks. Performance scales roughly linearly with clock, while dynamic power scales with frequency times voltage squared - somewhere between the square and the cube of the clock, since voltage rises with frequency.
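Back-of-the-envelope sketch of that claim: with P ∝ f·V² and voltage falling as clocks drop, perf/W improves at lower operating points. The voltage/frequency pairs below are made up for illustration, not measured from any real part.

```python
# Dynamic power model: P ~ f * V^2, all values relative to a baseline.
def rel_power(f_rel: float, v_rel: float) -> float:
    """Dynamic power relative to baseline: P is proportional to f * V^2."""
    return f_rel * v_rel ** 2

# Hypothetical operating points: (relative clock, relative voltage).
points = [(1.00, 1.00), (0.90, 0.90), (0.80, 0.82)]

for f, v in points:
    p = rel_power(f, v)
    # Performance is assumed to scale linearly with clock.
    print(f"clock {f:.2f}x  power {p:.2f}x  perf/W {f / p:.2f}x")
```

Dropping clocks 10 % in this toy model cuts power to about 73 % of baseline, which is the mechanism behind "clock it low enough and you're fine" above.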
     
    w0lfram and no-X like this.
  15. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,326
    Likes Received:
    287
Radeon Fury Nano offered 86-89 % of Fury X performance (ComputerBase, 4k-1440p) and consumed 45-75 % of the X's power (depending on load type; TechPowerUp, card only). If Navi behaves in a similar manner, losing 10-15 % of its performance should lower the power consumption of 40 CUs + 64 ROPs from 225 to 150 watts. A hypothetical product with 80 CUs + 128 ROPs should then offer 170-180 % of the 5700 XT's performance. Using HBM, which would reduce die size and power consumption a bit, maybe a few % more. The GeForce RTX 2080 Ti has 165 % of the performance of the GeForce RTX 2070 (4k, ComputerBase), so a 300W Navi could beat it by 3-12 % (if the 5700 XT and RTX 2070 perform identically). For a high-end product with high margins, AMD could also use somewhat more aggressive binning and pick low-leakage chips that operate well at lower voltage.

Anyway, such a product released in summer 2020 would be quite late; Nvidia's 7nm generation will likely offer this level of performance at a sub-200W TDP.
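The estimate above chains together as quick arithmetic. All inputs are the post's own figures; the 0.875 downclock factor and near-linear CU scaling are its assumptions, not measurements.

```python
# The post's estimate as a chain of relative-performance factors.
perf_5700xt = 1.00        # baseline: RX 5700 XT (40 CU, 225 W)
downclock_perf = 0.875    # keep ~85-90 % of performance at ~150 W
doubled_units = 2.0       # 80 CU + 128 ROPs, assuming near-linear scaling

big_navi_perf = perf_5700xt * downclock_perf * doubled_units  # ~1.75x
rtx_2080ti_perf = 1.65    # 165 % of RTX 2070, taken as 165 % of 5700 XT

print(f"hypothetical 300W Navi: {big_navi_perf:.2f}x")
print(f"RTX 2080 Ti:            {rtx_2080ti_perf:.2f}x")
print(f"margin:                 {big_navi_perf / rtx_2080ti_perf - 1:+.1%}")
```

With the mid-range 0.875 factor this lands at roughly +6 %, inside the 3-12 % band quoted in the post; the band comes from varying the downclock factor between 0.85 and 0.90.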
     
    Alexko likes this.
  16. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,709
    Likes Received:
    122
    Doesn't matter. Perf/w is Perf/w. There is no getting around what I wrote; the design with the best perf/w will provide the most performance at a given power budget.

Yeah, I am aware. I can add and subtract, sometimes even multiply and divide when I get real fancy.
Also already aware of that... perhaps there's something I don't know, but it wouldn't matter anyway.
     
    Cuthalu likes this.
  17. anexanhume

    Veteran Regular

    Joined:
    Dec 5, 2011
    Messages:
    1,705
    Likes Received:
    956
And the 2080 Ti is a fixed entity. It has one perf/Watt point: the clock and voltage profile already chosen by Nvidia.
     
  18. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,709
    Likes Received:
    122
    And that doesn't matter.

I mean, if one wants to operate in fantasy land... by all means, but you can do that for any vendor on any process ;).
     
    Cuthalu and A1xLLcqAgt0qc2RyMz0y like this.
  19. yuri

    Newcomer

    Joined:
    Jun 2, 2010
    Messages:
    195
    Likes Received:
    170
Just a note: scaling of RDNA gaming performance past 40 CUs to 64 or even 80 CUs remains to be seen.
     
    DavidGraham likes this.
  20. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,060
    Likes Received:
    3,063
I HIGHLY contest these numbers (especially the 45 % part); they are simply NOT LOGICAL.
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.