AMD: Navi Speculation, Rumours and Discussion [2017-2018]

Discussion in 'Architecture and Products' started by Jawed, Mar 23, 2016.

Thread Status:
Not open for further replies.
  1. 3dcgi

    Veteran Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    2,435
    Likes Received:
    263
    I don't know where this Shanghai vs. Markham thing comes from. There's no such competition.
     
    Lightman likes this.
  2. yuri

    Newcomer

    Joined:
    Jun 2, 2010
    Messages:
    178
    Likes Received:
    147
AMD has shown that Vega 20 at ~1.2GHz would pull 50% of the power of Vega 10 at ~1.2GHz.

Matching Vega 64 +15% implies a frequency around 1.8GHz, therefore not a 150W part.
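A rough back-of-the-envelope sketch of that argument, assuming dynamic power scales with f·V² and voltage has to rise roughly with frequency near the top of the curve (so effectively ~f³); the baseline wattage and target clock below are illustrative assumptions, not AMD figures:

```python
# Back-of-the-envelope check of the claim above.
# All numbers are illustrative assumptions, not AMD specs.

def scaled_power(p_base, f_base, f_target, exponent=3.0):
    """Crude scaling model: P ~ f^3 once voltage must rise
    with frequency (P ~ f*V^2, with V roughly tracking f)."""
    return p_base * (f_target / f_base) ** exponent

# Assume Vega 10 draws ~220 W at ~1.2 GHz, and Vega 20 halves
# that at iso-clock per AMD's slide.
vega20_at_1p2 = 0.5 * 220                        # ~110 W
# Assume hitting Vega 64 +15% needs ~1.8 GHz.
vega20_at_1p8 = scaled_power(vega20_at_1p2, 1.2, 1.8)
print(round(vega20_at_1p8))                      # well above 150 W
```

Even with a halved iso-clock power draw, pushing the clock from 1.2 to 1.8GHz under these assumptions lands far past 150W, which is the post's point.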
     
  3. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    1,151
    Likes Received:
    571
    Location:
    France
Why do you compare Navi and Vega? I don't believe Vega@7nm is a good representation of what 7nm can do, because Vega was not designed with 7nm in mind from the start; Vega 20 is a "translation" to 7nm, not a native 7nm product.

And even if Navi is GCN, that doesn't mean it's a tweaked Vega. In my mind, Vega 10 is kind of broken anyway, with some features announced and never used, so I don't know, something is off with that chip; it reminds me of R600... I'm hoping they pull another RV770 with Navi. The GPU market needs competition, and we can't wait until 2020 or 2021 for Intel...
     
    Lodix, AstuteCobra and Lightman like this.
  4. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    If I heard the video right, the Navi12 rumor goes further by linking it to the desktop graphics Zen2 chiplet products.
    To get a picture of the range this apparently single chip encompasses:
A 15-CU 65W Ryzen with graphics and a 20-CU 95W Ryzen with graphics, with maybe 40-50 GB/s of DDR4 bandwidth shared with the CPU.

Then the discrete GPUs stated to use the same chip:
    A 75W GDDR6 GPU matching a 36-CU 256 GB/s RX 580.
    A 120W GDDR6 GPU matching a 56-CU 410 GB/s Vega 56.

That's a very wide swath in terms of silicon and bandwidth. Even if we're talking chiplets, that's a lot to ask of one design that has to cater to all of them without incurring the overhead and cost of the most expensive feature along each axis.
AMD has stated that multi-GPU is not a prospect yet, and yet the rumor has a chiplet plugging into very different systems. The CPUs were allowed to bring 1-8 fellow chiplets and a big IO die (I'm not sure the client one needs an IO die that big), whereas this rumor has the GPU on its own matching Vega 56 while still fitting in a slot taken up by one Zen2 chiplet. Navi is the "scalable" design, but I'm not sure about this.
     
    Lightman, pharma, iroboto and 2 others like this.
  5. yuri

    Newcomer

    Joined:
    Jun 2, 2010
    Messages:
    178
    Likes Received:
    147
TBH, AMD toyed with the idea of an "HPC APU" which would seemingly involve separate CPU and GPU dies. In that case it was 1-2 8-core Zen1 CPUs and one Vega 10/20 GPU with 2 HBM stacks.

    All this was way before the whole chiplet stuff.
     
  6. w0lfram

    Newcomer

    Joined:
    Aug 7, 2017
    Messages:
    156
    Likes Received:
    32
But the MI60 isn't a shrunk-down MI25. (Look at the specs.)

Given the same design, just reduced to 7nm:
    https://www.overclock3d.net/gfx/articles/2018/06/05235221134l.jpg
     
  7. Anarchist4000

    Veteran Regular

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
With Vega, IF (Infinity Fabric) took up a fair amount of space and didn't leave the chip. Navi could be offloading certain IF clients with a chiplet approach and keeping the GPU as the IO die, especially any clients that Ryzen may also contain: display engine, media encode/decode, front-end. Perhaps a two-socket Epyc-style GPU could be doable.
     
  8. Frenetic Pony

    Regular Newcomer

    Joined:
    Nov 12, 2011
    Messages:
    324
    Likes Received:
    84
Oh, I wasn't aware they'd quoted CU counts; I didn't see it written down.
But the binning range needed between a 36-CU and a 56-CU bin is ridiculous. Binning costs money; how much is AMD planning to spend on this?

Also, how is a 56-CU chip on current 7nm going to sell for just $250? Dropping 40% or more of the cost between generations of the same fundamental arch doesn't seem believable. At this point it might be best to ignore the rumors and wait for the official announcement.
     
    DavidGraham likes this.
  9. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
At least for the MI60 that was the initial subject of analysis, at that level of bandwidth the non-core area is dominated by the memory PHY and the associated controller/data fabric. The fabric was a mesh in Vega 10, and in Vega 20 it's some unspecified topology--one that happens to run in a ring around the whole GPU. The bottom of the MI60 drawing has two rather large xGMI blocks and possibly some fabric stops. That bandwidth may require a decent amount of area on both the GPU and any separate die, although not as much as the memory traffic does.
The top portion of the die appears to be where most of the ancillary logic might be, or some fraction of that area that's not data fabric. At least for a big GPU, it's a pretty minor area savings, though the lower bandwidth demands of that logic may allow for a reasonably compact and lower-power inter-die path.

    Where the dis-integration might happen if Navi 12 has a GPU and IO die is unclear. The upper tier has data transport needs much closer to MI60 than to a Ryzen CPU with graphics, so there's a tension between how compact and efficient the GPU can be if there's a large amount of off-die communication, and yet at the same time it's content to fit in an AM4 socket with 40GB/s memory and a chiplet footprint. Having two GPU chiplets in play could resolve some of that contradiction, however AMD has indicated this is not the current plan.
Polaris 10 as a comparison point is about 2/3 core GPU area, with the rest being GDDR PHY and other IO silicon. I think the core GPU area could shrink down to something approximating the chiplet area, but if the memory bus goes off-die some amount of area gets tacked back on, enough to cover that order-of-magnitude range in bandwidth. Tacking on an extra 30-40mm2 might keep the memory bandwidth on-die, although in a Ryzen product it could perhaps rely on xGMI for the dis-integrated blocks and the limited DDR4 bandwidth. That's larger than a CPU chiplet, but still compact.
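To make the area budget in that paragraph concrete, here is a minimal arithmetic sketch; the die size, 2/3 core fraction, and 14nm-to-7nm logic shrink factor are all rough assumptions for illustration, not die-shot measurements:

```python
# Illustrative area budget for the argument above; all numbers
# are rough assumptions, not measured figures.

polaris10_die = 232           # mm^2, Polaris 10 on 14nm (assumed)
core_fraction = 2 / 3         # ~2/3 of the die is shader core + caches
core_14nm = polaris10_die * core_fraction    # ~155 mm^2 of core
io_14nm = polaris10_die - core_14nm          # ~77 mm^2 of GDDR PHY / IO

logic_shrink = 0.5            # assumed 14nm -> 7nm logic scaling
core_7nm = core_14nm * logic_shrink          # ~77 mm^2, chiplet-ish
# PHY/analog barely shrinks, so keeping the memory bus on-die tacks
# most of that area straight back on (the post's extra 30-40 mm^2).
with_on_die_bus = core_7nm + 35
print(round(core_7nm), round(with_on_die_bus))
```

Under these assumptions the shrunk core alone lands near a Zen2-chiplet-sized footprint, while keeping a full memory bus on-die pushes it noticeably larger, which matches the tension described above.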

The AdoredTV video has a table of purported Ryzen products, including two G variants with either 15 or 20 CUs. The rumor goes further and states that they use the same Navi 12 as the Navi 12 discrete products. While the max CU count of Navi 12 isn't stated, the two Navi 12 discrete products have the RX 580 and Vega 56 listed as their performance equivalents--not necessarily CU-count equivalents. Either 20 or so CUs goes much further than expected, or there's a significant amount of silicon and memory bandwidth not in play in the Ryzen G products. It's not explained what would be done to span this broad range of power/cost/performance/bandwidth.
     
  10. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,057
    Likes Received:
    1,020
If we accept AdoredTV as a Bearer of Truth (*cough...*), the wide span could easily be explained by the usual sloppy/generous comparisons on, for instance, PR slides positioning products between generations. I wouldn't read too much into it.
Unless you want to point to it as an indication that the claims are made up, in which case I fully understand you.
     
    psolord likes this.
  11. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    Within the sphere of discrete cards and APUs, if evaluated separately, I can think of various ways to get the sort of scaling done--if you squint at the results at just the right angle. There are potential objections or weaknesses in each, but for example I can see a possible Navi implementation that might be balanced somewhere around Polaris in CUs and other resources, with games being played with GDDR6 speed grades and very low or very high clocks forcing performance into the 580 or Vega 56 ranges for bandwidth or compute performance, with the usual error bars and marketing.
    Similarly, there are potential objections to a 20 CU APU, with possible workarounds or compromises.

Combining the two regimes and the order-of-magnitude spread is awkward in how even the objections do not play well together, and for me they enter the realm of needing extraordinary proof, or more effort in explanation, if they are taken to be close to the truth.
    I'd like to be pleasantly surprised by some recombination of features, or some new tech that gives a significant architectural improvement.

One irony to the idea of a ~20 CU GPU scaling so far up and down is that, if true, it would highlight a continued shortcoming of AMD's client CPU graphics options. Intel has managed to have a GPU standard on its CPU silicon for pretty much forever. In this rumor, AMD now has a GPU so effective that it wouldn't need 20 CUs that can match Vega 56 to max out what the socket's bandwidth can provide. A right-sized GPU with these super-effective resources could be quite small and could make its way everywhere, and yet despite the hyped manufacturing revolution the rumor lists just 2 APU SKUs.


    That's one of my leading scenarios. I don't have enough information to state outright that it's wrong because of X, Y, or Z. However, even though I cannot definitively state any one objection is irrefutable, it doesn't seem like they can all be explained away with what is given, and many solutions for one will worsen the others.

    I'd like to be pleasantly surprised with some new technology or an explanation of new economic or manufacturing trends, however.

    The more negative interpretation that might be interesting is that the rumor is substantially true, and the answer to the objection "this might not be true because it may create significant problems along various axes" is "it is true and has problems along these axes". (Also known as the Bulldozer rejoinder.)
     
  12. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    16,062
    Likes Received:
    5,013
Yes, no matter what one might think of the GPU inside Intel's CPUs, Intel has to be commended for having a GPU in basically all of its consumer CPUs. Something you would think AMD should be capable of, considering that it is both a CPU and a GPU company.

    Not to mention then being able to use Adaptive Sync easily with an NV GPU if I wanted. :)

    Regards,
    SB
     
  13. sonen

    Newcomer

    Joined:
    Jul 13, 2012
    Messages:
    53
    Likes Received:
    33
That's what made the 8700K an easy choice for me over the Ryzen 1800/1800X.

    What do you mean? Render with NV GPU and output via Intel GPU with Adaptive Sync working?
     
  14. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,166
    Likes Received:
    1,836
    Location:
    Finland
No, render with an NV GPU and output via an AMD GPU, if AMD put a GPU in all their CPUs.
But also yes with the Intel option, if Intel implements adaptive sync on their current IGPs at some point.
     
    sonen likes this.
  15. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    6,980
    Likes Received:
    3,063
    Location:
    Pennsylvania
    Out of curiosity, what do you need the iGPU for at that level of CPU?
     
  16. sonen

    Newcomer

    Joined:
    Jul 13, 2012
    Messages:
    53
    Likes Received:
    33
For everything that's not GPU-heavy gaming: 2D gaming, summer-season gaming (under 100W total heat output), movies, web, OS rescue, system diagnostics, etc...
When I'm into heavy gaming, I switch to the R9 290, which again is undervolted (and underclocked if the FPS is hitting the refresh limit), keeping the total heat under 250W. I'll even alternate between 60Hz and 75Hz on my FreeSync monitor depending on the game.
I really don't like the unnecessary heat.

     
  17. BoMbY

    Newcomer

    Joined:
    Aug 31, 2017
    Messages:
    68
    Likes Received:
    31
    Or use the iGPU for your primary system (Linux) and use the dGPU for your gaming VM (passthrough).
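A minimal sketch of that setup (host desktop on the iGPU, dGPU reserved for a VM via VFIO). The PCI device IDs below are placeholders, not verified values; you would substitute the ones reported by your own system:

```shell
# Sketch of dGPU passthrough on a Linux host running on the iGPU.
# Device IDs are placeholders; find yours with:
#   lspci -nn | grep -i vga

# 1. Enable the IOMMU via kernel parameters (Intel shown; use
#    amd_iommu=on on AMD hosts), e.g. in the GRUB config:
#    GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt"

# 2. Bind the dGPU (and its HDMI audio function) to vfio-pci
#    instead of the normal driver, e.g. in /etc/modprobe.d/vfio.conf:
#    options vfio-pci ids=1002:67b1,1002:aac8

# 3. Verify the binding after a reboot:
lspci -nnk -d 1002:67b1
# "Kernel driver in use: vfio-pci" means the card is free for the VM.
```

The VM (e.g. via libvirt/QEMU) then gets the whole card as a PCI hostdev, which is the "gaming VM (passthrough)" arrangement described above.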
     
    Anarchist4000 likes this.
  18. w0lfram

    Newcomer

    Joined:
    Aug 7, 2017
    Messages:
    156
    Likes Received:
    32
I think you are doing that just to be complicated. GPUs scale down in power when they're not running games.

Secondly, AMD has CPUs with graphics (Ryzen). Given the level of power your gaming needs, you might not even need your 5-year-old R9.
     
  19. Theeoo

    Newcomer

    Joined:
    Nov 13, 2017
    Messages:
    132
    Likes Received:
    64
Maybe there should be an Nvidia Optimus, or an equivalent, for desktops.
     
  20. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,166
    Likes Received:
    1,836
    Location:
    Finland
Windows has built-in support to pick the GPU per application. I think the AMD and NVIDIA drivers have it too.
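For reference, the usual route on Windows 10 (1803+) is Settings > System > Display > Graphics settings; under the hood it stores a per-executable registry value. The sketch below shows that value; the game path is a made-up example and the exact behavior should be checked against your Windows build:

```shell
:: Per-application GPU preference on Windows (config sketch).
:: Normally set via Settings > System > Display > Graphics settings;
:: the stored form is a registry value keyed by the exe path.
:: "C:\Games\game.exe" below is an illustrative placeholder.
reg add "HKCU\Software\Microsoft\DirectX\UserGpuPreferences" ^
    /v "C:\Games\game.exe" /t REG_SZ /d "GpuPreference=2;"
:: GpuPreference=1 = power saving (iGPU), 2 = high performance (dGPU)
```

AMD and NVIDIA control panels expose similar per-application overrides in their own settings UIs.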
     
    BRiT likes this.