AMD: Navi Speculation, Rumours and Discussion [2019]

Discussion in 'Architecture and Products' started by Kaotik, Jan 2, 2019.

  1. anexanhume

    Veteran Regular

    Joined:
    Dec 5, 2011
    Messages:
    1,528
    Likes Received:
    698
  2. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,062
    Likes Received:
    1,021
    Damned if I know.
    That said, I did ask an AMD engineer about the costs of doing "derivative" chips, such as halved or doubled versions of a given design, and he confirmed that such projects were a lot less costly to do. Given that there is a sizeable low-end and laptop market, I would assume those chips will pay off. The larger the chip, the murkier the proposition, of course. Then again, the PC market is an upgraders' market, and AMD and nVidia need to keep those upgrades happening. That the tail end of a lithographic process has the largest chips makes sense on many levels.
     
  3. glow

    Joined:
    May 6, 2019
    Messages:
    8
    Likes Received:
    7
    Oh, right. I just realized a simple Turing vs. Volta comparison is made a bit harder by the fact that Volta basically only has a single chip, one with a completely different memory PHY and greater double-precision floating point acceleration. That said, Turing did gain Volta's "dual-issue" SM, Tensor cores, and the NVLink connector (not on the specific model chosen for the comparison, but the larger dies did get NVLink). It also doubled the L2 cache size, increased the TMU count (a bit), and reorganized the SM to contain half the number of cores (so some things, like total GPU L1 cache, more than doubled when comparing similar core counts; Pascal's core/SM count was already half of Maxwell's), and there is now an L0 (micro-op?) cache.

    So the cores themselves are already a bit harder to compare, especially since Pascal doesn't have separate INT32 cores, whereas Volta and Turing do (Volta also has separate FP64 cores).

    At any rate, I now venture Turing's RT cores probably don't take up all that much space, though Tensor cores probably do.

    Even TU116, the Turing chip with all of that stripped out, shows a ~25% increase in transistor count per "core."
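
    A rough sanity check of that ~25% figure, using the commonly cited public transistor and core counts for TU116 and GP106 (approximate figures, not an official breakdown):

```python
# Back-of-envelope check of the per-"core" transistor increase.
# Figures are the commonly cited public numbers (approximate):
#   GP106 (Pascal): ~4.4B transistors, 1280 CUDA cores
#   TU116 (Turing, no RT/Tensor cores): ~6.6B transistors, 1536 CUDA cores

gp106_transistors, gp106_cores = 4.4e9, 1280
tu116_transistors, tu116_cores = 6.6e9, 1536

gp106_per_core = gp106_transistors / gp106_cores   # ~3.44M transistors/core
tu116_per_core = tu116_transistors / tu116_cores   # ~4.30M transistors/core

increase = tu116_per_core / gp106_per_core - 1
print(f"per-core transistor increase: {increase:.0%}")  # ~25%
```

    Of course "transistors per CUDA core" lumps in everything else on the die (caches, memory PHY, display, video blocks), so this only bounds the cost of the fatter SM, it doesn't isolate it.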

    I'd also want to see a TU117 vs. Pascal-analog comparison, since the smallest Turing also removes the HEVC B-frame encoder in favor of the Volta HEVC encoder. I'd have to imagine that was done to save die space, not licensing costs. Though the licensing costs are reportedly a factor in Google, Amazon, Microsoft, Netflix, etc. choosing to fund their own competitor.

    ---

    Rats, I cannot edit yet. I think my previous post came off as being sarcastic in the first few lines. I'm sorry, that wasn't the intent.

    I also intended to link a Nvidia page comparing the GP106 to TU116, to support the second-to-last "paragraph."
    https://www.nvidia.com/en-us/geforc...ti-advanced-shaders-streaming-multiprocessor/

    ---

    Whoops, attempting to add a link to my apology & clarification nuked it.


     
    #843 glow, Jun 11, 2019
    Last edited by a moderator: Jun 11, 2019
  4. w0lfram

    Newcomer

    Joined:
    Aug 7, 2017
    Messages:
    157
    Likes Received:
    33
    PCIe 4.0 has to play a role here too, at some point. And the new Infinity Fabric 2.0 offers 2x the bandwidth.
     
  5. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,997
    Likes Received:
    4,570
    Indeed, RDNA 1 / Navi seems like an architecture originally developed for release in GPUs in late 2017 / early 2018, probably on one or more canceled nodes. E.g. during early 2016 they had planned it for a late 2017 release on GF 10nm, before knowing the fab would skip directly to 7nm; then until mid 2017 they had planned it for Q3/Q4 2018 on GlobalFoundries' 7nm DUV, before knowing the fab would drop out of the high-end nodes altogether.
    Mix that with the incredibly low R&D budget they had for GPUs until 2018, and they just couldn't keep up with the redesigns, so the GPU kept getting delayed over and over, forcing AMD to compete in the gaming segments using GCN GFX9 GPUs.

    The end result is a GPU that (finally) competes well in its power segment, because it's gaming-focused and uses a recent node, but isn't bundling any technology or standard that would be expected of a GPU released in Q3 2019, like HDMI 2.1, VirtualLink, variable rate shading, or hardware acceleration for DXR.

    The pricing is a bit of a let-down to me, though it seems to be based solely on nvidia's offerings. I can't see anything that says the 5700 XT couldn't be sold for $300, and it probably will be after 3 or more quarters, when the RDNA 2 higher-end card and nvidia's 7nm cards come out.
    Without any new hardware features, I wonder if people won't find the currently discounted Vega 10 cards to have a better price/performance ratio than the 5700 family, and if reviewers will point out the poor price/performance relative to their predecessors, like they did with the RTX series against Pascal.
    Every single AMD RTG graphics card release has been a monkey's paw wish (i.e. there's always some negative factor that takes center stage), and I'm thinking those prices might be their undoing, especially with the rumors of heavy price cuts on nvidia parts.


    I wonder what the originally planned "January 2018 Navi" looked like. There would have been no 7nm and no GDDR6, and GDDR5X seems to have been almost exclusive to nvidia. 384-bit GDDR5 and more CUs to compensate for lower clocks?


    What exactly in a high-resolution picture of a single-chip GPU makes you believe it's using a chiplet setup?
     
    mahtel, Cuthalu and Lightman like this.
  6. xEx

    xEx
    Regular Newcomer

    Joined:
    Feb 2, 2012
    Messages:
    939
    Likes Received:
    398
    Completely agree. I was looking for a GPU between 200 and 300 bucks, but it seems the mid-range is now 400-500 dollars. Fuck, if this continues I will just pay for something like Stadia in the future.

    With this pricing I'm going for a 1660 or 1660 Ti... especially now that they support FreeSync.
     
  7. AlBran

    AlBran Ferro-Fibrous
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    20,726
    Likes Received:
    5,819
    Location:
    ಠ_ಠ
    IF is more about a coherent interconnect between components (e.g. within an APU, or even just between core/uncore blocks). Chiplet/multi-die setups are just another use case for Infinity Fabric, across physical dies.

    https://www.overclock3d.net/news/gp...ega_utilises_their_new_infinity_fabric_tech/1

    https://en.wikichip.org/wiki/amd/infinity_fabric
     
    #847 AlBran, Jun 11, 2019
    Last edited: Jun 11, 2019
    Globalisateur likes this.
  8. manux

    Veteran Regular

    Joined:
    Sep 7, 2002
    Messages:
    1,566
    Likes Received:
    400
    Location:
    Earth
    On the positive side, GPUs age slowly nowadays. I have a 2+ year old 1080 Ti, and on the performance side of things it really is holding up very well. It currently takes probably around 5+ years to get double the performance at the same price point (fire/crypto-bust sales excluded).

    To me the smart money is in getting a higher-end GPU and upgrading less often. Also, a G-Sync/FreeSync monitor makes a giant difference to the gaming experience.
     
    DavidGraham likes this.
  9. xEx

    xEx
    Regular Newcomer

    Joined:
    Feb 2, 2012
    Messages:
    939
    Likes Received:
    398
    I disagree. For me the smart buy is in the mid-range, between 200 and 300. You spent how much, 800 bucks? I can upgrade my, let's say, $250 card 3 times and keep up with new tech like RT. But with the new mid-range at 400 bucks this is getting ridiculous. Now it's the complete opposite of before: on CPUs AMD is killing it, while on GPUs it's killing us.
     
  10. Globalisateur

    Globalisateur Globby
    Veteran Regular

    Joined:
    Nov 6, 2013
    Messages:
    2,949
    Likes Received:
    1,669
    Location:
    France
    It was Infinity Fabric. I think it's the first time it's been used for a monolithic APU without HBM RAM?

    OK, I have another question. Is the 10-CUs-per-shader-engine design mandatory for RDNA? Is it possible to use, for instance, 8 CUs (well, 4 dual compute units) per SE?
     
  11. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,997
    Likes Received:
    4,570

    I think a more pressing concern from IHVs pushing these prices on mid-range cards is a mass migration of PC gamers to consoles. These "neo mid-range" $500 cards, paired with $200-300 CPUs and $150 worth of RAM, aren't offering a lot more than the mid-gen consoles that are now selling for less than $400.

    With these prices, I feel like both IHVs are shooting themselves in the foot.

    And if nvidia came out admitting their RTX line didn't make them nearly as much money as they had hoped, AMD trying to pull the same price range because of a 5-10% performance advantage seems a bit stupid IMO.
    Unless they're producing Navi 10 in low volumes and aren't really interested in selling it to the masses.
     
  12. manux

    Veteran Regular

    Joined:
    Sep 7, 2002
    Messages:
    1,566
    Likes Received:
    400
    Location:
    Earth
    I paid a little under $700. Planning to keep it for ~5 years. As of today, I would guess the 1080 Ti is only bested by the 2080 Ti and trades blows with the 2080 and Radeon VII. Pretty good for an old piece of junk.

    I'm much more in the camp of buying high end and using it a long time to get value, rather than getting new mid-tier crap every 1.5 years. There was a time when updating often made sense, but that's no longer the case.
     
    #852 manux, Jun 11, 2019
    Last edited: Jun 11, 2019
    DavidGraham, pharma and Bludd like this.
  13. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,997
    Likes Received:
    4,570
    Purchasing top-end graphics cards has only ever been about the premium experience of getting the best of the best, and never because the price/performance per year made more sense than buying mid-range more often.

    These past 3 years are the sole historical exception, due to the crypto boom inflating everything and then IHVs trying to ride the inflated price wave after the crypto crash, to recoup the costs of competing with their own products that are being sold on the second-hand market.
     
  14. Love_In_Rio

    Veteran

    Joined:
    Apr 21, 2004
    Messages:
    1,452
    Likes Received:
    110
    So, in the end, we have our next-gen super-SIMD architecture? RDNA 2, the so-called next gen, is then this plus ray tracing? It makes sense to evolve the rasterization architecture first and add RT support afterwards.
     
    #854 Love_In_Rio, Jun 11, 2019
    Last edited: Jun 11, 2019
  15. xEx

    xEx
    Regular Newcomer

    Joined:
    Feb 2, 2012
    Messages:
    939
    Likes Received:
    398
    Also streaming services... IIRC you can pay for 6 years of Google Stadia with the money they ask for a Radeon VII (to play at 4K). And we all know it will evolve with new tech over time, so you are not limited by a fixed piece of hardware. Yes, we will have to see the latencies, but the price difference is absurd.
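
    The "6 years" claim roughly checks out, assuming the Radeon VII's $699 launch price and Stadia Pro's announced $9.99/month (pricing as announced in mid-2019; this ignores game purchase costs on both sides):

```python
# Back-of-envelope comparison: GPU purchase price vs. streaming subscription.
# Assumptions (hedged): Radeon VII launch MSRP of $699, Stadia Pro at $9.99/mo.

gpu_price = 699.0          # USD, Radeon VII launch MSRP
stadia_monthly = 9.99      # USD per month, Stadia Pro as announced

months = gpu_price / stadia_monthly
print(f"{months:.0f} months, ~{months / 12:.1f} years")  # ~70 months, ~5.8 years
```

    So closer to six years of subscription per card at those prices, latency questions aside.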
     
  16. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,496
    Likes Received:
    910
    The RX 5700 (non-XT) is a 36-CU, 180 W board.

    64/36 * 180 = 320 W.

    Replace the GDDR6 with HBM2/3, maybe reduce the clock speed a bit, and you've got a PCIe-compliant, 64-CU RDNA GPU.
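
    For clarity, that estimate is just linear scaling of board power with CU count. It's an upper-bound sketch: board power includes the GDDR6 and VRM losses, which wouldn't scale with CUs (and would shrink if swapped for HBM):

```python
# Naive linear power scaling from the RX 5700 (36 CUs, 180 W total board power)
# to a hypothetical 64-CU RDNA part, as in the post above.
# This deliberately ignores that memory and VRM power don't scale with CU count,
# so treat the result as an upper bound.

cus_5700, tbp_5700 = 36, 180   # CU count, total board power in watts
cus_big = 64                   # hypothetical larger part

tbp_big = tbp_5700 * cus_big / cus_5700
print(f"estimated board power: {tbp_big:.0f} W")  # 320 W
```

    Shaving clocks (power scales roughly with V²f) or moving to HBM would pull that figure back under the ~300 W practical ceiling for PCIe cards.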
     
    w0lfram likes this.
  17. Love_In_Rio

    Veteran

    Joined:
    Apr 21, 2004
    Messages:
    1,452
    Likes Received:
    110
    If both consoles use RDNA 2 with more CUs, they must have secured lower-voltage chips, or there is no way to fit the power envelope.
     
    #857 Love_In_Rio, Jun 11, 2019
    Last edited: Jun 11, 2019
  18. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,496
    Likes Received:
    910
    Has the number of CUs in consoles been communicated?
     
  19. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    1,943
    Likes Received:
    1,089
    And VRS.
    I don't think we know anything more about RDNA 2.

    I'm assuming HDMI 2.1.
     
    Cuthalu likes this.
  20. anexanhume

    Veteran Regular

    Joined:
    Dec 5, 2011
    Messages:
    1,528
    Likes Received:
    698
    That’s TBP. 150W is chip TDP.
     