AMD: Navi Speculation, Rumours and Discussion [2019]

Discussion in 'Architecture and Products' started by Kaotik, Jan 2, 2019.

  1. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,042
    Likes Received:
    441
GDDR6 inits were added to their usual BIOS <> driver interface for N21.
     
  2. CarstenS

    Legend Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,165
    Likes Received:
    2,685
    Location:
    Germany
    12, 24 or 48 GByte.

It would indeed be very sad if AMD had to pick the xx70 as competition.

    If I were AMD, I would write some really weird shit in there, just for the LULz.
     
  3. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    1,818
    Likes Received:
    749
    Location:
    msk.ru/spb.ru
    Why not throw 96 and 192 GB there as well while we're at 48?

This is getting a bit ridiculous. 10-12 GB may be the lower acceptable boundary for 4K+RT, but anything higher than 16 won't be used in gaming till PS6 or so.
     
    pharma and Lightman like this.
  4. CarstenS

    Legend Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,165
    Likes Received:
    2,685
    Location:
    Germany
Because the question was what kind of limit a 384-bit bus imposes. And 48 GByte is what's possible and has already been done with today's tech, not some fancy yet-to-be-produced memory dies.
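The capacities a GDDR6 bus allows follow from simple arithmetic. As a rough sketch (assuming the standard 8Gb/16Gb die densities and optional clamshell mode; the function name is my own):

```python
# Each GDDR6 IC has a 32-bit interface, so a bus takes (width / 32) ICs;
# clamshell mode doubles the IC count on the same bus.
def gddr6_capacities_gb(bus_width_bits, densities_gbit=(8, 16), clamshell=(1, 2)):
    ics = bus_width_bits // 32
    return sorted({ics * d // 8 * c for d in densities_gbit for c in clamshell})

print(gddr6_capacities_gb(384))  # -> [12, 24, 48]
```

Which is exactly where the 12/24/48 GByte figures come from.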
     
  5. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,042
    Likes Received:
    441
    That's the point of silly VRAM configs.
     
  6. pTmdfx

    Regular Newcomer

    Joined:
    May 27, 2014
    Messages:
    341
    Likes Received:
    283
Why 384-bit? Why not throw 512-bit into the party, since it lines up with the rumours of 8GB/16GB configurations and of having 16 memory channels (the “HBM confirmed” patch)? :p
     
  7. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,042
    Likes Received:
    441
Because.
Because it's not possible with G6; also, just stack 3 HBMs side by side if you're this desperate (and AMD's squarely mobile-first uArch obviously isn't).
That's N22.
(also, how the fuck do I make an 8-gig setup on 512b with G6? No 4Gb G6 IC exists.)
     
  8. DDH

    DDH
    Newcomer

    Joined:
    Jun 9, 2016
    Messages:
    36
    Likes Received:
    38
How is AMD going to feed a supposedly 80-CU monster with a 384-bit bus? What speed of GDDR6 do you imagine them pairing with that?
     
  9. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,042
    Likes Received:
    441
    Miracles and magic.
    And cache subsystem improvements.

    They gotta push APU and lower end GPU performance, aka segments where bandwidth won't ever rain from the skies.
    14/16/17Gbps depending on segment (or AIC vendor in question).
     
  10. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    9,350
    Likes Received:
    3,340
    Location:
    Finland
I'm pretty sure there's no 4 Gbit GDDR6, so 512-bit would mean 16GB at minimum.
     
  11. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,042
    Likes Received:
    441
At this point I feel like we need 12Gb GDDR ICs for the oddball but needed 6/9/12/15/18GB configs.
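For illustration, a hypothetical 12Gb (1.5 GB) IC does land exactly on those capacities across the common bus widths (assuming one IC per 32-bit channel, no clamshell; this is my own back-of-the-envelope, not a shipping part):

```python
# Capacity with one GDDR6 IC per 32-bit channel, given an IC density in Gbit.
def capacity_gb(bus_width_bits, density_gbit=12):
    return bus_width_bits // 32 * density_gbit / 8

for bus in (128, 192, 256, 320, 384):
    print(bus, capacity_gb(bus))  # -> 6.0, 9.0, 12.0, 15.0, 18.0
```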
     
  12. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,112
    Likes Received:
    1,202
    Location:
    London
RTX 2080 has the same bandwidth as RX 5700 XT, yet is 10-30% faster (excluding 1080p), usually well over 20% faster, e.g. in Doom Eternal:

    https://www.techspot.com/article/1999-doom-eternal-benchmarks/

    In my opinion that tells us that RX 5700 XT was severely unbalanced. If nothing else these new Navi GPUs should fix that ridiculous waste of bandwidth.
     
    Lightman and BRiT like this.
  13. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,042
    Likes Received:
    441
    They had to hit 8 gigs VRAM and 128b isn't really an option.
    No, we really-really need 12Gb ICs.
     
  14. Frenetic Pony

    Regular Newcomer

    Joined:
    Nov 12, 2011
    Messages:
    572
    Likes Received:
    266
It had room for improvement, and there's been some sort of improvement as per the Series X SoC talk; specifically, one SE to one 64-bit bus no longer need be the case, and more instructions can be issued per clock. But that's it; they didn't really go into anything else informative about the GPU. The presenter even went out of their way to avoid talking about TDP during the Q&A, like they'd signed an NDA for it.

AMD is playing this real close to the vest.
     
    Lightman likes this.
  15. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,042
    Likes Received:
    441
Yeah, and the actual SoC overview was much shorter than the comparable Renoir session, i.e. neither MS nor AMD really wanted to talk details.
     
    Lightman likes this.
  16. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,112
    Likes Received:
    1,202
    Location:
    London
The 3070 appears to have the same bandwidth as the 2080 and RX 5700 XT, and it performs like a 2080 Ti. So Navi now has to catch up "2x": it has to get 50%+ more efficient with its bandwidth.

    That seems extremely unlikely to me.
     
    PSman1700 likes this.
  17. CarstenS

    Legend Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,165
    Likes Received:
    2,685
    Location:
    Germany
    3070's got an ace up its sleeve.
     
  18. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,042
    Likes Received:
    441
Everything about AMD's new GPU IP is in the extremely-unlikely tier, yet it's real.
To be fair, not as extremely unlikely as AMD sticking the boot into mobile was, yet that also happened.
     
  19. Erinyes

    Regular

    Joined:
    Mar 25, 2010
    Messages:
    717
    Likes Received:
    170
Except respins are typically required to fix bugs, not for clock-speed reasons (a notable exception being the ATI R520 GPU, though that was also caused by a bug). Clock speed is more a function of the design, process and physical implementation, and I highly doubt the PS5 APU has a significantly different one than the XSX. MS chose a fixed clock and Sony seemingly went higher up the curve and with some boost. Why the disbelief about 7nm clockspeeds? Look at Renoir desktop clocks.
Has anyone ever tried downclocking the memory of either the 2070/RX 5700 and seeing how much the performance drops? I'd wager it's not linear. Conversely, memory overclocking never gets you a linear performance increase. While Big Navi may be a bit short on bandwidth with a 384-bit GDDR6 memory bus, I don't expect that to have a significant impact on its performance.
     
  20. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,042
    Likes Received:
    441
The 5600 XT is the living example.
6% less perf for 25% of the memory b/w chopped off.
It literally made the 5700 obsolete (unless you need the extra 2GB of VRAM).
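Taking those two figures at face value, a rough power-law fit (my own back-of-the-envelope, not a measured curve) shows how far below linear the scaling is:

```python
import math

# If perf ~ bandwidth**k, the 5600 XT numbers quoted above imply:
perf_ratio = 0.94  # ~6% less performance
bw_ratio = 0.75    # 25% less memory bandwidth
k = math.log(perf_ratio) / math.log(bw_ratio)
print(round(k, 2))  # -> 0.22, well below the k = 1 of a fully bandwidth-bound part
```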
     
