AMD: Navi Speculation, Rumours and Discussion [2019-2020]

Discussion in 'Architecture and Products' started by Kaotik, Jan 2, 2019.

Thread Status:
Not open for further replies.
  1. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,058
    Likes Received:
    3,116
    Location:
    New York
    That would be the most obvious motivation for such a significant increase in cache.

    This Nvidia paper claims a BVH size of 105MB compressed (300MB uncompressed) for a 12M triangle scene. I have no idea how many triangles there are in a typical scene these days though.
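    Back-of-the-envelope, those figures work out to single-digit bytes per triangle once compressed. A quick sketch of the arithmetic (plain Python, using only the numbers quoted above):

```python
# Bytes-per-triangle implied by the BVH figures quoted above:
# 105 MB compressed / 300 MB uncompressed for a 12M-triangle scene.
triangles = 12_000_000
compressed_mb, uncompressed_mb = 105, 300

bytes_per_tri_compressed = compressed_mb * 1024 * 1024 / triangles
bytes_per_tri_uncompressed = uncompressed_mb * 1024 * 1024 / triangles

print(f"~{bytes_per_tri_compressed:.1f} B/tri compressed")      # ~9.2
print(f"~{bytes_per_tri_uncompressed:.1f} B/tri uncompressed")  # ~26.2
```

    So roughly 9 bytes per triangle compressed; scale that by scene size to judge how much of a BVH a large on-die cache could realistically hold.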
     
    Jawed and Lightman like this.
  2. DDH

    DDH
    Newcomer

    Joined:
    Jun 9, 2016
    Messages:
    36
    Likes Received:
    39
    Maybe it has 128 CUs?
     
  3. T2098

    Newcomer

    Joined:
    Jun 15, 2020
    Messages:
    55
    Likes Received:
    115
    It's not the strangest idea - kind of like the GTX 970 and 3.5GB + 0.5GB. The vast majority of titles will fit happily into 12GB and have full and uniform memory bandwidth. In the rare situation where you overflow that 12GB, keeping the extra data in the (slower) remaining 4GB is still going to be a heck of a lot faster than swapping it back and forth over the PCIe bus to system memory.

    Depending on the cost difference between 1GB vs 2GB GDDR6 it might make sense. Costs you nothing from a BOM or PCB design perspective, and also means you can release a cost-reduced 12GB card without making any physical changes to the PCB layout. The toughest part would be effectively marketing it.
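    That kind of split falls straight out of mixed chip densities with per-channel interleaving. A rough sketch in Python (the interleaving model is simplified, and any chip mix fed to it here is hypothetical, not a leaked spec):

```python
# Sketch of a split-bandwidth memory layout built from mixed-density chips
# (hypothetical helper, not any vendor's documented scheme). Each chip sits
# on its own 32-bit channel; addresses interleave across every chip that
# still has capacity at that offset.
def asymmetric_regions(chip_sizes_gb, gbps_per_pin=16, bits_per_chip=32):
    per_chip_bw = gbps_per_pin * bits_per_chip / 8  # GB/s contributed per chip
    small = min(chip_sizes_gb)
    # Fast region: the first `small` GB of every chip, interleaved across all.
    fast = (small * len(chip_sizes_gb), per_chip_bw * len(chip_sizes_gb))
    # Slow region: the remainder lives only on the larger chips.
    big = [c for c in chip_sizes_gb if c > small]
    slow = (sum(c - small for c in big), per_chip_bw * len(big))
    return fast, slow

# Sanity check against a known real split: the XSX's 320-bit bus with six
# 2GB and four 1GB chips at 14 Gbps.
print(asymmetric_regions([2] * 6 + [1] * 4, gbps_per_pin=14))
# ((10, 560.0), (6, 336.0))  -> 10 GB @ 560 GB/s, 6 GB @ 336 GB/s
```

    With an assumed 16 Gbps 256-bit mix of four 2GB and four 1GB chips, the same model gives 8 GB at 512 GB/s plus a slower 4 GB at 256 GB/s.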
     
  4. Erinyes

    Regular

    Joined:
    Mar 25, 2010
    Messages:
    808
    Likes Received:
    276
    Of course, but usually rumors have at least some grain of truth to them. And I'm sure none of us expect Big Navi to be around a 3080 at half the die size (different processes I know). It should be somewhere in that ballpark by all accounts.
    Theoretically they could even do 16 GB with a 320-bit interface, just like the XSX. But there must be some downsides to it, or we'd have seen more of these asymmetric memory configurations over the years. Heck, Nvidia could have given the 3080 12 GB and totally avoided the "10GB is less than the 1080 Ti/2080 Ti's 11GB" criticism.
    I don't think most people would refer to a $700 card as midrange, for sure. Up until Pascal, we had a reasonable price-to-performance ratio from both parties, with better performance at each price point every generation. With Turing that stagnated, and this is where we saw $700 graphics cards being "normalised", as you say. AMD's GPUs at the time could not compete beyond the upper mid-range, which relegated the high end to Nvidia. We saw the "mid-range" moving from a $199 GTX 960 to a $249 GTX 1060 to a $349 GTX. One has to keep in mind that inflation and the price of silicon (on a $/mm² basis) have been going up rather significantly, so rising prices are not just pure profiteering.

    And FWIW, I think AMD underpriced RV770 (not that I'm complaining; I happily bought an HD 4850 to replace my 8800GT at the time), and could easily have priced it a little higher and made some more money. It's unlikely they will repeat this. As evidenced by Zen 2, and more so by Zen 3, AMD will price at a premium if they can.
    Unless Navi is on 7nm+, they are both going to be on the same 7nm process. Even otherwise, it's more about the total wafer allocation AMD has secured, and obviously they are competing against other players for it as well.

    Aside from Threadripper, there is also of course EPYC. With Milan, AMD have a very strong product at a time Intel has badly faltered. With Icelake server reportedly delayed and underwhelming, AMD has to make hay until Sapphire Rapids in late 2021/early 2022. To add, AMD is also experiencing record demand for their APUs at the moment. All of these will certainly drive their wafer allocation more towards CPUs/APUs.
    IIRC Nvidia once said that the R&D for the professional parts is paid for by the consumer parts. The professional lineup would not be able to sustain itself on a standalone basis, or at least it couldn't at that point (circa 2016-2017). Today Nvidia might perhaps be able to survive on professional alone, but gaming is still the majority of their revenue. It's also good to diversify your revenue sources, obviously. If AMD had decided to focus only on Opteron back in the Athlon 64 days, they'd likely have died out by mid 2015, without a consumer line to keep them going as their server market share plummeted. Either way, the current supply situation is likely short term, exacerbated by the console ramp for the launches. It should ease by next quarter with the capacity vacated by Huawei and Apple.
     
    #3704 Erinyes, Oct 11, 2020
    Last edited: Oct 11, 2020
    PSman1700, pjbliverpool, yuri and 2 others like this.
  5. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    You could even use the slower part of the memory as a streaming buffer, if you have enough copy engines and they work really asynchronously.

    My guess would be drivers.
     
    Lightman likes this.
  6. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,465
    Location:
    Finland
    Not the same as the 970. The 970 was a mess where you could access either the 3.5GB or the 0.5GB partition, but not both at the same time, or something along those lines. NVIDIA has used mixed 1GB and 2GB chips on some of their lower-end models though, and there's no reason AMD couldn't, but it's really unlikely for high-end IMO
     
  7. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,714
    Likes Received:
    2,135
    Location:
    London
    160?


    As I posted earlier:
     
    Lightman likes this.
  8. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
    Don't we know for sure from drivers that it's 80?
     
  9. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    20,511
    Likes Received:
    24,411
    Was that 80 CUs or 80 DCUs... ;)
     
    w0lfram likes this.
  10. neckthrough

    Newcomer

    Joined:
    Mar 28, 2019
    Messages:
    138
    Likes Received:
    388
    How much memory bandwidth would you need to feed 160 CUs? If you believe Navi 10 is a reasonably balanced design, then 4x the 5700 XT's 448 GB/s is about 1.8 TB/s.
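    The scaling arithmetic, for reference (this assumes bandwidth demand scales linearly with CU count, which is the premise here; the Navi 10 figures are the 5700 XT's):

```python
# Naive linear scaling of Navi 10's bandwidth to a hypothetical 160-CU part.
navi10_cus = 40
navi10_bw_gbs = 448  # 5700 XT: 256-bit GDDR6 @ 14 Gbps

scaled = navi10_bw_gbs * (160 / navi10_cus)
print(f"~{scaled / 1000:.2f} TB/s")  # ~1.79 TB/s
```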
     
  11. Megadrive1988

    Veteran

    Joined:
    May 30, 2002
    Messages:
    4,723
    Likes Received:
    242
    I'm thinking with the newest HBM2E, a 128 or even 160 CU GPU should have enough bandwidth.

    It probably wouldn't even need 1.8 TB/sec; maybe 384-bit GDDR6/6X would suffice, given the large cache.
     
  12. NightAntilli

    Newcomer

    Joined:
    Oct 8, 2015
    Messages:
    104
    Likes Received:
    131
    According to RedGamingTech, the numbers we saw for Big Navi in the Zen 3 keynote were from either the 64 CU or the 72 CU part, i.e. not the full die.
     
  13. Rootax

    Veteran

    Joined:
    Jan 2, 2006
    Messages:
    2,401
    Likes Received:
    1,845
    Location:
    France
    If it was 72 CUs, I don't believe an 80 CU chip will be a lot faster (without pushing the frequency)... But if it was 64 CUs, and an 80 CU part is planned too, this could be fun.
     
    #3713 Rootax, Oct 14, 2020
    Last edited: Oct 14, 2020
    Lightman and Krteq like this.
  14. Krteq

    Newcomer

    Joined:
    May 5, 2020
    Messages:
    149
    Likes Received:
    263
    Something seems to be "fishy" here

     
    Lightman likes this.
  15. Krteq

    Newcomer

    Joined:
    May 5, 2020
    Messages:
    149
    Likes Received:
    263
    Aaand another ROP rumor, maybe pointing to the cache subsystem redesign rumored previously
     
  16. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    20,511
    Likes Received:
    24,411
    Are we to trust cats on the internet now?
     
  17. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    22,146
    Likes Received:
    8,533
    Location:
    ಠ_ಠ
    The Xbox Hotchips presentation mentioned something about Screen Tiled Colour/Depth units.
     
    Lightman likes this.
  18. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,465
    Location:
    Finland
    Seems reasonable; didn't Google's brain simulator determine that the only worthwhile things on the internet are human faces and cats?
     
    Cat Merc, Erinyes, BRiT and 1 other person like this.
  19. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    That's consistent with what ROPs have always done for AMD/ATI.
     
    TheAlSpark, BRiT and pharma like this.
  20. dumbo11

    Regular

    Joined:
    Apr 21, 2010
    Messages:
    440
    Likes Received:
    7
    I suspect the XSX only actually needs that bandwidth for XB1X BC.
    - the XB1X had 326GB/s of bandwidth. (vs a paltry 218GB/s on the PS4 pro)
    - a focus of the console (or PR focus anyway) is 'better BC' - doubling title performance to 60/120fps etc.
    - if RDNA2 is using cache/compression to make up for bandwidth, then that probably isn't going to provide much of a reliable boost in BC mode.
     
