AMD: Navi Speculation, Rumours and Discussion [2019]

Discussion in 'Architecture and Products' started by Kaotik, Jan 2, 2019.

  1. Benetanegia

    Regular Newcomer

    Joined:
    Sep 4, 2015
    Messages:
    343
    Likes Received:
    308
    Not quite true. The chips made at Samsung were the smallest ones, which are typically the less dense ones (most likely due to memory I/O), but in Pascal they were denser than the chip immediately above them:

    GP108 - 24.3 MTrans/mm^2
    GP107 - 25 MTrans/mm^2
    GP106 - 22 MTrans/mm^2
    GP104 - 22.9 MTrans/mm^2
    GP102 - 25.4 MTrans/mm^2
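
    Those figures fall straight out of transistor count over die area. A quick sanity check, using the approximate transistor counts and die sizes commonly cited in public spec sheets (the exact values here are my assumption, rounded):

    ```python
    # Approximate public figures per chip: (transistors in billions, die area in mm^2)
    pascal_chips = {
        "GP108": (1.8, 74),
        "GP107": (3.3, 132),
        "GP106": (4.4, 200),
        "GP104": (7.2, 314),
        "GP102": (12.0, 471),
    }

    for name, (btrans, area_mm2) in pascal_chips.items():
        density = btrans * 1000 / area_mm2  # million transistors per mm^2
        print(f"{name}: {density:.1f} MTrans/mm^2")
    ```

    With those inputs the output reproduces the list above to within rounding.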

    Yeah, and with both, AMD increased clocks a lot, likely requiring optimizations to be made. Nvidia did the same with Pascal, and the contention here is that they wouldn't need to do it for Ampere.
     
  2. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,359
    Likes Received:
    3,732
    Even if they are not taped out yet: Turing wasn't taped out until April 2018, according to our own @Erinyes, yet it launched in August of the same year (a span of just 4 months). So I don't know why you'd think NVIDIA will delay the next gen until RDNA3 is released; that's just ridiculous.

     
  3. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    1,664
    Likes Received:
    476
    Location:
    msk.ru/spb.ru
    I'm saying that 12FFN is essentially 16FF+ in everything but name. The difference in transistor density between Pascal and Turing is non-existent.
     
  4. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,043
    Likes Received:
    440
    Poking at people is fun.
    Twice as fun when they start saying stuff.
    Oh it sure was.
     
  5. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,359
    Likes Received:
    3,732
    So no info whatsoever? Figures...
     
    Cuthalu and A1xLLcqAgt0qc2RyMz0y like this.
  6. Frenetic Pony

    Regular Newcomer

    Joined:
    Nov 12, 2011
    Messages:
    550
    Likes Received:
    248
  7. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,043
    Likes Received:
    440
    Cut that back to 128 CUs, add some REAL GOOD SHIT and you've got yourself an Arcturus.
    Renoir is only a little flex of their newfound circuit-design prowess.
    More to come!
     
    Leovinus and Tarkin1977 like this.
  8. del42sa

    Newcomer

    Joined:
    Jun 29, 2017
    Messages:
    184
    Likes Received:
    107
    Sure, they will ditch RDNA and RDNA2 as well and switch back to the Vega architecture with "secret sauce" :-) NO!
     
    Cuthalu likes this.
  9. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    1,629
    Likes Received:
    1,001
    Location:
    France
    Vega with working primitive shaders is coming ? :eek: (just kidding)
     
    Lightman likes this.
  10. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    9,238
    Likes Received:
    3,182
    Location:
    Finland
    Actually, they are bringing at least that one more GCN chip in the form of Arcturus, but it's supposedly only for the Radeon Instinct family.
     
  11. Leovinus

    Newcomer

    Joined:
    May 31, 2019
    Messages:
    125
    Likes Received:
    53
    Location:
    Sweden
    I wonder if they won't conceivably keep Vega around a little longer, though. It's not as flexible or efficient as Navi, but it's a more efficient use of silicon by virtue of being less complex. For small, power-constrained iGPU situations it seems quite suitable. Whether that means it will be kept in more consumer-oriented products, I'm less sure. But thin clients and the like don't really need Navi, whereas the smaller silicon footprint and higher efficiency of the improved Vega cores would be a boon. I guess it comes down to what has the highest ROI for those parts in the future. But I'm on record as having a soft spot for Vega; underdog sympathies would be my guess. So it might be wishful thinking on my part.
     
  12. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    1,629
    Likes Received:
    1,001
    Location:
    France
    Is Vega less complex? Because it seems like a lot of its stuff isn't efficient at all (and I have one).
     
  13. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,345
    Likes Received:
    313
    Compute performance per transistor is excellent, and graphics performance per transistor and clock(!) is also quite good. The only problem is performance per watt (mostly in graphics, not so much in compute). It seems that AMD (at least partially) solved the performance-per-watt issue with Renoir.
     
  14. Leovinus

    Newcomer

    Joined:
    May 31, 2019
    Messages:
    125
    Likes Received:
    53
    Location:
    Sweden
    I'll be honest with you, I'm not technically expert enough to give a proper technical answer. But as far as I understand it, RDNA improved on GCN's compute units drastically. The older CUs had to be targeted well to avoid bottlenecks, whereas RDNA reworked them into a "workgroup" setup with two CUs each, allowing for much more granular execution of code. GCN (Vega) would sit waiting for instructions unless fed properly, where RDNA (Navi) chugs along happily. There are also bandwidth improvements and scaled-up SIMD units in the new workgroup setup that simply iron out most of the kinks in GCN, making it more efficient. Thing is, if you were to code for GCN's peculiarities, you'd have wasted silicon doing nothing on Navi. The CUs in GCN weren't bad so long as they were fed; keeping them fed was the issue.

    This apparently gave rise to the "fine wine" moniker for AMD's products: as successive games got more adept at filling the GCN CUs, performance looked like it didn't drop as much over time as it did with, say, Nvidia's products of the same vintage, which are more flexible by design already. Developers simply got better at targeting GCN hardware in subsequent games, whereas the competition was already "maxed out" thanks to the efficiency of its design. Drivers did the rest for AMD.

    This is why I gather that Arcturus will do well in the datacenter, where workloads will be tailored to the peculiarities of the CUs and make proper use of them. The bottleneck scenarios commonly found in games simply don't occur to the same degree there, or shouldn't. And in low-power parts like set-top boxes, thin clients, etc., you don't need super-efficient workgroups; the less silicon-intensive Vega cores will do. They waste less silicon per die and work just fine. Besides which, GCN is a known piece of kit by this point, with stable drivers and good developer familiarity, and will be kept in mind when coding for some time to come.

    Again, that's so far as I understand it.
     
  15. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    1,664
    Likes Received:
    476
    Location:
    msk.ru/spb.ru
    Vega had severe issues scaling even to 64 CUs, let alone 160. And so far we don't really know how well RDNA will scale above its current maximum of 20 WGPs. Chances are, though, that it will scale better than GCN ever did.
     
    del42sa and yuri like this.
  16. CarstenS

    Legend Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,102
    Likes Received:
    2,572
    Location:
    Germany
    You mean in graphics, right? Compute, as long as it was actually compute bound, did scale.
     
  17. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    1,664
    Likes Received:
    476
    Location:
    msk.ru/spb.ru
    On average, really. Compute on GCN scaled quite a bit better than graphics, but was still limited by TDP going through the roof at some point.
     
  18. w0lfram

    Newcomer

    Joined:
    Aug 7, 2017
    Messages:
    237
    Likes Received:
    40
    Even when you increase the die size, you increase the power usage too. Nothing comes free.
    But as you've stated, when adding transistors to a shrunken die, each draws "roughly 1/3 the power as before". So the new transistors still use power rather than being free; the cost of using extra transistors is just reduced by about 65%. Which is what the others stated about using up to 3x more die space at that roughly 65% per-transistor power reduction.
     
    #1738 w0lfram, Jan 22, 2020
    Last edited: Jan 22, 2020
  19. Benetanegia

    Regular Newcomer

    Joined:
    Sep 4, 2015
    Messages:
    343
    Likes Received:
    308
    I never said it was free. Just that as long as the power reduction and density increase align, you can utilize that density increase. So 2x density @ 50% power OR 3x density @ 33% power, both work. On the other hand, 3x density @ 50% power wouldn't work as well.

    EDIT: Basically I was saying that in order to increase performance by 50% you don't need to increase clocks by 50%; in fact you don't need to increase them at all if you can increase size by 50% or more (much more) within the same power envelope. And when you can increase size by up to 3x, you even have the luxury of lowering clocks and still getting a large performance upgrade while increasing efficiency massively.
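
    The arithmetic in that argument is easy to check. A minimal sketch, assuming total power is simply proportional to transistor count times per-transistor power (ignoring static leakage and clock/voltage effects):

    ```python
    def total_power(base_power, density_scale, per_transistor_power_scale):
        """Power of a scaled-up design, assuming power is proportional to
        transistor count multiplied by per-transistor power."""
        return base_power * density_scale * per_transistor_power_scale

    BASE = 100.0  # watts, arbitrary baseline

    # 2x density at 50% per-transistor power: same envelope, works.
    assert total_power(BASE, 2.0, 0.50) == BASE

    # 3x density at ~33% per-transistor power: same envelope, also works.
    assert abs(total_power(BASE, 3.0, 1 / 3) - BASE) < 1e-9

    # 3x density at only 50% per-transistor power: blows the budget by 50%.
    assert total_power(BASE, 3.0, 0.50) == 1.5 * BASE
    ```

    Under this toy model, the "works" cases deliver 2x or 3x the units at the same clock and same power, which is exactly the clocks-don't-need-to-rise point above.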
     
    #1739 Benetanegia, Jan 22, 2020
    Last edited: Jan 22, 2020
    w0lfram and no-X like this.
  20. Frenetic Pony

    Regular Newcomer

    Joined:
    Nov 12, 2011
    Messages:
    550
    Likes Received:
    248
    It's definitely looking like it wasn't just PR that Raja was terrible at, and based on that initial Xe presentation it seems he's straight back to his old bad habits of vastly overpromising and pumping up tons of new, untried technologies on extremely optimistic timelines. Focusing on things like HBCC instead of good execution, anyone?

    Also, an entirely expected note, but Lisa Su has explicitly confirmed RDNA2 for this year.

    It's been easy to assume, but nice to have it stated officially and without any wiggle room. The exact wording of the statement is weird, though. Why would you "refresh" "Navi" and have a "next generation RDNA architecture" launch in the same year?
     
    #1740 Frenetic Pony, Jan 29, 2020
    Last edited: Jan 29, 2020
    Lightman likes this.