AMD Navi Product Reviews and Previews: (5500, 5600 XT, 5700, 5700 XT)

Discussion in 'Architecture and Products' started by snc, Jul 4, 2019.

  1. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,465
    Location:
    Finland
    The difference is, Navi was developed from compute-heavy GCN, while NVIDIA has been focusing comparatively more on the graphics side since Kepler or so
     
  2. JoeJ

    Veteran

    Joined:
    Apr 1, 2018
    Messages:
    1,523
    Likes Received:
    1,772
    Polaris lacks fp16, which could eventually affect the benchmark results seen.
    I never used Polaris, but two or three older GCN generations. There was no big difference: performance always scaled with CU count and clock, as expected.
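The scaling described above falls straight out of the peak-throughput formula; a minimal sketch (the boost clocks used here are public figures, assumed rather than taken from this thread):

```python
# Peak FP32 for a GCN/RDNA part: each CU has 64 FP32 lanes and an FMA
# counts as 2 flops per lane per cycle, so throughput scales with CUs x clock.
def peak_tflops(cus, clock_ghz, lanes_per_cu=64, flops_per_lane=2):
    return cus * lanes_per_cu * flops_per_lane * clock_ghz / 1000

print(round(peak_tflops(40, 1.905), 2))  # RX 5700 XT boost: 9.75
print(round(peak_tflops(36, 1.545), 2))  # RX 590 (Polaris 30) boost: 7.12
```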
     
  3. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,397
    Polaris 30 is ~7 TFlops with 5.7 billion transistors and a 225W TBP. That's ~70% of Navi 10's TFlops in ~60% of the transistor budget.

    Also worth noting that when AMD speak about GCN being better for compute right now, they more than likely mean solely Vega 20, which is 13.8 TFlops with 1/2-rate FP64: quite a bit higher than what Navi 10 is capable of. I am 100% sure that this narrative will change as soon as they launch an RDNA GPU that is faster than Vega 20 in peak flops, at least in FP32.
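The ratio claim above can be sanity-checked against commonly cited public specs (the exact figures here, RX 590 and RX 5700 XT boost numbers, are assumptions, not taken from this thread):

```python
# Approximate public specs (assumed): RX 590 (Polaris 30) vs RX 5700 XT (Navi 10).
polaris30_tflops, polaris30_transistors_bln = 7.1, 5.7
navi10_tflops, navi10_transistors_bln = 9.75, 10.3

flops_ratio = polaris30_tflops / navi10_tflops                      # ~0.73
budget_ratio = polaris30_transistors_bln / navi10_transistors_bln   # ~0.55
print(f"~{flops_ratio:.0%} of Navi 10 FP32 in ~{budget_ratio:.0%} of the transistors")
```

By these figures the transistor side comes out closer to ~55% than ~60%, but the thrust of the comparison holds either way.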
     
  4. JoeJ

    Veteran

    Joined:
    Apr 1, 2018
    Messages:
    1,523
    Likes Received:
    1,772
    ... which would mean GCN is only 'good for compute' because they have larger GCN chips available right now. That's pretty much what I would think too.

    Still, seeing GCN going EOL hurts my feelings. :D
     
  5. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,397
    Well, there are probably some other reasons too, like GCN using narrower SIMDs compared to RDNA and spending relatively less of a die budget on graphical features. But the main reason is likely simple - Vega 20 is faster than Navi 10 in compute right now.
     
    PSman1700 likes this.
  6. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Did engineers say that one is more compute and the other more graphics optimized or did marketing bring that up?
     
    pharma likes this.
  7. JoeJ

    Veteran

    Joined:
    Apr 1, 2018
    Messages:
    1,523
    Likes Received:
    1,772
  8. Pressure

    Veteran

    Joined:
    Mar 30, 2004
    Messages:
    1,655
    Likes Received:
    593
    The same price the Vega Frontier Edition launched for, with 16GB of HBM2 memory.
     
  9. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
    In the professional visualization segment, no card is a match for a Quadro RTX; there are dozens of applications that are RTX-accelerated right now. A Quadro RTX is the standard to get now.
     
    #329 DavidGraham, Nov 29, 2019
    Last edited: Nov 29, 2019
    jlippo and digitalwanderer like this.
  10. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    It's not like gaming, where you play multiple games over time and may or may not come across one that uses RTX. Most companies have their software ecosystem which does what they need it to, and yes, if that happens to support raytracing and has an OptiX backend, then Quadro is a good choice. If it does not use raytracing and/or has no RTX interface, then you buy based on performance and cost in YOUR application.
     
  11. PizzaKoma

    Newcomer

    Joined:
    Apr 29, 2019
    Messages:
    51
    Likes Received:
    86
  12. yuri

    Regular

    Joined:
    Jun 2, 2010
    Messages:
    283
    Likes Received:
    296
    So here is Navi 12. Thus the high-end SKUs are left to Navi 2x, hopefully featuring RDNA2.
     
  13. LordEC911

    Regular

    Joined:
    Nov 25, 2007
    Messages:
    877
    Likes Received:
    208
    Location:
    'Zona
  14. Flappy Pannus

    Regular

    Joined:
    Jul 4, 2016
    Messages:
    329
    Likes Received:
    567
  15. PizzaKoma

    Newcomer

    Joined:
    Apr 29, 2019
    Messages:
    51
    Likes Received:
    86
    A 192-bit bus means 6x 32-bit GDDR6 memory controllers. Highly likely there is a 30 CU part in the 5600 series, but if the 5600 series is a cut-down Navi 10, e.g. a Navi 10 LE, it could go as high as 36 CU in steps of two, though that's unlikely. From a business perspective I doubt 36 CU, due to cannibalization of the RX 5700 at 1080p and 1440p.

    The max seen so far for Navi 10 is 5 WGP per 32-bit memory controller; Navi 14 has 3 WGP per controller. If this is Navi 12 it could sit at 4 WGP per controller, resulting in 32 CU for Navi 12. But an extra chip between 158 mm^2 and 251 mm^2? I doubt that makes much sense strategically, given yields, wafer costs and the number of wafers AMD can get at TSMC right now.

    I'm strongly leaning towards this being a cut-down part...
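The per-controller bookkeeping above is sensitive to how wide a "controller" is counted; a minimal sketch using public CU counts and bus widths (the 64-bit grouping for Navi 10 is an assumption made here to match the 5-WGP figure):

```python
# RDNA pairs 2 CUs into a WGP; a GDDR6 bus can be counted in 32-bit or 64-bit
# controller granularity, which changes the WGP-per-controller ratio.
def wgp_per_controller(cus, bus_bits, ctrl_bits=32):
    wgps = cus // 2
    controllers = bus_bits // ctrl_bits
    return wgps / controllers

print(wgp_per_controller(40, 256, 64))  # Navi 10, per 64-bit block: 5.0
print(wgp_per_controller(24, 128, 32))  # Navi 14, per 32-bit controller: 3.0
print(wgp_per_controller(40, 256, 32))  # Navi 10, per 32-bit controller: 2.5
```

Note that "5 WGP per controller" for Navi 10 only works out if the controllers are counted in 64-bit blocks; per 32-bit controller it is 2.5, which matters for extrapolating a hypothetical 192-bit part.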
     
  16. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,465
    Location:
    Finland
    They're actually 16-bit memory controllers.
     
  17. PizzaKoma

    Newcomer

    Joined:
    Apr 29, 2019
    Messages:
    51
    Likes Received:
    86
    Can they be decoupled from the two-channel (16+16-bit) arrangement though, from any practical standpoint? As far as I've read on GDDR6 they can't; they can be joined in pseudo-channel mode with a penalty. But I am not an engineer by trade. So even if one channel is 16-bit, the bus width of the controller to the memory module is 2x16-bit, aka 32-bit.
    https://www.jedec.org/standards-documents/docs/jesd250b
     
  18. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,465
    Location:
    Finland
    If anything it would probably be 4x16-bit grouped as 64-bit, since AMD uses 64-bit on the highest-level block diagram. But regardless of that, they list 16 MCs (x16 = 256-bit) on the RDNA cache hierarchy slide.
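The channel arithmetic in the last few posts is just multiplication; a quick sketch of the same bus counted at the three granularities mentioned:

```python
# Navi 10's bus per AMD's RDNA cache-hierarchy slide: 16 MCs of 16 bits each.
mcs_16bit = 16
bus_width = mcs_16bit * 16
print(bus_width)        # 256-bit total
print(bus_width // 32)  # 8 x 32-bit (one two-channel GDDR6 device each)
print(bus_width // 64)  # 4 x 64-bit blocks, as on the top-level diagram
```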
     
  19. rSkip

    Newcomer

    Joined:
    Jan 10, 2012
    Messages:
    18
    Likes Received:
    35
    Location:
    Shanghai
    Navi10 dieshot from AMD HotChips slide
     
    Lightman, no-X and iamw like this.
  20. dobwal

    Legend

    Joined:
    Oct 26, 2005
    Messages:
    5,955
    Likes Received:
    2,324
    I am not sure how to interpret that statement in regards to the debate over GCN being better than RDNA at compute.

    GCN and RDNA weren't developed in parallel (RDNA benefited from the compute work poured into GCN), and compute is very much a part of graphics.
     
    #340 dobwal, Dec 10, 2019
    Last edited: Dec 10, 2019