AMD Navi Product Reviews and Previews: (5500, 5600 XT, 5700, 5700 XT)

Discussion in 'Architecture and Products' started by snc, Jul 4, 2019.

  1. Leovinus

    Newcomer

    Joined:
    May 31, 2019
    Messages:
    142
    Likes Received:
    73
    Location:
    Sweden
    While this might derail the topic a little, why is that exactly? Shouldn't the new workgroup design improve compute efficiency by keeping the SIMD units fed? Why AMD continues with Vega in the upcoming, headless, Arcturus release is a little beyond me, unless efficiency is basically equal between RDNA and GCN5 as long as you keep the SIMD units fed. GCN was always finicky, needing careful scheduling and developer hand-holding to be used optimally. That might be easier to do, and therefore a non-issue, in HPC settings, I suppose. Or is there still some advantage to GCN for compute that isn't in RDNA?
     
  2. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Did not calculate anew, but off the top of my head GCN has more raw TFLOPS per mm² than RDNA. In gaming, obviously, you need more elaborate feeding and sorting mechanisms, because clearly not everything there is compute. In compute, well, it just seems a bit easier. And Vega has the option for faster and more memory compared to Navi 10.

    Did it anyway now: Radeon VII (Vega 20 salvage), despite not being a fully enabled chip, has a compute density of 41.76 GFLOPS/mm²; the Radeon 5700 XT (Navi 10 full config) is at 38.84 GFLOPS/mm².

    Wanted to look up the Radeon Instinct MI60, but the product page at amd.com is just a 404. When it was announced, AMD touted it as the world's fastest FP64- and FP32-capable GPU, with 29.5 FP16 TFLOPS, i.e. 14.75 FP32 TFLOPS, which would put it at 44.56 GFLOPS/mm², with half-rate FP64 to boot. That would make Navi's compute density look even worse by comparison.
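    The quoted densities can be reproduced with a quick back-of-the-envelope script. This is only a sketch: the die sizes (~331 mm² for Vega 20, ~251 mm² for Navi 10) and peak FP32 rates are commonly reported figures, not numbers confirmed in this thread.

    ```python
    # Back-of-the-envelope compute density: peak FP32 GFLOPS per mm² of die.
    # Die sizes and TFLOPS figures below are commonly reported values
    # (assumptions for illustration, not official data from this thread).
    def compute_density(tflops_fp32: float, die_mm2: float) -> float:
        """Peak FP32 throughput per unit area, in GFLOPS/mm²."""
        return tflops_fp32 * 1000.0 / die_mm2

    chips = {
        "Radeon VII (Vega 20 salvage)": (13.824, 331),  # 3840 SPs @ ~1.8 GHz
        "RX 5700 XT (Navi 10 full)":    (9.75,   251),  # 2560 SPs @ ~1.9 GHz
        "Instinct MI60 (Vega 20 full)": (14.75,  331),  # half of 29.5 FP16 TFLOPS
    }

    for name, (tflops, area) in chips.items():
        print(f"{name}: {compute_density(tflops, area):.2f} GFLOPS/mm²")
    ```

    With those inputs the script reproduces the 41.76, 38.84 and 44.56 GFLOPS/mm² figures quoted above.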
     
    #342 CarstenS, Dec 10, 2019
    Last edited: Dec 10, 2019
  3. Leovinus

    Newcomer

    Joined:
    May 31, 2019
    Messages:
    142
    Likes Received:
    73
    Location:
    Sweden
    So basically you get higher compute density by offloading scheduling and feeding onto devs, which again is probably entirely alright in HPC settings. Does that density come with efficiency improvements, though? Or does it "just" make for more compute units per die, lowering the cost per compute unit? A saving which probably gets gobbled up by HBM, although HBM is a vaunted feature for HPC, so possibly another good thing in the grand scheme of things.
     
  4. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Generally, in pure compute you have an easier time with scheduling in the first place, without offloading it to devs (tbh, I think it's mostly driver devs). For example, you have only one kind of wavefront (compute), and you don't have to worry about rasterization, early Z-out and the like, which makes it easier for the fixed four-cycle cadence of feeding the four SIMD16 units to work under high load.

    When you power gate your rasterizers and parts of the TMUs and ROPs, you also save a substantial amount of power.
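    The cadence point can be made concrete with a toy model. The SIMD counts, SIMD widths and wave sizes below follow AMD's public GCN and RDNA descriptions (four SIMD16 units per GCN CU running wave64, two SIMD32 units per RDNA CU running wave32); the model is a simplification and ignores wave64 mode on RDNA, memory stalls and graphics work.

    ```python
    # Toy model of instruction issue: GCN executes a 64-wide wavefront on a
    # SIMD16 over 4 cycles; RDNA executes a 32-wide wavefront on a SIMD32 in
    # 1 cycle. Peak lanes per clock per CU are identical; what differs is the
    # per-instruction cadence that scheduling has to hide.
    GCN  = dict(simds=4, simd_width=16, wave_size=64)
    RDNA = dict(simds=2, simd_width=32, wave_size=32)

    def lanes_per_clock(arch: dict) -> int:
        """Peak vector lanes retired per clock per CU."""
        return arch["simds"] * arch["simd_width"]

    def cycles_per_instruction(arch: dict) -> int:
        """Cycles one wavefront occupies its SIMD per VALU instruction."""
        return arch["wave_size"] // arch["simd_width"]

    for name, arch in [("GCN", GCN), ("RDNA", RDNA)]:
        print(f"{name}: {lanes_per_clock(arch)} lanes/clk, "
              f"{cycles_per_instruction(arch)} cycle(s)/instr")
    ```

    Both come out at 64 lanes per clock, but GCN needs four cycles (and enough independent wavefronts) per instruction, which is why steady compute streams suit it better than bursty mixed graphics work.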
     
    TheAlSpark and Leovinus like this.
  5. Rootax

    Veteran

    Joined:
    Jan 2, 2006
    Messages:
    2,400
    Likes Received:
    1,845
    Location:
    France
    I believe, beyond the technical things explained by CarstenS, that it's just easier for AMD to manage products this way, for now. Bringing Navi to the compute/pro market would maybe have been too much to handle (drivers, communications, validations, etc.) while GCN is still doing OK to very good in that field, and they really needed a new GPU for the gaming market.
     
  6. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,451
    Likes Received:
    471
    Nice find! I downloaded two HotChips presentations, but neither of them contains this image. Which slide is it from?

    I tried to enhance the detail to be better visible:
    [image: enhanced crop of the slide]
     
    #346 no-X, Dec 10, 2019
    Last edited: Dec 10, 2019
    TheAlSpark and digitalwanderer like this.
  7. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    It's in HC31_2.13_AMD_final.pdf, slide 7.
     
    TheAlSpark and no-X like this.
  8. bridgman

    Newcomer Subscriber

    Joined:
    Dec 1, 2007
    Messages:
    62
    Likes Received:
    123
    Location:
    Toronto-ish
    Yep. The RDNA CUs are larger and have a definite performance advantage when running a wide mix of game shaders, but you don't get the same performance gain on typical compute workloads, where most of the shader instructions come from optimized math libraries rather than game code.
     
    #348 bridgman, Dec 12, 2019
    Last edited: Dec 12, 2019
    Lightman, CarstenS and TheAlSpark like this.
  9. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    22,146
    Likes Received:
    8,533
    Location:
    ಠ_ಠ
    #349 TheAlSpark, Dec 12, 2019
    Last edited: Dec 12, 2019
  10. Frenetic Pony

    Regular

    Joined:
    Nov 12, 2011
    Messages:
    807
    Likes Received:
    478
    Huh, the 5500 is on par with a 1660 Super at 33% lower bus width. They both use the same memory speed, so AMD is actually beating Nvidia on bandwidth efficiency now, at least in shipping products, which is the most important part anyway.

    Now if only they could get that TDP efficiency up. Sure, the two cards are comparable, but on very different silicon nodes. Regardless, at nigh-identical performance for $50 less, I don't see why anyone would rationally choose Nvidia at this price point. Of course, AMD has rationally beaten Nvidia before only to lose out in sales, but it seems they've gotten a lot better in the marketing department recently.

    Isn't Navi 14 12 WGPs, with the 5500 having one disabled?
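    The bandwidth comparison above can be sanity-checked with a one-liner. The 128-bit vs 192-bit bus widths and the 14 Gbps GDDR6 speed are the widely reported specs for these two cards, assumed here for illustration.

    ```python
    # Peak memory bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8.
    # Bus widths and the shared 14 Gbps GDDR6 speed are the widely reported
    # specs for these cards (assumptions, not figures from this thread).
    def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
        """Peak memory bandwidth in GB/s."""
        return bus_bits * gbps_per_pin / 8

    rx5500   = bandwidth_gbs(128, 14.0)  # RX 5500: 128-bit GDDR6
    gtx1660s = bandwidth_gbs(192, 14.0)  # GTX 1660 Super: 192-bit GDDR6

    print(f"RX 5500: {rx5500} GB/s, GTX 1660 Super: {gtx1660s} GB/s")
    print(f"deficit: {1 - rx5500 / gtx1660s:.0%}")
    ```

    With the same memory speed, bandwidth scales directly with bus width, so the 5500 has exactly one third less bandwidth to work with.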
     
    #350 Frenetic Pony, Dec 12, 2019
    Last edited: Dec 12, 2019
  11. Ryan Smith

    Regular

    Joined:
    Mar 26, 2010
    Messages:
    629
    Likes Received:
    1,131
    Location:
    PCIe x16_1
    Correct.
     
    TheAlSpark likes this.
  12. Benetanegia

    Regular

    Joined:
    Sep 4, 2015
    Messages:
    394
    Likes Received:
    425
  13. Malo

    Malo Yak Mechanicum
    Legend Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    8,929
    Likes Received:
    5,529
    Location:
    Pennsylvania
    The 5500 4GB looks like a decent card for the price; the 8GB version, not so much.

    As Steve from GN put it, the 5500 is meh. Not terrible, not great.
     
  14. Mobius1aic

    Mobius1aic Quo vadis?
    Veteran

    Joined:
    Oct 30, 2007
    Messages:
    1,715
    Likes Received:
    293
    The pricing is bad for both. Nvidia-style pricing is likely to rear its ugly head unless AMD expects mobile bins and high-volume sales to OEMs to make up for possibly lackluster desktop GPU card sales.
     
  15. techuse

    Veteran

    Joined:
    Feb 19, 2013
    Messages:
    1,424
    Likes Received:
    908
    AMD has been the rational choice at most price points below the high end for a long time now.
     
  16. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,887
    Likes Received:
    4,534
    Seriously? Looks like AMD forgot to dupe Nvidia with a price fakeout.
     
  17. techuse

    Veteran

    Joined:
    Feb 19, 2013
    Messages:
    1,424
    Likes Received:
    908
    Do you seriously think Nvidia has had mostly better buys below the high end for the last decade? Outside of those people who buy the high end and upgrade every generation, it's difficult to recommend Nvidia, IMO. You know their cards are going to age much more poorly as well.
     
  18. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,887
    Likes Received:
    4,534
    With AMD's GPU sales down sequentially this past quarter, it looks like buyers did determine what the better buys were.
     
  19. techuse

    Veteran

    Joined:
    Feb 19, 2013
    Messages:
    1,424
    Likes Received:
    908
    Sales don't determine that. AMD typically offers better performance at lower prices while holding up much better as time progresses.
     
  20. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
    This has only been proven true in the Kepler vs. early GCN generation. The Fiji generation aged badly for AMD, so much so that the Fury X has behaved like an RX 580/590 in a great many titles over the past two years.
     