AMD Navi Product Reviews and Previews: (5500, 5600 XT, 5700, 5700 XT)

Well, there are probably other reasons too, like GCN using narrower SIMDs than RDNA and spending relatively less of its die budget on graphics features. But the main reason is likely simple: Vega 20 is faster than Navi 10 in compute right now.

While this might derail the topic a little, why is that exactly? Shouldn't the new workgroup design improve compute efficiency by keeping the SIMD units fed? Why AMD continues with Vega in the upcoming, headless Arcturus release is a little beyond me, unless efficiency is basically equal between RDNA and GCN5 as long as you keep the SIMD units fed. GCN was always finicky about scheduling and needed developer hand-holding to be used optimally, which might be easier to do, and therefore a non-issue, in HPC settings, I suppose. Or is there still some advantage to GCN for compute that isn't in RDNA?
 
I didn't calculate it anew, but off the top of my head GCN has more raw TFLOPS per mm² than RDNA. In gaming, you obviously need more elaborate feeding and sorting mechanisms, because clearly not everything there is compute. In compute, well, it just seems a bit easier. And Vega has the option of faster and more memory compared to Navi 10.

Did it anyway now: the Radeon VII (Vega 20 salvage), despite not being a fully enabled chip, has a compute density of 41.76 GFLOPS/mm²; the Radeon RX 5700 XT (Navi 10, full config) is at 38.84.

I wanted to look up the Radeon Instinct MI60, but the product page at amd.com is just a 404. When it was announced, AMD touted it as the world's fastest FP64- and FP32-capable GPU, with 29.5 FP16 TFLOPS aka 14.75 FP32 TFLOPS, which would put it at 44.56 GFLOPS/mm², with half-rate FP64 to boot. That would make Navi's compute density look even worse.
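For anyone who wants to check the arithmetic, here's a quick sketch; the peak FP32 rates are as cited above, while the 331 mm² (Vega 20) and 251 mm² (Navi 10) die sizes are the commonly reported figures, so treat those as my assumption:

```python
# Compute density = peak FP32 rate / die area.
# Die sizes are the commonly reported figures, not official AMD numbers.
gpus = {
    # name: (peak FP32 TFLOPS, die area in mm^2)
    "Radeon VII (Vega 20 salvage)":        (13.824, 331),
    "Radeon RX 5700 XT (Navi 10 full)":    (9.75,   251),
    "Radeon Instinct MI60 (Vega 20 full)": (14.75,  331),
}

for name, (tflops, area_mm2) in gpus.items():
    density = tflops * 1000 / area_mm2  # GFLOPS per mm^2
    print(f"{name}: {density:.2f} GFLOPS/mm^2")
```

Which prints 41.76, 38.84 and 44.56 GFLOPS/mm², matching the numbers above.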
 
So basically you get higher compute density by offloading scheduling and feeding onto devs, which again is probably entirely alright in HPC settings. Does that density come with efficiency improvements, though? Or does it "just" give you more compute units per die, lowering the cost per compute unit? A saving which probably gets gobbled up by HBM, though that's a vaunted feature for HPC, so possibly another good thing in the grand scheme of things.
 
Generally, in pure compute you have an easier time with scheduling in the first place, without offloading it to devs (tbh, I think it's mostly driver devs). You have, for example, only one kind of wavefront (compute), and you don't have to worry about rasterization, early Z-out and the like, which makes it easier for the fixed four-cycle cadence of GCN's SIMD16 units to work under high load.
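To put a toy number on that cadence point (entirely my own simplification, nothing from AMD's docs): a GCN SIMD16 executes one wave64 instruction over 4 cycles (64 lanes / 16 lanes per cycle), so the ALUs stay busy only if a ready instruction shows up at every 4-cycle issue slot. The arrival gaps below are made-up numbers purely to illustrate the effect:

```python
# Toy model of GCN's 4-cycle issue cadence -- a simplification, not AMD docs.
CYCLES = 16
arrival_gap = {"steady compute": 4, "mixed graphics": 6}  # hypothetical gaps

for workload, gap in arrival_gap.items():
    busy, next_ready, t = 0, 0, 0
    while t < CYCLES:
        if t >= next_ready:       # a wave has an instruction ready: issue it
            busy += 4             # the SIMD16 is occupied for 4 cycles
            next_ready = t + gap  # next instruction becomes ready 'gap' later
            t += 4
        else:
            t += 1                # issue slot missed, SIMD idles this cycle
    print(f"{workload}: {min(busy, CYCLES) / CYCLES:.0%} ALU utilization")
```

With an instruction ready every 4 cycles the toy SIMD stays at 100%; stretch the gap to 6 cycles and utilization drops to 75%.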

When you power-gate your rasterizers and parts of the TMUs and ROPs, you also save a substantial amount of power.
 
Beyond the technical points CarstenS explained, I believe it's just easier for AMD to manage products this way for now. Bringing Navi to the compute/pro market as well would maybe have been too much to handle (drivers, communications, validation, etc.), while GCN is still doing okay to very good in that field, and they really needed a new GPU for the gaming market.
 
Navi10 dieshot from AMD HotChips slide
Nice find! I downloaded two HotChips presentations, but neither of them contains this image. Which slide is it from?

I tried to enhance the detail to make it more visible:
navi_10_dieshot.jpg
 
I didn't calculate it anew, but off the top of my head GCN has more raw TFLOPS per mm² than RDNA.

Yep. The RDNA CUs are larger and have a definite performance advantage when running a wide mix of game shaders, but you don't get the same gain on typical compute workloads, where most of the shader instructions come from optimized math libraries rather than game code.
 
Huh, the 5500 is on par with a 1660 Super at 33% lower bus width. They both have the same memory, so AMD is actually beating Nvidia on bandwidth efficiency now, at least in practice, which is the most important part anyway.

Now if only they could get that TDP efficiency up. Sure, the two cards perform comparably, but on very different silicon nodes. Regardless, at nigh-identical performance for $50 less, I don't see why anyone would rationally choose Nvidia at this price point. Of course, AMD has rationally beaten Nvidia before only to lose out in sales, but it seems they've gotten a lot better in the marketing department recently.

https://hexus.net/tech/reviews/graphics/137633-sapphire-radeon-rx-5500-xt-pulse/
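The bandwidth math behind that first point, as a quick sketch (both cards run 14 Gbps GDDR6, so the gap is purely bus width):

```python
# Peak bandwidth = (bus width in bits / 8) * data rate per pin in Gbps.
def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

rx_5500_xt = bandwidth_gb_s(128, 14)  # -> 224 GB/s
gtx_1660_s = bandwidth_gb_s(192, 14)  # -> 336 GB/s
print(f"RX 5500 XT: {rx_5500_xt:.0f} GB/s, GTX 1660 Super: {gtx_1660_s:.0f} GB/s")
print(f"Bus width (and bandwidth) deficit: {1 - 128/192:.0%}")
```

That's 224 GB/s vs. 336 GB/s, i.e. the 33% deficit mentioned above.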

Hm.

So Navi 14 is 11 WGPs, 32 ROPs, 128-bit, 158 mm²... 1 NaviSE :?:
vs. Navi 10's 20 WGPs, 64 ROPs, 256-bit, 251 mm²... 2 NSE

55% of the WGPs, 50% of the ROPs and bus width, 63% of the die size. Guess that's about right, not unexpectedly, since the uncore doesn't scale down.
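A quick ratio check on those numbers, specs as listed above:

```python
# Navi 14 (RX 5500 XT config) vs. Navi 10 (RX 5700 XT), specs as cited above.
navi14 = {"WGPs": 11, "ROPs": 32, "bus bits": 128, "die mm^2": 158}
navi10 = {"WGPs": 20, "ROPs": 64, "bus bits": 256, "die mm^2": 251}

for key in navi14:
    print(f"{key}: {navi14[key] / navi10[key]:.0%} of Navi 10")
```

Which gives 55%, 50%, 50% and 63% respectively.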

Isn't Navi 14 12 WGPs with the 5500 having one disabled?
 
The 5500 4 GB looks like a decent card for the price; the 8 GB version, not so much.

As Steve from GN put it, the 5500 is meh. Not terrible, not great.
 
The pricing is bad for both. Nvidia-style pricing is likely to rear its ugly head unless AMD expects mobile binnings and high-volume sales to OEMs to make up for possibly lackluster desktop card sales.
 
Regardless, at nigh-identical performance for $50 less, I don't see why anyone would rationally choose Nvidia at this price point.
AMD has been the rational choice at most price points below the high end for a long time now.
 
Seriously? Looks like AMD forgot to dupe Nvidia with a price fakeout.

Do you seriously think Nvidia has had mostly better buys below the high end for the last decade? Outside of people who buy the high end and upgrade every generation, it's difficult to recommend Nvidia, IMO. You know their cards are going to age much more poorly as well.
 
With AMD's GPU sales down sequentially this past quarter, it looks like buyers did determine what the better buys were.
 
With AMD's GPU sales down sequentially this past quarter, it looks like buyers did determine what the better buys were.

Sales don't determine that. AMD typically offers better performance at lower prices while holding up much better as time progresses.
 
while holding up much better as time progresses.
This has only been proven true in the Kepler vs. early GCN generation; the Fiji generation aged badly for AMD, so much so that the Fury X has been performing like an RX 580/590 in a great many titles over the past two years.
 